Cannot correctly encode string when reading from text file (encoding into sha256…)

Basically what I want to accomplish (simplified...):

I want to make 100 bitcoin addresses from my own password that look kind of like:

password_1 password_2 password_3

So when I do this in the program, I am getting the correct result:

import hashlib

def public_key(src):
    privatekey = (int(hashlib.sha256(src).hexdigest(), 16))
    return generate_address(privatekey)

def private_key(src):
    privatekey = hashlib.sha256(src).hexdigest()
    return str(privatekey)

herewego = "password_1".encode('utf-8')
somevariable = public_key(herewego)
print somevariable

^ This works as intended... but if I put "password_1" in a txt file and try to read that line, it gives a totally different result?

for addr in file:
    address = addr.encode('utf-8')
    print public_key(address)

So the issue is obviously that Notepad encodes the text file in, say, ANSI or UTF-8. It doesn't matter which, but the line read from there must look different to Python than when I enter the "..." directly within Python? So what encoding should I use, or if that's impossible: what alternative to Notepad? This is for Python 2.7 on Windows, by the way.
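A likely cause (an assumption, not confirmed in the thread): each line read from the file keeps its trailing newline, and a UTF-8 file saved by Notepad may also start with a BOM, so the bytes being hashed differ from the ones in the first snippet. A minimal Python 2.7 sketch that strips both before hashing (the filename passwords.txt is hypothetical):

import codecs

# 'utf-8-sig' transparently skips a BOM if Notepad wrote one
with codecs.open("passwords.txt", "r", encoding="utf-8-sig") as f:
    for line in f:
        # drop the trailing newline before hashing, then encode to bytes
        src = line.strip().encode('utf-8')
        print public_key(src)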

#blockchain #bitcoin #python

Can you add the method generate_address to the question so that we can run the code fully as well? GitHub link, maybe?

I can’t tell what type of object I’m working with without seeing everything.

Navigating Between DOM Nodes in JavaScript

In the previous chapters you've learnt how to select individual elements on a web page. But there are many occasions where you need to access a child, parent or ancestor element. See the JavaScript DOM nodes chapter to understand the logical relationships between the nodes in a DOM tree.

A DOM node provides several properties and methods that allow you to navigate or traverse the tree structure of the DOM and make changes easily. In the following sections we will learn how to navigate up, down, and sideways in the DOM tree using JavaScript.

Accessing the Child Nodes

You can use the firstChild and lastChild properties of the DOM node to access the first and last direct child node of a node, respectively. If the node doesn't have any children, these properties return null.

Example

<div id="main">
    <h1 id="title">My Heading</h1>
    <p id="hint"><span>This is some text.</span></p>
</div>

<script>
var main = document.getElementById("main");
console.log(main.firstChild.nodeName); // Prints: #text

var hint = document.getElementById("hint");
console.log(hint.firstChild.nodeName); // Prints: SPAN
</script>

Note: The nodeName is a read-only property that returns the name of the current node as a string. For example, it returns the tag name for an element node, #text for a text node, #comment for a comment node, #document for the document node, and so on.
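For instance, the other node types mentioned in this note can be inspected directly (a quick illustration, not part of the tutorial's original examples):

<script>
console.log(document.nodeName); // Prints: #document

var comment = document.createComment("A comment node");
console.log(comment.nodeName); // Prints: #comment
</script>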

If you look at the above example, the nodeName of the first child node of the main DIV element returns #text instead of H1. That's because whitespace characters such as spaces, tabs, and newlines are valid characters; they form #text nodes and become part of the DOM tree. Since the <div> tag contains a newline before the <h1> tag, a #text node is created for it.

To avoid the issue of firstChild and lastChild returning #text or #comment nodes, you can alternatively use the firstElementChild and lastElementChild properties, which return only the first and last element node, respectively. However, they do not work in IE 9 and earlier.

Example

<div id="main">
    <h1 id="title">My Heading</h1>
    <p id="hint"><span>This is some text.</span></p>
</div>

<script>
var main = document.getElementById("main");
alert(main.firstElementChild.nodeName); // Outputs: H1
main.firstElementChild.style.color = "red";

var hint = document.getElementById("hint");
alert(hint.firstElementChild.nodeName); // Outputs: SPAN
hint.firstElementChild.style.color = "blue";
</script>

Similarly, you can use the childNodes property to access all child nodes of a given element, where the first child node is assigned index 0. Here's an example:

Example

<div id="main">
    <h1 id="title">My Heading</h1>
    <p id="hint"><span>This is some text.</span></p>
</div>

<script>
var main = document.getElementById("main");

// First check that the element has child nodes 
if(main.hasChildNodes()) {
    var nodes = main.childNodes;
    
    // Loop through node list and display node name
    for(var i = 0; i < nodes.length; i++) {
        alert(nodes[i].nodeName);
    }
}
</script>

The childNodes property returns all child nodes, including non-element nodes like text and comment nodes. To get a collection of element nodes only, use the children property instead.

Example

<div id="main">
    <h1 id="title">My Heading</h1>
    <p id="hint"><span>This is some text.</span></p>
</div>

<script>
var main = document.getElementById("main");

// First check that the element has child nodes 
if(main.hasChildNodes()) {
    var nodes = main.children;
    
    // Loop through node list and display node name
    for(var i = 0; i < nodes.length; i++) {
        alert(nodes[i].nodeName);
    }
}
</script>
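The introduction above also mentions navigating up and sideways in the DOM tree. As a minimal sketch (not part of the original examples), the standard parentNode and nextElementSibling properties cover both directions:

<script>
var title = document.getElementById("title");
console.log(title.parentNode.nodeName); // Prints: DIV (navigating up)
console.log(title.nextElementSibling.nodeName); // Prints: P (navigating sideways)
</script>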

#javascript 

Swift Tips: A Collection of Useful Tips for the Swift Language

SwiftTips

The following is a collection of tips I find to be useful when working with the Swift language. More content is available on my Twitter account!

Property Wrappers as Debugging Tools

Property Wrappers allow developers to wrap properties with specific behaviors that will be seamlessly triggered whenever the properties are accessed.

While their primary use case is to implement business logic within our apps, it's also possible to use Property Wrappers as debugging tools!

For example, we could build a wrapper called @History that would be added to a property while debugging and would keep track of all the values set to this property.

import Foundation

@propertyWrapper
struct History<Value> {
    private var value: Value
    private(set) var history: [Value] = []

    init(wrappedValue: Value) {
        self.value = wrappedValue
    }
    
    var wrappedValue: Value {
        get { value }

        set {
            history.append(value)
            value = newValue
        }
    }
    
    var projectedValue: Self {
        return self
    }
}

// We can then decorate our business code
// with the `@History` wrapper
struct User {
    @History var name: String = ""
}

var user = User()

// All the existing call sites will still
// compile, without the need for any change
user.name = "John"
user.name = "Jane"

// But now we can also access a history of
// all the previous values!
user.$name.history // ["", "John"]

Localization through String interpolation

Swift 5 gave us the possibility to define our own custom String interpolation methods.

This feature can be used to power many use cases, but there is one that is guaranteed to make sense in most projects: localizing user-facing strings.

import Foundation

extension String.StringInterpolation {
    mutating func appendInterpolation(localized key: String, _ args: CVarArg...) {
        let localized = String(format: NSLocalizedString(key, comment: ""), arguments: args)
        appendLiteral(localized)
    }
}


/*
 Let's assume that this is the content of our Localizable.strings:
 
 "welcome.screen.greetings" = "Hello %@!";
 */

let userName = "John"
print("\(localized: "welcome.screen.greetings", userName)") // Hello John!

Implementing pseudo-inheritance between structs

If you’ve always wanted to use some kind of inheritance mechanism for your structs, Swift 5.1 is going to make you very happy!

Using the new KeyPath-based dynamic member lookup, you can implement some pseudo-inheritance, where a type inherits the API of another one 🎉

(However, be careful, I’m definitely not advocating inheritance as a go-to solution 🙃)

import Foundation

protocol Inherits {
    associatedtype SuperType
    
    var `super`: SuperType { get }
}

extension Inherits {
    subscript<T>(dynamicMember keyPath: KeyPath<SuperType, T>) -> T {
        return self.`super`[keyPath: keyPath]
    }
}

struct Person {
    let name: String
}

@dynamicMemberLookup
struct User: Inherits {
    let `super`: Person
    
    let login: String
    let password: String
}

let user = User(super: Person(name: "John Appleseed"), login: "Johnny", password: "1234")

user.name // "John Appleseed"
user.login // "Johnny"

Composing NSAttributedString through a Function Builder

Swift 5.1 introduced Function Builders: a great tool for building custom DSL syntaxes, like SwiftUI. However, one doesn't need to be building a full-fledged DSL in order to leverage them.

For example, it's possible to write a simple Function Builder, whose job will be to compose together individual instances of NSAttributedString through a nicer syntax than the standard API.

import UIKit

@_functionBuilder
class NSAttributedStringBuilder {
    static func buildBlock(_ components: NSAttributedString...) -> NSAttributedString {
        let result = NSMutableAttributedString(string: "")
        
        return components.reduce(into: result) { (result, current) in result.append(current) }
    }
}

extension NSAttributedString {
    class func composing(@NSAttributedStringBuilder _ parts: () -> NSAttributedString) -> NSAttributedString {
        return parts()
    }
}

let result = NSAttributedString.composing {
    NSAttributedString(string: "Hello",
                       attributes: [.font: UIFont.systemFont(ofSize: 24),
                                    .foregroundColor: UIColor.red])
    NSAttributedString(string: " world!",
                       attributes: [.font: UIFont.systemFont(ofSize: 20),
                                    .foregroundColor: UIColor.orange])
}

Using switch and if as expressions

Contrary to other languages, like Kotlin, Swift does not allow switch and if to be used as expressions, meaning that the following code is not valid Swift:

let constant = if condition {
                  someValue
               } else {
                  someOtherValue
               }

A common solution to this problem is to wrap the if or switch statement within a closure that will then be immediately called. While this approach does achieve the desired goal, it makes for rather poor syntax.

To avoid the ugly trailing () and improve readability, you can define a resultOf function that serves the exact same purpose in a more elegant way.

import Foundation

func resultOf<T>(_ code: () -> T) -> T {
    return code()
}

let randomInt = Int.random(in: 0...3)

let spelledOut: String = resultOf {
    switch randomInt {
    case 0:
        return "Zero"
    case 1:
        return "One"
    case 2:
        return "Two"
    case 3:
        return "Three"
    default:
        return "Out of range"
    }
}

print(spelledOut)

Avoiding double negatives within guard statements

A guard statement is a very convenient way for the developer to assert that a condition is met, in order for the execution of the program to keep going.

However, since the body of a guard statement is meant to be executed when the condition evaluates to false, the use of the negation (!) operator within the condition of a guard statement can make the code hard to read, as it becomes a double negative.

A nice trick to avoid such double negatives is to encapsulate the use of the ! operator within a new property or function, whose name does not include a negative.

import Foundation

extension Collection {
    var hasElements: Bool {
        return !isEmpty
    }
}

let array = Bool.random() ? [1, 2, 3] : []

guard array.hasElements else { fatalError("array was empty") }

print(array)

Defining a custom init without losing the compiler-generated one

It's common knowledge for Swift developers that, when you define a struct, the compiler is going to automatically generate a memberwise init for you. That is, unless you also define an init of your own, because then the compiler won't generate any memberwise init.

Yet, there are many instances where we might enjoy the opportunity to get both. As it turns out, this goal is quite easy to achieve: you just need to define your own init in an extension rather than inside the type definition itself.

import Foundation

struct Point {
    let x: Int
    let y: Int
}

extension Point {
    init() {
        x = 0
        y = 0
    }
}

let usingDefaultInit = Point(x: 4, y: 3)
let usingCustomInit = Point()

Implementing a namespace through an empty enum

Swift does not really have out-of-the-box support for namespaces. One could argue that a Swift module can be seen as a namespace, but creating a dedicated framework for this sole purpose can legitimately be regarded as overkill.

Some developers have taken up the habit of using a struct that only contains static fields to implement a namespace. While this does the job, it requires us to remember to implement an empty private init(), because it wouldn't make sense for such a struct to be instantiated.

It's actually possible to take this approach one step further, by replacing the struct with an enum. While it might seem weird to have an enum with no case, it's actually a very idiomatic way to declare a type that cannot be instantiated.

import Foundation

enum NumberFormatterProvider {
    static var currencyFormatter: NumberFormatter {
        let formatter = NumberFormatter()
        formatter.numberStyle = .currency
        formatter.roundingIncrement = 0.01
        return formatter
    }
    
    static var decimalFormatter: NumberFormatter {
        let formatter = NumberFormatter()
        formatter.numberStyle = .decimal
        formatter.decimalSeparator = ","
        return formatter
    }
}

NumberFormatterProvider() // ❌ impossible to instantiate by mistake

NumberFormatterProvider.currencyFormatter.string(from: 2.456) // $2.46
NumberFormatterProvider.decimalFormatter.string(from: 2.456) // 2,456

Using Never to represent impossible code paths

Never is quite a peculiar type in the Swift Standard Library: it is defined as an empty enum enum Never { }.

While this might seem odd at first glance, it actually yields a very interesting property: it makes it a type that cannot be constructed (i.e. it possesses no instances).

This way, Never can be used as a generic parameter to let the compiler know that a particular feature will not be used.

import Foundation

enum Result<Value, Error> {
    case success(value: Value)
    case failure(error: Error)
}

func willAlwaysSucceed(_ completion: @escaping ((Result<String, Never>) -> Void)) {
    completion(.success(value: "Call was successful"))
}

willAlwaysSucceed( { result in
    switch result {
    case .success(let value):
        print(value)
    // the compiler knows that the `failure` case cannot happen
    // so it doesn't require us to handle it.
    }
})

Providing a default value to a Decodable enum

Swift's Codable framework does a great job at seamlessly decoding entities from a JSON stream. However, when we integrate web-services, we are sometimes left to deal with JSONs that require behaviors that Codable does not provide out-of-the-box.

For instance, we might have a string-based or integer-based enum, and be required to set it to a default value when the data found in the JSON does not match any of its cases.

We might be tempted to implement this via an extensive switch statement over all the possible cases, but there is a much shorter alternative through the initializer init?(rawValue:):

import Foundation

enum State: String, Decodable {
    case active
    case inactive
    case undefined
    
    init(from decoder: Decoder) throws {
        let container = try decoder.singleValueContainer()
        let decodedString = try container.decode(String.self)
        
        self = State(rawValue: decodedString) ?? .undefined
    }
}

let data = """
["active", "inactive", "foo"]
""".data(using: .utf8)!

let decoded = try! JSONDecoder().decode([State].self, from: data)

print(decoded) // [State.active, State.inactive, State.undefined]

Another lightweight dependency injection through default values for function parameters

Dependency injection boils down to a simple idea: when an object requires a dependency, it shouldn't create it by itself, but should instead be given a function that creates it.

Now the great thing with Swift is that, not only can a function take another function as a parameter, but that parameter can also be given a default value.

When you combine both of those features, you can end up with a dependency injection pattern that is lightweight on boilerplate while remaining type-safe.

import Foundation

protocol Service {
    func call() -> String
}

class ProductionService: Service {
    func call() -> String {
        return "This is the production"
    }
}

class MockService: Service {
    func call() -> String {
        return "This is a mock"
    }
}

typealias Provider<T> = () -> T

class Controller {
    
    let service: Service
    
    init(serviceProvider: Provider<Service> = { return ProductionService() }) {
        self.service = serviceProvider()
    }
    
    func work() {
        print(service.call())
    }
}

let productionController = Controller()
productionController.work() // prints "This is the production"

let mockedController = Controller(serviceProvider: { return MockService() })
mockedController.work() // prints "This is a mock"

Lightweight dependency injection through protocol-oriented programming

Singletons are pretty bad. They make your architecture rigid and tightly coupled, which then results in your code being hard to test and refactor. Instead of using singletons, your code should rely on dependency injection, which is a much more architecturally sound approach.

But singletons are so easy to use, and dependency injection requires us to do extra work. So maybe, for simple situations, we could find an in-between solution?

One possible solution is to rely on one of Swift's best-known features: protocol-oriented programming. Using a protocol, we declare and access our dependency. We then store it in a private singleton, and perform the injection through an extension of said protocol.

This way, our code will indeed be decoupled from its dependency, while at the same time keeping the boilerplate to a minimum.

import Foundation

protocol Formatting {
    var formatter: NumberFormatter { get }
}

private let sharedFormatter: NumberFormatter = {
    let sharedFormatter = NumberFormatter()
    sharedFormatter.numberStyle = .currency
    return sharedFormatter
}()

extension Formatting {
    var formatter: NumberFormatter { return sharedFormatter }
}

class ViewModel: Formatting {
    var displayableAmount: String?
    
    func updateDisplay(to amount: Double) {
        displayableAmount = formatter.string(for: amount)
    }
}

let viewModel = ViewModel()

viewModel.updateDisplay(to: 42000.45)
viewModel.displayableAmount // "$42,000.45"

Getting rid of overabundant [weak self] and guard

Callbacks are a part of almost all iOS apps, and as frameworks such as RxSwift keep gaining in popularity, they become ever more present in our codebase.

Seasoned Swift developers are aware of the potential memory leaks that @escaping callbacks can produce, so they make sure to always use [weak self] whenever they need to use self inside such a context. And when they need self to be non-optional, they add a guard statement along with it.

Consequently, this syntax of a [weak self] followed by a guard rapidly tends to appear everywhere in the codebase. The good thing is that, through a little protocol-oriented trick, it's actually possible to get rid of this tedious syntax, without losing any of its benefits!

import Foundation
import PlaygroundSupport

PlaygroundPage.current.needsIndefiniteExecution = true

protocol Weakifiable: class { }

extension Weakifiable {
    func weakify(_ code: @escaping (Self) -> Void) -> () -> Void {
        return { [weak self] in
            guard let self = self else { return }
            
            code(self)
        }
    }
    
    func weakify<T>(_ code: @escaping (T, Self) -> Void) -> (T) -> Void {
        return { [weak self] arg in
            guard let self = self else { return }
            
            code(arg, self)
        }
    }
}

extension NSObject: Weakifiable { }

class Producer: NSObject {
    
    deinit {
        print("deinit Producer")
    }
    
    private var handler: (Int) -> Void = { _ in }
    
    func register(handler: @escaping (Int) -> Void) {
        self.handler = handler
        
        DispatchQueue.main.asyncAfter(deadline: .now() + 1.0, execute: { self.handler(42) })
    }
}

class Consumer: NSObject {
    
    deinit {
        print("deinit Consumer")
    }
    
    let producer = Producer()
    
    func consume() {
        producer.register(handler: weakify { result, strongSelf in
            strongSelf.handle(result)
        })
    }
    
    private func handle(_ result: Int) {
        print("🎉 \(result)")
    }
}

var consumer: Consumer? = Consumer()

consumer?.consume()

DispatchQueue.main.asyncAfter(deadline: .now() + 2.0, execute: { consumer = nil })

// This code prints:
// 🎉 42
// deinit Consumer
// deinit Producer

Solving callback hell with function composition

Asynchronous functions are a big part of iOS APIs, and most developers are familiar with the challenge they pose when one needs to sequentially call several asynchronous APIs.

This often results in callbacks being nested into one another, a predicament often referred to as callback hell.

Many third-party frameworks are able to tackle this issue, for instance RxSwift or PromiseKit. Yet, for simple instances of the problem, there is no need to use such big guns, as it can actually be solved with simple function composition.

import Foundation

typealias CompletionHandler<Result> = (Result?, Error?) -> Void

infix operator ~>: MultiplicationPrecedence

func ~> <T, U>(_ first: @escaping (CompletionHandler<T>) -> Void, _ second: @escaping (T, CompletionHandler<U>) -> Void) -> (CompletionHandler<U>) -> Void {
    return { completion in
        first({ firstResult, error in
            guard let firstResult = firstResult else { completion(nil, error); return }
            
            second(firstResult, { (secondResult, error) in
                completion(secondResult, error)
            })
        })
    }
}

func ~> <T, U>(_ first: @escaping (CompletionHandler<T>) -> Void, _ transform: @escaping (T) -> U) -> (CompletionHandler<U>) -> Void {
    return { completion in
        first({ result, error in
            guard let result = result else { completion(nil, error); return }
            
            completion(transform(result), nil)
        })
    }
}

func service1(_ completionHandler: CompletionHandler<Int>) {
    completionHandler(42, nil)
}

func service2(arg: String, _ completionHandler: CompletionHandler<String>) {
    completionHandler("🎉 \(arg)", nil)
}

let chainedServices = service1
    ~> { int in return String(int / 2) }
    ~> service2

chainedServices({ result, _ in
    guard let result = result else { return }
    
    print(result) // Prints: 🎉 21
})

Transform an asynchronous function into a synchronous one

Asynchronous functions are a great way to deal with future events without blocking a thread. Yet, there are times where we would like them to behave in exactly such a blocking way.

Think about writing unit tests and using mocked network calls. You will need to add complexity to your test in order to deal with asynchronous functions, whereas synchronous ones would be much easier to manage.

Thanks to Swift's proficiency in the functional paradigm, it is possible to write a function whose job is to take an asynchronous function and transform it into a synchronous one.

import Foundation

func makeSynchrone<A, B>(_ asyncFunction: @escaping (A, (B) -> Void) -> Void) -> (A) -> B {
    return { arg in
        let lock = NSRecursiveLock()
        
        var result: B? = nil
        
        asyncFunction(arg) {
            result = $0
            lock.unlock()
        }
        
        lock.lock()
        
        return result!
    }
}

func myAsyncFunction(arg: Int, completionHandler: (String) -> Void) {
    completionHandler("🎉 \(arg)")
}

let syncFunction = makeSynchrone(myAsyncFunction)

print(syncFunction(42)) // prints 🎉 42

Using KeyPaths instead of closures

Closures are a great way to interact with generic APIs, for instance APIs that allow manipulating data structures through the use of generic functions, such as filter() or sorted().

The annoying part is that closures tend to clutter your code with many instances of {, } and $0, which can quickly undermine its readability.

A nice alternative for a cleaner syntax is to use a KeyPath instead of a closure, along with an operator that will deal with transforming the provided KeyPath into a closure.

import Foundation

prefix operator ^

prefix func ^ <Element, Attribute>(_ keyPath: KeyPath<Element, Attribute>) -> (Element) -> Attribute {
    return { element in element[keyPath: keyPath] }
}

struct MyData {
    let int: Int
    let string: String
}

let data = [MyData(int: 2, string: "Foo"), MyData(int: 4, string: "Bar")]

data.map(^\.int) // [2, 4]
data.map(^\.string) // ["Foo", "Bar"]

Bringing some type-safety to a userInfo Dictionary

Many iOS APIs still rely on a userInfo Dictionary to handle use-case specific data. This Dictionary usually stores untyped values, and is declared as follows: [String: Any] (or sometimes [AnyHashable: Any]).

Retrieving data from such a structure will involve some conditional casting (via the as? operator), which is prone to both errors and repetitions. Yet, by introducing a custom subscript, it's possible to encapsulate all the tedious logic and end up with an easier and more robust API.

import Foundation

typealias TypedUserInfoKey<T> = (key: String, type: T.Type)

extension Dictionary where Key == String, Value == Any {
    subscript<T>(_ typedKey: TypedUserInfoKey<T>) -> T? {
        return self[typedKey.key] as? T
    }
}

let userInfo: [String : Any] = ["Foo": 4, "Bar": "forty-two"]

let integerTypedKey = TypedUserInfoKey(key: "Foo", type: Int.self)
let intValue = userInfo[integerTypedKey] // returns 4
type(of: intValue) // returns Int?

let stringTypedKey = TypedUserInfoKey(key: "Bar", type: String.self)
let stringValue = userInfo[stringTypedKey] // returns "forty-two"
type(of: stringValue) // returns String?

Lightweight data-binding for an MVVM implementation

MVVM is a great pattern to separate business logic from presentation logic. The main challenge in making it work is to define a mechanism for the presentation layer to be notified of model updates.

RxSwift is a perfect choice to solve such a problem. Yet, some developers don't feel comfortable with leveraging a third-party library for such a central part of their architecture.

For those situations, it's possible to define a lightweight Variable type that will make the MVVM pattern very easy to use!

import Foundation

class Variable<Value> {
    var value: Value {
        didSet {
            onUpdate?(value)
        }
    }
    
    var onUpdate: ((Value) -> Void)? {
        didSet {
            onUpdate?(value)
        }
    }
    
    init(_ value: Value, _ onUpdate: ((Value) -> Void)? = nil) {
        self.value = value
        self.onUpdate = onUpdate
        self.onUpdate?(value)
    }
}

let variable: Variable<String?> = Variable(nil)

variable.onUpdate = { data in
    if let data = data {
        print(data)
    }
}

variable.value = "Foo"
variable.value = "Bar"

// prints:
// Foo
// Bar

Using typealias to its fullest

The keyword typealias allows developers to give a new name to an already existing type. For instance, Swift defines Void as a typealias of (), the empty tuple.

But a lesser-known feature of this mechanism is that it allows assigning concrete types to generic parameters, or renaming them. This can help make the semantics of generic types much clearer when they are used in specific use cases.

import Foundation

enum Either<Left, Right> {
    case left(Left)
    case right(Right)
}

typealias Result<Value> = Either<Value, Error>

typealias IntOrString = Either<Int, String>

Writing an interruptible overload of forEach

Iterating through objects via the forEach(_:) method is a great alternative to the classic for loop, as it allows our code to be completely oblivious of the iteration logic. One limitation, however, is that forEach(_:) does not allow stopping the iteration midway.

Taking inspiration from the Objective-C implementation, we can write an overload that will allow the developer to stop the iteration, if needed.

import Foundation

extension Sequence {
    func forEach(_ body: (Element, _ stop: inout Bool) throws -> Void) rethrows {
        var stop = false
        for element in self {
            try body(element, &stop)
            
            if stop {
                return
            }
        }
    }
}

["Foo", "Bar", "FooBar"].forEach { element, stop in
    print(element)
    stop = (element == "Bar")
}

// Prints:
// Foo
// Bar

Optimizing the use of reduce()

Functional programming is a great way to simplify a codebase. For instance, reduce is an alternative to the classic for loop, without most of the boilerplate. Unfortunately, simplicity often comes at the price of performance.

Consider that you want to remove duplicate values from a Sequence. While reduce() is a perfectly fine way to express this computation, the performance will be suboptimal, because of all the unnecessary Array copying that will happen every time its closure gets called.

That's when reduce(into:_:) comes into play. This version of reduce leverages the capacities of copy-on-write types (such as Array or Dictionary) in order to avoid unnecessary copying, which results in a great performance boost.

import Foundation

func time(averagedExecutions: Int = 1, _ code: () -> Void) {
    let start = Date()
    for _ in 0..<averagedExecutions { code() }
    let end = Date()
    
    let duration = end.timeIntervalSince(start) / Double(averagedExecutions)
    
    print("time: \(duration)")
}

let data = (1...1_000).map { _ in Int(arc4random_uniform(256)) }


// runs in 0.63s
time {
    let noDuplicates: [Int] = data.reduce([], { $0.contains($1) ? $0 : $0 + [$1] })
}

// runs in 0.15s
time {
    let noDuplicates: [Int] = data.reduce(into: [], { if !$0.contains($1) { $0.append($1) } } )
}

Avoiding hardcoded reuse identifiers

UI components such as UITableView and UICollectionView rely on reuse identifiers in order to efficiently recycle the views they display. Often, those reuse identifiers take the form of a static hardcoded String, that will be used for every instance of their class.

Through protocol-oriented programming, it's possible to avoid those hardcoded values, and instead use the name of the type as a reuse identifier.

import Foundation
import UIKit

protocol Reusable {
    static var reuseIdentifier: String { get }
}

extension Reusable {
    static var reuseIdentifier: String {
        return String(describing: self)
    }
}

extension UITableViewCell: Reusable { }

extension UITableView {
    func register<T: UITableViewCell>(_ class: T.Type) {
        register(`class`, forCellReuseIdentifier: T.reuseIdentifier)
    }
    func dequeueReusableCell<T: UITableViewCell>(for indexPath: IndexPath) -> T {
        return dequeueReusableCell(withIdentifier: T.reuseIdentifier, for: indexPath) as! T
    }
}

class MyCell: UITableViewCell { }

let tableView = UITableView()

tableView.register(MyCell.self)
let myCell: MyCell = tableView.dequeueReusableCell(for: [0, 0])

Defining a union type

The C language has a construct called union, that allows a single variable to hold values from different types. While Swift does not provide such a construct, it provides enums with associated values, which allows us to define a type called Either that implements a union of two types.

import Foundation

enum Either<A, B> {
    case left(A)
    case right(B)
    
    func either(ifLeft: ((A) -> Void)? = nil, ifRight: ((B) -> Void)? = nil) {
        switch self {
        case let .left(a):
            ifLeft?(a)
        case let .right(b):
            ifRight?(b)
        }
    }
}

extension Bool { static func random() -> Bool { return arc4random_uniform(2) == 0 } }

var intOrString: Either<Int, String> = Bool.random() ? .left(2) : .right("Foo")

intOrString.either(ifLeft: { print($0 + 1) }, ifRight: { print($0 + "Bar") })

If you're interested in this kind of data structure, I strongly recommend that you learn more about Algebraic Data Types.

Asserting that classes have associated NIBs and vice-versa

Most of the time, when we create a .xib file, we give it the same name as its associated class. Consequently, if we later refactor our code and rename such a class, we run the risk of forgetting to rename the associated .xib.

While the error will often be easy to catch, if the .xib is used in a remote section of the app, it might go unnoticed for some time. Fortunately, it's possible to build custom test predicates that will assert that 1) for a given class, there exists a .nib with the same name in a given Bundle, and 2) for all the .nibs in a given Bundle, there exists a class with the same name.

import XCTest

public func XCTAssertClassHasNib(_ class: AnyClass, bundle: Bundle, file: StaticString = #file, line: UInt = #line) {
    let associatedNibURL = bundle.url(forResource: String(describing: `class`), withExtension: "nib")
    
    XCTAssertNotNil(associatedNibURL, "Class \"\(`class`)\" has no associated nib file", file: file, line: line)
}

public func XCTAssertNibHaveClasses(_ bundle: Bundle, file: StaticString = #file, line: UInt = #line) {
    guard let bundleName = bundle.infoDictionary?["CFBundleName"] as? String,
        let basePath = bundle.resourcePath,
        let enumerator = FileManager.default.enumerator(at: URL(fileURLWithPath: basePath),
                                                    includingPropertiesForKeys: nil,
                                                    options: [.skipsHiddenFiles, .skipsSubdirectoryDescendants]) else { return }
    
    var nibFilesURLs = [URL]()
    
    for case let fileURL as URL in enumerator {
        if fileURL.pathExtension.uppercased() == "NIB" {
            nibFilesURLs.append(fileURL)
        }
    }
    
    nibFilesURLs.map { $0.lastPathComponent }
        .compactMap { $0.split(separator: ".").first }
        .map { String($0) }
        .forEach {
            let associatedClass: AnyClass? = bundle.classNamed("\(bundleName).\($0)")
            
            XCTAssertNotNil(associatedClass, "File \"\($0).nib\" has no associated class", file: file, line: line)
        }
}

XCTAssertClassHasNib(MyFirstTableViewCell.self, bundle: Bundle(for: AppDelegate.self))
XCTAssertClassHasNib(MySecondTableViewCell.self, bundle: Bundle(for: AppDelegate.self))
        
XCTAssertNibHaveClasses(Bundle(for: AppDelegate.self))

Many thanks to Benjamin Lavialle for coming up with the idea behind the second test predicate.

Small footprint type-erasing with functions

Seasoned Swift developers know it: a protocol with associated type (PAT) "can only be used as a generic constraint because it has Self or associated type requirements". When we really need to use a PAT to type a variable, the go-to workaround is to use a type-erased wrapper.

While this solution works perfectly, it requires a fair amount of boilerplate code. In instances where we are only interested in exposing one particular function of the PAT, a shorter approach using function types is possible.

import Foundation
import UIKit

protocol Configurable {
    associatedtype Model
    
    func configure(with model: Model)
}

typealias Configurator<Model> = (Model) -> ()

extension UILabel: Configurable {
    func configure(with model: String) {
        self.text = model
    }
}

let label = UILabel()
let configurator: Configurator<String> = label.configure

configurator("Foo")

label.text // "Foo"

Performing animations sequentially

UIKit exposes a very powerful and simple API to perform view animations. However, this API can become a little bit quirky to use when we want to perform animations sequentially, because it involves nesting closures within one another, which produces notoriously hard-to-maintain code.

Nonetheless, it's possible to define a rather simple class that exposes a much nicer API for this particular use case 👌

import Foundation
import UIKit

class AnimationSequence {
    typealias Animations = () -> Void
    
    private let current: Animations
    private let duration: TimeInterval
    private var next: AnimationSequence? = nil
    
    init(animations: @escaping Animations, duration: TimeInterval) {
        self.current = animations
        self.duration = duration
    }
    
    @discardableResult func append(animations: @escaping Animations, duration: TimeInterval) -> AnimationSequence {
        var lastAnimation = self
        while let nextAnimation = lastAnimation.next {
            lastAnimation = nextAnimation
        }
        lastAnimation.next = AnimationSequence(animations: animations, duration: duration)
        return self
    }
    
    func run() {
        UIView.animate(withDuration: duration, animations: current, completion: { finished in
            if finished, let next = self.next {
                next.run()
            }
        })
    }
}

var firstView = UIView()
var secondView = UIView()

firstView.alpha = 0
secondView.alpha = 0

AnimationSequence(animations: { firstView.alpha = 1.0 }, duration: 1)
            .append(animations: { secondView.alpha = 1.0 }, duration: 0.5)
            .append(animations: { firstView.alpha = 0.0 }, duration: 2.0)
            .run()

Debouncing a function call

Debouncing is a very useful tool when dealing with UI inputs. Consider a search bar, whose content is used to query an API. It wouldn't make sense to perform a request for every character the user is typing, because as soon as a new character is entered, the result of the previous request has become irrelevant.

Instead, our code will perform much better if we "debounce" the API call, meaning that we will wait until some delay has passed, without the input being modified, before actually performing the call.

import Foundation

func debounced(delay: TimeInterval, queue: DispatchQueue = .main, action: @escaping (() -> Void)) -> () -> Void {
    var workItem: DispatchWorkItem?
    
    return {
        workItem?.cancel()
        workItem = DispatchWorkItem(block: action)
        queue.asyncAfter(deadline: .now() + delay, execute: workItem!)
    }
}

let debouncedPrint = debounced(delay: 1.0) { print("Action performed!") }

debouncedPrint()
debouncedPrint()
debouncedPrint()

// After a 1 second delay, this gets
// printed only once to the console:

// Action performed!

Providing useful operators for Optional booleans

When we need to apply the standard boolean operators to Optional booleans, we often end up with a syntax unnecessarily crowded with unwrapping operations. By taking a cue from the world of three-valued logics, we can define a couple of operators that make working with Bool? values much nicer.

import Foundation

func && (lhs: Bool?, rhs: Bool?) -> Bool? {
    switch (lhs, rhs) {
    case (false, _), (_, false):
        return false
    case let (unwrapLhs?, unwrapRhs?):
        return unwrapLhs && unwrapRhs
    default:
        return nil
    }
}

func || (lhs: Bool?, rhs: Bool?) -> Bool? {
    switch (lhs, rhs) {
    case (true, _), (_, true):
        return true
    case let (unwrapLhs?, unwrapRhs?):
        return unwrapLhs || unwrapRhs
    default:
        return nil
    }
}

false && nil // false
true && nil // nil
[true, nil, false].reduce(true, &&) // false

nil || true // true
nil || false // nil
[true, nil, false].reduce(false, ||) // true

Removing duplicate values from a Sequence

Transforming a Sequence in order to remove all the duplicate values it contains is a classic use case. To implement it, one could be tempted to transform the Sequence into a Set, then back to an Array. The downside with this approach is that it will not preserve the order of the sequence, which can definitely be a dealbreaker. Using reduce() it is possible to provide a concise implementation that preserves ordering:

import Foundation

extension Sequence where Element: Equatable {
    func duplicatesRemoved() -> [Element] {
        return reduce([], { $0.contains($1) ? $0 : $0 + [$1] })
    }
}

let data = [2, 5, 2, 3, 6, 5, 2]

data.duplicatesRemoved() // [2, 5, 3, 6]

Shorter syntax to deal with optional strings

Optional strings are very common in Swift code, for instance many objects from UIKit expose the text they display as a String?. Many times you will need to manipulate this data as an unwrapped String, with a default value set to the empty string for nil cases.

While the nil-coalescing operator (i.e. ??) is a perfectly fine way to achieve this goal, defining a computed variable like orEmpty can help a lot in cleaning up the syntax.

import Foundation
import UIKit

extension Optional where Wrapped == String {
    var orEmpty: String {
        switch self {
        case .some(let value):
            return value
        case .none:
            return ""
        }
    }
}

func doesNotWorkWithOptionalString(_ param: String) {
    // do something with `param`
}

let label = UILabel()
label.text = "This is some text."

doesNotWorkWithOptionalString(label.text.orEmpty)

Encapsulating background computation and UI update

Every seasoned iOS developer knows it: objects from UIKit can only be accessed from the main thread. Any attempt to access them from a background thread is a guaranteed crash.

Still, running a costly computation in the background, and then using the result to update the UI, is a common pattern.

In such cases you can rely on asyncUI to encapsulate all the boilerplate code.

import Foundation
import UIKit

func asyncUI<T>(_ computation: @autoclosure @escaping () -> T, qos: DispatchQoS.QoSClass = .userInitiated, _ completion: @escaping (T) -> Void) {
    DispatchQueue.global(qos: qos).async {
        let value = computation()
        DispatchQueue.main.async {
            completion(value)
        }
    }
}

let label = UILabel()

func costlyComputation() -> Int { return (0..<10_000).reduce(0, +) }

asyncUI(costlyComputation()) { value in
    label.text = "\(value)"
}

Retrieving all the necessary data to build a debug view

A debug view, from which any controller of an app can be instantiated and pushed on the navigation stack, has the potential to bring some real value to a development process. A requirement to build such a view is to have a list of all the classes from a given Bundle that inherit from UIViewController. With the following extension, retrieving this list becomes a piece of cake 🍰

import Foundation
import UIKit
import ObjectiveC

extension Bundle {
    func viewControllerTypes() -> [UIViewController.Type] {
        guard let bundlePath = self.executablePath else { return [] }
        
        var size: UInt32 = 0
        var rawClassNames: UnsafeMutablePointer<UnsafePointer<Int8>>!
        var parsedClassNames = [String]()
        
        rawClassNames = objc_copyClassNamesForImage(bundlePath, &size)
        
        for index in 0..<size {
            let className = rawClassNames[Int(index)]
            
            if let name = NSString.init(utf8String:className) as String?,
                NSClassFromString(name) is UIViewController.Type {
                parsedClassNames.append(name)
            }
        }
        
        return parsedClassNames
            .sorted()
            .compactMap { NSClassFromString($0) as? UIViewController.Type }
    }
}

// Fetch all view controller types in UIKit
Bundle(for: UIViewController.self).viewControllerTypes()

I share the credit for this tip with Benoît Caron.

Defining a function to map over dictionaries

Update: As it turns out, map is actually a really bad name for this function, because it does not preserve composition of transformations, a property that is required to fit the definition of a real map function.

Surprisingly enough, the standard library doesn't define a map() function for dictionaries that allows mapping both keys and values into a new Dictionary. Nevertheless, such a function can be helpful, for instance when converting data across different frameworks.

import Foundation

extension Dictionary {
    func map<T: Hashable, U>(_ transform: (Key, Value) throws -> (T, U)) rethrows -> [T: U] {
        var result: [T: U] = [:]
        
        for (key, value) in self {
            let (transformedKey, transformedValue) = try transform(key, value)
            result[transformedKey] = transformedValue
        }
        
        return result
    }
}

let data = [0: 5, 1: 6, 2: 7]
data.map { ("\($0)", $1 * $1) } // ["2": 49, "0": 25, "1": 36]
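One way to see why map is a misleading name (a quick illustration, not part of the original tip): when the transform maps two keys to the same new key, entries are silently lost, so the operation is not structure-preserving the way a lawful map would be.

let collided: [String: Int] = data.map { _, value in ("same", value) }
collided.count // 1 – which value survived depends on the dictionary's iteration order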

A shorter syntax to remove nil values

Swift provides the function compactMap(), which can be used to remove nil values from a Sequence of optionals when calling it with an argument that just returns its parameter (i.e. compactMap { $0 }). Still, for such use cases it would be nice to get rid of the trailing closure.

The implementation isn't as straightforward as your usual extension, but once it has been written, the call site definitely gets cleaner 👌

import Foundation

protocol OptionalConvertible {
    associatedtype Wrapped
    func asOptional() -> Wrapped?
}

extension Optional: OptionalConvertible {
    func asOptional() -> Wrapped? {
        return self
    }
}

extension Sequence where Element: OptionalConvertible {
    func compacted() -> [Element.Wrapped] {
        return compactMap { $0.asOptional() }
    }
}

let data = [nil, 1, 2, nil, 3, 5, nil, 8, nil]
data.compacted() // [1, 2, 3, 5, 8]

Dealing with expirable values

It might happen that your code has to deal with values that come with an expiration date. In a game, it could be a score multiplier that will only last for 30 seconds. Or it could be an authentication token for an API, with a 15 minutes lifespan. In both instances you can rely on the type Expirable to encapsulate the expiration logic.

import Foundation

struct Expirable<T> {
    private var innerValue: T
    private(set) var expirationDate: Date
    
    var value: T? {
        return hasExpired() ? nil : innerValue
    }
    
    init(value: T, expirationDate: Date) {
        self.innerValue = value
        self.expirationDate = expirationDate
    }
    
    init(value: T, duration: Double) {
        self.innerValue = value
        self.expirationDate = Date().addingTimeInterval(duration)
    }
    
    func hasExpired() -> Bool {
        return expirationDate < Date()
    }
}

let expirable = Expirable(value: 42, duration: 3)

sleep(2)
expirable.value // 42
sleep(2)
expirable.value // nil

I share the credit for this tip with Benoît Caron.

Using parallelism to speed up map()

Almost all Apple devices able to run Swift code are powered by a multi-core CPU; consequently, making good use of parallelism is a great way to improve code performance. map() is a perfect candidate for such an optimization, because it is almost trivial to define a parallel implementation.

import Foundation

extension Array {
    func parallelMap<T>(_ transform: (Element) -> T) -> [T] {
        let res = UnsafeMutablePointer<T>.allocate(capacity: count)
        
        DispatchQueue.concurrentPerform(iterations: count) { i in
            res[i] = transform(self[i])
        }
        
        let finalResult = Array<T>(UnsafeBufferPointer(start: res, count: count))
        res.deallocate(capacity: count)
        
        return finalResult
    }
}

let array = (0..<1_000).map { $0 }

func work(_ n: Int) -> Int {
    return (0..<n).reduce(0, +)
}

array.parallelMap { work($0) }

🚨 Make sure to only use parallelMap() when the transform function actually performs some costly computations. Otherwise performance will be systematically slower than with map(), because of the multithreading overhead.

Measuring execution time with minimum boilerplate

During development of a feature that performs some heavy computations, it can be helpful to measure just how much time a chunk of code takes to run. The time() function is a nice tool for this purpose, because of how simple it is to add and then to remove when it is no longer needed.

import Foundation

func time(averagedExecutions: Int = 1, _ code: () -> Void) {
    let start = Date()
    for _ in 0..<averagedExecutions { code() }
    let end = Date()
    
    let duration = end.timeIntervalSince(start) / Double(averagedExecutions)
    
    print("time: \(duration)")
}

time {
    (0...10_000).map { $0 * $0 }
}
// time: 0.183973908424377

Running two pieces of code in parallel

Concurrency is definitely one of those topics where the right encapsulation has the potential to make your life so much easier. For instance, with this piece of code you can easily launch two computations in parallel, and have the results returned in a tuple.

import Foundation

func parallel<T, U>(_ left: @autoclosure () -> T, _ right: @autoclosure () -> U) -> (T, U) {
    var leftRes: T?
    var rightRes: U?
    
    DispatchQueue.concurrentPerform(iterations: 2, execute: { id in
        if id == 0 {
            leftRes = left()
        } else {
            rightRes = right()
        }
    })
    
    return (leftRes!, rightRes!)
}

let values = (1...100_000).map { $0 }

let results = parallel(values.map { $0 * $0 }, values.reduce(0, +))

Making good use of #file, #line and #function

Swift exposes three special variables #file, #line and #function, that are respectively set to the name of the current file, line and function. Those variables become very useful when writing custom logging functions or test predicates.

import Foundation

func log(_ message: String, _ file: String = #file, _ line: Int = #line, _ function: String = #function) {
    print("[\(file):\(line)] \(function) - \(message)")
}

func foo() {
    log("Hello world!")
}

foo() // [MyPlayground.playground:8] foo() - Hello world!

Comparing Optionals through Conditional Conformance

Swift 4.1 has introduced a new feature called Conditional Conformance, which allows a type to implement a protocol only when its generic type also does.

With this addition it becomes easy to let Optional implement Comparable only when Wrapped also implements Comparable:

import Foundation

extension Optional: Comparable where Wrapped: Comparable {
    public static func < (lhs: Optional, rhs: Optional) -> Bool {
        switch (lhs, rhs) {
        case let (lhs?, rhs?):
            return lhs < rhs
        case (nil, _?):
            return true // anything is greater than nil
        case (_?, nil):
            return false // nil is smaller than anything
        case (nil, nil):
            return false // nil is not smaller than itself
        }
    }
}

let data: [Int?] = [8, 4, 3, nil, 12, 4, 2, nil, -5]
data.sorted() // [nil, nil, Optional(-5), Optional(2), Optional(3), Optional(4), Optional(4), Optional(8), Optional(12)]

Safely subscripting a Collection

Any attempt to access an Array beyond its bounds will result in a crash. While it's possible to write conditions such as if index < array.count { array[index] } in order to prevent such crashes, this approach will rapidly become cumbersome.

A great thing is that this condition can be encapsulated in a custom subscript that will work on any Collection:

import Foundation

extension Collection {
    subscript (safe index: Index) -> Element? {
        return indices.contains(index) ? self[index] : nil
    }
}

let data = [1, 3, 4]

data[safe: 1] // Optional(3)
data[safe: 10] // nil

Easier String slicing using ranges

Subscripting a string with a range can be very cumbersome in Swift 4. Let's face it, no one wants to write lines like someString[index(startIndex, offsetBy: 0)..<index(startIndex, offsetBy: 10)] on a regular basis.

Luckily, with the addition of one clever extension, strings can be sliced as easily as arrays 🎉

import Foundation

extension String {
    public subscript(value: CountableClosedRange<Int>) -> Substring {
        get {
            return self[index(startIndex, offsetBy: value.lowerBound)...index(startIndex, offsetBy: value.upperBound)]
        }
    }
    
    public subscript(value: CountableRange<Int>) -> Substring {
        get {
            return self[index(startIndex, offsetBy: value.lowerBound)..<index(startIndex, offsetBy: value.upperBound)]
        }
    }
    
    public subscript(value: PartialRangeUpTo<Int>) -> Substring {
        get {
            return self[..<index(startIndex, offsetBy: value.upperBound)]
        }
    }
    
    public subscript(value: PartialRangeThrough<Int>) -> Substring {
        get {
            return self[...index(startIndex, offsetBy: value.upperBound)]
        }
    }
    
    public subscript(value: PartialRangeFrom<Int>) -> Substring {
        get {
            return self[index(startIndex, offsetBy: value.lowerBound)...]
        }
    }
}

let data = "This is a string!"

data[..<4]  // "This"
data[5..<9] // "is a"
data[10...] // "string!"

Concise syntax for sorting using a KeyPath

By using a KeyPath along with a generic type, a very clean and concise syntax for sorting data can be implemented:

import Foundation

extension Sequence {
    func sorted<T: Comparable>(by attribute: KeyPath<Element, T>) -> [Element] {
        return sorted(by: { $0[keyPath: attribute] < $1[keyPath: attribute] })
    }
}

let data = ["Some", "words", "of", "different", "lengths"]

data.sorted(by: \.count) // ["of", "Some", "words", "lengths", "different"]

If you like this syntax, make sure to check out KeyPathKit!

Manufacturing cache-efficient versions of pure functions

By capturing a local variable in a returned closure, it is possible to manufacture cache-efficient versions of pure functions. Be careful though, this trick only works with non-recursive functions!

import Foundation

func cached<In: Hashable, Out>(_ f: @escaping (In) -> Out) -> (In) -> Out {
    var cache = [In: Out]()
    
    return { (input: In) -> Out in
        if let cachedValue = cache[input] {
            return cachedValue
        } else {
            let result = f(input)
            cache[input] = result
            return result
        }
    }
}

let cachedCos = cached { (x: Double) in cos(x) }

cachedCos(.pi * 2) // value of cos for 2π is now cached

Simplifying complex conditions with pattern matching

When distinguishing between complex boolean conditions, using a switch statement along with pattern matching can be more readable than the classic series of if {} else if {}.

import Foundation

// Stub functions and example values, so that the snippet compiles
func functionA() { print("A") }
func functionB() { print("B") }
func functionC() { print("C") }

let expr1 = true
let expr2 = false
let expr3 = true

if expr1 && !expr3 {
    functionA()
} else if !expr2 && expr3 {
    functionB()
} else if expr1 && !expr2 && expr3 {
    functionC()
}

switch (expr1, expr2, expr3) {
    
case (true, _, false):
    functionA()
case (_, false, true):
    functionB()
case (true, false, true):
    functionC()
default:
    break
}

Easily generating arrays of data

Using map() on a range makes it easy to generate an array of data.

import Foundation

func randomInt() -> Int { return Int(arc4random()) }

let randomArray = (1...10).map { _ in randomInt() }

Using @autoclosure for cleaner call sites

Using @autoclosure enables the compiler to automatically wrap an argument within a closure, thus allowing for a very clean syntax at call sites.

import UIKit

extension UIView {
    class func animate(withDuration duration: TimeInterval, _ animations: @escaping @autoclosure () -> Void) {
        UIView.animate(withDuration: duration, animations: animations)
    }
}

let view = UIView()

UIView.animate(withDuration: 0.3, view.backgroundColor = .orange)

Observing new and old value with RxSwift

When working with RxSwift, it's very easy to observe both the current and previous value of an observable sequence by simply introducing a shift using skip().

import RxSwift

let values = Observable.of(4, 8, 15, 16, 23, 42)

let newAndOld = Observable.zip(values, values.skip(1)) { (previous: $0, current: $1) }
    .subscribe(onNext: { pair in
        print("current: \(pair.current) - previous: \(pair.previous)")
    })

//current: 8 - previous: 4
//current: 15 - previous: 8
//current: 16 - previous: 15
//current: 23 - previous: 16
//current: 42 - previous: 23

Implicit initialization from literal values

Using protocols such as ExpressibleByStringLiteral, it is possible to provide an init that will be automatically called when a literal value is provided, allowing for a nice and short syntax. This can be very helpful when writing mock or test data.

import Foundation

extension URL: ExpressibleByStringLiteral {
    public init(stringLiteral value: String) {
        self.init(string: value)!
    }
}

let url: URL = "http://www.google.fr"

NSURLConnection.canHandle(URLRequest(url: "http://www.google.fr"))

Achieving systematic validation of data

Through some clever use of Swift private visibility it is possible to define a container that holds any untrusted value (such as a user input) from which the only way to retrieve the value is by making it successfully pass a validation test.

import Foundation

struct Untrusted<T> {
    private(set) var value: T
}

protocol Validator {
    associatedtype T
    static func validation(value: T) -> Bool
}

extension Validator {
    static func validate(untrusted: Untrusted<T>) -> T? {
        if self.validation(value: untrusted.value) {
            return untrusted.value
        } else {
            return nil
        }
    }
}

struct FrenchPhoneNumberValidator: Validator {
    static func validation(value: String) -> Bool {
       return (value.count) == 10 && CharacterSet(charactersIn: value).isSubset(of: CharacterSet.decimalDigits)
    }
}

let validInput = Untrusted(value: "0122334455")
let invalidInput = Untrusted(value: "0123")

FrenchPhoneNumberValidator.validate(untrusted: validInput) // returns "0122334455"
FrenchPhoneNumberValidator.validate(untrusted: invalidInput) // returns nil

Implementing the builder pattern with keypaths

With the addition of keypaths in Swift 4, it is now possible to easily implement the builder pattern, that allows the developer to clearly separate the code that initializes a value from the code that uses it, without the burden of defining a factory method.

import UIKit

protocol With {}

extension With where Self: AnyObject {
    @discardableResult
    func with<T>(_ property: ReferenceWritableKeyPath<Self, T>, setTo value: T) -> Self {
        self[keyPath: property] = value
        return self
    }
}

extension UIView: With {}

let view = UIView()

let label = UILabel()
    .with(\.textColor, setTo: .red)
    .with(\.text, setTo: "Foo")
    .with(\.textAlignment, setTo: .right)
    .with(\.layer.cornerRadius, setTo: 5)

view.addSubview(label)

🚨 The Swift compiler does not perform OS availability checks on properties referenced by keypaths. Any attempt to use a KeyPath for an unavailable property will result in a runtime crash.

I share the credit for this tip with Marion Curtil.

Storing functions rather than values

When a type stores values for the sole purpose of parametrizing its functions, it's possible to store the functions themselves instead of the values, with no discernible difference at the call site.

import Foundation

struct MaxValidator {
    let max: Int
    let strictComparison: Bool
    
    func isValid(_ value: Int) -> Bool {
        return self.strictComparison ? value < self.max : value <= self.max
    }
}

struct MaxValidator2 {
    var isValid: (_ value: Int) -> Bool
    
    init(max: Int, strictComparison: Bool) {
        self.isValid = strictComparison ? { $0 < max } : { $0 <= max }
    }
}

MaxValidator(max: 5, strictComparison: true).isValid(5) // false
MaxValidator2(max: 5, strictComparison: false).isValid(5) // true

Defining operators on function types

Functions are first-class citizen types in Swift, so it is perfectly legal to define operators for them.

import Foundation

let firstRange = { (0...3).contains($0) }
let secondRange = { (5...6).contains($0) }

func ||(_ lhs: @escaping (Int) -> Bool, _ rhs: @escaping (Int) -> Bool) -> (Int) -> Bool {
    return { value in
        return lhs(value) || rhs(value)
    }
}

(firstRange || secondRange)(2) // true
(firstRange || secondRange)(4) // false
(firstRange || secondRange)(6) // true

Typealiases for functions

Typealiases are great for expressing function signatures in a more comprehensible manner, which then enables us to easily define functions that operate on them, resulting in a nice way to write and use some powerful APIs.

import Foundation

typealias RangeSet = (Int) -> Bool

func union(_ left: @escaping RangeSet, _ right: @escaping RangeSet) -> RangeSet {
    return { left($0) || right($0) }
}

let firstRange = { (0...3).contains($0) }
let secondRange = { (5...6).contains($0) }

let unionRange = union(firstRange, secondRange)

unionRange(2) // true
unionRange(4) // false

Encapsulating state within a function

By returning a closure that captures a local variable, it's possible to encapsulate a mutable state within a function.

import Foundation

func counterFactory() -> () -> Int {
    var counter = 0
    
    return {
        counter += 1
        return counter
    }
}

let counter = counterFactory()

counter() // returns 1
counter() // returns 2

Generating all cases for an Enum

⚠️ Since Swift 4.2, allCases can now be synthesized at compile-time by simply conforming to the protocol CaseIterable. The implementation below should no longer be used in production code.

Through some clever leveraging of how enums are stored in memory, it is possible to generate an array that contains all the possible cases of an enum. This can prove particularly useful when writing unit tests that consume random data.

import Foundation

enum MyEnum { case first; case second; case third; case fourth }

protocol EnumCollection: Hashable {
    static var allCases: [Self] { get }
}

extension EnumCollection {
    public static var allCases: [Self] {
        var i = 0
        return Array(AnyIterator {
            let next = withUnsafePointer(to: &i) {
                $0.withMemoryRebound(to: Self.self, capacity: 1) { $0.pointee }
            }
            if next.hashValue != i { return nil }
            i += 1
            return next
        })
    }
}

extension MyEnum: EnumCollection { }

MyEnum.allCases // [.first, .second, .third, .fourth]

Using map on optional values

The if-let syntax is a great way to deal with optional values in a safe manner, but at times it can prove to be just a little bit too cumbersome. In such cases, using the Optional.map() function is a nice way to achieve shorter code while retaining safety and readability.

import UIKit

let date: Date? = Date() // or could be nil, doesn't matter
let formatter = DateFormatter()
let label = UILabel()

if let safeDate = date {
    label.text = formatter.string(from: safeDate)
}

label.text = date.map { return formatter.string(from: $0) }

label.text = date.map(formatter.string(from:)) // even shorter, though less readable

Download Details:

Author: vincent-pradeilles
Source code: https://github.com/vincent-pradeilles/swift-tips

License: MIT license
#swift 

How to Build a Fake News Detector in Python

Fake News Detection in Python

Explore the fake news dataset, perform data analysis such as word clouds and n-grams, and fine-tune the BERT transformer to build a fake news detector in Python using the transformers library.

Fake news is the intentional broadcasting of false or misleading claims as news, where the statements are deliberately deceitful.

Newspapers, tabloids, and magazines have been supplanted by digital news platforms, blogs, social media feeds, and a plethora of mobile news apps. News organizations benefited from the increased use of social media and mobile platforms by providing subscribers with up-to-the-minute information.

Consumers now have instant access to the latest news. These digital media platforms have grown in prominence because they easily connect users with the rest of the world and allow them to discuss, share ideas, and debate topics such as democracy, education, health, research, and history. Fake news items on digital platforms are getting more popular and are used for profit, such as political and financial gain.

How Big Is This Problem?

Because the internet, social media, and digital platforms are widely used, anyone can propagate inaccurate and biased information. It is almost impossible to prevent the spread of fake news. There is a tremendous surge in the distribution of false news, which is not restricted to one sector such as politics but includes sports, health, history, entertainment, and science and research.

The Solution

It is vital to recognize and differentiate between false and accurate news. One method is to have an expert decide and fact-check every piece of information, but this takes time and requires expertise that cannot be shared. Secondly, we can use machine learning and artificial intelligence tools to automate the identification of fake news.

Online news information includes various unstructured-format data (such as documents, videos, and audio), but we will concentrate on text-format news here. With the progress of machine learning and natural language processing, we can now recognize the misleading and false character of an article or statement.

Several studies and experiments are being conducted to detect fake news across all media.

Our main goals for this tutorial are to:

  • Explore and analyze the fake news dataset.
  • Build a classifier that can distinguish fake news as accurately as possible.

Here is the table of contents:

  • Introduction
  • How Big Is This Problem?
  • The Solution
  • Data Exploration
    • Class Distribution
  • Data Cleaning for Analysis
  • Exploratory Data Analysis
    • Single-Word Cloud
    • Most Frequent Bigram (Two-Word Combination)
    • Most Frequent Trigram (Three-Word Combination)
  • Building a Classifier by Fine-Tuning BERT
    • Data Preparation
    • Tokenizing the Dataset
    • Loading and Fine-Tuning the Model
    • Model Evaluation
  • Appendix: Creating a Submission File for Kaggle
  • Conclusion

Data Exploration

In this work, we use the fake news dataset from Kaggle to classify unreliable news articles as fake news. We have a complete training dataset containing the following features:

  • id: unique id for a news article
  • title: the title of a news article
  • author: the author of the news article
  • text: the text of the article; could be incomplete
  • label: a label that marks the article as potentially unreliable, denoted by 1 (unreliable or fake) or 0 (reliable).

It is a binary classification problem in which we must predict whether a given news story is reliable or not.

If you have a Kaggle account, you can simply download the dataset from the website there and extract the ZIP file.

I also uploaded the dataset to Google Drive, and you can get it here or use the gdown library to download it automatically in Google Colab or Jupyter notebooks:

$ pip install gdown
# download from Google Drive
$ gdown "https://drive.google.com/uc?id=178f_VkNxccNidap-5-uffXUW475pAuPy&confirm=t"
Downloading...
From: https://drive.google.com/uc?id=178f_VkNxccNidap-5-uffXUW475pAuPy&confirm=t
To: /content/fake-news.zip
100% 48.7M/48.7M [00:00<00:00, 74.6MB/s]

Unzipping the files:

$ unzip fake-news.zip

Three files will appear in the current working directory: train.csv, test.csv, and submit.csv; we will use train.csv for most of the tutorial.

Installing the required dependencies:

$ pip install transformers nltk pandas numpy matplotlib seaborn wordcloud

Note: If you're in a local environment, make sure you install PyTorch for GPU; head to this page for a proper installation.
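
A quick, minimal check that the GPU build is actually being picked up (purely a sanity check, not part of the tutorial's pipeline):

import torch

# True if PyTorch was built with CUDA support and a GPU is visible
print(torch.cuda.is_available())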

Let's import the essential libraries for the analysis:

import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns

The NLTK corpora and modules must be installed using the standard NLTK downloader:

import nltk
nltk.download('stopwords')
nltk.download('wordnet')

The fake news dataset comprises original and fictitious article titles and texts from various authors. Let's import our dataset:

# load the dataset
news_d = pd.read_csv("train.csv")
print("Shape of News data:", news_d.shape)
print("News data columns", news_d.columns)

Output:

 Shape of News data: (20800, 5)
 News data columns Index(['id', 'title', 'author', 'text', 'label'], dtype='object')

Here is how the dataset looks:

# by using df.head(), we can immediately familiarize ourselves with the dataset. 
news_d.head()

Output:

id	title	author	text	label
0	0	House Dem Aide: We Didn’t Even See Comey’s Let...	Darrell Lucus	House Dem Aide: We Didn’t Even See Comey’s Let...	1
1	1	FLYNN: Hillary Clinton, Big Woman on Campus - ...	Daniel J. Flynn	Ever get the feeling your life circles the rou...	0
2	2	Why the Truth Might Get You Fired	Consortiumnews.com	Why the Truth Might Get You Fired October 29, ...	1
3	3	15 Civilians Killed In Single US Airstrike Hav...	Jessica Purkiss	Videos 15 Civilians Killed In Single US Airstr...	1
4	4	Iranian woman jailed for fictional unpublished...	Howard Portnoy	Print \nAn Iranian woman has been sentenced to...	1

We have 20,800 rows and five columns. Let's see some statistics of the text column:

# Text word statistics: min, mean, max, and interquartile range

txt_length = news_d.text.str.split().str.len()
txt_length.describe()

Output:

count    20761.000000
mean       760.308126
std        869.525988
min          0.000000
25%        269.000000
50%        556.000000
75%       1052.000000
max      24234.000000
Name: text, dtype: float64

Statistics for the title column:

#Title statistics 

title_length = news_d.title.str.split().str.len()
title_length.describe()

Output:

count    20242.000000
mean        12.420709
std          4.098735
min          1.000000
25%         10.000000
50%         13.000000
75%         15.000000
max         72.000000
Name: title, dtype: float64

The statistics for the training and test sets are as follows:

  • The text attribute has a higher word count, with an average of 760 words, and 75% of the articles have more than 1000 words.
  • The title attribute is a short statement, with an average of 12 words, and 75% of them are around 15 words.

Our experiment will use both the text and the title together.

Class Distribution

Count plots for both labels:

sns.countplot(x="label", data=news_d);
print("1: Unreliable")
print("0: Reliable")
print("Distribution of labels:")
print(news_d.label.value_counts());

Output:

1: Unreliable
0: Reliable
Distribution of labels:
1    10413
0    10387
Name: label, dtype: int64

Distribution of labels

print(round(news_d.label.value_counts(normalize=True),2)*100);

Output:

1    50.0
0    50.0
Name: label, dtype: float64

The number of unreliable articles (fake, or 1) is 10413, while the number of trustworthy articles (reliable, or 0) is 10387. Almost 50% of the articles are fake, so the accuracy metric will measure how well our model performs when building a classifier.

Data Cleaning for Analysis

In this section, we will clean our dataset to do some analysis:

  • Drop unused rows and columns.
  • Perform null value imputation.
  • Remove special characters.
  • Remove stop words.
# Constants that are used to sanitize the datasets 

column_n = ['id', 'title', 'author', 'text', 'label']
remove_c = ['id','author']
categorical_features = []
target_col = ['label']
text_f = ['title', 'text']
# Clean Datasets
import nltk
from nltk.corpus import stopwords
import re
from nltk.stem.porter import PorterStemmer
from collections import Counter

ps = PorterStemmer()
wnl = nltk.stem.WordNetLemmatizer()

stop_words = stopwords.words('english')
stopwords_dict = Counter(stop_words)

# Remove unused columns
def remove_unused_c(df,column_n=remove_c):
    df = df.drop(column_n,axis=1)
    return df

# Impute null values with None
def null_process(feature_df):
    for col in text_f:
        feature_df.loc[feature_df[col].isnull(), col] = "None"
    return feature_df

def clean_dataset(df):
    # remove unused column
    df = remove_unused_c(df)
    #impute null values
    df = null_process(df)
    return df

# Cleaning text from unused characters
def clean_text(text):
    text = str(text).replace(r'http[\w:/\.]+', ' ')  # removing urls
    text = str(text).replace(r'[^\.\w\s]', ' ')  # remove everything but characters and punctuation
    text = str(text).replace('[^a-zA-Z]', ' ')
    text = str(text).replace(r'\s\s+', ' ')
    text = text.lower().strip()
    #text = ' '.join(text)    
    return text

## NLTK preprocessing includes:
# stop words, stemming, and lemmatization
# for our project, we use only stop word removal
def nltk_preprocess(text):
    text = clean_text(text)
    wordlist = re.sub(r'[^\w\s]', '', text).split()
    #text = ' '.join([word for word in wordlist if word not in stopwords_dict])
    #text = [ps.stem(word) for word in wordlist if not word in stopwords_dict]
    text = ' '.join([wnl.lemmatize(word) for word in wordlist if word not in stopwords_dict])
    return  text

In the code block above:

  • We imported NLTK, which is a famous platform for developing Python applications that interact with human language. Next, we imported re for regular expressions.
  • We imported stop words from nltk.corpus. When working with words, particularly when considering semantics, we sometimes need to eliminate common words that do not add any significant meaning to a statement, such as "but", "can", "we", etc.
  • PorterStemmer is used to perform stemming with NLTK. Stemmers strip words of their morphological affixes, leaving only the word stem.
  • We imported WordNetLemmatizer() from the NLTK library for lemmatization. Lemmatization is much more effective than stemming. It goes beyond word reduction and evaluates a language's entire lexicon in order to apply morphological analysis to words, aiming to remove inflectional endings and return the base or dictionary form of a word, known as the lemma (see the short sketch after this list).
  • stopwords.words('english') lets us see the list of all the English stop words supported by NLTK.
  • The remove_unused_c() function is used to remove the unused columns.
  • We impute null values with None using the null_process() function.
  • Inside the clean_dataset() function, we call the remove_unused_c() and null_process() functions. This function is responsible for cleaning the data.
  • To clean text of unused characters, we created the clean_text() function.
  • For preprocessing, we will only use stop word removal. We created the nltk_preprocess() function for that purpose.
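
To make the stemming/lemmatization contrast above concrete, here is a minimal sketch (the sample word is purely illustrative):

import nltk
from nltk.stem.porter import PorterStemmer

ps = PorterStemmer()
wnl = nltk.stem.WordNetLemmatizer()

# stemming strips affixes mechanically, which can leave non-words
ps.stem("studies")        # 'studi'
# lemmatization returns the dictionary form (the lemma)
wnl.lemmatize("studies")  # 'study'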

Preprocessing the text and title:

# Perform data cleaning on train and test dataset by calling clean_dataset function
df = clean_dataset(news_d)
# apply preprocessing on text through apply method by calling the function nltk_preprocess
df["text"] = df.text.apply(nltk_preprocess)
# apply preprocessing on title through apply method by calling the function nltk_preprocess
df["title"] = df.title.apply(nltk_preprocess)
# Dataset after cleaning and preprocessing step
df.head()

Output:

title	text	label
0	house dem aide didnt even see comeys letter ja...	house dem aide didnt even see comeys letter ja...	1
1	flynn hillary clinton big woman campus breitbart	ever get feeling life circle roundabout rather...	0
2	truth might get fired	truth might get fired october 29 2016 tension ...	1
3	15 civilian killed single u airstrike identified	video 15 civilian killed single u airstrike id...	1
4	iranian woman jailed fictional unpublished sto...	print iranian woman sentenced six year prison ...	1

Exploratory Data Analysis

In this section, we will perform:

  • Univariate analysis: a statistical analysis of the text. We will use a word cloud for that purpose. A word cloud is a visualization approach for text data in which the most common term is presented in the most considerable font size.
  • Bivariate analysis: bigrams and trigrams will be used here. According to Wikipedia: "an n-gram is a contiguous sequence of n items from a given sample of text or speech. According to the application, the items can be phonemes, syllables, letters, words, or base pairs. The n-grams are typically collected from a text or speech corpus".

Single-Word Cloud

The most frequent words appear in bold and a bigger size in a word cloud. This section will create a word cloud for all the words in the dataset.

The WordCloud() function from the wordcloud library will be used, and generate() will be used to generate the word cloud image:

from wordcloud import WordCloud, STOPWORDS
import matplotlib.pyplot as plt

# initialize the word cloud
wordcloud = WordCloud( background_color='black', width=800, height=600)
# generate the word cloud by passing the corpus
text_cloud = wordcloud.generate(' '.join(df['text']))
# plotting the word cloud
plt.figure(figsize=(20,30))
plt.imshow(text_cloud)
plt.axis('off')
plt.show()

Output:

Word cloud for all the fake news data

Word cloud for reliable news only:

true_n = ' '.join(df[df['label']==0]['text']) 
wc = wordcloud.generate(true_n)
plt.figure(figsize=(20,30))
plt.imshow(wc)
plt.axis('off')
plt.show()

Output:

Word cloud for reliable news

Word cloud for fake news only:

fake_n = ' '.join(df[df['label']==1]['text'])
wc= wordcloud.generate(fake_n)
plt.figure(figsize=(20,30))
plt.imshow(wc)
plt.axis('off')
plt.show()

Output:

Word cloud for fake news

Most Frequent Bigram (Two-Word Combination)

An n-gram is a sequence of letters or words. A character unigram is made up of a single character, while a bigram comprises a series of two characters. Similarly, word n-grams are made up of a series of n words. The word "united" is a 1-gram (unigram), the combination "united states" is a 2-gram (bigram), and "new york city" is a 3-gram.
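
As a quick illustration of word n-grams, here is a minimal sketch using nltk.ngrams, the same helper the plotting utility below relies on (the sentence is made up):

import nltk

# bigrams (n=2) from a toy sentence
list(nltk.ngrams("new york city is big".split(), 2))
# [('new', 'york'), ('york', 'city'), ('city', 'is'), ('is', 'big')]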

Let's plot the most common bigrams in the reliable news:

def plot_top_ngrams(corpus, title, ylabel, xlabel="Number of Occurrences", n=2):
  """Utility function to plot top n-grams"""
  true_b = (pd.Series(nltk.ngrams(corpus.split(), n)).value_counts())[:20]
  true_b.sort_values().plot.barh(color='blue', width=.9, figsize=(12, 8))
  plt.title(title)
  plt.ylabel(ylabel)
  plt.xlabel(xlabel)
  plt.show()
plot_top_ngrams(true_n, 'Top 20 Frequently Occurring True news Bigrams', "Bigram", n=2)

Top 20 true news bigrams

The most common bigrams in the fake news:

plot_top_ngrams(fake_n, 'Top 20 Frequently Occurring Fake news Bigrams', "Bigram", n=2)

Top 20 fake news bigrams

Most Frequent Trigram (Three-Word Combination)

The most common trigrams in the reliable news:

plot_top_ngrams(true_n, 'Top 20 Frequently Occurring True news Trigrams', "Trigrams", n=3)

Top 20 true news trigrams

Now for the fake news:

plot_top_ngrams(fake_n, 'Top 20 Frequently Occurring Fake news Trigrams', "Trigrams", n=3)

Top 20 fake news trigrams

The plots above give us some idea of what both classes look like. In the next section, we will use the transformers library to build the fake news detector.

Building a Classifier by Fine-Tuning BERT

This section borrows code extensively from the fine-tuning BERT tutorial to make a fake news classifier using the transformers library. So, for more detailed information, you can head to the original tutorial.

If you haven't installed transformers, you have to:

$ pip install transformers

Let's import the necessary libraries:

import torch
from transformers.file_utils import is_tf_available, is_torch_available, is_torch_tpu_available
from transformers import BertTokenizerFast, BertForSequenceClassification
from transformers import Trainer, TrainingArguments
import numpy as np
from sklearn.model_selection import train_test_split

import random

We want our results to be reproducible even if we restart our environment:

def set_seed(seed: int):
    """
    Helper function for reproducible behavior to set the seed in ``random``, ``numpy``, ``torch`` and/or ``tf`` (if
    installed).

    Args:
        seed (:obj:`int`): The seed to set.
    """
    random.seed(seed)
    np.random.seed(seed)
    if is_torch_available():
        torch.manual_seed(seed)
        torch.cuda.manual_seed_all(seed)
        # ^^ safe to call this function even if cuda is not available
    if is_tf_available():
        import tensorflow as tf

        tf.random.set_seed(seed)

set_seed(1)

The model we're going to use is bert-base-uncased:

# the model we gonna train, base uncased BERT
# check text classification models here: https://huggingface.co/models?filter=text-classification
model_name = "bert-base-uncased"
# max sequence length for each document/sentence sample
max_length = 512

Loading the tokenizer:

# load the tokenizer
tokenizer = BertTokenizerFast.from_pretrained(model_name, do_lower_case=True)

Data Preparation

Let's now clean the NaN values from the text, author, and title columns:

news_df = news_d[news_d['text'].notna()]
news_df = news_df[news_df["author"].notna()]
news_df = news_df[news_df["title"].notna()]

Next, let's create a function that takes the dataset as a Pandas dataframe and returns the train/validation splits of texts and labels as lists:

def prepare_data(df, test_size=0.2, include_title=True, include_author=True):
  texts = []
  labels = []
  for i in range(len(df)):
    text = df["text"].iloc[i]
    label = df["label"].iloc[i]
    if include_title:
      text = df["title"].iloc[i] + " - " + text
    if include_author:
      text = df["author"].iloc[i] + " : " + text
    if text and label in [0, 1]:
      texts.append(text)
      labels.append(label)
  return train_test_split(texts, labels, test_size=test_size)

train_texts, valid_texts, train_labels, valid_labels = prepare_data(news_df)

The function above takes the dataset as a dataframe and returns it as lists split into training and validation sets. Setting include_title to True means we add the title column to the text we're going to use for training, and setting include_author to True means we also add the author to the text.

Let's make sure the labels and texts have the same length:

print(len(train_texts), len(train_labels))
print(len(valid_texts), len(valid_labels))

Output:

14628 14628
3657 3657

Tokenizing the Dataset

Let's use the BERT tokenizer to tokenize our dataset:

# tokenize the dataset, truncate when passed `max_length`, 
# and pad with 0's when less than `max_length`
train_encodings = tokenizer(train_texts, truncation=True, padding=True, max_length=max_length)
valid_encodings = tokenizer(valid_texts, truncation=True, padding=True, max_length=max_length)

Converting the encodings into a PyTorch dataset:

class NewsGroupsDataset(torch.utils.data.Dataset):
    def __init__(self, encodings, labels):
        self.encodings = encodings
        self.labels = labels

    def __getitem__(self, idx):
        item = {k: torch.tensor(v[idx]) for k, v in self.encodings.items()}
        item["labels"] = torch.tensor([self.labels[idx]])
        return item

    def __len__(self):
        return len(self.labels)

# convert our tokenized data into a torch Dataset
train_dataset = NewsGroupsDataset(train_encodings, train_labels)
valid_dataset = NewsGroupsDataset(valid_encodings, valid_labels)

Loading and Fine-Tuning the Model

We will use BertForSequenceClassification to load our BERT transformer model:

# load the model
model = BertForSequenceClassification.from_pretrained(model_name, num_labels=2)

We set num_labels to 2 since it's a binary classification. The function below is a callback to compute the accuracy at each validation step:

from sklearn.metrics import accuracy_score

def compute_metrics(pred):
  labels = pred.label_ids
  preds = pred.predictions.argmax(-1)
  # calculate accuracy using sklearn's function
  acc = accuracy_score(labels, preds)
  return {
      'accuracy': acc,
  }

Let's initialize the training parameters:

training_args = TrainingArguments(
    output_dir='./results',          # output directory
    num_train_epochs=1,              # total number of training epochs
    per_device_train_batch_size=10,  # batch size per device during training
    per_device_eval_batch_size=20,   # batch size for evaluation
    warmup_steps=100,                # number of warmup steps for learning rate scheduler
    logging_dir='./logs',            # directory for storing logs
    load_best_model_at_end=True,     # load the best model when finished training (default metric is loss)
    # but you can specify `metric_for_best_model` argument to change to accuracy or other metric
    logging_steps=200,               # log & save weights each logging_steps
    save_steps=200,
    evaluation_strategy="steps",     # evaluate each `logging_steps`
)

I've set per_device_train_batch_size to 10, but you should set it as high as your GPU can fit. Setting logging_steps and save_steps to 200 means we'll perform an evaluation and save the model weights every 200 training steps.

You can check this page for more detailed information about the available training parameters.

Let's instantiate the trainer:

trainer = Trainer(
    model=model,                         # the instantiated Transformers model to be trained
    args=training_args,                  # training arguments, defined above
    train_dataset=train_dataset,         # training dataset
    eval_dataset=valid_dataset,          # evaluation dataset
    compute_metrics=compute_metrics,     # the callback that computes metrics of interest
)

Training the model:

# train the model
trainer.train()

Training takes a few hours to finish, depending on your GPU. If you're on the free version of Colab, it should take about an hour with an NVIDIA Tesla K80. Here is the output:

***** Running training *****
  Num examples = 14628
  Num Epochs = 1
  Instantaneous batch size per device = 10
  Total train batch size (w. parallel, distributed & accumulation) = 10
  Gradient Accumulation steps = 1
  Total optimization steps = 1463
 [1463/1463 41:07, Epoch 1/1]
Step	Training Loss	Validation Loss	Accuracy
200		0.250800		0.100533		0.983867
400		0.027600		0.043009		0.993437
600		0.023400		0.017812		0.997539
800		0.014900		0.030269		0.994258
1000	0.022400		0.012961		0.998086
1200	0.009800		0.010561		0.998633
1400	0.007700		0.010300		0.998633
***** Running Evaluation *****
  Num examples = 3657
  Batch size = 20
Saving model checkpoint to ./results/checkpoint-200
Configuration saved in ./results/checkpoint-200/config.json
Model weights saved in ./results/checkpoint-200/pytorch_model.bin
<SNIPPED>
***** Running Evaluation *****
  Num examples = 3657
  Batch size = 20
Saving model checkpoint to ./results/checkpoint-1400
Configuration saved in ./results/checkpoint-1400/config.json
Model weights saved in ./results/checkpoint-1400/pytorch_model.bin

Training completed. Do not forget to share your model on huggingface.co/models =)

Loading best model from ./results/checkpoint-1400 (score: 0.010299865156412125).
TrainOutput(global_step=1463, training_loss=0.04888018785440506, metrics={'train_runtime': 2469.1722, 'train_samples_per_second': 5.924, 'train_steps_per_second': 0.593, 'total_flos': 3848788517806080.0, 'train_loss': 0.04888018785440506, 'epoch': 1.0})

Model Evaluation

Since load_best_model_at_end is set to True, the best weights will be loaded when training completes. Let's evaluate it with our validation set:

# evaluate the current model after training
trainer.evaluate()

Output:

***** Running Evaluation *****
  Num examples = 3657
  Batch size = 20
 [183/183 02:11]
{'epoch': 1.0,
 'eval_accuracy': 0.998632759092152,
 'eval_loss': 0.010299865156412125,
 'eval_runtime': 132.0374,
 'eval_samples_per_second': 27.697,
 'eval_steps_per_second': 1.386}

Saving the model and the tokenizer:

# saving the fine tuned model & tokenizer
model_path = "fake-news-bert-base-uncased"
model.save_pretrained(model_path)
tokenizer.save_pretrained(model_path)

A new folder containing the model configuration and weights will appear after running the cell above. If you want to perform a prediction, simply use the from_pretrained() method we used when loading the model, and you're good to go.
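
For example, reloading the fine-tuned model in a fresh session could look like this (a minimal sketch, assuming the model_path folder saved above and a CUDA device, which the inference function below also expects):

from transformers import BertTokenizerFast, BertForSequenceClassification

model_path = "fake-news-bert-base-uncased"
# load the fine-tuned weights and the tokenizer back from the saved folder
model = BertForSequenceClassification.from_pretrained(model_path).to("cuda")
tokenizer = BertTokenizerFast.from_pretrained(model_path)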

Next, let's make a function that accepts the article text as an argument and returns whether it's fake or not:

def get_prediction(text, convert_to_label=False):
    # prepare our text into tokenized sequence
    inputs = tokenizer(text, padding=True, truncation=True, max_length=max_length, return_tensors="pt").to("cuda")
    # perform inference to our model
    outputs = model(**inputs)
    # get output probabilities by doing softmax
    probs = outputs[0].softmax(1)
    # executing argmax function to get the candidate label
    d = {
        0: "reliable",
        1: "fake"
    }
    if convert_to_label:
      return d[int(probs.argmax())]
    else:
      return int(probs.argmax())

I took an example from test.csv that the model never saw, to perform inference on; I fact-checked it, and it's a real article from The New York Times:

real_news = """
Tim Tebow Will Attempt Another Comeback, This Time in Baseball - The New York Times",Daniel Victor,"If at first you don’t succeed, try a different sport. Tim Tebow, who was a Heisman   quarterback at the University of Florida but was unable to hold an N. F. L. job, is pursuing a career in Major League Baseball. <SNIPPED>
"""

The original text is in the Colab environment if you want to copy it, as it's a complete article. Let's pass it to the model and see the results:

get_prediction(real_news, convert_to_label=True)

Output:

reliable

Appendix: Creating a Submission File for Kaggle

In this section, we will run predictions on all the articles in test.csv to create a submission file and see our accuracy on the test set in the Kaggle competition:

# read the test set
test_df = pd.read_csv("test.csv")
# make a copy of the testing set
new_df = test_df.copy()
# add a new column that contains the author, title and article content
new_df["new_text"] = new_df["author"].astype(str) + " : " + new_df["title"].astype(str) + " - " + new_df["text"].astype(str)
# get the prediction of all the test set
new_df["label"] = new_df["new_text"].apply(get_prediction)
# make the submission file
final_df = new_df[["id", "label"]]
final_df.to_csv("submit_final.csv", index=False)

After concatenating the author, title, and article text, we pass the get_prediction() function over the new column to fill the label column, then we use the to_csv() method to create the submission file for Kaggle. Here is my submission score:

Submission score

We got 99.78% and 100% accuracy on the private and public leaderboards, respectively. That's awesome!

Conclusion

Alright, we're done with this tutorial. You can check this page to see the various training parameters you can tweak.

If you have a custom fake news dataset to fine-tune on, you simply have to pass a list of samples to the tokenizer as we did; you won't change any other code after that.

Check the full code here, or the Colab environment here.

CODE VN

CODE VN

1646025910

Xây Dựng Một Máy Phát Hiện Tin Tức Giả Mạo Bằng Python

Khám phá tập dữ liệu tin tức giả, thực hiện phân tích dữ liệu chẳng hạn như đám mây từ và ngram, đồng thời tinh chỉnh máy biến áp BERT để xây dựng bộ phát hiện tin tức giả bằng Python bằng cách sử dụng thư viện máy biến áp.

Tin tức giả là việc cố ý phát đi các tuyên bố sai sự thật hoặc gây hiểu lầm như một tin tức, trong đó các tuyên bố là cố ý lừa dối.

Báo chí, báo lá cải và tạp chí đã được thay thế bởi các nền tảng tin tức kỹ thuật số, blog, nguồn cấp dữ liệu truyền thông xã hội và rất nhiều ứng dụng tin tức di động. Các tổ chức tin tức được hưởng lợi từ việc tăng cường sử dụng mạng xã hội và các nền tảng di động bằng cách cung cấp cho người đăng ký thông tin cập nhật từng phút.

Người tiêu dùng hiện có thể truy cập ngay vào những tin tức mới nhất. Các nền tảng truyền thông kỹ thuật số này ngày càng nổi tiếng do khả năng kết nối dễ dàng với phần còn lại của thế giới và cho phép người dùng thảo luận, chia sẻ ý tưởng và tranh luận về các chủ đề như dân chủ, giáo dục, y tế, nghiên cứu và lịch sử. Các mục tin tức giả mạo trên các nền tảng kỹ thuật số ngày càng phổ biến và được sử dụng để thu lợi nhuận, chẳng hạn như lợi ích chính trị và tài chính.

Vấn đề này lớn đến mức nào?

Bởi vì Internet, phương tiện truyền thông xã hội và các nền tảng kỹ thuật số được sử dụng rộng rãi, bất kỳ ai cũng có thể tuyên truyền thông tin không chính xác và thiên vị. Gần như không thể ngăn chặn sự lan truyền của tin tức giả mạo. Có một sự gia tăng đáng kể trong việc phát tán tin tức sai lệch, không chỉ giới hạn trong một lĩnh vực như chính trị mà bao gồm thể thao, sức khỏe, lịch sử, giải trí, khoa học và nghiên cứu.

Giải pháp

Điều quan trọng là phải nhận biết và phân biệt giữa tin tức sai và tin tức chính xác. Một phương pháp là nhờ một chuyên gia quyết định và kiểm tra thực tế mọi thông tin, nhưng điều này cần thời gian và cần chuyên môn không thể chia sẻ được. Thứ hai, chúng ta có thể sử dụng các công cụ học máy và trí tuệ nhân tạo để tự động hóa việc xác định tin tức giả mạo.

Thông tin tin tức trực tuyến bao gồm nhiều dữ liệu định dạng phi cấu trúc khác nhau (chẳng hạn như tài liệu, video và âm thanh), nhưng chúng tôi sẽ tập trung vào tin tức định dạng văn bản ở đây. Với tiến bộ của học máyxử lý ngôn ngữ tự nhiên , giờ đây chúng ta có thể nhận ra đặc điểm gây hiểu lầm và sai của một bài báo hoặc câu lệnh.

Một số nghiên cứu và thử nghiệm đang được tiến hành để phát hiện tin tức giả trên tất cả các phương tiện.

Mục tiêu chính của chúng tôi trong hướng dẫn này là:

  • Khám phá và phân tích tập dữ liệu Tin tức giả mạo.
  • Xây dựng một công cụ phân loại có thể phân biệt tin tức Giả với độ chính xác cao nhất có thể.

Đây là bảng nội dung:

  • Giới thiệu
  • Vấn đề này lớn đến mức nào?
  • Giải pháp
  • Khám phá dữ liệu
    • Phân phối các lớp học
  • Làm sạch dữ liệu để phân tích
  • Phân tích dữ liệu khám phá
    • Đám mây một từ
    • Bigram thường xuyên nhất (Kết hợp hai từ)
    • Hình bát quái thường gặp nhất (Kết hợp ba từ)
  • Xây dựng Bộ phân loại bằng cách tinh chỉnh BERT
    • Chuẩn bị dữ liệu
    • Mã hóa tập dữ liệu
    • Tải và tinh chỉnh mô hình
    • Đánh giá mô hình
  • Phụ lục: Tạo tệp đệ trình cho Kaggle
  • Phần kết luận

Khám phá dữ liệu

Trong công việc này, chúng tôi đã sử dụng tập dữ liệu tin tức giả từ Kaggle để phân loại các bài báo không đáng tin cậy là tin giả. Chúng tôi có một tập dữ liệu đào tạo hoàn chỉnh chứa các đặc điểm sau:

  • id: id duy nhất cho một bài báo
  • title: tiêu đề của một bài báo
  • author: tác giả của bài báo
  • text: văn bản của bài báo; có thể không đầy đủ
  • label: nhãn đánh dấu bài viết có khả năng không đáng tin cậy được ký hiệu bằng 1 (không đáng tin cậy hoặc giả mạo) hoặc 0 (đáng tin cậy).

Đó là một bài toán phân loại nhị phân, trong đó chúng ta phải dự đoán xem một câu chuyện tin tức cụ thể có đáng tin cậy hay không.

Nếu bạn có tài khoản Kaggle, bạn có thể chỉ cần tải xuống bộ dữ liệu từ trang web ở đó và giải nén tệp ZIP.

Tôi cũng đã tải tập dữ liệu lên Google Drive và bạn có thể tải tập dữ liệu đó tại đây hoặc sử dụng gdownthư viện để tự động tải xuống tập dữ liệu trong sổ ghi chép Google Colab hoặc Jupyter:

$ pip install gdown
# download from Google Drive
$ gdown "https://drive.google.com/uc?id=178f_VkNxccNidap-5-uffXUW475pAuPy&confirm=t"
Downloading...
From: https://drive.google.com/uc?id=178f_VkNxccNidap-5-uffXUW475pAuPy&confirm=t
To: /content/fake-news.zip
100% 48.7M/48.7M [00:00<00:00, 74.6MB/s]

Giải nén các tệp:

$ unzip fake-news.zip

Ba tệp sẽ xuất hiện trong thư mục làm việc hiện tại:, và train.csv, chúng tôi sẽ sử dụng trong hầu hết các hướng dẫn.test.csvsubmit.csvtrain.csv

Cài đặt các phụ thuộc bắt buộc:

$ pip install transformers nltk pandas numpy matplotlib seaborn wordcloud

Lưu ý: Nếu bạn đang ở trong môi trường cục bộ, hãy đảm bảo rằng bạn cài đặt PyTorch cho GPU, hãy truy cập trang này để cài đặt đúng cách.

Hãy nhập các thư viện cần thiết để phân tích:

import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns

Kho tập tin NLTK và mô-đun phải được cài đặt bằng trình tải xuống NLTK tiêu chuẩn:

import nltk
nltk.download('stopwords')
nltk.download('wordnet')

Tập dữ liệu tin tức giả bao gồm các tiêu đề và văn bản bài báo gốc và hư cấu của nhiều tác giả khác nhau. Hãy nhập tập dữ liệu của chúng tôi:

# load the dataset
news_d = pd.read_csv("train.csv")
print("Shape of News data:", news_d.shape)
print("News data columns", news_d.columns)

Đầu ra:

 Shape of News data: (20800, 5)
 News data columns Index(['id', 'title', 'author', 'text', 'label'], dtype='object')

Đây là giao diện của tập dữ liệu:

# by using df.head(), we can immediately familiarize ourselves with the dataset. 
news_d.head()

Đầu ra:

id	title	author	text	label
0	0	House Dem Aide: We Didn’t Even See Comey’s Let...	Darrell Lucus	House Dem Aide: We Didn’t Even See Comey’s Let...	1
1	1	FLYNN: Hillary Clinton, Big Woman on Campus - ...	Daniel J. Flynn	Ever get the feeling your life circles the rou...	0
2	2	Why the Truth Might Get You Fired	Consortiumnews.com	Why the Truth Might Get You Fired October 29, ...	1
3	3	15 Civilians Killed In Single US Airstrike Hav...	Jessica Purkiss	Videos 15 Civilians Killed In Single US Airstr...	1
4	4	Iranian woman jailed for fictional unpublished...	Howard Portnoy	Print \nAn Iranian woman has been sentenced to...	1

Chúng tôi có 20.800 hàng, có năm cột. Hãy cùng xem một số thống kê của chuyên textmục:

#Text Word startistics: min.mean, max and interquartile range

txt_length = news_d.text.str.split().str.len()
txt_length.describe()

Đầu ra:

count    20761.000000
mean       760.308126
std        869.525988
min          0.000000
25%        269.000000
50%        556.000000
75%       1052.000000
max      24234.000000
Name: text, dtype: float64

Số liệu thống kê cho titlecột:

#Title statistics 

title_length = news_d.title.str.split().str.len()
title_length.describe()

Đầu ra:

count    20242.000000
mean        12.420709
std          4.098735
min          1.000000
25%         10.000000
50%         13.000000
75%         15.000000
max         72.000000
Name: title, dtype: float64

Số liệu thống kê cho các tập huấn luyện và kiểm tra như sau:

  • Thuộc texttính có số từ cao hơn với trung bình 760 từ và 75% có hơn 1000 từ.
  • Thuộc titletính là một câu lệnh ngắn với trung bình 12 từ và 75% trong số đó là khoảng 15 từ.

Thử nghiệm của chúng tôi sẽ kết hợp cả văn bản và tiêu đề.

Phân phối các lớp học

Đếm các ô cho cả hai nhãn:

sns.countplot(x="label", data=news_d);
print("1: Unreliable")
print("0: Reliable")
print("Distribution of labels:")
print(news_d.label.value_counts());

Đầu ra:

1: Unreliable
0: Reliable
Distribution of labels:
1    10413
0    10387
Name: label, dtype: int64

Phân phối nhãn

print(round(news_d.label.value_counts(normalize=True),2)*100);

Đầu ra:

1    50.0
0    50.0
Name: label, dtype: float64

Số lượng bài báo không đáng tin cậy (giả mạo hoặc 1) là 10413, trong khi số bài báo đáng tin cậy (đáng tin cậy hoặc 0) là 10387. Gần 50% số bài báo là giả mạo. Do đó, chỉ số độ chính xác sẽ đo lường mức độ hoạt động của mô hình của chúng tôi khi xây dựng bộ phân loại.

Làm sạch dữ liệu để phân tích

Trong phần này, chúng tôi sẽ làm sạch tập dữ liệu của mình để thực hiện một số phân tích:

  • Bỏ các hàng và cột không sử dụng.
  • Thực hiện gán giá trị null.
  • Loại bỏ các ký tự đặc biệt.
  • Loại bỏ các từ dừng.
# Constants that are used to sanitize the datasets 

column_n = ['id', 'title', 'author', 'text', 'label']
remove_c = ['id','author']
categorical_features = []
target_col = ['label']
text_f = ['title', 'text']
# Clean Datasets
import nltk
from nltk.corpus import stopwords
import re
from nltk.stem.porter import PorterStemmer
from collections import Counter

ps = PorterStemmer()
wnl = nltk.stem.WordNetLemmatizer()

stop_words = stopwords.words('english')
stopwords_dict = Counter(stop_words)

# Removed unused clumns
def remove_unused_c(df,column_n=remove_c):
    df = df.drop(column_n,axis=1)
    return df

# Impute null values with None
def null_process(feature_df):
    for col in text_f:
        feature_df.loc[feature_df[col].isnull(), col] = "None"
    return feature_df

def clean_dataset(df):
    # remove unused column
    df = remove_unused_c(df)
    #impute null values
    df = null_process(df)
    return df

# Cleaning text from unused characters
def clean_text(text):
    text = str(text).replace(r'http[\w:/\.]+', ' ')  # removing urls
    text = str(text).replace(r'[^\.\w\s]', ' ')  # remove everything but characters and punctuation
    text = str(text).replace('[^a-zA-Z]', ' ')
    text = str(text).replace(r'\s\s+', ' ')
    text = text.lower().strip()
    #text = ' '.join(text)    
    return text

## Nltk Preprocessing include:
# Stop words, Stemming and Lemmetization
# For our project we use only Stop word removal
def nltk_preprocess(text):
    text = clean_text(text)
    wordlist = re.sub(r'[^\w\s]', '', text).split()
    #text = ' '.join([word for word in wordlist if word not in stopwords_dict])
    #text = [ps.stem(word) for word in wordlist if not word in stopwords_dict]
    text = ' '.join([wnl.lemmatize(word) for word in wordlist if word not in stopwords_dict])
    return  text

Trong khối mã trên:

  • Chúng tôi đã nhập NLTK, đây là một nền tảng nổi tiếng để phát triển các ứng dụng Python tương tác với ngôn ngữ của con người. Tiếp theo, chúng tôi nhập recho regex.
  • Chúng tôi nhập các từ dừng từ nltk.corpus. Khi làm việc với các từ, đặc biệt là khi xem xét ngữ nghĩa, đôi khi chúng ta cần loại bỏ các từ phổ biến không bổ sung bất kỳ ý nghĩa quan trọng nào cho một câu lệnh, chẳng hạn như "but",, v.v."can""we"
  • PorterStemmerđược sử dụng để thực hiện các từ gốc với NLTK. Các gốc từ loại bỏ các phụ tố hình thái của các từ, chỉ để lại phần gốc của từ.
  • Chúng tôi nhập WordNetLemmatizer()từ thư viện NLTK để lemmatization. Lemmatization hiệu quả hơn nhiều so với việc chiết cành . Nó vượt ra ngoài việc rút gọn từ và đánh giá toàn bộ từ vựng của một ngôn ngữ để áp dụng phân tích hình thái học cho các từ, với mục tiêu chỉ loại bỏ các kết thúc không theo chiều hướng và trả lại dạng cơ sở hoặc dạng từ điển của một từ, được gọi là bổ đề.
  • stopwords.words('english')cho phép chúng tôi xem danh sách tất cả các từ dừng tiếng Anh được NLTK hỗ trợ.
  • remove_unused_c()được sử dụng để loại bỏ các cột không sử dụng.
  • Chúng tôi áp đặt giá trị null bằng Nonecách sử dụng null_process()hàm.
  • Bên trong hàm clean_dataset(), chúng ta gọi remove_unused_c()null_process()hàm. Chức năng này có nhiệm vụ làm sạch dữ liệu.
  • Để làm sạch văn bản khỏi các ký tự không sử dụng, chúng tôi đã tạo clean_text()hàm.
  • Đối với xử lý trước, chúng tôi sẽ chỉ sử dụng loại bỏ từ dừng. Chúng tôi đã tạo ra nltk_preprocess()chức năng cho mục đích đó.

Tiền xử lý texttitle:

# Perform data cleaning on train and test dataset by calling clean_dataset function
df = clean_dataset(news_d)
# apply preprocessing on text through apply method by calling the function nltk_preprocess
df["text"] = df.text.apply(nltk_preprocess)
# apply preprocessing on title through apply method by calling the function nltk_preprocess
df["title"] = df.title.apply(nltk_preprocess)
# Dataset after cleaning and preprocessing step
df.head()

Đầu ra:

title	text	label
0	house dem aide didnt even see comeys letter ja...	house dem aide didnt even see comeys letter ja...	1
1	flynn hillary clinton big woman campus breitbart	ever get feeling life circle roundabout rather...	0
2	truth might get fired	truth might get fired october 29 2016 tension ...	1
3	15 civilian killed single u airstrike identified	video 15 civilian killed single u airstrike id...	1
4	iranian woman jailed fictional unpublished sto...	print iranian woman sentenced six year prison ...	1

Phân tích dữ liệu khám phá

Trong phần này, chúng tôi sẽ thực hiện:

  • Phân tích đơn biến : Nó là một phân tích thống kê của văn bản. Chúng tôi sẽ sử dụng đám mây từ cho mục đích đó. Đám mây từ là một cách tiếp cận trực quan hóa cho dữ liệu văn bản trong đó thuật ngữ phổ biến nhất được trình bày ở kích thước phông chữ đáng kể nhất.
  • Phân tích Bivariate: Bigram và Trigram sẽ được sử dụng ở đây. Theo Wikipedia: " n-gram là một chuỗi n mục liền nhau từ một mẫu văn bản hoặc lời nói nhất định. Theo ứng dụng, các mục có thể là âm vị, âm tiết, chữ cái, từ hoặc các cặp cơ sở. N-gram thường được thu thập từ một văn bản hoặc ngữ liệu lời nói ".

Đám mây một từ

Các từ phổ biến nhất xuất hiện ở phông chữ đậm và lớn hơn trong đám mây từ. Phần này sẽ thực hiện một đám mây từ cho tất cả các từ trong tập dữ liệu.

Chức năng của thư viện WordCloudwordcloud() sẽ được sử dụng và generate()được sử dụng để tạo hình ảnh đám mây từ:

from wordcloud import WordCloud, STOPWORDS
import matplotlib.pyplot as plt

# initialize the word cloud
wordcloud = WordCloud( background_color='black', width=800, height=600)
# generate the word cloud by passing the corpus
text_cloud = wordcloud.generate(' '.join(df['text']))
# plotting the word cloud
plt.figure(figsize=(20,30))
plt.imshow(text_cloud)
plt.axis('off')
plt.show()

Đầu ra:

WordCloud cho toàn bộ dữ liệu tin tức giả mạo

Đám mây từ chỉ dành cho tin tức đáng tin cậy:

true_n = ' '.join(df[df['label']==0]['text']) 
wc = wordcloud.generate(true_n)
plt.figure(figsize=(20,30))
plt.imshow(wc)
plt.axis('off')
plt.show()

Đầu ra:

Word Cloud cho tin tức đáng tin cậy

Word cloud chỉ dành cho tin tức giả mạo:

fake_n = ' '.join(df[df['label']==1]['text'])
wc= wordcloud.generate(fake_n)
plt.figure(figsize=(20,30))
plt.imshow(wc)
plt.axis('off')
plt.show()

Đầu ra:

Word Cloud cho tin tức giả mạo

Bigram thường xuyên nhất (Kết hợp hai từ)

N-gram là một chuỗi các chữ cái hoặc từ. Một ký tự unigram được tạo thành từ một ký tự duy nhất, trong khi một bigram bao gồm một chuỗi hai ký tự. Tương tự, từ N-gram được tạo thành từ một chuỗi n từ. Từ "thống nhất" là 1 gam (unigram). Sự kết hợp của các từ "bang thống nhất" là 2 gam (bigram), "thành phố new york" là 3 gam.

Hãy vẽ biểu đồ phổ biến nhất trên tin tức đáng tin cậy:

def plot_top_ngrams(corpus, title, ylabel, xlabel="Number of Occurences", n=2):
  """Utility function to plot top n-grams"""
  true_b = (pd.Series(nltk.ngrams(corpus.split(), n)).value_counts())[:20]
  true_b.sort_values().plot.barh(color='blue', width=.9, figsize=(12, 8))
  plt.title(title)
  plt.ylabel(ylabel)
  plt.xlabel(xlabel)
  plt.show()
plot_top_ngrams(true_n, 'Top 20 Frequently Occuring True news Bigrams', "Bigram", n=2)

Bigram hàng đầu về tin tức giả mạo

Biểu đồ phổ biến nhất về tin tức giả:

plot_top_ngrams(fake_n, 'Top 20 Frequently Occuring Fake news Bigrams', "Bigram", n=2)

Bigram hàng đầu về tin tức giả mạo

Hình bát quái thường gặp nhất (kết hợp ba từ)

Hình bát quái phổ biến nhất trên các tin tức đáng tin cậy:

plot_top_ngrams(true_n, 'Top 20 Frequently Occuring True news Trigrams', "Trigrams", n=3)

Bát quái phổ biến nhất về tin tức giả mạo

Đối với tin tức giả mạo bây giờ:

plot_top_ngrams(fake_n, 'Top 20 Frequently Occuring Fake news Trigrams', "Trigrams", n=3)

Hình bát quái phổ biến nhất trên tin tức giả mạo

Các biểu đồ trên cho chúng ta một số ý tưởng về giao diện của cả hai lớp. Trong phần tiếp theo, chúng ta sẽ sử dụng thư viện máy biến áp để xây dựng công cụ phát hiện tin tức giả.

Xây dựng Bộ phân loại bằng cách tinh chỉnh BERT

Phần này sẽ lấy mã rộng rãi từ hướng dẫn tinh chỉnh BERT để tạo bộ phân loại tin tức giả bằng cách sử dụng thư viện máy biến áp. Vì vậy, để biết thêm thông tin chi tiết, bạn có thể xem hướng dẫn ban đầu .

Nếu bạn không cài đặt máy biến áp, bạn phải:

$ pip install transformers

Hãy nhập các thư viện cần thiết:

import torch
from transformers.file_utils import is_tf_available, is_torch_available, is_torch_tpu_available
from transformers import BertTokenizerFast, BertForSequenceClassification
from transformers import Trainer, TrainingArguments
import numpy as np
from sklearn.model_selection import train_test_split

import random

Chúng tôi muốn làm cho kết quả của chúng tôi có thể tái tạo ngay cả khi chúng tôi khởi động lại môi trường của mình:

def set_seed(seed: int):
    """
    Helper function for reproducible behavior to set the seed in ``random``, ``numpy``, ``torch`` and/or ``tf`` (if
    installed).

    Args:
        seed (:obj:`int`): The seed to set.
    """
    random.seed(seed)
    np.random.seed(seed)
    if is_torch_available():
        torch.manual_seed(seed)
        torch.cuda.manual_seed_all(seed)
        # ^^ safe to call this function even if cuda is not available
    if is_tf_available():
        import tensorflow as tf

        tf.random.set_seed(seed)

set_seed(1)

Mô hình chúng tôi sẽ sử dụng là bert-base-uncased:

# the model we gonna train, base uncased BERT
# check text classification models here: https://huggingface.co/models?filter=text-classification
model_name = "bert-base-uncased"
# max sequence length for each document/sentence sample
max_length = 512

Đang tải tokenizer:

# load the tokenizer
tokenizer = BertTokenizerFast.from_pretrained(model_name, do_lower_case=True)

Chuẩn bị dữ liệu

Bây giờ chúng ta hãy làm sạch NaNcác giá trị khỏi textauthorcác titlecột:

news_df = news_d[news_d['text'].notna()]
news_df = news_df[news_df["author"].notna()]
news_df = news_df[news_df["title"].notna()]

Tiếp theo, tạo một hàm lấy tập dữ liệu làm khung dữ liệu Pandas và trả về phần tách dòng / xác thực của văn bản và nhãn dưới dạng danh sách:

def prepare_data(df, test_size=0.2, include_title=True, include_author=True):
  texts = []
  labels = []
  for i in range(len(df)):
    text = df["text"].iloc[i]
    label = df["label"].iloc[i]
    if include_title:
      text = df["title"].iloc[i] + " - " + text
    if include_author:
      text = df["author"].iloc[i] + " : " + text
    if text and label in [0, 1]:
      texts.append(text)
      labels.append(label)
  return train_test_split(texts, labels, test_size=test_size)

train_texts, valid_texts, train_labels, valid_labels = prepare_data(news_df)

Hàm trên nhận tập dữ liệu trong một kiểu khung dữ liệu và trả về chúng dưới dạng danh sách được chia thành các tập hợp lệ và huấn luyện. Đặt include_titlethành Truecó nghĩa là chúng tôi thêm titlecột vào mục textchúng tôi sẽ sử dụng để đào tạo, đặt include_authorthành Truecó nghĩa là chúng tôi cũng thêm authorvào văn bản.

Hãy đảm bảo rằng các nhãn và văn bản có cùng độ dài:

print(len(train_texts), len(train_labels))
print(len(valid_texts), len(valid_labels))

Đầu ra:

14628 14628
3657 3657

Mã hóa tập dữ liệu

Hãy sử dụng trình mã hóa BERT để mã hóa tập dữ liệu của chúng ta:

# tokenize the dataset, truncate when passed `max_length`, 
# and pad with 0's when less than `max_length`
train_encodings = tokenizer(train_texts, truncation=True, padding=True, max_length=max_length)
valid_encodings = tokenizer(valid_texts, truncation=True, padding=True, max_length=max_length)

Chuyển đổi các mã hóa thành tập dữ liệu PyTorch:

class NewsGroupsDataset(torch.utils.data.Dataset):
    def __init__(self, encodings, labels):
        self.encodings = encodings
        self.labels = labels

    def __getitem__(self, idx):
        item = {k: torch.tensor(v[idx]) for k, v in self.encodings.items()}
        item["labels"] = torch.tensor([self.labels[idx]])
        return item

    def __len__(self):
        return len(self.labels)

# convert our tokenized data into a torch Dataset
train_dataset = NewsGroupsDataset(train_encodings, train_labels)
valid_dataset = NewsGroupsDataset(valid_encodings, valid_labels)

Loading and Fine-tuning the Model

We'll use BertForSequenceClassification to load our BERT transformer model:

# load the model
model = BertForSequenceClassification.from_pretrained(model_name, num_labels=2)

We set num_labels to 2 since it's a binary classification. The function below is a callback to compute the accuracy on each validation step:

from sklearn.metrics import accuracy_score

def compute_metrics(pred):
  labels = pred.label_ids
  preds = pred.predictions.argmax(-1)
  # calculate accuracy using sklearn's function
  acc = accuracy_score(labels, preds)
  return {
      'accuracy': acc,
  }

Let's initialize the training arguments:

training_args = TrainingArguments(
    output_dir='./results',          # output directory
    num_train_epochs=1,              # total number of training epochs
    per_device_train_batch_size=10,  # batch size per device during training
    per_device_eval_batch_size=20,   # batch size for evaluation
    warmup_steps=100,                # number of warmup steps for learning rate scheduler
    logging_dir='./logs',            # directory for storing logs
    load_best_model_at_end=True,     # load the best model when finished training (default metric is loss)
    # but you can specify `metric_for_best_model` argument to change to accuracy or other metric
    logging_steps=200,               # log & save weights each logging_steps
    save_steps=200,
    evaluation_strategy="steps",     # evaluate each `logging_steps`
)

I've set per_device_train_batch_size to 10, but you should set it as high as your GPU can fit. Setting logging_steps and save_steps to 200 means we'll perform evaluation and save the model weights every 200 training steps.

You can check this page for more detailed information about the available training arguments.
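
For example, if you would rather select the best checkpoint by validation accuracy instead of validation loss, one plausible variant is to add metric_for_best_model (matching the "accuracy" key returned by compute_metrics above) and greater_is_better; this is a sketch, not part of the original run:

# hypothetical variant: pick the best checkpoint by accuracy instead of loss
training_args = TrainingArguments(
    output_dir='./results',
    num_train_epochs=1,
    per_device_train_batch_size=10,
    per_device_eval_batch_size=20,
    warmup_steps=100,
    logging_dir='./logs',
    logging_steps=200,
    save_steps=200,
    evaluation_strategy="steps",
    load_best_model_at_end=True,
    metric_for_best_model="accuracy",  # the key returned by compute_metrics
    greater_is_better=True,            # higher accuracy is better
)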

Let's instantiate the trainer:

trainer = Trainer(
    model=model,                         # the instantiated Transformers model to be trained
    args=training_args,                  # training arguments, defined above
    train_dataset=train_dataset,         # training dataset
    eval_dataset=valid_dataset,          # evaluation dataset
    compute_metrics=compute_metrics,     # the callback that computes metrics of interest
)

Training the model:

# train the model
trainer.train()

The training takes a few hours to finish, depending on your GPU. If you're on the free version of Colab, it should take an hour with an NVIDIA Tesla K80. Here is the output:

***** Running training *****
  Num examples = 14628
  Num Epochs = 1
  Instantaneous batch size per device = 10
  Total train batch size (w. parallel, distributed & accumulation) = 10
  Gradient Accumulation steps = 1
  Total optimization steps = 1463
 [1463/1463 41:07, Epoch 1/1]
Step	Training Loss	Validation Loss	Accuracy
200		0.250800		0.100533		0.983867
400		0.027600		0.043009		0.993437
600		0.023400		0.017812		0.997539
800		0.014900		0.030269		0.994258
1000	0.022400		0.012961		0.998086
1200	0.009800		0.010561		0.998633
1400	0.007700		0.010300		0.998633
***** Running Evaluation *****
  Num examples = 3657
  Batch size = 20
Saving model checkpoint to ./results/checkpoint-200
Configuration saved in ./results/checkpoint-200/config.json
Model weights saved in ./results/checkpoint-200/pytorch_model.bin
<SNIPPED>
***** Running Evaluation *****
  Num examples = 3657
  Batch size = 20
Saving model checkpoint to ./results/checkpoint-1400
Configuration saved in ./results/checkpoint-1400/config.json
Model weights saved in ./results/checkpoint-1400/pytorch_model.bin

Training completed. Do not forget to share your model on huggingface.co/models =)

Loading best model from ./results/checkpoint-1400 (score: 0.010299865156412125).
TrainOutput(global_step=1463, training_loss=0.04888018785440506, metrics={'train_runtime': 2469.1722, 'train_samples_per_second': 5.924, 'train_steps_per_second': 0.593, 'total_flos': 3848788517806080.0, 'train_loss': 0.04888018785440506, 'epoch': 1.0})

Model Evaluation

Since load_best_model_at_end is set to True, the best weights will be loaded when the training is completed. Let's evaluate it with our validation set:

# evaluate the current model after training
trainer.evaluate()

Output:

***** Running Evaluation *****
  Num examples = 3657
  Batch size = 20
 [183/183 02:11]
{'epoch': 1.0,
 'eval_accuracy': 0.998632759092152,
 'eval_loss': 0.010299865156412125,
 'eval_runtime': 132.0374,
 'eval_samples_per_second': 27.697,
 'eval_steps_per_second': 1.386}

Saving the model and tokenizer:

# saving the fine tuned model & tokenizer
model_path = "fake-news-bert-base-uncased"
model.save_pretrained(model_path)
tokenizer.save_pretrained(model_path)

A new folder containing the model configuration and weights will appear after running the above cell. If you want to perform predictions, you simply use the from_pretrained() method we used when we loaded the model, and you're good to go.
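
As a minimal sketch (assuming the same model_path as above and a CUDA device, to match the get_prediction() function below), reloading looks like this:

# reload the fine-tuned model & tokenizer from disk for inference
model = BertForSequenceClassification.from_pretrained("fake-news-bert-base-uncased").to("cuda")
tokenizer = BertTokenizerFast.from_pretrained("fake-news-bert-base-uncased")
model.eval()  # disable dropout for deterministic inference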

Next, let's make a function that accepts the article text as an argument and returns whether it's fake or not:

def get_prediction(text, convert_to_label=False):
    # prepare our text into tokenized sequence
    inputs = tokenizer(text, padding=True, truncation=True, max_length=max_length, return_tensors="pt").to("cuda")
    # perform inference to our model
    outputs = model(**inputs)
    # get output probabilities by doing softmax
    probs = outputs[0].softmax(1)
    # executing argmax function to get the candidate label
    d = {
        0: "reliable",
        1: "fake"
    }
    if convert_to_label:
      return d[int(probs.argmax())]
    else:
      return int(probs.argmax())

I've taken an example from test.csv that the model has never seen to perform inference on; I checked it, and it's an actual article from The New York Times:

real_news = """
Tim Tebow Will Attempt Another Comeback, This Time in Baseball - The New York Times",Daniel Victor,"If at first you don’t succeed, try a different sport. Tim Tebow, who was a Heisman   quarterback at the University of Florida but was unable to hold an N. F. L. job, is pursuing a career in Major League Baseball. <SNIPPED>
"""

The original text is in the Colab environment if you want to copy it, as it's a complete article. Let's pass it to the model and see the results:

get_prediction(real_news, convert_to_label=True)

Output:

reliable

Appendix: Making a Submission File for Kaggle

In this section, we'll predict all the articles in test.csv to make a submission file and see our accuracy on the test set of the Kaggle competition:

# read the test set
test_df = pd.read_csv("test.csv")
# make a copy of the testing set
new_df = test_df.copy()
# add a new column that contains the author, title and article content
new_df["new_text"] = new_df["author"].astype(str) + " : " + new_df["title"].astype(str) + " - " + new_df["text"].astype(str)
# get the prediction of all the test set
new_df["label"] = new_df["new_text"].apply(get_prediction)
# make the submission file
final_df = new_df[["id", "label"]]
final_df.to_csv("submit_final.csv", index=False)

After we concatenate the author, title, and article text together, we apply the get_prediction() function to the new column to fill the label column, and then we use the to_csv() method to make the submission file for Kaggle. Here's my submission score:

Submission score

We got 99.78% and 100% accuracy on the private and public leaderboards, respectively. That's awesome!

Conclusion

Alright, we're done with this tutorial. You can check this page to see various training parameters you can tweak.

If you have a custom fake news dataset for fine-tuning, you just have to pass a list of samples to the tokenizer as we did; you won't need to change any other code after that. See the sketch below.
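
As a minimal sketch with made-up texts and labels, the only pieces you would swap in are your own lists (tokenizer, max_length, and NewsGroupsDataset are the same objects defined above):

# hypothetical custom data: any parallel lists of strings and 0/1 labels work
custom_texts = ["First article body ...", "Second article body ..."]
custom_labels = [0, 1]
custom_encodings = tokenizer(custom_texts, truncation=True, padding=True, max_length=max_length)
custom_dataset = NewsGroupsDataset(custom_encodings, custom_labels)
# custom_dataset can then be passed to Trainer as train_dataset or eval_dataset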

Check the complete code here, or the Colab environment here.

中條 美冬

1646044200

How to Build a Fake News Detector in Python Using the Transformers Library

Fake News Detection in Python

Explore the fake news dataset, perform data analysis such as word clouds and n-grams, and fine-tune a BERT transformer to build a fake news detector in Python using the transformers library.

Fake news is the intentional broadcasting of false or misleading claims as news, where the statements are purposely deceitful.

Newspapers, tabloids, and magazines have been supplanted by digital news platforms, blogs, social media feeds, and a plethora of mobile news applications. News organizations have benefited from the increased use of social media and mobile platforms by providing subscribers with up-to-the-minute information.

Consumers now have instant access to the latest news. These digital media platforms have gained prominence because they make it easy to connect with the rest of the world and allow users to discuss and share ideas and debate topics such as democracy, education, health, research, and history. Fake news items on digital platforms are getting more and more popular and are used for profit, such as political and financial gain.

How Big is this Problem?

Because the Internet, social media, and digital platforms are widely used, anybody can propagate inaccurate and biased information. It is almost impossible to prevent the spread of fake news. The distribution of false news is surging, and it is not restricted to one sector such as politics; it includes sports, health, history, entertainment, science, and research.

The Solution

It is vital to recognize and differentiate between false and accurate news. One method is to have experts decide and fact-check every piece of information, but this takes time and requires expertise that cannot be shared. Alternatively, we can use machine learning and artificial intelligence tools to automate the identification of fake news.

Online news information includes various unstructured-format data (such as documents, videos, and audio), but we will concentrate on text-format news here. With the progress of machine learning and natural language processing, we can now recognize the misleading and false character of an article or statement.

Several studies and experiments are being conducted to detect fake news across all mediums.

Our main goals for this tutorial are:

  • Explore and analyze the fake news dataset.
  • Build a classifier that can distinguish fake news with as much accuracy as possible.

Here is the table of contents:

  • Introduction
  • How Big is this Problem?
  • The Solution
  • Data Exploration
    • Distribution of Classes
  • Data Cleaning for Analysis
  • Exploratory Data Analysis
    • Single-word Cloud
    • Most Frequent Bigrams (Two-word Combinations)
    • Most Frequent Trigrams (Three-word Combinations)
  • Building a Classifier by Fine-tuning BERT
    • Preparing the Data
    • Tokenizing the Dataset
    • Loading and Fine-tuning the Model
    • Model Evaluation
  • Appendix: Making a Submission File for Kaggle
  • Conclusion

Data Exploration

In this work, we utilize the fake news dataset from Kaggle to classify untrustworthy news articles as fake news. We have a complete training dataset containing the following characteristics:

  • id: unique id for a news article
  • title: the title of the news article
  • author: the author of the news article
  • text: the text of the article; could be incomplete
  • label: a label that marks the article as potentially unreliable, denoted by 1 (unreliable or fake) or 0 (reliable)

This is a binary classification problem in which we must predict whether a particular news story is reliable or not.

If you have a Kaggle account, you can simply download the dataset from the website there and extract the ZIP file.

I also uploaded the dataset to Google Drive; you can get it here, or use the gdown library to download it automatically in Google Colab or a Jupyter notebook:

$ pip install gdown
# download from Google Drive
$ gdown "https://drive.google.com/uc?id=178f_VkNxccNidap-5-uffXUW475pAuPy&confirm=t"
Downloading...
From: https://drive.google.com/uc?id=178f_VkNxccNidap-5-uffXUW475pAuPy&confirm=t
To: /content/fake-news.zip
100% 48.7M/48.7M [00:00<00:00, 74.6MB/s]

Unzipping the files:

$ unzip fake-news.zip

Three files will appear in the current working directory: train.csv, test.csv, and submit.csv; we will use train.csv for most of the tutorial.

Installing the required dependencies:

$ pip install transformers nltk pandas numpy matplotlib seaborn wordcloud

Note: If you're in a local environment, make sure you install PyTorch for GPU; head to this page for a proper installation.
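
A quick sanity check (not part of the original steps) to confirm that your PyTorch build actually sees a GPU:

import torch
# True means a CUDA-enabled PyTorch build found at least one GPU
print(torch.cuda.is_available())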

Let's import the essential libraries for the analysis:

import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns

The NLTK corpora and modules must be installed using the standard NLTK downloader:

import nltk
nltk.download('stopwords')
nltk.download('wordnet')

The fake news dataset comprises original and fictitious article titles and text from various authors. Let's import our dataset:

# load the dataset
news_d = pd.read_csv("train.csv")
print("Shape of News data:", news_d.shape)
print("News data columns", news_d.columns)

Output:

 Shape of News data: (20800, 5)
 News data columns Index(['id', 'title', 'author', 'text', 'label'], dtype='object')

Here's how the dataset looks:

# by using df.head(), we can immediately familiarize ourselves with the dataset. 
news_d.head()

Output:

id	title	author	text	label
0	0	House Dem Aide: We Didn’t Even See Comey’s Let...	Darrell Lucus	House Dem Aide: We Didn’t Even See Comey’s Let...	1
1	1	FLYNN: Hillary Clinton, Big Woman on Campus - ...	Daniel J. Flynn	Ever get the feeling your life circles the rou...	0
2	2	Why the Truth Might Get You Fired	Consortiumnews.com	Why the Truth Might Get You Fired October 29, ...	1
3	3	15 Civilians Killed In Single US Airstrike Hav...	Jessica Purkiss	Videos 15 Civilians Killed In Single US Airstr...	1
4	4	Iranian woman jailed for fictional unpublished...	Howard Portnoy	Print \nAn Iranian woman has been sentenced to...	1

We have 20,800 rows and five columns. Let's look at some statistics of the text column:

# Text word statistics: min, mean, max, and interquartile range

txt_length = news_d.text.str.split().str.len()
txt_length.describe()

Output:

count    20761.000000
mean       760.308126
std        869.525988
min          0.000000
25%        269.000000
50%        556.000000
75%       1052.000000
max      24234.000000
Name: text, dtype: float64

Statistics for the title column:

#Title statistics 

title_length = news_d.title.str.split().str.len()
title_length.describe()

Output:

count    20242.000000
mean        12.420709
std          4.098735
min          1.000000
25%         10.000000
50%         13.000000
75%         15.000000
max         72.000000
Name: title, dtype: float64

Here are the statistics for the training and test sets:

  • The text attribute has a high word count, with an average of 760 words; the 75th percentile is at about 1,050 words.
  • The title attribute is a short statement, with an average of 12 words; the 75th percentile is at about 15 words.

Our experiment will use both text and title together.

Distribution of Classes

Count plots for both labels:

sns.countplot(x="label", data=news_d);
print("1: Unreliable")
print("0: Reliable")
print("Distribution of labels:")
print(news_d.label.value_counts());

Output:

1: Unreliable
0: Reliable
Distribution of labels:
1    10413
0    10387
Name: label, dtype: int64

Distribution of labels

print(round(news_d.label.value_counts(normalize=True),2)*100);

Output:

1    50.0
0    50.0
Name: label, dtype: float64

The number of unreliable articles (fake or 1) is 10,413, while the number of trustworthy articles (reliable or 0) is 10,387. Almost 50% of the articles are fake, so the accuracy metric will measure how well our model is doing when we build a classifier.

Data Cleaning for Analysis

In this section, we will clean up the dataset to do some analysis:

  • Drop unused rows and columns.
  • Perform null value imputation.
  • Remove special characters.
  • Remove stop words.

# Constants that are used to sanitize the datasets 

column_n = ['id', 'title', 'author', 'text', 'label']
remove_c = ['id','author']
categorical_features = []
target_col = ['label']
text_f = ['title', 'text']
# Clean Datasets
import nltk
from nltk.corpus import stopwords
import re
from nltk.stem.porter import PorterStemmer
from collections import Counter

ps = PorterStemmer()
wnl = nltk.stem.WordNetLemmatizer()

stop_words = stopwords.words('english')
stopwords_dict = Counter(stop_words)

# Remove unused columns
def remove_unused_c(df,column_n=remove_c):
    df = df.drop(column_n,axis=1)
    return df

# Impute null values with None
def null_process(feature_df):
    for col in text_f:
        feature_df.loc[feature_df[col].isnull(), col] = "None"
    return feature_df

def clean_dataset(df):
    # remove unused column
    df = remove_unused_c(df)
    #impute null values
    df = null_process(df)
    return df

# Clean text of unused characters
# note: the original version called str.replace(), which does literal substring
# matching, so these regex patterns never fired; re.sub applies them as intended
def clean_text(text):
    text = str(text)
    text = re.sub(r'http[\w:/\.]+', ' ', text)  # remove URLs
    text = re.sub(r'[^\.\w\s]', ' ', text)      # keep word characters, whitespace, and periods
    text = re.sub(r'[^a-zA-Z\.\s]', ' ', text)  # drop digits and anything else non-alphabetic
    text = re.sub(r'\s\s+', ' ', text)          # collapse repeated whitespace
    text = text.lower().strip()
    return text

## NLTK preprocessing includes:
# stop word removal, stemming, and lemmatization.
# For our project we use stop word removal plus lemmatization.
def nltk_preprocess(text):
    text = clean_text(text)
    wordlist = re.sub(r'[^\w\s]', '', text).split()
    #text = ' '.join([word for word in wordlist if word not in stopwords_dict])
    #text = [ps.stem(word) for word in wordlist if not word in stopwords_dict]
    text = ' '.join([wnl.lemmatize(word) for word in wordlist if word not in stopwords_dict])
    return  text

In the above code block:

  • We imported NLTK, a famous platform for developing Python applications that interact with human language. Next, we import re for regular expressions.
  • We import stopwords from nltk.corpus. When working with words, particularly when considering semantics, we sometimes need to eliminate common words that do not add any significant meaning to a statement, such as "but", "can", and "we".
  • PorterStemmer is used to perform stemming with NLTK. Stemmers strip words of their morphological affixes, leaving only the word stem.
  • We import WordNetLemmatizer() from the NLTK library for lemmatization. Lemmatization is much more effective than stemming; it goes beyond word reduction and evaluates a language's whole lexicon to apply morphological analysis to words, with the goal of removing inflectional endings and returning the base or dictionary form of a word, known as the lemma (see the short example after this list).
  • stopwords.words('english') lets us look at the list of all the English stop words supported by NLTK.
  • The remove_unused_c() function is used to remove the unused columns.
  • We impute null values with None using the null_process() function.
  • Inside the clean_dataset() function, we call remove_unused_c() and null_process(); this function is responsible for data cleaning.
  • To clean text of unused characters, we created the clean_text() function.
  • For preprocessing, we use stop word removal and lemmatization; we created the nltk_preprocess() function for that purpose.
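
Since the difference between stemming and lemmatization is easiest to see on a concrete word, here is a tiny illustration using the ps, wnl, and stop_words objects defined above ("studies" is just an arbitrary example word):

print(ps.stem("studies"))        # stemming chops affixes: "studi"
print(wnl.lemmatize("studies"))  # lemmatization returns the dictionary form: "study"
print(stop_words[:5])            # e.g. ['i', 'me', 'my', 'myself', 'we']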

Preprocessing the text and title:

# Perform data cleaning on train and test dataset by calling clean_dataset function
df = clean_dataset(news_d)
# apply preprocessing on text through apply method by calling the function nltk_preprocess
df["text"] = df.text.apply(nltk_preprocess)
# apply preprocessing on title through apply method by calling the function nltk_preprocess
df["title"] = df.title.apply(nltk_preprocess)
# Dataset after cleaning and preprocessing step
df.head()

Output:

title	text	label
0	house dem aide didnt even see comeys letter ja...	house dem aide didnt even see comeys letter ja...	1
1	flynn hillary clinton big woman campus breitbart	ever get feeling life circle roundabout rather...	0
2	truth might get fired	truth might get fired october 29 2016 tension ...	1
3	15 civilian killed single u airstrike identified	video 15 civilian killed single u airstrike id...	1
4	iranian woman jailed fictional unpublished sto...	print iranian woman sentenced six year prison ...	1

Exploratory Data Analysis

In this section, we will perform:

  • Univariate analysis: a statistical analysis of the text. We will use word clouds for that; a word cloud is a text-data visualization approach in which the most common terms appear in the largest font sizes.
  • Bivariate analysis: bigrams and trigrams will be used here. According to Wikipedia: "an n-gram is a contiguous sequence of n items from a given sample of text or speech. According to the application, the items can be phonemes, syllables, letters, words, or base pairs. The n-grams are typically collected from a text or speech corpus."

Single-word Cloud

The most frequent words appear in a bold, larger font in a word cloud. In this section, we will make a word cloud for all the words in the dataset.

The WordCloud library's wordcloud() function will be used, and generate() is utilized for generating the word cloud image:

from wordcloud import WordCloud, STOPWORDS
import matplotlib.pyplot as plt

# initialize the word cloud
wordcloud = WordCloud( background_color='black', width=800, height=600)
# generate the word cloud by passing the corpus
text_cloud = wordcloud.generate(' '.join(df['text']))
# plotting the word cloud
plt.figure(figsize=(20,30))
plt.imshow(text_cloud)
plt.axis('off')
plt.show()

Output:

Word cloud for the whole fake news dataset

Word cloud for reliable news only:

true_n = ' '.join(df[df['label']==0]['text']) 
wc = wordcloud.generate(true_n)
plt.figure(figsize=(20,30))
plt.imshow(wc)
plt.axis('off')
plt.show()

Output:

Word cloud for reliable news

Word cloud for fake news only:

fake_n = ' '.join(df[df['label']==1]['text'])
wc= wordcloud.generate(fake_n)
plt.figure(figsize=(20,30))
plt.imshow(wc)
plt.axis('off')
plt.show()

Output:

Word cloud for fake news

Most Frequent Bigrams (Two-word Combinations)

An N-gram is a sequence of letters or words. A character unigram is made up of a single character, while a bigram comprises a series of two characters. Similarly, word N-grams are made up of a series of n words. The word "united" is a 1-gram (unigram), the combination of the words "united states" is a 2-gram (bigram), and "new york city" is a 3-gram (trigram).
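
To make this concrete, here is what nltk.ngrams, the same function used by the plotting helper below, produces for a toy sentence:

# toy example: word bigrams from a short sentence
list(nltk.ngrams("new york city is big".split(), 2))
# [('new', 'york'), ('york', 'city'), ('city', 'is'), ('is', 'big')]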

Let's plot the most common bigrams in the reliable news:

def plot_top_ngrams(corpus, title, ylabel, xlabel="Number of Occurrences", n=2):
  """Utility function to plot top n-grams"""
  true_b = (pd.Series(nltk.ngrams(corpus.split(), n)).value_counts())[:20]
  true_b.sort_values().plot.barh(color='blue', width=.9, figsize=(12, 8))
  plt.title(title)
  plt.ylabel(ylabel)
  plt.xlabel(xlabel)
  plt.show()
plot_top_ngrams(true_n, 'Top 20 Frequently Occurring True news Bigrams', "Bigram", n=2)

Top bigrams of reliable news

The most common bigrams in the fake news:

plot_top_ngrams(fake_n, 'Top 20 Frequently Occurring Fake news Bigrams', "Bigram", n=2)

Top bigrams of fake news

Most Frequent Trigrams (Three-word Combinations)

The most common trigrams in the reliable news:

plot_top_ngrams(true_n, 'Top 20 Frequently Occurring True news Trigrams', "Trigrams", n=3)

Most common trigrams of reliable news

And now for the fake news:

plot_top_ngrams(fake_n, 'Top 20 Frequently Occurring Fake news Trigrams', "Trigrams", n=3)

Most common trigrams of fake news

The above plots give us some idea of what both classes look like. In the next section, we'll use the transformers library to build a fake news detector.

Building a Classifier by Fine-tuning BERT

This section takes its code extensively from the fine-tuning BERT tutorial to make a fake news classifier using the transformers library, so head to the original tutorial for more detailed information.

If you haven't installed transformers yet, you have to:

$ pip install transformers

Let's import the necessary libraries:

import torch
from transformers.file_utils import is_tf_available, is_torch_available, is_torch_tpu_available
from transformers import BertTokenizerFast, BertForSequenceClassification
from transformers import Trainer, TrainingArguments
import numpy as np
from sklearn.model_selection import train_test_split

import random

We want to make our results reproducible even if we restart our environment:

def set_seed(seed: int):
    """
    Helper function for reproducible behavior to set the seed in ``random``, ``numpy``, ``torch`` and/or ``tf`` (if
    installed).

    Args:
        seed (:obj:`int`): The seed to set.
    """
    random.seed(seed)
    np.random.seed(seed)
    if is_torch_available():
        torch.manual_seed(seed)
        torch.cuda.manual_seed_all(seed)
        # ^^ safe to call this function even if cuda is not available
    if is_tf_available():
        import tensorflow as tf

        tf.random.set_seed(seed)

set_seed(1)

The model we'll be using is bert-base-uncased:

# the model we're going to train: base uncased BERT
# check text classification models here: https://huggingface.co/models?filter=text-classification
model_name = "bert-base-uncased"
# max sequence length for each document/sentence sample
max_length = 512

Loading the tokenizer:

# load the tokenizer
tokenizer = BertTokenizerFast.from_pretrained(model_name, do_lower_case=True)

Preparing the Data

Now let's clean the NaN values from the text, author, and title columns:

news_df = news_d[news_d['text'].notna()]
news_df = news_df[news_df["author"].notna()]
news_df = news_df[news_df["title"].notna()]

Next, let's make a function that takes the dataset as a Pandas dataframe and returns the train/validation split of texts and labels as lists:

def prepare_data(df, test_size=0.2, include_title=True, include_author=True):
  texts = []
  labels = []
  for i in range(len(df)):
    text = df["text"].iloc[i]
    label = df["label"].iloc[i]
    if include_title:
      text = df["title"].iloc[i] + " - " + text
    if include_author:
      text = df["author"].iloc[i] + " : " + text
    if text and label in [0, 1]:
      texts.append(text)
      labels.append(label)
  return train_test_split(texts, labels, test_size=test_size)

train_texts, valid_texts, train_labels, valid_labels = prepare_data(news_df)

The above function takes the dataset as a dataframe and returns it as lists split into training and validation sets. Setting include_title to True means we add the title column to the text we're going to use for training, and setting include_author to True means we add the author to the text as well.

Let's make sure the labels and texts have the same length:

print(len(train_texts), len(train_labels))
print(len(valid_texts), len(valid_labels))

Output:

14628 14628
3657 3657

Tokenizing the Dataset

Let's use the BERT tokenizer to tokenize our dataset:

# tokenize the dataset, truncate when passed `max_length`, 
# and pad with 0's when less than `max_length`
train_encodings = tokenizer(train_texts, truncation=True, padding=True, max_length=max_length)
valid_encodings = tokenizer(valid_texts, truncation=True, padding=True, max_length=max_length)

Converting the encodings into a PyTorch dataset:

class NewsGroupsDataset(torch.utils.data.Dataset):
    def __init__(self, encodings, labels):
        self.encodings = encodings
        self.labels = labels

    def __getitem__(self, idx):
        item = {k: torch.tensor(v[idx]) for k, v in self.encodings.items()}
        item["labels"] = torch.tensor([self.labels[idx]])
        return item

    def __len__(self):
        return len(self.labels)

# convert our tokenized data into a torch Dataset
train_dataset = NewsGroupsDataset(train_encodings, train_labels)
valid_dataset = NewsGroupsDataset(valid_encodings, valid_labels)

Loading and Fine-tuning the Model

We'll use BertForSequenceClassification to load our BERT transformer model:

# load the model
model = BertForSequenceClassification.from_pretrained(model_name, num_labels=2)

We set num_labels to 2 since it's a binary classification. The function below is a callback to compute the accuracy on each validation step:

from sklearn.metrics import accuracy_score

def compute_metrics(pred):
  labels = pred.label_ids
  preds = pred.predictions.argmax(-1)
  # calculate accuracy using sklearn's function
  acc = accuracy_score(labels, preds)
  return {
      'accuracy': acc,
  }

Let's initialize the training arguments:

training_args = TrainingArguments(
    output_dir='./results',          # output directory
    num_train_epochs=1,              # total number of training epochs
    per_device_train_batch_size=10,  # batch size per device during training
    per_device_eval_batch_size=20,   # batch size for evaluation
    warmup_steps=100,                # number of warmup steps for learning rate scheduler
    logging_dir='./logs',            # directory for storing logs
    load_best_model_at_end=True,     # load the best model when finished training (default metric is loss)
    # but you can specify `metric_for_best_model` argument to change to accuracy or other metric
    logging_steps=200,               # log & save weights each logging_steps
    save_steps=200,
    evaluation_strategy="steps",     # evaluate each `logging_steps`
)

I've set per_device_train_batch_size to 10, but you should set it as high as your GPU can fit. Setting logging_steps and save_steps to 200 means we'll perform evaluation and save the model weights every 200 training steps.

You can check this page for more detailed information about the available training arguments.

Let's instantiate the trainer:

trainer = Trainer(
    model=model,                         # the instantiated Transformers model to be trained
    args=training_args,                  # training arguments, defined above
    train_dataset=train_dataset,         # training dataset
    eval_dataset=valid_dataset,          # evaluation dataset
    compute_metrics=compute_metrics,     # the callback that computes metrics of interest
)

Training the model:

# train the model
trainer.train()

The training takes a few hours to finish, depending on your GPU. If you're on the free version of Colab, it should take an hour with an NVIDIA Tesla K80. Here is the output:

***** Running training *****
  Num examples = 14628
  Num Epochs = 1
  Instantaneous batch size per device = 10
  Total train batch size (w. parallel, distributed & accumulation) = 10
  Gradient Accumulation steps = 1
  Total optimization steps = 1463
 [1463/1463 41:07, Epoch 1/1]
Step	Training Loss	Validation Loss	Accuracy
200		0.250800		0.100533		0.983867
400		0.027600		0.043009		0.993437
600		0.023400		0.017812		0.997539
800		0.014900		0.030269		0.994258
1000	0.022400		0.012961		0.998086
1200	0.009800		0.010561		0.998633
1400	0.007700		0.010300		0.998633
***** Running Evaluation *****
  Num examples = 3657
  Batch size = 20
Saving model checkpoint to ./results/checkpoint-200
Configuration saved in ./results/checkpoint-200/config.json
Model weights saved in ./results/checkpoint-200/pytorch_model.bin
<SNIPPED>
***** Running Evaluation *****
  Num examples = 3657
  Batch size = 20
Saving model checkpoint to ./results/checkpoint-1400
Configuration saved in ./results/checkpoint-1400/config.json
Model weights saved in ./results/checkpoint-1400/pytorch_model.bin

Training completed. Do not forget to share your model on huggingface.co/models =)

Loading best model from ./results/checkpoint-1400 (score: 0.010299865156412125).
TrainOutput(global_step=1463, training_loss=0.04888018785440506, metrics={'train_runtime': 2469.1722, 'train_samples_per_second': 5.924, 'train_steps_per_second': 0.593, 'total_flos': 3848788517806080.0, 'train_loss': 0.04888018785440506, 'epoch': 1.0})

Model Evaluation

Since load_best_model_at_end is set to True, the best weights will be loaded when the training is completed. Let's evaluate it with our validation set:

# evaluate the current model after training
trainer.evaluate()

Output:

***** Running Evaluation *****
  Num examples = 3657
  Batch size = 20
 [183/183 02:11]
{'epoch': 1.0,
 'eval_accuracy': 0.998632759092152,
 'eval_loss': 0.010299865156412125,
 'eval_runtime': 132.0374,
 'eval_samples_per_second': 27.697,
 'eval_steps_per_second': 1.386}

Saving the model and tokenizer:

# saving the fine tuned model & tokenizer
model_path = "fake-news-bert-base-uncased"
model.save_pretrained(model_path)
tokenizer.save_pretrained(model_path)

A new folder containing the model configuration and weights will appear after running the above cell. If you want to perform predictions, you simply use the from_pretrained() method we used when we loaded the model, and you're good to go.

Next, let's make a function that accepts the article text as an argument and returns whether it's fake or not:

def get_prediction(text, convert_to_label=False):
    # prepare our text into tokenized sequence
    inputs = tokenizer(text, padding=True, truncation=True, max_length=max_length, return_tensors="pt").to("cuda")
    # perform inference to our model
    outputs = model(**inputs)
    # get output probabilities by doing softmax
    probs = outputs[0].softmax(1)
    # executing argmax function to get the candidate label
    d = {
        0: "reliable",
        1: "fake"
    }
    if convert_to_label:
      return d[int(probs.argmax())]
    else:
      return int(probs.argmax())

I've taken an example from test.csv that the model has never seen to perform inference on; I checked it, and it's an actual article from The New York Times:

real_news = """
Tim Tebow Will Attempt Another Comeback, This Time in Baseball - The New York Times",Daniel Victor,"If at first you don’t succeed, try a different sport. Tim Tebow, who was a Heisman   quarterback at the University of Florida but was unable to hold an N. F. L. job, is pursuing a career in Major League Baseball. <SNIPPED>
"""

The original text is in the Colab environment if you want to copy it, as it's a complete article. Let's pass it to the model and see the results:

get_prediction(real_news, convert_to_label=True)

Output:

reliable

Appendix: Making a Submission File for Kaggle

In this section, we'll predict all the articles in test.csv to make a submission file and see our accuracy on the test set of the Kaggle competition:

# read the test set
test_df = pd.read_csv("test.csv")
# make a copy of the testing set
new_df = test_df.copy()
# add a new column that contains the author, title and article content
new_df["new_text"] = new_df["author"].astype(str) + " : " + new_df["title"].astype(str) + " - " + new_df["text"].astype(str)
# get the prediction of all the test set
new_df["label"] = new_df["new_text"].apply(get_prediction)
# make the submission file
final_df = new_df[["id", "label"]]
final_df.to_csv("submit_final.csv", index=False)

After we concatenate the author, title, and article text together, we apply the get_prediction() function to the new column to fill the label column, and then we use the to_csv() method to make the submission file for Kaggle. Here's my submission score:

Submission score

We got 99.78% and 100% accuracy on the private and public leaderboards, respectively. That's awesome!

Conclusion

Alright, we're done with this tutorial. You can check this page to see various training parameters you can tweak.

If you have a custom fake news dataset for fine-tuning, you just have to pass a list of samples to the tokenizer as we did; you won't need to change any other code after that.

Check the complete code here, or the Colab environment here.