Toni Schmidt

A Simple Text Summarizer written in Rust

Motivation

I read a lot of online news sites to catch up with the latest stories. From time to time, I find that news stories from different sources are simply rephrasing each other. This happens with technical articles too, but I digress.

This love of reading news stories led me to the idea of text summarization. Wouldn't it be great if I could summarize across multiple news sources and gather all the information in one place?

I have another goal in mind. I love computer languages. Rust has been on my radar for a long time, but I had never come up with a good excuse to learn it. I figured that a data-processing program like a text summarizer could be a good fit for learning the gory details of memory ownership.

Text Summarization in Short

There are basically two techniques when it comes to text summarization: abstractive and extractive.

Abstraction-based summarization picks words based on the semantic meaning of the original article, and it can even pick words that never appear in the source if they fit the meaning. The idea is not unlike how a human would summarize a piece of text. As you can imagine, this is not an easy problem and would almost require some form of machine understanding.

Extraction-based summarization takes a different approach. Instead of trying to understand the underlying text, it uses mathematical formulas to rank each sentence in the article and outputs only the sentences that score above a certain threshold. This way, the meaning of the original text is mostly preserved without teaching the machine to understand it.

In this article, we will use an extraction-based summarization technique. It's perfect for an individual developer, like me (or you), to experiment with and to appreciate the problem.
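To make the extractive idea concrete, here is a minimal sketch in plain Rust (standard library only). The `summarize` function, the naive period-based sentence splitter, and the raw word-frequency score are all illustrative choices of mine, not a production algorithm:

```rust
use std::collections::HashMap;

/// Lowercase a word and strip non-alphanumeric characters.
fn normalize(word: &str) -> String {
    word.chars()
        .filter(|c| c.is_alphanumeric())
        .collect::<String>()
        .to_lowercase()
}

/// Rank sentences by the average frequency of the words they contain,
/// then return the `top_n` best sentences in their original order.
fn summarize(text: &str, top_n: usize) -> Vec<String> {
    // Naive sentence splitting on periods.
    let sentences: Vec<&str> = text
        .split('.')
        .map(str::trim)
        .filter(|s| !s.is_empty())
        .collect();

    // Count how often each normalized word appears in the whole text.
    let mut freq: HashMap<String, usize> = HashMap::new();
    for word in text.split_whitespace() {
        let w = normalize(word);
        if !w.is_empty() {
            *freq.entry(w).or_insert(0) += 1;
        }
    }

    // Score each sentence: sum of its word frequencies, divided by its
    // length so long sentences are not automatically favored.
    let mut scored: Vec<(usize, f64)> = sentences
        .iter()
        .enumerate()
        .map(|(i, s)| {
            let words: Vec<String> = s.split_whitespace().map(normalize).collect();
            let total: usize = words.iter().filter_map(|w| freq.get(w.as_str())).sum();
            (i, total as f64 / words.len().max(1) as f64)
        })
        .collect();

    // Keep the `top_n` best sentences, restoring document order.
    scored.sort_by(|a, b| b.1.partial_cmp(&a.1).unwrap());
    let mut picked: Vec<usize> = scored.into_iter().take(top_n).map(|(i, _)| i).collect();
    picked.sort();
    picked.into_iter().map(|i| sentences[i].to_string()).collect()
}

fn main() {
    let text = "Rust is fast. Rust is safe. The weather is nice. Rust programs are fast and safe.";
    for sentence in summarize(text, 2) {
        println!("{}", sentence);
    }
}
```

Real summarizers refine this with stop-word removal and smarter sentence segmentation, but the rank-then-select shape stays the same.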

#rust

Navigating Between DOM Nodes in JavaScript

In the previous chapters you've learnt how to select individual elements on a web page. But there are many occasions where you need to access a child, parent or ancestor element. See the JavaScript DOM nodes chapter to understand the logical relationships between the nodes in a DOM tree.

A DOM node provides several properties and methods that allow you to navigate or traverse the tree structure of the DOM and make changes very easily. In the following sections we will learn how to navigate up, down, and sideways in the DOM tree using JavaScript.

Accessing the Child Nodes

You can use the firstChild and lastChild properties of a DOM node to access its first and last direct child node, respectively. If the node has no children, these properties return null.

Example

<div id="main">
    <h1 id="title">My Heading</h1>
    <p id="hint"><span>This is some text.</span></p>
</div>

<script>
var main = document.getElementById("main");
console.log(main.firstChild.nodeName); // Prints: #text

var hint = document.getElementById("hint");
console.log(hint.firstChild.nodeName); // Prints: SPAN
</script>

Note: The nodeName is a read-only property that returns the name of the current node as a string. For example, it returns the tag name for element node, #text for text node, #comment for comment node, #document for document node, and so on.

In the example above, the nodeName of the first child of the main DIV element is #text instead of H1. That's because whitespace characters such as spaces, tabs, and newlines are valid characters; they form #text nodes and become part of the DOM tree. Since the <div> tag contains a newline before the <h1> tag, that newline creates a #text node.

To avoid the issue of firstChild and lastChild returning #text or #comment nodes, you can instead use the firstElementChild and lastElementChild properties, which return only the first and last element node, respectively. However, they do not work in IE 8 and earlier.

Example

<div id="main">
    <h1 id="title">My Heading</h1>
    <p id="hint"><span>This is some text.</span></p>
</div>

<script>
var main = document.getElementById("main");
alert(main.firstElementChild.nodeName); // Outputs: H1
main.firstElementChild.style.color = "red";

var hint = document.getElementById("hint");
alert(hint.firstElementChild.nodeName); // Outputs: SPAN
hint.firstElementChild.style.color = "blue";
</script>

Similarly, you can use the childNodes property to access all child nodes of a given element, where the first child node is assigned index 0. Here's an example:

Example

<div id="main">
    <h1 id="title">My Heading</h1>
    <p id="hint"><span>This is some text.</span></p>
</div>

<script>
var main = document.getElementById("main");

// First check that the element has child nodes 
if(main.hasChildNodes()) {
    var nodes = main.childNodes;
    
    // Loop through node list and display node name
    for(var i = 0; i < nodes.length; i++) {
        alert(nodes[i].nodeName);
    }
}
</script>

The childNodes property returns all child nodes, including non-element nodes like text and comment nodes. To get a collection of element nodes only, use the children property instead.

Example

<div id="main">
    <h1 id="title">My Heading</h1>
    <p id="hint"><span>This is some text.</span></p>
</div>

<script>
var main = document.getElementById("main");

// First check that the element has child nodes 
if(main.hasChildNodes()) {
    var nodes = main.children;
    
    // Loop through node list and display node name
    for(var i = 0; i < nodes.length; i++) {
        alert(nodes[i].nodeName);
    }
}
</script>

#javascript 

Serde Rust: Serialization Framework for Rust

Serde

Serde is a framework for serializing and deserializing Rust data structures efficiently and generically.


Serde in action

Cargo.toml:

[dependencies]

# The core APIs, including the Serialize and Deserialize traits. Always
# required when using Serde. The "derive" feature is only required when
# using #[derive(Serialize, Deserialize)] to make Serde work with structs
# and enums defined in your crate.
serde = { version = "1.0", features = ["derive"] }

# Each data format lives in its own crate; the sample code below uses JSON
# but you may be using a different one.
serde_json = "1.0"

 

use serde::{Serialize, Deserialize};

#[derive(Serialize, Deserialize, Debug)]
struct Point {
    x: i32,
    y: i32,
}

fn main() {
    let point = Point { x: 1, y: 2 };

    // Convert the Point to a JSON string.
    let serialized = serde_json::to_string(&point).unwrap();

    // Prints serialized = {"x":1,"y":2}
    println!("serialized = {}", serialized);

    // Convert the JSON string back to a Point.
    let deserialized: Point = serde_json::from_str(&serialized).unwrap();

    // Prints deserialized = Point { x: 1, y: 2 }
    println!("deserialized = {:?}", deserialized);
}

Getting help

Serde is one of the most widely used Rust libraries, so any place that Rustaceans congregate will be able to help you out. For chat, consider trying the #rust-questions or #rust-beginners channels of the unofficial community Discord (invite: https://discord.gg/rust-lang-community), the #rust-usage or #beginners channels of the official Rust Project Discord (invite: https://discord.gg/rust-lang), or the #general stream in Zulip. For asynchronous, consider the [rust] tag on StackOverflow, the /r/rust subreddit which has a pinned weekly easy questions post, or the Rust Discourse forum. It's acceptable to file a support issue in this repo, but they tend not to get as many eyes as any of the above and may get closed without a response after some time.

Download Details:
Author: serde-rs
Source Code: https://github.com/serde-rs/serde
License: View license

#rust  #rustlang 

Awesome Rust

Serde JSON: JSON Support for Serde Framework

Serde JSON

Serde is a framework for serializing and deserializing Rust data structures efficiently and generically.

[dependencies]
serde_json = "1.0"


JSON is a ubiquitous open-standard format that uses human-readable text to transmit data objects consisting of key-value pairs.

{
    "name": "John Doe",
    "age": 43,
    "address": {
        "street": "10 Downing Street",
        "city": "London"
    },
    "phones": [
        "+44 1234567",
        "+44 2345678"
    ]
}

There are three common ways that you might find yourself needing to work with JSON data in Rust.

  • As text data. An unprocessed string of JSON data that you receive on an HTTP endpoint, read from a file, or prepare to send to a remote server.
  • As an untyped or loosely typed representation. Maybe you want to check that some JSON data is valid before passing it on, but without knowing the structure of what it contains. Or you want to do very basic manipulations like insert a key in a particular spot.
  • As a strongly typed Rust data structure. When you expect all or most of your data to conform to a particular structure and want to get real work done without JSON's loosey-goosey nature tripping you up.

Serde JSON provides efficient, flexible, safe ways of converting data between each of these representations.

Operating on untyped JSON values

Any valid JSON data can be manipulated in the following recursive enum representation. This data structure is serde_json::Value.

enum Value {
    Null,
    Bool(bool),
    Number(Number),
    String(String),
    Array(Vec<Value>),
    Object(Map<String, Value>),
}

A string of JSON data can be parsed into a serde_json::Value by the serde_json::from_str function. There is also from_slice for parsing from a byte slice &[u8] and from_reader for parsing from any io::Read like a File or a TCP stream.

use serde_json::{Result, Value};

fn untyped_example() -> Result<()> {
    // Some JSON input data as a &str. Maybe this comes from the user.
    let data = r#"
        {
            "name": "John Doe",
            "age": 43,
            "phones": [
                "+44 1234567",
                "+44 2345678"
            ]
        }"#;

    // Parse the string of data into serde_json::Value.
    let v: Value = serde_json::from_str(data)?;

    // Access parts of the data by indexing with square brackets.
    println!("Please call {} at the number {}", v["name"], v["phones"][0]);

    Ok(())
}

The result of square bracket indexing like v["name"] is a borrow of the data at that index, so the type is &Value. A JSON map can be indexed with string keys, while a JSON array can be indexed with integer keys. If the type of the data is not right for the type with which it is being indexed, or if a map does not contain the key being indexed, or if the index into a vector is out of bounds, the returned element is Value::Null.

When a Value is printed, it is printed as a JSON string. So in the code above, the output looks like Please call "John Doe" at the number "+44 1234567". The quotation marks appear because v["name"] is a &Value containing a JSON string and its JSON representation is "John Doe". Printing as a plain string without quotation marks involves converting from a JSON string to a Rust string with as_str() or avoiding the use of Value as described in the following section.

The Value representation is sufficient for very basic tasks but can be tedious to work with for anything more significant. Error handling is verbose to implement correctly, for example imagine trying to detect the presence of unrecognized fields in the input data. The compiler is powerless to help you when you make a mistake, for example imagine typoing v["name"] as v["nmae"] in one of the dozens of places it is used in your code.

Parsing JSON as strongly typed data structures

Serde provides a powerful way of mapping JSON data into Rust data structures largely automatically.

use serde::{Deserialize, Serialize};
use serde_json::Result;

#[derive(Serialize, Deserialize)]
struct Person {
    name: String,
    age: u8,
    phones: Vec<String>,
}

fn typed_example() -> Result<()> {
    // Some JSON input data as a &str. Maybe this comes from the user.
    let data = r#"
        {
            "name": "John Doe",
            "age": 43,
            "phones": [
                "+44 1234567",
                "+44 2345678"
            ]
        }"#;

    // Parse the string of data into a Person object. This is exactly the
    // same function as the one that produced serde_json::Value above, but
    // now we are asking it for a Person as output.
    let p: Person = serde_json::from_str(data)?;

    // Do things just like with any other Rust data structure.
    println!("Please call {} at the number {}", p.name, p.phones[0]);

    Ok(())
}

This is the same serde_json::from_str function as before, but this time we assign the return value to a variable of type Person so Serde will automatically interpret the input data as a Person and produce informative error messages if the layout does not conform to what a Person is expected to look like.

Any type that implements Serde's Deserialize trait can be deserialized this way. This includes built-in Rust standard library types like Vec<T> and HashMap<K, V>, as well as any structs or enums annotated with #[derive(Deserialize)].

Once we have p of type Person, our IDE and the Rust compiler can help us use it correctly like they do for any other Rust code. The IDE can autocomplete field names to prevent typos, which was impossible in the serde_json::Value representation. And the Rust compiler can check that when we write p.phones[0], then p.phones is guaranteed to be a Vec<String> so indexing into it makes sense and produces a String.

The necessary setup for using Serde's derive macros is explained on the Using derive page of the Serde site.

Constructing JSON values

Serde JSON provides a json! macro to build serde_json::Value objects with very natural JSON syntax.

use serde_json::json;

fn main() {
    // The type of `john` is `serde_json::Value`
    let john = json!({
        "name": "John Doe",
        "age": 43,
        "phones": [
            "+44 1234567",
            "+44 2345678"
        ]
    });

    println!("first phone number: {}", john["phones"][0]);

    // Convert to a string of JSON and print it out
    println!("{}", john.to_string());
}

The Value::to_string() function converts a serde_json::Value into a String of JSON text.

One neat thing about the json! macro is that variables and expressions can be interpolated directly into the JSON value as you are building it. Serde will check at compile time that the value you are interpolating is able to be represented as JSON.

let full_name = "John Doe";
let age_last_year = 42;

// The type of `john` is `serde_json::Value`
let john = json!({
    "name": full_name,
    "age": age_last_year + 1,
    "phones": [
        format!("+44 {}", random_phone())
    ]
});

This is amazingly convenient, but we have the problem we had before with Value: the IDE and Rust compiler cannot help us if we get it wrong. Serde JSON provides a better way of serializing strongly-typed data structures into JSON text.

Creating JSON by serializing data structures

A data structure can be converted to a JSON string by serde_json::to_string. There is also serde_json::to_vec which serializes to a Vec<u8> and serde_json::to_writer which serializes to any io::Write such as a File or a TCP stream.

use serde::{Deserialize, Serialize};
use serde_json::Result;

#[derive(Serialize, Deserialize)]
struct Address {
    street: String,
    city: String,
}

fn print_an_address() -> Result<()> {
    // Some data structure.
    let address = Address {
        street: "10 Downing Street".to_owned(),
        city: "London".to_owned(),
    };

    // Serialize it to a JSON string.
    let j = serde_json::to_string(&address)?;

    // Print, write to a file, or send to an HTTP server.
    println!("{}", j);

    Ok(())
}

Any type that implements Serde's Serialize trait can be serialized this way. This includes built-in Rust standard library types like Vec<T> and HashMap<K, V>, as well as any structs or enums annotated with #[derive(Serialize)].

Performance

It is fast. You should expect in the ballpark of 500 to 1000 megabytes per second deserialization and 600 to 900 megabytes per second serialization, depending on the characteristics of your data. This is competitive with the fastest C and C++ JSON libraries or even 30% faster for many use cases. Benchmarks live in the serde-rs/json-benchmark repo.

Getting help

Serde is one of the most widely used Rust libraries, so any place that Rustaceans congregate will be able to help you out. For chat, consider trying the #rust-questions or #rust-beginners channels of the unofficial community Discord (invite: https://discord.gg/rust-lang-community), the #rust-usage or #beginners channels of the official Rust Project Discord (invite: https://discord.gg/rust-lang), or the #general stream in Zulip. For asynchronous, consider the [rust] tag on StackOverflow, the /r/rust subreddit which has a pinned weekly easy questions post, or the Rust Discourse forum. It's acceptable to file a support issue in this repo, but they tend not to get as many eyes as any of the above and may get closed without a response after some time.

No-std support

As long as there is a memory allocator, it is possible to use serde_json without the rest of the Rust standard library. This is supported on Rust 1.36+. Disable the default "std" feature and enable the "alloc" feature:

[dependencies]
serde_json = { version = "1.0", default-features = false, features = ["alloc"] }

For JSON support in Serde without a memory allocator, please see the serde-json-core crate.

Link: https://crates.io/crates/serde_json

#rust  #rustlang  #encode   #json 

How to Create a Fake News Detector in Python

Fake News Detection in Python

Explore the fake news dataset, perform data analysis such as word clouds and n-grams, and fine-tune the BERT transformer to build a fake news detector in Python using the transformers library.

Fake news is the intentional broadcasting of false or misleading claims as news, where the statements are purposely deceitful.

Newspapers, tabloids, and magazines have been supplanted by digital news platforms, blogs, social media feeds, and a plethora of mobile news applications. News organizations have benefited from the increased use of social media and mobile platforms by providing subscribers with up-to-the-minute information.

Consumers now have instant access to the latest news. These digital media platforms have grown in prominence because they connect easily to the rest of the world and allow users to discuss and share ideas and debate topics such as democracy, education, health, research, and history. Fake news on digital platforms is getting more and more popular and is used for profit, such as political and financial gain.

How big is this problem?

Because the Internet, social media, and digital platforms are widely used, anybody can propagate inaccurate and biased information. It is almost impossible to prevent the spread of fake news. There has been a tremendous surge in the distribution of false news, which is not restricted to one sector such as politics but includes sports, health, history, entertainment, science, and research.

The solution

It is vital to recognize and differentiate between false and accurate news. One method is to have an expert decide and fact-check every piece of information, but this takes time and requires expertise that cannot be shared. Secondly, we can use machine learning and artificial intelligence tools to automate the identification of fake news.

Online news information includes various unstructured-format data (such as documents, videos, and audio), but we will concentrate on text-format news here. With the progress of machine learning and natural language processing, we can now recognize the misleading and false character of an article or statement.

Several studies and experiments are being conducted to detect fake news across all media.

Our main goal for this tutorial is:

  • Explore and analyze the Fake News dataset.
  • Build a classifier that can distinguish fake news as accurately as possible.

Here is the table of contents:

  • Introduction
  • How big is this problem?
  • The solution
  • Data exploration
    • Class distribution
  • Data cleaning for analysis
  • Exploratory data analysis
    • Single-word cloud
    • Most frequent bigram (two-word combination)
    • Most frequent trigram (three-word combination)
  • Building a classifier by fine-tuning BERT
    • Data preparation
    • Tokenizing the dataset
    • Loading and fine-tuning the model
    • Model evaluation
  • Appendix: creating a submission file for Kaggle
  • Conclusion

Data exploration

In this work, we used the fake news dataset from Kaggle to classify untrustworthy news articles as fake news. We have a complete training dataset containing the following characteristics:

  • id: unique id for a news article
  • title: title of a news article
  • author: author of the news article
  • text: text of the article; could be incomplete
  • label: a label that marks the article as potentially unreliable, denoted by 1 (unreliable or fake) or 0 (reliable).

It is a binary classification problem in which we must predict whether a particular piece of news is reliable or not.

If you have a Kaggle account, you can simply download the dataset from the website and extract the ZIP file.

I have also uploaded the dataset to Google Drive, and you can get it here, or use the gdown library to download it automatically in Google Colab or Jupyter notebooks:

$ pip install gdown
# download from Google Drive
$ gdown "https://drive.google.com/uc?id=178f_VkNxccNidap-5-uffXUW475pAuPy&confirm=t"
Downloading...
From: https://drive.google.com/uc?id=178f_VkNxccNidap-5-uffXUW475pAuPy&confirm=t
To: /content/fake-news.zip
100% 48.7M/48.7M [00:00<00:00, 74.6MB/s]

Unzip the files:

$ unzip fake-news.zip

Three files will appear in the current working directory: train.csv, test.csv, and submit.csv; we will use train.csv for most of the tutorial.

Installing the required dependencies:

$ pip install transformers nltk pandas numpy matplotlib seaborn wordcloud

Note: If you are in a local environment, make sure you install PyTorch for GPU; head to this page for a proper installation.

Let's import the essential libraries for analysis:

import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns

The NLTK corpora and modules must be installed using the standard NLTK downloader:

import nltk
nltk.download('stopwords')
nltk.download('wordnet')

The fake news dataset comprises the titles and text of original and fictitious articles from various authors. Let's import our dataset:

# load the dataset
news_d = pd.read_csv("train.csv")
print("Shape of News data:", news_d.shape)
print("News data columns", news_d.columns)

Output:

 Shape of News data: (20800, 5)
 News data columns Index(['id', 'title', 'author', 'text', 'label'], dtype='object')

Here is what the dataset looks like:

# by using df.head(), we can immediately familiarize ourselves with the dataset. 
news_d.head()

Output:

id	title	author	text	label
0	0	House Dem Aide: We Didn’t Even See Comey’s Let...	Darrell Lucus	House Dem Aide: We Didn’t Even See Comey’s Let...	1
1	1	FLYNN: Hillary Clinton, Big Woman on Campus - ...	Daniel J. Flynn	Ever get the feeling your life circles the rou...	0
2	2	Why the Truth Might Get You Fired	Consortiumnews.com	Why the Truth Might Get You Fired October 29, ...	1
3	3	15 Civilians Killed In Single US Airstrike Hav...	Jessica Purkiss	Videos 15 Civilians Killed In Single US Airstr...	1
4	4	Iranian woman jailed for fictional unpublished...	Howard Portnoy	Print \nAn Iranian woman has been sentenced to...	1

We have 20,800 rows and five columns. Let's see some statistics of the text column:

# Text word statistics: min, mean, max, and interquartile range

txt_length = news_d.text.str.split().str.len()
txt_length.describe()

Output:

count    20761.000000
mean       760.308126
std        869.525988
min          0.000000
25%        269.000000
50%        556.000000
75%       1052.000000
max      24234.000000
Name: text, dtype: float64

Statistics for the title column:

#Title statistics 

title_length = news_d.title.str.split().str.len()
title_length.describe()

Output:

count    20242.000000
mean        12.420709
std          4.098735
min          1.000000
25%         10.000000
50%         13.000000
75%         15.000000
max         72.000000
Name: title, dtype: float64

The statistics for the training and test sets are as follows:

  • The text attribute has a higher word count, with an average of 760 words; 75% of articles have fewer than about 1,050 words.
  • The title attribute is a short statement, with an average of 12 words, and 75% of titles are around 15 words.

Our experiment will use both the text and the title.

Class distribution

Count plots for both labels:

sns.countplot(x="label", data=news_d);
print("1: Unreliable")
print("0: Reliable")
print("Distribution of labels:")
print(news_d.label.value_counts());

Output:

1: Unreliable
0: Reliable
Distribution of labels:
1    10413
0    10387
Name: label, dtype: int64

Label distribution

print(round(news_d.label.value_counts(normalize=True),2)*100);

Output:

1    50.0
0    50.0
Name: label, dtype: float64

The number of unreliable articles (fake, or 1) is 10,413, while the number of trustworthy articles (reliable, or 0) is 10,387. Almost 50% of the articles are fake, so the accuracy metric will measure how well our model performs when building a classifier.

Data cleaning for analysis

In this section, we will clean our dataset to perform some analysis:

  • Drop unused rows and columns.
  • Perform null value imputation.
  • Remove special characters.
  • Remove stop words.
# Constants that are used to sanitize the datasets 

column_n = ['id', 'title', 'author', 'text', 'label']
remove_c = ['id','author']
categorical_features = []
target_col = ['label']
text_f = ['title', 'text']
# Clean Datasets
import nltk
from nltk.corpus import stopwords
import re
from nltk.stem.porter import PorterStemmer
from collections import Counter

ps = PorterStemmer()
wnl = nltk.stem.WordNetLemmatizer()

stop_words = stopwords.words('english')
stopwords_dict = Counter(stop_words)

# Remove unused columns
def remove_unused_c(df,column_n=remove_c):
    df = df.drop(column_n,axis=1)
    return df

# Impute null values with None
def null_process(feature_df):
    for col in text_f:
        feature_df.loc[feature_df[col].isnull(), col] = "None"
    return feature_df

def clean_dataset(df):
    # remove unused column
    df = remove_unused_c(df)
    #impute null values
    df = null_process(df)
    return df

# Cleaning text from unused characters
# (note: str.replace does plain substring replacement, so regex patterns
# must be applied with re.sub instead)
def clean_text(text):
    text = re.sub(r'http[\w:/\.]+', ' ', str(text))  # removing urls
    text = re.sub(r'[^\.\w\s]', ' ', text)  # remove everything but characters and punctuation
    text = re.sub(r'[^a-zA-Z]', ' ', text)
    text = re.sub(r'\s\s+', ' ', text)  # collapse repeated whitespace
    text = text.lower().strip()
    return text

## Nltk Preprocessing include:
# Stop words, Stemming and Lemmetization
# For our project we use only Stop word removal
def nltk_preprocess(text):
    text = clean_text(text)
    wordlist = re.sub(r'[^\w\s]', '', text).split()
    #text = ' '.join([word for word in wordlist if word not in stopwords_dict])
    #text = [ps.stem(word) for word in wordlist if not word in stopwords_dict]
    text = ' '.join([wnl.lemmatize(word) for word in wordlist if word not in stopwords_dict])
    return  text

In the code block above:

  • We imported NLTK, a famous platform for developing Python applications that interact with human language. Next, we import re for regex.
  • We import stop words from nltk.corpus. When working with words, particularly when considering semantics, we sometimes need to eliminate common words that do not add significant meaning to a statement, such as "but", "can", "we", etc.
  • PorterStemmer is used to perform stemming with NLTK. Stemmers strip words of their morphological affixes, leaving only the word stem.
  • We import WordNetLemmatizer() from the NLTK library for lemmatization. Lemmatization is much more effective than stemming. It goes beyond word reduction and evaluates a language's entire lexicon to apply morphological analysis to words, aiming to remove only inflectional endings and return the base or dictionary form of a word, known as the lemma.
  • stopwords.words('english') lets us look at the list of all English stop words supported by NLTK.
  • The remove_unused_c() function is used to remove the unused columns.
  • We impute null values with None using the null_process() function.
  • Inside the clean_dataset() function, we call the remove_unused_c() and null_process() functions. This function is responsible for cleaning the data.
  • To clean the text of unused characters, we created the clean_text() function.
  • For preprocessing, we will use only stop word removal. We created the nltk_preprocess() function for that purpose.

Preprocessing the text and title:

# Perform data cleaning on train and test dataset by calling clean_dataset function
df = clean_dataset(news_d)
# apply preprocessing on text through apply method by calling the function nltk_preprocess
df["text"] = df.text.apply(nltk_preprocess)
# apply preprocessing on title through apply method by calling the function nltk_preprocess
df["title"] = df.title.apply(nltk_preprocess)
# Dataset after cleaning and preprocessing step
df.head()

Output:

title	text	label
0	house dem aide didnt even see comeys letter ja...	house dem aide didnt even see comeys letter ja...	1
1	flynn hillary clinton big woman campus breitbart	ever get feeling life circle roundabout rather...	0
2	truth might get fired	truth might get fired october 29 2016 tension ...	1
3	15 civilian killed single u airstrike identified	video 15 civilian killed single u airstrike id...	1
4	iranian woman jailed fictional unpublished sto...	print iranian woman sentenced six year prison ...	1

Exploratory Data Analysis

In this section, we will perform:

  • Univariate analysis: a statistical analysis of the text. We will use a word cloud for this purpose. A word cloud is a text-data visualization approach in which the most common term is displayed in the largest font size.
  • Bivariate analysis: bigrams and trigrams will be used here. According to Wikipedia: "an n-gram is a contiguous sequence of n items from a given sample of text or speech. The items can be phonemes, syllables, letters, words or base pairs according to the application. The n-grams typically are collected from a text or speech corpus."

Single Word Cloud

The most frequent words appear in a bolder, bigger font in a word cloud. This section will draw a word cloud for all the words in the dataset.

We will use the WordCloud class from the wordcloud library, and its generate() method to produce the word cloud image:

from wordcloud import WordCloud, STOPWORDS
import matplotlib.pyplot as plt

# initialize the word cloud
wordcloud = WordCloud( background_color='black', width=800, height=600)
# generate the word cloud by passing the corpus
text_cloud = wordcloud.generate(' '.join(df['text']))
# plotting the word cloud
plt.figure(figsize=(20,30))
plt.imshow(text_cloud)
plt.axis('off')
plt.show()

Output:

Word cloud for the whole fake news dataset
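The sizing logic behind a word cloud boils down to term frequency; a minimal standard-library sketch on a toy corpus (the real WordCloud class handles layout, colors, and its own stop-word list on top of this):

```python
from collections import Counter

# toy corpus standing in for ' '.join(df['text'])
corpus = "news fake news true news media fake"
freqs = Counter(corpus.split())
# WordCloud scales each term's font size by a count like this;
# 'news' would be drawn largest here
print(freqs.most_common(2))  # [('news', 3), ('fake', 2)]
```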

Word cloud for reliable news only:

true_n = ' '.join(df[df['label']==0]['text']) 
wc = wordcloud.generate(true_n)
plt.figure(figsize=(20,30))
plt.imshow(wc)
plt.axis('off')
plt.show()

Output:

Word cloud for reliable news

Word cloud for fake news only:

fake_n = ' '.join(df[df['label']==1]['text'])
wc= wordcloud.generate(fake_n)
plt.figure(figsize=(20,30))
plt.imshow(wc)
plt.axis('off')
plt.show()

Output:

Word cloud for fake news

Most Frequent Bigram (Two-word Combination)

An N-gram is a sequence of letters or words. A character unigram is made up of a single character, while a bigram comprises a series of two characters. Similarly, word N-grams consist of a sequence of n words. The word "united" is a 1-gram (unigram). The combination of the words "united states" is a 2-gram (bigram), and "new york city" is a 3-gram (trigram).
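nltk.ngrams, which the plotting code uses, is conceptually just a sliding window over the token list; a standard-library-only equivalent:

```python
def ngrams(tokens, n):
    """Yield contiguous n-token windows, like nltk.ngrams."""
    return list(zip(*[tokens[i:] for i in range(n)]))

tokens = "new york city is big".split()
print(ngrams(tokens, 2))
# [('new', 'york'), ('york', 'city'), ('city', 'is'), ('is', 'big')]
print(ngrams(tokens, 3)[0])  # ('new', 'york', 'city')
```

Counting these tuples with pandas' value_counts() (as done below) then gives the top bigrams or trigrams per class.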

Let's plot the most common bigrams in the reliable news:

def plot_top_ngrams(corpus, title, ylabel, xlabel="Number of Occurences", n=2):
  """Utility function to plot top n-grams"""
  true_b = (pd.Series(nltk.ngrams(corpus.split(), n)).value_counts())[:20]
  true_b.sort_values().plot.barh(color='blue', width=.9, figsize=(12, 8))
  plt.title(title)
  plt.ylabel(ylabel)
  plt.xlabel(xlabel)
  plt.show()
plot_top_ngrams(true_n, 'Top 20 Frequently Occuring True news Bigrams', "Bigram", n=2)

Top bigrams in reliable news

The most common bigrams in the fake news:

plot_top_ngrams(fake_n, 'Top 20 Frequently Occuring Fake news Bigrams', "Bigram", n=2)

Top bigrams in fake news

Most Frequent Trigram (Three-word Combination)

The most common trigrams in the reliable news:

plot_top_ngrams(true_n, 'Top 20 Frequently Occuring True news Trigrams', "Trigrams", n=3)

Top trigrams in reliable news

Now for the fake news:

plot_top_ngrams(fake_n, 'Top 20 Frequently Occuring Fake news Trigrams', "Trigrams", n=3)

Most common trigrams in fake news

The plots above give us some idea of what the two classes look like. In the next section, we will use the transformers library to build a fake news detector.

Building a Classifier by Fine-tuning BERT

This section borrows the code extensively from the fine-tuning BERT tutorial to make a fake news classifier using the transformers library. For more detailed information, you can head to the original tutorial.

If you haven't installed the transformers library yet, you should:

$ pip install transformers

Let's import the necessary libraries:

import torch
from transformers.file_utils import is_tf_available, is_torch_available, is_torch_tpu_available
from transformers import BertTokenizerFast, BertForSequenceClassification
from transformers import Trainer, TrainingArguments
import numpy as np
from sklearn.model_selection import train_test_split

import random

We want to make our results reproducible even if we restart our environment:

def set_seed(seed: int):
    """
    Helper function for reproducible behavior to set the seed in ``random``, ``numpy``, ``torch`` and/or ``tf`` (if
    installed).

    Args:
        seed (:obj:`int`): The seed to set.
    """
    random.seed(seed)
    np.random.seed(seed)
    if is_torch_available():
        torch.manual_seed(seed)
        torch.cuda.manual_seed_all(seed)
        # ^^ safe to call this function even if cuda is not available
    if is_tf_available():
        import tensorflow as tf

        tf.random.set_seed(seed)

set_seed(1)

The model we are going to use is bert-base-uncased:

# the model we gonna train, base uncased BERT
# check text classification models here: https://huggingface.co/models?filter=text-classification
model_name = "bert-base-uncased"
# max sequence length for each document/sentence sample
max_length = 512

Loading the tokenizer:

# load the tokenizer
tokenizer = BertTokenizerFast.from_pretrained(model_name, do_lower_case=True)

Data Preparation

Let's now drop the NaN values from the text, author, and title columns:

news_df = news_d[news_d['text'].notna()]
news_df = news_df[news_df["author"].notna()]
news_df = news_df[news_df["title"].notna()]

Next, let's make a function that takes the dataset as a Pandas dataframe and returns the train/validation splits of the texts and labels as lists:

def prepare_data(df, test_size=0.2, include_title=True, include_author=True):
  texts = []
  labels = []
  for i in range(len(df)):
    text = df["text"].iloc[i]
    label = df["label"].iloc[i]
    if include_title:
      text = df["title"].iloc[i] + " - " + text
    if include_author:
      text = df["author"].iloc[i] + " : " + text
    if text and label in [0, 1]:
      texts.append(text)
      labels.append(label)
  return train_test_split(texts, labels, test_size=test_size)

train_texts, valid_texts, train_labels, valid_labels = prepare_data(news_df)

The function above takes the dataset as a dataframe and returns it as lists split into training and validation sets. Setting include_title to True means we add the title column to the text we're going to use for training, and setting include_author to True means we add the author to the text as well.
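In isolation, the string assembly that prepare_data() performs looks like this (the sample values here are made up for illustration):

```python
def build_sample(author, title, text, include_title=True, include_author=True):
    """Mirror of the concatenation done inside prepare_data()."""
    if include_title:
        text = title + " - " + text
    if include_author:
        text = author + " : " + text
    return text

print(build_sample("Jane Doe", "Some Headline", "Body of the article."))
# Jane Doe : Some Headline - Body of the article.
```

So with both flags on, every training sample the model sees has the shape "author : title - text".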

Let's make sure the labels and texts have the same length:

print(len(train_texts), len(train_labels))
print(len(valid_texts), len(valid_labels))

Output:

14628 14628
3657 3657

Tokenizing the Dataset

Let's use the BERT tokenizer to tokenize our dataset:

# tokenize the dataset, truncate when passed `max_length`, 
# and pad with 0's when less than `max_length`
train_encodings = tokenizer(train_texts, truncation=True, padding=True, max_length=max_length)
valid_encodings = tokenizer(valid_texts, truncation=True, padding=True, max_length=max_length)
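What truncation=True and padding=True do to the sequence lengths can be sketched in plain Python; note that the real tokenizer also adds special tokens ([CLS], [SEP]) and an attention mask, which this toy version omits:

```python
def pad_or_truncate(ids, max_length, pad_id=0):
    """Cut sequences longer than max_length; right-pad shorter ones with pad_id."""
    return ids[:max_length] + [pad_id] * max(0, max_length - len(ids))

print(pad_or_truncate([101, 7592, 2088, 102], 6))  # [101, 7592, 2088, 102, 0, 0]
print(pad_or_truncate(list(range(10)), 4))         # [0, 1, 2, 3]
```

With max_length=512, any article longer than 512 tokens is silently cut, which is one reason prepending the title and author helps: the most informative part of the sample survives truncation.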

Converting the encodings into a PyTorch dataset:

class NewsGroupsDataset(torch.utils.data.Dataset):
    def __init__(self, encodings, labels):
        self.encodings = encodings
        self.labels = labels

    def __getitem__(self, idx):
        item = {k: torch.tensor(v[idx]) for k, v in self.encodings.items()}
        item["labels"] = torch.tensor([self.labels[idx]])
        return item

    def __len__(self):
        return len(self.labels)

# convert our tokenized data into a torch Dataset
train_dataset = NewsGroupsDataset(train_encodings, train_labels)
valid_dataset = NewsGroupsDataset(valid_encodings, valid_labels)

Loading and Fine-tuning the Model

We will use BertForSequenceClassification to load our BERT transformer model:

# load the model
model = BertForSequenceClassification.from_pretrained(model_name, num_labels=2)

We set num_labels to 2 since this is a binary classification task. The function below is a callback to compute the accuracy at each validation step:

from sklearn.metrics import accuracy_score

def compute_metrics(pred):
  labels = pred.label_ids
  preds = pred.predictions.argmax(-1)
  # calculate accuracy using sklearn's function
  acc = accuracy_score(labels, preds)
  return {
      'accuracy': acc,
  }
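The argmax-over-logits step inside compute_metrics() can be reproduced without NumPy or sklearn, using made-up logits:

```python
def row_argmax(logits):
    """Index of the largest value in each row, like .argmax(-1) on a 2-D array."""
    return [max(range(len(row)), key=row.__getitem__) for row in logits]

# toy model outputs: one [reliable, fake] logit pair per sample
logits = [[0.1, 2.3], [1.5, 0.2], [0.0, 0.9]]
labels = [1, 0, 0]
preds = row_argmax(logits)  # [1, 0, 1]
accuracy = sum(p == l for p, l in zip(preds, labels)) / len(labels)  # 2 of 3 correct
```

accuracy_score(labels, preds) in the callback computes exactly this fraction of matching predictions.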

Let's initialize the training arguments:

training_args = TrainingArguments(
    output_dir='./results',          # output directory
    num_train_epochs=1,              # total number of training epochs
    per_device_train_batch_size=10,  # batch size per device during training
    per_device_eval_batch_size=20,   # batch size for evaluation
    warmup_steps=100,                # number of warmup steps for learning rate scheduler
    logging_dir='./logs',            # directory for storing logs
    load_best_model_at_end=True,     # load the best model when finished training (default metric is loss)
    # but you can specify `metric_for_best_model` argument to change to accuracy or other metric
    logging_steps=200,               # log & save weights each logging_steps
    save_steps=200,
    evaluation_strategy="steps",     # evaluate each `logging_steps`
)

I've set per_device_train_batch_size to 10, but you should set it as high as your GPU can possibly fit. Setting logging_steps and save_steps to 200 means we perform evaluation and save the model weights every 200 training steps.
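These settings determine the step counts you'll see in the training log: with the 14,628 training samples reported earlier and a batch size of 10, one epoch is 1,463 optimization steps, evaluated every 200 steps:

```python
import math

num_samples = 14628      # training set size from the split above
batch_size = 10          # per_device_train_batch_size
epochs = 1               # num_train_epochs
logging_steps = 200      # also evaluation interval, since evaluation_strategy="steps"

steps_per_epoch = math.ceil(num_samples / batch_size)
total_steps = steps_per_epoch * epochs
evaluations = total_steps // logging_steps

print(total_steps, evaluations)  # 1463 7
```

That matches the "Total optimization steps = 1463" line in the log and the seven evaluation rows at steps 200 through 1400.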

You can check this page for more detailed information on the available training arguments.

Let's instantiate the trainer:

trainer = Trainer(
    model=model,                         # the instantiated Transformers model to be trained
    args=training_args,                  # training arguments, defined above
    train_dataset=train_dataset,         # training dataset
    eval_dataset=valid_dataset,          # evaluation dataset
    compute_metrics=compute_metrics,     # the callback that computes metrics of interest
)

Training the model:

# train the model
trainer.train()

Training takes a few hours to finish, depending on your GPU. If you're on the free version of Colab, it should take about an hour with an NVIDIA Tesla K80. Here is the output:

***** Running training *****
  Num examples = 14628
  Num Epochs = 1
  Instantaneous batch size per device = 10
  Total train batch size (w. parallel, distributed & accumulation) = 10
  Gradient Accumulation steps = 1
  Total optimization steps = 1463
 [1463/1463 41:07, Epoch 1/1]
Step	Training Loss	Validation Loss	Accuracy
200		0.250800		0.100533		0.983867
400		0.027600		0.043009		0.993437
600		0.023400		0.017812		0.997539
800		0.014900		0.030269		0.994258
1000	0.022400		0.012961		0.998086
1200	0.009800		0.010561		0.998633
1400	0.007700		0.010300		0.998633
***** Running Evaluation *****
  Num examples = 3657
  Batch size = 20
Saving model checkpoint to ./results/checkpoint-200
Configuration saved in ./results/checkpoint-200/config.json
Model weights saved in ./results/checkpoint-200/pytorch_model.bin
<SNIPPED>
***** Running Evaluation *****
  Num examples = 3657
  Batch size = 20
Saving model checkpoint to ./results/checkpoint-1400
Configuration saved in ./results/checkpoint-1400/config.json
Model weights saved in ./results/checkpoint-1400/pytorch_model.bin

Training completed. Do not forget to share your model on huggingface.co/models =)

Loading best model from ./results/checkpoint-1400 (score: 0.010299865156412125).
TrainOutput(global_step=1463, training_loss=0.04888018785440506, metrics={'train_runtime': 2469.1722, 'train_samples_per_second': 5.924, 'train_steps_per_second': 0.593, 'total_flos': 3848788517806080.0, 'train_loss': 0.04888018785440506, 'epoch': 1.0})

Model Evaluation

Since load_best_model_at_end is set to True, the best weights will be loaded once training is complete. Let's evaluate it on our validation set:

# evaluate the current model after training
trainer.evaluate()

Output:

***** Running Evaluation *****
  Num examples = 3657
  Batch size = 20
 [183/183 02:11]
{'epoch': 1.0,
 'eval_accuracy': 0.998632759092152,
 'eval_loss': 0.010299865156412125,
 'eval_runtime': 132.0374,
 'eval_samples_per_second': 27.697,
 'eval_steps_per_second': 1.386}

Saving the model and tokenizer:

# saving the fine tuned model & tokenizer
model_path = "fake-news-bert-base-uncased"
model.save_pretrained(model_path)
tokenizer.save_pretrained(model_path)

A new folder containing the model configuration and weights will appear after running the cell above. If you want to perform a prediction, you simply use the from_pretrained() method we used when loading the model, and you're good to go.

Next, let's make a function that accepts the article text as an argument and returns whether it's fake or not:

def get_prediction(text, convert_to_label=False):
    # prepare our text into tokenized sequence
    inputs = tokenizer(text, padding=True, truncation=True, max_length=max_length, return_tensors="pt").to("cuda")
    # perform inference to our model
    outputs = model(**inputs)
    # get output probabilities by doing softmax
    probs = outputs[0].softmax(1)
    # executing argmax function to get the candidate label
    d = {
        0: "reliable",
        1: "fake"
    }
    if convert_to_label:
      return d[int(probs.argmax())]
    else:
      return int(probs.argmax())
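The softmax-then-argmax at the heart of get_prediction() can be written out with the standard library (the logits here are made up; the real ones come from the model's output):

```python
import math

def softmax(logits):
    """Convert raw logits to probabilities that sum to 1."""
    exps = [math.exp(x) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# toy [reliable, fake] logits, heavily favoring class 1 ("fake")
probs = softmax([-1.2, 3.4])
pred = max(range(len(probs)), key=probs.__getitem__)  # argmax
print(pred, round(probs[1], 3))  # 1 0.99
```

outputs[0].softmax(1) does the same thing row-wise on the model's logit tensor, and probs.argmax() picks the index of the winning class.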

I've taken an example from test.csv, which the model has never seen, to perform inference on. I checked it, and it's a real article from The New York Times:

real_news = """
Tim Tebow Will Attempt Another Comeback, This Time in Baseball - The New York Times",Daniel Victor,"If at first you don’t succeed, try a different sport. Tim Tebow, who was a Heisman   quarterback at the University of Florida but was unable to hold an N. F. L. job, is pursuing a career in Major League Baseball. <SNIPPED>
"""

The original text is in the Colab environment if you'd like to copy it, as it's a full article. Let's pass it to the model and see the results:

get_prediction(real_news, convert_to_label=True)

Output:

reliable

Appendix: Creating a Submission File for Kaggle

In this section, we will predict all the articles in test.csv to create a submission file and see our accuracy on the test set in the Kaggle competition:

# read the test set
test_df = pd.read_csv("test.csv")
# make a copy of the testing set
new_df = test_df.copy()
# add a new column that contains the author, title and article content
new_df["new_text"] = new_df["author"].astype(str) + " : " + new_df["title"].astype(str) + " - " + new_df["text"].astype(str)
# get the prediction of all the test set
new_df["label"] = new_df["new_text"].apply(get_prediction)
# make the submission file
final_df = new_df[["id", "label"]]
final_df.to_csv("submit_final.csv", index=False)

After concatenating the author, title, and article text together, we apply the get_prediction() function to the new column to fill the label column; we then use the to_csv() method to create the submission file for Kaggle. Here is my submission score:

Submission score

We got 99.78% and 100% accuracy on the private and public leaderboards. That's awesome!

Conclusion

Alright, we're done with the tutorial. You can check this page to see various training arguments you can tweak.

If you have a custom fake news dataset for fine-tuning, you simply have to pass a list of samples to the tokenizer as we did; you won't have to change any other code after that.

Check the full code here, or the Colab environment here.

Como construir um detector de notícias falsas em Python

Explorando o conjunto de dados de notícias falsas, realizando análises de dados, como nuvens de palavras e ngrams, e ajustando o transformador BERT para construir um detector de notícias falsas em Python usando a biblioteca de transformadores.

Fake news é a transmissão intencional de alegações falsas ou enganosas como notícias, onde as declarações são propositalmente enganosas.

Jornais, tablóides e revistas foram suplantados por plataformas de notícias digitais, blogs, feeds de mídia social e uma infinidade de aplicativos de notícias móveis. As organizações de notícias se beneficiaram do aumento do uso de mídias sociais e plataformas móveis, fornecendo aos assinantes informações atualizadas.

Os consumidores agora têm acesso instantâneo às últimas notícias. Essas plataformas de mídia digital ganharam destaque devido à sua fácil conexão com o resto do mundo e permitem aos usuários discutir e compartilhar ideias e debater temas como democracia, educação, saúde, pesquisa e história. As notícias falsas nas plataformas digitais estão cada vez mais populares e são usadas para fins lucrativos, como ganhos políticos e financeiros.

Quão Grande é este Problema?

Como a Internet, as mídias sociais e as plataformas digitais são amplamente utilizadas, qualquer pessoa pode propagar informações imprecisas e tendenciosas. É quase impossível evitar a disseminação de notícias falsas. Há um tremendo aumento na distribuição de notícias falsas, que não se restringe a um setor como a política, mas inclui esportes, saúde, história, entretenimento, ciência e pesquisa.

A solução

É vital reconhecer e diferenciar entre notícias falsas e verdadeiras. Um método é fazer com que um especialista decida e verifique cada informação, mas isso leva tempo e requer conhecimentos que não podem ser compartilhados. Em segundo lugar, podemos usar ferramentas de aprendizado de máquina e inteligência artificial para automatizar a identificação de notícias falsas.

As informações de notícias on-line incluem vários dados de formato não estruturado (como documentos, vídeos e áudio), mas vamos nos concentrar nas notícias em formato de texto aqui. Com o progresso do aprendizado de máquina e do processamento de linguagem natural , agora podemos reconhecer o caráter enganoso e falso de um artigo ou declaração.

Vários estudos e experimentos estão sendo realizados para detectar notícias falsas em todos os meios.

Nosso principal objetivo deste tutorial é:

  • Explore e analise o conjunto de dados de Fake News.
  • Construa um classificador que possa distinguir Fake news com o máximo de precisão possível.

Aqui está a tabela de conteúdo:

  • Introdução
  • Quão Grande é este Problema?
  • A solução
  • Exploração de dados
    • Distribuição de aulas
  • Limpeza de dados para análise
  • Análise Explorativa de Dados
    • Nuvem de palavra única
    • Bigrama mais frequente (combinação de duas palavras)
    • Trigrama mais frequente (combinação de três palavras)
  • Construindo um classificador ajustando o BERT
    • Preparação de dados
    • Tokenização do conjunto de dados
    • Carregando e Ajustando o Modelo
    • Avaliação do modelo
  • Apêndice: Criando um arquivo de envio para o Kaggle
  • Conclusão

Exploração de dados

Neste trabalho, utilizamos o conjunto de dados de notícias falsas do Kaggle para classificar notícias não confiáveis ​​como notícias falsas. Temos um conjunto de dados de treinamento completo contendo as seguintes características:

  • id: ID exclusivo para um artigo de notícias
  • title: título de uma notícia
  • author: autor da reportagem
  • text: texto do artigo; pode estar incompleto
  • label: um rótulo que marca o artigo como potencialmente não confiável indicado por 1 (não confiável ou falso) ou 0 (confiável).

É um problema de classificação binária no qual devemos prever se uma determinada notícia é confiável ou não.

Se você tiver uma conta Kaggle, basta baixar o conjunto de dados do site e extrair o arquivo ZIP.

Também carreguei o conjunto de dados no Google Drive, e você pode obtê-lo aqui ou usar a gdownbiblioteca para baixá-lo automaticamente nos notebooks do Google Colab ou Jupyter:

$ pip install gdown
# download from Google Drive
$ gdown "https://drive.google.com/uc?id=178f_VkNxccNidap-5-uffXUW475pAuPy&confirm=t"
Downloading...
From: https://drive.google.com/uc?id=178f_VkNxccNidap-5-uffXUW475pAuPy&confirm=t
To: /content/fake-news.zip
100% 48.7M/48.7M [00:00<00:00, 74.6MB/s]

Descompactando os arquivos:

$ unzip fake-news.zip

Três arquivos aparecerão no diretório de trabalho atual: train.csv, test.csv, e submit.csv, que usaremos train.csvna maior parte do tutorial.

Instalando as dependências necessárias:

$ pip install transformers nltk pandas numpy matplotlib seaborn wordcloud

Nota: Se você estiver em um ambiente local, certifique-se de instalar o PyTorch para GPU, vá para esta página para uma instalação adequada.

Vamos importar as bibliotecas essenciais para análise:

import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns

Os corpora e módulos NLTK devem ser instalados usando o downloader NLTK padrão:

import nltk
nltk.download('stopwords')
nltk.download('wordnet')

O conjunto de dados de notícias falsas inclui títulos e textos de artigos originais e fictícios de vários autores. Vamos importar nosso conjunto de dados:

# load the dataset
news_d = pd.read_csv("train.csv")
print("Shape of News data:", news_d.shape)
print("News data columns", news_d.columns)

Saída:

 Shape of News data: (20800, 5)
 News data columns Index(['id', 'title', 'author', 'text', 'label'], dtype='object')

Veja como fica o conjunto de dados:

# by using df.head(), we can immediately familiarize ourselves with the dataset. 
news_d.head()

Saída:

id	title	author	text	label
0	0	House Dem Aide: We Didn’t Even See Comey’s Let...	Darrell Lucus	House Dem Aide: We Didn’t Even See Comey’s Let...	1
1	1	FLYNN: Hillary Clinton, Big Woman on Campus - ...	Daniel J. Flynn	Ever get the feeling your life circles the rou...	0
2	2	Why the Truth Might Get You Fired	Consortiumnews.com	Why the Truth Might Get You Fired October 29, ...	1
3	3	15 Civilians Killed In Single US Airstrike Hav...	Jessica Purkiss	Videos 15 Civilians Killed In Single US Airstr...	1
4	4	Iranian woman jailed for fictional unpublished...	Howard Portnoy	Print \nAn Iranian woman has been sentenced to...	1

Temos 20.800 linhas, que têm cinco colunas. Vamos ver algumas estatísticas da textcoluna:

#Text Word startistics: min.mean, max and interquartile range

txt_length = news_d.text.str.split().str.len()
txt_length.describe()

Saída:

count    20761.000000
mean       760.308126
std        869.525988
min          0.000000
25%        269.000000
50%        556.000000
75%       1052.000000
max      24234.000000
Name: text, dtype: float64

Estatísticas da titlecoluna:

#Title statistics 

title_length = news_d.title.str.split().str.len()
title_length.describe()

Saída:

count    20242.000000
mean        12.420709
std          4.098735
min          1.000000
25%         10.000000
50%         13.000000
75%         15.000000
max         72.000000
Name: title, dtype: float64

As estatísticas para os conjuntos de treinamento e teste são as seguintes:

  • O textatributo possui maior contagem de palavras com média de 760 palavras e 75% com mais de 1000 palavras.
  • O titleatributo é uma declaração curta com uma média de 12 palavras, sendo que 75% delas são em torno de 15 palavras.

Nosso experimento seria com texto e título juntos.

Distribuição de aulas

Contando parcelas para ambos os rótulos:

sns.countplot(x="label", data=news_d);
print("1: Unreliable")
print("0: Reliable")
print("Distribution of labels:")
print(news_d.label.value_counts());

Saída:

1: Unreliable
0: Reliable
Distribution of labels:
1    10413
0    10387
Name: label, dtype: int64

Distribuição de rótulos

print(round(news_d.label.value_counts(normalize=True),2)*100);

Saída:

1    50.0
0    50.0
Name: label, dtype: float64

O número de artigos não confiáveis ​​(falsos ou 1) é 10.413, enquanto o número de artigos confiáveis ​​(confiáveis ​​ou 0) é 10.387. Quase 50% dos artigos são falsos. Portanto, a métrica de precisão medirá o desempenho do nosso modelo ao construir um classificador.

Limpeza de dados para análise

Nesta seção, vamos limpar nosso conjunto de dados para fazer algumas análises:

  • Elimine linhas e colunas não utilizadas.
  • Execute a imputação de valor nulo.
  • Remova os caracteres especiais.
  • Remova palavras de parada.
# Constants that are used to sanitize the datasets 

column_n = ['id', 'title', 'author', 'text', 'label']
remove_c = ['id','author']
categorical_features = []
target_col = ['label']
text_f = ['title', 'text']
# Clean Datasets
import nltk
from nltk.corpus import stopwords
import re
from nltk.stem.porter import PorterStemmer
from collections import Counter

ps = PorterStemmer()
wnl = nltk.stem.WordNetLemmatizer()

stop_words = stopwords.words('english')
stopwords_dict = Counter(stop_words)

# Removed unused clumns
def remove_unused_c(df,column_n=remove_c):
    df = df.drop(column_n,axis=1)
    return df

# Impute null values with None
def null_process(feature_df):
    for col in text_f:
        feature_df.loc[feature_df[col].isnull(), col] = "None"
    return feature_df

def clean_dataset(df):
    # remove unused column
    df = remove_unused_c(df)
    #impute null values
    df = null_process(df)
    return df

# Cleaning text from unused characters
def clean_text(text):
    text = str(text).replace(r'http[\w:/\.]+', ' ')  # removing urls
    text = str(text).replace(r'[^\.\w\s]', ' ')  # remove everything but characters and punctuation
    text = str(text).replace('[^a-zA-Z]', ' ')
    text = str(text).replace(r'\s\s+', ' ')
    text = text.lower().strip()
    #text = ' '.join(text)    
    return text

## Nltk Preprocessing include:
# Stop words, Stemming and Lemmetization
# For our project we use only Stop word removal
def nltk_preprocess(text):
    text = clean_text(text)
    wordlist = re.sub(r'[^\w\s]', '', text).split()
    #text = ' '.join([word for word in wordlist if word not in stopwords_dict])
    #text = [ps.stem(word) for word in wordlist if not word in stopwords_dict]
    text = ' '.join([wnl.lemmatize(word) for word in wordlist if word not in stopwords_dict])
    return  text

No bloco de código acima:

  • Importamos o NLTK, que é uma famosa plataforma de desenvolvimento de aplicativos Python que interagem com a linguagem humana. Em seguida, importamos repara regex.
  • Importamos palavras irrelevantes de nltk.corpus. Ao trabalhar com palavras, principalmente ao considerar a semântica, às vezes precisamos eliminar palavras comuns que não adicionam nenhum significado significativo a uma declaração, como "but", "can", "we", etc.
  • PorterStemmeré usado para executar palavras derivadas com NLTK. Stemmers retiram palavras de seus afixos morfológicos, deixando apenas o radical da palavra.
  • Importamos WordNetLemmatizer()da biblioteca NLTK para lematização. A lematização é muito mais eficaz do que a derivação . Ele vai além da redução de palavras e avalia todo o léxico de uma língua para aplicar a análise morfológica às palavras, com o objetivo de apenas remover as extremidades flexionais e retornar a forma base ou dicionário de uma palavra, conhecida como lema.
  • stopwords.words('english')permite-nos ver a lista de todas as palavras de parada em inglês suportadas pelo NLTK.
  • remove_unused_c()A função é usada para remover as colunas não utilizadas.
  • Nós imputamos valores nulos Noneusando a null_process()função.
  • Dentro da função clean_dataset(), chamamos remove_unused_c()e null_process()funções. Esta função é responsável pela limpeza dos dados.
  • Para limpar o texto de caracteres não utilizados, criamos a clean_text()função.
  • Para pré-processamento, usaremos apenas a remoção de palavras de parada. Criamos a nltk_preprocess()função para isso.

Pré-processando o texte title:

# Perform data cleaning on train and test dataset by calling clean_dataset function
df = clean_dataset(news_d)
# apply preprocessing on text through apply method by calling the function nltk_preprocess
df["text"] = df.text.apply(nltk_preprocess)
# apply preprocessing on title through apply method by calling the function nltk_preprocess
df["title"] = df.title.apply(nltk_preprocess)
# Dataset after cleaning and preprocessing step
df.head()

Saída:

title	text	label
0	house dem aide didnt even see comeys letter ja...	house dem aide didnt even see comeys letter ja...	1
1	flynn hillary clinton big woman campus breitbart	ever get feeling life circle roundabout rather...	0
2	truth might get fired	truth might get fired october 29 2016 tension ...	1
3	15 civilian killed single u airstrike identified	video 15 civilian killed single u airstrike id...	1
4	iranian woman jailed fictional unpublished sto...	print iranian woman sentenced six year prison ...	1

Análise Explorativa de Dados

Nesta seção, vamos realizar:

  • Análise Univariada : É uma análise estatística do texto. Usaremos a nuvem de palavras para esse propósito. Uma nuvem de palavras é uma abordagem de visualização de dados de texto em que o termo mais comum é apresentado no tamanho de fonte mais considerável.
  • Análise Bivariada : Bigrama e Trigrama serão usados ​​aqui. Segundo a Wikipedia: " um n-grama é uma sequência contígua de n itens de uma determinada amostra de texto ou fala. De acordo com a aplicação, os itens podem ser fonemas, sílabas, letras, palavras ou pares de bases. Os n-gramas são normalmente coletados de um texto ou corpus de fala".

Nuvem de palavra única

As palavras mais frequentes aparecem em negrito e fonte maior em uma nuvem de palavras. Esta seção realizará uma nuvem de palavras para todas as palavras no conjunto de dados.

A função da biblioteca WordCloudwordcloud() será usada, e o generate()é utilizado para gerar a imagem da nuvem de palavras:

from wordcloud import WordCloud, STOPWORDS
import matplotlib.pyplot as plt

# initialize the word cloud
wordcloud = WordCloud( background_color='black', width=800, height=600)
# generate the word cloud by passing the corpus
text_cloud = wordcloud.generate(' '.join(df['text']))
# plotting the word cloud
plt.figure(figsize=(20,30))
plt.imshow(text_cloud)
plt.axis('off')
plt.show()

Saída:

WordCloud para todos os dados de notícias falsas

Nuvem de palavras apenas para notícias confiáveis:

true_n = ' '.join(df[df['label']==0]['text']) 
wc = wordcloud.generate(true_n)
plt.figure(figsize=(20,30))
plt.imshow(wc)
plt.axis('off')
plt.show()

Saída:

Word Cloud para notícias confiáveis

Nuvem de palavras apenas para notícias falsas:

fake_n = ' '.join(df[df['label']==1]['text'])
wc= wordcloud.generate(fake_n)
plt.figure(figsize=(20,30))
plt.imshow(wc)
plt.axis('off')
plt.show()

Saída:

Nuvem de palavras para notícias falsas

Bigrama mais frequente (combinação de duas palavras)

Um N-gram é uma sequência de letras ou palavras. Um unigrama de caractere é composto por um único caractere, enquanto um bigrama compreende uma série de dois caracteres. Da mesma forma, os N-gramas de palavras são compostos de uma série de n palavras. A palavra "unidos" é um 1 grama (unigrama). A combinação das palavras "estado unido" é um 2 gramas (bigrama), "nova york cidade" é um 3 gramas.

Vamos traçar o bigrama mais comum nas notícias confiáveis:

def plot_top_ngrams(corpus, title, ylabel, xlabel="Number of Occurences", n=2):
  """Utility function to plot top n-grams"""
  true_b = (pd.Series(nltk.ngrams(corpus.split(), n)).value_counts())[:20]
  true_b.sort_values().plot.barh(color='blue', width=.9, figsize=(12, 8))
  plt.title(title)
  plt.ylabel(ylabel)
  plt.xlabel(xlabel)
  plt.show()
plot_top_ngrams(true_n, 'Top 20 Frequently Occurring True News Bigrams', "Bigram", n=2)

Top bigrams in reliable news

The most common bigrams in the fake news:

plot_top_ngrams(fake_n, 'Top 20 Frequently Occurring Fake News Bigrams', "Bigram", n=2)

Top bigrams in fake news

Most Frequent Trigrams (Three-Word Combinations)

The most common trigrams in reliable news:

plot_top_ngrams(true_n, 'Top 20 Frequently Occurring True News Trigrams', "Trigrams", n=3)

Top trigrams in reliable news

Now for the fake news:

plot_top_ngrams(fake_n, 'Top 20 Frequently Occurring Fake News Trigrams', "Trigrams", n=3)

Top trigrams in fake news

The plots above give us some idea of what the two classes look like. In the next section, we will use the transformers library to build a fake news detector.

Building a Classifier by Fine-Tuning BERT

This section borrows code extensively from the fine-tuning BERT tutorial to make a fake news classifier using the transformers library. For more detailed information, you can head to the original tutorial.

If you haven't installed transformers, you should:

$ pip install transformers

Let's import the required libraries:

import torch
from transformers.file_utils import is_tf_available, is_torch_available, is_torch_tpu_available
from transformers import BertTokenizerFast, BertForSequenceClassification
from transformers import Trainer, TrainingArguments
import numpy as np
from sklearn.model_selection import train_test_split

import random

We want to make our results reproducible even if we restart our environment:

def set_seed(seed: int):
    """
    Helper function for reproducible behavior to set the seed in ``random``, ``numpy``, ``torch`` and/or ``tf`` (if
    installed).

    Args:
        seed (:obj:`int`): The seed to set.
    """
    random.seed(seed)
    np.random.seed(seed)
    if is_torch_available():
        torch.manual_seed(seed)
        torch.cuda.manual_seed_all(seed)
        # ^^ safe to call this function even if cuda is not available
    if is_tf_available():
        import tensorflow as tf

        tf.random.set_seed(seed)

set_seed(1)

The model we will use is bert-base-uncased:

# the model we're going to train, base uncased BERT
# check text classification models here: https://huggingface.co/models?filter=text-classification
model_name = "bert-base-uncased"
# max sequence length for each document/sentence sample
max_length = 512

Loading the tokenizer:

# load the tokenizer
tokenizer = BertTokenizerFast.from_pretrained(model_name, do_lower_case=True)

Data Preparation

Let's now drop the NaN values from the text, author, and title columns:

news_df = news_d[news_d['text'].notna()]
news_df = news_df[news_df["author"].notna()]
news_df = news_df[news_df["title"].notna()]

Next, we create a function that takes the dataset as a Pandas dataframe and returns the train/validation splits of texts and labels as lists:

def prepare_data(df, test_size=0.2, include_title=True, include_author=True):
  texts = []
  labels = []
  for i in range(len(df)):
    text = df["text"].iloc[i]
    label = df["label"].iloc[i]
    if include_title:
      text = df["title"].iloc[i] + " - " + text
    if include_author:
      text = df["author"].iloc[i] + " : " + text
    if text and label in [0, 1]:
      texts.append(text)
      labels.append(label)
  return train_test_split(texts, labels, test_size=test_size)

train_texts, valid_texts, train_labels, valid_labels = prepare_data(news_df)

The function above takes the dataset as a dataframe and returns it as lists of texts and labels split into training and validation sets. Setting include_title to True means we prepend the title column to the text we use for training, and setting include_author to True means we also prepend the author to the text.
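To make the effect of those flags concrete, here is the same concatenation applied to a single hypothetical row (the title, author, and body values are made up):

```python
# made-up row values, mirroring how prepare_data assembles the input text
title, author, text = "Some Headline", "Jane Doe", "Article body..."

combined = text
combined = title + " - " + combined   # include_title=True
combined = author + " : " + combined  # include_author=True
print(combined)  # Jane Doe : Some Headline - Article body...
```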

Let's make sure the labels and texts have the same length:

print(len(train_texts), len(train_labels))
print(len(valid_texts), len(valid_labels))

Output:

14628 14628
3657 3657

Tokenizing the Dataset

Let's use the BERT tokenizer to tokenize our dataset:

# tokenize the dataset, truncate when passed `max_length`, 
# and pad with 0's when less than `max_length`
train_encodings = tokenizer(train_texts, truncation=True, padding=True, max_length=max_length)
valid_encodings = tokenizer(valid_texts, truncation=True, padding=True, max_length=max_length)

Converting the encodings into a PyTorch dataset:

class NewsGroupsDataset(torch.utils.data.Dataset):
    def __init__(self, encodings, labels):
        self.encodings = encodings
        self.labels = labels

    def __getitem__(self, idx):
        item = {k: torch.tensor(v[idx]) for k, v in self.encodings.items()}
        item["labels"] = torch.tensor([self.labels[idx]])
        return item

    def __len__(self):
        return len(self.labels)

# convert our tokenized data into a torch Dataset
train_dataset = NewsGroupsDataset(train_encodings, train_labels)
valid_dataset = NewsGroupsDataset(valid_encodings, valid_labels)

Loading and Fine-Tuning the Model

We will use BertForSequenceClassification to load our BERT transformer model:

# load the model
model = BertForSequenceClassification.from_pretrained(model_name, num_labels=2)

We set num_labels to 2 since this is a binary classification. The function below is a callback to compute the accuracy at each validation step:

from sklearn.metrics import accuracy_score

def compute_metrics(pred):
  labels = pred.label_ids
  preds = pred.predictions.argmax(-1)
  # calculate accuracy using sklearn's function
  acc = accuracy_score(labels, preds)
  return {
      'accuracy': acc,
  }
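As a quick sanity check of that logic, here is the same argmax-and-compare computation on a hypothetical batch of logits (numpy only, no Trainer object needed):

```python
import numpy as np

# made-up logits for four validation samples, replicating the
# argmax-and-compare logic inside compute_metrics above
label_ids = np.array([0, 1, 1, 0])
predictions = np.array([[2.0, 0.1],   # argmax -> 0 (correct)
                        [0.2, 1.5],   # argmax -> 1 (correct)
                        [1.1, 0.3],   # argmax -> 0 (wrong, label is 1)
                        [3.0, 0.5]])  # argmax -> 0 (correct)

preds = predictions.argmax(-1)
accuracy = (preds == label_ids).mean()
print(accuracy)  # 0.75
```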

Let's initialize the training parameters:

training_args = TrainingArguments(
    output_dir='./results',          # output directory
    num_train_epochs=1,              # total number of training epochs
    per_device_train_batch_size=10,  # batch size per device during training
    per_device_eval_batch_size=20,   # batch size for evaluation
    warmup_steps=100,                # number of warmup steps for learning rate scheduler
    logging_dir='./logs',            # directory for storing logs
    load_best_model_at_end=True,     # load the best model when finished training (default metric is loss)
    # but you can specify `metric_for_best_model` argument to change to accuracy or other metric
    logging_steps=200,               # log & save weights each logging_steps
    save_steps=200,
    evaluation_strategy="steps",     # evaluate each `logging_steps`
)

I set per_device_train_batch_size to 10, but you should set it as high as your GPU can fit. logging_steps and save_steps are set to 200, which means we perform evaluation and save the model weights every 200 training steps.

You can check this page for more detailed information about the available training parameters.

Let's instantiate the trainer:

trainer = Trainer(
    model=model,                         # the instantiated Transformers model to be trained
    args=training_args,                  # training arguments, defined above
    train_dataset=train_dataset,         # training dataset
    eval_dataset=valid_dataset,          # evaluation dataset
    compute_metrics=compute_metrics,     # the callback that computes metrics of interest
)

Training the model:

# train the model
trainer.train()

Training takes a few hours to finish, depending on your GPU. If you're on the free version of Colab, it should take about an hour with an NVIDIA Tesla K80. Here is the output:

***** Running training *****
  Num examples = 14628
  Num Epochs = 1
  Instantaneous batch size per device = 10
  Total train batch size (w. parallel, distributed & accumulation) = 10
  Gradient Accumulation steps = 1
  Total optimization steps = 1463
 [1463/1463 41:07, Epoch 1/1]
Step	Training Loss	Validation Loss	Accuracy
200		0.250800		0.100533		0.983867
400		0.027600		0.043009		0.993437
600		0.023400		0.017812		0.997539
800		0.014900		0.030269		0.994258
1000	0.022400		0.012961		0.998086
1200	0.009800		0.010561		0.998633
1400	0.007700		0.010300		0.998633
***** Running Evaluation *****
  Num examples = 3657
  Batch size = 20
Saving model checkpoint to ./results/checkpoint-200
Configuration saved in ./results/checkpoint-200/config.json
Model weights saved in ./results/checkpoint-200/pytorch_model.bin
<SNIPPED>
***** Running Evaluation *****
  Num examples = 3657
  Batch size = 20
Saving model checkpoint to ./results/checkpoint-1400
Configuration saved in ./results/checkpoint-1400/config.json
Model weights saved in ./results/checkpoint-1400/pytorch_model.bin

Training completed. Do not forget to share your model on huggingface.co/models =)

Loading best model from ./results/checkpoint-1400 (score: 0.010299865156412125).
TrainOutput(global_step=1463, training_loss=0.04888018785440506, metrics={'train_runtime': 2469.1722, 'train_samples_per_second': 5.924, 'train_steps_per_second': 0.593, 'total_flos': 3848788517806080.0, 'train_loss': 0.04888018785440506, 'epoch': 1.0})

Model Evaluation

Since load_best_model_at_end is set to True, the best weights will be loaded once training completes. Let's evaluate it on our validation set:

# evaluate the current model after training
trainer.evaluate()

Output:

***** Running Evaluation *****
  Num examples = 3657
  Batch size = 20
 [183/183 02:11]
{'epoch': 1.0,
 'eval_accuracy': 0.998632759092152,
 'eval_loss': 0.010299865156412125,
 'eval_runtime': 132.0374,
 'eval_samples_per_second': 27.697,
 'eval_steps_per_second': 1.386}

Saving the model and tokenizer:

# saving the fine tuned model & tokenizer
model_path = "fake-news-bert-base-uncased"
model.save_pretrained(model_path)
tokenizer.save_pretrained(model_path)

A new folder containing the model configuration and weights will appear after running the cell above. If you want to perform prediction later, you simply use the from_pretrained() method we used when we loaded the model, and you're good to go.
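As a sketch of that workflow, reloading the saved folder for later inference could look like the following (the load_finetuned helper is hypothetical; it assumes the fake-news-bert-base-uncased directory produced by the cell above, and device="cuda" requires a GPU, otherwise pass "cpu"):

```python
from transformers import BertTokenizerFast, BertForSequenceClassification

def load_finetuned(model_path="fake-news-bert-base-uncased", device="cuda"):
    """Reload the fine-tuned model and its tokenizer from disk."""
    model = BertForSequenceClassification.from_pretrained(model_path).to(device)
    tokenizer = BertTokenizerFast.from_pretrained(model_path)
    return model, tokenizer
```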

Next, let's make a function that accepts the article text as an argument and returns whether it's fake or not:

def get_prediction(text, convert_to_label=False):
    # prepare our text into tokenized sequence
    inputs = tokenizer(text, padding=True, truncation=True, max_length=max_length, return_tensors="pt").to("cuda")
    # perform inference to our model
    outputs = model(**inputs)
    # get output probabilities by doing softmax
    probs = outputs[0].softmax(1)
    # executing argmax function to get the candidate label
    d = {
        0: "reliable",
        1: "fake"
    }
    if convert_to_label:
      return d[int(probs.argmax())]
    else:
      return int(probs.argmax())

I took an example from test.csv, which the model has never seen, to perform inference on. I checked it, and it's a real article from The New York Times:

real_news = """
Tim Tebow Will Attempt Another Comeback, This Time in Baseball - The New York Times",Daniel Victor,"If at first you don’t succeed, try a different sport. Tim Tebow, who was a Heisman   quarterback at the University of Florida but was unable to hold an N. F. L. job, is pursuing a career in Major League Baseball. <SNIPPED>
"""

The original text is in the Colab environment in case you want to copy it, as it's a full article. Let's pass it to the model and see the results:

get_prediction(real_news, convert_to_label=True)

Output:

reliable

Appendix: Making a Submission File for Kaggle

In this section, we will predict on all the articles in test.csv to create a submission file and see our accuracy on the test set of the Kaggle competition:

# read the test set
test_df = pd.read_csv("test.csv")
# make a copy of the testing set
new_df = test_df.copy()
# add a new column that contains the author, title and article content
new_df["new_text"] = new_df["author"].astype(str) + " : " + new_df["title"].astype(str) + " - " + new_df["text"].astype(str)
# get the prediction of all the test set
new_df["label"] = new_df["new_text"].apply(get_prediction)
# make the submission file
final_df = new_df[["id", "label"]]
final_df.to_csv("submit_final.csv", index=False)

After concatenating the author, title, and article text together, we apply the get_prediction() function to the new column to fill the label column, and then we use the to_csv() method to create the submission file for Kaggle. Here's my submission score:

Submission score

We got 99.78% and 100% accuracy on the private and public leaderboards respectively. That's awesome!

Conclusion

Alright, we're done with the tutorial. You can check this page to see the various training parameters you can tweak.

If you have a custom fake news dataset for fine-tuning, you simply have to pass a list of samples to the tokenizer as we did; you won't need to change any other code after that.

Check the full code here, or the Colab environment here.