Introduction to PyTorch-Transformers: An Incredible Library for State-of-the-Art NLP (with Python code)

Originally published by Mohd Sanad Zaki Rizvi at analyticsvidhya.com on July 18, 2019

Overview

  • In this article, we look at PyTorch-Transformers, the latest state-of-the-art NLP library
  • We will also use PyTorch-Transformers in Python with popular NLP models like Google’s BERT and OpenAI’s GPT-2!
  • This has the potential to revolutionize the landscape of NLP as we know it

Introduction

“NLP’s ImageNet moment has arrived.” – Sebastian Ruder

Imagine having the power to build the Natural Language Processing (NLP) model that powers Google Translate. What if I told you this can be done using just a few lines of code in Python? Sounds like an incredibly exciting opportunity.

Well – we can now do this sitting in front of our own machines! The latest state-of-the-art NLP release is called PyTorch-Transformers by the folks at HuggingFace. This PyTorch-Transformers library was actually released just yesterday and I’m thrilled to present my first impressions along with the Python code.

Harnessing this research on your own would have taken years, some of the best minds, and extensive resources. And we get to simply import it in Python and experiment with it. What a time to be alive!

I am truly astonished at the speed of research and development in NLP nowadays. Every new paper/framework/library just pushes the boundary of this incredibly powerful field. And due to the open culture of research around AI and large amounts of freely available text data, there is almost nothing that we can’t do today.

Now, I can’t stress enough the impact that PyTorch-Transformers will have on the research community as well as the NLP industry. I believe this has the potential to revolutionize the landscape of NLP as we know it.

Table of Contents

  1. Demystifying State-of-the-Art in NLP
  2. What is PyTorch-Transformers?
  3. Installing PyTorch-Transformers on your Machine
  4. Predicting the next word using GPT-2
  5. Natural Language Generation
  6. GPT-2
  7. XLNet
  8. Transformer-XL
  9. Training a Masked Language Model for BERT
  10. Analytics Vidhya’s Take on PyTorch-Transformers

Demystifying State-of-the-Art in NLP

Essentially, Natural Language Processing is about teaching computers to understand the intricacies of human language.

Before we get into the technical details of PyTorch-Transformers, let’s quickly revisit the very concept on which the library is built – NLP. We’ll also understand what state-of-the-art means as that will set the context for the article.

Here are a few things that you need to know before we start with PyTorch-Transformers:

  • State-of-the-Art means an algorithm or a technique that currently achieves the best results on the standard benchmarks for a task. Many of these algorithms were pioneered by research teams at giants like Google, Facebook, Microsoft, and Amazon
  • NLP has many well-defined tasks that researchers are studying to create intelligent techniques to solve them. Some of the most popular tasks are Language Translation, Text Summarization, Question Answering systems, etc.
  • Deep Learning techniques like Recurrent Neural Networks (RNNs), Sequence-to-Sequence models, Attention, and Word Embeddings (GloVe, Word2Vec) were previously the State-of-the-Art for NLP tasks
  • These techniques have been superseded by the Transformer architecture, which is behind almost all of the current State-of-the-Art NLP models

Note: This article is going to be full of Transformers, so I’d highly recommend brushing up on the Transformer architecture first if you need a quick refresher.

What is PyTorch-Transformers?

PyTorch-Transformers is a library of state-of-the-art pre-trained models for Natural Language Processing (NLP).

I have taken this section from PyTorch-Transformers’ documentation. This library currently contains PyTorch implementations, pre-trained model weights, usage scripts and conversion utilities for the following models:

  1. BERT (from Google) released with the paper BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
  2. GPT (from OpenAI) released with the paper Improving Language Understanding by Generative Pre-Training
  3. GPT-2 (from OpenAI) released with the paper Language Models are Unsupervised Multitask Learners
  4. Transformer-XL (from Google/CMU) released with the paper Transformer-XL: Attentive Language Models Beyond a Fixed-Length Context
  5. XLNet (from Google/CMU) released with the paper XLNet: Generalized Autoregressive Pretraining for Language Understanding
  6. XLM (from Facebook) released together with the paper Cross-lingual Language Model Pretraining

All of the above models are the best in class for various NLP tasks. Some of them were released just a month before this article!

Most of the State-of-the-Art models require tons of training data and days of training on expensive GPU hardware which is something only the big technology companies and research labs can afford. But with the launch of PyTorch-Transformers, now anyone can utilize the power of State-of-the-Art models!
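Every architecture in this list follows the same pattern: a tokenizer class paired with a model class, both loaded through from_pretrained. The short sketch below is just an illustration of how little changes when you swap architectures once the library is installed (see the next section); 'bert-base-uncased' and 'gpt2' are the standard pre-trained identifiers shipped with the library:

from pytorch_transformers import BertTokenizer, BertModel, GPT2Tokenizer, GPT2Model

# The same two-step recipe works for every architecture in the library:
# load the matching tokenizer, then load the pre-trained weights
for tokenizer_cls, model_cls, weights in [
        (BertTokenizer, BertModel, 'bert-base-uncased'),
        (GPT2Tokenizer, GPT2Model, 'gpt2')]:
    tokenizer = tokenizer_cls.from_pretrained(weights)   # downloads the vocabulary files
    model = model_cls.from_pretrained(weights)           # downloads the model weights
    print('Loaded', model_cls.__name__, 'with', weights, 'weights')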

Installing PyTorch-Transformers on your Machine

Installing PyTorch-Transformers is pretty straightforward in Python. You can just use pip install:

pip install pytorch-transformers

or if you are working on Colab:

!pip install pytorch-transformers

Since most of these models are GPU heavy, I would suggest working with Google Colab for this article.

Note: The code in this article is written using the PyTorch framework.
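A quick way to confirm that the installation worked is to import the library and print its version (this assumes the package exposes a __version__ attribute, which the 1.0 release does):

import torch
import pytorch_transformers

# If both imports succeed, the installation is fine
print('PyTorch version:', torch.__version__)
print('PyTorch-Transformers version:', pytorch_transformers.__version__)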

Predicting the next word using GPT-2

Because PyTorch-Transformers supports many NLP models that are trained for Language Modelling, it easily allows for natural language generation tasks like sentence completion.

In February 2019, OpenAI created quite a storm with the release of a new transformer-based language model called GPT-2. GPT-2 is a generative language model that was trained on 40GB of curated text from the internet.

Since it is trained in an unsupervised manner, it simply learns to predict the most likely sequence of tokens (i.e. words and sub-words) that follows a given prompt, based on the patterns it learned to recognize during training.

Let’s build our own sentence completion model using GPT-2. We’ll try to predict the next word in the sentence:

what is the fastest car in the _________

I chose this example because this is the first suggestion that Google’s text completion gives. Here is the code for doing the same:

# Import required libraries
import torch
from pytorch_transformers import GPT2Tokenizer, GPT2LMHeadModel

# Load pre-trained model tokenizer (vocabulary)
tokenizer = GPT2Tokenizer.from_pretrained('gpt2')

# Encode a text input
text = "What is the fastest car in the"
indexed_tokens = tokenizer.encode(text)

# Convert indexed tokens into a PyTorch tensor
tokens_tensor = torch.tensor([indexed_tokens])

# Load pre-trained model (weights)
model = GPT2LMHeadModel.from_pretrained('gpt2')

# Set the model in evaluation mode to deactivate the DropOut modules
model.eval()

# If you have a GPU, put everything on cuda
device = 'cuda' if torch.cuda.is_available() else 'cpu'
tokens_tensor = tokens_tensor.to(device)
model.to(device)

# Predict all tokens
with torch.no_grad():
    outputs = model(tokens_tensor)
    predictions = outputs[0]

# Get the predicted next sub-word
predicted_index = torch.argmax(predictions[0, -1, :]).item()
predicted_text = tokenizer.decode(indexed_tokens + [predicted_index])

# Print the predicted text
print(predicted_text)

The code is straightforward. We tokenize and index the text as a sequence of numbers and pass it to the GPT2LMHeadModel. This is nothing but the GPT2 model transformer with a language modeling head on top (linear layer with weights tied to the input embeddings).

Awesome! The model successfully predicts the next word as “world”. This is pretty amazing as this is what Google was suggesting. I recommend you try this model with different input sentences and see how it performs while predicting the next word in a sentence.
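If you are curious about what else the model considered, you can look at the top few candidates instead of only the single best one. This is a small extension of the snippet above and assumes the predictions tensor and tokenizer from it are still in scope:

# Inspect the 5 most likely next tokens instead of just the argmax
top_k = torch.topk(predictions[0, -1, :], k=5)
for score, index in zip(top_k.values, top_k.indices):
    token = tokenizer.decode([index.item()])
    print(repr(token), 'with score', round(score.item(), 2))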

Natural Language Generation using GPT-2, Transformer-XL and XLNet

Let’s take Text Generation to the next level now. Instead of predicting only the next word, we will generate a paragraph of text based on the given input. Let’s see what output our models give for the following input text:

In a shocking finding, scientist discovered a herd of unicorns living in a remote, previously unexplored valley, in the Andes Mountains. Even more surprising to the researchers was the fact that the unicorns spoke perfect English.

We will be using the readymade script that PyTorch-Transformers provides for this task. Let’s clone their repository first:

!git clone https://github.com/huggingface/pytorch-transformers.git


GPT-2

Now, you just need a single command to start the model!

!python pytorch-transformers/examples/run_generation.py --model_type=gpt2 --length=100 --model_name_or_path=gpt2

Let’s see what output our GPT-2 model gives for the input text:

The unicorns had seemed to know each other almost as well as they did common humans. The study was published in Science Translational Medicine on May 6. What’s more, researchers found that five percent of the unicorns recognized each other well. The study team thinks this might translate into a future where humans would be able to communicate more clearly with those known as super Unicorns. And if we’re going to move ahead with that future, we’ve got to do it at least a

Isn’t that crazy? The text that the model generated is very coherent and could actually be mistaken for a real news article.

XLNet

XLNet integrates ideas from Transformer-XL, the state-of-the-art autoregressive model, into pretraining. Empirically, XLNet outperforms BERT on 20 tasks, often by a large margin. XLNet achieves state-of-the-art results on 18 tasks including question answering, natural language inference, sentiment analysis, and document ranking.

You can use the following code for the same:

!python pytorch-transformers/examples/run_generation.py --model_type=xlnet --length=50 --model_name_or_path=xlnet-base-cased

This is the output that XLNet gives:

St. Nicholas was located in the valley in Chile. And, they were familiar with the southern part of Spain. Since 1988, people had lived in the valley, for many years. Even without a natural shelter, people were getting a temporary shelter. Some of the unicorns were acquainted with the Spanish language, but the rest were completely unfamiliar with English. But, they were also finding relief in the valley.<eop> Bioinfo < The Bioinfo website has an open, live community about the

Interesting. While the GPT-2 model focused directly on the scientific angle of the news about unicorns, XLNet nicely built up the context and subtly introduced the topic of unicorns. Let’s see how Transformer-XL performs!

Transformer-XL

Transformer networks are limited by a fixed-length context and thus can be improved through learning longer-term dependency. That’s why Google proposed a novel method called Transformer-XL (meaning extra long) for language modeling, which enables a Transformer architecture to learn longer-term dependency.

Transformer-XL is up to 1,800+ times faster than a vanilla Transformer during evaluation.
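To get a feel for how that longer-term memory works in practice, here is a small sketch that scores a second text segment while reusing the memory (mems) returned for the first segment. The output ordering and the mems keyword reflect my reading of the pytorch-transformers docs, so treat this as an illustrative sketch rather than the official recipe:

import torch
from pytorch_transformers import TransfoXLTokenizer, TransfoXLLMHeadModel

tokenizer = TransfoXLTokenizer.from_pretrained('transfo-xl-wt103')
model = TransfoXLLMHeadModel.from_pretrained('transfo-xl-wt103')
model.eval()

# Two consecutive segments of text
tokens_1 = torch.tensor([tokenizer.convert_tokens_to_ids(tokenizer.tokenize("Who was Jim Henson ?"))])
tokens_2 = torch.tensor([tokenizer.convert_tokens_to_ids(tokenizer.tokenize("Jim Henson was a"))])

with torch.no_grad():
    # First segment: the model returns predictions plus its memory (mems)
    predictions_1, mems_1 = model(tokens_1)[:2]
    # Second segment: pass the memory so the model can also attend to the first segment
    predictions_2, mems_2 = model(tokens_2, mems=mems_1)[:2]

# Most likely next word after the second segment, conditioned on both segments
predicted_index = torch.argmax(predictions_2[0, -1, :]).item()
print(tokenizer.convert_ids_to_tokens([predicted_index])[0])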

You can use the below code to run Transformer-XL:

!python pytorch-transformers/examples/run_generation.py --model_type=transfo-xl --length=100 --model_name_or_path=transfo-xl-wt103

Here’s the text generated:

both never spoke in their native language ( a natural language ). If they are speaking in their native language they will have no communication with the original speakers. The encounter with a dingo brought between two and four unicorns to a head at once, thus crossing the border into Peru to avoid internecine warfare, as they did with the Aztecs. On September 11, 1930, three armed robbers killed a donkey for helping their fellow soldiers fight alongside a group of Argentines. During the same year

Now, this is awesome. It is interesting to see how different models focus on different aspects of the input text to generate further. This variation is due to a lot of factors but mostly can be attributed to different training data and model architectures.
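If you would rather not rely on the bundled script, the same kind of free-form generation can be done directly in Python with a simple sampling loop. The sketch below is my own minimal version rather than the library's official recipe; the temperature value and the number of generated tokens are arbitrary choices:

import torch
from pytorch_transformers import GPT2Tokenizer, GPT2LMHeadModel

tokenizer = GPT2Tokenizer.from_pretrained('gpt2')
model = GPT2LMHeadModel.from_pretrained('gpt2')
model.eval()

prompt = "In a shocking finding, scientist discovered a herd of unicorns"
generated = tokenizer.encode(prompt)

with torch.no_grad():
    for _ in range(100):                        # generate 100 new tokens
        input_ids = torch.tensor([generated])
        logits = model(input_ids)[0]            # shape: (1, sequence_length, vocab_size)
        next_logits = logits[0, -1, :] / 0.7    # temperature scaling (0.7 is arbitrary)
        probs = torch.softmax(next_logits, dim=-1)
        next_token = torch.multinomial(probs, num_samples=1).item()
        generated.append(next_token)

print(tokenizer.decode(generated))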

But there’s a caveat. Neural text generation has faced a bit of backlash recently, as people worry it could amplify problems like fake news. But think about the positive side! It can power plenty of beneficial applications, like helping writers and creatives brainstorm new ideas.

Training a Masked Language Model for BERT

The BERT framework, a new language representation model from Google AI, uses pre-training and fine-tuning to create state-of-the-art NLP models for a wide range of tasks. These tasks include question answering systems, sentiment analysis, and language inference.

BERT is pre-trained using the following two unsupervised prediction tasks:

  1. Masked Language Modeling (MLM)
  2. Next Sentence Prediction

And you can implement both of these using PyTorch-Transformers. In fact, you can build your own BERT model from scratch or fine-tune a pre-trained version. So, let’s see how we can implement the Masked Language Model for BERT.

Problem Definition

Let’s formally define our problem statement:

Given an input sequence, we will randomly mask some words. The model then should predict the original value of the masked words, based on the context provided by the other, non-masked, words in the sequence.

So why are we doing this? The model learns the rules of the language during the training process. And we’ll soon see how effective this process is.

First, let’s prepare a tokenized input from a text string using BertTokenizer:

import torch
from pytorch_transformers import BertTokenizer, BertModel, BertForMaskedLM

# Load pre-trained model tokenizer (vocabulary)
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')

# Tokenize input
text = "[CLS] Who was Jim Henson ? [SEP] Jim Henson was a puppeteer [SEP]"
tokenized_text = tokenizer.tokenize(text)

This is how our text looks after tokenization. Notice that BERT’s WordPiece tokenizer lowercases the text and splits “puppeteer” into two sub-word tokens:

['[CLS]', 'who', 'was', 'jim', 'henson', '?', '[SEP]', 'jim', 'henson', 'was', 'a', 'puppet', '##eer', '[SEP]']

The next step is to mask one of the tokens, convert the sequence into vocabulary indices, and create PyTorch tensors so we can feed them directly to the model:

# Mask a token that we will try to predict back with BertForMaskedLM
masked_index = 8
tokenized_text[masked_index] = '[MASK]'
assert tokenized_text == ['[CLS]', 'who', 'was', 'jim', 'henson', '?', '[SEP]', 'jim', '[MASK]', 'was', 'a', 'puppet', '##eer', '[SEP]']

# Convert tokens to vocabulary indices
indexed_tokens = tokenizer.convert_tokens_to_ids(tokenized_text)

# Define sentence A and B indices associated to 1st and 2nd sentences (see the BERT paper)
segments_ids = [0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1]

# Convert inputs to PyTorch tensors
tokens_tensor = torch.tensor([indexed_tokens])
segments_tensors = torch.tensor([segments_ids])

Notice that we have set [MASK] at index 8 of the sequence, which is the second occurrence of the word ‘henson’. This is what our model will try to predict.

Now that our data is properly pre-processed for BERT, let’s use BertForMaskedLM to predict the masked token:

# Load pre-trained model (weights)
model = BertForMaskedLM.from_pretrained('bert-base-uncased')
model.eval()

# If you have a GPU, put everything on cuda
device = 'cuda' if torch.cuda.is_available() else 'cpu'
tokens_tensor = tokens_tensor.to(device)
segments_tensors = segments_tensors.to(device)
model.to(device)

# Predict all tokens
with torch.no_grad():
    outputs = model(tokens_tensor, token_type_ids=segments_tensors)
    predictions = outputs[0]

# Confirm that we were able to predict 'henson'
predicted_index = torch.argmax(predictions[0, masked_index]).item()
predicted_token = tokenizer.convert_ids_to_tokens([predicted_index])[0]
assert predicted_token == 'henson'
print('Predicted token is:', predicted_token)

Let’s see the output of our model:

Predicted token is: henson

That’s quite impressive.

This was a small demo of masked language modeling on a single input sequence using a pre-trained model. Nevertheless, masked language modeling is a crucial part of the training process for many Transformer-based architectures, because it enables bidirectional training, which was previously impossible.
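For completeness, the second pre-training task we listed earlier, Next Sentence Prediction, can be tried in much the same way. The sketch below uses BertForNextSentencePrediction and reuses the tokens_tensor and segments_tensors we built above; the interpretation of the two output scores (index 0 meaning sentence B really follows sentence A) is taken from the BERT paper, so double-check it against the library docs before relying on it:

from pytorch_transformers import BertForNextSentencePrediction

# Reuse tokens_tensor and segments_tensors from the Masked LM example above
nsp_model = BertForNextSentencePrediction.from_pretrained('bert-base-uncased')
nsp_model.eval()
nsp_model.to(tokens_tensor.device)

with torch.no_grad():
    # Shape: (batch_size, 2) - one score for "IsNext" and one for "NotNext"
    seq_relationship_scores = nsp_model(tokens_tensor, token_type_ids=segments_tensors)[0]

# Index 0 corresponds to "sentence B follows sentence A"
print('Sentence B follows sentence A:', torch.argmax(seq_relationship_scores).item() == 0)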

Congratulations! You’ve just implemented your first Masked Language Model! If you were trying to train BERT, you’ve just finished half the work. This example should have given you a good idea of how to use PyTorch-Transformers to work with the BERT model.

Analytics Vidhya’s take on PyTorch-Transformers

In this article, we implemented and explored various State-of-the-Art NLP models like BERT, GPT-2, Transformer-XL, and XLNet using PyTorch-Transformers. This was more of a first impressions experiment to give you a good intuition for how to work with this amazing library.

Here are 6 compelling reasons why I think you would love this library:

  1. Pre-trained models: It provides pre-trained models for 6 State-of-the-Art NLP architectures and pre-trained weights for 27 variations of these models
  2. Preprocessing and Finetuning API: PyTorch-Transformers doesn’t stop at pre-trained weights. It also provides a simple API for doing all the preprocessing and finetuning steps required for these models. Now, if you have read recent research papers, you’d know many of the State-of-the-Art models have unique ways of preprocessing the data and a lot of times it becomes a hassle to write code for the entire preprocessing pipeline
  3. Usage scripts: It also comes with scripts to run these models against benchmark NLP datasets like SQuAD 2.0 (Stanford Question Answering Dataset) and GLUE (General Language Understanding Evaluation). By using PyTorch-Transformers, you can directly run your model against these datasets and evaluate the performance accordingly
  4. Multilingual: PyTorch-Transformers has multilingual support. This is because some of the models already work well for multiple languages
  5. TensorFlow Compatibility: You can import TensorFlow checkpoints as models in PyTorch
  6. BERTology: There is a growing field of study concerned with investigating the inner working of large-scale transformers like BERT (that some call “BERTology”)

Have you ever implemented State-of-the-Art models like BERT and GPT-2? What’s your first take on PyTorch-Transformers? Let’s discuss in the comments section below.

