**Using Transformer models has never been simpler!** That's what **Simple Transformers** author Thilina Rajapakse says, and I agree with him; so should you. You might have seen lengthy code, hundreds of lines, to implement transformer models such as BERT, RoBERTa, etc. Once you understand how to use **Simple Transformers**, you will see how easy and simple it is to work with transformer models.
The **Simple Transformers** library is built on top of the Hugging Face **Transformers** library. Hugging Face Transformers provides state-of-the-art general-purpose architectures (BERT, GPT-2, RoBERTa, XLM, DistilBERT, XLNet, T5, etc.) for Natural Language Understanding (NLU) and Natural Language Generation (NLG), with more than a thousand pre-trained models covering 100+ languages.
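To get a feel for what Hugging Face Transformers offers under the hood, here is a tiny illustration (not part of this tutorial's code) that loads one of those pre-trained models through the library's high-level pipeline API:

# Illustrative only: load a default pre-trained sentiment model via Hugging Face Transformers
from transformers import pipeline
classifier = pipeline("sentiment-analysis")
print(classifier("Using Transformer models has never been simpler!"))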
Simple Transformers can be used for Text Classification, Named Entity Recognition, Question Answering, Language Modelling, etc. In this article, we will solve a binary classification problem with Simple Transformers on the NLP with Disaster Tweets dataset from Kaggle. Let's get started.
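Before diving into the data, here is a minimal sketch of what binary classification with Simple Transformers looks like end to end. The model choice ("roberta", "roberta-base"), the toy DataFrame, and the use_cuda flag are illustrative placeholders, not the final training code:

import pandas as pd
from simpletransformers.classification import ClassificationModel

# Simple Transformers expects a DataFrame with 'text' and 'labels' columns
train_df = pd.DataFrame(
    [["Forest fire near La Ronge Sask. Canada", 1],
     ["What a lovely day at the beach", 0]],
    columns=["text", "labels"],
)

# Placeholder model choice; use_cuda=False keeps the sketch CPU-only
model = ClassificationModel("roberta", "roberta-base", use_cuda=False)
model.train_model(train_df)

predictions, raw_outputs = model.predict(["There is a flood warning for our area"])
print(predictions)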
Download the dataset from Kaggle to Colab. From your Kaggle profile, navigate to **My Account > API** and click **Create New API Token.** This downloads the **kaggle.json** file. Once you have this file, run the code below. During execution, it will prompt you to upload a JSON file; upload the **kaggle.json** file there.
# Upload the kaggle.json file when prompted
from google.colab import files
files.upload()

# Install the Kaggle CLI and move the API token to the expected location
!pip install -q kaggle
!mkdir -p ~/.kaggle
!cp kaggle.json ~/.kaggle/
!chmod 600 ~/.kaggle/kaggle.json

# Download the competition data
!kaggle competitions download -c nlp-getting-started
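Once the download finishes, the competition files can be extracted and inspected. A small sketch, assuming the default archive name nlp-getting-started.zip with train.csv and test.csv inside:

import zipfile
import pandas as pd

# Extract the Kaggle archive and load the CSVs
with zipfile.ZipFile("nlp-getting-started.zip") as z:
    z.extractall(".")

train = pd.read_csv("train.csv")
test = pd.read_csv("test.csv")
print(train.shape, train.columns.tolist())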