tf-transformers: faster and easier state-of-the-art NLP in TensorFlow 2.0
tf-transformers is designed to harness the full power of TensorFlow 2, making it much faster and simpler compared to existing TensorFlow-based NLP architectures. On average, there is an 80% improvement over existing TensorFlow-based libraries on text generation and other tasks. You can find more details in the Benchmarks section.
Most NLP downstream tasks can be integrated into Transformer-based models with ease. All models can be trained using tf.keras.Model.fit, which supports GPU, multi-GPU, and TPU, or using model.compile2 (refer to the examples or the blog). A minimal distributed-training sketch follows.
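Since models in tf-transformers are standard Keras models, distributed training follows the usual TensorFlow pattern. Below is a minimal sketch using a plain Keras model as a stand-in for a tf-transformers model (the actual model loading and downstream heads are covered in the tutorial notebooks):

import tensorflow as tf

# MirroredStrategy handles single-machine multi-GPU; use TPUStrategy for TPU.
strategy = tf.distribute.MirroredStrategy()

with strategy.scope():
    # Placeholder model: in practice this would be a tf-transformers model
    # with a downstream head attached (see the tutorial notebooks).
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dense(2),
    ])
    model.compile(
        optimizer=tf.keras.optimizers.Adam(3e-5),
        loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    )

# Any tf.data.Dataset of (features, labels) batches works with model.fit.
dataset = tf.data.Dataset.from_tensor_slices(
    (tf.random.normal((256, 64)),
     tf.random.uniform((256,), maxval=2, dtype=tf.int32))
).batch(32)

model.fit(dataset, epochs=1)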
Evaluating performance benchmarks is trickier. I evaluated tf-transformers primarily on text-generation tasks with GPT2 small and T5 small against the amazing HuggingFace, as it is the go-to library for NLP right now. Text-generation tasks require efficient caching of past Key and Value pairs to be fast.
On average, tf-transformers is 80% faster than the HuggingFace TensorFlow implementation, and in most cases it is comparable to or faster than PyTorch.
The evaluation is based on an average of 5 runs with different batch_size, beams, sequence_length, etc., so there is quite a large number of combinations when it comes to beam and top-k decoding. The figures show 10 randomly chosen samples, but you can see the full code and figures in the repo. A sketch of the timing setup follows the links below.
Codes to reproduce GPT2 benchmark experiments
Codes to reproduce T5 benchmark experiments
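For reference, here is a minimal sketch of how one side of such a text-generation benchmark can be timed, using the HuggingFace TensorFlow API for illustration; the exact batch sizes, beam widths, and sequence lengths used in the linked notebooks may differ:

import time
from transformers import GPT2Tokenizer, TFGPT2LMHeadModel

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = TFGPT2LMHeadModel.from_pretrained("gpt2")

# One batch of identical prompts (same length, so no padding is needed).
texts = ["Machine learning is"] * 32
inputs = tokenizer(texts, return_tensors="tf")

# use_cache=True reuses past Key/Value pairs during auto-regressive decoding;
# efficient caching is exactly what tf-transformers optimises.
start = time.time()
outputs = model.generate(
    inputs["input_ids"],
    max_length=64,
    num_beams=3,
    use_cache=True,
)
print("batch decode time:", time.time() - start, "seconds")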
I am providing some basic tutorials here, which cover the basics of tf-transformers and how to use it for downstream tasks. All/most tutorials have the following structure:
Start by converting HuggingFace models (base models only) to tf-transformers models, fine-tune them on a downstream task, and then deploy them using tf.saved_model in production + pipelines. A minimal save/load sketch is shown below.
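As a rough illustration of the deployment step, here is a minimal sketch of exporting a trained Keras model as a tf.saved_model and loading it back for inference (a generic TensorFlow example, not tf-transformers-specific):

import tensorflow as tf

# `model` stands in for any trained tf.keras.Model, e.g. a tf-transformers
# model with a downstream head attached.
model = tf.keras.Sequential([tf.keras.layers.Dense(2, input_shape=(64,))])

# Export as a SavedModel directory (versioned folders work well for serving).
model.save("exported_model/1", save_format="tf")

# In production, load the serialized graph without the original Python code.
loaded = tf.saved_model.load("exported_model/1")
infer = loaded.signatures["serving_default"]
print(infer(tf.random.normal((1, 64))))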
Here are a few examples (Jupyter Notebooks):
Use state-of-the-art models in Production, with less than 10 lines of code.
Make industry-grade experience available to students and the community through clear tutorials.
Train any model on GPU, multi-GPU, TPU with amazing tf.keras.Model.fit
Customize any model or pipeline with minimal or no code changes.
We have conducted a few experiments to squeeze the power of Albert base models (the concept is applicable to any model, and in tf-transformers it works out of the box).
The idea is to minimize the loss for the specified task at each layer of your model and check predictions at each layer. As per our experiments, we were able to get the best smaller model (thanks to Albert), and from layer 4 onwards we beat all the smaller models on the GLUE benchmark. By layer 6, we got a GLUE score of 81.0, which is 4 points ahead of DistilBERT (GLUE score 77) and MobileBERT (GLUE score 78).
The Albert model has 14 million parameters, and by using only up to layer 6, we were able to speed up the computation by 50%.
The concept is applicable to all the models.
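The joint loss itself is simple: attach a task head to every layer's output and minimise the combined per-layer losses, so that earlier layers are trained to be usable on their own. Below is a minimal sketch of that loss computation, assuming the model exposes per-layer logits (whether the linked code sums or averages the layer losses is an implementation detail of that code):

import tensorflow as tf

def joint_layer_loss(per_layer_logits, labels):
    """per_layer_logits: list of [batch, num_classes] tensors, one per encoder layer.
    labels: [batch] integer class labels.
    Returns the mean of the per-layer cross-entropy losses."""
    loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
    layer_losses = [loss_fn(labels, logits) for logits in per_layer_logits]
    return tf.reduce_mean(tf.stack(layer_losses))

# Toy example: 12 layers, batch of 8, 2 classes.
logits_per_layer = [tf.random.normal((8, 2)) for _ in range(12)]
labels = tf.random.uniform((8,), maxval=2, dtype=tf.int32)
print(joint_layer_loss(logits_per_layer, labels))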
Codes to reproduce GLUE Joint Loss experiments
Benchmark Results
We have trained SQuAD v1.1 with the joint loss. At layer 6, we were able to achieve the same performance as DistilBERT (EM 78.1 and F1 86.2), but slightly worse than MobileBERT.
Benchmark Results
Codes to reproduce Squad v1.1 Joint Loss experiments
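EM and F1 above are the standard SQuAD metrics: EM is the fraction of predictions that exactly match a gold answer after normalisation, while F1 measures token-level overlap. A minimal sketch of how they are computed for a single prediction/gold pair:

import collections
import re
import string

def normalize(text):
    """Lowercase, strip punctuation and articles, collapse whitespace (SQuAD convention)."""
    text = text.lower()
    text = "".join(ch for ch in text if ch not in set(string.punctuation))
    text = re.sub(r"\b(a|an|the)\b", " ", text)
    return " ".join(text.split())

def exact_match(prediction, gold):
    return float(normalize(prediction) == normalize(gold))

def f1_score(prediction, gold):
    pred_tokens = normalize(prediction).split()
    gold_tokens = normalize(gold).split()
    common = collections.Counter(pred_tokens) & collections.Counter(gold_tokens)
    num_same = sum(common.values())
    if num_same == 0:
        return 0.0
    precision = num_same / len(pred_tokens)
    recall = num_same / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)

print(exact_match("1 November 1956", "November 1, 1956"))  # 0.0 (different token order)
print(f1_score("1 November 1956", "November 1, 1956"))     # 1.0 (same tokens after normalisation)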
Note: We have a new model in the pipeline. :-)
This repository is tested on Python 3.7+ and TensorFlow 2.4.0.
It is recommended to use a virtual environment.
Assuming TensorFlow 2.0 is installed:
pip install tf-transformers
Assuming poetry is installed. If not, pip install poetry.
git clone https://github.com/legacyai/tf-transformers.git
cd tf-transformers
poetry install
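A quick sanity check after installation; whether the package exposes a __version__ attribute depends on the release, so the successful import is the main thing to verify:

import tensorflow as tf
import tf_transformers

print(tf.__version__)  # expected to be 2.x (the repo is tested on 2.4.0)
# __version__ may not be exposed in every release; guard the lookup.
print(getattr(tf_transformers, "__version__", "unknown"))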
Pipelines in tf-transformers are different from HuggingFace. Here, the pipeline for a specific task expects a model and a tokenizer_fn, because in practice the library cannot know what kind of pre-processing you want to apply to your inputs. Please refer to the tutorial notebooks above for examples.
Token Classification Pipeline (NER)
from tf_transformers.pipeline import Token_Classification_Pipeline
# `tokenizer` is assumed to be loaded beforehand (see the tutorial notebooks).
def tokenizer_fn(feature):
    """
    feature: tokenized text (tokenizer.tokenize)
    """
    result = {}
    result["input_ids"] = tokenizer.convert_tokens_to_ids(
        [tokenizer.cls_token] + feature["input_ids"] + [tokenizer.bos_token]
    )
    result["input_mask"] = [1] * len(result["input_ids"])
    result["input_type_ids"] = [0] * len(result["input_ids"])
    return result
# load Keras/ Serialized Model
model_ner = # Load Model
slot_map_reverse = # dictionary index - entity mapping
pipeline = Token_Classification_Pipeline(
    model=model_ner,
    tokenizer=tokenizer,
    tokenizer_fn=tokenizer_fn,
    SPECIAL_PIECE=SPIECE_UNDERLINE,
    label_map=slot_map_reverse,
    max_seq_length=128,
    batch_size=32,
)
sentences = ['I would love to listen to Carnatic music by Yesudas',
'Play Carnatic Fusion by Various Artists',
'Please book 2 tickets from Bangalore to Kerala']
result = pipeline(sentences)
Span Selection Pipeline (QA)
import tensorflow as tf
from tf_transformers.pipeline import Span_Extraction_Pipeline
def tokenizer_fn(features):
    """
    features: dict of tokenized text
    Convert them into ids
    """
    result = {}
    input_ids = tokenizer.convert_tokens_to_ids(features["input_ids"])
    input_type_ids = tf.zeros_like(input_ids).numpy().tolist()
    input_mask = tf.ones_like(input_ids).numpy().tolist()
    result["input_ids"] = input_ids
    result["input_type_ids"] = input_type_ids
    result["input_mask"] = input_mask
    return result
model = # Load keras/ saved_model
# Span Extraction Pipeline
pipeline = Span_Extraction_Pipeline(
    model=model,
    tokenizer=tokenizer,
    tokenizer_fn=tokenizer_fn,
    SPECIAL_PIECE=ROBERTA_SPECIAL_PEICE,
    n_best_size=20,
    n_best=5,
    max_answer_length=30,
    max_seq_length=384,
    max_query_length=64,
    doc_stride=20,
)
questions = ['When was Kerala formed?']
contexts = ['''Kerala (English: /ˈkɛrələ/; Malayalam: [ke:ɾɐɭɐm] About this soundlisten (help·info)) is a state on the southwestern Malabar Coast of India. It was formed on 1 November 1956, following the passage of the States Reorganisation Act, by combining Malayalam-speaking regions of the erstwhile states of Travancore-Cochin and Madras. Spread over 38,863 km2 (15,005 sq mi), Kerala is the twenty-first largest Indian state by area. It is bordered by Karnataka to the north and northeast, Tamil Nadu to the east and south, and the Lakshadweep Sea[14] to the west. With 33,387,677 inhabitants as per the 2011 Census, Kerala is the thirteenth-largest Indian state by population. It is divided into 14 districts with the capital being Thiruvananthapuram. Malayalam is the most widely spoken language and is also the official language of the state.[15]''']
result = pipeline(questions=questions, contexts=contexts)
Classification Model Pipeline
from tf_transformers.pipeline import Classification_Pipeline
from tf_transformers.data import pad_dataset_normal
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
max_seq_length = 128
@pad_dataset_normal
def tokenizer_fn(texts):
    """
    texts: list of raw sentences
    pad_dataset_normal will automatically pad the batch.
    """
    input_ids = []
    input_type_ids = []
    input_mask = []
    for text in texts:
        # -2 leaves room for the CLS and SEP tokens.
        input_ids_ex = [tokenizer.cls_token] + tokenizer.tokenize(text)[: max_seq_length - 2] + [tokenizer.sep_token]
        input_ids_ex = tokenizer.convert_tokens_to_ids(input_ids_ex)
        input_mask_ex = [1] * len(input_ids_ex)
        input_type_ids_ex = [0] * len(input_ids_ex)
        input_ids.append(input_ids_ex)
        input_type_ids.append(input_type_ids_ex)
        input_mask.append(input_mask_ex)
    result = {}
    result["input_ids"] = input_ids
    result["input_type_ids"] = input_type_ids
    result["input_mask"] = input_mask
    return result
model = # Load keras/ saved_model
label_map_reverse = {0: 'unacceptable', 1: 'acceptable'}
pipeline = Classification_Pipeline(
    model=model,
    tokenizer_fn=tokenizer_fn,
    label_map=label_map_reverse,
    batch_size=32,
)
sentences = ['In which way is Sandy very anxious to see if the students will be able to solve the homework problem?',
'The book was written by John.',
'Play Carnatic Fusion by Various Artists',
'She voted herself.']
result = pipeline(sentences)
tf-transformers currently provides the following architectures.
tf-transformers is a personal project and has nothing to do with any organization, so I might not be able to host equivalent checkpoints of all base models. As a result, there are conversion notebooks to convert the above-mentioned architectures from HuggingFace to tf-transformers.
I want to give credit to the TensorFlow official NLP repository. I used the November 2019 version of its master branch (where tf.keras.Network was used) for the models, and I have modified it to a large extent since then. Apart from that, I have used many common scripts from many open repos. I might not be able to recall all of them, but credit goes to them too.
Author: legacyai Source Code: https://github.com/legacyai/tf-transformers