Analytics India Guru - What are Transformers (in HINDI)

In this video, we will introduce you to transformers in machine learning. Recent advances in modern Natural Language Processing (NLP) research have been domin...

PictureText: Interactive Visuals of Text

Solving this would be a tremendous step forward in how we consume information, and I will definitely not be able to solve it by the end of this article. My aim is, however, to propose an approach for a tiny step forward.

From Transformers to Performers: Approximating Attention

The mathematical trick used to speed up transformers.
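The trick, in outline: replace the softmax kernel exp(q·k) with a dot product of random feature maps φ(q)·φ(k), so attention can be computed in linear rather than quadratic time. A rough NumPy sketch of that idea (the Performer's actual FAVOR+ mechanism uses positive orthogonal random features and more careful normalization than shown here):

    import numpy as np

    def random_feature_map(x, W):
        # phi(x) = exp(Wx - ||x||^2 / 2) / sqrt(m) gives an unbiased estimate
        # of the softmax kernel: E[phi(q) . phi(k)] = exp(q . k)
        norm = np.sum(x ** 2, axis=-1, keepdims=True) / 2.0
        return np.exp(x @ W.T - norm) / np.sqrt(W.shape[0])

    def performer_attention(Q, K, V, m=256, seed=0):
        # Linear-time attention: O(n * m * d) instead of O(n^2 * d)
        d = Q.shape[-1]
        W = np.random.default_rng(seed).standard_normal((m, d))
        q = random_feature_map(Q / d ** 0.25, W)   # (n, m)
        k = random_feature_map(K / d ** 0.25, W)   # (n, m)
        kv = k.T @ V                               # (m, d): one pass over keys/values
        z = 1.0 / (q @ k.sum(axis=0) + 1e-6)       # row-wise softmax normalizer
        return (q @ kv) * z[:, None]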

A Deep Dive Into the Transformer Architecture — The Development of Transformer Models

Transformers for Natural Language Processing. First, we'll dive deep into the fundamental concepts used to build the original 2017 Transformer.

The Race for Intelligent AI

GPT-3-like architectures and their limitations.

Why are LSTMs struggling to match up with Transformers?

This article sheds light on the relative performance of Long Short-Term Memory (LSTM) and Transformer networks. We'll start with an overview of LSTMs and Transformers, then move on to the internal mechanisms by which they work.

Understanding Transformers, the Data Science Way

Transformer, a model architecture first introduced in the paper “Attention Is All You Need”, dispenses with recurrence and instead relies entirely on an attention mechanism to draw global dependencies between input and output. And that makes it fast.
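The mechanism itself is compact. A minimal PyTorch sketch of the scaled dot-product attention the paper defines (multi-head attention runs several of these in parallel):

    import math
    import torch

    def scaled_dot_product_attention(Q, K, V, mask=None):
        # Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V,
        # as defined in "Attention Is All You Need"
        d_k = Q.size(-1)
        scores = Q @ K.transpose(-2, -1) / math.sqrt(d_k)  # pairwise similarities
        if mask is not None:
            scores = scores.masked_fill(mask == 0, float("-inf"))
        weights = scores.softmax(dim=-1)                   # each row sums to 1
        return weights @ V                                 # weighted mix of values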

Essential Guide to Transformer Models in Machine Learning

So, in this article I’m putting the whole idea down as simply as possible. I’ll try to keep the jargon and the technicality to a minimum, but do keep in mind that this topic is complicated. I’ll also include some basic math and try to keep things light to ensure the long journey is fun.

A Practical Demonstration of Using Vision Transformers in PyTorch

In this article, I give a hands-on example (with code) of using the popular PyTorch framework to apply the Vision Transformer, proposed in the paper “An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale” (which I reviewed in another post), to a practical computer vision task.
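For orientation, here is one way such an example can look, sketched with the timm library and a pretrained ViT checkpoint (an assumption; the article's own code and model choice may differ):

    import timm
    import torch
    from PIL import Image
    from timm.data import create_transform, resolve_data_config

    # Pretrained ViT-Base/16 at 224x224, as in the "16x16 Words" paper
    model = timm.create_model("vit_base_patch16_224", pretrained=True).eval()
    transform = create_transform(**resolve_data_config({}, model=model))

    img = Image.open("cat.jpg").convert("RGB")        # any RGB image file
    with torch.no_grad():
        logits = model(transform(img).unsqueeze(0))   # (1, 1000) ImageNet logits
    print(logits.argmax(dim=-1).item())               # predicted class index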

Are You Ready for Vision Transformer (ViT)?

“An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale” may bring another breakthrough to computer vision.

Search— A Gentle Introduction

From basic building blocks to a DIY search engine: a gentle introduction to search, using both Google and Elasticsearch as examples.

Sentence Embeddings with sentence-transformers library

Multilingual Sentence Embeddings using BERT / RoBERTa / XLM-RoBERTa & Co. with PyTorch. This article assumes familiarity with embeddings (word or sentence embeddings); you can refer to this article for a quick refresher.
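The library's core workflow is short. A minimal sketch, assuming a published multilingual MiniLM checkpoint as the example model (any sentence-transformers checkpoint works the same way):

    from sentence_transformers import SentenceTransformer, util

    # A multilingual checkpoint: similar sentences in different languages
    # land close together in the same vector space
    model = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")
    sentences = ["The cat sits on the mat.", "Die Katze sitzt auf der Matte."]
    embeddings = model.encode(sentences, convert_to_tensor=True)

    print(util.cos_sim(embeddings[0], embeddings[1]).item())  # cross-lingual similarity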

Search (Pt 2) — Semantic Horse Race

In this article, we will look at how keyword and contextual searches compare and where the latest in NLP can help with search; walk through examples, testing different queries to see how the two differ; and finally weigh the pros and cons of each approach.

Fine Tuning XLNet Model for Text Classification in 3 Lines of Code

In this article, we will take a pretrained XLNet model and fine-tune it on our dataset.
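A three-line workflow like this implies a high-level wrapper. As one illustration, the simpletransformers library exposes exactly that pattern (an assumption; the article may use a different wrapper, and the tiny DataFrames below are placeholders):

    import pandas as pd
    from simpletransformers.classification import ClassificationModel

    # Placeholder data: real usage needs "text" and "labels" columns
    train_df = pd.DataFrame({"text": ["great movie", "awful plot"], "labels": [1, 0]})
    eval_df = pd.DataFrame({"text": ["loved it", "boring"], "labels": [1, 0]})

    model = ClassificationModel("xlnet", "xlnet-base-cased", num_labels=2, use_cuda=False)
    model.train_model(train_df)                        # the whole fine-tuning step
    result, outputs, wrong_preds = model.eval_model(eval_df)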

Search (Pt 3) — Elastic Transformers

Making BERT stretchy — Scalable Semantic Search on a Jupyter Notebook. In this article we will show how to add one more kind of search to Elasticsearch — contextual semantic search.
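The usual recipe behind this pattern: store sentence embeddings in an Elasticsearch dense_vector field and rank hits by cosine similarity. A sketch under those assumptions (index name, field names, and the MiniLM encoder are illustrative, not the article's exact setup):

    from elasticsearch import Elasticsearch
    from sentence_transformers import SentenceTransformer

    es = Elasticsearch("http://localhost:9200")
    encoder = SentenceTransformer("all-MiniLM-L6-v2")  # 384-dim embeddings

    es.indices.create(index="docs", mappings={"properties": {
        "text": {"type": "text"},
        "vec": {"type": "dense_vector", "dims": 384},
    }})
    for i, text in enumerate(["BERT powers semantic search.",
                              "Elasticsearch excels at keyword search."]):
        es.index(index="docs", id=i,
                 document={"text": text, "vec": encoder.encode(text).tolist()})
    es.indices.refresh(index="docs")

    # Contextual search: rank documents by cosine similarity to the query
    qv = encoder.encode("neural search with transformers").tolist()
    hits = es.search(index="docs", query={"script_score": {
        "query": {"match_all": {}},
        "script": {"source": "cosineSimilarity(params.qv, 'vec') + 1.0",
                   "params": {"qv": qv}},
    }})
    print(hits["hits"]["hits"][0]["_source"]["text"])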

Building a Faster and More Accurate Search Engine on a Custom Dataset with Transformers 🤗

In this article, we will build a search engine over a large custom dataset that not only retrieves search results for a query or question but also gives us roughly 1,000 words of context around the response.
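One way to sketch the retrieve-plus-context idea with sentence-transformers (chunk size, model choice, and the character-based context window are illustrative assumptions; the article's pipeline may differ):

    from sentence_transformers import SentenceTransformer, util

    corpus = open("corpus.txt").read()                # the custom dataset
    step = 500                                        # crude fixed-size chunks
    passages = [corpus[i:i + step] for i in range(0, len(corpus), step)]

    model = SentenceTransformer("multi-qa-MiniLM-L6-cos-v1")
    passage_emb = model.encode(passages, convert_to_tensor=True)

    query = "How do transformers handle long-range dependencies?"
    hit = util.semantic_search(model.encode(query, convert_to_tensor=True),
                               passage_emb, top_k=1)[0][0]

    # Return the best passage plus surrounding text as wider context
    start = max(0, hit["corpus_id"] * step - 250)
    print(corpus[start:start + 1000])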

Multi-Label, Multi-Class Text Classification with BERT, Transformer and Keras

In this article, I’ll show how to do a multi-label, multi-class text classification task using the Hugging Face Transformers library and the TensorFlow Keras API. In doing so, you’ll learn how to use a BERT model from Transformers as a layer in a TensorFlow model built using the Keras API.
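The key move is wiring the pretrained encoder into a Keras graph and ending in sigmoids rather than a softmax, so each label is predicted independently. A minimal sketch (label count and hyperparameters are illustrative):

    import tensorflow as tf
    from transformers import TFBertModel

    NUM_LABELS, MAX_LEN = 6, 128                      # illustrative sizes
    bert = TFBertModel.from_pretrained("bert-base-uncased")

    input_ids = tf.keras.Input(shape=(MAX_LEN,), dtype=tf.int32, name="input_ids")
    attention_mask = tf.keras.Input(shape=(MAX_LEN,), dtype=tf.int32, name="attention_mask")

    # Pooled [CLS] output -> one sigmoid per label, so labels are independent
    cls = bert(input_ids, attention_mask=attention_mask).pooler_output
    probs = tf.keras.layers.Dense(NUM_LABELS, activation="sigmoid")(cls)

    model = tf.keras.Model([input_ids, attention_mask], probs)
    model.compile(optimizer=tf.keras.optimizers.Adam(2e-5),
                  loss="binary_crossentropy",         # multi-label, not categorical
                  metrics=[tf.keras.metrics.AUC()])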

Lost In (Machine) Translation

Small-batch machine translation of speeches and news articles (English-to-Chinese and vice versa) in under 30 lines of code.
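With a pretrained Marian checkpoint, the under-30-lines claim is plausible; a sketch of the English-to-Chinese direction (the Helsinki-NLP model is one choice among several, not necessarily the article's):

    from transformers import pipeline

    # Marian checkpoint for English -> Chinese; the reverse direction uses
    # Helsinki-NLP/opus-mt-zh-en
    translator = pipeline("translation", model="Helsinki-NLP/opus-mt-en-zh")
    speeches = ["The economy grew faster than expected this quarter.",
                "We must invest in education and scientific research."]
    for out in translator(speeches, max_length=128):
        print(out["translation_text"])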

Zero-Shot Text Classification with Hugging Face

This post is about detecting text sentiment in an unsupervised way, using the Hugging Face zero-shot text classification model. A few weeks ago I was implementing a POC, one of whose requirements was to detect text sentiment in an unsupervised way (without having training data in advance and building a model); more specifically, it was about data extraction.
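The standard zero-shot pipeline takes candidate labels at query time and needs no training data. A minimal sketch with the commonly used BART-MNLI checkpoint (the post's exact model choice is an assumption):

    from transformers import pipeline

    classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")
    text = "The invoice total does not match the extracted line items."
    result = classifier(text, candidate_labels=["positive", "negative", "neutral"])
    print(result["labels"][0], round(result["scores"][0], 3))  # top label + score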

NLP Language Models BERT, GPT2, T-NLG: Changing the rules of the game

We are all aware of the current revolution in the field of Artificial Intelligence (AI), and Natural Language Processing (NLP) is one of its major contributors.