Introduction to Word Embeddings (NLP)

Word embedding is one of the most popular representations of document vocabulary. This article looks at how techniques like Word2Vec and GloVe capture the contextual meaning of neighbouring words. Word2Vec is one of the most popular techniques for learning word embeddings using a shallow neural network. GloVe, or Global Vectors for word representation, is an approach that directly optimizes the vector representation of each word using only co-occurrence statistics, unlike Word2Vec, which sets up an ancillary prediction task.

One-hot encoding works in some situations but breaks down when we have a large vocabulary to deal with, because the size of our word representation grows with the number of words. What we need is a way to control the size of our word representation by limiting it to a fixed-size vector. There comes the need for word embeddings!
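To make that concrete, here is a minimal sketch (with a made-up toy vocabulary, purely for illustration) of how a one-hot vector grows with the vocabulary while an embedding stays at a fixed, chosen size:

```python
import numpy as np

vocab = ["cat", "dog", "house", "car"]            # toy vocabulary, purely for illustration
word_to_id = {w: i for i, w in enumerate(vocab)}

# One-hot: vector length equals the vocabulary size (4 here, easily 100k+ in practice)
one_hot = np.zeros(len(vocab))
one_hot[word_to_id["dog"]] = 1.0

# Embedding: every word maps to a dense vector of a fixed, chosen size,
# no matter how many words the vocabulary contains
embedding_dim = 300
embedding_matrix = np.random.randn(len(vocab), embedding_dim)  # learned during training in practice
dog_vector = embedding_matrix[word_to_id["dog"]]

print(one_hot.shape)     # (4,)   -- grows with the vocabulary
print(dog_vector.shape)  # (300,) -- stays fixed
```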

In other words, we want to find an embedding for each word in some vector space, and we want that embedding to exhibit some desired properties.

Representation of different words in vector space (Image by author)

For example, if two words are similar in meaning, they should be closer to each other compared to words that are not. And if two pairs of words have a similar difference in their meanings, **they should be approximately equally separated in the embedded space.**
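As a rough sketch of these two properties (the vectors below are hand-made toy values, not learned embeddings), cosine similarity captures "closer in meaning", and the analogy property means vector differences line up:

```python
import numpy as np

def cosine(a, b):
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

# Toy vectors purely for illustration; trained embeddings would supply these values
king, queen = np.array([0.9, 0.8, 0.1]), np.array([0.9, 0.2, 0.1])
man, woman  = np.array([0.7, 0.9, 0.0]), np.array([0.7, 0.3, 0.0])

# Similar words -> higher cosine similarity
print(cosine(king, queen))

# Analogy: king - man + woman should land near queen in a good embedding space
print(cosine(king - man + woman, queen))
```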

We could use such a representation for a variety of purposes like finding synonyms and analogies, identifying concepts around which words are clustered, classifying words as positive, negative, neutral, etc. By combining word vectors, we can come up with another way of representing documents as well.
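One simple way to combine word vectors into a document representation, sketched below with a stand-in embedding matrix, is to average the vectors of the words in the document (a common baseline, not the only option):

```python
import numpy as np

embedding_dim = 300
vocab = ["the", "cat", "sat", "on", "mat"]
word_to_id = {w: i for i, w in enumerate(vocab)}
embedding_matrix = np.random.randn(len(vocab), embedding_dim)  # stand-in for trained vectors

def document_vector(tokens):
    # Average the word vectors of all known tokens in the document
    ids = [word_to_id[t] for t in tokens if t in word_to_id]
    return embedding_matrix[ids].mean(axis=0)

doc_vec = document_vector(["the", "cat", "sat", "on", "the", "mat"])
print(doc_vec.shape)  # (300,)
```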

Word2Vec — The General Idea

Word2Vec is perhaps one of the most popular examples of word embeddings used in practice. As the name Word2Vec indicates, it transforms words to vectors. But what the name doesn’t give away is how that transformation is performed.

Continuous Bag of Words (CBoW) & Continuous Skip-gram Model (Image by author)

The core idea behind Word2Vec is this: a model that is able to predict a given word from its neighboring words, or vice versa, predict the neighboring words for a given word, **is likely to capture the contextual meanings of words very well**. And these are, in fact, the two flavours of Word2Vec models: one where we are given the neighboring words, called [continuous bag of words](https://cs224d.stanford.edu/lecture_notes/notes1.pdf), and the other where we are given the middle word, called **skip-gram**.

In the skip-gram model, we pick any word from a sentence, convert it into a one-hot encoded vector and feed it into a neural network or some other probabilistic model that is designed to predict a few surrounding words, its context. Using a suitable loss function, the weights or parameters of the model are optimized, and this step is repeated until the model learns to predict context words as well as it can.
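The exact windowing and subsampling details vary between implementations, but a minimal sketch of extracting (center, context) training pairs with a window size of 2 could look like this:

```python
def skipgram_pairs(tokens, window=2):
    """Yield (center, context) word pairs for skip-gram training."""
    pairs = []
    for i, center in enumerate(tokens):
        # Look at up to `window` words on each side of the center word
        for j in range(max(0, i - window), min(len(tokens), i + window + 1)):
            if j != i:
                pairs.append((center, tokens[j]))
    return pairs

sentence = "the quick brown fox jumps".split()
print(skipgram_pairs(sentence))
# e.g. ('quick', 'the'), ('quick', 'brown'), ('quick', 'fox'), ...
```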

Architecture of Skip-Gram Model (Image by author)

Now, take an intermediate representation, such as a hidden layer in the neural network: the outputs of that layer for a given word become the corresponding word vector. The continuous bag of words variation also uses a similar strategy!
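Because the input is one-hot, the hidden layer simply selects one row of the input-to-hidden weight matrix, and that row is the word vector. A rough sketch of the mechanics (untrained, randomly initialized weights, purely for illustration):

```python
import numpy as np

vocab_size, embedding_dim = 5, 3
W_in = np.random.randn(vocab_size, embedding_dim)   # input -> hidden weights (the embeddings)
W_out = np.random.randn(embedding_dim, vocab_size)  # hidden -> output weights

word_id = 2
one_hot = np.zeros(vocab_size)
one_hot[word_id] = 1.0

hidden = one_hot @ W_in          # identical to W_in[word_id]
scores = hidden @ W_out          # unnormalized scores over possible context words

print(np.allclose(hidden, W_in[word_id]))  # True: the hidden layer *is* the word vector
```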

Properties of Word2Vec:

  1. It is a robust and distributed representation.
  2. The vector size of Word2Vec is independent of the vocabulary.
  3. It is trained once and stored in a lookup table.
  4. It is deep learning ready!

This yields a very robust representation of words because the meaning of each word is distributed throughout the vector. The size of the word vector is up to us, depending on how we want to trade off performance against complexity. It remains constant no matter how many words we train on, unlike the Bag of Words model, for instance, *where the size grows with the number of unique words.* And once we pre-train a large set of word vectors, we can use them efficiently without having to transform them again and again, by just storing them in a lookup table. Finally, they are ready to be used in deep learning architectures.
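In practice, "train once and store in a lookup table" just means saving the learned matrix and indexing into it by word id later on; a hypothetical sketch:

```python
import numpy as np

# Pretend these vectors were learned by a Word2Vec training run
vocab = ["cat", "dog", "house"]
vectors = np.random.randn(len(vocab), 300)

# Train once, save...
np.save("embeddings.npy", vectors)

# ...then any later program just loads the lookup table and indexes it
lookup = np.load("embeddings.npy")
word_to_id = {w: i for i, w in enumerate(vocab)}
cat_vector = lookup[word_to_id["cat"]]  # no re-training or re-transforming needed
```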


For example, word vectors can be used as the input to recurrent neural nets. It is also possible to use RNNs to learn even better word embeddings. Some other optimizations can further reduce model and training complexity, such as representing the output words using **Hierarchical Softmax**, or computing the loss using [Sparse Cross Entropy](https://cwiki.apache.org/confluence/display/MXNET/Multi-hot+Sparse+Categorical+Cross-entropy), etc.
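For instance, a hedged PyTorch sketch (the `pretrained` matrix below is a random stand-in for real Word2Vec vectors) of feeding word vectors into a recurrent network:

```python
import torch
import torch.nn as nn

vocab_size, embed_dim = 10000, 300
pretrained = torch.randn(vocab_size, embed_dim)  # stand-in for real pretrained Word2Vec vectors

embedding = nn.Embedding.from_pretrained(pretrained, freeze=True)  # lookup table layer
rnn = nn.LSTM(input_size=embed_dim, hidden_size=128, batch_first=True)

token_ids = torch.randint(0, vocab_size, (4, 20))  # batch of 4 sequences, 20 tokens each
outputs, _ = rnn(embedding(token_ids))             # outputs: (4, 20, 128)
print(outputs.shape)
```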

