Getting Started with Natural Language Processing in Python

In this tutorial, you'll learn to analyze textual data using Natural Language Processing in Python.

A significant portion of the data generated today is unstructured. Unstructured data includes social media comments, browsing history and customer feedback. Have you ever found yourself with a pile of textual data to analyze and no idea how to proceed? Natural language processing in Python can help.

The objective of this tutorial is to enable you to analyze textual data in Python using the concepts of Natural Language Processing (NLP). You will first learn how to tokenize your text into smaller chunks, then normalize words to their root forms, and finally remove any noise from your documents to prepare them for further analysis.

Let’s get started!

Prerequisites

In this tutorial, we will use Python’s nltk library to perform all NLP operations on the text. At the time of writing, we used version 3.4 of nltk. To install the library, you can use the pip command in the terminal:

pip install nltk==3.4


To check which version of nltk you have on your system, you can import the library into the Python interpreter and check the version:

import nltk
print(nltk.__version__)


To perform certain actions within nltk in this tutorial, you may have to download specific resources. We will describe each resource as and when required.

However, if you would like to avoid downloading individual resources later in the tutorial and grab them now in one go, run the following command:

python -m nltk.downloader all


Step 1: Convert Text into Tokens

A computer system cannot find meaning in natural language by itself. The first step in processing natural language is to convert the original text into tokens. A token is a contiguous sequence of characters with some meaning. It is up to you to decide how to break a sentence into tokens. For instance, an easy method is to split a sentence on whitespace to break it into individual words.
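
For example, Python’s built-in .split() method implements exactly this naive approach. Notice that the punctuation stays attached to its neighbouring word, which is rarely what you want:

# Naive tokenization: split the string on whitespace only
print("Hi, this is a nice hotel.".split())
# ['Hi,', 'this', 'is', 'a', 'nice', 'hotel.']
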

In the NLTK library, you can use the word_tokenize() function to convert a string to tokens. However, you will first need to download the punkt resource. Run the following command in the Python interpreter:

nltk.download('punkt')


Next, you need to import word_tokenize from nltk.tokenize to use it.

from nltk.tokenize import word_tokenize
print(word_tokenize("Hi, this is a nice hotel."))


The output of the code is as follows:

['Hi', ',', 'this', 'is', 'a', 'nice', 'hotel', '.']


You’ll notice that word_tokenize does not simply split a string based on whitespace, but also separates punctuation into tokens. It’s up to you if you would like to retain the punctuation marks in the analysis.

Step 2: Convert Words to their Base Forms

When you are processing natural language, you’ll often notice that there are various grammatical forms of the same word. For instance, “go”, “going” and “gone” are forms of the same verb, “go”.

While the needs of your project may require you to retain words in their various grammatical forms, let us discuss a way to convert the different grammatical forms of a word into its base form. There are two techniques that you can use: stemming and lemmatization.

The first technique is stemming. Stemming is a simple algorithm that removes affixes from a word. There are various stemming algorithms available for use in NLTK. We will use the Porter algorithm in this tutorial.

We first import PorterStemmer from nltk.stem.porter. Next, we initialize the stemmer and assign it to the stemmer variable, and then use the .stem() method to find the base form of a word.

from nltk.stem.porter import PorterStemmer 
stemmer = PorterStemmer()
print(stemmer.stem("going"))


The output of the code above is go. If you run the stemmer on the other forms of “go” described above, you will notice that the stemmer returns the same base form, “go”.
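
In practice, you will usually stem every token in a text rather than individual words. Here is a minimal sketch that combines word_tokenize() with the stemmer (the sample sentence is our own):

from nltk.tokenize import word_tokenize
from nltk.stem.porter import PorterStemmer

stemmer = PorterStemmer()
# Stem each token of the sentence in turn
print([stemmer.stem(token) for token in word_tokenize("She is going to the shops.")])
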

However, as stemming is only a simple algorithm based on removing word affixes, it can give unintuitive results for words that are less commonly used in language. Try the stemmer on the word “constitutes”:

print(stemmer.stem("constitutes"))


You will notice the output is “constitut”.

This issue is solved by moving on to a more complex approach that finds the base form of a word in its given context. The process is called lemmatization. Lemmatization normalizes a word based on the context and vocabulary of the text. In NLTK, you can lemmatize words using the WordNetLemmatizer class.

First, you need to download the wordnet resource from the NLTK downloader in the Python interpreter:

nltk.download('wordnet')


Once it is downloaded, you need to import the WordNetLemmatizer class and initialize it.

from nltk.stem.wordnet import WordNetLemmatizer 
lem = WordNetLemmatizer()


To use the lemmatizer, call the .lemmatize() method. It takes two arguments: the word and its context, given as a part-of-speech tag. In our example, we will use “v” (verb) for the context. Let us explore the context further after looking at the output of the .lemmatize() method.

print(lem.lemmatize('constitutes', 'v'))


You will notice that the .lemmatize() method correctly converts the word “constitutes” to its base form, “constitute”. You will also notice that lemmatization takes longer than stemming, as the algorithm is more complex.
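
If you are curious about the difference in speed, here is a rough sketch using Python’s timeit module. This comparison is our own addition, and the absolute numbers will vary by machine:

import timeit
from nltk.stem.porter import PorterStemmer
from nltk.stem.wordnet import WordNetLemmatizer

stemmer = PorterStemmer()
lem = WordNetLemmatizer()

# Time 10,000 calls to each; lemmatization is typically the slower of the two
print(timeit.timeit(lambda: stemmer.stem("constitutes"), number=10000))
print(timeit.timeit(lambda: lem.lemmatize("constitutes", "v"), number=10000))
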

Let’s check how to determine the second argument of the .lemmatize() method programmatically. NLTK has a pos_tag() function that helps determine the context of a word in a sentence. However, you first need to download the averaged_perceptron_tagger resource through the NLTK downloader.

nltk.download('averaged_perceptron_tagger')


Next, import the pos_tag function and run it on a sentence.

from nltk.tag import pos_tag
sample = "Hi, this is a nice hotel."
print(pos_tag(word_tokenize(sample)))


You will notice that the output is a list of pairs. Each pair consists of a token and its tag, which signifies the context of the token in the overall text. Notice that the tag for a punctuation mark is the mark itself.

[('Hi', 'NNP'),
 (',', ','),
 ('this', 'DT'),
 ('is', 'VBZ'),
 ('a', 'DT'),
 ('nice', 'JJ'),
 ('hotel', 'NN'),
 ('.', '.')]


How do you decode the context of each token? The tags come from the Penn Treebank tagset, and a full list of all tags and their corresponding meanings is available on the web. Notice that the tags of all nouns begin with “N”, and those of all verbs begin with “V”. We can use this information in the second argument of our .lemmatize() method.
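
If you would like to look up what a particular tag means without leaving the interpreter, NLTK ships documentation for the tagset. It requires the tagsets resource:

nltk.download('tagsets')
# Print the documentation for a single tag...
nltk.help.upenn_tagset('NNP')
# ...or omit the argument to list every tag in the tagset
nltk.help.upenn_tagset()


Putting this mapping to work, the following function determines the second argument of the .lemmatize() method programmatically for every token in a sentence: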

def lemmatize_tokens(tokens):
    lemmatizer = WordNetLemmatizer()
    lemmatized_tokens = []
    for word, tag in pos_tag(tokens):
        # Map the Penn Treebank tag to a WordNet part of speech
        if tag.startswith('NN'):
            pos = 'n'
        elif tag.startswith('VB'):
            pos = 'v'
        else:
            pos = 'a'
        lemmatized_tokens.append(lemmatizer.lemmatize(word, pos))
    return lemmatized_tokens

sample = "Legal authority constitutes all magistrates."
print(lemmatize_tokens(word_tokenize(sample)))


The output of the code above is as follows:

['Legal', 'authority', 'constitute', 'all', 'magistrate', '.']


This output is as expected: “constitutes” and “magistrates” have been converted to “constitute” and “magistrate”, respectively.

Step 3: Data Cleaning

The next step in preparing your data is to clean it and remove anything that does not add meaning to your analysis. Broadly, we will look at removing punctuation and stop words.

Removing punctuation is a fairly easy task. The punctuation constant of Python’s built-in string module contains all the punctuation marks in English:

import string
print(string.punctuation)


The output of this code snippet is as follows:

'!"#$%&\'()*+,-./:;<=>[email protected][\\]^_`{|}~'


In order to remove punctuation from a list of tokens, you can filter out every token that appears in string.punctuation:

tokens = word_tokenize("Hi, this is a nice hotel.")
# Keep only the tokens that are not punctuation marks
tokens = [token for token in tokens if token not in string.punctuation]


Next, we will focus on removing stop words. Stop words are commonly used words in a language, like “I”, “a” and “the”, which add little meaning to text when analyzing it. We will, therefore, remove stop words from our analysis. First, download the stopwords resource from the NLTK downloader.

nltk.download('stopwords')


Once the download is complete, import stopwords from nltk.corpus and call the .words() method with “english” as the argument. It returns a list of 179 stop words in the English language.

from nltk.corpus import stopwords
stop_words = stopwords.words('english')
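
You can verify the size of the list and peek at a few of its entries. The count of 179 applies to nltk 3.4 and may differ slightly in other versions:

print(len(stop_words))   # 179 in nltk 3.4
print(stop_words[:5])    # e.g. ['i', 'me', 'my', 'myself', 'we']
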


We can combine the lemmatization example with the concepts discussed in this section to create the following function, clean_data(). Additionally, before checking whether a word is part of the stop words list, we convert it to lowercase. This way, we still catch a stop word when it occurs at the start of a sentence and is capitalized.

def clean_data(tokens, stop_words = ()):

    lemmatizer = WordNetLemmatizer()
    cleaned_tokens = []

    for token, tag in pos_tag(tokens):
        # Map the Penn Treebank tag to a WordNet part of speech
        if tag.startswith('NN'):
            pos = 'n'
        elif tag.startswith('VB'):
            pos = 'v'
        else:
            pos = 'a'

        token = lemmatizer.lemmatize(token, pos)

        # Drop punctuation and stop words; lowercase before the stop word check
        if token not in string.punctuation and token.lower() not in stop_words:
            cleaned_tokens.append(token)
    return cleaned_tokens

sample = "The quick brown fox jumps over the lazy dog."
stop_words = stopwords.words('english')

print(clean_data(word_tokenize(sample), stop_words))


The output of the example is as follows:

['quick', 'brown', 'fox', 'jump', 'lazy', 'dog']


As you can see, the punctuation and stop words have been removed.

Word Frequency Distribution

Now that you are familiar with the basic cleaning techniques in NLP, let’s try to find the frequency of words in a text. For this exercise, we’ll use the text of the fairy tale The Mouse, The Bird and The Sausage, which is freely available on Project Gutenberg. We’ll store the text of this fairy tale in a string, text.
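
How you load the text is up to you. Here is a minimal sketch, assuming you have saved the story locally as story.txt (a filename of our own choosing):

# Read the fairy tale from a local file into the string `text`
with open('story.txt', encoding='utf-8') as f:
    text = f.read()
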

First, we tokenize text and then clean it using the clean_data() function that we defined above:

tokens = word_tokenize(text)
cleaned_tokens = clean_data(tokens, stop_words = stop_words)


To find the frequency distribution of words in your text, you can use the FreqDist class of NLTK. Initialize the class with the tokens as an argument, then use the .most_common() method to find the most commonly occurring terms. Let us find the top ten terms in this case.

from nltk import FreqDist

freq_dist = FreqDist(cleaned_tokens)
print(freq_dist.most_common(10))


Here are the ten most commonly occurring terms in this fairy tale.

[('bird', 15), ('sausage', 11), ('mouse', 8), ('wood', 7), ('time', 6), ('long', 5), ('make', 5), ('fly', 4), ('fetch', 4), ('water', 4)]

Unsurprisingly, the three most common terms are the three main characters in the fairy tale.
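
If you would like a quick visual, the FreqDist class also has a .plot() method that draws the distribution. It requires the matplotlib package to be installed:

# Plot the 10 most common terms (pip install matplotlib first)
freq_dist.plot(10)
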

The raw frequency of words may not always be the most useful measure when analyzing text. Typically, the next step in NLP is to generate a statistic called TF-IDF (term frequency-inverse document frequency), which signifies the importance of a word in a collection of documents.
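
Computing TF-IDF is beyond the scope of this tutorial, but as a pointer, here is a minimal sketch using the TfidfVectorizer class from scikit-learn, a separate library (pip install scikit-learn). The two toy documents are our own:

from sklearn.feature_extraction.text import TfidfVectorizer

# Each string is one document; TF-IDF weighs each term per document
documents = ["the bird and the mouse", "the sausage and the bird fly"]
vectorizer = TfidfVectorizer()
tfidf_matrix = vectorizer.fit_transform(documents)

print(sorted(vectorizer.vocabulary_))  # the terms learned from the documents
print(tfidf_matrix.toarray())          # one row of TF-IDF weights per document
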

Conclusion

In this post, you were introduced to natural language processing in Python. You converted text to tokens, converted words to their base forms and, finally, cleaned the text to remove any parts that didn’t add meaning to the analysis.

Although you looked at simple NLP tasks in this tutorial, there are many more techniques to explore. You may wish to perform topic modelling on textual data, where the objective is to find a common topic that a text is talking about. A more complex task in NLP is implementing a sentiment analysis model to determine the feeling behind a text.

What procedures do you follow when you are given a pile of text to work with? Let us know in the comments below.
