Natural Language Processing (NLP) with Python

Tutorial on the basics of natural language processing (NLP) with sample coding implementations in Python

Author(s): Pratik Shukla, Roberto Iriondo

In this article, we explore the basics of natural language processing (NLP) with code examples. We dive into the Natural Language Toolkit (NLTK) library to show how it can be useful for natural language processing-related tasks. Afterward, we will discuss the basics of other NLP libraries and other essential methods for NLP, along with their respective sample implementations in Python.

📚 Resources: Google Colab Implementation | GitHub Repository 📚

Table of Contents:

  1. What is Natural Language Processing?
  2. Applications of NLP
  3. Understanding Natural Language Processing (NLP)
  4. Rule-based NLP vs. Statistical NLP
  5. Components of Natural Language Processing (NLP)
  6. Current challenges in NLP
  7. Easy to Use NLP Libraries
  8. Exploring Features of NLTK
  9. Word Cloud
  10. Stemming
  11. Lemmatization
  12. Part-of-Speech (PoS) tagging
  13. Chunking
  14. Chinking
  15. Named Entity Recognition (NER)
  16. WordNet
  17. Bag of Words
  18. TF-IDF

What is Natural Language Processing?

Computers and machines are great at working with tabular data or spreadsheets. However, human beings generally communicate in words and sentences, not in tables. Much of the information that humans speak or write is unstructured, so it is difficult for computers to interpret. In natural language processing (NLP), the goal is to make computers understand unstructured text and retrieve meaningful pieces of information from it. Natural language processing is a subfield of artificial intelligence concerned with the interactions between computers and human language.

Applications of NLP:

  • Machine Translation.
  • Speech Recognition.
  • Sentiment Analysis.
  • Question Answering.
  • Text Summarization.
  • Chatbots.
  • Intelligent Systems.
  • Text Classification.
  • Character Recognition.
  • Spell Checking.
  • Spam Detection.
  • Autocomplete.
  • Named Entity Recognition.
  • Predictive Typing.

Understanding Natural Language Processing (NLP):

Figure 1: Revealing, listening, and understanding.

We, as humans, perform natural language processing (NLP) considerably well, but even then, we are not perfect. We often mistake one thing for another, and we often interpret the same sentences or words differently.

For instance, consider the following sentence; we will try to understand its many possible interpretations:

Example 1:

Figure 2: NLP example sentence with the text: “I saw a man on a hill with a telescope.”

These are some interpretations of the sentence shown above.

  • There is a man on the hill, and I watched him with my telescope.
  • There is a man on the hill, and he has a telescope.
  • I’m on a hill, and I saw a man using my telescope.
  • I’m on a hill, and I saw a man who has a telescope.
  • There is a man on a hill that has a telescope on it, and I saw him.

Example 2:

Figure 3: NLP example sentence with the text: “Can you help me with the can?”

In the sentence above, there are two instances of the word “can,” but each has a different meaning. The first “can” is used to form the question, while the second “can,” at the end of the sentence, refers to a container that holds food or liquid.

Hence, from the examples above, we can see that language processing is not “deterministic”: the same sentence does not carry the same interpretation for everyone, and what sounds suitable to one person might not be suitable to another. Therefore, natural language processing takes a non-deterministic approach. In other words, NLP can be used to create intelligent systems that learn how humans understand and interpret language in different situations.

Rule-based NLP vs. Statistical NLP:

Natural language processing is divided into two different approaches:

Rule-based Natural Language Processing:

It uses common-sense reasoning, encoded as handcrafted rules, for processing tasks. For instance, the facts that freezing temperatures can lead to death or that hot coffee can burn people’s skin are examples of common-sense knowledge. However, writing such rules takes much time and requires manual effort.

Statistical Natural Language Processing:

It uses large amounts of data and tries to derive conclusions from them. Statistical NLP uses machine learning algorithms to train NLP models. After successful training on large amounts of data, the trained model can draw accurate conclusions from new, unseen text.

Comparison:

Figure 4: Rule-Based NLP vs. Statistical NLP.

Components of Natural Language Processing (NLP):

Figure 5: Components of Natural Language Processing (NLP).

a. Lexical Analysis:

With lexical analysis, we divide a whole chunk of text into paragraphs, sentences, and words. It involves identifying and analyzing the structure of words.

b. Syntactic Analysis:

Syntactic analysis checks the words in a sentence for grammar and arranges them in a manner that shows the relationships among them. For instance, a sentence such as “The shop goes to the house” does not pass this stage.

c. Semantic Analysis:

Semantic analysis draws the exact meaning of the words and checks the text for meaningfulness. Phrases such as “hot ice-cream” do not pass this stage.

d. Discourse Integration:

Discourse integration takes into account the context of the text: the meaning of a sentence can depend on the sentences that come before it. For example, in “He works at Google,” the pronoun “he” must refer to a person introduced in an earlier sentence.

e. Pragmatic Analysis:

Pragmatic analysis deals with overall communication and the interpretation of language in context. It is concerned with deriving the meaningful use of language in various situations.

📚 Check out an overview of machine learning algorithms for beginners with code examples in Python. 📚

Current challenges in NLP:

  1. Breaking sentences into tokens.
  2. Tagging parts of speech (POS).
  3. Building an appropriate vocabulary.
  4. Linking the components of a created vocabulary.
  5. Understanding the context.
  6. Extracting semantic meaning.
  7. Named Entity Recognition (NER).
  8. Transforming unstructured data into structured data.
  9. Ambiguity in speech.

Easy to use NLP libraries:

a. NLTK (Natural Language Toolkit)

The NLTK Python framework is generally used as an education and research tool. It is not usually used in production applications. However, it can be used to build exciting programs due to its ease of use.

Features:

  • Tokenization.
  • Part Of Speech tagging (POS).
  • Named Entity Recognition (NER).
  • Classification.
  • Sentiment analysis.
  • Packages of chatbots.

Use-cases:

  • Recommendation systems.
  • Sentiment analysis.
  • Building chatbots.

Figure 6: Pros and cons of using the NLTK framework.

b. spaCy

spaCy is an open-source natural language processing Python library designed to be fast and production-ready. spaCy focuses on providing software for production usage.

Features:

  • Tokenization.
  • Part Of Speech tagging (POS).
  • Named Entity Recognition (NER).
  • Classification.
  • Sentiment analysis.
  • Dependency parsing.
  • Word vectors.

Use-cases:

  • Autocomplete and autocorrect.
  • Analyzing reviews.
  • Summarization.

Figure 7: Pros and cons of the spaCy framework.

c. Gensim

Gensim is an NLP Python framework generally used in topic modeling and similarity detection. It is not a general-purpose NLP library, but it handles tasks assigned to it very well.

Features:

  • Latent semantic analysis.
  • Non-negative matrix factorization.
  • TF-IDF.

Use-cases:

  • Converting documents to vectors.
  • Finding text similarity.
  • Text summarization.

Figure 8: Pros and cons of the Gensim framework.

d. Pattern

Pattern is an NLP Python framework with straightforward syntax. It is a powerful tool for both scientific and non-scientific tasks, and it is highly valuable to students.

Features:

  • Tokenization.
  • Part of Speech tagging.
  • Named entity recognition.
  • Parsing.
  • Sentiment analysis.

Use-cases:

  • Spelling correction.
  • Search engine optimization.
  • Sentiment analysis.

Figure 9: Pros and cons of the Pattern framework.

e. TextBlob

TextBlob is a Python library designed for processing textual data.

Features:

  • Part-of-Speech tagging.
  • Noun phrase extraction.
  • Sentiment analysis.
  • Classification.
  • Language translation.
  • Parsing.
  • Wordnet integration.

Use-cases:

  • Sentiment Analysis.
  • Spelling Correction.
  • Translation and Language Detection.

Figure 10: Pros and cons of the TextBlob library.

For this tutorial, we are going to focus more on the NLTK library. Let’s dig deeper into natural language processing by working through some examples.

Exploring Features of NLTK:

a. Open the text file for processing:

First, we are going to open and read the file that we want to analyze.

Figure 11: Small code snippet to open and read the text file and analyze it.

Figure 12: Text string file.
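Since the original snippet survives only as an image, here is a minimal sketch of this step; the filename test.txt is illustrative:

```python
# Open and read the text file to analyze (the filename is an assumption).
with open("test.txt", "r", encoding="utf-8") as f:
    text = f.read()

print(type(text))  # <class 'str'>
print(len(text))   # character count; 675 for our sample
```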

Notice that the data type of the text read from the file is a string and that the number of characters in our text file is 675.

b. Import required libraries:

For various data processing cases in NLP, we need to import some libraries. In this case, we are going to use NLTK for Natural Language Processing. We will use it to perform various operations on the text.

Figure 13: Importing the required libraries.
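A minimal sketch of the imports; note that NLTK’s tokenizer models and stopword lists ship as separate downloads:

```python
import nltk

# One-time downloads of the resources used in the steps below.
nltk.download("punkt")      # tokenizer models used by sent_tokenize/word_tokenize
nltk.download("stopwords")  # stopword lists for several languages
```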

c. Sentence tokenizing:

By tokenizing the text with sent_tokenize(), we can split it into sentences.

Figure 14: Using sent_tokenize() to tokenize the text as sentences.

Figure 15: Text sample data.
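A minimal sketch of this step, assuming text holds the string we read earlier:

```python
from nltk.tokenize import sent_tokenize

# Split the raw text into a list of sentences.
sentences = sent_tokenize(text)
print(sentences)
print(len(sentences))  # 9 for our sample text
```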

In the example above, we can see that the entire text is split into sentences; also notice that the total number of sentences is 9.

d. Word tokenizing:

By tokenizing the text with word_tokenize(), we can split it into words.

Figure 16: Using word_tokenize() to tokenize the text as words.

Figure 17: Text sample data.
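A minimal sketch of the word-tokenization step:

```python
from nltk.tokenize import word_tokenize

# Split the raw text into a list of word tokens.
words = word_tokenize(text)
print(words)
print(len(words))  # 144 for our sample text
```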

Next, we can see that the entire text is split into words; also notice that the total number of words is 144.

e. Find the frequency distribution:

Let’s find out the frequency of words in our text.

Figure 18: Using FreqDist() to find the frequency of words in our sample text.

Figure 19: Printing the ten most common words from the sample text.
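A minimal sketch using NLTK’s FreqDist class on the token list from the previous step:

```python
from nltk.probability import FreqDist

# Count how often each token occurs.
fdist = FreqDist(words)
print(fdist.most_common(10))  # the ten most frequent tokens
```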

Notice that the most used words are punctuation marks and stopwords. We will have to remove such words to analyze the actual text.

f. Plot the frequency graph:

Let’s plot a graph to visualize the word distribution in our text.

Figure 20: Plotting a graph to visualize the text distribution.
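A minimal sketch of the plotting step; FreqDist.plot() renders the graph with matplotlib, which must be installed:

```python
# Line plot of the 10 most frequent tokens.
fdist.plot(10)
```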

In the graph above, notice that the period “.” appears nine times in our text. Analytically speaking, punctuation marks are not that important for natural language processing, so in the next step we will remove them.

g. Remove punctuation marks:

Next, we are going to remove the punctuation marks, as they are not very useful for us. We are going to use the isalpha() method to separate the punctuation marks from the actual text. We will also make a new list called words_no_punc, which will store the words in lowercase and exclude the punctuation marks.

Figure 21: Using the isalpha() method to separate the punctuation marks, along with creating a list under words_no_punc to separate words with no punctuation marks.

Figure 22: Text sample data.
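A minimal sketch of the cleaning step:

```python
# Keep only purely alphabetic tokens, lowercased; str.isalpha()
# filters out punctuation (and any token containing digits).
words_no_punc = [w.lower() for w in words if w.isalpha()]

print(words_no_punc)
print(len(words_no_punc))
```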

As shown above, all the punctuation marks are now excluded from our text. You can also cross-check this against the number of words.

h. Plotting graph without punctuation marks:

Figure 23: Printing the ten most common words from the sample text.

Figure 24: Plotting the graph without punctuation marks.
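A minimal sketch, reusing FreqDist on the punctuation-free list:

```python
# Recompute the distribution on the punctuation-free tokens.
fdist_no_punc = FreqDist(words_no_punc)
print(fdist_no_punc.most_common(10))
fdist_no_punc.plot(10)
```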

Notice that we still have many words that are not very useful for analyzing our sample text, such as “and,” “but,” “so,” and others. Next, we need to remove such stopwords.

i. List of stopwords:

Figure 25: Importing the list of stopwords.

Figure 26: Text sample data.
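A minimal sketch, loading NLTK’s built-in English stopword list:

```python
from nltk.corpus import stopwords

# The English stopword list: "the", "and", "but", "so", ...
stop_words = stopwords.words("english")
print(stop_words)
```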

j. Removing stopwords:

Figure 27: Cleaning the text sample data.

Figure 28: Cleaned data.
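A minimal sketch of the stopword-removal step, reusing stop_words and words_no_punc from the previous steps:

```python
# Drop stopwords from the punctuation-free token list.
clean_words = [w for w in words_no_punc if w not in stop_words]

print(clean_words)
print(len(clean_words))
```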

k. Final frequency distribution:

Figure 29: Displaying the final frequency distribution of the most common words found.

Figure 30: Visualization of the most common words found in the group.
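A minimal sketch of the final distribution over the fully cleaned tokens:

```python
# Frequency distribution after removing punctuation and stopwords.
final_dist = FreqDist(clean_words)
print(final_dist.most_common(10))
final_dist.plot(10)
```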

As shown above, the final graph contains many useful words that help us understand what our sample data is about, showing how essential data cleaning is in NLP.

Next, we will cover various topics in NLP with coding examples.
