Tutorial on the basics of natural language processing (NLP) with sample coding implementations in Python
Author(s): Pratik Shukla, Roberto Iriondo
In this article, we explore the basics of natural language processing (NLP) with code examples. We dive into the Natural Language Toolkit (NLTK) library to show how it can be useful for natural language processing-related tasks. Afterward, we discuss the basics of other natural language processing libraries and other essential methods for NLP, along with their respective coding sample implementations in Python.
📚 Resources: Google Colab Implementation | GitHub Repository 📚
Computers and machines are great at working with tabular data or spreadsheets. However, human beings generally communicate in words and sentences, not in the form of tables. Much of the information that humans speak or write is unstructured, so it is not easy for computers to interpret it. In natural language processing (NLP), the goal is to make computers understand unstructured text and retrieve meaningful pieces of information from it. Natural language processing is a subfield of artificial intelligence that deals with the interactions between computers and human language.
Figure 1: Revealing, listening, and understanding.
We, as humans, perform natural language processing (NLP) considerably well, but even then, we are not perfect. We often misunderstand one thing for another, and we often interpret the same sentences or words differently.
For instance, consider the following sentence; we will try to understand its interpretation in many different ways:
Example 1:
Figure 2: NLP example sentence with the text: “I saw a man on a hill with a telescope.”
These are some interpretations of the sentence shown above.
Example 2:
Figure 3: NLP example sentence with the text: “Can you help me with the can?”
In the sentence above, we can see that the word “can” appears twice, with two different meanings. Here, the first “can” is used to form a question, while the second “can,” at the end of the sentence, refers to a container that holds food or liquid.
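As a quick illustration of how a machine can pick up on this difference, a part-of-speech tagger labels the two occurrences differently. The sketch below is not part of the original example; it assumes NLTK and its averaged-perceptron tagger data are installed:

```python
import nltk

# One-time downloads (assumed to be available in this environment)
nltk.download("punkt")
nltk.download("averaged_perceptron_tagger")

sentence = "Can you help me with the can?"
tokens = nltk.word_tokenize(sentence)

# The tagger typically labels the first "Can" as a modal verb (MD)
# and the second "can" as a noun (NN), reflecting the two meanings.
print(nltk.pos_tag(tokens))
```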
Hence, from the examples above, we can see that language processing is not “deterministic” (the same language does not always yield the same interpretation), and an interpretation that is suitable to one person might not be suitable to another. Therefore, natural language processing (NLP) takes a non-deterministic approach. In other words, NLP can be used to create intelligent systems that understand how humans interpret language in different situations.
Natural language processing is divided into two different approaches:
Rule-based NLP uses common-sense reasoning for processing tasks. For instance, freezing temperatures can lead to death, or hot coffee can burn people’s skin, along with other common-sense reasoning tasks. However, this process can take much time, and it requires manual effort.
Statistical NLP uses large amounts of data and tries to derive conclusions from it. It uses machine learning algorithms to train NLP models; after successful training on large amounts of data, the trained model can draw accurate conclusions.
Figure 4: Rule-Based NLP vs. Statistical NLP.
Figure 5: Components of Natural Language Processing (NLP).
With lexical analysis, we divide a whole chunk of text into paragraphs, sentences, and words. It involves identifying and analyzing words’ structure.
Syntactic analysis involves analyzing the words in a sentence for grammar and arranging them in a manner that shows the relationships among them. For instance, the sentence “The shop goes to the house” does not pass syntactic analysis.
Semantic analysis draws the exact meaning of the words and analyzes the meaningfulness of the text. Phrases such as “hot ice-cream” do not pass.
Discourse integration takes into account the context of the text. It considers the meaning of the preceding text when interpreting a sentence. For example, in “He works at Google,” the pronoun “he” must refer to someone mentioned in an earlier sentence.
Pragmatic analysis deals with overall communication and interpretation of language. It deals with deriving meaningful use of language in various situations.
📚 Check out an overview of machine learning algorithms for beginners with code examples in Python. 📚
The NLTK Python framework is generally used as an education and research tool. It is not usually used in production applications. However, it can be used to build exciting programs due to its ease of use.
Features:
Use-cases:
Figure 6: Pros and cons of using the NLTK framework.
spaCy is an open-source natural language processing Python library designed to be fast and production-ready, with a focus on providing software for production usage.
Features:
Use-cases:
Figure 7: Pros and cons of the spaCy framework.
Gensim is an NLP Python framework generally used in topic modeling and similarity detection. It is not a general-purpose NLP library, but it handles tasks assigned to it very well.
Features:
Use-cases:
Figure 8: Pros and cons of the Gensim framework.
Pattern is an NLP Python framework with straightforward syntax. It is a powerful tool for both scientific and non-scientific tasks and is highly valuable to students.
Features:
Use-cases:
Figure 9: Pros and cons of the Pattern framework.
TextBlob is a Python library designed for processing textual data.
Features:
Use-cases:
Figure 10: Pros and cons of the TextBlob library.
For this tutorial, we are going to focus more on the NLTK library. Let’s dig deeper into natural language processing by making some examples.
First, we are going to open and read the file which we want to analyze.
Figure 11: Small code snippet to open and read the text file and analyze it.
Figure 12: Text string file.
Next, notice that the text read from the file has the data type string. The number of characters in our text file is 675.
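Since the snippet itself is only shown as a figure, here is a minimal sketch of this step; the file name sample.txt is an assumed placeholder:

```python
# Open and read the text file we want to analyze
# (the file name below is an assumed placeholder)
with open("sample.txt", encoding="utf-8") as file:
    text = file.read()

print(type(text))  # <class 'str'>
print(len(text))   # number of characters; 675 for the article's sample
```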
For various data processing cases in NLP, we need to import some libraries. In this case, we are going to use NLTK for Natural Language Processing. We will use it to perform various operations on the text.
Figure 13: Importing the required libraries.
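The exact imports are shown only in the figure, but a sketch covering the operations used in the rest of this walkthrough might look like this:

```python
import nltk
from nltk.tokenize import sent_tokenize, word_tokenize
from nltk.probability import FreqDist
from nltk.corpus import stopwords

# Download the resources this tutorial relies on (only needed once)
nltk.download("punkt")
nltk.download("stopwords")
```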
By tokenizing the text with sent_tokenize(), we can get the text as sentences.
Figure 14: Using sent_tokenize( ) to tokenize the text as sentences.
Figure 15: Text sample data.
In the example above, we can see that the entire text of our data is represented as sentences, and we can also notice that the total number of sentences is 9.
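A sketch of this step, assuming the text variable read from the file earlier:

```python
sentences = sent_tokenize(text)  # text as read from the file above

print(sentences)
print(len(sentences))  # 9 for the article's sample text
```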
By tokenizing the text with word_tokenize(), we can get the text as words.
Figure 16: Using word_tokenize() to tokenize the text as words.
Figure 17: Text sample data.
Next, we can see that the entire text of our data is represented as words, and the total number of words is 144.
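The corresponding sketch for word tokenization, again assuming the same text variable:

```python
words = word_tokenize(text)  # text as read from the file above

print(words)
print(len(words))  # 144 for the article's sample text
```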
Let’s find out the frequency of words in our text.
Figure 18: Using FreqDist() to find the frequency of words in our sample text.
Figure 19: Printing the ten most common words from the sample text.
Notice that the most used words are punctuation marks and stopwords. We will have to remove such words to analyze the actual text.
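A sketch of the frequency-distribution step, using the words list from the previous step:

```python
fdist = FreqDist(words)  # words from word_tokenize() above

# The ten most common tokens; at this point they are mostly
# punctuation marks and stopwords
print(fdist.most_common(10))
```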
Let’s plot a graph to visualize the word distribution in our text.
Figure 20: Plotting a graph to visualize the text distribution.
In the graph above, notice that a period “.” is used nine times in our text. Analytically speaking, punctuation marks are not that important for natural language processing. Therefore, in the next step, we will be removing such punctuation marks.
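The plot in the figure can be reproduced with FreqDist's built-in plotting helper, which uses matplotlib under the hood:

```python
import matplotlib.pyplot as plt

# Plot the 10 most frequent tokens (non-cumulative counts)
fdist.plot(10, cumulative=False)
plt.show()
```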
Next, we are going to remove the punctuation marks, as they are not very useful for us. We are going to use the isalpha() method to separate the punctuation marks from the actual text. Also, we are going to make a new list called words_no_punc, which will store the words in lower case but exclude the punctuation marks.
Figure 21: Using the isalpha() method to separate the punctuation marks, along with creating a list under words_no_punc to separate words with no punctuation marks.
Figure 22: Text sample data.
As shown above, all the punctuation marks from our text are excluded. We can also cross-check this with the number of words.
Figure 23: Printing the ten most common words from the sample text.
Figure 24: Plotting the graph without punctuation marks.
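A sketch of the punctuation-removal step and the updated frequency distribution, following the figures above:

```python
# Keep only alphabetic tokens, lower-cased, so punctuation is dropped
words_no_punc = [word.lower() for word in words if word.isalpha()]

print(words_no_punc)
print(len(words_no_punc))  # fewer tokens than before, as punctuation is gone

fdist_no_punc = FreqDist(words_no_punc)
print(fdist_no_punc.most_common(10))
fdist_no_punc.plot(10, cumulative=False)
```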
Notice that we still have many words that are not very useful in the analysis of our text file sample, such as “and,” “but,” “so,” and others. Next, we need to remove such stopwords.
Figure 25: Importing the list of stopwords.
Figure 26: Text sample data.
Figure 27: Cleaning the text sample data.
Figure 28: Cleaned data.
Figure 29: Displaying the final frequency distribution of the most common words found.
Figure 30: Visualization of the most common words found in the group.
As shown above, the final graph contains many useful words that help us understand what our sample data is about, showing how essential it is to perform data cleaning in NLP.
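Putting the stopword-removal and final visualization steps together, a minimal sketch corresponding to the figures above:

```python
# Build the English stopword list and filter it out of our word list
stop_words = set(stopwords.words("english"))
clean_words = [word for word in words_no_punc if word not in stop_words]

print(clean_words)

# Final frequency distribution and plot of the cleaned data
final_dist = FreqDist(clean_words)
print(final_dist.most_common(10))
final_dist.plot(10, cumulative=False)
```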
Next, we will cover various topics in NLP with coding examples.
#python #nlp #machine-learning #developer
This video will provide you with comprehensive and detailed knowledge of natural language processing, popularly known as NLP. You will also learn about the different steps involved in processing human language, such as tokenization, stemming, lemmatization, and more. Python, NLTK, and Jupyter Notebook are used to demonstrate the concepts.
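As a small taste of the steps the video covers, here is a hedged NLTK sketch of tokenization, stemming, and lemmatization (not taken from the video itself):

```python
import nltk
from nltk.stem import PorterStemmer, WordNetLemmatizer

# One-time downloads (assumed available)
nltk.download("punkt")
nltk.download("wordnet")

tokens = nltk.word_tokenize("The children are running faster than the mice")

stemmer = PorterStemmer()
lemmatizer = WordNetLemmatizer()

for token in tokens:
    # Stemming crudely chops suffixes; lemmatization maps to a dictionary form
    print(token, stemmer.stem(token), lemmatizer.lemmatize(token))
```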
📺 The video in this post was made by freeCodeCamp.org
The origin of the article: https://www.youtube.com/watch?v=X2vAabgKiuM&list=PLWKjhJtqVAbnqBxcdjVGgT3uVR10bzTEB&index=16
#natural language processing #nlp #python #python & nltk #nltk #natural language processing (nlp) tutorial with python & nltk
Welcome to my blog. In this article, you are going to learn the top 10 Python tips and tricks.
…
#python #python hacks tricks #python learning tips #python programming tricks #python tips #python tips and tricks #python tips and tricks advanced #python tips and tricks for beginners #python tips tricks and techniques #python tutorial #tips and tricks in python #tips to learn python #top 30 python tips and tricks for beginners
Teaching machines to understand human context can be a daunting task. With the current evolving landscape, natural language processing (NLP) has turned out to be an extraordinary breakthrough with its advancements in semantic and linguistic knowledge. NLP is vastly leveraged by businesses to build customised chatbots and voice assistants using its optical character and speech recognition techniques, along with text simplification.
To address the current requirements of NLP, there are many open-source NLP tools, which are free and flexible enough for developers to customise according to their needs. Not only will these tools help businesses analyse the required information from unstructured text, but they will also help in dealing with text analysis problems like classification, word ambiguity, sentiment analysis, etc.
Here are eight NLP toolkits, in no particular order, that can help any enthusiast start their journey with natural language processing.
Also Read: Deep Learning-Based Text Analysis Tools NLP Enthusiasts Can Use To Parse Text
About: The Natural Language Toolkit, aka NLTK, is an open-source platform primarily used for building Python programs that analyse human language. The platform provides more than 50 corpora and lexical resources, including multilingual WordNet. Along with that, NLTK also includes many text processing libraries which can be used for text classification, tokenisation, parsing, and semantic reasoning, to name a few. The platform is vastly used by students, linguists, and educators, as well as researchers, to analyse text and make meaning out of it.
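A small, hedged example of the kind of analysis NLTK enables, using its tokenizer and the WordNet interface mentioned above:

```python
import nltk
from nltk.corpus import wordnet

# One-time downloads (assumed available)
nltk.download("punkt")
nltk.download("wordnet")

print(nltk.word_tokenize("NLTK makes text analysis approachable."))

# Look up a few senses of a word in WordNet
for synset in wordnet.synsets("bank")[:3]:
    print(synset.name(), "-", synset.definition())
```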
#developers corner #learning nlp #natural language processing #natural language processing tools #nlp #nlp career #nlp tools #open source nlp tools #opensource nlp tools
Welcome to my blog. In this article, we will learn about Python's lambda function, map function, and filter function.
Lambda function in Python: a lambda is a one-line anonymous function. It takes any number of arguments but can only have one expression. The Python lambda syntax is:
Syntax: x = lambda arguments : expression
Now I will show you some Python lambda function examples:
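The examples themselves are not included in this excerpt, so here is a minimal sketch of the kind of lambda, map, and filter usage the post describes:

```python
# Lambda: a one-line anonymous function
square = lambda x: x ** 2
print(square(5))  # 25

numbers = [1, 2, 3, 4, 5]

# map: apply a function to every element of an iterable
print(list(map(lambda x: x * 10, numbers)))  # [10, 20, 30, 40, 50]

# filter: keep only the elements for which the function returns True
print(list(filter(lambda x: x % 2 == 0, numbers)))  # [2, 4]
```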
#python #anonymous function python #filter function in python #lambda #lambda python 3 #map python #python filter #python filter lambda #python lambda #python lambda examples #python map
Natural language processing (NLP) is a specialized field for the analysis and generation of human languages. Human languages, rightly called natural language, are highly context-sensitive and often ambiguous, requiring context to produce a distinct meaning. (Remember the joke where the wife asks the husband to “get a carton of milk and if they have eggs, get six,” so he gets six cartons of milk because they had eggs.) NLP provides the ability to comprehend natural language input and produce natural language output appropriately.
Computational linguistics (CL) is the larger field of linguistic comprehension and modeling. NLP is a subset of CL that deals with the engineering aspects of language understanding and generation. NLP is an interdisciplinary domain that touches on multiple fields including artificial intelligence (AI), machine learning (ML), deep learning (DL), mathematics, and statistics.
Some of the applications you can build with NLP include:
Like a skyscraper is built brick by brick, you can build large applications like the ones above by using NLP’s fundamental and essential building blocks.
There are several open source NLP libraries available, such as Stanford CoreNLP, spaCy, and Gensim in Python, and Apache OpenNLP and GateNLP in Java and other languages.
To demonstrate the functions of NLP’s building blocks, I’ll use Python and its primary NLP library, Natural Language Toolkit (NLTK). NLTK was created at the University of Pennsylvania. It is a widely used and convenient starting point for getting into NLP. After learning its concepts, you can explore other libraries to build your “skyscraper” NLP applications.
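As a starting point for following along, a minimal setup sketch (an assumption about the environment, not the article's own code):

```python
# Assumed setup: pip install nltk
import nltk

# Pull down the core resources most NLTK examples rely on
nltk.download("punkt")
nltk.download("averaged_perceptron_tagger")
nltk.download("wordnet")

tokens = nltk.word_tokenize("NLP is a subset of computational linguistics.")
print(nltk.pos_tag(tokens))
```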
The fundamental building blocks covered in this article are:
#natural language processing #nlp #python