Adam Daniels


Getting Started with Natural Language Processing in Python

A significant portion of the data generated today is unstructured. Unstructured data includes social media comments, browsing history, and customer feedback. Have you found yourself in a situation with a bunch of textual data to analyze, and no idea how to proceed? Natural language processing in Python can help.

The objective of this tutorial is to enable you to analyze textual data in Python through the concepts of Natural Language Processing (NLP). You will first learn how to tokenize your text into smaller chunks, normalize words to their root forms, and then remove any noise in your documents to prepare them for further analysis.

Let’s get started!

Prerequisites

In this tutorial, we will use Python’s nltk library to perform all NLP operations on the text. At the time of writing, we used version 3.4 of nltk. To install the library, you can use the pip command in the terminal:

pip install nltk==3.4


To check which version of nltk you have in the system, you can import the library into the Python interpreter and check the version:

import nltk
print(nltk.__version__)


To perform certain actions within nltk in this tutorial, you may have to download specific resources. We will describe each resource as and when required.

However, if you would like to avoid downloading individual resources later in the tutorial and grab them now in one go, run the following command:

python -m nltk.downloader all


Step 1: Convert Text into Tokens

A computer system cannot find meaning in natural language by itself. The first step in processing natural language is to convert the original text into tokens. A token is a sequence of contiguous characters with some meaning. It is up to you to decide how to break a sentence into tokens. For instance, an easy method is to split a sentence on whitespace to break it into individual words.
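
As a quick illustration, a plain str.split() keeps punctuation attached to neighboring words, which is one reason to prefer a dedicated tokenizer:

print("Hi, this is a nice hotel.".split())
# ['Hi,', 'this', 'is', 'a', 'nice', 'hotel.']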

In the NLTK library, you can use the word_tokenize() function to convert a string to tokens. However, you will first need to download the punkt resource. Run the following in the Python interpreter:

nltk.download('punkt')


Next, you need to import word_tokenize from nltk.tokenize to use it.

from nltk.tokenize import word_tokenize
print(word_tokenize("Hi, this is a nice hotel."))


The output of the code is as follows:

['Hi', ',', 'this', 'is', 'a', 'nice', 'hotel', '.']


You’ll notice that word_tokenize does not simply split a string on whitespace, but also separates punctuation into its own tokens. It’s up to you whether to retain the punctuation marks in your analysis.

Step 2: Convert Words to their Base Forms

When you are processing natural language, you’ll often notice that there are various grammatical forms of the same word. For instance, “go”, “going” and “gone” are forms of the same verb, “go”.

While the needs of your project may require you to retain words in their various grammatical forms, let us discuss a way to convert the various grammatical forms of a word into its base form. There are two techniques that you can use to convert a word to its base form.

The first technique is stemming. Stemming is a simple algorithm that removes affixes from a word. There are various stemming algorithms available for use in NLTK. We will use the Porter algorithm in this tutorial.

We first import PorterStemmer from nltk.stem.porter. Next, we initialize the stemmer and store it in the stemmer variable, and then use the .stem() method to find the base form of a word.

from nltk.stem.porter import PorterStemmer 
stemmer = PorterStemmer()
print(stemmer.stem("going"))


The output of the code above is “go”. If you run the stemmer on the other forms of “go” listed above, you will notice that the stemmer returns the same base form, “go”. However, as stemming is only a simple algorithm that strips affixes, it fails when a word’s stem is not itself a valid word.

When you try the stemmer on the word “constitutes”, it gives an unintuitive result.

print(stemmer.stem("constitutes"))


You will notice the output is “constitut”.

This issue can be solved by moving to a more sophisticated approach that finds the base form of a word in its context. The process is called lemmatization. Lemmatization normalizes a word based on the context and vocabulary of the text. In NLTK, you can lemmatize words using the WordNetLemmatizer class.

First, you need to download the wordnet resource from the NLTK downloader in the Python interpreter.

nltk.download('wordnet')


Once it is downloaded, you need to import the WordNetLemmatizer class and initialize it.

from nltk.stem.wordnet import WordNetLemmatizer 
lem = WordNetLemmatizer()


To use the lemmatizer, use the .lemmatize() method. It takes two arguments: the word and its context. In our example, we will use “v” (for verb) as the context. Let us explore the context argument further after looking at the output of the .lemmatize() method.

print(lem.lemmatize('constitutes', 'v'))


You’ll notice that the .lemmatize() method correctly converts the word “constitutes” to its base form, “constitute”. You’ll also notice that lemmatization takes longer than stemming, because the algorithm is more complex.
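
If you want to see the difference on your own machine, a rough micro-benchmark with the standard library’s timeit module looks like this (a minimal sketch; absolute timings will vary):

import timeit
from nltk.stem.porter import PorterStemmer
from nltk.stem.wordnet import WordNetLemmatizer

stemmer = PorterStemmer()
lem = WordNetLemmatizer()

# time 10,000 calls of each normalizer on the same word
print(timeit.timeit(lambda: stemmer.stem("constitutes"), number=10000))
print(timeit.timeit(lambda: lem.lemmatize("constitutes", "v"), number=10000))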

Let’s check how to determine the second argument of the .lemmatize() method programmatically. NLTK has a pos_tag function which helps in determining the context of a word in a sentence. However, you first need to download the averaged_perceptron_tagger resource through the NLTK downloader.

nltk.download('averaged_perceptron_tagger')


Next, import the pos_tag function and run it on a sentence.

from nltk.tag import pos_tag
sample = "Hi, this is a nice hotel."
print(pos_tag(word_tokenize(sample)))


You will notice that the output is a list of pairs. Each pair consists of a token and its tag, which signifies the context of the token in the overall text. Notice that the tag of a punctuation mark is the punctuation mark itself.

[('Hi', 'NNP'),
 (',', ','),
 ('this', 'DT'),
 ('is', 'VBZ'),
 ('a', 'DT'),
 ('nice', 'JJ'),
 ('hotel', 'NN'),
 ('.', '.')]


How do you decode the context of each token? The Penn Treebank project maintains the full list of tags and their corresponding meanings. Notice that the tags of all nouns begin with “N”, and those of all verbs begin with “V”. We can use this information in the second argument of our .lemmatize() method.
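
As a side note, NLTK can print the documentation for any tag directly in the interpreter; the sketch below assumes the tagsets resource, which is a separate download:

nltk.download('tagsets')
nltk.help.upenn_tagset('JJ')  # prints the definition and examples for the 'JJ' (adjective) tag

With the tag prefixes in mind, the following function maps each tag to the lemmatizer’s context argument: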

def lemmatize_tokens(tokens):
    lemmatizer = WordNetLemmatizer()
    lemmatized_tokens = []
    for word, tag in pos_tag(tokens):
        if tag.startswith('NN'):
            pos = 'n'
        elif tag.startswith('VB'):
            pos = 'v'
        else:
            pos = 'a'
        lemmatized_tokens.append(lemmatizer.lemmatize(word, pos))
    return lemmatized_tokens

sample = "Legal authority constitutes all magistrates."
print(lemmatize_tokens(word_tokenize(sample)))


The output of the code above is as follows:

['Legal', 'authority', 'constitute', 'all', 'magistrate', '.']


This output is as expected, with “constitutes” and “magistrates” converted to “constitute” and “magistrate”, respectively.

Step 3: Data Cleaning

The next step in preparing data is to clean the data and remove anything that does not add meaning to your analysis. Broadly, we will look at removing punctuation and stop words from your analysis.

Removing punctuation is a fairly easy task. The punctuation constant of the string module contains all the punctuation marks in English.

import string
print(string.punctuation)


The output of this code snippet is as follows:

'!"#$%&\'()*+,-./:;<=>?@[\\]^_`{|}~'


In order to remove punctuation from a list of tokens, you can check each token for membership in string.punctuation:

tokens = word_tokenize("Hi, this is a nice hotel.")
tokens = [token for token in tokens if token not in string.punctuation]
print(tokens)
# ['Hi', 'this', 'is', 'a', 'nice', 'hotel']


Next, we will focus on removing stop words. Stop words are commonly used words in a language, like “I”, “a” and “the”, which add little meaning to text when analyzing it. We will therefore remove stop words from our analysis. First, download the stopwords resource from the NLTK downloader.

nltk.download('stopwords')


Once your download is complete, import stopwords from nltk.corpus and use the .words() method with “english” as the argument. This returns a list of 179 English stop words.

from nltk.corpus import stopwords
stop_words = stopwords.words('english')


We can combine the lemmatization example with the concepts discussed in this section to create the following function, clean_data(). Additionally, before checking whether a word is in the stop words list, we convert it to lowercase. This way, we still catch a stop word when it occurs at the start of a sentence and is capitalized.

def clean_data(tokens, stop_words = ()):

    lemmatizer = WordNetLemmatizer()
    cleaned_tokens = []

    for token, tag in pos_tag(tokens):
        if tag.startswith('NN'):
            pos = 'n'
        elif tag.startswith('VB'):
            pos = 'v'
        else:
            pos = 'a'

        # normalize the token using the tag-derived context
        token = lemmatizer.lemmatize(token, pos)

        # keep the token only if it is not punctuation and not a stop word
        if token not in string.punctuation and token.lower() not in stop_words:
            cleaned_tokens.append(token)
    return cleaned_tokens

sample = "The quick brown fox jumps over the lazy dog."
stop_words = stopwords.words('english')

print(clean_data(word_tokenize(sample), stop_words))


The output of the example is as follows:

['quick', 'brown', 'fox', 'jump', 'lazy', 'dog']


As you can see, the punctuation and stop words have been removed.

Word Frequency Distribution

Now that you are familiar with the basic cleaning techniques in NLP, let’s try to find the frequency of words in a text. For this exercise, we’ll use the text of the fairy tale The Mouse, The Bird and The Sausage, which is freely available on Project Gutenberg. We’ll store the text of this fairy tale in a string, text.
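
How you get the text into Python is up to you; a minimal sketch, assuming you have saved the story to a local file (story.txt is a hypothetical filename), would be:

# read the fairy tale from a local file into a single string
with open('story.txt', encoding='utf-8') as f:
    text = f.read()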

First, we tokenize text and then clean it using the function clean_data that we defined above.

tokens = word_tokenize(text)
cleaned_tokens = clean_data(tokens, stop_words = stop_words)


To find the frequency distribution of words in your text, you can use the FreqDist class of NLTK. Initialize the class with the tokens as the argument, then use the .most_common() method to find the most frequently occurring terms. Let us find the top ten terms in this case.

from nltk import FreqDist

freq_dist = FreqDist(cleaned_tokens)
print(freq_dist.most_common(10))


Here are the ten most commonly occurring terms in this fairy tale.

[('bird', 15), ('sausage', 11), ('mouse', 8), ('wood', 7), ('time', 6), ('long', 5), ('make', 5), ('fly', 4), ('fetch', 4), ('water', 4)]

Unsurprisingly, the three most common terms are the three main characters in the fairy tale.

The raw frequency of words may not be enough when analyzing text. Typically, the next step in NLP is to compute a statistic, TF-IDF (term frequency-inverse document frequency), which signifies how important a word is in a list of documents.
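
As a pointer for that next step, here is a minimal sketch of computing TF-IDF with scikit-learn’s TfidfVectorizer; this assumes scikit-learn is installed separately (pip install scikit-learn), and the two-document corpus is purely illustrative:

from sklearn.feature_extraction.text import TfidfVectorizer

documents = [
    "the bird fetched the water",
    "the mouse and the sausage kept house",
]

vectorizer = TfidfVectorizer()
tfidf_matrix = vectorizer.fit_transform(documents)

# each row is a document, each column a term; the values are TF-IDF weights
print(tfidf_matrix.toarray())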

Conclusion

In this post, you were introduced to natural language processing in Python. You converted text to tokens, converted words to their base forms and finally, cleaned the text to remove any part which didn’t add meaning to the analysis.

Although you looked at simple NLP tasks in this tutorial, there are many techniques to explore. One may wish to perform topic modelling on textual data, where the objective is to find a common topic that a text might be talking about. A more complex task in NLP is the implementation of a sentiment analysis model to determine the feeling behind any text.

What procedures do you follow when you are given a pile of text to work with? Let us know in the comments below.

#python #machine-learning

Shubham Ankit


How to Automate Excel with Python | Python Excel Tutorial (OpenPyXL)

How to Automate Excel with Python

In this article, we will show how we can use Python to automate Excel. A useful Python library for this is Openpyxl, which we will use to learn Excel automation.

What is OPENPYXL

Openpyxl is a Python library that is used to read from an Excel file or write to an Excel file. Data scientists use Openpyxl for data analysis, data copying, data mining, drawing charts, styling sheets, adding formulas, and more.

Workbook: A spreadsheet is represented as a workbook in openpyxl. A workbook consists of one or more sheets.

Sheet: A sheet is a single page composed of cells for organizing data.

Cell: The intersection of a row and a column is called a cell. Usually represented by A1, B5, etc.

Row: A row is a horizontal line represented by a number (1,2, etc.).

Column: A column is a vertical line represented by a capital letter (A, B, etc.).

Openpyxl can be installed using the pip command and it is recommended to install it in a virtual environment.

pip install openpyxl

CREATE A NEW WORKBOOK

We start by creating a new spreadsheet, which is called a workbook in Openpyxl. We import the Workbook class from openpyxl and call Workbook(), which creates a new workbook.

from openpyxl import Workbook

# creates a new workbook
wb = Workbook()

# gets the first active worksheet
ws = wb.active

# creating new worksheets by using the create_sheet method
ws1 = wb.create_sheet("sheet1", 0)   # inserts at first position
ws2 = wb.create_sheet("sheet2")      # inserts at last position
ws3 = wb.create_sheet("sheet3", -1)  # inserts at penultimate position

# renaming a sheet
ws.title = "Example"

# save the workbook
wb.save(filename = "example.xlsx")

READING DATA FROM WORKBOOK

We load the file using the function load_workbook(), which takes the filename as an argument. The file must be saved in the same working directory.

import openpyxl

# loading a workbook
wb = openpyxl.load_workbook("example.xlsx")

GETTING SHEETS FROM THE LOADED WORKBOOK

# getting sheet names
print(wb.sheetnames)
# ['sheet1', 'Sheet', 'sheet3', 'sheet2']

# getting a particular sheet
sheet1 = wb["sheet2"]

# getting the sheet title
print(sheet1.title)
# 'sheet2'

# getting the active sheet
sheetactive = wb.active
print(sheetactive.title)
# 'sheet1'

 

ACCESSING CELLS AND CELL VALUES

# get a cell from the sheet
sheet1["A1"]
# <Cell 'Sheet1'.A1>

# get the cell value
ws["A1"].value
# 'Segment'

# accessing a cell using row and column, and assigning a value
d = ws.cell(row = 4, column = 2, value = 10)
d.value
# 10

 

ITERATING THROUGH ROWS AND COLUMNS

# looping through each row and column
for x in range(1, 5):
    for y in range(1, 5):
        print(x, y, ws.cell(row = x, column = y).value)

# getting the highest row number
ws.max_row
# 701

# getting the highest column number
ws.max_column
# 19

There are two functions for iterating through rows and columns:

iter_rows() returns the rows
iter_cols() returns the columns

Both accept the keyword arguments min_row, max_row, min_col and max_col (for example, min_row = 4, max_row = 5, min_col = 2, max_col = 5) to set the boundaries for the iteration.

Example:

# iterating rows
for row in ws.iter_rows(min_row = 2, max_col = 3, max_row = 3):
    for cell in row:
        print(cell)

# <Cell 'Sheet1'.A2>
# <Cell 'Sheet1'.B2>
# <Cell 'Sheet1'.C2>
# <Cell 'Sheet1'.A3>
# <Cell 'Sheet1'.B3>
# <Cell 'Sheet1'.C3>

# iterating columns
for col in ws.iter_cols(min_row = 2, max_col = 3, max_row = 3):
    for cell in col:
        print(cell)

# <Cell 'Sheet1'.A2>
# <Cell 'Sheet1'.A3>
# <Cell 'Sheet1'.B2>
# <Cell 'Sheet1'.B3>
# <Cell 'Sheet1'.C2>
# <Cell 'Sheet1'.C3>

To get all the rows of the worksheet, we use worksheet.rows, and to get all the columns, we use worksheet.columns. Similarly, to iterate only through the values, we use worksheet.values.


Example:

for row in ws.values:
    for value in row:
        print(value)
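
For comparison, here is a minimal sketch of the same kind of loop written with worksheet.rows, which yields cell objects rather than bare values (worksheet.columns works the same way, column by column):

for row in ws.rows:
    for cell in row:
        print(cell.value)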

 

WRITING DATA TO AN EXCEL FILE

Writing to a workbook can be done in many ways, such as adding a formula, adding charts or images, updating cell values, and inserting rows and columns. We will discuss each of these with an example.

 

CREATING AND SAVING A NEW WORKBOOK

 

import openpyxl

# creates a new workbook
wb = openpyxl.Workbook()

# saving the workbook
wb.save("new.xlsx")

 

ADDING AND REMOVING SHEETS

 

# creating a new sheet
ws1 = wb.create_sheet(title = "sheet 2")

# creating a new sheet at index 0
ws2 = wb.create_sheet(index = 0, title = "sheet 0")

# checking the sheet names
wb.sheetnames
# ['sheet 0', 'Sheet', 'sheet 2']

# deleting a sheet
del wb['sheet 0']

# checking the sheet names again
wb.sheetnames
# ['Sheet', 'sheet 2']

 

ADDING CELL VALUES

 

# checking the cell's current value
ws['B2'].value
# None

# adding a value to the cell
ws['B2'] = 367

# checking the value again
ws['B2'].value
# 367

 

ADDING FORMULAS

 

We often require formulas to be included in our Excel sheets. With the Openpyxl module, we can easily add a formula to a cell, just as you add a value.
 

For example:

import openpyxl

wb = openpyxl.load_workbook("new1.xlsx")
ws = wb['Sheet']

# writing a formula into cell A9
ws['A9'] = '=SUM(A2:A8)'

wb.save("new2.xlsx")

The above program adds the formula =SUM(A2:A8) to cell A9, which sums the values in cells A2 through A8.

 

MERGE/UNMERGE CELLS

Two or more cells can be merged into a rectangular area using the method merge_cells(), and similarly they can be unmerged using the method unmerge_cells().

For example, to merge cells:

#merge cells B2 to C9
ws.merge_cells('B2:C9')
ws['B2'] = "Merged cells"

Adding the above code to the previous example merges cells B2 through C9 into a single cell displaying the text "Merged cells".

UNMERGE CELLS

 

#unmerge cells B2 to C9
ws.unmerge_cells('B2:C9')

The above code will unmerge cells from B2 to C9.

INSERTING AN IMAGE

To insert an image, we import the Image class from the module openpyxl.drawing.image (note that image handling in Openpyxl requires the Pillow library to be installed). We then load our image and add it to a cell, as shown in the example below.

Example:

import openpyxl
from openpyxl.drawing.image import Image

wb = openpyxl.load_workbook("new1.xlsx")
ws = wb['Sheet']

# loading the image (should be in the same folder)
img = Image('logo.png')
ws['A1'] = "Adding image"

# adjusting size
img.height = 130
img.width = 200

# adding the image to cell A3
ws.add_image(img, 'A3')

wb.save("new2.xlsx")

The saved workbook now shows the image anchored at cell A3.

CREATING CHARTS

Charts are essential for visualizing data. We can create charts from Excel data using the openpyxl.chart module. Different kinds of charts, such as line charts, bar charts and 3D bar charts, can be created. To build a chart, we create a Reference object that holds the data to be charted, which is nothing but a selection of cells (rows and columns). The example below uses sample data to create a 3D bar chart:

Example

import openpyxl
from openpyxl.chart import BarChart3D, Reference

wb = openpyxl.load_workbook("example.xlsx")
ws = wb.active

# the data to chart: column C, rows 2 to 40
values = Reference(ws, min_col = 3, min_row = 2, max_col = 3, max_row = 40)
chart = BarChart3D()
chart.add_data(values)

# anchor the chart's top-left corner at cell E3
ws.add_chart(chart, "E3")
wb.save("MyChart.xlsx")

The chart is created from the selected data and placed with its top-left corner at cell E3.


How to Automate Excel with Python with Video Tutorial

Welcome to another video! In this video, we will cover how we can use Python to automate Excel. I'll be going over everything from creating workbooks to accessing individual cells and styling cells. There are a ton of things that you can do with Excel, but I'll just be covering the core things in OpenPyXL.

⭐️ Timestamps ⭐️
00:00 | Introduction
02:14 | Installing openpyxl
03:19 | Testing Installation
04:25 | Loading an Existing Workbook
06:46 | Accessing Worksheets
07:37 | Accessing Cell Values
08:58 | Saving Workbooks
09:52 | Creating, Listing and Changing Sheets
11:50 | Creating a New Workbook
12:39 | Adding/Appending Rows
14:26 | Accessing Multiple Cells
20:46 | Merging Cells
22:27 | Inserting and Deleting Rows
23:35 | Inserting and Deleting Columns
24:48 | Copying and Moving Cells
26:06 | Practical Example, Formulas & Cell Styling

📄 Resources 📄
OpenPyXL Docs: https://openpyxl.readthedocs.io/en/stable/ 
Code Written in This Tutorial: https://github.com/techwithtim/ExcelPythonTutorial 
Subscribe: https://www.youtube.com/c/TechWithTim/featured 

#python 

Sival Alethea


Natural Language Processing (NLP) Tutorial with Python & NLTK

This video will provide you with a comprehensive and detailed knowledge of Natural Language Processing, popularly known as NLP. You will also learn about the different steps involved in processing the human language like Tokenization, Stemming, Lemmatization and more. Python, NLTK, & Jupyter Notebook are used to demonstrate the concepts.

📺 The video in this post was made by freeCodeCamp.org
The origin of the article: https://www.youtube.com/watch?v=X2vAabgKiuM&list=PLWKjhJtqVAbnqBxcdjVGgT3uVR10bzTEB&index=16
Thanks for visiting and watching! Please don’t forget to leave a like, comment and share!

#natural language processing #nlp #python #python & nltk #nltk #natural language processing (nlp) tutorial with python & nltk

Ray Patel

Lambda, Map, Filter functions in python

Welcome to my blog! In this article, we will learn about Python’s lambda function, map function, and filter function.

Lambda function in Python: a lambda is a one-line anonymous function. It takes any number of arguments but can only have one expression. The Python lambda syntax is:

Syntax: x = lambda arguments : expression

Now I will show you some Python lambda function examples:
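
For instance, here are a few minimal, illustrative examples of lambda on its own and combined with map and filter (the variable names and numbers are arbitrary):

# a lambda that doubles its argument
double = lambda x: x * 2
print(double(5))  # 10

# map applies a function to every item of an iterable
numbers = [1, 2, 3, 4]
print(list(map(lambda x: x ** 2, numbers)))  # [1, 4, 9, 16]

# filter keeps the items for which the function returns True
print(list(filter(lambda x: x % 2 == 0, numbers)))  # [2, 4]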

#python #anonymous function python #filter function in python #lambda #lambda python 3 #map python #python filter #python filter lambda #python lambda #python lambda examples #python map

Shardul Bhatt


Why use Python for Software Development

Few programming languages are as versatile as Python. It enables building cutting-edge applications effortlessly, and developers are still exploring the full potential of end-to-end Python development services in various areas.

By areas, we mean FinTech, HealthTech, InsureTech, cybersecurity, and more. These are New Economy sectors, and Python can serve every one of them. Most of them require massive computational power, and Python code is dynamic and powerful, capable of handling heavy traffic and substantial algorithmic loads.

Software development is multidimensional today. Enterprise software requires intelligent applications with AI and ML capabilities. Consumer-facing applications require data analysis to deliver a better customer experience. Netflix, Trello, and Amazon are real examples of such applications, and Python helps build them with ease.

5 Reasons to Use Python for Programming Web Apps

Python can do so many things that developers can’t find enough reasons to admire it. Python application development isn’t restricted to web and enterprise applications; it is exceptionally adaptable and excellent for a wide range of uses.

Robust frameworks 

Python is known for its tools and frameworks; there’s a framework for everything. Django is useful for building web applications, enterprise applications, scientific applications, and numerical computing. Flask is another web development framework with minimal dependencies.

Web2py, CherryPy, and Falcon offer powerful capabilities for customizing Python development services. Most of them are open-source frameworks that allow rapid development.

Simple to read and write

Python has a clean syntax, one that resembles the English language. New Python developers can easily understand where they stand in the development process, and the ease of writing allows quick application building.

The motivation behind Python, as stated by its creator Guido van Rossum, was to enable even beginner developers to understand the language. Its simplicity also lets developers make quick changes without getting bogged down in unnecessary details.

Used by the best

Alright - Python isn't simply one more programming language. It should have something, which is the reason the business giants use it. Furthermore, that too for different purposes. Developers at Google use Python to assemble framework organization systems, parallel information pusher, code audit, testing and QA, and substantially more. Netflix utilizes Python web development services for its recommendation algorithm and media player. 

Massive community support 

Python has a steadily growing community that offers enormous support, from beginners to experts. There are plenty of tutorials, documentation, and guides available for Python web development solutions.

Today, many universities start with Python, adding to the number of people in the community. Python developers often collaborate on projects and help each other with algorithmic, functional, and application-level problem solving.

Progressive applications 

Python is the biggest enabler of data science, machine learning, and artificial intelligence at any enterprise software development company. Its use cases in cutting-edge applications are the most compelling reason for its success. Python is the second most popular tool after R for data analytics.

The ease of organizing, managing, and visualizing data through dedicated libraries makes it ideal for data-driven applications. TensorFlow for neural networks and OpenCV for computer vision are two of Python’s most popular libraries for machine learning applications.

Summary

Considering the advances in software and technology, Python is a yes for a diverse range of applications. Game development, web application development services, GUI development, ML and AI development, enterprise and consumer applications: all of them use Python to its full potential.

The drawbacks of Python web development solutions are often overlooked by developers and organizations because of the advantages it brings. They focus on quality over speed and performance over errors. That is why it makes sense to use Python for building the applications of the future.

#python development services #python development company #python app development #python development #python in web development #python software development

Art Lind

Python Tricks Every Developer Should Know

Python is awesome. It’s one of the easiest languages, with simple and intuitive syntax, but wait: have you ever thought that there might be ways to write your Python code even more simply?

In this tutorial, you’re going to learn a variety of Python tricks that you can use to write your Python code in a more readable and efficient way like a pro.

Let’s get started

Swapping values in Python

Instead of creating a temporary variable to hold one of the values while swapping, you can do this:

>>> FirstName = "kalebu"
>>> LastName = "Jordan"
>>> FirstName, LastName = LastName, FirstName
>>> print(FirstName, LastName)
Jordan kalebu

#python #python-programming #python3 #python-tutorials #learn-python #python-tips #python-skills #python-development