Text data is everywhere, from your daily Facebook or Twitter feed to textbooks and customer feedback. Data is the new oil, and text is an oil well we need to drill deeper into. Just as crude oil must be refined before it can power our machines, text data must be cleaned and preprocessed before it can serve our purposes. This post covers a few simple approaches to cleaning and preprocessing text data for text analytics tasks.

We will demonstrate the approach on the Covid-19 Twitter dataset. There are three major components to this approach:

First, we clean the text and filter out all non-English tweets, as we want consistency in the data.

Second, we create a simplified version of our complex text data.

Finally, we vectorize the text and save the embeddings for future analysis; a minimal vectorization sketch follows this list.
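
As a preview of that third step, here is a minimal sketch using scikit-learn's TfidfVectorizer as a stand-in vectorizer; the embedding model used later in the series may differ, and the example documents here are made up.

from scipy import sparse
from sklearn.feature_extraction.text import TfidfVectorizer

# Hypothetical cleaned tweets standing in for the real dataset
docs = ["covid vaccine rollout", "stay home stay safe", "vaccine side effects"]

vectorizer = TfidfVectorizer()
embeddings = vectorizer.fit_transform(docs)  # sparse matrix: documents x vocabulary
sparse.save_npz("tweet_embeddings.npz", embeddings)  # save for future analysis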

If you want to check out the code, feel free to look at the notebooks for part 1, part 2, and part 3. You can also check out the whole project's blog post and code.

Part 1: Clean & Filter text

First, to simplify the text, we want to standardize it to English characters only. The following function removes all non-English characters.

import re
from nltk.tokenize import word_tokenize  # requires nltk.download('punkt')

def clean_non_english(txt):
    # Collapse runs of non-word characters into single spaces and lowercase
    txt = re.sub(r'\W+', ' ', txt)
    txt = txt.lower()
    # str.replace does not interpret regex, so use re.sub to keep letters only
    txt = re.sub(r'[^a-zA-Z]', ' ', txt)
    # Keep only tokens made up entirely of ASCII characters
    word_tokens = word_tokenize(txt)
    filtered_word = [w for w in word_tokens if all(ord(c) < 128 for c in w)]
    return " ".join(filtered_word)

We can do even better by removing stopwords. Stopwords are common words that appear in English sentences without contributing much to the meaning. We will use the nltk package to filter them out. Since our main task is visualizing the common themes of tweets with a word cloud, this step is necessary to avoid overly common words like “the,” “a,” etc.

However, if your task requires full sentence structure, like next-word prediction or grammar checking, you can skip this step.

import nltk
nltk.download('punkt')      # one-time download
nltk.download('stopwords')  # one-time download
import numpy as np
from nltk.corpus import stopwords
from nltk.tokenize import word_tokenize

stop_words = set(stopwords.words('english'))

def clean_text(english_txt):
    try:
        # Tokenize and drop words found in the NLTK English stopword list
        word_tokens = word_tokenize(english_txt)
        filtered_word = [w for w in word_tokens if w not in stop_words]
        return " ".join(filtered_word)
    except TypeError:
        # Non-string input (e.g., a NaN cell in a DataFrame) becomes NaN
        return np.nan
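
In practice, you would apply both cleaners row by row. Below is a minimal sketch assuming a pandas DataFrame df with a text column; the frame and column names are hypothetical, not from the original post.

import pandas as pd

# Hypothetical frame standing in for the Covid-19 Twitter dataset
df = pd.DataFrame({"text": ["The pandemic is not over!", None]})

# Chain both cleaning steps; non-string rows turn into NaN and are dropped
df["clean_text"] = df["text"].apply(
    lambda t: clean_text(clean_non_english(t)) if isinstance(t, str) else np.nan
)
df = df.dropna(subset=["clean_text"])
print(df["clean_text"].tolist())  # ['pandemic']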
