For this example, we will use a Twitter dataset that ships with NLTK. It has been manually annotated, which makes it handy for quickly establishing model baselines. The sample dataset is split into positive and negative tweets: exactly 5000 of each. This exact match is no coincidence; the dataset is deliberately balanced, because balanced classes simplify the design of most computational methods used for sentiment analysis. Keep in mind, though, that this balance is artificial and does not reflect the real distribution of positive and negative tweets in a live Twitter stream. Let us import the dataset now, along with a few other libraries we will be using.
import nltk ## Python library for NLP
from nltk.corpus import twitter_samples ## sample Twitter dataset from NLTK
from collections import Counter
nltk.download('twitter_samples')
## select the set of positive and negative tweets
all_positive_tweets = twitter_samples.strings('positive_tweets.json')
all_negative_tweets = twitter_samples.strings('negative_tweets.json')
all_positive_tweets[0:10]
And we get:
['#FollowFriday @France_Inte @PKuchly57 @Milipol_Paris for being top engaged members in my community this week :)',
'@Lamb2ja Hey James! How odd :/ Please call our Contact Centre on 02392441234 and we will be able to assist you :) Many thanks!',
'@DespiteOfficial we had a listen last night :) As You Bleed is an amazing track. When are you in Scotland?!',
'@97sides CONGRATS :)',
'yeaaaah yippppy!!! my accnt verified rqst has succeed got a blue tick mark on my fb profile :) in 15 days',
'@BhaktisBanter @PallaviRuhail This one is irresistible :)\n#FlipkartFashionFriday http://t.co/EbZ0L2VENM',
"We don't like to keep our lovely customers waiting for long! We hope you enjoy! Happy Friday! - LWWF :) https://t.co/smyYriipxI",
'@Impatientraider On second thought, there’s just not enough time for a DD :) But new shorts entering system. Sheep must be buying.',
'Jgh , but we have to go to Bayan :D bye',
'As an act of mischievousness, am calling the ETL layer of our in-house warehousing app Katamari.\n\nWell… as the name implies :p.']
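As a quick sanity check, we can confirm the exact 5000/5000 split mentioned above:
print(len(all_positive_tweets))  ## 5000
print(len(all_negative_tweets))  ## 5000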
We will iterate over the tweets. For each word in a positive tweet, we will increase that word's count in both our positive counter and the total words counter; likewise, for each word in a negative tweet, we will increase its count in both our negative counter and the total words counter.
I found the Counter class to be useful in this task.
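As a minimal illustration of why (not part of the pipeline itself): a Counter returns 0 for missing keys, so we can increment counts without initializing them first.
c = Counter()
c['hello'] += 1  ## no KeyError: missing keys default to 0
c['hello']  ## 1
c['absent']  ## 0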
## Create three Counter objects to store positive, negative and total counts
positive_counts = Counter()
negative_counts = Counter()
total_counts = Counter()

for tweet in all_positive_tweets:
    for word in tweet.lower().split(" "):
        positive_counts[word] += 1
        total_counts[word] += 1

for tweet in all_negative_tweets:
    for word in tweet.lower().split(" "):
        negative_counts[word] += 1
        total_counts[word] += 1
Let’s have a look at the most common **positive** and **negative** words:
positive_counts.most_common()[0:10]
And we get:
[(':)', 3154),
('you', 1316),
('to', 1081),
('the', 1076),
('i', 1042),
('a', 920),
('for', 769),
('and', 688),
(':-)', 615),
(':d', 609)]
And for the negative:
negative_counts.most_common()[0:10]
And we get:
[(':(', 3723),
('i', 2093),
('to', 1090),
('the', 915),
('my', 738),
('you', 665),
('and', 660),
('a', 650),
('me', 627),
('so', 571)]
As you can see, common words like “the”, “a” and “i” appear very often in both positive and negative tweets. Instead of finding the most common words in positive or negative tweets, what you really want are the words found in positive tweets more often than in negative tweets, and vice versa. To accomplish this, you’ll need to calculate the ratios of word usage between positive and negative tweets.
pos_neg_ratios = Counter()

## Calculate the ratio of positive to negative uses of the most common words
## Consider words to be "common" if they appear more than 100 times in total
for term, cnt in total_counts.most_common():
    if cnt > 100:
        ## the +1 in the denominator avoids division by zero
        ## for words that never appear in negative tweets
        pos_neg_ratio = positive_counts[term] / float(negative_counts[term] + 1)
        pos_neg_ratios[term] = pos_neg_ratio
Let’s have a look at the top 20 words with the highest Positive Sentiment Score:
pos_neg_ratios.most_common()[0:20]
And we get:
[(':)', 3154.0),
(':-)', 615.0),
(':d', 609.0),
(':p', 128.0),
(':))', 108.0),
('thanks', 15.818181818181818),
('great', 8.941176470588236),
('thank', 8.25),
('happy', 7.12),
('hi', 6.045454545454546),
('<3', 5.291666666666667),
('nice', 4.7894736842105265),
('!', 4.7727272727272725),
('our', 2.78),
('new', 2.75),
('an', 2.659090909090909),
('follow', 2.6481481481481484),
('us', 2.6052631578947367),
('your', 2.470149253731343),
('good', 2.465909090909091)]
Let’s have a look at the top 20 words with the highest Negative Sentiment Score:
pos_neg_ratios.most_common()[::-1][0:20]
And we get:
[(':(((', 0.0),
(':((', 0.0),
(':-(', 0.0),
(':(', 0.0002685284640171858),
('sad', 0.03773584905660377),
('miss', 0.0759493670886076),
('followed', 0.11711711711711711),
('sorry', 0.12030075187969924),
('why', 0.17834394904458598),
('wish', 0.19540229885057472),
("can't", 0.23863636363636365),
('feel', 0.2857142857142857),
('wanna', 0.29473684210526313),
('want', 0.33796296296296297),
('please', 0.35),
('been', 0.3770491803278688),
('still', 0.3884297520661157),
('but', 0.4028436018957346),
('im', 0.421875),
('too', 0.4221105527638191)]
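Since pos_neg_ratios is an ordinary Counter, we can also look up the score of any individual common word directly, for example:
pos_neg_ratios['happy']  ## 7.12
pos_neg_ratios['sad']  ## 0.0377...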
As we can see, we got the expected results. The most positive tokens are the emoticons :), :-), :d, :p and :)) (notice that we have converted all letters to lower case, which is why :D and :P appear as :d and :p), together with words like thanks, great, happy and nice. On the negative side, the top tokens are the emoticons :(((, :((, :-( and :(, together with words like sad, miss and sorry.
Originally published at https://predictivehacks.com.