For this example, we will use a Twitter dataset that ships with NLTK. It has been manually annotated, which makes it handy for quickly establishing model baselines. The sample dataset is split into positive and negative tweets: exactly 5000 of each. That exact match is no coincidence: the dataset was deliberately balanced, because balanced classes simplify the design of most computational methods used for sentiment analysis. Live Twitter streams do not show this distribution, so be aware that this balance of classes is artificial. Let us import the dataset now, along with a few other libraries we will be using.
import nltk  ## Python library for NLP
from nltk.corpus import twitter_samples  ## sample Twitter dataset from NLTK
from collections import Counter

nltk.download('twitter_samples')

## select the set of positive and negative tweets
all_positive_tweets = twitter_samples.strings('positive_tweets.json')
all_negative_tweets = twitter_samples.strings('negative_tweets.json')
all_positive_tweets[0:10]
And we get:
['#FollowFriday @France_Inte @PKuchly57 @Milipol_Paris for being top engaged members in my community this week :)', '@Lamb2ja Hey James! How odd :/ Please call our Contact Centre on 02392441234 and we will be able to assist you :) Many thanks!', '@DespiteOfficial we had a listen last night :) As You Bleed is an amazing track. When are you in Scotland?!', '@97sides CONGRATS :)', 'yeaaaah yippppy!!! my accnt verified rqst has succeed got a blue tick mark on my fb profile :) in 15 days', '@BhaktisBanter @PallaviRuhail This one is irresistible :)\n#FlipkartFashionFriday http://t.co/EbZ0L2VENM', "We don't like to keep our lovely customers waiting for long! We hope you enjoy! Happy Friday! - LWWF :) https://t.co/smyYriipxI", '@Impatientraider On second thought, there’s just not enough time for a DD :) But new shorts entering system. Sheep must be buying.', 'Jgh , but we have to go to Bayan :D bye', 'As an act of mischievousness, am calling the ETL layer of our in-house warehousing app Katamari.\n\nWell… as the name implies :p.']
We will iterate over the tweets. For each word in a positive tweet, we will increase the count for that word in both our positive counter and the total-words counter; likewise, for each word in a negative tweet, we will increase the count for that word in both our negative counter and the total-words counter.
I found the Counter class to be useful in this task.
## Create three Counter objects to store positive, negative and total counts
positive_counts = Counter()
negative_counts = Counter()
total_counts = Counter()

for tweet in all_positive_tweets:
    for word in tweet.lower().split(" "):
        positive_counts[word] += 1
        total_counts[word] += 1

for tweet in all_negative_tweets:
    for word in tweet.lower().split(" "):
        negative_counts[word] += 1
        total_counts[word] += 1
Let’s have a look at the most common **positive** and **negative** words:
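The listing that follows (and the negative one further down) was presumably produced with `Counter.most_common`; here is a minimal, self-contained sketch, with a toy counter standing in for the positive_counts built above:

```python
from collections import Counter

## Toy counter standing in for the positive_counts built above
positive_counts = Counter({':)': 5, 'you': 3, 'to': 2, 'the': 1})

## most_common(n) returns the n (word, count) pairs with the highest counts
print(positive_counts.most_common(3))  ## [(':)', 5), ('you', 3), ('to', 2)]
```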
And we get:
[(':)', 3154), ('you', 1316), ('to', 1081), ('the', 1076), ('i', 1042), ('a', 920), ('for', 769), ('and', 688), (':-)', 615), (':d', 609)]
And for the negative:
And we get:
[(':(', 3723), ('i', 2093), ('to', 1090), ('the', 915), ('my', 738), ('you', 665), ('and', 660), ('a', 650), ('me', 627), ('so', 571)]
As you can see, common words like “the”, “a”, and “i” appear very often in both positive and negative tweets. Instead of finding the most common words in positive or negative tweets, what you really want are the words found in positive tweets more often than in negative ones, and vice versa. To accomplish this, you’ll need to calculate the ratios of word usage between positive and negative tweets.
pos_neg_ratios = Counter()

## Calculate the ratios of positive and negative uses of the most common words
## Consider words to be "common" if they've been used more than 100 times
for term, cnt in list(total_counts.most_common()):
    if cnt > 100:
        pos_neg_ratio = positive_counts[term] / float(negative_counts[term] + 1)
        pos_neg_ratios[term] = pos_neg_ratio
Let’s have a look at the top 20 words with the highest Positive Sentiment Score:
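The scores below were presumably retrieved with `most_common` on the ratio counter; a self-contained sketch, with toy ratios standing in for the pos_neg_ratios computed above:

```python
from collections import Counter

## Toy ratios standing in for the pos_neg_ratios computed above
pos_neg_ratios = Counter({':)': 3154.0, 'thanks': 15.8, 'sad': 0.04, ':(': 0.0003})

## most_common(n) returns the n terms with the highest positive-to-negative ratio
print(pos_neg_ratios.most_common(2))  ## [(':)', 3154.0), ('thanks', 15.8)]
```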
And we get:
[(':)', 3154.0), (':-)', 615.0), (':d', 609.0), (':p', 128.0), (':))', 108.0), ('thanks', 15.818181818181818), ('great', 8.941176470588236), ('thank', 8.25), ('happy', 7.12), ('hi', 6.045454545454546), ('<3', 5.291666666666667), ('nice', 4.7894736842105265), ('!', 4.7727272727272725), ('our', 2.78), ('new', 2.75), ('an', 2.659090909090909), ('follow', 2.6481481481481484), ('us', 2.6052631578947367), ('your', 2.470149253731343), ('good', 2.465909090909091)]
Let’s have a look at the top 20 words with the highest Negative Sentiment Score:
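The most negative terms are the ones with the lowest ratios; one way to pull the bottom n in ascending order is to reverse-slice `most_common()`. Another toy sketch, with pos_neg_ratios again standing in for the counter computed above:

```python
from collections import Counter

## Toy ratios standing in for the pos_neg_ratios computed above
pos_neg_ratios = Counter({':)': 3154.0, 'thanks': 15.8, 'sad': 0.04, ':(': 0.0003})

## most_common() sorts descending; the reverse slice [:-n-1:-1] yields the
## n lowest-ratio terms in ascending order
print(pos_neg_ratios.most_common()[:-3:-1])  ## [(':(', 0.0003), ('sad', 0.04)]
```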
And we get:
[(':(((', 0.0), (':((', 0.0), (':-(', 0.0), (':(', 0.0002685284640171858), ('sad', 0.03773584905660377), ('miss', 0.0759493670886076), ('followed', 0.11711711711711711), ('sorry', 0.12030075187969924), ('why', 0.17834394904458598), ('wish', 0.19540229885057472), ("can't", 0.23863636363636365), ('feel', 0.2857142857142857), ('wanna', 0.29473684210526313), ('want', 0.33796296296296297), ('please', 0.35), ('been', 0.3770491803278688), ('still', 0.3884297520661157), ('but', 0.4028436018957346), ('im', 0.421875), ('too', 0.4221105527638191)]
As we can see, we got the expected results; more specifically, emoticons dominate both lists, with ':)' and variants like ':))' topping the positive side and ':(' variants topping the negative one. Notice that we have converted all the letters to lower case, which is why ':d' appears instead of ':D'.
Originally published at https://predictivehacks.com.
Welcome to my blog! In this article, we will learn about Python's lambda function, map function, and filter function.
Lambda function in Python: a lambda is a one-line anonymous function; it takes any number of arguments but can only have one expression. The Python lambda syntax is:
Syntax: x = lambda arguments : expression
Now I will show you some Python lambda function examples:
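The original examples are not shown here; below is a minimal sketch of lambda on its own and combined with map and filter (the names square and nums are my own illustrative choices):

```python
## A lambda that squares its argument
square = lambda x: x ** 2
print(square(4))  ## 16

nums = [1, 2, 3, 4]
## map applies the lambda to every element
print(list(map(lambda x: x * 2, nums)))  ## [2, 4, 6, 8]
## filter keeps only the elements for which the lambda returns True
print(list(filter(lambda x: x % 2 == 0, nums)))  ## [2, 4]
```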
Python is awesome; it’s one of the easiest languages, with simple and intuitive syntax. But wait: have you ever thought that there might be ways to write your Python code more simply?
In this tutorial, you’re going to learn a variety of Python tricks that you can use to write your Python code in a more readable and efficient way like a pro.
Swapping values in Python
Instead of creating a temporary variable to hold one of the values while swapping, you can do this instead:
>>> FirstName = "kalebu"
>>> LastName = "Jordan"
>>> FirstName, LastName = LastName, FirstName
>>> print(FirstName, LastName)
Jordan kalebu
Today you’re going to learn how to use Python programming in a way that can ultimately save a lot of space on your drive by removing all the duplicates.
In many situations you may find yourself with duplicate files on your disk, but tracking and checking them manually can be tedious.
Here’s a solution.
Instead of searching through your disk by hand to see whether a file has a duplicate, you can automate the process by writing a program that recursively walks the disk and removes all the duplicates it finds; that’s what this article is about.
But how do we do it?
If we were to read each whole file and compare it against every other file recursively through the given directory, it would take a very long time. So how do we do it?
The answer is hashing. With hashing, we can generate a fixed string of letters and numbers that acts as the identity of a given file; if we find any other file with the same identity, we delete it.
There’s a variety of hashing algorithms out there, such as MD5, SHA-1, and SHA-256.
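To make the idea concrete, here is a minimal sketch using MD5 from Python's hashlib; the function names and the return shape are my own illustrative choices, and it only reports duplicate groups rather than deleting anything:

```python
import hashlib
import os

def file_hash(path, chunk_size=65536):
    ## Return the MD5 hex digest of a file, read in chunks to keep memory low
    h = hashlib.md5()
    with open(path, 'rb') as f:
        for chunk in iter(lambda: f.read(chunk_size), b''):
            h.update(chunk)
    return h.hexdigest()

def find_duplicates(root):
    ## Walk root recursively and group file paths by their content hash;
    ## return only the groups containing more than one file
    seen = {}
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            seen.setdefault(file_hash(path), []).append(path)
    return [paths for paths in seen.values() if len(paths) > 1]
```

From there, deleting every file after the first in each group (with os.remove) would complete the clean-up.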
Magic methods are special methods that give us the ability to hook into built-in syntactic features such as ‘<’, ‘>’, ‘==’, ‘+’, etc.
You must have worked with such methods without knowing they are magic methods. Magic methods can be identified by their names, which start and end with a double underscore, like __init__, __call__, and __str__. These methods are also called dunder methods, because their names start and end with a Double Underscore (Dunder).
Now there are a number of such special methods, which you might have come across too, in Python. We will just be taking an example of a few of them to understand how they work and how we can use them.
class AnyClass:
    def __init__(self):
        print("Init called on its own")

obj = AnyClass()
The first example is __init__, and as the name suggests, it is used for initializing objects. The __init__ method is called on its own: whenever an object is created for the class, __init__ is invoked automatically.
The output of the above code is given below. Note how we did not call __init__ ourselves; it got invoked as we created an object of class AnyClass.
Init called on its own
Let’s move on to another example: __add__ gives us the ability to hook into the built-in + operator. Let’s see how:
class AnyClass:
    def __init__(self, var):
        self.some_var = var

    def __add__(self, other_obj):
        print("Calling the add method")
        return self.some_var + other_obj.some_var

obj1 = AnyClass(5)
obj2 = AnyClass(6)
obj1 + obj2