Performing Sentiment Analysis on Tweets from Node.js

Originally published by Jeong Woo Chang at https://medium.com

I will guide you through building a Node.js app that crawls tweets from Twitter and calculates the sentiment analysis trend for a keyword over the last 24 hours. The code is generic enough to work with any keyword, but in the sample code I will use “Bitcoin”.

Beware that these steps may not work if you are using different versions of the packages. I used Node.js v10.16.0 to run this.

Prerequisites

Get Node.js v10: https://nodejs.org

Get MongoDB v4: https://www.mongodb.com/download-center/community

  • Twitter API

Sign up and register an app to get consumer_key, consumer_secret, access_token and access_token_secret.

https://developer.twitter.com

Let’s start!

You need to create a directory, mkdir tweet-sentiment && cd tweet-sentiment, and run npm init -y in it. After installing the dependencies in the next step, your package.json will look like this.

{
  "name": "tweet-sentiment",
  "version": "1.0.0",
  "description": "",
  "main": "index.js",
  "scripts": {
    "start": "node index.js"
  },
  "keywords": [],
  "author": "",
  "license": "ISC",
  "dependencies": {
    "axios": "^0.19.0",
    "mongodb": "^3.3.0-beta2",
    "sentiment": "^5.0.1",
    "twit": "^2.2.11"
  }
}

Now it is time to install the needed dependencies.

npm install --save axios@0.19.0 express@4.17.1 helmet@3.18.0 mongodb@3.3.0-beta2 sentiment@5.0.1 twit@2.2.11

What are all these packages?

Axios: Promise based HTTP client

Express: Fast, minimalist web framework (used later for the REST API)

Helmet: Secures Express apps with HTTP security headers

MongoDB: MongoDB native Node.js driver

Sentiment: AFINN-based sentiment analysis for Node.js

Twit: Twitter API client for Node.js

Tweet crawler

In order to run sentiment analysis on tweets, you need to crawl tweets for a specific keyword and store them in a database.

In this case, you will use the twit library to grab those tweets and store them in MongoDB using the mongodb driver.

const Twit = require('twit');
const Sentiment = require('sentiment');

// Create a sentiment instance
const sentiment = new Sentiment();

// Create a Twit instance
const T = new Twit({
  consumer_key: 'consumer key obtained from Twitter Developer', // https://developer.twitter.com
  consumer_secret: 'consumer secret obtained from Twitter Developer',
  access_token: 'access token obtained from Twitter Developer',
  access_token_secret: 'access token secret obtained from Twitter Developer',
  timeout_ms: 60 * 1000, // optional HTTP request timeout to apply to all requests
  strictSSL: true, // optional - requires SSL certificates to be valid
});

// Take a MongoDB database instance and a string keyword as parameters
module.exports = (db, keyword) => {
  // Create a tweet stream with the given keyword
  const stream = T.stream('statuses/filter', {
    track: keyword,
  });

  // When a new tweet with the keyword is published, this function will be called
  stream.on('tweet', async tweet => {
    const {
      created_at,
      id,
      lang,
      text,
      user: { screen_name, profile_image_url_https },
      timestamp_ms,
    } = tweet;

    // Since sentiment only supports English, filter tweets by language, "en"
    if (lang === 'en') {
      let { score } = sentiment.analyze(text);
      if (score > 0) { // Treat greater than zero as 1
        score = 1;
      } else if (score < 0) { // Treat less than zero as -1
        score = -1;
      }

      // Save the tweet information with the score in the keyword collection
      await db.collection(`keyword_${keyword}`).insertOne({
        score,
        createdAt: new Date(created_at),
        id,
        lang,
        text,
        screen_name,
        profile_image_url_https,
        timestamp_ms,
      });
    }
  });
};

Note that the score is normalized to either 1 (positive) or -1 (negative), because you may not want to weigh some tweets’ sentiment heavier than others. Here, every tweet is weighed equally; a score of 0 stays 0 (neutral).
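The normalization step above can be pulled out into a small pure function for clarity (a sketch; the function name is mine, not from the original code):

```javascript
// Clamp a raw AFINN sentiment score to -1, 0, or 1 so every tweet
// contributes equally to the trend, no matter how strongly worded it is.
function normalizeScore(score) {
  if (score > 0) return 1;  // any positive score counts as one positive tweet
  if (score < 0) return -1; // any negative score counts as one negative tweet
  return 0;                 // neutral
}

console.log(normalizeScore(5));  // strongly positive tweet -> 1
console.log(normalizeScore(-2)); // negative tweet -> -1
console.log(normalizeScore(0));  // neutral tweet -> 0
```

This is exactly the if/else chain in the crawler, just isolated so it can be unit-tested.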

Tweet crawler runner

You already wrote the tweet crawler. Now it is time to write the runner for it. In this code, you will create a keyword register that recalculates on a fixed interval, plus some additional code for calculating the overall score index per keyword.

const { MongoClient } = require('mongodb');
const axios = require('axios');
const createStream = require('./tweetCrawler');

// Either take the MongoDB connection string from the environment variable MONGO_URL or use the local database 'twtsnt'
const url = process.env.MONGO_URL || 'mongodb://localhost:27017/twtsnt';
const dbName = 'twtsnt';
// Create a MongoDB client instance
const client = new MongoClient(url, { useNewUrlParser: true });

// Get the current Bitcoin price, BTC-USD, from the GDAX API (Coinbase Pro)
const getCoinPrice = async (coin = 'btc') => {
  try {
    const res = await axios.get(
      `https://api.gdax.com/products/${coin}-usd/stats`,
    );
    return res.data;
  } catch (err) {
    console.error(err);
    return null;
  }
};

// Calculate the keyword score and save it along with the keyword analysis data
const calculateAndStore = async (db, keyword) => {
  // Get the keyword collection
  const data = db.collection(`keyword_${keyword}`);

  // Lightweight way of getting the total document count in the collection
  const totalCount = await data.estimatedDocumentCount();
  // Count positive tweets
  const positiveCount = await data.countDocuments({ score: 1 });
  // Count negative tweets
  const negativeCount = await data.countDocuments({ score: -1 });

  // Get the Bitcoin price and 24-hour change from Coinbase
  // https://api.gdax.com/products/btc-usd/stats
  // {"open":"3804.41000000","high":"3874.12000000","low":"3730.00000000","volume":"8860.63154635","last":"3835.00000000","volume_30day":"452787.73451902"}
  // change = last / open - 1
  const stats = await getCoinPrice('btc');
  if (!stats) return; // skip this round if the price API call failed
  const change = (stats.last / stats.open - 1) * 100;

  // Calculate the trend
  const trendFromTweet = ((positiveCount - negativeCount) / totalCount) * 100;

  // "calc" collection for the keyword calculation
  const calc = db.collection('calc');
  // Create an index for the "calc" collection
  await db.createIndex('calc', { keyword: 1 }, { unique: true });

  // Upsert (update, or create if it does not exist) the keyword information
  await calc.updateOne(
    { keyword },
    {
      $set: {
        price: stats.last,
        change,
        trend: trendFromTweet,
        tweetCount: {
          positive: positiveCount,
          negative: negativeCount,
          neutral: totalCount - positiveCount - negativeCount,
        },
        createdAt: new Date(),
      },
    },
    { upsert: true },
  );
};

// Create keyword indexes
const createIndex = async (db, keyword) => {
  const indexes = Promise.all([
    // Index on score, id, timestamp_ms
    db.createIndex(
      `keyword_${keyword}`,
      {
        score: 1,
        id: 1,
        timestamp_ms: -1,
      },
      {
        unique: true,
      },
    ),
    // Index for assigning a TTL (remove tweet data older than 24 hours)
    db.createIndex(
      `keyword_${keyword}`,
      {
        createdAt: 1,
      },
      {
        expireAfterSeconds: 60 * 60 * 24,
      },
    ),
  ]);

  return indexes;
};

const registerKeyword = async (db, keyword, interval) => {
  // Indexes
  await createIndex(db, keyword);

  // Stream
  createStream(db, keyword);

  // Calculate and store once immediately...
  calculateAndStore(db, keyword);
  // ...then repeat on the given interval
  setInterval(() => {
    calculateAndStore(db, keyword);
  }, interval);
};

(async () => {
  try {
    // Use the connect method to connect to the server
    await client.connect();

    // Database instance
    const db = client.db(dbName);

    // Register the "bitcoin" keyword, recalculating every 3 seconds
    await registerKeyword(db, 'bitcoin', 3000);
  } catch (err) {
    console.log(err.stack);
  }
})();

With these two files, you can run node runner.js to gather and keep the data for the last 24 hours.
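The two formulas used in calculateAndStore can be checked in isolation against the sample GDAX stats shown in its comment (the helper names here are mine, and the tweet counts are made-up illustration values):

```javascript
// Percentage price change over 24 hours: last / open - 1, expressed in percent.
function priceChangePercent(stats) {
  return (stats.last / stats.open - 1) * 100;
}

// Tweet sentiment trend: the net positive share of all tweets, in percent.
function tweetTrendPercent(positiveCount, negativeCount, totalCount) {
  return ((positiveCount - negativeCount) / totalCount) * 100;
}

// Sample values from the GDAX stats payload in the comment above
const change = priceChangePercent({ open: 3804.41, last: 3835.0 });
console.log(change.toFixed(2)); // roughly 0.80 (percent)

// Hypothetical counts: 60 positive, 25 negative out of 100 tweets -> 35% trend
console.log(tweetTrendPercent(60, 25, 100)); // 35
```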

REST APIs for the analysis data

You will need to create two APIs: one for retrieving tweets with their sentiment analysis data, and one for getting the score of a keyword.

const express = require('express');
const helmet = require('helmet');
const { MongoClient, ObjectID } = require('mongodb');

const url = process.env.MONGO_URL || 'mongodb://localhost:27017/twtsnt';
const dbName = 'twtsnt';
let client = null;
const port = process.env.PORT || 3000;
const app = express();

// Get a database instance
async function getDB() {
  if (client && !client.isConnected()) {
    client = null;
  }

  if (client === null) {
    client = new MongoClient(url, { useNewUrlParser: true });
  } else if (client && client.isConnected()) {
    return client.db(dbName);
  }

  try {
    await client.connect();
    return client.db(dbName);
  } catch (err) {
    return err;
  }
}

// HTTP security header middleware
app.use(helmet());

// Get the score for a given keyword, e.g. /api/score?keyword=bitcoin
app.get('/api/score', async (req, res) => {
  try {
    const db = await getDB();
    if (!req.query.keyword) {
      return res.status(400).json({ message: 'missing keyword' });
    }

    const keyword = req.query.keyword;
    const collection = db.collection('calc');

    const data = await collection.findOne({ keyword });
    res.json({
      keyword,
      data,
    });
  } catch (err) {
    res.status(500).json({
      message: err.message,
    });
  }
});

// Get the sentiment-analyzed tweet data
app.get('/api/tweets', async (req, res) => {
  try {
    const db = await getDB();
    if (!req.query.keyword) {
      return res.status(400).json({ message: 'missing keyword' });
    }

    const resp = {};
    const keyword = req.query.keyword;
    let limit;
    if (req.query.limit) {
      req.query.limit = Number(req.query.limit);
    }
    // Limit the number of documents (1 - 100)
    if (req.query.limit > 0 && req.query.limit <= 100) {
      limit = req.query.limit;
    } else {
      limit = 100;
    }
    resp.limit = limit;

    let cursor = db.collection(`keyword_${keyword}`);

    const skip = req.query.skip;
    if (skip) {
      resp.skip = skip;
      // The _id index exists by default, so no extra index is needed for this
      cursor = cursor.find({ _id: { $lt: new ObjectID(skip) } });
    } else {
      cursor = cursor.find();
    }

    resp.tweets = await cursor.sort({ _id: -1 }).limit(limit).toArray();

    res.json(resp);
  } catch (err) {
    res.status(500).json({
      message: err.message,
    });
  }
});

module.exports = app;

// Start the express server
app.listen(port, err => {
  if (err) throw err;
  console.log(`> Ready on http://localhost:${port}`);
});

Note that getDB is called for every endpoint. It is done this way because in a serverless environment the database connection is not guaranteed to persist between invocations.
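The limit-clamping logic in the /api/tweets handler can also be factored into a small pure helper so its edge cases are easy to test (a sketch; the helper name is mine, not from the original code):

```javascript
// Parse and clamp the ?limit= query parameter: accept 1-100, default to 100.
// Mirrors the inline logic in the /api/tweets handler.
function parseLimit(query) {
  const n = Number(query.limit);
  if (n > 0 && n <= 100) {
    return n;
  }
  return 100; // missing, non-numeric, or out-of-range values fall back to 100
}

console.log(parseLimit({ limit: '50' }));  // 50
console.log(parseLimit({}));               // 100 (Number(undefined) is NaN)
console.log(parseLimit({ limit: '500' })); // 100 (out of range)
```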

In this post, I showed how to integrate the sentiment library, the Twitter API, and MongoDB to build REST APIs serving sentiment analysis data for tweets about a keyword.

Thanks for reading

If you liked this post, share it with all of your programming buddies!

Follow us on Facebook | Twitter

Further reading

The Complete Node.js Developer Course (3rd Edition)

Angular & NodeJS - The MEAN Stack Guide

NodeJS - The Complete Guide (incl. MVC, REST APIs, GraphQL)

Best 50 Nodejs interview questions from Beginners to Advanced in 2019

Node.js 12: The future of server-side JavaScript

An Introduction to Node.js Design Patterns

Basic Server Side Rendering with Vue.js and Express

Fullstack Vue App with MongoDB, Express.js and Node.js

How to create a full stack React/Express/MongoDB app using Docker



#node-js #mongodb #rest #api
