1592633785

## Content-Based Recommendation System Implementation

A content-based recommendation system revolves around a user’s profile. It is based on the user’s ratings, including the number of times a…

#pandas #movie-recommendation #content-based-filtering #python #recommendation-system #programming


1596963300

## Math Behind Content-Based Recommendation Systems

The Concept Behind a Content-Based Recommendation System:

First, I would like to give some intuition about how a content-based recommendation system works in real practice; later we’ll jump into the mathematical part behind it!

Assume we have User 1, who watched Movie 1 (Action) and rated it 5/5, Movie 2 (Romance) and rated it 4/5, and Movie 3 (Action) and rated it 5/5.

Now, if User 2 watches Movie 6 (Action) and rates it 5/5, and Movie 7 (Romance) and rates it 5/5, the content-based recommendation system will most probably recommend Action Movie 1 or Action Movie 3 to User 2, based on the ratings and the types of movies through which both users are related.

In short, these algorithms try to recommend items that are similar to those a user liked in the past, or is examining in the present.

Well, this is how a content-based recommendation system works in a nutshell. But it is also very important to understand the math behind every algorithm, so let’s dive into the math behind this one.

Math behind the Algorithm:

So let’s start with a simple example, assuming the following data.

The question is: how can we predict the unknown ratings of the users?

Based on the above data, we can see that Movies 1, 2 and 3 tend to be action movies, while Movies 4 and 5 tend to be romantic ones. We can also conclude that Users 1 and 2 prefer action movies over romantic ones, and vice versa for Users 3 and 4.

Where,

N(U) = number of users = 4,

N(M) = number of movies = 5, and

N(Features) = 2, i.e. (Action and Romance)

Now consider Movie 1. Adding an intercept term x(0) = 1 to the feature values, we can write the feature vector for Movie 1 as a 3×1 column vector x^(1) = [1, 0.9, 0]. Similarly, we’ll have feature vectors for Movies 2, 3, 4 and 5.

Then, for each user j, we learn a parameter vector Theta^(j) in R^3 (i.e., number of features + 1). Using

(Theta^(j))^T * x^(i),

we can predict the rating of movie i by user j, where i indexes movies and j indexes users.
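This prediction can be sketched in a few lines of NumPy. The feature values and the learned parameter vector below are illustrative assumptions (not the article’s actual table), chosen so the user clearly prefers action movies:

```python
import numpy as np

# Feature vectors x^(i) for 5 movies: [intercept, action, romance]
# (values are illustrative assumptions)
X = np.array([
    [1.0, 0.9, 0.0],   # Movie 1: mostly action
    [1.0, 1.0, 0.01],  # Movie 2: action
    [1.0, 0.99, 0.0],  # Movie 3: action
    [1.0, 0.1, 1.0],   # Movie 4: mostly romance
    [1.0, 0.0, 0.9],   # Movie 5: romance
])

# Hypothetical learned parameter vector Theta^(j) for a user
# who likes action and is indifferent to romance.
theta_user1 = np.array([0.0, 5.0, 0.0])

# Predicted rating of user j for movie i: (Theta^(j))^T * x^(i),
# computed for all movies at once as a matrix-vector product.
predictions = X @ theta_user1
print(predictions)  # high scores for the action movies, low for the romantic ones
```

The unknown ratings are simply filled in by this dot product once each user’s Theta^(j) has been learned (e.g., by minimizing squared error against that user’s known ratings).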

#recommender-systems #data-science #math #recommendation-system #machine-learning #deep learning

1620633584

## System Databases in SQL Server

##### Introduction

In SSMS, many of us may have noticed the System Databases folder under the Databases folder. But how many of us know their purpose? In this article, let’s discuss the system databases in SQL Server.

##### System Database

Fig. 1 System Databases

There are five system databases; they are created when SQL Server is installed.

• Master
• Model
• MSDB
• Tempdb
• Resource
##### Master
• This database contains all the system-level information in SQL Server, stored in the form of metadata.
• It is the master database that makes it possible to access SQL Server (on-premises SQL Server).
##### Model
• This database is used as a template for new databases.
• Whenever a new database is created, it initially starts as a copy of the model database.
##### MSDB
• This database is where a service called SQL Server Agent stores its data.
• SQL Server Agent is in charge of automation, which includes entities such as jobs, schedules, and alerts.
##### TempDB
• Tempdb is where SQL Server stores temporary data such as work tables, sort space, row versioning information, etc.
• Users can create their own temporary tables, and those are stored in Tempdb.
• This database is destroyed and recreated every time the SQL Server instance restarts.
##### Resource
• The resource database is a hidden, read-only database that holds the definitions of all system objects.
• When we query system objects in a database, they appear to reside in the sys schema of the local database, but their definitions actually reside in the resource database.

#sql server #master system database #model system database #msdb system database #sql server system databases #ssms #system database #system databases in sql server #tempdb system database

1624797300

## Reinforcement Learning Based Recommender Systems

### Develop personalized apps using a combination of Reinforcement Learning and NLP/Chatbots

**Abstract.** We present a Reinforcement Learning (RL) based approach to implement Recommender Systems. The results are based on a real-life Wellness app that is able to provide personalized health / activity related content to users in an interactive fashion. Unfortunately, current recommender systems are unable to adapt to continuously evolving features, e.g. user sentiment, and scenarios where the RL reward needs to be computed based on multiple and unreliable feedback channels (e.g., sensors, wearables). To overcome this, we propose three constructs: (i) weighted feedback channels, (ii) delayed rewards, and (iii) reward boosting, which we believe are essential for RL to be used in Recommender Systems.

This paper appears in the proceedings of AAI4H — Advances in Artificial Intelligence for Healthcare Workshop, co-located with the 24th European Conference on Artificial Intelligence (ECAI 2020), Sep 2020 (paper pdf) (ppt)

## 1 Introduction

Health / Wellness apps have historically suffered from low adoption rates. Personalized recommendations have the potential of improving adoption, by making increasingly relevant and timely recommendations to users. While recommendation engines (and consequently, the apps based on them) have grown in maturity, they still suffer from the ‘cold start’ problem and the fact that it is basically a push-based mechanism lacking the level of interactivity needed to make such apps appealing to millennials.

We present a Wellness app case-study where we applied a combination of Reinforcement Learning (RL) and Natural Language Processing (NLP) / Chatbots to provide a highly personalized and interactive experience to users. We focus on the interactive aspect of the app, where the app is able to profile and converse with users in real-time, providing relevant content adapted to the current sentiment and past preferences of the user.

The core of such chatbots is an intent recognition Natural Language Understanding (NLU) engine, which is trained with hard-coded examples of question variations. When no intent is matched with a confidence level above 30%, the chatbot returns a fallback answer. The user sentiment is computed based on both the (explicit) user response and (implicit) environmental aspects, e.g. location (home, office, market, …), temperature, lighting, time of the day, weather, other family members present in the vicinity, and so on; to further adapt the chatbot response.

RL refers to a branch of Artificial Intelligence (AI) that is able to achieve complex goals by maximizing a reward function in real-time. The reward function works similarly to incentivizing a child with candy and spankings: the algorithm is penalized when it takes a wrong decision and rewarded when it takes a right one — this is reinforcement. The reinforcement aspect also allows it to adapt faster to real-time changes in user sentiment. For a detailed introduction to RL frameworks, the interested reader is referred to [1].

Previous works have explored RL in the context of Recommender Systems [2, 3, 4, 5], and enterprise adoption also seems to be gaining momentum with the recent availability of Cloud APIs (e.g. Azure Personalizer [6, 7]) and Google’s RecSim [8]. However, they still work like a typical Recommender System. Given a user profile and categorized recommendations, the system makes a recommendation based on popularity, interests, demographics, frequency and other features. The main novelty of these systems is that they are able to identify the features (or combination of features) of recommendations getting higher rewards for a specific user; which can then be customized for that user to provide better recommendations [9].
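The idea of identifying which recommendation features earn higher rewards for a specific user can be sketched as a simple epsilon-greedy bandit. The content categories, reward values, and noise model below are invented for illustration and are not from the paper:

```python
import random

def epsilon_greedy_recommender(categories, get_reward, steps=1000, epsilon=0.1, seed=0):
    """Learn the average reward each content category earns for one user."""
    rng = random.Random(seed)
    counts = {c: 0 for c in categories}
    values = {c: 0.0 for c in categories}  # running mean reward per category
    for _ in range(steps):
        if rng.random() < epsilon:
            choice = rng.choice(categories)           # explore a random category
        else:
            choice = max(categories, key=values.get)  # exploit the best so far
        reward = get_reward(choice)
        counts[choice] += 1
        # Incremental update of the running mean for the chosen category.
        values[choice] += (reward - values[choice]) / counts[choice]
    return values

# Hypothetical feedback: this user responds best to 'activity' content.
true_means = {"activity": 0.8, "nutrition": 0.4, "sleep": 0.2}
noise = random.Random(1)
reward_fn = lambda c: true_means[c] + noise.gauss(0, 0.1)
learned = epsilon_greedy_recommender(list(true_means), reward_fn)
```

After enough interactions, the learned values rank the categories by how well they work for this particular user, which is the per-user customization described above.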

Unfortunately, this is still inefficient for real-life systems, which need to adapt to continuously evolving features, e.g. user sentiment, and where the reward needs to be computed based on multiple and unreliable feedback channels (e.g., sensors, wearables).

The rest of the paper is organized as follows: Section 2 outlines the problem scenario and formulates it as an RL problem. In Section 3, we propose three RL constructs needed to overcome the above limitations: (i) weighted feedback channels, (ii) delayed rewards, and (iii) reward boosting, which we believe are essential constructs for RL to be used in Recommender Systems.

‘Delayed Rewards’ in this context is different from the notion of delayed RL [10], where rewards in the distant future are considered less valuable than immediate rewards. In our notion of ‘Delayed Rewards’, a received reward is only applied after its consistency has been validated by a subsequent action. Section 4 concludes the paper and provides directions for future research.
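The first two constructs can be sketched in a few lines. The channel names, weights, and consistency rule below are illustrative assumptions, not the paper’s actual formulation:

```python
def weighted_reward(feedback, weights):
    """Combine rewards from multiple feedback channels, weighting each
    channel by its assumed reliability (weights sum to 1)."""
    return sum(weights[ch] * value for ch, value in feedback.items())

def delayed_reward(pending, next_feedback, threshold=0.5):
    """Apply a pending reward only after a subsequent observation confirms
    it is consistent; otherwise withhold it."""
    if abs(pending - next_feedback) <= threshold:
        return pending  # consistent with the follow-up: release the reward
    return 0.0          # inconsistent (e.g. noisy sensor): withhold it

# An explicit chat response is weighted more than a noisy wearable reading.
weights = {"chat_response": 0.7, "wearable": 0.3}
r = weighted_reward({"chat_response": 1.0, "wearable": 0.5}, weights)
```

Down-weighting unreliable channels keeps a single faulty sensor from dominating the reward, and the delay rule filters out one-off spikes before they update the policy.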

#recommendation-system #data-science #reinforcement-learning #machine-learning #chatbots #reinforcement learning based recommender systems

1598882040

## Introduction

As we come to rely more and more heavily on online platforms and applications such as Netflix, Amazon, Spotify etc., we find ourselves constantly having to choose from a wide range of options.

One may think that having many options is a good thing, as opposed to having very few, but an excess of options can lead to what is known as “decision paralysis”. As Barry Schwartz writes in The Paradox of Choice:

“A large array of options may discourage consumers because it forces an increase in the effort that goes into making a decision. So consumers decide not to decide, and don’t buy the product. Or if they do, the effort that the decision requires detracts from the enjoyment derived from the results.”

This also results in another, more subtle, negative effect:

“A large array of options may diminish the attractiveness of what people actually choose, the reason being that thinking about the attractions of some of the unchosen options detracts from the pleasure derived from the chosen one.”

An obvious consequence of this is that we end up not making any effort to scrutinise multiple options unless it is made easier for us; in other words, unless the options are filtered according to our preferences.

This is why recommender systems have become a crucial component of platforms like those mentioned above, in which users have a myriad of options available. Their success heavily depends on their ability to narrow down the set of options, making it easier for us to make a choice.

A major driver in the field is Netflix, which continuously advances the state of the art through research, and which hugely energised research in the field by sponsoring the Netflix Prize from 2006 to 2009.

In addition, Netflix’s recommender has a huge presence on the platform. When we search for a movie, we immediately get a selection of similar movies that we are likely to enjoy too.

### Outline

This post starts by laying out the different paradigms in recommender systems, then takes a hands-on approach to building a content-based recommender. I’ll be using the well-known MovieLens dataset to show how we could recommend new movies based on their features.
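The core of such a content-based recommender is ranking movies by the similarity of their feature vectors. Here is a minimal sketch using cosine similarity over toy one-hot genre features; the movie names and genre columns are illustrative stand-ins for the real MovieLens data:

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine of the angle between two feature vectors (1 = identical direction)."""
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

# Toy one-hot genre features: [Action, Romance, Comedy]
movies = {
    "Movie A": np.array([1, 0, 0]),
    "Movie B": np.array([1, 0, 1]),
    "Movie C": np.array([0, 1, 0]),
}

def most_similar(title, movies):
    """Rank every other movie by feature similarity to the given one."""
    target = movies[title]
    others = [(t, cosine_similarity(target, v))
              for t, v in movies.items() if t != title]
    return sorted(others, key=lambda kv: kv[1], reverse=True)

ranked = most_similar("Movie A", movies)
```

With real MovieLens features the idea is the same, just with many more movies and feature columns.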

This is the first in a series of two posts (perhaps more) on recommender systems; the upcoming one will be on collaborative filtering.

Find a jupyter notebook version of this post with all the code here.

#recommendations #python #data-science #machine-learning #recommendation-system