This post presents a case study of image and text feature extraction, and how both can be used to enhance recommendation systems. We will also discuss how a Sesame Street character can help you decide where to eat, and why an image is worth 2048 words, not the 1000 people might tell you.

Overview

Asia Miles has partnered with hundreds of restaurants to give our members the possibility of earning miles while eating at their favourite restaurants. The choice of restaurants in Hong Kong alone is mind-boggling, so how do we make sure our members receive relevant and exciting suggestions? This is where data science comes in. Recommendation engines typically fall into three categories:

  • knowledge-based: the basic filter behind a search bar, e.g. “I want Japanese restaurants”
  • content-based: the recommendation is based on the item and its features; we are basically recommending cafés because you once went to a café
  • collaborative filtering: we recommend based on your behaviour and the behaviour of other members. This is the “Other people liked…” section of recommenders

Most recommendation engines are a mixture of these techniques, which means that we need some content information about the items that we want to recommend.
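To make the distinctions concrete, here is a minimal sketch of the three approaches on toy data. Everything in it (restaurant names, tags, visit histories) is illustrative rather than real Asia Miles data:

```python
from collections import Counter

# Toy catalogue: each restaurant carries simple content features.
restaurants = {
    "Sushi Kuu":        {"cuisine": "japanese", "kind": "restaurant"},
    "Cupping Room":     {"cuisine": "western",  "kind": "cafe"},
    "Elephant Grounds": {"cuisine": "western",  "kind": "cafe"},
    "Ichiran":          {"cuisine": "japanese", "kind": "restaurant"},
}

# Toy behaviour: which member has visited which restaurant.
visits = {
    "member_a": ["Cupping Room"],
    "member_b": ["Cupping Room", "Elephant Grounds", "Ichiran"],
}

def knowledge_based(cuisine):
    # "I want Japanese restaurants": a plain attribute filter.
    return [name for name, attrs in restaurants.items()
            if attrs["cuisine"] == cuisine]

def content_based(member):
    # Recommend items that share features with what the member
    # already visited: you went to a café, here are more cafés.
    seen = set(visits[member])
    liked_kinds = {restaurants[name]["kind"] for name in seen}
    return [name for name, attrs in restaurants.items()
            if attrs["kind"] in liked_kinds and name not in seen]

def collaborative(member):
    # "Other people liked...": score items visited by members whose
    # history overlaps with ours, excluding what we have already seen.
    seen = set(visits[member])
    scores = Counter()
    for other, history in visits.items():
        if other != member and seen & set(history):
            scores.update(set(history) - seen)
    return sorted(scores, key=lambda name: (-scores[name], name))

print(knowledge_based("japanese"))  # ['Sushi Kuu', 'Ichiran']
print(content_based("member_a"))    # ['Elephant Grounds']
print(collaborative("member_a"))    # ['Elephant Grounds', 'Ichiran']
```

A production engine swaps the dictionaries for feature stores and the co-visit counting for matrix factorisation, but the shape of each approach stays the same.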

This leads us to our first challenge and the main topic of this post: how do we extract information about restaurants? There are plenty of sites that provide a detailed taxonomy of a restaurant: cuisine, price range, atmosphere, location, opening hours, etc. However, this data is not usually publicly available, and it relies on manual classification. We could have an army of summer interns eating at and classifying the hundreds of restaurants in Hong Kong, but the resulting classification would be too coarse: there is, for example, quite a sizeable number of “Japanese mid-range restaurants in Hong Kong”, and a taxonomy alone cannot tell them apart.
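This also explains the quip about 2048 words: the pooled output of a pretrained ResNet-50 is a 2048-dimensional vector, and it makes a decent off-the-shelf descriptor of what a restaurant looks like. The sketch below assumes PyTorch and torchvision as the stack (the post does not fix the exact tooling), and the image paths are placeholders:

```python
import torch
from torchvision import models, transforms
from PIL import Image

# Pretrained ResNet-50; replacing the final classifier with the
# identity leaves the 2048-d pooled feature vector as the output.
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
model.fc = torch.nn.Identity()
model.eval()

# Standard ImageNet preprocessing for ResNet-style models.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def image_features(path):
    """Return a 2048-d feature vector for one restaurant photo."""
    img = Image.open(path).convert("RGB")
    batch = preprocess(img).unsqueeze(0)  # shape: (1, 3, 224, 224)
    with torch.no_grad():
        return model(batch).squeeze(0)    # shape: (2048,)

# Cosine similarity between two photos' vectors gives a simple
# "how alike do these restaurants look" score.
a = image_features("photos/ramen_shop.jpg")     # placeholder path
b = image_features("photos/sushi_counter.jpg")  # placeholder path
print(torch.nn.functional.cosine_similarity(a, b, dim=0).item())
```

Vectors like these can stand in for the missing taxonomy: two restaurants whose photos embed close together tend to look, and often feel, alike, no summer interns required.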

#similarity #recommendations #data-science #nlp #image-processing
