#!/usr/bin/env python # coding: utf-8 # # Introduction # # In this notebook, I will extract restaurant ratings and reviews from [Foursquare](https://foursquare.com/) and use distance (one of the main ideas behind recommender systems) to generate recommendations for restaurants in one city that have similar reviews to restaurants in another city. # # ### Motivation # # I grew up in Austin, Texas, and moved to Minneapolis, Minnesota for my wife's work a few years ago. My wife and I love food, and we loved the food culture in Austin. After our move, we wanted to find new restaurants to replace our favorites from back home. However, most decently rated places in Minneapolis we went to just didn't quite live up to our Austin expectations. These restaurants usually came at the recommendation of locals, acquaintances, or Google ratings, but the food was often bland and overpriced. It took us one deep dive into the actual reviews to figure it out. # # To better illustrate our problem, below are recent reviews from three restaurants that left us disappointed. On the left is a typical American restaurant, in the middle is a Vietnamese restaurant, and on the right is a pizza shop: # # # # I highlighted the main points to stand out against the small font. Service, atmosphere, and apparently eggrolls were the most common and unifying factors. You see very little discussion of the quality of the actual food, and you can even see an instance where a reviewer rates the pizza place 5/5 even after saying that it is expensive. I began to notice a disconnect in how I evaluate restaurants versus how the people of Minneapolis evaluate restaurants. If you have previously worked with recommender systems, you already know where I'm going with this. If not, here is a primer: # # ### Recommender Systems Overview # # Before getting into the overview of recommender systems, I wanted to point out that I won't actually be building a legitimate recommender system in this notebook. There are some [great](https://turi.com/learn/userguide/recommender/introduction.html) [packages](https://github.com/lyst/lightfm) for doing so, but I'm going to stick with one of the main ideas behind recommender systems. This is for two reasons: # # **1)** Classes have started back up, and my available free time for side projects like this one is almost non-existent. # # **2)** My gastronomic adventures don't need the added benefits that a recommender system provides over what I'll be doing. # # Let's get back into it. # # In the world of recommender systems, there are three broad types: # # - **[Collaborative Filtering (user-user)](https://en.wikipedia.org/wiki/Collaborative_filtering)**: This is the most prevalent type of recommender system; it uses the "wisdom of the crowd" to gauge popularity among peers. This option is particularly popular because you don't need to know a lot about the item itself; you only need the ratings submitted by reviewers. The two primary restrictions are that it assumes people's tastes do not change over time, and that new items run into the "[cold start problem](https://en.wikipedia.org/wiki/Cold_start)". This is when either a new item has not yet received any ratings and fails to appear on recommendation lists, or a new user has not reviewed anything so we don't know what their tastes are. # - **Ex.:** People who like item **X** also like item **Y** # - This is how Spotify selects songs for your recommended playlist. 
Specifically, it will take songs from other playlists that contain songs you recently liked. # # # - **[Content-Based (item-item)](https://en.wikipedia.org/wiki/Recommender_system#Content-based_filtering)**: This method recommends items based on their similarity to other items. This requires reliable information about the items themselves, which makes it difficult to implement in a lot of cases. Additionally, recommendations generated from this option will likely not deviate very far from the item being compared against, but there are tricks available to account for this. # - **Ex.:** Item **X** is similar to item **Y** # - This is how Pandora selects songs for your stations. Specifically, it assigns each song a list of characteristics (assigned through the [Music Genome Project](http://www.pandora.com/corporate/mgp.shtml)), and selects songs with characteristics similar to those of the songs you liked. # # # - **[Hybrid](https://en.wikipedia.org/wiki/Recommender_system#Hybrid_recommender_systems)**: You probably guessed it - this is a combination of the above two types. The idea here is to use what you have, if you have it. [Here](http://www.math.uci.edu/icamp/courses/math77b/lecture_12w/pdfs/Chapter%2005%20-%20Hybrid%20recommendation%20approaches.pdf) are a few designs related to this that are worth looking into. # # # Those are the three main types, but there is one additional type that you may find if you dive a little deeper into the subject material: # # # - **[Knowledge-Based](https://en.wikipedia.org/wiki/Knowledge-based_recommender_system)**: This is the rarest type, mainly because it requires explicit domain knowledge. It is often used for products that have a low number of available ratings, such as high-luxury goods like hypercars. We won't delve any further into this type, but I recommend reading more about it if you're interested in the concept. # # # ### Methodology # # Let's return to our problem. The previous way of selecting restaurants at the recommendation of locals and acquaintances (collaborative filtering) wasn't always successful, so we are going to use the idea behind content-based recommender systems to evaluate our options. However, we don't have a lot of content about the restaurants available, so we are going to primarily use the reviews people left for them. More specifically, we are going to determine similarity between restaurants based on the similarity of the reviews that people have written for them. # # We're going to use cosine similarity since it's generally accepted as producing better results in item-to-item filtering: # # $\hspace{8cm}sim(A, B) = \cos(\theta) = \frac{A \cdot B}{\|A\|\|B\|}$ # # Before calculating this, we need to perform a few pre-processing steps on our reviews in order to make the data usable for our cosine similarity calculation. These will be common NLP (**n**atural **l**anguage **p**rocessing) techniques that you should be familiar with if you have worked with text before. These are the steps I took, but I am open to feedback and improvement if you have recommendations on other methods that may yield better results. # # **1) Normalizing**: This step converts our words into lower case so that when we map to our feature space, we don't end up with redundant features for the same words. # # Ex. 
"Central Texas barbecue is the best smoked and the only barbecue that matters" # # $\hspace{1.75cm}$becomes # # ``` # "central texas barbecue is the best smoked and the only barbecue that matters" # ``` # # **2) Tokenizing**: This step breaks up a sentence into individual words, essentially turning our reviews into [bags of words](https://en.wikipedia.org/wiki/Bag-of-words_model), which makes it easier to perform other operations. Though we are going to perform many other preprocessing operations, this is more or less the beginning of mapping our reviews into the feature space. # # Ex. 'Central Texas barbecue is the best smoked and the only barbecue that matters' # # $\hspace{1.75cm}$becomes # # ``` # ['Central', 'Texas', 'barbecue', 'is', 'the', 'best', 'smoked', 'and', 'the', 'only', 'barbecue', 'that', 'matters'] # ``` # # **3) Removing Stopwords and Punctuation**: This step removes unnecessary words and punctuation often used in language that computers don't need such as *as*, *the*, *and*, and *of*. # # Ex. ['central', 'texas', 'barbecue', 'is', 'the', 'best', 'smoked', 'and', 'the', 'only', 'barbecue', 'that', 'matters'] # # $\hspace{1.75cm}$becomes # # ``` # ['central', 'texas', 'barbecue', 'best', 'smoked', 'only', 'barbecue', 'matters'] # ``` # # **4) Lemmatizing (Stemming)**: Lemmatizing (which is very similar to stemming) removes variations at the end of a word to revert words to their root word. # # Ex. ['central', 'texas', 'barbecue', 'best', 'smoked', 'only', 'barbecue', 'matters'] # # $\hspace{1.75cm}$becomes # # ``` # ['central', 'texas', 'barbecue', 'best', 'smoke', 'only', 'barbecue', 'matter'] # ``` # # **5) Term Frequency-Inverse Document Frequency (TF-IDF)**: This technique determines how important a word is to a document (which is a review in this case) within a corpus (the collection documents, or all reviews). This doesn't necessarily help establish context within our reviews themselves (for example, 'this Pad Kee Mao is bad ass' is technically a good thing, which wouldn't be accounted for unless we did [n-grams](https://en.wikipedia.org/wiki/N-gram) (which will give my laptop a much more difficult time)), but it does help with establishing the importance of the word. # # $\hspace{8cm}TFIDF(t, d) = TF(t, d) \cdot IDF(t)$ # # $\hspace{8cm}IDF(t) = 1 + \log\Big(\frac{\#\ Documents}{\#\ Documents\ Containing\ t}\Big)$ # # $\hspace{8cm}t:\ \text{Term}$ # # $\hspace{8cm}d:\ \text{Document}$ # # On a side note, sarcasm, slang, misspellings, emoticons, and context are common problems in NLP, but we will be ignoring these due to time limitations. # # ### Assumptions # # It's always important to state your assumptions in any analysis because a violation of them will often impact the reliability of the results. My assumptions in this case are as follows: # # - The reviews are indicative of the characteristics of the restaurant. # - The language used in the reviews does not directly impact the rating a user gives. # - E.g. Reviews contain a description of their experience, but ratings are the result of the user applying weights to specific things they value. # - Ex. "The food was great, but the service was terrible." would be a 2/10 for one user, but a 7/10 for users like myself. # - The restaurants did not undergo significant changes in the time frame for the reviews being pulled. # - Sarcasm, slang, misspellings, and other common NLP problems will not have a significant impact on our results. 
# # --- # # Restaurant Recommender # # # If you're still with us after all of that, let's get started! # # --- # # In addition to the library imports, we have to specify our credentials to access the Foursquare API. I'm not keen on sharing mine, but you can get your own by [signing up](https://developer.foursquare.com/). I stored mine in a text file which is being read in as a variable before making the API calls. # In[1]: import numpy as np import pandas as pd import nltk # Natural Language Processing import re # Regex import matplotlib.pyplot as plt import seaborn as sns # Functions for preparing and calculating similarity from scipy import sparse from sklearn.metrics.pairwise import cosine_similarity from sklearn.preprocessing import MinMaxScaler, normalize from sklearn.feature_extraction import text as sktext import foursquare # Controls aesthetics for plots sns.set_context("notebook", font_scale=1.1) sns.set_style("ticks") get_ipython().run_line_magic('matplotlib', 'inline') # In[2]: # Reading credentials for the API with open('foursquareCredentials.txt') as f: lines = [line.split('=')[1] for line in f.readlines()] credentials = [line.replace("'", '').replace('\n', '') for line in lines] f.close() # Assigning the credentials to an object client = foursquare.Foursquare( client_id=credentials[0], client_secret=credentials[1], redirect_uri=credentials[2]) # ## The Data # # Foursquare works similarly to Yelp where users will review restaurants. They can either leave a rating (1-10), or write a review for the restaurant. The reviews are what we're interested in here since I established above that the rating has less meaning due to the way people rate restaurants differently between the two cities. # # The [documentation](https://developer.foursquare.com/docs/) was fortunately fairly robust. I used the [foursquare categoryID tree](https://developer.foursquare.com/categorytree) in order to grab the venue category ID for the different types of restaurants. The [venue search](https://developer.foursquare.com/docs/venues/search) function grabs the actual restaurants, and the [tips](https://developer.foursquare.com/docs/venues/tips) function returns the reviews (and not what users left for a tip like you'd think). # # We're going to use the individual reviews, restaurant category, price tier, and the number of check-ins, reviews, and users. # # # ### Restaurants # # Before pulling our restaurants, we have to first include a few parameters. We'll begin with the cities we want to include and the restaurant types we want to include. # # Since we're not interested in chains or fast food restaurants, we have to specify which venue category IDs we want. There are also a few grocery stores (such as Whole Foods and the magnificent [H-E-B](https://www.heb.com/)) that appear under the bakery and deli categories, so excluding this is easier than manually filtering them out. I left them commented out so there is the full list in case anyone else would like to include them for their own uses. # In[3]: cities = ['Austin, TX', # Where I'm from 'Minneapolis, MN'] # Where I'm going to grad school venueCategoryId = ['503288ae91d4c4b30a586d67' # Afghan , '4bf58dd8d48988d1c8941735' # African , '4bf58dd8d48988d14e941735' # American , '4bf58dd8d48988d142941735' # Asian. Includes most Asian countries. 
, '4bf58dd8d48988d169941735' # Australian , '52e81612bcbc57f1066b7a01' # Austrian , '4bf58dd8d48988d1df931735' # BBQ , '4bf58dd8d48988d179941735' # Bagel # , '4bf58dd8d48988d16a941735' # Bakery , '52e81612bcbc57f1066b7a02' # Belgian , '52e81612bcbc57f1066b79f1' # Bistro , '4bf58dd8d48988d143941735' # Breakfast , '52e81612bcbc57f1066b7a0c' # Bubble Tea , '4bf58dd8d48988d16c941735' # Burgers. Does not include fast food. , '4bf58dd8d48988d16d941735' # Cafe , '4bf58dd8d48988d17a941735' # Cajun/Creole , '4bf58dd8d48988d144941735' # Caribbean , '5293a7d53cf9994f4e043a45' # Caucasian , '4bf58dd8d48988d1e0931735' # Coffee Shop , '52e81612bcbc57f1066b7a00' # Comfort Food , '52e81612bcbc57f1066b79f2' # Creperie , '52f2ae52bcbc57f1066b8b81' # Czech # , '4bf58dd8d48988d146941735' # Deli , '4bf58dd8d48988d1d0941735' # Dessert , '4bf58dd8d48988d147941735' # Diner , '4bf58dd8d48988d148941735' # Donuts , '5744ccdfe4b0c0459246b4d0' # Dutch , '4bf58dd8d48988d109941735' # Eastern Europe , '52e81612bcbc57f1066b7a05' # English , '4bf58dd8d48988d10b941735' # Falafel , '4edd64a0c7ddd24ca188df1a' # Fish & Chips , '52e81612bcbc57f1066b7a09' # Fondue , '56aa371be4b08b9a8d57350b' # Food Stand , '4bf58dd8d48988d1cb941735' # Food Truck , '4bf58dd8d48988d10c941735' # French , '4bf58dd8d48988d155941735' # Gastropub , '4bf58dd8d48988d10d941735' # German , '4bf58dd8d48988d10e941735' # Greek , '52e81612bcbc57f1066b79ff' # Halal , '52e81612bcbc57f1066b79fe' # Hawaiian , '52e81612bcbc57f1066b79fa' # Hungarian , '4bf58dd8d48988d10f941735' # Indian , '52e81612bcbc57f1066b7a06' # Irish Pub , '4bf58dd8d48988d110941735' # Italian , '52e81612bcbc57f1066b79fd' # Jewish , '4bf58dd8d48988d112941735' # Juice Bar , '5283c7b4e4b094cb91ec88d7' # Kebab , '4bf58dd8d48988d1be941735' # Latin American , '4bf58dd8d48988d1c0941735' # Mediterranean , '4bf58dd8d48988d1c1941735' # Mexican , '4bf58dd8d48988d115941735' # Middle Eastern , '52e81612bcbc57f1066b79f9' # Modern European , '52e81612bcbc57f1066b79f8' # Pakistani , '56aa371be4b08b9a8d573508' # Pet Cafe , '4bf58dd8d48988d1ca941735' # Pizza , '52e81612bcbc57f1066b7a04' # Polish , '4def73e84765ae376e57713a' # Portuguese , '5293a7563cf9994f4e043a44' # Russian , '4bf58dd8d48988d1bd941735' # Salad , '4bf58dd8d48988d1c6941735' # Scandinavian , '5744ccdde4b0c0459246b4a3' # Scottish , '4bf58dd8d48988d1ce941735' # Seafood , '56aa371be4b08b9a8d57355a' # Slovak , '4bf58dd8d48988d1dd931735' # Soup , '4bf58dd8d48988d14f941735' # Southern , '4bf58dd8d48988d150941735' # Spanish , '5413605de4b0ae91d18581a9' # Sri Lankan , '4bf58dd8d48988d1cc941735' # Steakhouse , '4bf58dd8d48988d158941735' # Swiss , '4bf58dd8d48988d1dc931735' # Tea Room , '56aa371be4b08b9a8d573538' # Theme Restaurant , '4f04af1f2fb6e1c99f3db0bb' # Turkish , '52e928d0bcbc57f1066b7e96' # Ukranian , '4bf58dd8d48988d1d3941735' # Vegetarian ] venueCategoryIdstr = ','.join(venueCategoryId) # Converts to string for API call # With those specified, we'll go ahead and pull in the restaurants. This loop will go through each restaurant category, grab the restaurants that meet our criteria, and puts them into a data frame called dfRest (**d**ata **f**rame of **rest**aurants). # # Tasks like these can sometimes take a lot of time, but this fortunately runs in under a minute. 
# In[4]: get_ipython().run_cell_magic('time', '', "\n# Empty data frame to be filled with restaurant information\ndfRest = pd.DataFrame()\n\nfor city in cities:\n \n # Run the API for each category\n for category in venueCategoryId:\n apiCall = client.venues.search(params = \n {'near': city\n , 'intent': 'browse' # Non-user based\n , 'limit': '50' # Max is 50\n , 'radius': '56372' # In meters. Converts to 35 miles\n , 'categoryId': category})['venues']\n if len(apiCall) == 0:\n pass\n else:\n for restaurant in np.arange(len(apiCall)):\n \n # Error handling due to not every restaurant possessing\n # a value for city/state\n try:\n apiCity = apiCall[restaurant]['location']['city']\n apiState = apiCall[restaurant]['location']['state']\n except:\n apiCity = np.NaN\n apiState = np.NaN\n \n # Appending a temporary data frame to dfRest to prevent overwriting\n temp = pd.DataFrame({\n 'name': apiCall[restaurant]['name'],\n 'id': apiCall[restaurant]['id'],\n 'category': apiCall[restaurant]['categories'][0]['name'],\n 'shortCategory': apiCall[restaurant]['categories'][0]['shortName'],\n 'city': apiCity,\n 'state': apiState,\n 'location': city,\n 'checkinsCount': apiCall[restaurant]['stats']['checkinsCount'],\n 'commentsCount': apiCall[restaurant]['stats']['tipCount'],\n 'usersCount': apiCall[restaurant]['stats']['usersCount']\n }, index=[0])\n dfRest = pd.concat([dfRest, temp])\n") # In[5]: dfRest.shape # Now that we have pulled the restaurants, let's clean it up by removing any restaurants with: # - **Duplicates:** A small handful of these made it through, and we only need one of each. # - **$<$ 10 reviews:** These won't be too useful for our purposes. # - **Categories we don't want:** Some of these also got through, and I have no interest in seeing how the Burger King around the corner compares to a Whataburger back home. # In[6]: # Cleaning the dataset before making more API calls dfRest = dfRest.dropna().drop_duplicates() dfRest = dfRest[dfRest['commentsCount'] >= 10] # Removing unwanted categories excludeCategories = ['Fast Food', 'Gas Station', 'Golf Course', 'Bowling Alley'] dfRest = dfRest.loc[~dfRest.category.isin(excludeCategories)] # In[7]: dfRest.shape # While that removed more than half of our records, ~1,300 restaurants is still plenty enough to work with. This will also drastically reduce the number of reviews we need to grab, as well. # # Moving on, let's look at a few charts for exploratory analysis to see what restaurants we ended up with: # In[8]: # Number of reviews per restaurant plt.figure(figsize=(8, 5)) sns.kdeplot(dfRest['commentsCount'], legend=False) plt.title('# Comments per Restaurant Distribution') sns.despine() # In[9]: # Number of restaurants per city plt.figure(figsize=(8, 5)) dfRest['id'].groupby(dfRest['location']).count().plot(kind='bar') plt.title('# Restaurants per City') sns.despine() # In[10]: # Breakdown of restaurant type by city plt.figure(figsize=(12, 20)) # Grouping by the number of restaurants per category and city categoryPlot = dfRest['id'].groupby( (dfRest['shortCategory'], dfRest['location'])).count().sort_values( ascending=False)[:100].reset_index() # Plotting sns.barplot(x="id", y="shortCategory", hue="location", data=categoryPlot) # Setting labels and configuring visuals plt.xlabel('Count') plt.ylabel('Category') sns.despine() # So our number of restaurants between each city is basically even, most restaurants have between 10 and 70 reviews, and we see a few interesting things in the breakdown between categories. 
To summarize the chart: # # **Austin:** # - Significantly more BBQ, tacos, food trucks, donuts, juice bars, and Cajun restaurants (but I could have told you this) # - Seemingly more diversity in the smaller categories # # **Minneapolis:** # - American is king # - Significantly more bars, bakeries, middle eastern, cafés, tea rooms, German, and breweries # # Our initial pull didn't include the ratings, so we'll go ahead and pull those in along with the price tier. # # This loop takes a lot longer (~7 minutes), possibly due to making individual calls for each restaurant (~1,300) instead of just each category (~72). # In[11]: get_ipython().run_cell_magic('time', '', "\nratings = []\nnumRatings = []\npriceTiers = []\n\nfor restId in dfRest['id']:\n apiCall = client.venues(restId)['venue']\n \n try:\n rate = apiCall['rating']\n numRates = apiCall['ratingSignals']\n tiers = apiCall['attributes']['groups'][0]['items'][0]['priceTier']\n except:\n rate = np.NaN\n numRates = np.NaN\n tiers = np.NaN\n \n ratings.append(rate)\n numRatings.append(numRates)\n priceTiers.append(tiers)\n \ndfRest['rating'] = ratings\ndfRest['numRatings'] = numRatings\ndfRest['priceTier'] = priceTiers\n") # In[12]: # Re-ordering the columns since the loop puts them out of order dfRest = dfRest.reindex_axis(['id', 'name', 'category', 'shortCategory', 'checkinsCount', 'city','state', 'location', 'commentsCount', 'usersCount', 'priceTier', 'numRatings', 'rating'], axis=1).reset_index(drop=True) # In[13]: # Cleaning the dataset by dropping duplicates dfRest = dfRest.dropna().drop_duplicates() dfRest.head() # ### Reviews # # These are the actual comments people write about the restaurants, and they are what we will be comparing for our "recommender" system. # comments # This loop grabs the actual comments per restaurant, and puts them into the data frame dfComments. It takes ~10 minutes to run. # # Since I'm using a free developer key, I'm currently limited to 30 comments per restaurant ID. This will impact the quality for some restaurants, but we saw that most of our restaurants had under 30 comments in the earlier chart. # In[15]: get_ipython().run_cell_magic('time', '', "\ndfComments = pd.DataFrame()\n\nfor rest in dfRest['id']:\n apiCall = client.venues.tips(rest)['tips']['items']\n numComments = len(apiCall)\n \n for idx in np.arange(numComments):\n temp = pd.DataFrame({\n 'id': rest,\n 'comment': apiCall[idx]['text']\n }, index=[0])\n dfComments = pd.concat([dfComments, temp])\n \ndfComments = dfComments.reindex_axis(['id', 'comment'], axis = 1).reset_index(drop=True)\n") # In[16]: dfComments.head() # Now we need to group the comments together so we have one set of comments per restaurant before joining everything together. # In[17]: # Grouping individual comments by restaurant ID groupedComments = dfComments.groupby('id')['comment'].apply( lambda x: "{%s}" % ''.join(x)).reset_index() # This is where we merge everything into one data frame, df, which has a row for each restaurant that includes all of the comments. We'll also perform an additional sanitation step here by removing non-ASCII characters from the comments. 
# In[18]: # Merging everything together into one data frame df = pd.merge(dfRest, groupedComments, on='id') # Removing non-ASCII characters df['comments'] = df['comment'].apply( lambda comment: re.sub(r'[^\x00-\x7f]', r'', comment)) df.drop('comment', axis=1, inplace=True) df.head() # Here's an example of the comments for one random restaurant to give a better idea of the data we're dealing with: # In[19]: df['comments'][np.random.randint(df.shape[0])] # Selects one restaurant at random # ## Data Processing # # When working with language, we have to process the text into something that a computer can handle more easily. Our end result will be a large number of numerical features for each restaurant that we can use to calculate the cosine similarity. # # The steps here are: # 1. Normalizing # 2. Tokenizing # 3. Removing stopwords # 4. Lemmatizing (Stemming) # 5. Term Frequency-Inverse Document Frequency (TF-IDF) # # I'll explain a little more about what these are and why we are doing them below in case you aren't familiar with them. # ### 1) Normalizing # # This section uses regex scripts that make every word lower case, remove punctuation, and remove digits. # # For example: # # **Before:** # # $\hspace{1cm}$"ThIs Is HoW mIdDlE sChOoLeRs TaLkEd 2 EaCh OtHeR oN AIM!!!!" # # **After:** # # $\hspace{1cm}$"this is how middle schoolers talked each other on aim" # # The benefit of this is that it vastly reduces our feature space. Our pre-processed example would have created an additional ~10 features from someone who doesn't know how to type like a regular human being. # In[21]: # Converting all words to lower case and removing punctuation df['comments'] = [re.sub(r'\d+\S*', '', row.lower().replace('.', ' ').replace('_', '').replace('/', '')) for row in df['comments']] df['comments'] = [re.sub(r'(?:^| )\w(?:$| )', '', row) for row in df['comments']] # Removing numbers df['comments'] = [re.sub(r'\d+', '', row) for row in df['comments']] df['comments'].head() # ### 2) Tokenizing # # Tokenizing a sentence is a way to map our words into a feature space. This is achieved by treating every word as an individual object. # # **Before:** # # $\hspace{1cm}$'central texas barbecue is the best smoked and the only barbecue that matters' # # **After:** # # $\hspace{1cm}$['central', 'texas', 'barbecue', 'is', 'the', 'best', 'smoked', 'and', 'the', 'only', 'barbecue', 'that', 'matters'] # # In[23]: # Tokenizing comments and putting them into a new column tokenizer = nltk.tokenize.RegexpTokenizer(r'\w+') # splits on non-word characters df['tokens'] = df['comments'].apply(tokenizer.tokenize) df['tokens'].head() # ### 3) Removing Stopwords & Punctuation # # Stopwords are unnecessary words like *as*, *the*, *and*, and *of* that aren't very useful for our purposes. Since they don't have any intrinsic value, removing them reduces our feature space, which speeds up our computations. 
# # **Before:** # # $\hspace{1cm}$['central', 'texas', 'barbecue', 'is', 'the', 'best', 'smoked', 'and', 'the', 'only', 'barbecue', 'that', 'matters'] # # **After:** # # $\hspace{1cm}$['central', 'texas', 'barbecue', 'best', 'smoked', 'only', 'barbecue', 'matters'] # # This does take a bit longer to run, at ~6 minutes. # In[24]: get_ipython().run_cell_magic('time', '', "\nfiltered_words = []\nfor row in df['tokens']:\n filtered_words.append([\n word.lower() for word in row\n if word.lower() not in nltk.corpus.stopwords.words('english')\n ])\n\ndf['tokens'] = filtered_words\n") # ### 4) Lemmatizing (Stemming) # # Stemming removes variations at the end of a word to revert words to their root in order to reduce our overall feature space (e.g. running $\rightarrow$ run). This can adversely impact our performance when the root word is different (e.g. university $\rightarrow$ universe), but the net positives typically outweigh the net negatives. # # **Before:** # # $\hspace{1cm}$['central', 'texas', 'barbecue', 'best', 'smoked', 'only', 'barbecue', 'matters'] # # **After:** # # $\hspace{1cm}$['central', 'texas', 'barbecue', 'best', 'smoke', 'only', 'barbecue', 'matter'] # # One very important thing to note here is that we're actually doing something called **[Lemmatization](https://en.wikipedia.org/wiki/Lemmatisation)**, which is similar to [stemming](https://en.wikipedia.org/wiki/Stemming), but a little different. Both seek to reduce inflectional forms and sometimes derivationally related forms of a word to a common base form, but they go about it in different ways. In order to illustrate the difference, here's a dictionary entry: # # # # Lemmatization seeks to get the *lemma*, or the base dictionary form of the word. In our example above, that would be "graduate". It does this by using vocabulary and a morphological analysis of the words, rather than just chopping off the variations (the "verb forms" in the example above) like a traditional stemmer would. # # The advantage of lemmatization here is that we don't run into issues like our other example of *university* $\rightarrow$ *universe* that can happen in conventional stemmers. It is also relatively quick on this data set! # # The disadvantage is that it is not able to infer whether the word is a noun/verb/adjective/etc., so we have to specify which type it is. Since we're looking at, well, everything, we're going to lemmatize for nouns, verbs, and adjectives. # # [Here](https://nlp.stanford.edu/IR-book/html/htmledition/stemming-and-lemmatization-1.html) is an excerpt from the Stanford book *Introduction to Information Retrieval* if you want to read more about stemming and lemmatization. 
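# Before running the real thing, here is a small illustrative comparison (not part of the pipeline) of a traditional stemmer versus the lemmatizer we'll use below, assuming the nltk `wordnet` corpus is available:

# In[ ]:

# Quick illustration: Porter stemming chops suffixes (e.g. 'university' -> 'univers'),
# while WordNet lemmatization maps words to their dictionary form
from nltk.stem import PorterStemmer, WordNetLemmatizer

stemmer = PorterStemmer()
lemmatizer = WordNetLemmatizer()

for word in ['matters', 'smoked', 'university', 'graduated']:
    print(word,
          '| stem:', stemmer.stem(word),
          '| lemma (as verb):', lemmatizer.lemmatize(word, 'v'))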
# In[27]: get_ipython().run_cell_magic('time', '', "\n# Setting the Lemmatization object\nlmtzr = nltk.stem.wordnet.WordNetLemmatizer()\n\n# Looping through the words and appending the lemmatized version to a list\nstemmed_words = []\nfor row in df['tokens']:\n stemmed_words.append([\n # Verbs\n lmtzr.lemmatize( \n # Adjectives\n lmtzr.lemmatize( \n # Nouns\n lmtzr.lemmatize(word.lower()), 'a'), 'v')\n for word in row\n if word.lower() not in nltk.corpus.stopwords.words('english')])\n\n# Adding the list as a column in the data frame\ndf['tokens'] = stemmed_words\n") # Let's take a look at how many unique words we now have and a few of the examples: # In[28]: # Appends all words to a list in order to find the unique words allWords = [] for row in stemmed_words: for word in row: allWords.append(str(word)) uniqueWords = np.unique(allWords) print('Number of unique words:', len(uniqueWords), '\n') print('Previewing sample of unique words:\n', uniqueWords[1234:1244]) # We can see a few of the challenges from slang or typos that I mentioned in the beginning. These will pose problems for what we're doing, but we'll just have to assume that the vast majority of words are spelled correctly. # # Before doing the TF-IDF transformation, we need to make sure that we have spaces in between each word in the comments: # In[29]: stemmed_sentences = [] # Spacing out the words in the reviews for each restaurant for row in df['tokens']: stemmed_string = '' for word in row: stemmed_string = stemmed_string + ' ' + word stemmed_sentences.append(stemmed_string) df['tokens'] = stemmed_sentences stemmed_sentences[np.random.randint(len(stemmed_sentences))] # ### 5) Term Frequency-Inverse Document Frequency (TF-IDF) # # This determines how important a word is to a document (which is a review in this case) within a corpus (the collection documents). It is a number resulting from the following formula: # # $\hspace{8cm}TFIDF(t, d) = TF(t, d) \cdot IDF(t)$ # # # $\hspace{8cm}IDF(t) = 1 + \log\Big(\frac{\#\ Documents}{\#\ Documents\ Containing\ t}\Big)$ # # # $\hspace{8cm}t:\ \text{Term}$ # # $\hspace{8cm}d:\ \text{Document}$ # # Scikit-learn has an [excellent function](http://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.TfidfVectorizer.html) that is able to transform our processed text into a TF-IDF matrix very quickly. We'll convert it back to a data frame, and join it to our original data frame by the indexes. # In[30]: get_ipython().run_cell_magic('time', '', "\n# Creating the sklearn object\ntfidf = sktext.TfidfVectorizer(smooth_idf=False)\n\n# Transforming our 'tokens' column into a TF-IDF matrix and then a data frame\ntfidf_df = pd.DataFrame(tfidf.fit_transform(df['tokens']).toarray(), \n columns=tfidf.get_feature_names())\n") # In[31]: print(tfidf_df.shape) tfidf_df.head() # Since we transformed *all* of the words, we have a [sparse matrix](https://en.wikipedia.org/wiki/Sparse_matrix). We don't care about things like typos or words specific to one particular restaurant, so we're going to remove columns that don't have a lot of contents. # In[32]: # Removing sparse columns tfidf_df = tfidf_df[tfidf_df.columns[tfidf_df.sum() > 2.5]] # Removing any remaining digits tfidf_df = tfidf_df.filter(regex=r'^((?!\d).)*$') print(tfidf_df.shape) tfidf_df.head() # This drastically reduced the dimensions of our data set, and we now have something usable to calculate similarity. 
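# As a side note, a similar pruning of rare terms could arguably be pushed into the vectorizer itself rather than done by filtering columns afterwards. Below is an alternative sketch (not the approach used here) relying on TfidfVectorizer's `min_df` and `token_pattern` parameters; the threshold of 5 is illustrative, not tuned.

# In[ ]:

# Alternative sketch: let the vectorizer drop rare terms and digits up front
# instead of filtering low-sum columns after the fact (threshold is illustrative)
tfidf_pruned = sktext.TfidfVectorizer(
    smooth_idf=False,
    min_df=5,  # ignore terms appearing in fewer than 5 restaurants' reviews
    token_pattern=r'(?u)\b[a-zA-Z]{2,}\b')  # letters only, at least two characters

tfidf_df_alt = pd.DataFrame(
    tfidf_pruned.fit_transform(df['tokens']).toarray(),
    columns=tfidf_pruned.get_feature_names())
print(tfidf_df_alt.shape)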
# In[33]: # Storing the original data frame before the merge in case any changes are needed df_orig = df.copy() # Renaming columns that conflict with column names in tfidf_df df.rename(columns={'name': 'Name', 'city': 'City', 'location': 'Location'}, inplace=True) # Merging the data frames by index df = pd.merge(df, tfidf_df, how='inner', left_index=True, right_index=True) df.head() # Lastly, we're going to add additional features for the category. This just puts a heavier weight on restaurants of the same type, so, for example, a Mexican restaurant will be more likely to have other Mexican restaurants show up as most similar instead of, say, Brazilian restaurants. # In[34]: # Creates dummy variables out of the restaurant category df = pd.concat([df, pd.get_dummies(df['shortCategory'])], axis=1) df.head() # Because we introduced an additional type of feature, we'll have to check its weight in comparison to the TF-IDF features: # In[35]: # Summary stats of TF-IDF print('Max:', np.max(tfidf_df.max()), '\n', 'Mean:', np.mean(tfidf_df.mean()), '\n', 'Standard Deviation:', np.std(tfidf_df.std())) # The dummy variables for the restaurant type are quite a bit higher than the average word, but I'm comfortable with this since I think it has a benefit. # # "Recommender System" # # As a reminder, we are not using a conventional recommender system. Instead, we are using recommender system theory by calculating the cosine similarity between comments in order to find restaurants with the most similar comments. # ### Loading in personal ratings # # In order to recommend restaurants with this approach, we have to identify the restaurants for which we want to find the most similar matches. I took the data frame and assigned my own ratings to some of my favorites. # In[36]: # Loading in self-ratings for restaurants in the data set selfRatings = pd.read_csv('selfRatings.csv', usecols=[0, 4]) selfRatings.head() # In[37]: # Merging into df to add the column 'selfRating' df = pd.merge(df, selfRatings) df.head() # ### Additional features & min-max scaling # # We're going to include a few additional features from the original data set to capture information that the comments may not have. Specifically: # # - **Popularity:** checkinsCount, commentsCount, usersCount, numRatings # - **Price:** priceTier # # We're also going to scale these down so they don't carry a huge advantage over everything else. I'm going to scale the popularity attributes to be between 0 and 0.5, and the price attribute to be between 0 and 1. I'll do this by first min-max scaling everything (to put it between 0 and 1), and then dividing the popularity features in half. 
# In[38]: # Removing everything that won't be used in the similarity calculation df_item = df.drop(['id', 'category', 'Name', 'shortCategory', 'City', 'tokens', 'comments', 'state', 'Location', 'selfRating', 'rating'], axis=1) # Copying into a separate data frame to be normalized df_item_norm = df_item.copy() columns_to_scale = ['checkinsCount', 'commentsCount', 'usersCount', 'priceTier', 'numRatings'] # Split df_item_split = df_item[columns_to_scale] df_item_norm.drop(columns_to_scale, axis=1, inplace=True) # Apply df_item_split = pd.DataFrame(MinMaxScaler().fit_transform(df_item_split), columns=df_item_split.columns) df_item_split_half = df_item_split.drop('priceTier', axis=1) df_item_split_half = df_item_split_half / 2 df_item_split_half['priceTier'] = df_item_split['priceTier'] # Combine (using the halved popularity features so they stay between 0 and 0.5) df_item_norm = df_item_norm.merge(df_item_split_half, left_index=True, right_index=True) df_item_norm.head() # ### Calculating cosine similarities # # Here's the moment that we've spent all of this time getting to: the similarity. # # This section calculates the cosine similarity and puts it into a matrix of pairwise similarities: # # | | 0 | 1 | ... | n | # |------|------|------|------|------| # | 0 | 1.00 | 0.03 | ... | 0.15 | # | 1 | 0.31 | 1.00 | ... | 0.89 | # | ... | ... | ... | ... | ... | # | n | 0.05 | 0.13 | ... | 1.00 | # # As a reminder, we're using cosine similarity because it's generally accepted as producing better results in item-to-item filtering. For all you math folk, here's the formula again: # # $\hspace{8cm}sim(A, B) = \cos(\theta) = \frac{A \cdot B}{\|A\|\|B\|}$ # In[39]: # Calculating cosine similarity df_item_norm_sparse = sparse.csr_matrix(df_item_norm) similarities = cosine_similarity(df_item_norm_sparse) # Putting into a data frame dfCos = pd.DataFrame(similarities) dfCos.head() # These are some of the restaurants I rated very highly, and I'm pulling them up so we can use their index numbers to compare them to the others in our data set: # In[40]: # Filtering to those from my list with the highest ratings topRated = df[df['selfRating'] >= 8].drop_duplicates('Name') # Preparing for display topRated[['Name', 'category', 'Location', 'selfRating']].sort_values( 'selfRating', ascending=False) # In order to speed things up, we'll make a function that formats the cosine similarity data frame and retrieves the top $n$ most similar restaurants for the given restaurant: # In[41]: def retrieve_recommendations(restaurant_index, num_recommendations=5): """ Retrieves the most similar restaurants for the index of a given restaurant Outputs a data frame showing similarity, name, location, category, and rating """ # Formatting the cosine similarity data frame for merging similarity = pd.melt(dfCos[dfCos.index == restaurant_index]) similarity.columns = (['restIndex', 'cosineSimilarity']) # Merging the cosine similarity data frame to the original data frame similarity = similarity.merge( df[['Name', 'City', 'state', 'Location', 'category', 'rating', 'selfRating']], left_on=similarity['restIndex'], right_index=True) similarity.drop(['restIndex'], axis=1, inplace=True) # Ensuring that retrieved recommendations are for Minneapolis similarity = similarity[(similarity['Location'] == 'Minneapolis, MN') | ( similarity.index == restaurant_index)] # Sorting by similarity similarity = similarity.sort_values( 'cosineSimilarity', ascending=False)[:num_recommendations + 1] return similarity # Alright, let's test it out! # # ### Barbecue # # Let's start with the [Salt Lick](http://saltlickbbq.com/). 
This is a popular central Texas barbecue place featured on [various food shows](https://www.youtube.com/watch?v=vLnsXechOWc). They are well-known for their open smoke pit: # # # # In case you're not familiar with [central Texas barbecue](https://en.wikipedia.org/wiki/Barbecue_in_Texas), it primarily features smoked meats (especially brisket) with white bread, onions, pickles, potato salad, beans, and cornbread on the side. Sweet tea is usually the drink of choice if you're not drinking a Shiner or a Lone Star. # In[42]: # Salt Lick retrieve_recommendations(66) # Surprisingly, our top recommendation is one of my favorite restaurants I've found in Minneapolis - Brasa! They're actually a [Creole](https://en.wikipedia.org/wiki/Louisiana_Creole_cuisine) restaurant, but they have a lot of smoked meats, beans, and corn bread, and they're probably the only restaurant I've found so far that lists sweet tea on the menu: # # # # Funny enough, Brasa was also in [Man vs Food](https://www.youtube.com/watch?v=gZmGAi5DKE4) with Andrew Zimmerman as a guest. # # Famous Dave's is a Midwestern barbecue chain that focuses more on ribs, which isn't generally considered a Texan specialty. Psycho Suzi's (a theme restaurant that serves pizza and cocktails) and Brit's Pub (an English pub with a lawn bowling field) don't seem very similar, but their low cosine similarity scores reflect that. # # ### Donuts # # Moving on, let's find some donuts. Before maple-bacon-cereal-whatever donuts became the craze (thanks for nothing, Portland), my hometown was also famous for [Round Rock Donuts](http://roundrockdonuts.com/), a simple and delicious no-nonsense donut shop. And yes, Man vs. Food also did a segment here. # # # In[43]: # Round Rock Donuts retrieve_recommendations(222) # Sadly, a lot of the most similar places our results returned were places I've tried and didn't like. For some reason, the donuts at most places up here are usually cold, cake-based, and covered in kitschy stuff like bacon. However, Granny donuts looks like it could be promising, as does Bogart's: # # # # ### Tacos # # This is another Austin specialty that likely won't give promising results, but let's try it anyway. # # [Tacodeli](http://www.tacodeli.com/) is my personal favorite taco place in Austin (yes, it's better than [Torchy's](http://torchystacos.com/)), and they're a little different from the traditional taco (corn tortilla, meat, onions, cilantro, and lime) that you might find at traditional Mexican taquerias. They're typically on flour tortillas, and they diversify their flavor profiles and toppings: # # # In[44]: # Tacodeli retrieve_recommendations(420) # It looks like there's a pretty sharp drop-off in cosine similarity after our second recommendation (which makes sense when you look at the ratio of taco places in Austin vs. Minneapolis from when we pulled our data), so I'm going to discard the bottom three. I'm surprised again that Psycho Suzi's and Brit's Pub made a second appearance since neither of them serves tacos, but I won't go into that too much since their cosine similarity is really low. # # I have tried Rusty Taco, and it does seem a lot like Tacodeli. They even sell breakfast tacos, which is a very Texan thing that can be rare in the rest of the country. The primary difference is in the diversity and freshness of ingredients, and subsequently, for me, taste: # # # # Taco Taxi looks like it could be promising, but they appear to be more of a traditional taqueria (delicious but dissimilar). 
To be fair, taquerias have had the best tacos I've found up here (though most of them aren't included in this list because they were outside of the search range). # # ### Burritos # # I'm not actually going to run our similarity function for this part because the burrito place back home disappeared from our data-pulling query between when I originally ran this and when I finally had time to annotate everything and write this post. However, I wanted to include it because it was one of my other field tests. # # [Cabo Bob's](http://cabobobs.com/) is my favorite burrito place back home, and they made it to the semi-finals in the [538 best burrito in America search](https://fivethirtyeight.com/burrito/#brackets-view), losing to the overall champion. For anyone not familiar with non-chain burrito restaurants, they are similar to Chipotle, but usually higher quality. # # # # [El Burrito Mercado](http://elburritomercado.com/) returned as highly similar, so we tried it. It's tucked in the back of a mercado, and has both a sit-down section as well as a lunch line similar to Cabo Bob's. We decided to go for the sit-down section since we had come from the opposite side of the metropolitan area, so the experience was a little different. My burrito was more of a traditional Mexican burrito (as opposed to Tex-Mex), but it was still pretty darn good. # # # # ### Indian # # Next up is the [Clay Pit](https://www.claypit.com/), a contemporary Indian restaurant in Austin. They focus mostly on curry dishes with naan, though some of my friends from grad school can tell you that India has way more cuisine diversity than curry dishes. # # # In[45]: # Clay Pit retrieve_recommendations(338, 8) # This was actually the first place I did a field test on. When I originally looked this up, we ended up trying [Gorkha Palace](http://gorkhapalace.com/) since it was the closest one to our house with the best reviews. It has a more expansive offering, including Nepali and Tibetan food (though I wasn't complaining, because I love [momos](https://en.wikipedia.org/wiki/Momo_(food))). It was delicious and very similar to the Clay Pit. We'll be going back. # # # # ### French/Bistro # # One of our favorite places back home is [Blue Dahlia Bistro](http://www.bluedahliabistro.com/), a European-style bistro specializing in French fusion. They use a lot of fresh and simple ingredients, and it's a great place for a date night due to its cozy interior and decorated back patio. # # # In[46]: # Blue Dahlia retrieve_recommendations(124) # I think our heavier category weighting is hurting us here since Blue Dahlia is classified as a café. Most of the recommendations focus on American food (remember, American food is king in Minneapolis), but I'm guessing the [Wilde Roast Cafe](http://wildecafe.com/) was listed as the most similar restaurant due to the similarly cozy interior and the various espresso drinks they offer. I've been to the Wilde Roast before beginning this project, and I can tell you that the food is completely different. # # # # # ### Coffee # # Speaking of coffee, let's wrap this up with coffee recommendations. I still have a lot of places to find matches for, but since I did this project as a poor graduate student, most of them would be "that looks promising, but I haven't tried it yet". # # Sadly, a lot of my favorite coffee shops from in and around Austin didn't show up since Starbucks took up most of the space when searching for coffee places (remember, we're limited to 50 results per category). 
We ended up with [Mozart's](http://www.mozartscoffee.com/): # # # # and the [Coffee Bean & Tea Leaf](https://www.coffeebean.com/): # # # # I ran the results for Mozart's and didn't get anything too similar back. To be fair, there aren't any coffee shops on a river up here, and I'm sure most comments for Mozart's are about the view. # # Let's go with The Coffee Bean & Tea Leaf instead. It's actually a small chain out of California that is, in my opinion, tastier than Starbucks. # In[47]: # Coffee Bean & Tea Leaf retrieve_recommendations(558) # These are perfectly logical results. [Caribou](https://www.cariboucoffee.com/) is a chain popular in the Midwest (I also describe it as 'like Starbucks but better' to friends back home), and [Dunn Bros](https://dunnbrothers.com/) is similar, but specific to Minnesota. # # This chart from [an article on FlowingData](http://flowingdata.com/2014/03/18/coffee-place-geography/) helps explain why I think these results make so much sense: # # # # As for my verdict on Caribou, I actually like it better than the Coffee Bean and Tea Leaf. In fact, I have a gift card for them in my wallet right now. There also used to be a location for the Coffee Bean and Tea Leaf up here, but they closed it down shortly after I moved here (just like all of the Minnesotan [Schlotzsky's](https://www.schlotzskys.com/) locations...I'm still mad about that). # # Summary # # Like any other tool, this method isn't perfect for every application, but it can work if we use it effectively. While there is room for improvement, I am pleased with how it has been working for me so far. # # I'm going to continue to use this when we can't decide on a restaurant or are feeling homesick, and I will likely update this post with either more places I try out or a new set of recommendations when I move to a new city. In the meantime, I hope you enjoyed reading, and feel free to use my code ([here is the GitHub link](https://github.com/JeffMacaluso/Blog/blob/master/Restaurant%20Recommender.ipynb)) to try it out for your own purposes. # # Happy eating!