This IPython notebook illustrates the usage of the cmfrec Python package for building recommender systems through different matrix factorization models with or without using information about user and item attributes – for more details see the references at the bottom.
The example uses the MovieLens-1M data, which consists of movie ratings from users along with user demographic information, plus the movie tag genome. Note however that, for implicit-feedback datasets (e.g. item purchases), it's recommended to use different models than the ones shown here (see the documentation for details about the models in the package aimed at implicit-feedback data).
Small note: if the TOC here is not clickable or the math symbols don't render properly, try viewing this same notebook on nbviewer following this link.
3. Examining top-N recommended lists
5. Recommendations for new users
7. Adding implicit features and dynamic regularization
import numpy as np, pandas as pd, pickle
ratings = pickle.load(open("ratings.p", "rb"))
item_sideinfo_pca = pickle.load(open("item_sideinfo_pca.p", "rb"))
user_side_info = pickle.load(open("user_side_info.p", "rb"))
movie_id_to_title = pickle.load(open("movie_id_to_title.p", "rb"))
ratings.head()
| | UserId | ItemId | Rating |
|---|---|---|---|
0 | 1 | 1193 | 5 |
1 | 1 | 661 | 3 |
2 | 1 | 914 | 3 |
3 | 1 | 3408 | 4 |
4 | 1 | 2355 | 5 |
item_sideinfo_pca.head()
| | ItemId | pc1 | pc2 | pc3 | pc4 | pc5 | pc6 | pc7 | pc8 | pc9 | ... | pc41 | pc42 | pc43 | pc44 | pc45 | pc46 | pc47 | pc48 | pc49 | pc50 |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
0 | 1 | 1.193171 | 2.085621 | 2.634135 | 1.156088 | 0.721649 | 0.995436 | 1.250474 | -0.779532 | 1.616702 | ... | -0.317134 | -0.070338 | -0.019553 | 0.169051 | 0.201415 | -0.094831 | -0.250461 | -0.149919 | -0.031735 | -0.177708 |
1 | 2 | -1.333533 | 1.743796 | 1.352161 | 0.795724 | -0.484175 | 0.380645 | 0.804462 | -0.598527 | 0.917250 | ... | 0.300060 | -0.261956 | 0.054457 | 0.003863 | 0.304605 | -0.315796 | 0.360203 | 0.152770 | 0.144790 | -0.096549 |
2 | 3 | -1.363395 | -0.017107 | 0.530395 | -0.316202 | 0.469430 | 0.164630 | 0.019083 | 0.159188 | -0.232969 | ... | 0.215020 | -0.060682 | -0.280852 | 0.001087 | 0.084960 | -0.257190 | -0.136963 | -0.113914 | 0.128352 | -0.203658 |
3 | 4 | -1.237840 | -0.993731 | 0.809815 | -0.303009 | -0.088991 | -0.049621 | -0.179544 | -0.771278 | -0.400499 | ... | 0.066207 | 0.056054 | -0.223027 | 0.400157 | 0.292300 | 0.260936 | -0.307608 | -0.224141 | 0.488955 | 0.439189 |
4 | 5 | -1.611499 | -0.251899 | 1.126443 | -0.135702 | 0.403340 | 0.187289 | 0.108451 | -0.275341 | -0.261142 | ... | 0.109560 | -0.086042 | -0.236327 | 0.461589 | 0.013350 | -0.192557 | -0.234025 | -0.369643 | -0.041060 | -0.074656 |
5 rows × 51 columns
user_side_info.head()
| | UserId | Gender_F | Gender_M | Age_1 | Age_18 | Age_25 | Age_35 | Age_45 | Age_50 | Age_56 | ... | Occupation_unemployed | Occupation_writer | Region_Middle Atlantic | Region_Midwest | Region_New England | Region_South | Region_Southwest | Region_UnknownOrNonUS | Region_UsOther | Region_West |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
0 | 1 | True | False | True | False | False | False | False | False | False | ... | False | False | False | True | False | False | False | False | False | False |
1 | 2 | False | True | False | False | False | False | False | False | True | ... | False | False | False | False | False | True | False | False | False | False |
2 | 3 | False | True | False | False | True | False | False | False | False | ... | False | False | False | True | False | False | False | False | False | False |
3 | 4 | False | True | False | False | False | False | True | False | False | ... | False | False | False | False | True | False | False | False | False | False |
4 | 5 | False | True | False | False | True | False | False | False | False | ... | False | True | False | True | False | False | False | False | False | False |
5 rows × 39 columns
This section fits different recommendation models and then compares the recommendations produced by them.
Usual low-rank matrix factorization model with no user/item attributes: $$ \mathbf{X} \approx \mathbf{A} \mathbf{B}^T + \mu + \mathbf{b}_A + \mathbf{b}_B $$ where $\mathbf{X}$ is the ratings matrix, $\mathbf{A}$ and $\mathbf{B}$ are the latent factor matrices for users and items, $\mu$ is the global mean rating, and $\mathbf{b}_A$, $\mathbf{b}_B$ are the user and item biases.
(For more details see references at the bottom)
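As a quick illustration of this formula (a toy NumPy sketch, not using cmfrec - all the numbers here are made up), a predicted rating is the dot product of a user's and an item's latent factors, plus the global mean and both biases:

```python
import numpy as np

rng = np.random.default_rng(0)
n_users, n_items, k = 5, 7, 3

# toy model parameters (in cmfrec these would be learned from the ratings)
A = rng.normal(size=(n_users, k))   # user factors
B = rng.normal(size=(n_items, k))   # item factors
mu = 3.6                            # global mean rating
bias_A = rng.normal(size=n_users)   # user biases
bias_B = rng.normal(size=n_items)   # item biases

# predicted rating for one user u and item i
u, i = 2, 4
pred = A[u] @ B[i] + mu + bias_A[u] + bias_B[i]

# full prediction matrix: same formula, vectorized
X_hat = A @ B.T + mu + bias_A[:, None] + bias_B[None, :]
assert np.isclose(X_hat[u, i], pred)
```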
%%time
from cmfrec import CMF
model_no_sideinfo = CMF(method="als", k=40, lambda_=1e+1)
model_no_sideinfo.fit(ratings)
CPU times: user 6.75 s, sys: 1.56 s, total: 8.31 s
Wall time: 592 ms
Collective matrix factorization model (explicit-feedback variant)
The collective matrix factorization model extends the earlier model by having the user and item factor matrices additionally produce low-rank approximate factorizations of the user and item attributes: $$ \mathbf{X} \approx \mathbf{A} \mathbf{B}^T + \mu + \mathbf{b}_A + \mathbf{b}_B ,\:\:\:\: \mathbf{U} \approx \mathbf{A} \mathbf{C}^T + \mathbf{\mu}_U ,\:\:\:\: \mathbf{I} \approx \mathbf{B} \mathbf{D}^T + \mathbf{\mu}_I $$
where $\mathbf{U}$ is the user-attributes matrix, $\mathbf{I}$ is the item-attributes matrix, $\mathbf{C}$ and $\mathbf{D}$ are the factor matrices for those attributes, and $\mathbf{\mu}_U$, $\mathbf{\mu}_I$ are the attribute column means.
In addition, the package can also apply sigmoid transformations to binary attribute columns. Note that this requires a different optimization approach, which is slower than the ALS (alternating least-squares) method used here.
%%time
model_with_sideinfo = CMF(method="als", k=40, lambda_=1e+1, w_main=0.5, w_user=0.25, w_item=0.25)
model_with_sideinfo.fit(X=ratings, U=user_side_info, I=item_sideinfo_pca)
### for the sigmoid transformations:
# model_with_sideinfo = CMF(method="lbfgs", maxiter=0, k=40, lambda_=1e+1, w_main=0.5, w_user=0.25, w_item=0.25)
# model_with_sideinfo.fit(X=ratings, U_bin=user_side_info, I=item_sideinfo_pca)
CPU times: user 11.2 s, sys: 13 s, total: 24.2 s
Wall time: 1.5 s
Collective matrix factorization model (explicit-feedback variant)
(Note that, since the side information has variables on a different scale, even though the weights sum to 1, the model is not equivalent to the earlier one w.r.t. the regularization parameter - this type of model also requires more hyperparameter tuning.)
This is a model in which the factorizing matrices are constrained to be linear combinations of the user and item attributes, thereby making the recommendations based entirely on side information, with no free parameters for specific users or items: $$ \mathbf{X} \approx (\mathbf{U} \mathbf{C}) (\mathbf{I} \mathbf{D})^T + \mu $$
(Note that the movie attributes are not available for all the movies with ratings)
%%time
from cmfrec import ContentBased
model_content_based = ContentBased(k=40, maxiter=0, user_bias=False, item_bias=False)
model_content_based.fit(X=ratings.loc[lambda x: x["ItemId"].isin(item_sideinfo_pca["ItemId"])],
U=user_side_info,
I=item_sideinfo_pca.loc[lambda x: x["ItemId"].isin(ratings["ItemId"])])
CPU times: user 13min 8s, sys: 23min 31s, total: 36min 39s
Wall time: 1min 57s
Content-based factorization model (explicit-feedback)
This is an intercepts-only version of the classical model, which estimates one parameter per user and one parameter per item, and as such produces a simple ranking of the items based on those parameters. It is intended for comparison purposes, and can be helpful to check that the recommendations for different users show some variability (e.g. setting too-large regularization values will tend to make all personalized recommended lists similar to each other).
%%time
from cmfrec import MostPopular
model_non_personalized = MostPopular(user_bias=True, implicit=False)
model_non_personalized.fit(ratings)
CPU times: user 304 ms, sys: 800 ms, total: 1.1 s
Wall time: 105 ms
Most-Popular recommendation model (explicit-feedback variant)
This section examines what each model would recommend to the user with ID 948.
This is the demographic information for the user:
user_side_info.loc[user_side_info["UserId"] == 948].T.where(lambda x: x > 0).dropna()
| | 947 |
|---|---|
UserId | 948 |
Gender_M | True |
Age_56 | True |
Occupation_programmer | True |
Region_Midwest | True |
These are the highest-rated movies from the user:
(
ratings
.loc[lambda x: x["UserId"] == 948]
.sort_values("Rating", ascending=False)
.assign(Movie=lambda x: x["ItemId"].map(movie_id_to_title))
.head(10)
)
| | UserId | ItemId | Rating | Movie |
|---|---|---|---|---|
146721 | 948 | 3789 | 5 | Pawnbroker, The (1965) |
146889 | 948 | 2665 | 5 | Earth Vs. the Flying Saucers (1956) |
146871 | 948 | 2640 | 5 | Superman (1978) |
146872 | 948 | 2641 | 5 | Superman II (1980) |
147105 | 948 | 2761 | 5 | Iron Giant, The (1999) |
146875 | 948 | 2644 | 5 | Dracula (1931) |
146878 | 948 | 2648 | 5 | Frankenstein (1931) |
147097 | 948 | 1019 | 5 | 20,000 Leagues Under the Sea (1954) |
146881 | 948 | 2657 | 5 | Rocky Horror Picture Show, The (1975) |
146884 | 948 | 2660 | 5 | Thing From Another World, The (1951) |
These are the lowest-rated movies from the user:
(
ratings
.loc[lambda x: x["UserId"] == 948]
.sort_values("Rating", ascending=True)
.assign(Movie=lambda x: x["ItemId"].map(movie_id_to_title))
.head(10)
)
| | UserId | ItemId | Rating | Movie |
|---|---|---|---|---|
147237 | 948 | 1247 | 1 | Graduate, The (1967) |
147173 | 948 | 70 | 1 | From Dusk Till Dawn (1996) |
146768 | 948 | 748 | 1 | Arrival, The (1996) |
147135 | 948 | 45 | 1 | To Die For (1995) |
146812 | 948 | 780 | 1 | Independence Day (ID4) (1996) |
146813 | 948 | 788 | 1 | Nutty Professor, The (1996) |
146814 | 948 | 3201 | 1 | Five Easy Pieces (1970) |
147118 | 948 | 356 | 1 | Forrest Gump (1994) |
146821 | 948 | 3070 | 1 | Adventures of Buckaroo Bonzai Across the 8th D... |
146822 | 948 | 1617 | 1 | L.A. Confidential (1997) |
Now producing recommendations from each model:
### Will exclude already-seen movies
exclude = ratings["ItemId"].loc[ratings["UserId"] == 948]
exclude_cb = exclude.loc[lambda x: x.isin(item_sideinfo_pca["ItemId"])]
### Recommended lists with those excluded
recommended_non_personalized = model_non_personalized.topN(user=948, n=10, exclude=exclude)
recommended_no_side_info = model_no_sideinfo.topN(user=948, n=10, exclude=exclude)
recommended_with_side_info = model_with_sideinfo.topN(user=948, n=10, exclude=exclude)
recommended_content_based = model_content_based.topN(user=948, n=10, exclude=exclude_cb)
recommended_non_personalized
array([2019, 318, 2905, 745, 1148, 1212, 3435, 923, 720, 3307])
A handy function to print top-N recommended lists with associated information:
from collections import defaultdict
# aggregate per-movie statistics: mean rating and number of ratings
avg_movie_rating = defaultdict(float, ratings.groupby('ItemId')['Rating'].mean().to_dict())
num_ratings_per_movie = defaultdict(int, ratings.groupby('ItemId')['Rating'].count().to_dict())
# function to print recommended lists more nicely
def print_reclist(reclist):
    for rank, item_id in enumerate(reclist, start=1):
        print("%d) - %s - Average Rating: %s - Number of ratings: %d" %
              (rank, movie_id_to_title[item_id],
               np.round(avg_movie_rating[item_id], 2),
               num_ratings_per_movie[item_id]))
print("Recommended from non-personalized model")
print_reclist(recommended_non_personalized)
print("----------------")
print("Recommended from ratings-only model")
print_reclist(recommended_no_side_info)
print("----------------")
print("Recommended from attributes-only model")
print_reclist(recommended_content_based)
print("----------------")
print("Recommended from hybrid model")
print_reclist(recommended_with_side_info)
Recommended from non-personalized model
1) - Seven Samurai (The Magnificent Seven) (Shichinin no samurai) (1954) - Average Rating: 4.56 - Number of ratings: 628
2) - Shawshank Redemption, The (1994) - Average Rating: 4.55 - Number of ratings: 2227
3) - Sanjuro (1962) - Average Rating: 4.61 - Number of ratings: 69
4) - Close Shave, A (1995) - Average Rating: 4.52 - Number of ratings: 657
5) - Wrong Trousers, The (1993) - Average Rating: 4.51 - Number of ratings: 882
6) - Third Man, The (1949) - Average Rating: 4.45 - Number of ratings: 480
7) - Double Indemnity (1944) - Average Rating: 4.42 - Number of ratings: 551
8) - Citizen Kane (1941) - Average Rating: 4.39 - Number of ratings: 1116
9) - Wallace & Gromit: The Best of Aardman Animation (1996) - Average Rating: 4.43 - Number of ratings: 438
10) - City Lights (1931) - Average Rating: 4.39 - Number of ratings: 271
----------------
Recommended from ratings-only model
1) - Arsenic and Old Lace (1944) - Average Rating: 4.17 - Number of ratings: 672
2) - Beauty and the Beast (1991) - Average Rating: 3.89 - Number of ratings: 1060
3) - Nosferatu (Nosferatu, eine Symphonie des Grauens) (1922) - Average Rating: 3.99 - Number of ratings: 238
4) - It's a Wonderful Life (1946) - Average Rating: 4.3 - Number of ratings: 729
5) - Invasion of the Body Snatchers (1956) - Average Rating: 3.91 - Number of ratings: 628
6) - Hurricane, The (1999) - Average Rating: 3.85 - Number of ratings: 509
7) - Contender, The (2000) - Average Rating: 3.78 - Number of ratings: 388
8) - Wolf Man, The (1941) - Average Rating: 3.76 - Number of ratings: 134
9) - Apostle, The (1997) - Average Rating: 3.73 - Number of ratings: 471
10) - Mummy, The (1932) - Average Rating: 3.54 - Number of ratings: 162
----------------
Recommended from attributes-only model
1) - Shawshank Redemption, The (1994) - Average Rating: 4.55 - Number of ratings: 2227
2) - Third Man, The (1949) - Average Rating: 4.45 - Number of ratings: 480
3) - City Lights (1931) - Average Rating: 4.39 - Number of ratings: 271
4) - Jean de Florette (1986) - Average Rating: 4.32 - Number of ratings: 216
5) - It Happened One Night (1934) - Average Rating: 4.28 - Number of ratings: 374
6) - Central Station (Central do Brasil) (1998) - Average Rating: 4.28 - Number of ratings: 215
7) - Man Who Would Be King, The (1975) - Average Rating: 4.13 - Number of ratings: 310
8) - Best Years of Our Lives, The (1946) - Average Rating: 4.12 - Number of ratings: 236
9) - Double Indemnity (1944) - Average Rating: 4.42 - Number of ratings: 551
10) - In the Heat of the Night (1967) - Average Rating: 4.13 - Number of ratings: 348
----------------
Recommended from hybrid model
1) - It's a Wonderful Life (1946) - Average Rating: 4.3 - Number of ratings: 729
2) - Nosferatu (Nosferatu, eine Symphonie des Grauens) (1922) - Average Rating: 3.99 - Number of ratings: 238
3) - Beauty and the Beast (1991) - Average Rating: 3.89 - Number of ratings: 1060
4) - Arsenic and Old Lace (1944) - Average Rating: 4.17 - Number of ratings: 672
5) - Invasion of the Body Snatchers (1956) - Average Rating: 3.91 - Number of ratings: 628
6) - Mr. Smith Goes to Washington (1939) - Average Rating: 4.24 - Number of ratings: 383
7) - Life Is Beautiful (La Vita è bella) (1997) - Average Rating: 4.33 - Number of ratings: 1152
8) - Gold Rush, The (1925) - Average Rating: 4.19 - Number of ratings: 275
9) - Bride of Frankenstein (1935) - Average Rating: 3.91 - Number of ratings: 216
10) - Seven Samurai (The Magnificent Seven) (Shichinin no samurai) (1954) - Average Rating: 4.56 - Number of ratings: 628
(As can be seen, the personalized models tend to recommend very old movies - which is what this user seems to rate highly - with no overlap with the non-personalized list.)
The models here offer many tuneable parameters which can be tweaked in order to alter the recommended lists in some way. For example, setting a low regularization to the item biases will tend to favor movies with a high average rating regardless of the number of ratings, while setting a high regularization for the factorizing matrices will tend to produce the same recommendations for all users.
### Less personalized (underfitted)
reclist = \
CMF(lambda_=[1e+3, 1e+1, 1e+2, 1e+2, 1e+2, 1e+2])\
.fit(ratings)\
.topN(user=948, n=10, exclude=exclude)
print_reclist(reclist)
1) - Seven Samurai (The Magnificent Seven) (Shichinin no samurai) (1954) - Average Rating: 4.56 - Number of ratings: 628
2) - Shawshank Redemption, The (1994) - Average Rating: 4.55 - Number of ratings: 2227
3) - Close Shave, A (1995) - Average Rating: 4.52 - Number of ratings: 657
4) - Wrong Trousers, The (1993) - Average Rating: 4.51 - Number of ratings: 882
5) - Sanjuro (1962) - Average Rating: 4.61 - Number of ratings: 69
6) - Third Man, The (1949) - Average Rating: 4.45 - Number of ratings: 480
7) - Double Indemnity (1944) - Average Rating: 4.42 - Number of ratings: 551
8) - Wallace & Gromit: The Best of Aardman Animation (1996) - Average Rating: 4.43 - Number of ratings: 438
9) - Citizen Kane (1941) - Average Rating: 4.39 - Number of ratings: 1116
10) - City Lights (1931) - Average Rating: 4.39 - Number of ratings: 271
### More personalized (overfitted)
reclist = \
CMF(lambda_=[0., 1e+3, 1e-1, 1e-1, 1e-1, 1e-1])\
.fit(ratings)\
.topN(user=948, n=10, exclude=exclude)
print_reclist(reclist)
1) - Plan 9 from Outer Space (1958) - Average Rating: 2.63 - Number of ratings: 249
2) - East-West (Est-ouest) (1999) - Average Rating: 3.77 - Number of ratings: 103
3) - Rugrats Movie, The (1998) - Average Rating: 2.78 - Number of ratings: 141
4) - Taste of Cherry (1997) - Average Rating: 3.53 - Number of ratings: 32
5) - Julien Donkey-Boy (1999) - Average Rating: 3.33 - Number of ratings: 12
6) - Original Kings of Comedy, The (2000) - Average Rating: 3.23 - Number of ratings: 147
7) - Maya Lin: A Strong Clear Vision (1994) - Average Rating: 4.1 - Number of ratings: 59
8) - Double Life of Veronique, The (La Double Vie de Véronique) (1991) - Average Rating: 3.94 - Number of ratings: 129
9) - Crash (1996) - Average Rating: 2.76 - Number of ratings: 141
10) - Faraway, So Close (In Weiter Ferne, So Nah!) (1993) - Average Rating: 3.71 - Number of ratings: 66
The collective model can also have variations such as weighting each factorization differently, or setting components (factors) that are not to be shared between factorizations (not shown).
### More oriented towards content-based than towards collaborative-filtering
reclist = \
CMF(k=40, w_main=0.5, w_item=3., w_user=5., lambda_=1e+1)\
.fit(ratings, U=user_side_info, I=item_sideinfo_pca)\
.topN(user=948, n=10, exclude=exclude)
print_reclist(reclist)
1) - Wrong Trousers, The (1993) - Average Rating: 4.51 - Number of ratings: 882
2) - Willy Wonka and the Chocolate Factory (1971) - Average Rating: 3.86 - Number of ratings: 1313
3) - Seven Samurai (The Magnificent Seven) (Shichinin no samurai) (1954) - Average Rating: 4.56 - Number of ratings: 628
4) - It's a Wonderful Life (1946) - Average Rating: 4.3 - Number of ratings: 729
5) - Third Man, The (1949) - Average Rating: 4.45 - Number of ratings: 480
6) - Close Shave, A (1995) - Average Rating: 4.52 - Number of ratings: 657
7) - Grand Day Out, A (1992) - Average Rating: 4.36 - Number of ratings: 473
8) - Citizen Kane (1941) - Average Rating: 4.39 - Number of ratings: 1116
9) - Singin' in the Rain (1952) - Average Rating: 4.28 - Number of ratings: 751
10) - Rebecca (1940) - Average Rating: 4.2 - Number of ratings: 386
Models can also be used to make recommendations for new users based on ratings and/or side information.
(Be aware that, due to the nature of computer floating point arithmetic, there might be some slight discrepancies between the results from `topN` and `topN_warm`.)
print_reclist(model_with_sideinfo.topN_warm(X_col=ratings["ItemId"].loc[ratings["UserId"] == 948],
X_val=ratings["Rating"].loc[ratings["UserId"] == 948],
exclude=exclude))
1) - It's a Wonderful Life (1946) - Average Rating: 4.3 - Number of ratings: 729
2) - Nosferatu (Nosferatu, eine Symphonie des Grauens) (1922) - Average Rating: 3.99 - Number of ratings: 238
3) - Beauty and the Beast (1991) - Average Rating: 3.89 - Number of ratings: 1060
4) - Arsenic and Old Lace (1944) - Average Rating: 4.17 - Number of ratings: 672
5) - Invasion of the Body Snatchers (1956) - Average Rating: 3.91 - Number of ratings: 628
6) - Mr. Smith Goes to Washington (1939) - Average Rating: 4.24 - Number of ratings: 383
7) - Life Is Beautiful (La Vita è bella) (1997) - Average Rating: 4.33 - Number of ratings: 1152
8) - Gold Rush, The (1925) - Average Rating: 4.19 - Number of ratings: 275
9) - Bride of Frankenstein (1935) - Average Rating: 3.91 - Number of ratings: 216
10) - Seven Samurai (The Magnificent Seven) (Shichinin no samurai) (1954) - Average Rating: 4.56 - Number of ratings: 628
print_reclist(model_with_sideinfo.topN_warm(X_col=ratings["ItemId"].loc[ratings["UserId"] == 948],
X_val=ratings["Rating"].loc[ratings["UserId"] == 948],
U=user_side_info.loc[lambda x: x["UserId"] == 948],
exclude=exclude))
1) - It's a Wonderful Life (1946) - Average Rating: 4.3 - Number of ratings: 729
2) - Nosferatu (Nosferatu, eine Symphonie des Grauens) (1922) - Average Rating: 3.99 - Number of ratings: 238
3) - Beauty and the Beast (1991) - Average Rating: 3.89 - Number of ratings: 1060
4) - Arsenic and Old Lace (1944) - Average Rating: 4.17 - Number of ratings: 672
5) - Invasion of the Body Snatchers (1956) - Average Rating: 3.91 - Number of ratings: 628
6) - Mr. Smith Goes to Washington (1939) - Average Rating: 4.24 - Number of ratings: 383
7) - Life Is Beautiful (La Vita è bella) (1997) - Average Rating: 4.33 - Number of ratings: 1152
8) - Gold Rush, The (1925) - Average Rating: 4.19 - Number of ratings: 275
9) - Bride of Frankenstein (1935) - Average Rating: 3.91 - Number of ratings: 216
10) - Seven Samurai (The Magnificent Seven) (Shichinin no samurai) (1954) - Average Rating: 4.56 - Number of ratings: 628
print_reclist(
model_with_sideinfo.topN_cold(
U=user_side_info.loc[lambda x: x["UserId"] == 948].drop("UserId", axis=1),
exclude=exclude
)
)
1) - Shawshank Redemption, The (1994) - Average Rating: 4.55 - Number of ratings: 2227
2) - Seven Samurai (The Magnificent Seven) (Shichinin no samurai) (1954) - Average Rating: 4.56 - Number of ratings: 628
3) - Wrong Trousers, The (1993) - Average Rating: 4.51 - Number of ratings: 882
4) - Close Shave, A (1995) - Average Rating: 4.52 - Number of ratings: 657
5) - Sanjuro (1962) - Average Rating: 4.61 - Number of ratings: 69
6) - Wallace & Gromit: The Best of Aardman Animation (1996) - Average Rating: 4.43 - Number of ratings: 438
7) - Double Indemnity (1944) - Average Rating: 4.42 - Number of ratings: 551
8) - Third Man, The (1949) - Average Rating: 4.45 - Number of ratings: 480
9) - Life Is Beautiful (La Vita è bella) (1997) - Average Rating: 4.33 - Number of ratings: 1152
10) - Grand Day Out, A (1992) - Average Rating: 4.36 - Number of ratings: 473
This last list is very similar to the non-personalized one - that is, the user side information had very little leverage in the model, at least for this user. In this regard, the content-based model tends to be better at cold-start recommendations:
print_reclist(
model_content_based.topN_cold(
U=user_side_info.loc[lambda x: x["UserId"] == 948].drop("UserId", axis=1),
exclude=exclude_cb
)
)
1) - Shawshank Redemption, The (1994) - Average Rating: 4.55 - Number of ratings: 2227
2) - Third Man, The (1949) - Average Rating: 4.45 - Number of ratings: 480
3) - City Lights (1931) - Average Rating: 4.39 - Number of ratings: 271
4) - Jean de Florette (1986) - Average Rating: 4.32 - Number of ratings: 216
5) - It Happened One Night (1934) - Average Rating: 4.28 - Number of ratings: 374
6) - Central Station (Central do Brasil) (1998) - Average Rating: 4.28 - Number of ratings: 215
7) - Man Who Would Be King, The (1975) - Average Rating: 4.13 - Number of ratings: 310
8) - Best Years of Our Lives, The (1946) - Average Rating: 4.12 - Number of ratings: 236
9) - Double Indemnity (1944) - Average Rating: 4.42 - Number of ratings: 551
10) - In the Heat of the Night (1967) - Average Rating: 4.13 - Number of ratings: 348
(For this use case, it would also be better to add item biases to the content-based model, though.)
This section shows usage of the `predict` family of functions for getting the predicted rating for a given user and item, in order to calculate evaluation metrics such as RMSE and tune model parameters.
Note that, while widely used in earlier literature, RMSE might not provide a good overview of the ranking of items (which is what matters for recommendations), and it's recommended to also evaluate other metrics such as NDCG@K, P@K, correlations, etc.
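For reference, here is a minimal NumPy sketch of P@K and binary-relevance NDCG@K (the helper names and the relevance sets below are made up for illustration, not part of cmfrec):

```python
import numpy as np

def precision_at_k(ranked_items, relevant_set, k=10):
    # fraction of the top-k recommended items that are relevant
    topk = ranked_items[:k]
    return sum(item in relevant_set for item in topk) / k

def ndcg_at_k(ranked_items, relevant_set, k=10):
    # binary-relevance NDCG: discounted gain normalized by the ideal ordering
    topk = ranked_items[:k]
    gains = np.array([1.0 if item in relevant_set else 0.0 for item in topk])
    discounts = 1.0 / np.log2(np.arange(2, k + 2))
    dcg = (gains * discounts).sum()
    n_rel = min(len(relevant_set), k)
    idcg = discounts[:n_rel].sum()
    return dcg / idcg if idcg > 0 else 0.0

# toy usage: a recommended list and the user's held-out relevant items
reclist = [10, 4, 7, 1, 9]
relevant = {4, 9, 30}
print(precision_at_k(reclist, relevant, k=5))  # → 0.4
print(ndcg_at_k(reclist, relevant, k=5))
```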
Also be aware that there is a different class `CMF_implicit` which might perform better at implicit-feedback metrics such as P@K.
When making recommendations, there's quite a difference between making predictions based on ratings data or based on side information alone. In this regard, predictions can be classified into 4 cases:

1. Users and items that were both in the training data.
2. Users that were in the training data, but new items having only side information.
3. Items that were in the training data, but new users having only side information.
4. New users and new items, both having only side information.
(One could sub-divide further according to users/items which were present in the training data with only ratings or with only side information, but this notebook will not go into that level of detail)
The classic model is only able to make predictions for the first case, while the collective model can leverage the side information in order to make predictions for (2) and (3). In theory, it could also do (4), but this is not recommended and the API does not provide such functionality.
The content-based model, on the other hand, is an ideal approach for case (4). The package also provides a different model (the "offsets" model - see references at the bottom) aimed at improving cases (2) and (3) when there is side information about only user or only about items at the expense of case (1), but such models are not shown in this notebook.
Producing a training and test set split of the ratings and side information:
from sklearn.model_selection import train_test_split
users_train, users_test = train_test_split(ratings["UserId"].unique(), test_size=0.2, random_state=1)
items_train, items_test = train_test_split(ratings["ItemId"].unique(), test_size=0.2, random_state=2)
ratings_train, ratings_test1 = train_test_split(ratings.loc[ratings["UserId"].isin(users_train) &
ratings["ItemId"].isin(items_train)],
test_size=0.2, random_state=123)
users_train = ratings_train["UserId"].unique()
items_train = ratings_train["ItemId"].unique()
ratings_test1 = ratings_test1.loc[ratings_test1["UserId"].isin(users_train) &
ratings_test1["ItemId"].isin(items_train)]
user_attr_train = user_side_info.loc[lambda x: x["UserId"].isin(users_train)]
item_attr_train = item_sideinfo_pca.loc[lambda x: x["ItemId"].isin(items_train)]
ratings_test2 = ratings.loc[ratings["UserId"].isin(users_train) &
~ratings["ItemId"].isin(items_train) &
ratings["ItemId"].isin(item_sideinfo_pca["ItemId"])]
ratings_test3 = ratings.loc[~ratings["UserId"].isin(users_train) &
ratings["ItemId"].isin(items_train) &
ratings["UserId"].isin(user_side_info["UserId"]) &
ratings["ItemId"].isin(item_sideinfo_pca["ItemId"])]
ratings_test4 = ratings.loc[~ratings["UserId"].isin(users_train) &
~ratings["ItemId"].isin(items_train) &
ratings["UserId"].isin(user_side_info["UserId"]) &
ratings["ItemId"].isin(item_sideinfo_pca["ItemId"])]
print("Number of ratings in training data: %d" % ratings_train.shape[0])
print("Number of ratings in test data type (1): %d" % ratings_test1.shape[0])
print("Number of ratings in test data type (2): %d" % ratings_test2.shape[0])
print("Number of ratings in test data type (3): %d" % ratings_test3.shape[0])
print("Number of ratings in test data type (4): %d" % ratings_test4.shape[0])
Number of ratings in training data: 512972
Number of ratings in test data type (1): 128221
Number of ratings in test data type (2): 154507
Number of ratings in test data type (3): 139009
Number of ratings in test data type (4): 36774
### Handy usage of Pandas indexing
user_attr_test = user_side_info.set_index("UserId")
item_attr_test = item_sideinfo_pca.set_index("ItemId")
Re-fitting earlier models to the training subset of the earlier data:
m_classic = CMF(k=40)\
.fit(ratings_train)
m_collective = CMF(k=40, w_main=0.5, w_user=0.5, w_item=0.5)\
.fit(X=ratings_train,
U=user_attr_train,
I=item_attr_train)
m_contentbased = ContentBased(k=40, user_bias=False, item_bias=False)\
.fit(X=ratings_train.loc[ratings_train["UserId"].isin(user_attr_train["UserId"]) &
ratings_train["ItemId"].isin(item_attr_train["ItemId"])],
U=user_attr_train,
I=item_attr_train)
m_mostpopular = MostPopular(user_bias=True)\
.fit(X=ratings_train)
RMSE for users and items which were both in the training data:
from sklearn.metrics import mean_squared_error
pred_nonpersonalized = m_mostpopular.predict(ratings_test1["UserId"], ratings_test1["ItemId"])
print("RMSE type 1 non-personalized model: %.3f [rho: %.3f]" %
      (np.sqrt(mean_squared_error(ratings_test1["Rating"],
                                  pred_nonpersonalized,
                                  squared=True)),
       np.corrcoef(ratings_test1["Rating"], pred_nonpersonalized)[0,1]))
pred_ratingsonly = m_classic.predict(ratings_test1["UserId"], ratings_test1["ItemId"])
print("RMSE type 1 ratings-only model: %.3f [rho: %.3f]" %
(np.sqrt(mean_squared_error(ratings_test1["Rating"],
pred_ratingsonly,
squared=True)),
np.corrcoef(ratings_test1["Rating"], pred_ratingsonly)[0,1]))
pred_hybrid = m_collective.predict(ratings_test1["UserId"], ratings_test1["ItemId"])
print("RMSE type 1 hybrid model: %.3f [rho: %.3f]" %
(np.sqrt(mean_squared_error(ratings_test1["Rating"],
pred_hybrid,
squared=True)),
np.corrcoef(ratings_test1["Rating"], pred_hybrid)[0,1]))
test_cb = ratings_test1.loc[ratings_test1["UserId"].isin(user_attr_train["UserId"]) &
ratings_test1["ItemId"].isin(item_attr_train["ItemId"])]
pred_contentbased = m_contentbased.predict(test_cb["UserId"], test_cb["ItemId"])
print("RMSE type 1 content-based model: %.3f [rho: %.3f]" %
(np.sqrt(mean_squared_error(test_cb["Rating"],
pred_contentbased,
squared=True)),
np.corrcoef(test_cb["Rating"], pred_contentbased)[0,1]))
RMSE type 1 non-personalized model: 0.911 [rho: 0.580]
RMSE type 1 ratings-only model: 0.896 [rho: 0.603]
RMSE type 1 hybrid model: 0.861 [rho: 0.640]
RMSE type 1 content-based model: 0.975 [rho: 0.487]
RMSE for users which were in the training data but items which were not:
pred_hybrid = m_collective.predict_new(ratings_test2["UserId"],
item_attr_test.loc[ratings_test2["ItemId"]])
print("RMSE type 2 hybrid model: %.3f [rho: %.3f]" %
(np.sqrt(mean_squared_error(ratings_test2["Rating"],
pred_hybrid,
squared=True)),
np.corrcoef(ratings_test2["Rating"], pred_hybrid)[0,1]))
pred_contentbased = m_contentbased.predict_new(user_attr_test.loc[ratings_test2["UserId"]],
item_attr_test.loc[ratings_test2["ItemId"]])
print("RMSE type 2 content-based model: %.3f [rho: %.3f]" %
(np.sqrt(mean_squared_error(ratings_test2["Rating"],
pred_contentbased,
squared=True)),
np.corrcoef(ratings_test2["Rating"], pred_contentbased)[0,1]))
RMSE type 2 hybrid model: 1.025 [rho: 0.424]
RMSE type 2 content-based model: 0.977 [rho: 0.486]
RMSE for items which were in the training data but users which were not:
pred_hybrid = m_collective.predict_cold_multiple(item=ratings_test3["ItemId"],
U=user_attr_test.loc[ratings_test3["UserId"]])
print("RMSE type 3 hybrid model: %.3f [rho: %.3f]" %
(np.sqrt(mean_squared_error(ratings_test3["Rating"],
pred_hybrid,
squared=True)),
np.corrcoef(ratings_test3["Rating"], pred_hybrid)[0,1]))
pred_contentbased = m_contentbased.predict_new(user_attr_test.loc[ratings_test3["UserId"]],
item_attr_test.loc[ratings_test3["ItemId"]])
print("RMSE type 3 content-based model: %.3f [rho: %.3f]" %
(np.sqrt(mean_squared_error(ratings_test3["Rating"],
pred_contentbased,
squared=True)),
np.corrcoef(ratings_test3["Rating"], pred_contentbased)[0,1]))
RMSE type 3 hybrid model: 0.988 [rho: 0.470]
RMSE type 3 content-based model: 0.981 [rho: 0.468]
RMSE for users and items which were not in the training data:
pred_contentbased = m_contentbased.predict_new(user_attr_test.loc[ratings_test4["UserId"]],
item_attr_test.loc[ratings_test4["ItemId"]])
print("RMSE type 4 content-based model: %.3f [rho: %.3f]" %
(np.sqrt(mean_squared_error(ratings_test4["Rating"],
pred_contentbased,
squared=True)),
np.corrcoef(ratings_test4["Rating"], pred_contentbased)[0,1]))
RMSE type 4 content-based model: 0.986 [rho: 0.464]
In addition to external side information about users and items, one can also generate features from the same $\mathbf{X}$ data by considering which movies each user did and did not rate - these are taken as binary features, with the zeros also being counted towards the loss/objective function.
The package offers an easy option for automatically generating these features on-the-fly, which can then be used in addition to the external features. The full model now becomes: $$ \mathbf{X} \approx \mathbf{A} \mathbf{B}^T + \mu + \mathbf{b}_A + \mathbf{b}_B $$ $$ \mathbf{I}_x \approx \mathbf{A} \mathbf{B}_i^T, \:\: \mathbf{I}_x^T \approx \mathbf{B} \mathbf{A}_i^T $$ $$ \mathbf{U} \approx \mathbf{A} \mathbf{C}^T + \mathbf{\mu}_U ,\:\:\:\: \mathbf{I} \approx \mathbf{B} \mathbf{D}^T + \mathbf{\mu}_I $$
Where:
- $\mathbf{I}_x$ is a binary indicator matrix with ones at the entries of $\mathbf{X}$ that are observed and zeros elsewhere.
- $\mathbf{A}_i$ and $\mathbf{B}_i$ are additional factor matrices used only for fitting these implicit features.
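To make $\mathbf{I}_x$ concrete, the sketch below builds such a binary indicator matrix from a toy ratings table in the same `UserId`/`ItemId`/`Rating` format as the data above (cmfrec generates this internally when `add_implicit_features=True`; the toy values here are made up):

```python
import numpy as np, pandas as pd
from scipy.sparse import coo_matrix

# Toy ratings table in the same format as 'ratings' above (made-up values)
ratings_toy = pd.DataFrame({
    "UserId": [1, 1, 2, 3],
    "ItemId": [10, 20, 10, 30],
    "Rating": [5, 3, 4, 2],
})

# Map the arbitrary IDs to consecutive row/column positions
users = ratings_toy["UserId"].astype("category")
items = ratings_toy["ItemId"].astype("category")

# I_x: 1 where a rating exists, 0 elsewhere (the zeros also enter the loss)
Ix = coo_matrix(
    (np.ones(len(ratings_toy)),
     (users.cat.codes, items.cat.codes))
).toarray()
```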
While in the earlier models every user and item had the same regularization applied to its factors, it's also possible to make the regularization scale with the number of ratings of each user and each item, which tends to produce better models at the expense of more hyperparameter tuning.
In addition, the package offers an ALS-Cholesky solver, which is slower than the default conjugate-gradient method but tends to give better end results. This section will now use the implicit features and the Cholesky solver, and compare the new models against the previous ones.
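To illustrate what the Cholesky solver does, the sketch below solves one ALS sub-problem: with the item factors $\mathbf{B}$ held fixed, a user's factor vector is the solution of the regularized normal equations $(\mathbf{B}^T \mathbf{B} + \lambda \mathbf{I}) \mathbf{a} = \mathbf{B}^T \mathbf{x}$, solved exactly via a Cholesky factorization rather than by approximate conjugate-gradient iterations. This is a simplified stand-in for cmfrec's internals, not its actual implementation, and the dimensions and data are made up:

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

rng = np.random.default_rng(123)
k, n_items = 5, 50
B = rng.standard_normal((n_items, k))   # item factors (held fixed)
x = rng.standard_normal(n_items)        # one user's (dense) ratings row
lam = 0.05

# With scale_lam=True the penalty would scale with this user's rating
# count (per the documentation); here all n_items entries are observed
lam_eff = lam * n_items

# Regularized normal equations: (B^T B + lam_eff*I) a = B^T x
lhs = B.T @ B + lam_eff * np.eye(k)
rhs = B.T @ x
a = cho_solve(cho_factor(lhs), rhs)     # exact solve via Cholesky

# Same solution as a generic dense solver, up to floating-point error
assert np.allclose(a, np.linalg.solve(lhs, rhs))
```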
m_implicit = CMF(k=40, add_implicit_features=True,
lambda_=0.05, scale_lam=True,
w_main=0.7, w_implicit=1., use_cg=False)\
.fit(X=ratings_train)
m_implicit_plus_collective = \
CMF(k=40, add_implicit_features=True, use_cg=False,
lambda_=0.03, scale_lam=True,
w_main=0.5, w_user=0.3, w_item=0.3, w_implicit=1.)\
.fit(X=ratings_train,
U=user_attr_train,
I=item_attr_train)
pred_ratingsonly = m_classic.predict(ratings_test1["UserId"], ratings_test1["ItemId"])
print("RMSE type 1 ratings-only model: %.3f [rho: %.3f]" %
(np.sqrt(mean_squared_error(ratings_test1["Rating"],
pred_ratingsonly,
squared=True)),
np.corrcoef(ratings_test1["Rating"], pred_ratingsonly)[0,1]))
pred_implicit = m_implicit.predict(ratings_test1["UserId"], ratings_test1["ItemId"])
print("RMSE type 1 ratings + implicit + dyn + Chol: %.3f [rho: %.3f]" %
(np.sqrt(mean_squared_error(ratings_test1["Rating"],
pred_implicit,
squared=True)),
np.corrcoef(ratings_test1["Rating"], pred_implicit)[0,1]))
pred_hybrid = m_collective.predict(ratings_test1["UserId"], ratings_test1["ItemId"])
print("RMSE type 1 hybrid model: %.3f [rho: %.3f]" %
(np.sqrt(mean_squared_error(ratings_test1["Rating"],
pred_hybrid,
squared=True)),
np.corrcoef(ratings_test1["Rating"], pred_hybrid)[0,1]))
pred_implicit_plus_collective = m_implicit_plus_collective.\
predict(ratings_test1["UserId"], ratings_test1["ItemId"])
print("RMSE type 1 hybrid + implicit + dyn + Chol: %.3f [rho: %.3f]" %
(np.sqrt(mean_squared_error(ratings_test1["Rating"],
pred_implicit_plus_collective,
squared=True)),
np.corrcoef(ratings_test1["Rating"], pred_implicit_plus_collective)[0,1]))
RMSE type 1 ratings-only model: 0.896 [rho: 0.603]
RMSE type 1 ratings + implicit + dyn + Chol: 0.853 [rho: 0.646]
RMSE type 1 hybrid model: 0.861 [rho: 0.640]
RMSE type 1 hybrid + implicit + dyn + Chol: 0.846 [rho: 0.654]
But note that, while dynamic regularization and the Cholesky method usually lead to improvements, the newly-added implicit features oftentimes result in worse cold-start predictions:
pred_hybrid = m_collective.predict_new(ratings_test2["UserId"],
item_attr_test.loc[ratings_test2["ItemId"]])
print("RMSE type 2 hybrid model: %.3f [rho: %.3f]" %
(np.sqrt(mean_squared_error(ratings_test2["Rating"],
pred_hybrid,
squared=True)),
np.corrcoef(ratings_test2["Rating"], pred_hybrid)[0,1]))
pred_implicit_plus_collective = \
m_implicit_plus_collective\
.predict_new(ratings_test2["UserId"],
item_attr_test.loc[ratings_test2["ItemId"]])
print("RMSE type 2 hybrid model + implicit + dyn + Chol: %.3f [rho: %.3f] (might get worse)" %
(np.sqrt(mean_squared_error(ratings_test2["Rating"],
pred_implicit_plus_collective,
squared=True)),
np.corrcoef(ratings_test2["Rating"], pred_implicit_plus_collective)[0,1]))
pred_contentbased = m_contentbased.predict_new(user_attr_test.loc[ratings_test2["UserId"]],
item_attr_test.loc[ratings_test2["ItemId"]])
print("RMSE type 2 content-based model: %.3f [rho: %.3f]" %
(np.sqrt(mean_squared_error(ratings_test2["Rating"],
pred_contentbased,
squared=True)),
np.corrcoef(ratings_test2["Rating"], pred_contentbased)[0,1]))
RMSE type 2 hybrid model: 1.025 [rho: 0.424]
RMSE type 2 hybrid model + implicit + dyn + Chol: 1.004 [rho: 0.480] (might get worse)
RMSE type 2 content-based model: 0.977 [rho: 0.486]
pred_hybrid = m_collective.predict_cold_multiple(item=ratings_test3["ItemId"],
U=user_attr_test.loc[ratings_test3["UserId"]])
print("RMSE type 3 hybrid model: %.3f [rho: %.3f]" %
(np.sqrt(mean_squared_error(ratings_test3["Rating"],
pred_hybrid,
squared=True)),
np.corrcoef(ratings_test3["Rating"], pred_hybrid)[0,1]))
pred_implicit_plus_collective = \
m_implicit_plus_collective\
.predict_cold_multiple(item=ratings_test3["ItemId"],
U=user_attr_test.loc[ratings_test3["UserId"]])
print("RMSE type 3 hybrid model + implicit + dyn + Chol: %.3f [rho: %.3f] (got worse)" %
(np.sqrt(mean_squared_error(ratings_test3["Rating"],
pred_implicit_plus_collective,
squared=True)),
np.corrcoef(ratings_test3["Rating"], pred_implicit_plus_collective)[0,1]))
pred_contentbased = m_contentbased.predict_new(user_attr_test.loc[ratings_test3["UserId"]],
item_attr_test.loc[ratings_test3["ItemId"]])
print("RMSE type 3 content-based model: %.3f [rho: %.3f]" %
(np.sqrt(mean_squared_error(ratings_test3["Rating"],
pred_contentbased,
squared=True)),
np.corrcoef(ratings_test3["Rating"], pred_contentbased)[0,1]))
RMSE type 3 hybrid model: 0.988 [rho: 0.470]
RMSE type 3 hybrid model + implicit + dyn + Chol: 1.013 [rho: 0.458] (got worse)
RMSE type 3 content-based model: 0.981 [rho: 0.468]