This notebook gets you started with LensKit for Python by running a brief nDCG-based evaluation of two recommendation algorithms.

This notebook is also available on Google Colaboratory and nbviewer.

We first import the LensKit components we need:

In [1]:

```
from lenskit.datasets import ML100K
from lenskit import batch, topn, util
from lenskit import crossfold as xf
from lenskit.algorithms import Recommender, als, item_knn as knn
```

And Pandas is very useful:

In [2]:

```
import pandas as pd
```

In [3]:

```
%matplotlib inline
```

We're going to use the ML-100K data set:

In [4]:

```
ml100k = ML100K('ml-100k')
ratings = ml100k.ratings
ratings.head()
```

Out[4]:

Let's set up two algorithms to compare: item-item k-NN collaborative filtering (with 20 neighbors) and biased matrix factorization trained with ALS (with 50 features):

In [5]:

```
algo_ii = knn.ItemItem(20)
algo_als = als.BiasedMF(50)
```

In LensKit, our evaluation proceeds in two steps:

- Generate recommendations
- Measure them

If memory is a concern, we can measure while generating, but we will not do that for now.
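If you do want to measure as you go, a minimal sketch (not part of this walkthrough; it reuses the same `RecListAnalysis` setup we introduce later, and the helper name is made up) might look like this:

```
def eval_and_measure(aname, algo, train, test):
    # hypothetical helper: score the recommendations immediately, so the
    # large per-partition recommendation frame never has to be kept around
    fittable = Recommender.adapt(util.clone(algo))
    fittable.fit(train)
    recs = batch.recommend(fittable, test.user.unique(), 100)
    rla = topn.RecListAnalysis()
    rla.add_metric(topn.ndcg)
    scores = rla.compute(recs, test)   # per-user nDCG for this partition
    scores['Algorithm'] = aname
    return scores
```

You would then collect and concatenate the per-partition score frames instead of the recommendation frames.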

We will first define a function to generate recommendations from one algorithm over a single partition of the data set. It will take an algorithm, a train set, and a test set, and return the recommendations.

**Note:** before fitting the algorithm, we clone it. Some algorithms misbehave when fit multiple times.

**Note 2:** our algorithms do not necessarily implement the `Recommender` interface, so we adapt them. This fills in a default candidate selector.

The function looks like this:

In [6]:

```
def eval(aname, algo, train, test):
    fittable = util.clone(algo)
    fittable = Recommender.adapt(fittable)
    fittable.fit(train)
    users = test.user.unique()
    # now we run the recommender
    recs = batch.recommend(fittable, users, 100)
    # add the algorithm name for analyzability
    recs['Algorithm'] = aname
    return recs
```

Now, we will loop over the data and the algorithms, and generate recommendations:

In [7]:

```
all_recs = []
test_data = []
for train, test in xf.partition_users(ratings[['user', 'item', 'rating']], 5, xf.SampleFrac(0.2)):
    test_data.append(test)
    all_recs.append(eval('ItemItem', algo_ii, train, test))
    all_recs.append(eval('ALS', algo_als, train, test))
```

With the results in place, we can concatenate them into a single data frame:

In [8]:

```
all_recs = pd.concat(all_recs, ignore_index=True)
all_recs.head()
```

Out[8]:

To compute our analysis, we also need to concatenate the test data into a single frame:

In [9]:

```
test_data = pd.concat(test_data, ignore_index=True)
```

We analyze our recommendation lists with a `RecListAnalysis`. It takes care of the hard work of making sure that the truth data (our test data) and the recommendations line up properly.

We do assume here that each user only appears once per algorithm. Since our crossfold method partitions users, this is fine.
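If you want to double-check that assumption, one quick sanity check (hypothetical, not part of the original notebook) is to confirm that no algorithm has more than one 100-item list's worth of rows for any user:

```
# each user should contribute at most one 100-item list per algorithm
list_sizes = all_recs.groupby(['Algorithm', 'user']).item.count()
assert list_sizes.le(100).all()
```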

In [10]:

```
rla = topn.RecListAnalysis()
rla.add_metric(topn.ndcg)
results = rla.compute(all_recs, test_data)
results.head()
```

Out[10]:

Now we have nDCG values!

In [11]:

```
results.groupby('Algorithm').ndcg.mean()
```

Out[11]:

In [12]:

```
results.groupby('Algorithm').ndcg.mean().plot.bar()
```

Out[12]:
