mlcourse.ai – Open Machine Learning Course

Author: Yury Kashnitskiy (@yorko). Edited by Sergey Kolchenko (@KolchenkoSergey). This material is subject to the terms and conditions of the Creative Commons CC BY-NC-SA 4.0 license. Free use is permitted for any non-commercial purpose.

Assignment #2. Spring 2019

Competition 2. Predicting Medium articles popularity with Ridge Regression
(beating baselines in the "Medium" competition)

In this competition, we predict the popularity of a Medium article based on features such as its content, title, author, tags, reading time, etc.

Prior to working on the assignment, you'd better check out the corresponding course material:

  1. Classification, Decision Trees and k Nearest Neighbors, the same as an interactive web-based Kaggle Kernel (basics of machine learning are covered here)
  2. Linear classification and regression in 5 parts
  3. You can also practice with demo assignments, which are simpler and already shared with solutions:
    • "Sarcasm detection with logistic regression": assignment + solution
    • "Linear regression as optimization": assignment (solution cannot be officially shared)
    • "Exploring OLS, Lasso and Random Forest in a regression task": assignment + solution
  4. Baseline with Ridge regression and "bag of words" for article content (see the corresponding Kernel)
  5. Other Kernels in this competition. You can share yours as well, but not high-performing ones (Public LB MAE should be > 1.5). Please don't spoil the competitive spirit.
  6. If that's still not enough, watch the two videos (on linear regression and regularization) from mlcourse.ai/video; the second one, on LTV prediction, is something you won't typically find in a MOOC: a real problem, real metrics, real data.

Your task:

  1. "Freeride". Come up with good features to beat the baselines "A2 baseline (10 credits)" (1.45082 Public LB MAE) and "A2 strong baseline (20 credits)" (1.41117 Public LB MAE). As names suggest, you'll get 10 more credits for beating the first one, and 10 more (20 in total) for beating the second one. You need to name your team (out of 1 person) in full accordance with the course rating (for newcomers: you need to name your team with your real full name). You can think of it as a part of the assignment.
  2. If you've beaten "A2 baseline (10 credits)" or done better, upload your solution as described in the course roadmap ("Kaggle Inclass Competition Medium"). For all baselines shown on the Public Leaderboard, it's OK to beat them on the Public LB. However, the 10 winners will be determined by the Private LB, which @yorko will reveal on March 11.

Deadline for A2: 2019 March 10, 20:59 GMT (London time)

How to get help

In ODS Slack (if you still don't have access, fill in the form mentioned on the mlcourse.ai main page), we have a channel #mlcourse_ai_news with announcements from the course team. You can discuss the course content freely in the #mlcourse_ai channel (we still have a huge Russian-speaking community, which has a separate channel, #mlcourse_ai_rus).

Please stick to the special thread for your questions:

Help each other without sharing actual code. Our TA Artem @datamove is there to help (only in the mentioned thread, do not write to him directly).

In [ ]:
import os
import json
from tqdm import tqdm_notebook
import numpy as np
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics import mean_absolute_error
from scipy.sparse import csr_matrix, hstack
from sklearn.linear_model import Ridge

The following code helps to strip all HTML tags from article content.

In [ ]:
from html.parser import HTMLParser

class MLStripper(HTMLParser):
    def __init__(self):
        super().__init__()
        self.reset()
        self.strict = False
        self.convert_charrefs = True
        self.fed = []

    def handle_data(self, d):
        # collect the text found between tags
        self.fed.append(d)

    def get_data(self):
        return ''.join(self.fed)

def strip_tags(html):
    s = MLStripper()
    s.feed(html)
    return s.get_data()
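
For a quick sanity check, here's a tiny usage example:

In [ ]:
strip_tags('<p>Hello, <b>Medium</b>!</p>')  # expected: 'Hello, Medium!'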

Supplementary function to read a JSON line without crashing on escape characters.

In [ ]:
def read_json_line(line=None):
    result = None
    try:
        result = json.loads(line)
    except json.JSONDecodeError as e:
        # Find the offending character index (reported by the parser):
        idx_to_replace = e.pos
        # Replace the offending character with a space and retry:
        new_line = list(line)
        new_line[idx_to_replace] = ' '
        new_line = ''.join(new_line)
        return read_json_line(line=new_line)
    return result

Extract the features content, published, title, and author, and write them to separate files for the train and test sets.

In [ ]:
def extract_features_and_write(path_to_data,
                               inp_filename, is_train=True):
    
    features = ['content', 'published', 'title', 'author']
    prefix = 'train' if is_train else 'test'
    feature_files = [open(os.path.join(path_to_data,
                                       '{}_{}.txt'.format(prefix, feat)),
                          'w', encoding='utf-8')
                     for feat in features]
    
    with open(os.path.join(path_to_data, inp_filename), 
              encoding='utf-8') as inp_json_file:

        for line in tqdm_notebook(inp_json_file):
            json_data = read_json_line(line)
            
            # Your code here
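            # One possible completion (a sketch, not the only answer): write
            # one line per article to each feature file. This assumes every
            # JSON line has 'content', 'title' and 'author' keys and that
            # 'published' is a dict like {'$date': '...'} -- verify against
            # your own data.
            for feat, feat_file in zip(features, feature_files):
                if feat == 'content':
                    value = strip_tags(json_data['content'])
                elif feat == 'published':
                    value = json_data['published']['$date']
                else:
                    # 'author' may be a nested dict; stringifying it is only
                    # a placeholder
                    value = str(json_data.get(feat, ''))
                # keep exactly one article per line
                feat_file.write(value.replace('\n', ' ').replace('\r', ' ')
                                + '\n')

    # close the per-feature files once the input is exhausted
    for feat_file in feature_files:
        feat_file.close()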
In [ ]:
PATH_TO_DATA = '../../data/kaggle_medium' # modify this if you need to
In [ ]:
extract_features_and_write(PATH_TO_DATA, 'train.json', is_train=True)
In [ ]:
extract_features_and_write(PATH_TO_DATA, 'test.json', is_train=False)

Add the following groups of features:

- Tf-Idf with article content (ngram_range=(1, 2), max_features=100000 but you can try adding more)
- Tf-Idf with article titles (ngram_range=(1, 2), max_features=100000 but you can try adding more)
- Time features: publication hour, whether it's morning, day, night, whether it's a weekend
- Bag of authors (i.e. One-Hot-Encoded author names)
In [ ]:
# Your code here
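
# One possible implementation (a sketch; the parameter values are starting
# points, not the answer). Assumes the per-feature files written above
# exist in PATH_TO_DATA, one article per line.
from sklearn.preprocessing import OneHotEncoder

def read_lines(path):
    with open(path, encoding='utf-8') as f:
        return [line.strip() for line in f]

# Tf-Idf for article content and titles: fit on train, transform both
content_vec = TfidfVectorizer(ngram_range=(1, 2), max_features=100000)
X_train_content_sparse = content_vec.fit_transform(
    read_lines(os.path.join(PATH_TO_DATA, 'train_content.txt')))
X_test_content_sparse = content_vec.transform(
    read_lines(os.path.join(PATH_TO_DATA, 'test_content.txt')))

title_vec = TfidfVectorizer(ngram_range=(1, 2), max_features=100000)
X_train_title_sparse = title_vec.fit_transform(
    read_lines(os.path.join(PATH_TO_DATA, 'train_title.txt')))
X_test_title_sparse = title_vec.transform(
    read_lines(os.path.join(PATH_TO_DATA, 'test_title.txt')))

# Time features: publication hour, morning/day/night flags, weekend flag
def time_features(path):
    times = pd.to_datetime(read_lines(path))
    hour = times.hour
    df = pd.DataFrame({'hour': hour,
                       'morning': ((hour >= 7) & (hour <= 11)).astype(int),
                       'day': ((hour >= 12) & (hour <= 18)).astype(int),
                       'night': ((hour >= 0) & (hour <= 6)).astype(int),
                       'weekend': (times.dayofweek >= 5).astype(int)})
    return csr_matrix(df.values)

X_train_time_features_sparse = time_features(
    os.path.join(PATH_TO_DATA, 'train_published.txt'))
X_test_time_features_sparse = time_features(
    os.path.join(PATH_TO_DATA, 'test_published.txt'))

# Bag of authors: one-hot encode author names, ignoring authors unseen
# in the training set
author_enc = OneHotEncoder(handle_unknown='ignore')
X_train_author_sparse = author_enc.fit_transform(
    np.array(read_lines(os.path.join(PATH_TO_DATA,
                                     'train_author.txt'))).reshape(-1, 1))
X_test_author_sparse = author_enc.transform(
    np.array(read_lines(os.path.join(PATH_TO_DATA,
                                     'test_author.txt'))).reshape(-1, 1))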

Join all sparse matrices.

In [ ]:
X_train_sparse = hstack([X_train_content_sparse, X_train_title_sparse,
                         X_train_author_sparse, 
                         X_train_time_features_sparse]).tocsr()
In [ ]:
X_test_sparse = hstack([X_test_content_sparse, X_test_title_sparse,
                        X_test_author_sparse, 
                        X_test_time_features_sparse]).tocsr()

Read train target and split data for validation.

In [ ]:
train_target = pd.read_csv(os.path.join(PATH_TO_DATA, 'train_log1p_recommends.csv'), 
                           index_col='id')
y_train = train_target['log_recommends'].values
In [ ]:
train_part_size = int(0.7 * train_target.shape[0])
X_train_part_sparse = X_train_sparse[:train_part_size, :]
y_train_part = y_train[:train_part_size]
X_valid_sparse = X_train_sparse[train_part_size:, :]
y_valid = y_train[train_part_size:]

Train a simple Ridge model and check MAE on the validation set.

In [ ]:
# Your code here
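
# A minimal sketch: fit Ridge on the training part, evaluate on the holdout
ridge = Ridge(alpha=1.0, random_state=17)  # alpha is worth tuning
ridge.fit(X_train_part_sparse, y_train_part)
ridge_valid_pred = ridge.predict(X_valid_sparse)
print('Validation MAE:', mean_absolute_error(y_valid, ridge_valid_pred))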

Train the same Ridge with all available data, make predictions for the test set and form a submission file.

In [ ]:
# Your code here
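
# Refit the same model on all available training data, then predict
ridge.fit(X_train_sparse, y_train)
ridge_test_pred = ridge.predict(X_test_sparse)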
In [ ]:
def write_submission_file(prediction, filename,
                          path_to_sample=os.path.join(PATH_TO_DATA, 
                                                      'sample_submission.csv')):
    submission = pd.read_csv(path_to_sample, index_col='id')
    
    submission['log_recommends'] = prediction
    submission.to_csv(filename)
In [ ]:
write_submission_file(ridge_test_pred, os.path.join(PATH_TO_DATA,
                                                    'assignment2_medium_submission.csv'))

Now's the time for dirty Kaggle hacks. Form a submission file with all zeros and submit it. What does the resulting score tell you, if you think about it? How can it help you modify your predictions?

UPD: there is a tutorial on leaderboard probing, written within mlcourse.ai, that is relevant here. (Originally, contestants were supposed to come up with simple probing techniques on their own. Now that the tutorial is shared, we eliminate "discovery bias" and equalize everybody's chances.)

In [ ]:
write_submission_file(np.zeros_like(ridge_test_pred), 
                      os.path.join(PATH_TO_DATA,
                                   'medium_all_zeros_submission.csv'))

Modify predictions in an appropriate way (based on your all-zero submission) and make a new submission.

In [ ]:
ridge_test_pred_modif = ridge_test_pred  # Your code here
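
# A sketch of one well-known adjustment (see the leaderboard-probing
# tutorial): since log_recommends >= 0, the public MAE of the all-zeros
# submission equals the mean target on the public test subset. One option
# is to shift predictions so that their mean matches this value. The number
# below is a PLACEHOLDER -- use the score your own all-zeros submission gets.
mean_test_target = 4.0  # placeholder: your all-zeros public LB MAE
ridge_test_pred_modif = ridge_test_pred + (mean_test_target -
                                           ridge_test_pred.mean())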
In [ ]:
write_submission_file(ridge_test_pred_modif, 
                      os.path.join(PATH_TO_DATA,
                                   'assignment2_medium_submission_with_hack.csv'))

Some ideas for improvement:

  • Engineer good features; this is the key to success. Simple ones can be based on publication time, authors, content length, and so on
  • Don't just throw the HTML away; you can also extract features from the markup itself
  • Experiment with your validation scheme; you should see a correlation between your local improvements and the LB score
  • Try TF-IDF, ngrams, Word2Vec and GloVe embeddings
  • Try various NLP techniques like stemming and lemmatization
  • Tune hyperparameters. In our example, we kept only 50k features and left the regularization strength at its default (alpha=1.0 for Ridge); both can be changed
  • SGD and Vowpal Wabbit learn much faster
  • Play around with blending and/or stacking. An intro is given in this Kernel by @yorko
  • We don't cover neural nets in this course, but you're not obliged to use GRUs/LSTMs/whatever in this competition anyway.

Good luck!