Authors: Olga Daykhovskaya, Yury Kashnitskiy. This material is subject to the terms and conditions of the Creative Commons CC BY-NC-SA 4.0 license. Free use is permitted for any non-commercial purpose.
In this task, we will look at how data dimensionality reduction and clustering methods work. At the same time, we will practice solving a classification task once again.
We will work with the Samsung Human Activity Recognition dataset. Download the data here. The data comes from the accelerometers and gyroscopes of Samsung Galaxy S3 mobile phones (you can find more information about the features via the link above). The type of activity of the person carrying the phone in their pocket is also known – whether they walked, stood, lay, sat, or walked up or down the stairs.
First, we pretend that the type of activity is unknown to us, and we will try to cluster people purely on the basis of available features. Then we solve the problem of determining the type of physical activity as a classification problem.
Fill the code where needed ("Your code is here") and answer the questions in the web form.
import os
import numpy as np
import pandas as pd
import seaborn as sns
from tqdm import tqdm_notebook
%matplotlib inline
from matplotlib import pyplot as plt
plt.style.use(['seaborn-darkgrid'])
plt.rcParams['figure.figsize'] = (12, 9)
plt.rcParams['font.family'] = 'DejaVu Sans'
from sklearn import metrics
from sklearn.cluster import KMeans, AgglomerativeClustering, SpectralClustering
from sklearn.decomposition import PCA
from sklearn.model_selection import GridSearchCV
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC
RANDOM_STATE = 17
PATH_TO_SAMSUNG_DATA = "../../data/samsung_HAR"
X_train = np.loadtxt(os.path.join(PATH_TO_SAMSUNG_DATA, "samsung_train.txt"))
y_train = np.loadtxt(os.path.join(PATH_TO_SAMSUNG_DATA,
"samsung_train_labels.txt")).astype(int)
X_test = np.loadtxt(os.path.join(PATH_TO_SAMSUNG_DATA, "samsung_test.txt"))
y_test = np.loadtxt(os.path.join(PATH_TO_SAMSUNG_DATA,
"samsung_test_labels.txt")).astype(int)
# Checking dimensions
assert(X_train.shape == (7352, 561) and y_train.shape == (7352,))
assert(X_test.shape == (2947, 561) and y_test.shape == (2947,))
For clustering, we do not need a target vector, so we'll work with the combination of the training and test samples. Merge X_train with X_test, and y_train with y_test.
# Your code here
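One possible way to merge the parts (the names X and y for the combined arrays are our choice, not prescribed by the assignment):
# Stack the feature matrices row-wise and concatenate the label vectors
X = np.vstack([X_train, X_test])
y = np.hstack([y_train, y_test])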
Define the number of unique values of the labels of the target class.
# np.unique(y)
# n_classes = np.unique(y).size
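For instance, uncommenting the hint above (assuming y is the merged label vector):
# Unique activity labels and their number
print(np.unique(y))
n_classes = np.unique(y).size
print(n_classes)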
Scale the sample using StandardScaler with default parameters.
# Your code here
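A minimal sketch of the scaling step, assuming X is the merged feature matrix and X_scaled is the name we give to the result:
# Standardize each of the 561 features to zero mean and unit variance
scaler = StandardScaler()
X_scaled = scaler.fit_transform(X)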
Reduce the number of dimensions using PCA, leaving as many components as necessary to explain at least 90% of the variance of the original (scaled) data. Use the scaled dataset and fix random_state (the RANDOM_STATE constant).
# Your code here
# pca =
# X_pca =
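One way to fill the stub above, assuming X_scaled from the previous step; passing a float to n_components makes PCA keep just enough components to explain that share of the variance:
# Keep the smallest number of components explaining at least 90% of the variance
pca = PCA(n_components=0.9, random_state=RANDOM_STATE)
X_pca = pca.fit_transform(X_scaled)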
Question 1:
What is the minimum number of principal components required to cover 90% of the variance of the original (scaled) data?
# Your code here
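For example, with the pca object fitted as sketched above:
# Number of principal components kept to reach 90% explained variance
print(pca.n_components_)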
Answer options:
Question 2:
What percentage of the variance is covered by the first principal component? Round to the nearest percent.
Answer options:
# Your code here
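One way to check this, again assuming the fitted pca from above:
# Share of variance explained by the first principal component, in percent
print(round(100 * pca.explained_variance_ratio_[0]))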
Visualize data in projection on the first two principal components.
# Your code here
# plt.scatter(, , c=y, s=20, cmap='viridis');
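A sketch of the plot, assuming X_pca holds the PCA-transformed data and y the merged labels:
# Project onto the first two principal components, color points by true activity
plt.scatter(X_pca[:, 0], X_pca[:, 1], c=y, s=20, cmap='viridis');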
Question 3:
If everything worked out correctly, you will see a number of clusters, almost perfectly separated from each other. What types of activity are included in these clusters?
Answer options:
Perform clustering with the KMeans method, training the model on the data with reduced dimensionality (obtained with PCA). Here we give a hint to look for exactly 6 clusters, although in the general case we will not know how many clusters we should be looking for.
Options:
- n_clusters = n_classes (the number of unique labels of the target class)
- n_init = 100
- random_state = RANDOM_STATE (for reproducibility of the result)
Other parameters should have default values.
# Your code here
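A sketch of the clustering step with the parameters listed above (X_pca and n_classes are assumed from the earlier steps):
# KMeans on the PCA-reduced data, looking for 6 clusters
kmeans = KMeans(n_clusters=n_classes, n_init=100, random_state=RANDOM_STATE)
kmeans.fit(X_pca)
cluster_labels = kmeans.labels_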
Visualize data in projection on the first two principal components. Color the dots according to the clusters obtained.
# Your code here
# plt.scatter(, , c=cluster_labels, s=20, cmap='viridis');
Look at the correspondence between the cluster labels and the original class labels, and see which activities the KMeans algorithm confuses.
# tab = pd.crosstab(y, cluster_labels, margins=True)
# tab.index = ['walking', 'going up the stairs',
# 'going down the stairs', 'sitting', 'standing', 'laying', 'all']
# tab.columns = ['cluster' + str(i + 1) for i in range(6)] + ['all']
# tab
We see that for each class (i.e., each activity) there are several clusters. Let's look at the maximum percentage of objects in a class that are assigned to a single cluster. This will be a simple metric that characterizes how easily the class is separated from others when clustering.
Example: if for the class "walking downstairs" (with 1406 instances belonging to it) 900 objects fall into one cluster and the rest are scattered over the other clusters, then such a share will be 900/1406 $\approx$ 0.64.
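This share can be computed directly from the crosstab; a sketch, assuming tab was built as in the commented hint above (with margins, so the 'all' row and column are dropped first):
# For each activity, the largest share of its objects assigned to a single cluster
shares = tab.iloc[:-1, :-1].max(axis=1) / tab.iloc[:-1, -1]
shares.sort_values(ascending=False)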
Question 4:
Which activity is separated from the rest better than others based on the simple metric described above?
Answer:
It can be seen that KMeans does not distinguish the activities very well. Use the elbow method to select the optimal number of clusters. The parameters of the algorithm and the data are the same as before; we change only n_clusters.
# # Your code here
# inertia = []
# for k in tqdm_notebook(range(1, n_classes + 1)):
# pass
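A sketch of the loop, under the same assumptions as before (X_pca, n_classes, RANDOM_STATE; n_init=100 as above):
inertia = []
for k in tqdm_notebook(range(1, n_classes + 1)):
    # Fit KMeans with k clusters and store the within-cluster sum of squares
    kmeans = KMeans(n_clusters=k, n_init=100, random_state=RANDOM_STATE).fit(X_pca)
    inertia.append(kmeans.inertia_)
# Plot inertia against the number of clusters to spot the "elbow"
plt.plot(range(1, n_classes + 1), inertia, marker='s');
plt.xlabel('Number of clusters k');
plt.ylabel('Inertia');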
Question 5:
How many clusters can we choose according to the elbow method?
Answer options:
Let's try another clustering algorithm, described in the article – agglomerative clustering.
# ag = AgglomerativeClustering(n_clusters=n_classes,
# linkage='ward').fit(X_pca)
Calculate the Adjusted Rand Index (sklearn.metrics) for the resulting clustering and for KMeans with the parameters from the 4th question.
# Your code here
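One way to compute both scores, assuming cluster_labels from the KMeans step and ag from the cell above:
# Adjusted Rand Index measures agreement of a clustering with the true labels
print('KMeans ARI:', metrics.adjusted_rand_score(y, cluster_labels))
print('Agglomerative ARI:', metrics.adjusted_rand_score(y, ag.labels_))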
Question 6:
Select all the correct statements.
Answer options:
You can see that the task is not solved very well when we try to detect several clusters (> 2). Now, let's solve the classification problem, given that the data is labeled.
For classification, use the support vector machine – class sklearn.svm.LinearSVC
. In this course, we didn't study this algorithm separately, but it is well-known and you can read about it, for example here.
Choose the C hyperparameter for LinearSVC using GridSearchCV.
- Train StandardScaler on the training set (with all the original features) and apply the scaling to the test set.
- In GridSearchCV, specify cv=3.
# # Your code here
# scaler = StandardScaler()
# X_train_scaled =
# X_test_scaled =
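A sketch of the scaling step (the scaler is fitted on the training set only, so that no information leaks from the test set):
scaler = StandardScaler()
X_train_scaled = scaler.fit_transform(X_train)
X_test_scaled = scaler.transform(X_test)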
svc = LinearSVC(random_state=RANDOM_STATE)
svc_params = {'C': [0.001, 0.01, 0.1, 1, 10]}
# %%time
# # Your code here
# best_svc = None
# best_svc.best_params_, best_svc.best_score_
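One possible way to fill the stub above, reusing svc and svc_params (n_jobs=-1 is optional and only parallelizes the search):
# 3-fold cross-validation over the grid of C values on the scaled training set
best_svc = GridSearchCV(svc, svc_params, cv=3, n_jobs=-1)
best_svc.fit(X_train_scaled, y_train)
best_svc.best_params_, best_svc.best_score_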
Question 7:
Which value of the hyperparameter C
was chosen the best on the basis of cross-validation?
Answer options:
# y_predicted = best_svc.predict(X_test_scaled)
# tab = pd.crosstab(y_test, y_predicted, margins=True)
# tab.index = ['walking', 'climbing up the stairs',
# 'going down the stairs', 'sitting', 'standing', 'laying', 'all']
# tab.columns = ['walking', 'climbing up the stairs',
# 'going down the stairs', 'sitting', 'standing', 'laying', 'all']
# tab
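Per-class precision and recall can also be read off directly; a sketch, assuming y_predicted from the cell above and the same activity names:
# Precision and recall for each activity type
print(metrics.classification_report(
    y_test, y_predicted,
    target_names=['walking', 'climbing up the stairs', 'going down the stairs',
                  'sitting', 'standing', 'laying']))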
Question 8:
Which activity type is detected worst by SVM in terms of precision? In terms of recall?
Answer options:
Finally, do the same thing as in Question 7, but add PCA.
- Use X_train_scaled and X_test_scaled.
- Fit the same PCA as before on the scaled training set and apply the transformation to the test set.
- Choose the hyperparameter C via cross-validation on the training set after the PCA transformation. You will notice how much faster it works now.
Question 9:
What is the difference between the best quality (accuracy) achieved with cross-validation in the case of all 561 original features and in the second case, when the principal component method was applied? Round to the nearest percent.
Options:
# Your code here
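A sketch of this step under the same assumptions as before (90% of the variance kept, cv=3); the names X_train_pca, X_test_pca and best_svc_pca are our own:
# Fit PCA on the scaled training set only, then transform both sets
pca = PCA(n_components=0.9, random_state=RANDOM_STATE)
X_train_pca = pca.fit_transform(X_train_scaled)
X_test_pca = pca.transform(X_test_scaled)

# Tune C again, this time on the lower-dimensional data
best_svc_pca = GridSearchCV(svc, svc_params, cv=3, n_jobs=-1)
best_svc_pca.fit(X_train_pca, y_train)
print(best_svc_pca.best_params_, best_svc_pca.best_score_)

# Difference in cross-validation accuracy, in percentage points
print(round(100 * (best_svc.best_score_ - best_svc_pca.best_score_)))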
Question 10:
Select all the correct statements:
Answer options: