!unzip "drive/My Drive/Applied_AI/HAR/HumanActivityRecognition.zip"
from google.colab import drive
drive.mount('/content/drive')
Mounted at /content/drive
The goal of this project is to build a model that predicts human activities such as Walking, Walking_Upstairs, Walking_Downstairs, Sitting, Standing or Laying.
The dataset was collected from 30 persons (referred to as subjects in this dataset), each performing different activities while wearing a smartphone on the waist. The data were recorded with the help of the sensors (accelerometer and gyroscope) in that smartphone. The experiment was video-recorded so that the data could be labeled manually.
Using the sensors (gyroscope and accelerometer) in a smartphone, the researchers captured '3-axial linear acceleration' (tAcc-XYZ) from the accelerometer and '3-axial angular velocity' (tGyro-XYZ) from the gyroscope, with several variations.
The prefix 't' in those names denotes time-domain signals.
The suffix 'XYZ' represents 3-axial signals in the X, Y and Z directions.
These sensor signals were preprocessed by applying noise filters and then sampled in fixed-width sliding windows of 2.56 seconds each, with 50% overlap, i.e., each window has 128 readings. A small windowing sketch is given below.
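Below is a minimal sketch of this windowing step, assuming a 50 Hz signal (128 readings per 2.56 s window); the function name and the demo signal are our own, not part of the dataset's code.
import numpy as np

def sliding_windows(sig, window_size=128, overlap=0.5):
    # a step of 64 samples gives 50% overlap between consecutive windows
    step = int(window_size * (1 - overlap))
    starts = range(0, len(sig) - window_size + 1, step)
    return np.array([sig[s:s + window_size] for s in starts])

demo = np.random.randn(500)            # 10 seconds of a 50 Hz signal
print(sliding_windows(demo).shape)     # (6, 128)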
From each window, a feature vector was obtained by calculating variables from the time and frequency domain.
In our dataset, each datapoint represents one such window of readings.
The acceleration signal was separated into body and gravity acceleration signals (___tBodyAcc-XYZ___ and ___tGravityAcc-XYZ___) using a low-pass filter with a corner frequency of 0.3 Hz.
After that, the body linear acceleration and angular velocity were derived in time to obtain jerk signals (___tBodyAccJerk-XYZ___ and ___tBodyGyroJerk-XYZ___). A sketch of both steps follows.
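A minimal sketch of these two steps, assuming a 50 Hz signal and a Butterworth low-pass filter (the dataset authors' exact filter design is not stated here, so treat the filter order as an illustrative choice):
import numpy as np
from scipy import signal

fs = 50.0        # sampling rate: 128 readings per 2.56 s window
cutoff = 0.3     # corner frequency (Hz) that isolates the gravity component

# 3rd-order Butterworth low-pass filter (illustrative choice)
b, a = signal.butter(3, cutoff / (fs / 2), btype='low')

t_acc = np.random.randn(128)                  # stand-in for one window of tAcc-X
t_gravity_acc = signal.filtfilt(b, a, t_acc)  # gravity component (tGravityAcc)
t_body_acc = t_acc - t_gravity_acc            # body component (tBodyAcc)

# jerk signal: time derivative of the body acceleration
t_body_acc_jerk = np.gradient(t_body_acc, 1.0 / fs)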
The magnitude of these 3-dimensional signals was calculated using the Euclidean norm. These magnitudes are represented as features with names like tBodyAccMag, tGravityAccMag, tBodyAccJerkMag, tBodyGyroMag and tBodyGyroJerkMag.
Finally, frequency-domain signals were obtained from some of the available signals by applying an FFT (Fast Fourier Transform). These signals are labeled with the ___prefix 'f'___, just as the original signals carry the ___prefix 't'___, e.g., ___fBodyAcc-XYZ___, ___fBodyGyroMag___, etc. A small FFT sketch follows.
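A minimal sketch of this step on one window (stand-in data; the variable names are our own):
import numpy as np

fs = 50.0
t_body_acc_x = np.random.randn(128)        # one time-domain window (stand-in data)

# magnitude spectrum of the window, i.e. the corresponding 'f'-prefixed signal
f_body_acc_x = np.abs(np.fft.rfft(t_body_acc_x))
freqs = np.fft.rfftfreq(t_body_acc_x.size, d=1.0 / fs)   # bins from 0 to 25 Hz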
These are the signals obtained so far.
We can estimate a set of variables from the above signals, i.e., the following properties are computed on each of the signals recorded so far.
We can obtain some other vectors by taking the average of the signals in a single window sample. These are used in the angle() variables:
- gravityMean
- tBodyAccMean
- tBodyAccJerkMean
- tBodyGyroMean
- tBodyGyroJerkMean
In the dataset, the y-labels are represented by the numbers 1 to 6 as activity identifiers.
The dataset is about 27 MB in size.
Accelerometer and gyroscope readings are taken from 30 volunteers (referred to as subjects) while performing the following 6 activities.
Readings are divided into windows of 2.56 seconds with 50% overlap.
Accelerometer readings are separated into gravity acceleration and body acceleration readings, each of which has x, y and z components.
Gyroscope readings are measures of angular velocity and have x, y and z components.
Jerk signals are calculated from the body acceleration readings.
Fourier transforms are applied to the above time-domain readings to obtain frequency-domain readings.
Then, for each window, statistics such as mean, max, mad, sma, AR coefficients, energy bands, entropy, etc. are calculated on all the base signal readings (see the sketch after this list).
We get a feature vector of 561 features, and these features are given in the dataset.
Each window of readings is a datapoint of 561 features.
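A minimal sketch of a few such per-window statistics; the exact definitions of the dataset's 561 features live in its features_info.txt, so treat these formulas as illustrative approximations.
import numpy as np
from scipy import stats

def window_features(w):
    # a handful of the per-window statistics named above, for a 1-D window
    return {
        'mean':   np.mean(w),
        'std':    np.std(w),
        'mad':    np.median(np.abs(w - np.median(w))),  # median absolute deviation
        'max':    np.max(w),
        'min':    np.min(w),
        'energy': np.sum(w ** 2) / len(w),              # average sum of squares
        'iqr':    stats.iqr(w),                         # interquartile range
        'sma':    np.sum(np.abs(w)) / len(w),           # signal magnitude area (1-D case)
    }

print(window_features(np.random.randn(128)))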
import numpy as np
import pandas as pd
# get the features from the file features.txt
features = list()
with open('drive/My Drive/Applied_AI/HAR/UCI_HAR_Dataset/features.txt') as f:
features = [line.split()[1] for line in f.readlines()]
print('No of Features: {}'.format(len(features)))
No of Features: 561
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import time
# https://gist.github.com/greydanus/f6eee59eaf1d90fcb3b534a25362cea4
# https://stackoverflow.com/a/14434334
# this function is used to update the plots for each epoch and error
# NOTE: it relies on a matplotlib figure object `fig` defined in the calling scope
def plt_dynamic(x, vy, ty, ax, colors=['b']):
ax.plot(x, vy, 'b', label="Validation Loss")
ax.plot(x, ty, 'r', label="Train Loss")
plt.legend()
plt.grid()
fig.canvas.draw()
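Since plt_dynamic reads a global `fig`, a minimal usage sketch looks like this (the loss values are dummy numbers for illustration):
fig, ax = plt.subplots(1, 1)
ax.set_xlabel('epoch'); ax.set_ylabel('loss')
epochs = [1, 2, 3, 4, 5]
val_loss = [1.0, 0.7, 0.55, 0.50, 0.48]     # dummy validation losses
train_loss = [0.9, 0.6, 0.45, 0.38, 0.33]   # dummy training losses
plt_dynamic(epochs, val_loss, train_loss, ax)
plt.show()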
# get the data from txt files into a pandas dataframe
X_train = pd.read_csv('drive/My Drive/Applied_AI/HAR/UCI_HAR_Dataset/train/X_train.txt', delim_whitespace=True, header=None, names=features)
# add subject column to the dataframe
X_train['subject'] = pd.read_csv('drive/My Drive/Applied_AI/HAR/UCI_HAR_Dataset/train/subject_train.txt', header=None, squeeze=True)
y_train = pd.read_csv('drive/My Drive/Applied_AI/HAR/UCI_HAR_Dataset/train/y_train.txt', names=['Activity'], squeeze=True)
y_train_labels = y_train.map({1: 'WALKING', 2:'WALKING_UPSTAIRS',3:'WALKING_DOWNSTAIRS',\
4:'SITTING', 5:'STANDING',6:'LAYING'})
# put all columns in a single dataframe
train = X_train
train['Activity'] = y_train
train['ActivityName'] = y_train_labels
train.sample()
[train.sample() output: one random row with all 561 feature columns plus subject, Activity and ActivityName; e.g. row 7165 is subject 30, Activity 2 (WALKING_UPSTAIRS)]
1 rows × 564 columns
train.shape
(7352, 564)
# get the data from txt files into a pandas dataframe
X_test = pd.read_csv('drive/My Drive/Applied_AI/HAR/UCI_HAR_Dataset/test/X_test.txt', delim_whitespace=True, header=None, names=features)
# add subject column to the dataframe
X_test['subject'] = pd.read_csv('drive/My Drive/Applied_AI/HAR/UCI_HAR_Dataset/test/subject_test.txt', header=None, squeeze=True)
# get y labels from the txt file
y_test = pd.read_csv('drive/My Drive/Applied_AI/HAR/UCI_HAR_Dataset/test/y_test.txt', names=['Activity'], squeeze=True)
y_test_labels = y_test.map({1: 'WALKING', 2:'WALKING_UPSTAIRS',3:'WALKING_DOWNSTAIRS',\
4:'SITTING', 5:'STANDING',6:'LAYING'})
# put all columns in a single dataframe
test = X_test
test['Activity'] = y_test
test['ActivityName'] = y_test_labels
test.sample()
[test.sample() output: one random row with all 561 feature columns plus subject, Activity and ActivityName; e.g. row 2450 is subject 20, Activity 4 (SITTING)]
1 rows × 564 columns
test.shape
(2947, 564)
print('No of duplicates in train: {}'.format(sum(train.duplicated())))
print('No of duplicates in test : {}'.format(sum(test.duplicated())))
No of duplicates in train: 0 No of duplicates in test : 0
print('We have {} NaN/Null values in train'.format(train.isnull().values.sum()))
print('We have {} NaN/Null values in test'.format(test.isnull().values.sum()))
We have 0 NaN/Null values in train We have 0 NaN/Null values in test
import matplotlib.pyplot as plt
import seaborn as sns
sns.set_style('whitegrid')
plt.rcParams['font.family'] = 'DejaVu Sans'
plt.figure(figsize=(16,8))
plt.title('Data provided by each user', fontsize=20)
sns.countplot(x='subject',hue='ActivityName', data = train)
plt.show()
We got almost the same number of readings from all the subjects.
plt.title('No of Datapoints per Activity', fontsize=15)
sns.countplot(train.ActivityName)
plt.xticks(rotation=90)
plt.show()
Our data is well balanced (almost)
columns = train.columns
# Removing '()', '-' and ',' from column names
columns = columns.str.replace('[()]','')
columns = columns.str.replace('[-]', '')
columns = columns.str.replace('[,]','')
train.columns = columns
test.columns = columns
test.columns
Index(['tBodyAccmeanX', 'tBodyAccmeanY', 'tBodyAccmeanZ', 'tBodyAccstdX', 'tBodyAccstdY', 'tBodyAccstdZ', 'tBodyAccmadX', 'tBodyAccmadY', 'tBodyAccmadZ', 'tBodyAccmaxX', ... 'angletBodyAccMeangravity', 'angletBodyAccJerkMeangravityMean', 'angletBodyGyroMeangravityMean', 'angletBodyGyroJerkMeangravityMean', 'angleXgravityMean', 'angleYgravityMean', 'angleZgravityMean', 'subject', 'Activity', 'ActivityName'], dtype='object', length=564)
train.to_csv('drive/My Drive/Applied_AI/HAR/UCI_HAR_Dataset/csv_files/train.csv', index=False)
test.to_csv('drive/My Drive/Applied_AI/HAR/UCI_HAR_Dataset/csv_files/test.csv', index=False)
"___Without domain knowledge EDA has no meaning, without EDA a problem has no soul.___"
Static and Dynamic Activities
sns.set_palette("Set1", desat=0.80)
facetgrid = sns.FacetGrid(train, hue='ActivityName', height=6, aspect=2)
facetgrid.map(sns.distplot,'tBodyAccMagmean', hist=False)\
.add_legend()
plt.annotate("Stationary Activities", xy=(-0.956,17), xytext=(-0.9, 23), size=20,\
va='center', ha='left',\
arrowprops=dict(arrowstyle="simple",connectionstyle="arc3,rad=0.1"))
plt.annotate("Moving Activities", xy=(0,3), xytext=(0.2, 9), size=20,\
va='center', ha='left',\
arrowprops=dict(arrowstyle="simple",connectionstyle="arc3,rad=0.1"))
plt.show()
# for plotting purposes, take the datapoints of each activity into a separate dataframe
df1 = train[train['Activity']==1]
df2 = train[train['Activity']==2]
df3 = train[train['Activity']==3]
df4 = train[train['Activity']==4]
df5 = train[train['Activity']==5]
df6 = train[train['Activity']==6]
plt.figure(figsize=(14,7))
plt.subplot(2,2,1)
plt.title('Stationary Activities(Zoomed in)')
sns.distplot(df4['tBodyAccMagmean'],color = 'r',hist = False, label = 'Sitting')
sns.distplot(df5['tBodyAccMagmean'],color = 'm',hist = False,label = 'Standing')
sns.distplot(df6['tBodyAccMagmean'],color = 'c',hist = False, label = 'Laying')
plt.axis([-1.01, -0.5, 0, 35])
plt.legend(loc='center')
plt.subplot(2,2,2)
plt.title('Moving Activities')
sns.distplot(df1['tBodyAccMagmean'],color = 'red',hist = False, label = 'Walking')
sns.distplot(df2['tBodyAccMagmean'],color = 'blue',hist = False,label = 'Walking Up')
sns.distplot(df3['tBodyAccMagmean'],color = 'green',hist = False, label = 'Walking down')
plt.legend(loc='center right')
plt.tight_layout()
plt.show()
plt.figure(figsize=(7,7))
sns.boxplot(x='ActivityName', y='tBodyAccMagmean',data=train, showfliers=False, saturation=1)
plt.ylabel('Acceleration Magnitude mean')
plt.axhline(y=-0.7, xmin=0.1, xmax=0.9,dashes=(5,5), c='g')
plt.axhline(y=-0.05, xmin=0.4, dashes=(5,5), c='m')
plt.xticks(rotation=90)
plt.show()
__Observations:__
sns.boxplot(x='ActivityName', y='angleXgravityMean', data=train)
plt.axhline(y=0.08, xmin=0.1, xmax=0.9,c='m',dashes=(5,3))
plt.title('Angle between X-axis and Gravity_mean', fontsize=15)
plt.xticks(rotation = 40)
plt.show()
__Observations:__
sns.boxplot(x='ActivityName', y='angleYgravityMean', data = train, showfliers=False)
plt.title('Angle between Y-axis and Gravity_mean', fontsize=15)
plt.xticks(rotation = 40)
plt.axhline(y=-0.22, xmin=0.1, xmax=0.8, dashes=(5,3), c='m')
plt.show()
import numpy as np
from sklearn.manifold import TSNE
import matplotlib.pyplot as plt
import seaborn as sns
# performs t-sne with different perplexity values and plots their respective results
def perform_tsne(X_data, y_data, perplexities, n_iter=1000, img_name_prefix='t-sne'):
for index,perplexity in enumerate(perplexities):
# perform t-sne
print('\nperforming tsne with perplexity {} and with {} iterations at max'.format(perplexity, n_iter))
X_reduced = TSNE(verbose=2, perplexity=perplexity, n_iter=n_iter).fit_transform(X_data)
print('Done..')
# prepare the data for seaborn
print('Creating plot for this t-sne visualization..')
df = pd.DataFrame({'x':X_reduced[:,0], 'y':X_reduced[:,1] ,'label':y_data})
# draw the plot in appropriate place in the grid
sns.lmplot(data=df, x='x', y='y', hue='label', fit_reg=False, height=8,\
palette="Set1", markers=['^','v','s','o','1','2'])
plt.title("perplexity : {} and max_iter : {}".format(perplexity, n_iter))
img_name = img_name_prefix + '_perp_{}_iter_{}.png'.format(perplexity, n_iter)
print('saving this plot as image in present working directory...')
plt.savefig(img_name)
plt.show()
print('Done')
X_pre_tsne = train.drop(['subject', 'Activity','ActivityName'], axis=1)
y_pre_tsne = train['ActivityName']
perform_tsne(X_data = X_pre_tsne,y_data=y_pre_tsne, perplexities =[2,5,10,20,50])
[verbose t-SNE training logs omitted; KL divergence after 1000 iterations: perplexity 2: 1.628, perplexity 5: 1.566, perplexity 10: 1.500, perplexity 20: 1.419, perplexity 50: 1.287]
Done
import numpy as np
import pandas as pd
train = pd.read_csv('drive/My Drive/Applied_AI/HAR/UCI_HAR_Dataset/csv_files/train.csv')
test = pd.read_csv('drive/My Drive/Applied_AI/HAR/UCI_HAR_Dataset/csv_files/test.csv')
print(train.shape, test.shape)
(7352, 564) (2947, 564)
train.head(3)
[train.head(3) output: the first three rows with all 561 feature columns plus subject, Activity and ActivityName; all three rows are subject 1, Activity 5 (STANDING)]
3 rows × 564 columns
# get X_train and y_train from csv files
X_train = train.drop(['subject', 'Activity', 'ActivityName'], axis=1)
y_train = train.ActivityName
# get X_test and y_test from test csv file
X_test = test.drop(['subject', 'Activity', 'ActivityName'], axis=1)
y_test = test.ActivityName
print('X_train and y_train : ({},{})'.format(X_train.shape, y_train.shape))
print('X_test and y_test : ({},{})'.format(X_test.shape, y_test.shape))
X_train and y_train : ((7352, 561),(7352,)) X_test and y_test : ((2947, 561),(2947,))
labels=['LAYING', 'SITTING','STANDING','WALKING','WALKING_DOWNSTAIRS','WALKING_UPSTAIRS']
import itertools
import numpy as np
import matplotlib.pyplot as plt
from sklearn.metrics import confusion_matrix
plt.rcParams["font.family"] = 'DejaVu Sans'
def plot_confusion_matrix(cm, classes,
normalize=False,
title='Confusion matrix',
cmap=plt.cm.Blues):
if normalize:
cm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]
plt.imshow(cm, interpolation='nearest', cmap=cmap)
plt.title(title)
plt.colorbar()
tick_marks = np.arange(len(classes))
plt.xticks(tick_marks, classes, rotation=90)
plt.yticks(tick_marks, classes)
fmt = '.2f' if normalize else 'd'
thresh = cm.max() / 2.
for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])):
plt.text(j, i, format(cm[i, j], fmt),
horizontalalignment="center",
color="white" if cm[i, j] > thresh else "black")
plt.tight_layout()
plt.ylabel('True label')
plt.xlabel('Predicted label')
from datetime import datetime
def perform_model(model, X_train, y_train, X_test, y_test, class_labels, cm_normalize=True, \
print_cm=True, cm_cmap=plt.cm.Greens):
# to store results at various phases
results = dict()
# time at which model starts training
train_start_time = datetime.now()
print('training the model..')
model.fit(X_train, y_train)
print('Done \n \n')
train_end_time = datetime.now()
results['training_time'] = train_end_time - train_start_time
print('training_time(HH:MM:SS.ms) - {}\n\n'.format(results['training_time']))
# predict test data
print('Predicting test data')
test_start_time = datetime.now()
y_pred = model.predict(X_test)
test_end_time = datetime.now()
print('Done \n \n')
results['testing_time'] = test_end_time - test_start_time
print('testing time(HH:MM:SS:ms) - {}\n\n'.format(results['testing_time']))
results['predicted'] = y_pred
# calculate overall accuracy of the model
accuracy = metrics.accuracy_score(y_true=y_test, y_pred=y_pred)
# store accuracy in results
results['accuracy'] = accuracy
print('---------------------')
print('| Accuracy |')
print('---------------------')
print('\n {}\n\n'.format(accuracy))
# confusion matrix
cm = metrics.confusion_matrix(y_test, y_pred)
results['confusion_matrix'] = cm
if print_cm:
print('--------------------')
print('| Confusion Matrix |')
print('--------------------')
print('\n {}'.format(cm))
# plot confusion matrix
plt.figure(figsize=(8,8))
plt.grid(b=False)
plot_confusion_matrix(cm, classes=class_labels, normalize=True, title='Normalized confusion matrix', cmap = cm_cmap)
plt.show()
# get classification report
print('-------------------------')
print('| Classification Report |')
print('-------------------------')
classification_report = metrics.classification_report(y_test, y_pred)
# store report in results
results['classification_report'] = classification_report
print(classification_report)
# add the trained model to the results
results['model'] = model
return results
def print_grid_search_attributes(model):
# Estimator that gave highest score among all the estimators formed in GridSearch
print('--------------------------')
print('| Best Estimator |')
print('--------------------------')
print('\n\t{}\n'.format(model.best_estimator_))
# parameters that gave best results while performing grid search
print('--------------------------')
print('| Best parameters |')
print('--------------------------')
print('\tParameters of best estimator : \n\n\t{}\n'.format(model.best_params_))
# number of cross validation splits
print('---------------------------------')
print('| No of CrossValidation sets |')
print('--------------------------------')
print('\n\tTotal number of cross-validation sets: {}\n'.format(model.n_splits_))
# Average cross validated score of the best estimator, from the Grid Search
print('--------------------------')
print('| Best Score |')
print('--------------------------')
print('\n\tAverage Cross Validate scores of best estimator : \n\n\t{}\n'.format(model.best_score_))
from sklearn import linear_model
from sklearn import metrics
from sklearn.model_selection import GridSearchCV
# start Grid search
parameters = {'C':[0.01, 0.1, 1, 10, 20, 30], 'penalty':['l2','l1']}
log_reg = linear_model.LogisticRegression()
log_reg_grid = GridSearchCV(log_reg, param_grid=parameters, cv=3, verbose=1, n_jobs=-1)
log_reg_grid_results = perform_model(log_reg_grid, X_train, y_train, X_test, y_test, class_labels=labels)
training the model.. Fitting 3 folds for each of 12 candidates, totalling 36 fits
[Parallel(n_jobs=-1)]: Using backend LokyBackend with 4 concurrent workers. [Parallel(n_jobs=-1)]: Done 36 out of 36 | elapsed: 1.8min finished
Done training_time(HH:MM:SS.ms) - 0:02:07.698399 Predicting test data Done testing time(HH:MM:SS:ms) - 0:00:00.009011 --------------------- | Accuracy | --------------------- 0.9626739056667798 -------------------- | Confusion Matrix | -------------------- [[537 0 0 0 0 0] [ 1 428 58 0 0 4] [ 0 12 519 1 0 0] [ 0 0 0 495 1 0] [ 0 0 0 3 409 8] [ 0 0 0 22 0 449]]
------------------------- | Classifiction Report | ------------------------- precision recall f1-score support LAYING 1.00 1.00 1.00 537 SITTING 0.97 0.87 0.92 491 STANDING 0.90 0.98 0.94 532 WALKING 0.95 1.00 0.97 496 WALKING_DOWNSTAIRS 1.00 0.97 0.99 420 WALKING_UPSTAIRS 0.97 0.95 0.96 471 accuracy 0.96 2947 macro avg 0.97 0.96 0.96 2947 weighted avg 0.96 0.96 0.96 2947
plt.figure(figsize=(8,8))
plt.grid(b=False)
plot_confusion_matrix(log_reg_grid_results['confusion_matrix'], classes=labels, cmap=plt.cm.Greens, )
plt.show()
# observe the attributes of the model
print_grid_search_attributes(log_reg_grid_results['model'])
-------------------------- | Best Estimator | -------------------------- LogisticRegression(C=30, class_weight=None, dual=False, fit_intercept=True, intercept_scaling=1, l1_ratio=None, max_iter=100, multi_class='warn', n_jobs=None, penalty='l2', random_state=None, solver='warn', tol=0.0001, verbose=0, warm_start=False) -------------------------- | Best parameters | -------------------------- Parameters of best estimator : {'C': 30, 'penalty': 'l2'} --------------------------------- | No of CrossValidation sets | -------------------------------- Total numbre of cross validation sets: 3 -------------------------- | Best Score | -------------------------- Average Cross Validate scores of best estimator : 0.9461371055495104
from sklearn.svm import LinearSVC
parameters = {'C':[0.125, 0.5, 1, 2, 8, 16]}
lr_svc = LinearSVC(tol=0.00005)
lr_svc_grid = GridSearchCV(lr_svc, param_grid=parameters, n_jobs=-1, verbose=1)
lr_svc_grid_results = perform_model(lr_svc_grid, X_train, y_train, X_test, y_test, class_labels=labels)
training the model.. Fitting 3 folds for each of 6 candidates, totalling 18 fits
[Parallel(n_jobs=-1)]: Using backend LokyBackend with 4 concurrent workers. [Parallel(n_jobs=-1)]: Done 18 out of 18 | elapsed: 32.8s finished /usr/local/lib/python3.6/dist-packages/sklearn/svm/base.py:929: ConvergenceWarning: Liblinear failed to converge, increase the number of iterations.
Done training_time(HH:MM:SS.ms) - 0:00:40.865910 Predicting test data Done testing time(HH:MM:SS:ms) - 0:00:00.010694 --------------------- | Accuracy | --------------------- 0.9667458432304038 -------------------- | Confusion Matrix | -------------------- [[537 0 0 0 0 0] [ 2 430 56 0 0 3] [ 0 11 520 1 0 0] [ 0 0 0 496 0 0] [ 0 0 0 2 412 6] [ 0 0 0 17 0 454]]
------------------------- | Classifiction Report | ------------------------- precision recall f1-score support LAYING 1.00 1.00 1.00 537 SITTING 0.98 0.88 0.92 491 STANDING 0.90 0.98 0.94 532 WALKING 0.96 1.00 0.98 496 WALKING_DOWNSTAIRS 1.00 0.98 0.99 420 WALKING_UPSTAIRS 0.98 0.96 0.97 471 accuracy 0.97 2947 macro avg 0.97 0.97 0.97 2947 weighted avg 0.97 0.97 0.97 2947
print_grid_search_attributes(lr_svc_grid_results['model'])
-------------------------- | Best Estimator | -------------------------- LinearSVC(C=1, class_weight=None, dual=True, fit_intercept=True, intercept_scaling=1, loss='squared_hinge', max_iter=1000, multi_class='ovr', penalty='l2', random_state=None, tol=5e-05, verbose=0) -------------------------- | Best parameters | -------------------------- Parameters of best estimator : {'C': 1} --------------------------------- | No of CrossValidation sets | -------------------------------- Total numbre of cross validation sets: 3 -------------------------- | Best Score | -------------------------- Average Cross Validate scores of best estimator : 0.9457290533188248
from sklearn.svm import SVC
parameters = {'C':[2,8,16],\
'gamma': [ 0.0078125, 0.125, 2]}
rbf_svm = SVC(kernel='rbf')
rbf_svm_grid = GridSearchCV(rbf_svm,param_grid=parameters, n_jobs=-1)
rbf_svm_grid_results = perform_model(rbf_svm_grid, X_train, y_train, X_test, y_test, class_labels=labels)
training the model..
Done training_time(HH:MM:SS.ms) - 0:03:05.890991 Predicting test data Done testing time(HH:MM:SS:ms) - 0:00:02.659645 --------------------- | Accuracy | --------------------- 0.9626739056667798 -------------------- | Confusion Matrix | -------------------- [[537 0 0 0 0 0] [ 0 441 48 0 0 2] [ 0 12 520 0 0 0] [ 0 0 0 489 2 5] [ 0 0 0 4 397 19] [ 0 0 0 17 1 453]]
------------------------- | Classifiction Report | ------------------------- precision recall f1-score support LAYING 1.00 1.00 1.00 537 SITTING 0.97 0.90 0.93 491 STANDING 0.92 0.98 0.95 532 WALKING 0.96 0.99 0.97 496 WALKING_DOWNSTAIRS 0.99 0.95 0.97 420 WALKING_UPSTAIRS 0.95 0.96 0.95 471 accuracy 0.96 2947 macro avg 0.96 0.96 0.96 2947 weighted avg 0.96 0.96 0.96 2947
print_grid_search_attributes(rbf_svm_grid_results['model'])
-------------------------- | Best Estimator | -------------------------- SVC(C=16, cache_size=200, class_weight=None, coef0=0.0, decision_function_shape='ovr', degree=3, gamma=0.0078125, kernel='rbf', max_iter=-1, probability=False, random_state=None, shrinking=True, tol=0.001, verbose=False) -------------------------- | Best parameters | -------------------------- Parameters of best estimator : {'C': 16, 'gamma': 0.0078125} --------------------------------- | No of CrossValidation sets | -------------------------------- Total numbre of cross validation sets: 3 -------------------------- | Best Score | -------------------------- Average Cross Validate scores of best estimator : 0.9440968443960827
from sklearn.tree import DecisionTreeClassifier
parameters = {'max_depth':np.arange(3,10,2)}
dt = DecisionTreeClassifier()
dt_grid = GridSearchCV(dt,param_grid=parameters, n_jobs=-1)
dt_grid_results = perform_model(dt_grid, X_train, y_train, X_test, y_test, class_labels=labels)
print_grid_search_attributes(dt_grid_results['model'])
training the model..
Done training_time(HH:MM:SS.ms) - 0:00:08.058557 Predicting test data Done testing time(HH:MM:SS:ms) - 0:00:00.007041 --------------------- | Accuracy | --------------------- 0.8646080760095012 -------------------- | Confusion Matrix | -------------------- [[537 0 0 0 0 0] [ 0 386 105 0 0 0] [ 0 93 439 0 0 0] [ 0 0 0 470 18 8] [ 0 0 0 12 347 61] [ 0 0 0 73 29 369]]
------------------------- | Classifiction Report | ------------------------- precision recall f1-score support LAYING 1.00 1.00 1.00 537 SITTING 0.81 0.79 0.80 491 STANDING 0.81 0.83 0.82 532 WALKING 0.85 0.95 0.89 496 WALKING_DOWNSTAIRS 0.88 0.83 0.85 420 WALKING_UPSTAIRS 0.84 0.78 0.81 471 accuracy 0.86 2947 macro avg 0.86 0.86 0.86 2947 weighted avg 0.86 0.86 0.86 2947 -------------------------- | Best Estimator | -------------------------- DecisionTreeClassifier(class_weight=None, criterion='gini', max_depth=7, max_features=None, max_leaf_nodes=None, min_impurity_decrease=0.0, min_impurity_split=None, min_samples_leaf=1, min_samples_split=2, min_weight_fraction_leaf=0.0, presort=False, random_state=None, splitter='best') -------------------------- | Best parameters | -------------------------- Parameters of best estimator : {'max_depth': 7} --------------------------------- | No of CrossValidation sets | -------------------------------- Total numbre of cross validation sets: 3 -------------------------- | Best Score | -------------------------- Average Cross Validate scores of best estimator : 0.8490206746463548
from sklearn.ensemble import RandomForestClassifier
params = {'n_estimators': np.arange(10,201,20), 'max_depth':np.arange(3,15,2)}
rfc = RandomForestClassifier()
rfc_grid = GridSearchCV(rfc, param_grid=params, n_jobs=-1)
rfc_grid_results = perform_model(rfc_grid, X_train, y_train, X_test, y_test, class_labels=labels)
print_grid_search_attributes(rfc_grid_results['model'])
training the model..
Done training_time(HH:MM:SS.ms) - 0:04:56.534617 Predicting test data Done testing time(HH:MM:SS:ms) - 0:00:00.023134 --------------------- | Accuracy | --------------------- 0.9087207329487614 -------------------- | Confusion Matrix | -------------------- [[537 0 0 0 0 0] [ 0 419 72 0 0 0] [ 0 54 478 0 0 0] [ 0 0 0 483 11 2] [ 0 0 0 36 337 47] [ 0 0 0 41 6 424]]
------------------------- | Classifiction Report | ------------------------- precision recall f1-score support LAYING 1.00 1.00 1.00 537 SITTING 0.89 0.85 0.87 491 STANDING 0.87 0.90 0.88 532 WALKING 0.86 0.97 0.91 496 WALKING_DOWNSTAIRS 0.95 0.80 0.87 420 WALKING_UPSTAIRS 0.90 0.90 0.90 471 accuracy 0.91 2947 macro avg 0.91 0.90 0.91 2947 weighted avg 0.91 0.91 0.91 2947 -------------------------- | Best Estimator | -------------------------- RandomForestClassifier(bootstrap=True, class_weight=None, criterion='gini', max_depth=7, max_features='auto', max_leaf_nodes=None, min_impurity_decrease=0.0, min_impurity_split=None, min_samples_leaf=1, min_samples_split=2, min_weight_fraction_leaf=0.0, n_estimators=50, n_jobs=None, oob_score=False, random_state=None, verbose=0, warm_start=False) -------------------------- | Best parameters | -------------------------- Parameters of best estimator : {'max_depth': 7, 'n_estimators': 50} --------------------------------- | No of CrossValidation sets | -------------------------------- Total numbre of cross validation sets: 3 -------------------------- | Best Score | -------------------------- Average Cross Validate scores of best estimator : 0.9173014145810664
from sklearn.ensemble import GradientBoostingClassifier
param_grid = {'max_depth': np.arange(5,8,1), \
'n_estimators':np.arange(130,170,10)}
gbdt = GradientBoostingClassifier()
gbdt_grid = GridSearchCV(gbdt, param_grid=param_grid, n_jobs=-1)
gbdt_grid_results = perform_model(gbdt_grid, X_train, y_train, X_test, y_test, class_labels=labels)
print_grid_search_attributes(gbdt_grid_results['model'])
training the model..
Done training_time(HH:MM:SS.ms) - 0:43:35.410602 Predicting test data Done testing time(HH:MM:SS:ms) - 0:00:00.064529 --------------------- | Accuracy | --------------------- 0.9236511706820495 -------------------- | Confusion Matrix | -------------------- [[537 0 0 0 0 0] [ 0 397 93 0 0 1] [ 0 38 494 0 0 0] [ 0 0 0 483 6 7] [ 0 0 0 9 371 40] [ 0 1 0 25 5 440]]
------------------------- | Classifiction Report | ------------------------- precision recall f1-score support LAYING 1.00 1.00 1.00 537 SITTING 0.91 0.81 0.86 491 STANDING 0.84 0.93 0.88 532 WALKING 0.93 0.97 0.95 496 WALKING_DOWNSTAIRS 0.97 0.88 0.93 420 WALKING_UPSTAIRS 0.90 0.93 0.92 471 accuracy 0.92 2947 macro avg 0.93 0.92 0.92 2947 weighted avg 0.93 0.92 0.92 2947 -------------------------- | Best Estimator | -------------------------- GradientBoostingClassifier(criterion='friedman_mse', init=None, learning_rate=0.1, loss='deviance', max_depth=5, max_features=None, max_leaf_nodes=None, min_impurity_decrease=0.0, min_impurity_split=None, min_samples_leaf=1, min_samples_split=2, min_weight_fraction_leaf=0.0, n_estimators=130, n_iter_no_change=None, presort='auto', random_state=None, subsample=1.0, tol=0.0001, validation_fraction=0.1, verbose=0, warm_start=False) -------------------------- | Best parameters | -------------------------- Parameters of best estimator : {'max_depth': 5, 'n_estimators': 130} --------------------------------- | No of CrossValidation sets | -------------------------------- Total numbre of cross validation sets: 3 -------------------------- | Best Score | -------------------------- Average Cross Validate scores of best estimator : 0.904651795429815
print('\n Accuracy Error')
print(' ---------- --------')
print('Logistic Regression : {:.04}% {:.04}%'.format(log_reg_grid_results['accuracy'] * 100,\
100-(log_reg_grid_results['accuracy'] * 100)))
print('Linear SVC : {:.04}% {:.04}% '.format(lr_svc_grid_results['accuracy'] * 100,\
100-(lr_svc_grid_results['accuracy'] * 100)))
print('rbf SVM classifier : {:.04}% {:.04}% '.format(rbf_svm_grid_results['accuracy'] * 100,\
100-(rbf_svm_grid_results['accuracy'] * 100)))
print('DecisionTree : {:.04}% {:.04}% '.format(dt_grid_results['accuracy'] * 100,\
100-(dt_grid_results['accuracy'] * 100)))
print('Random Forest : {:.04}% {:.04}% '.format(rfc_grid_results['accuracy'] * 100,\
100-(rfc_grid_results['accuracy'] * 100)))
print('GradientBoosting DT  : {:.04}%       {:.04}% '.format(gbdt_grid_results['accuracy'] * 100,\
                                                        100-(gbdt_grid_results['accuracy'] * 100)))
 Accuracy     Error
 ----------   --------
Logistic Regression : 96.27%       3.733%
Linear SVC          : 96.61%       3.393%
rbf SVM classifier  : 96.27%       3.733%
DecisionTree        : 86.43%       13.57%
Random Forest       : 91.31%       8.687%
GradientBoosting DT : 92.37%       7.635%
Based on test accuracy, we can choose ___Logistic Regression___, ___Linear SVC___, or ___rbf SVM___: all three sit close to 96%, well ahead of the tree-based models.
In the real world, domain knowledge, EDA, and feature engineering matter most.
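As a quick sanity check, the winner can also be picked programmatically from the results dictionaries returned by perform_model. A minimal sketch, assuming each *_grid_results dict from the cells above is still in scope and stores its test accuracy under the 'accuracy' key:
# Rank the classical models by test accuracy and report the best one.
results = {
    'Logistic Regression': log_reg_grid_results,
    'Linear SVC': lr_svc_grid_results,
    'rbf SVM classifier': rbf_svm_grid_results,
    'DecisionTree': dt_grid_results,
    'Random Forest': rfc_grid_results,
    'GradientBoosting DT': gbdt_grid_results,
}
best_name, best_res = max(results.items(), key=lambda kv: kv[1]['accuracy'])
print(f"Best model: {best_name} ({best_res['accuracy']:.2%} test accuracy)")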
# Importing Libraries
import pandas as pd
import numpy as np
# Activities are the class labels
# It is a 6 class classification
ACTIVITIES = {
0: 'WALKING',
1: 'WALKING_UPSTAIRS',
2: 'WALKING_DOWNSTAIRS',
3: 'SITTING',
4: 'STANDING',
5: 'LAYING',
}
# Utility function to print the confusion matrix
def confusion_matrix(Y_true, Y_pred):
Y_true = pd.Series([ACTIVITIES[y] for y in np.argmax(Y_true, axis=1)])
Y_pred = pd.Series([ACTIVITIES[y] for y in np.argmax(Y_pred, axis=1)])
return pd.crosstab(Y_true, Y_pred, rownames=['True'], colnames=['Pred'])
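To see what the helper produces, here is a tiny illustrative call on two hypothetical one-hot rows (made-up values, not real predictions):
# Illustrative only: row 0 is WALKING predicted as WALKING,
# row 1 is SITTING predicted as STANDING.
_y_true = np.array([[1, 0, 0, 0, 0, 0], [0, 0, 0, 1, 0, 0]])
_y_pred = np.array([[1, 0, 0, 0, 0, 0], [0, 0, 0, 0, 1, 0]])
print(confusion_matrix(_y_true, _y_pred))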
# Data directory
DATADIR = '/content/drive/My Drive/Applied_AI/HAR/UCI_HAR_Dataset'
# Raw data signals
# Signals are from Accelerometer and Gyroscope
# The signals are in x,y,z directions
# Sensor signals are filtered to have only body acceleration
# excluding the acceleration due to gravity
# Triaxial acceleration from the accelerometer is total acceleration
SIGNALS = [
"body_acc_x",
"body_acc_y",
"body_acc_z",
"body_gyro_x",
"body_gyro_y",
"body_gyro_z",
"total_acc_x",
"total_acc_y",
"total_acc_z"
]
# Utility function to read the data from csv file
def _read_csv(filename):
return pd.read_csv(filename, delim_whitespace=True, header=None)
# Utility function to load the raw signal files for a given subset ('train' or 'test')
def load_signals(subset):
signals_data = []
for signal in SIGNALS:
        filename = f'{DATADIR}/{subset}/Inertial Signals/{signal}_{subset}.txt'
        signals_data.append(
            _read_csv(filename).values
)
# Transpose is used to change the dimensionality of the output,
# aggregating the signals by combination of sample/timestep.
# Resultant shape is (7352 train/2947 test samples, 128 timesteps, 9 signals)
return np.transpose(signals_data, (1, 2, 0))
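As a quick sanity check of the shape noted in the comment above (assuming the drive is mounted at the same path):
# Train signals should stack to (samples, timesteps, channels)
_X = load_signals('train')
print(_X.shape)  # expected: (7352, 128, 9)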
def load_y(subset):
"""
    The objective we are trying to predict is an integer from 1 to 6
    that represents a human activity. We return a binary representation
    of each sample's label as a 6-bit one-hot vector using One Hot Encoding
(https://pandas.pydata.org/pandas-docs/stable/generated/pandas.get_dummies.html)
"""
    filename = f'{DATADIR}/{subset}/y_{subset}.txt'
    y = _read_csv(filename)[0]
    return pd.get_dummies(y).values
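A tiny self-contained example of what pd.get_dummies does to the integer labels:
# Illustrative only: labels 1..6 one-hot encode to a 6x6 identity matrix
print(pd.get_dummies(pd.Series([1, 2, 3, 4, 5, 6])).values)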
def load_data():
"""
Obtain the dataset from multiple files.
Returns: X_train, X_test, y_train, y_test
"""
X_train, X_test = load_signals('train'), load_signals('test')
y_train, y_test = load_y('train'), load_y('test')
return X_train, X_test, y_train, y_test
# Importing tensorflow
np.random.seed(42)
import tensorflow as tf
tf.set_random_seed(42)
# Configuring a session
session_conf = tf.ConfigProto(
intra_op_parallelism_threads=1,
inter_op_parallelism_threads=1
)
# Import Keras
from keras import backend as K
sess = tf.Session(graph=tf.get_default_graph(), config=session_conf)
K.set_session(sess)
Using TensorFlow backend.
# Importing libraries
from keras.models import Sequential
from keras.layers import LSTM
from keras.layers.core import Dense, Dropout
# Initializing parameters
n_hidden = 32
epochs = 30
batch_size = 16
# Utility function to count the number of classes
def _count_classes(y):
return len(set([tuple(category) for category in y]))
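For instance, a one-hot identity matrix has six distinct rows:
# Six distinct one-hot rows -> 6 classes
print(_count_classes(np.eye(6)))  # 6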
# Loading the train and test data
X_train, X_test, Y_train, Y_test = load_data()
timesteps = len(X_train[0])
input_dim = len(X_train[0][0])
n_classes = _count_classes(Y_train)
print(timesteps)
print(input_dim)
print(len(X_train))
128 9 7352
# Initializing the sequential model
model = Sequential()
# Configuring the parameters
model.add(LSTM(64,return_sequences=True, input_shape=(timesteps, input_dim)))
# Adding a dropout layer
model.add(Dropout(0.7))
model.add(LSTM(32))
# Adding a dropout layer
model.add(Dropout(0.6))
# Adding a dense output layer with relu activation and He-normal initialization
model.add(Dense(n_classes, activation='relu',kernel_initializer='he_normal'))
model.summary()
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
lstm_20 (LSTM)               (None, 128, 64)           18944
_________________________________________________________________
dropout_18 (Dropout)         (None, 128, 64)           0
_________________________________________________________________
lstm_21 (LSTM)               (None, 32)                12416
_________________________________________________________________
dropout_19 (Dropout)         (None, 32)                0
_________________________________________________________________
dense_9 (Dense)              (None, 6)                 198
=================================================================
Total params: 31,558
Trainable params: 31,558
Non-trainable params: 0
_________________________________________________________________
# Compiling the model
model.compile(loss='categorical_crossentropy',
optimizer='adam',
metrics=['accuracy'])
# Training the model
model.fit(X_train,
Y_train,
batch_size=batch_size,
validation_data=(X_test, Y_test),
epochs=epochs)
Train on 7352 samples, validate on 2947 samples Epoch 1/30 7352/7352 [==============================] - 190s 26ms/step - loss: 1.9318 - acc: 0.3855 - val_loss: 1.4064 - val_acc: 0.4944 Epoch 2/30 7352/7352 [==============================] - 185s 25ms/step - loss: 1.3454 - acc: 0.4816 - val_loss: 1.2470 - val_acc: 0.5741 Epoch 3/30 7352/7352 [==============================] - 184s 25ms/step - loss: 1.2108 - acc: 0.5000 - val_loss: 1.2850 - val_acc: 0.5721 Epoch 4/30 7352/7352 [==============================] - 188s 26ms/step - loss: 1.1307 - acc: 0.5227 - val_loss: 1.2625 - val_acc: 0.4913 Epoch 5/30 7352/7352 [==============================] - 185s 25ms/step - loss: 1.0214 - acc: 0.5181 - val_loss: 1.2549 - val_acc: 0.5456 Epoch 6/30 7352/7352 [==============================] - 183s 25ms/step - loss: 1.0139 - acc: 0.5479 - val_loss: 1.1065 - val_acc: 0.6186 Epoch 7/30 7352/7352 [==============================] - 183s 25ms/step - loss: 0.9358 - acc: 0.5688 - val_loss: 1.1780 - val_acc: 0.6373 Epoch 8/30 7352/7352 [==============================] - 183s 25ms/step - loss: 1.4578 - acc: 0.5155 - val_loss: 0.8273 - val_acc: 0.6369 Epoch 9/30 7352/7352 [==============================] - 183s 25ms/step - loss: 0.8244 - acc: 0.5853 - val_loss: 0.8393 - val_acc: 0.6352 Epoch 10/30 7352/7352 [==============================] - 183s 25ms/step - loss: 0.8558 - acc: 0.6032 - val_loss: 0.8215 - val_acc: 0.6396 Epoch 11/30 7352/7352 [==============================] - 183s 25ms/step - loss: 0.8245 - acc: 0.6122 - val_loss: 1.2644 - val_acc: 0.5300 Epoch 12/30 7352/7352 [==============================] - 183s 25ms/step - loss: 1.0716 - acc: 0.5302 - val_loss: 1.5114 - val_acc: 0.4968 Epoch 13/30 7352/7352 [==============================] - 183s 25ms/step - loss: 0.9078 - acc: 0.5428 - val_loss: 1.1656 - val_acc: 0.5222 Epoch 14/30 7352/7352 [==============================] - 182s 25ms/step - loss: 0.9788 - acc: 0.5462 - val_loss: 1.0110 - val_acc: 0.5243 Epoch 15/30 7352/7352 [==============================] - 186s 25ms/step - loss: 0.8637 - acc: 0.5571 - val_loss: 0.9567 - val_acc: 0.5134 Epoch 16/30 7352/7352 [==============================] - 183s 25ms/step - loss: 0.8623 - acc: 0.5506 - val_loss: 0.8834 - val_acc: 0.5799 Epoch 17/30 7352/7352 [==============================] - 182s 25ms/step - loss: 0.8392 - acc: 0.5734 - val_loss: 0.8735 - val_acc: 0.5243 Epoch 18/30 7352/7352 [==============================] - 183s 25ms/step - loss: 0.8479 - acc: 0.5548 - val_loss: 0.9208 - val_acc: 0.5819 Epoch 19/30 7352/7352 [==============================] - 182s 25ms/step - loss: 0.7998 - acc: 0.6084 - val_loss: 0.7995 - val_acc: 0.6067 Epoch 20/30 7352/7352 [==============================] - 183s 25ms/step - loss: 0.8889 - acc: 0.6330 - val_loss: 1.0306 - val_acc: 0.6101 Epoch 21/30 7352/7352 [==============================] - 183s 25ms/step - loss: 0.9530 - acc: 0.6430 - val_loss: 0.8687 - val_acc: 0.6166 Epoch 22/30 7352/7352 [==============================] - 182s 25ms/step - loss: 0.7223 - acc: 0.6499 - val_loss: 0.7625 - val_acc: 0.6237 Epoch 23/30 7352/7352 [==============================] - 183s 25ms/step - loss: 0.6952 - acc: 0.6581 - val_loss: 0.7844 - val_acc: 0.6026 Epoch 24/30 7352/7352 [==============================] - 184s 25ms/step - loss: 0.6875 - acc: 0.6601 - val_loss: 1.2797 - val_acc: 0.5236 Epoch 25/30 7352/7352 [==============================] - 183s 25ms/step - loss: 2.3553 - acc: 0.5518 - val_loss: 2.0921 - val_acc: 0.5758 Epoch 26/30 7352/7352 [==============================] - 184s 
25ms/step - loss: 0.8806 - acc: 0.6575 - val_loss: 1.1252 - val_acc: 0.6115 Epoch 27/30 7352/7352 [==============================] - 182s 25ms/step - loss: 0.6667 - acc: 0.6696 - val_loss: 0.8390 - val_acc: 0.6043 Epoch 28/30 7352/7352 [==============================] - 183s 25ms/step - loss: 1.1439 - acc: 0.6427 - val_loss: 0.8186 - val_acc: 0.6664 Epoch 29/30 7352/7352 [==============================] - 183s 25ms/step - loss: 0.7800 - acc: 0.6547 - val_loss: 0.7032 - val_acc: 0.6963 Epoch 30/30 7352/7352 [==============================] - 185s 25ms/step - loss: 0.7069 - acc: 0.7103 - val_loss: 0.5875 - val_acc: 0.7693
<keras.callbacks.History at 0x7fa1ed1c0f98>
# Confusion Matrix
print(confusion_matrix(Y_test, model.predict(X_test)))
Pred LAYING SITTING ... WALKING_DOWNSTAIRS WALKING_UPSTAIRS True ... LAYING 510 0 ... 0 27 SITTING 0 405 ... 0 11 STANDING 0 100 ... 0 6 WALKING 0 0 ... 40 2 WALKING_DOWNSTAIRS 0 0 ... 402 9 WALKING_UPSTAIRS 0 0 ... 9 450 [6 rows x 6 columns]
score = model.evaluate(X_test, Y_test)
2947/2947 [==============================] - 6s 2ms/step
score
[0.4080225666194856, 0.8971835765184933]
# Initializing the sequential model
model = Sequential()
# Configuring the parameters
model.add(LSTM(64,return_sequences=True, input_shape=(timesteps, input_dim)))
# Adding a second LSTM layer
model.add(LSTM(32))
# Adding a dropout layer
model.add(Dropout(0.6))
# Adding a dense output layer with relu activation and He-normal initialization
model.add(Dense(n_classes, activation='relu',kernel_initializer='he_normal'))
model.summary()
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
lstm_1 (LSTM)                (None, 128, 64)           18944
_________________________________________________________________
lstm_2 (LSTM)                (None, 32)                12416
_________________________________________________________________
dropout_36 (Dropout)         (None, 32)                0
_________________________________________________________________
dense_27 (Dense)             (None, 6)                 198
=================================================================
Total params: 31,558
Trainable params: 31,558
Non-trainable params: 0
_________________________________________________________________
# Compiling the model
model.compile(loss='binary_crossentropy',
optimizer='adam',
metrics=['accuracy'])
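Note: this model (and every CNN model later in the notebook) is compiled with loss='binary_crossentropy' even though the six activities are mutually exclusive. In this Keras version, pairing binary cross-entropy with metrics=['accuracy'] silently selects binary accuracy averaged over the six output units, so the reported accuracies are not directly comparable with the categorical-cross-entropy models. The conventional compile for a one-hot multi-class target would be the line below; the results shown here were produced with the binary loss as written.
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])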
# Training the model
model.fit(X_train,
Y_train,
batch_size=batch_size,
validation_data=(X_test, Y_test),
epochs=epochs)
Train on 7352 samples, validate on 2947 samples Epoch 1/30 7352/7352 [==============================] - 180s 24ms/step - loss: 0.4943 - acc: 0.8406 - val_loss: 0.3551 - val_acc: 0.8364 Epoch 2/30 7352/7352 [==============================] - 176s 24ms/step - loss: 0.3577 - acc: 0.8427 - val_loss: 0.3363 - val_acc: 0.8431 Epoch 3/30 7352/7352 [==============================] - 178s 24ms/step - loss: 0.3383 - acc: 0.8444 - val_loss: 0.3363 - val_acc: 0.8357 Epoch 4/30 7352/7352 [==============================] - 175s 24ms/step - loss: 0.3071 - acc: 0.8468 - val_loss: 0.2375 - val_acc: 0.8642 Epoch 5/30 7352/7352 [==============================] - 177s 24ms/step - loss: 0.2397 - acc: 0.8590 - val_loss: 0.2697 - val_acc: 0.8610 Epoch 6/30 7352/7352 [==============================] - 177s 24ms/step - loss: 0.2621 - acc: 0.8618 - val_loss: 0.2275 - val_acc: 0.8693 Epoch 7/30 7352/7352 [==============================] - 180s 24ms/step - loss: 0.2386 - acc: 0.8675 - val_loss: 0.2763 - val_acc: 0.8725 Epoch 8/30 7352/7352 [==============================] - 175s 24ms/step - loss: 0.2397 - acc: 0.8828 - val_loss: 0.2267 - val_acc: 0.8936 Epoch 9/30 7352/7352 [==============================] - 176s 24ms/step - loss: 0.2206 - acc: 0.8883 - val_loss: 0.3420 - val_acc: 0.8773 Epoch 10/30 7352/7352 [==============================] - 177s 24ms/step - loss: 0.1818 - acc: 0.8880 - val_loss: 0.4045 - val_acc: 0.8670 Epoch 11/30 7352/7352 [==============================] - 178s 24ms/step - loss: 0.2947 - acc: 0.8561 - val_loss: 0.2507 - val_acc: 0.8625 Epoch 12/30 7352/7352 [==============================] - 174s 24ms/step - loss: 0.2215 - acc: 0.8664 - val_loss: 0.2415 - val_acc: 0.8690 Epoch 13/30 7352/7352 [==============================] - 179s 24ms/step - loss: 0.2737 - acc: 0.8629 - val_loss: 0.2280 - val_acc: 0.8574 Epoch 14/30 7352/7352 [==============================] - 176s 24ms/step - loss: 0.3198 - acc: 0.8573 - val_loss: 0.2822 - val_acc: 0.8564 Epoch 15/30 7352/7352 [==============================] - 178s 24ms/step - loss: 0.5596 - acc: 0.8264 - val_loss: 0.3518 - val_acc: 0.8332 Epoch 16/30 7352/7352 [==============================] - 177s 24ms/step - loss: 0.3421 - acc: 0.8355 - val_loss: 0.3127 - val_acc: 0.8345 Epoch 17/30 7352/7352 [==============================] - 183s 25ms/step - loss: 0.2650 - acc: 0.8427 - val_loss: 0.2641 - val_acc: 0.8414 Epoch 18/30 7352/7352 [==============================] - 184s 25ms/step - loss: 0.2356 - acc: 0.8580 - val_loss: 0.5635 - val_acc: 0.8315 Epoch 19/30 7352/7352 [==============================] - 182s 25ms/step - loss: 0.3286 - acc: 0.8502 - val_loss: 0.2159 - val_acc: 0.8630 Epoch 20/30 7352/7352 [==============================] - 180s 25ms/step - loss: 0.2270 - acc: 0.8588 - val_loss: 0.2794 - val_acc: 0.8786 Epoch 21/30 7352/7352 [==============================] - 181s 25ms/step - loss: 0.2117 - acc: 0.8726 - val_loss: 0.2202 - val_acc: 0.8914 Epoch 22/30 7352/7352 [==============================] - 180s 24ms/step - loss: 0.2324 - acc: 0.8717 - val_loss: 0.2098 - val_acc: 0.8664 Epoch 23/30 7352/7352 [==============================] - 180s 24ms/step - loss: 0.2368 - acc: 0.8809 - val_loss: 0.2543 - val_acc: 0.8907 Epoch 24/30 7352/7352 [==============================] - 179s 24ms/step - loss: 0.4671 - acc: 0.8730 - val_loss: 0.2831 - val_acc: 0.8537 Epoch 25/30 7352/7352 [==============================] - 180s 25ms/step - loss: 0.2117 - acc: 0.8715 - val_loss: 0.2318 - val_acc: 0.9118 Epoch 26/30 7352/7352 [==============================] - 182s 
25ms/step - loss: 0.1947 - acc: 0.8767 - val_loss: 0.1863 - val_acc: 0.9035 Epoch 27/30 7352/7352 [==============================] - 183s 25ms/step - loss: 0.1722 - acc: 0.8918 - val_loss: 0.2024 - val_acc: 0.9125 Epoch 28/30 7352/7352 [==============================] - 179s 24ms/step - loss: 0.2336 - acc: 0.8926 - val_loss: 0.1891 - val_acc: 0.9023 Epoch 29/30 7352/7352 [==============================] - 179s 24ms/step - loss: 0.2082 - acc: 0.8918 - val_loss: 0.2011 - val_acc: 0.9069 Epoch 30/30 7352/7352 [==============================] - 180s 25ms/step - loss: 0.2575 - acc: 0.8773 - val_loss: 0.2742 - val_acc: 0.8418
<keras.callbacks.History at 0x7f92099e1048>
# Confusion Matrix
print(confusion_matrix(Y_test, model.predict(X_test)))
Pred LAYING SITTING ... WALKING_DOWNSTAIRS WALKING_UPSTAIRS True ... LAYING 510 0 ... 0 26 SITTING 0 214 ... 1 0 STANDING 0 23 ... 0 0 WALKING 0 69 ... 8 0 WALKING_DOWNSTAIRS 0 9 ... 360 0 WALKING_UPSTAIRS 0 20 ... 17 4 [6 rows x 6 columns]
score = model.evaluate(X_test, Y_test)
2947/2947 [==============================] - 12s 4ms/step
score
[0.27423708396197416, 0.8418165167226684]
# Initializing the sequential model
model = Sequential()
# Configuring the parameters
model.add(LSTM(n_hidden, input_shape=(timesteps, input_dim)))
# Adding a dropout layer
model.add(Dropout(0.5))
# Adding a dense output layer with sigmoid activation
model.add(Dense(n_classes, activation='sigmoid'))
model.summary()
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
lstm_3 (LSTM)                (None, 32)                5376
_________________________________________________________________
dropout_3 (Dropout)          (None, 32)                0
_________________________________________________________________
dense_3 (Dense)              (None, 6)                 198
=================================================================
Total params: 5,574
Trainable params: 5,574
Non-trainable params: 0
_________________________________________________________________
# Compiling the model
model.compile(loss='categorical_crossentropy',
optimizer='rmsprop',
metrics=['accuracy'])
# Training the model
model.fit(X_train,
Y_train,
batch_size=batch_size,
validation_data=(X_test, Y_test),
epochs=epochs)
Train on 7352 samples, validate on 2947 samples Epoch 1/30 7352/7352 [==============================] - 92s 13ms/step - loss: 1.3018 - acc: 0.4395 - val_loss: 1.1254 - val_acc: 0.4662 Epoch 2/30 7352/7352 [==============================] - 94s 13ms/step - loss: 0.9666 - acc: 0.5880 - val_loss: 0.9491 - val_acc: 0.5714 Epoch 3/30 7352/7352 [==============================] - 97s 13ms/step - loss: 0.7812 - acc: 0.6408 - val_loss: 0.8286 - val_acc: 0.5850 Epoch 4/30 7352/7352 [==============================] - 95s 13ms/step - loss: 0.6941 - acc: 0.6574 - val_loss: 0.7297 - val_acc: 0.6128 Epoch 5/30 7352/7352 [==============================] - 92s 13ms/step - loss: 0.6336 - acc: 0.6912 - val_loss: 0.7359 - val_acc: 0.6787 Epoch 6/30 7352/7352 [==============================] - 94s 13ms/step - loss: 0.5859 - acc: 0.7134 - val_loss: 0.7015 - val_acc: 0.6939 Epoch 7/30 7352/7352 [==============================] - 95s 13ms/step - loss: 0.5692 - acc: 0.7477 - val_loss: 0.5995 - val_acc: 0.7387 Epoch 8/30 7352/7352 [==============================] - 96s 13ms/step - loss: 0.4899 - acc: 0.7809 - val_loss: 0.5762 - val_acc: 0.7387 Epoch 9/30 7352/7352 [==============================] - 90s 12ms/step - loss: 0.4482 - acc: 0.7886 - val_loss: 0.7413 - val_acc: 0.7126 Epoch 10/30 7352/7352 [==============================] - 90s 12ms/step - loss: 0.4132 - acc: 0.8077 - val_loss: 0.5048 - val_acc: 0.7513 Epoch 11/30 7352/7352 [==============================] - 89s 12ms/step - loss: 0.3985 - acc: 0.8274 - val_loss: 0.5234 - val_acc: 0.7452 Epoch 12/30 7352/7352 [==============================] - 91s 12ms/step - loss: 0.3378 - acc: 0.8638 - val_loss: 0.4114 - val_acc: 0.8833 Epoch 13/30 7352/7352 [==============================] - 91s 12ms/step - loss: 0.2947 - acc: 0.9051 - val_loss: 0.4386 - val_acc: 0.8731 Epoch 14/30 7352/7352 [==============================] - 90s 12ms/step - loss: 0.2448 - acc: 0.9291 - val_loss: 0.3768 - val_acc: 0.8921 Epoch 15/30 7352/7352 [==============================] - 91s 12ms/step - loss: 0.2157 - acc: 0.9331 - val_loss: 0.4441 - val_acc: 0.8931 Epoch 16/30 7352/7352 [==============================] - 90s 12ms/step - loss: 0.2053 - acc: 0.9366 - val_loss: 0.4162 - val_acc: 0.8968 Epoch 17/30 7352/7352 [==============================] - 89s 12ms/step - loss: 0.2028 - acc: 0.9404 - val_loss: 0.4538 - val_acc: 0.8962 Epoch 18/30 7352/7352 [==============================] - 93s 13ms/step - loss: 0.1911 - acc: 0.9419 - val_loss: 0.3964 - val_acc: 0.8999 Epoch 19/30 7352/7352 [==============================] - 96s 13ms/step - loss: 0.1912 - acc: 0.9407 - val_loss: 0.3165 - val_acc: 0.9030 Epoch 20/30 7352/7352 [==============================] - 96s 13ms/step - loss: 0.1732 - acc: 0.9446 - val_loss: 0.4546 - val_acc: 0.8904 Epoch 21/30 7352/7352 [==============================] - 94s 13ms/step - loss: 0.1782 - acc: 0.9444 - val_loss: 0.3346 - val_acc: 0.9063 Epoch 22/30 7352/7352 [==============================] - 95s 13ms/step - loss: 0.1812 - acc: 0.9418 - val_loss: 0.8164 - val_acc: 0.8582 Epoch 23/30 7352/7352 [==============================] - 95s 13ms/step - loss: 0.1824 - acc: 0.9426 - val_loss: 0.4240 - val_acc: 0.9036 Epoch 24/30 7352/7352 [==============================] - 94s 13ms/step - loss: 0.1726 - acc: 0.9429 - val_loss: 0.4067 - val_acc: 0.9148 Epoch 25/30 7352/7352 [==============================] - 96s 13ms/step - loss: 0.1737 - acc: 0.9411 - val_loss: 0.3396 - val_acc: 0.9074 Epoch 26/30 7352/7352 [==============================] - 96s 13ms/step - loss: 0.1650 - 
acc: 0.9461 - val_loss: 0.3806 - val_acc: 0.9019 Epoch 27/30 7352/7352 [==============================] - 89s 12ms/step - loss: 0.1925 - acc: 0.9415 - val_loss: 0.6464 - val_acc: 0.8850 Epoch 28/30 7352/7352 [==============================] - 91s 12ms/step - loss: 0.1965 - acc: 0.9425 - val_loss: 0.3363 - val_acc: 0.9203 Epoch 29/30 7352/7352 [==============================] - 92s 12ms/step - loss: 0.1889 - acc: 0.9431 - val_loss: 0.3737 - val_acc: 0.9158 Epoch 30/30 7352/7352 [==============================] - 95s 13ms/step - loss: 0.1945 - acc: 0.9414 - val_loss: 0.3088 - val_acc: 0.9097
<keras.callbacks.History at 0x29b5ee36a20>
# Confusion Matrix
print(confusion_matrix(Y_test, model.predict(X_test)))
Pred                LAYING  SITTING  STANDING  WALKING  WALKING_DOWNSTAIRS  WALKING_UPSTAIRS
True
LAYING                 512        0        25        0                   0                 0
SITTING                  3      410        75        0                   0                 3
STANDING                 0       87       445        0                   0                 0
WALKING                  0        0         0      481                   2                13
WALKING_DOWNSTAIRS       0        0         0        0                 382                38
WALKING_UPSTAIRS         0        0         0        2                  18               451
score = model.evaluate(X_test, Y_test)
2947/2947 [==============================] - 4s 2ms/step
score
[0.3087582236972612, 0.9097387173396675]
from prettytable import PrettyTable
x = PrettyTable()
x.field_names = ["Model no.", "Architecture", "Test Loss (Cross-Entropy)", "Test Accuracy"]
x.add_row([1, "1 LSTM Layer - RMSProp, Sigmoid", 0.302, 0.909])
x.add_row([2, "2 LSTM Layers - Adam, Relu", 0.274, 0.84])
x.add_row([3, "2 LSTM Layers - Adam, Relu, He-Normal Weights", 0.408, 0.8972])
print(x)
+-----------+-----------------------------------------------+---------------------------+---------------+
| Model no. |                  Architecture                 | Test Loss (Cross-Entropy) | Test Accuracy |
+-----------+-----------------------------------------------+---------------------------+---------------+
|     1     |        1 LSTM Layer - RMSProp, Sigmoid        |           0.302           |     0.909     |
|     2     |           2 LSTM Layers - Adam, Relu          |           0.274           |      0.84     |
|     3     | 2 LSTM Layers - Adam, Relu, He-Normal Weights |           0.408           |     0.8972    |
+-----------+-----------------------------------------------+---------------------------+---------------+
# Importing libraries for the CNN models
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from keras.models import Sequential
from keras.layers import Dense, Dropout, Flatten
from keras.layers import Conv2D, MaxPooling2D
from keras.layers.normalization import BatchNormalization
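The plotting cells below call a plt_dynamic helper that is not defined anywhere in this part of the notebook. A minimal sketch, assuming it simply overlays validation and training loss per epoch on the supplied axis (the exact original helper may differ):
# Hypothetical reconstruction of the undefined plt_dynamic helper:
# draws the validation and training loss curves on the given axis.
def plt_dynamic(x, vy, ty, ax):
    ax.plot(x, vy, 'b', label='Validation Loss')
    ax.plot(x, ty, 'r', label='Train Loss')
    ax.legend()
    ax.grid(True)
    plt.show()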
## Reshaping the input for CNN models
input_shape = (timesteps, input_dim, 1)
X_train = X_train.reshape(X_train.shape[0], timesteps, input_dim, 1)
X_test = X_test.reshape(X_test.shape[0], timesteps, input_dim, 1)
model = Sequential()
model.add(Conv2D(64, kernel_size=(5, 5),activation='relu', padding='same', input_shape=input_shape))
model.add(Conv2D(32, (3, 3), activation='relu', padding='same'))
model.add(Conv2D(32, (3, 3), activation='relu', padding='same'))
model.add(Flatten())
model.add(Dense(100, activation='relu'))
model.add(BatchNormalization())
model.add(Dropout(0.5))
model.add(Dense(n_classes, activation='softmax'))
model.compile(loss="binary_crossentropy", optimizer="adam", metrics=['accuracy'])
model.summary()
history = model.fit(X_train, Y_train, batch_size=256, epochs=30, verbose=1, validation_data=(X_test, Y_test))
score = model.evaluate(X_test, Y_test, verbose=0)
print('Test loss:', score[0])
print('Test accuracy:', score[1])
fig,ax = plt.subplots(1,1)
ax.set_xlabel('epoch') ; ax.set_ylabel('Crossentropy Loss')
x = list(range(1,epochs+1))
vy = history.history['val_loss']
ty = history.history['loss']
plt_dynamic(x, vy, ty, ax)
_________________________________________________________________ Layer (type) Output Shape Param # ================================================================= conv2d_59 (Conv2D) (None, 128, 9, 64) 1664 _________________________________________________________________ conv2d_60 (Conv2D) (None, 128, 9, 32) 18464 _________________________________________________________________ conv2d_61 (Conv2D) (None, 128, 9, 32) 9248 _________________________________________________________________ flatten_13 (Flatten) (None, 36864) 0 _________________________________________________________________ dense_25 (Dense) (None, 100) 3686500 _________________________________________________________________ batch_normalization_28 (Batc (None, 100) 400 _________________________________________________________________ dropout_35 (Dropout) (None, 100) 0 _________________________________________________________________ dense_26 (Dense) (None, 6) 606 ================================================================= Total params: 3,716,882 Trainable params: 3,716,682 Non-trainable params: 200 _________________________________________________________________ Train on 7352 samples, validate on 2947 samples Epoch 1/30 7352/7352 [==============================] - 5s 693us/step - loss: 0.1642 - acc: 0.9313 - val_loss: 0.1904 - val_acc: 0.9250 Epoch 2/30 7352/7352 [==============================] - 1s 143us/step - loss: 0.0622 - acc: 0.9774 - val_loss: 0.1402 - val_acc: 0.9591 Epoch 3/30 7352/7352 [==============================] - 1s 142us/step - loss: 0.0481 - acc: 0.9815 - val_loss: 0.1578 - val_acc: 0.9538 Epoch 4/30 7352/7352 [==============================] - 1s 142us/step - loss: 0.0423 - acc: 0.9835 - val_loss: 0.0846 - val_acc: 0.9677 Epoch 5/30 7352/7352 [==============================] - 1s 143us/step - loss: 0.0395 - acc: 0.9837 - val_loss: 0.1183 - val_acc: 0.9557 Epoch 6/30 7352/7352 [==============================] - 1s 142us/step - loss: 0.0354 - acc: 0.9851 - val_loss: 0.0859 - val_acc: 0.9687 Epoch 7/30 7352/7352 [==============================] - 1s 142us/step - loss: 0.0353 - acc: 0.9860 - val_loss: 0.0998 - val_acc: 0.9621 Epoch 8/30 7352/7352 [==============================] - 1s 142us/step - loss: 0.0334 - acc: 0.9869 - val_loss: 0.0866 - val_acc: 0.9661 Epoch 9/30 7352/7352 [==============================] - 1s 143us/step - loss: 0.0310 - acc: 0.9878 - val_loss: 0.1239 - val_acc: 0.9545 Epoch 10/30 7352/7352 [==============================] - 1s 143us/step - loss: 0.0302 - acc: 0.9886 - val_loss: 0.0906 - val_acc: 0.9696 Epoch 11/30 7352/7352 [==============================] - 1s 143us/step - loss: 0.0263 - acc: 0.9891 - val_loss: 0.0784 - val_acc: 0.9747 Epoch 12/30 7352/7352 [==============================] - 1s 142us/step - loss: 0.0240 - acc: 0.9906 - val_loss: 0.0735 - val_acc: 0.9751 Epoch 13/30 7352/7352 [==============================] - 1s 143us/step - loss: 0.0250 - acc: 0.9901 - val_loss: 0.0782 - val_acc: 0.9695 Epoch 14/30 7352/7352 [==============================] - 1s 144us/step - loss: 0.0181 - acc: 0.9936 - val_loss: 0.0730 - val_acc: 0.9747 Epoch 15/30 7352/7352 [==============================] - 1s 143us/step - loss: 0.0195 - acc: 0.9925 - val_loss: 0.0891 - val_acc: 0.9713 Epoch 16/30 7352/7352 [==============================] - 1s 143us/step - loss: 0.0168 - acc: 0.9942 - val_loss: 0.1130 - val_acc: 0.9573 Epoch 17/30 7352/7352 [==============================] - 1s 143us/step - loss: 0.0149 - acc: 0.9948 - val_loss: 0.0812 - val_acc: 0.9744 Epoch 18/30 7352/7352 
[==============================] - 1s 143us/step - loss: 0.0147 - acc: 0.9952 - val_loss: 0.0979 - val_acc: 0.9732 Epoch 19/30 7352/7352 [==============================] - 1s 144us/step - loss: 0.0152 - acc: 0.9942 - val_loss: 0.1003 - val_acc: 0.9745 Epoch 20/30 7352/7352 [==============================] - 1s 144us/step - loss: 0.0118 - acc: 0.9959 - val_loss: 0.1071 - val_acc: 0.9729 Epoch 21/30 7352/7352 [==============================] - 1s 144us/step - loss: 0.0142 - acc: 0.9946 - val_loss: 0.1345 - val_acc: 0.9700 Epoch 22/30 7352/7352 [==============================] - 1s 147us/step - loss: 0.0192 - acc: 0.9933 - val_loss: 0.1227 - val_acc: 0.9653 Epoch 23/30 7352/7352 [==============================] - 1s 146us/step - loss: 0.0169 - acc: 0.9942 - val_loss: 0.0790 - val_acc: 0.9766 Epoch 24/30 7352/7352 [==============================] - 1s 146us/step - loss: 0.0124 - acc: 0.9959 - val_loss: 0.0938 - val_acc: 0.9756 Epoch 25/30 7352/7352 [==============================] - 1s 147us/step - loss: 0.0127 - acc: 0.9952 - val_loss: 0.0909 - val_acc: 0.9717 Epoch 26/30 7352/7352 [==============================] - 1s 146us/step - loss: 0.0107 - acc: 0.9959 - val_loss: 0.0839 - val_acc: 0.9721 Epoch 27/30 7352/7352 [==============================] - 1s 148us/step - loss: 0.0097 - acc: 0.9966 - val_loss: 0.1044 - val_acc: 0.9736 Epoch 28/30 7352/7352 [==============================] - 1s 149us/step - loss: 0.0121 - acc: 0.9956 - val_loss: 0.1200 - val_acc: 0.9745 Epoch 29/30 7352/7352 [==============================] - 1s 146us/step - loss: 0.0092 - acc: 0.9971 - val_loss: 0.1321 - val_acc: 0.9734 Epoch 30/30 7352/7352 [==============================] - 1s 149us/step - loss: 0.0094 - acc: 0.9961 - val_loss: 0.1106 - val_acc: 0.9741 Test loss: 0.11055568987172415 Test accuracy: 0.974097960271146
model = Sequential()
model.add(Conv2D(128, kernel_size=(5, 5),activation='relu', padding='same', input_shape=input_shape))
model.add(MaxPooling2D(pool_size=(2, 2), strides = 2))
model.add(Dropout(0.5))
model.add(Conv2D(64, (3, 3), activation='relu', padding='same'))
model.add(MaxPooling2D(pool_size=(2, 2), strides = 2))
model.add(Conv2D(64, (3, 3), activation='relu', padding='same'))
model.add(MaxPooling2D(pool_size=(2, 2), strides = 2))
model.add(Flatten())
model.add(Dense(100, activation='relu'))
model.add(BatchNormalization())
model.add(Dropout(0.5))
model.add(Dense(n_classes, activation='softmax'))
model.compile(loss="binary_crossentropy", optimizer="adam", metrics=['accuracy'])
model.summary()
history = model.fit(X_train, Y_train, batch_size=256, epochs=30, verbose=1, validation_data=(X_test, Y_test))
score = model.evaluate(X_test, Y_test, verbose=0)
print('Test loss:', score[0])
print('Test accuracy:', score[1])
fig,ax = plt.subplots(1,1)
ax.set_xlabel('epoch') ; ax.set_ylabel('Crossentropy Loss')
x = list(range(1,epochs+1))
vy = history.history['val_loss']
ty = history.history['loss']
plt_dynamic(x, vy, ty, ax)
_________________________________________________________________ Layer (type) Output Shape Param # ================================================================= conv2d_56 (Conv2D) (None, 128, 9, 128) 3328 _________________________________________________________________ max_pooling2d_30 (MaxPooling (None, 64, 4, 128) 0 _________________________________________________________________ dropout_33 (Dropout) (None, 64, 4, 128) 0 _________________________________________________________________ conv2d_57 (Conv2D) (None, 64, 4, 64) 73792 _________________________________________________________________ max_pooling2d_31 (MaxPooling (None, 32, 2, 64) 0 _________________________________________________________________ conv2d_58 (Conv2D) (None, 32, 2, 64) 36928 _________________________________________________________________ max_pooling2d_32 (MaxPooling (None, 16, 1, 64) 0 _________________________________________________________________ flatten_12 (Flatten) (None, 1024) 0 _________________________________________________________________ dense_23 (Dense) (None, 100) 102500 _________________________________________________________________ batch_normalization_27 (Batc (None, 100) 400 _________________________________________________________________ dropout_34 (Dropout) (None, 100) 0 _________________________________________________________________ dense_24 (Dense) (None, 6) 606 ================================================================= Total params: 217,554 Trainable params: 217,354 Non-trainable params: 200 _________________________________________________________________ Train on 7352 samples, validate on 2947 samples Epoch 1/30 7352/7352 [==============================] - 5s 690us/step - loss: 0.2510 - acc: 0.8868 - val_loss: 0.2693 - val_acc: 0.8858 Epoch 2/30 7352/7352 [==============================] - 1s 128us/step - loss: 0.1361 - acc: 0.9439 - val_loss: 0.1823 - val_acc: 0.9364 Epoch 3/30 7352/7352 [==============================] - 1s 128us/step - loss: 0.0849 - acc: 0.9694 - val_loss: 0.2915 - val_acc: 0.8919 Epoch 4/30 7352/7352 [==============================] - 1s 129us/step - loss: 0.0618 - acc: 0.9784 - val_loss: 0.1479 - val_acc: 0.9392 Epoch 5/30 7352/7352 [==============================] - 1s 129us/step - loss: 0.0520 - acc: 0.9808 - val_loss: 0.1286 - val_acc: 0.9464 Epoch 6/30 7352/7352 [==============================] - 1s 127us/step - loss: 0.0466 - acc: 0.9824 - val_loss: 0.2496 - val_acc: 0.9152 Epoch 7/30 7352/7352 [==============================] - 1s 126us/step - loss: 0.0449 - acc: 0.9832 - val_loss: 0.0910 - val_acc: 0.9679 Epoch 8/30 7352/7352 [==============================] - 1s 128us/step - loss: 0.0434 - acc: 0.9835 - val_loss: 0.0681 - val_acc: 0.9706 Epoch 9/30 7352/7352 [==============================] - 1s 129us/step - loss: 0.0426 - acc: 0.9831 - val_loss: 0.0964 - val_acc: 0.9597 Epoch 10/30 7352/7352 [==============================] - 1s 128us/step - loss: 0.0407 - acc: 0.9832 - val_loss: 0.0815 - val_acc: 0.9693 Epoch 11/30 7352/7352 [==============================] - 1s 127us/step - loss: 0.0396 - acc: 0.9842 - val_loss: 0.1159 - val_acc: 0.9608 Epoch 12/30 7352/7352 [==============================] - 1s 129us/step - loss: 0.0374 - acc: 0.9849 - val_loss: 0.1251 - val_acc: 0.9479 Epoch 13/30 7352/7352 [==============================] - 1s 129us/step - loss: 0.0372 - acc: 0.9840 - val_loss: 0.1108 - val_acc: 0.9529 Epoch 14/30 7352/7352 [==============================] - 1s 127us/step - loss: 0.0383 - acc: 0.9841 - val_loss: 0.0737 - val_acc: 
0.9725 Epoch 15/30 7352/7352 [==============================] - 1s 129us/step - loss: 0.0361 - acc: 0.9850 - val_loss: 0.1353 - val_acc: 0.9495 Epoch 16/30 7352/7352 [==============================] - 1s 129us/step - loss: 0.0332 - acc: 0.9870 - val_loss: 0.0821 - val_acc: 0.9627 Epoch 17/30 7352/7352 [==============================] - 1s 127us/step - loss: 0.0328 - acc: 0.9863 - val_loss: 0.0908 - val_acc: 0.9641 Epoch 18/30 7352/7352 [==============================] - 1s 130us/step - loss: 0.0313 - acc: 0.9872 - val_loss: 0.0666 - val_acc: 0.9764 Epoch 19/30 7352/7352 [==============================] - 1s 129us/step - loss: 0.0336 - acc: 0.9856 - val_loss: 0.0698 - val_acc: 0.9746 Epoch 20/30 7352/7352 [==============================] - 1s 129us/step - loss: 0.0343 - acc: 0.9865 - val_loss: 0.0699 - val_acc: 0.9754 Epoch 21/30 7352/7352 [==============================] - 1s 128us/step - loss: 0.0291 - acc: 0.9884 - val_loss: 0.0609 - val_acc: 0.9783 Epoch 22/30 7352/7352 [==============================] - 1s 126us/step - loss: 0.0289 - acc: 0.9886 - val_loss: 0.0630 - val_acc: 0.9738 Epoch 23/30 7352/7352 [==============================] - 1s 129us/step - loss: 0.0291 - acc: 0.9881 - val_loss: 0.0857 - val_acc: 0.9647 Epoch 24/30 7352/7352 [==============================] - 1s 128us/step - loss: 0.0282 - acc: 0.9885 - val_loss: 0.1368 - val_acc: 0.9478 Epoch 25/30 7352/7352 [==============================] - 1s 127us/step - loss: 0.0260 - acc: 0.9893 - val_loss: 0.1554 - val_acc: 0.9466 Epoch 26/30 7352/7352 [==============================] - 1s 126us/step - loss: 0.0268 - acc: 0.9899 - val_loss: 0.0720 - val_acc: 0.9695 Epoch 27/30 7352/7352 [==============================] - 1s 130us/step - loss: 0.0260 - acc: 0.9892 - val_loss: 0.0866 - val_acc: 0.9643 Epoch 28/30 7352/7352 [==============================] - 1s 126us/step - loss: 0.0225 - acc: 0.9909 - val_loss: 0.1325 - val_acc: 0.9512 Epoch 29/30 7352/7352 [==============================] - 1s 130us/step - loss: 0.0235 - acc: 0.9905 - val_loss: 0.0696 - val_acc: 0.9748 Epoch 30/30 7352/7352 [==============================] - 1s 128us/step - loss: 0.0245 - acc: 0.9903 - val_loss: 0.0716 - val_acc: 0.9777 Test loss: 0.07162463552604258 Test accuracy: 0.9776609062865341
model = Sequential()
model.add(Conv2D(128, kernel_size=(5, 5),activation='relu', padding='same', input_shape=input_shape))
model.add(MaxPooling2D(pool_size=(2, 2), strides = 2))
model.add(Conv2D(64, (5, 5), activation='relu', padding='same'))
model.add(MaxPooling2D(pool_size=(2, 2), strides = 2))
model.add(Conv2D(64, (5, 5), activation='relu', padding='same'))
model.add(MaxPooling2D(pool_size=(2, 2), strides = 2))
model.add(Flatten())
model.add(Dense(100, activation='relu'))
model.add(BatchNormalization())
model.add(Dropout(0.5))
model.add(Dense(n_classes, activation='softmax'))
model.compile(loss="binary_crossentropy", optimizer="adam", metrics=['accuracy'])
model.summary()
history = model.fit(X_train, Y_train, batch_size=256, epochs=30, verbose=1, validation_data=(X_test, Y_test))
score = model.evaluate(X_test, Y_test, verbose=0)
print('Test loss:', score[0])
print('Test accuracy:', score[1])
fig,ax = plt.subplots(1,1)
ax.set_xlabel('epoch') ; ax.set_ylabel('Crossentropy Loss')
x = list(range(1,epochs+1))
vy = history.history['val_loss']
ty = history.history['loss']
plt_dynamic(x, vy, ty, ax)
_________________________________________________________________ Layer (type) Output Shape Param # ================================================================= conv2d_53 (Conv2D) (None, 128, 9, 128) 3328 _________________________________________________________________ max_pooling2d_27 (MaxPooling (None, 64, 4, 128) 0 _________________________________________________________________ conv2d_54 (Conv2D) (None, 64, 4, 64) 204864 _________________________________________________________________ max_pooling2d_28 (MaxPooling (None, 32, 2, 64) 0 _________________________________________________________________ conv2d_55 (Conv2D) (None, 32, 2, 64) 102464 _________________________________________________________________ max_pooling2d_29 (MaxPooling (None, 16, 1, 64) 0 _________________________________________________________________ flatten_11 (Flatten) (None, 1024) 0 _________________________________________________________________ dense_21 (Dense) (None, 100) 102500 _________________________________________________________________ batch_normalization_26 (Batc (None, 100) 400 _________________________________________________________________ dropout_32 (Dropout) (None, 100) 0 _________________________________________________________________ dense_22 (Dense) (None, 6) 606 ================================================================= Total params: 414,162 Trainable params: 413,962 Non-trainable params: 200 _________________________________________________________________ Train on 7352 samples, validate on 2947 samples Epoch 1/30 7352/7352 [==============================] - 5s 741us/step - loss: 0.2105 - acc: 0.9085 - val_loss: 0.2021 - val_acc: 0.9260 Epoch 2/30 7352/7352 [==============================] - 1s 168us/step - loss: 0.0906 - acc: 0.9695 - val_loss: 0.1243 - val_acc: 0.9588 Epoch 3/30 7352/7352 [==============================] - 1s 169us/step - loss: 0.0591 - acc: 0.9808 - val_loss: 0.1208 - val_acc: 0.9572 Epoch 4/30 7352/7352 [==============================] - 1s 169us/step - loss: 0.0491 - acc: 0.9833 - val_loss: 0.0726 - val_acc: 0.9714 Epoch 5/30 7352/7352 [==============================] - 1s 167us/step - loss: 0.0437 - acc: 0.9836 - val_loss: 0.1191 - val_acc: 0.9561 Epoch 6/30 7352/7352 [==============================] - 1s 171us/step - loss: 0.0439 - acc: 0.9837 - val_loss: 0.1105 - val_acc: 0.9587 Epoch 7/30 7352/7352 [==============================] - 1s 170us/step - loss: 0.0390 - acc: 0.9853 - val_loss: 0.0793 - val_acc: 0.9733 Epoch 8/30 7352/7352 [==============================] - 1s 172us/step - loss: 0.0419 - acc: 0.9839 - val_loss: 0.0938 - val_acc: 0.9663 Epoch 9/30 7352/7352 [==============================] - 1s 170us/step - loss: 0.0382 - acc: 0.9852 - val_loss: 0.0773 - val_acc: 0.9729 Epoch 10/30 7352/7352 [==============================] - 1s 171us/step - loss: 0.0389 - acc: 0.9851 - val_loss: 0.0960 - val_acc: 0.9661 Epoch 11/30 7352/7352 [==============================] - 1s 168us/step - loss: 0.0371 - acc: 0.9860 - val_loss: 0.0699 - val_acc: 0.9731 Epoch 12/30 7352/7352 [==============================] - 1s 170us/step - loss: 0.0348 - acc: 0.9865 - val_loss: 0.0917 - val_acc: 0.9602 Epoch 13/30 7352/7352 [==============================] - 1s 171us/step - loss: 0.0321 - acc: 0.9873 - val_loss: 0.0665 - val_acc: 0.9744 Epoch 14/30 7352/7352 [==============================] - 1s 169us/step - loss: 0.0350 - acc: 0.9855 - val_loss: 0.0772 - val_acc: 0.9696 Epoch 15/30 7352/7352 [==============================] - 1s 171us/step - loss: 0.0301 - acc: 0.9877 - 
val_loss: 0.0584 - val_acc: 0.9774 Epoch 16/30 7352/7352 [==============================] - 1s 171us/step - loss: 0.0310 - acc: 0.9878 - val_loss: 0.0648 - val_acc: 0.9807 Epoch 17/30 7352/7352 [==============================] - 1s 169us/step - loss: 0.0286 - acc: 0.9888 - val_loss: 0.0633 - val_acc: 0.9764 Epoch 18/30 7352/7352 [==============================] - 1s 170us/step - loss: 0.0288 - acc: 0.9887 - val_loss: 0.2431 - val_acc: 0.9388 Epoch 19/30 7352/7352 [==============================] - 1s 169us/step - loss: 0.0287 - acc: 0.9881 - val_loss: 0.0577 - val_acc: 0.9775 Epoch 20/30 7352/7352 [==============================] - 1s 171us/step - loss: 0.0251 - acc: 0.9895 - val_loss: 0.1332 - val_acc: 0.9599 Epoch 21/30 7352/7352 [==============================] - 1s 172us/step - loss: 0.0247 - acc: 0.9904 - val_loss: 0.0680 - val_acc: 0.9714 Epoch 22/30 7352/7352 [==============================] - 1s 172us/step - loss: 0.0248 - acc: 0.9899 - val_loss: 0.0665 - val_acc: 0.9783 Epoch 23/30 7352/7352 [==============================] - 1s 170us/step - loss: 0.0224 - acc: 0.9910 - val_loss: 0.0700 - val_acc: 0.9771 Epoch 24/30 7352/7352 [==============================] - 1s 170us/step - loss: 0.0239 - acc: 0.9907 - val_loss: 0.1886 - val_acc: 0.9485 Epoch 25/30 7352/7352 [==============================] - 1s 171us/step - loss: 0.0246 - acc: 0.9901 - val_loss: 0.0733 - val_acc: 0.9762 Epoch 26/30 7352/7352 [==============================] - 1s 171us/step - loss: 0.0200 - acc: 0.9924 - val_loss: 0.0544 - val_acc: 0.9806 Epoch 27/30 7352/7352 [==============================] - 1s 168us/step - loss: 0.0223 - acc: 0.9904 - val_loss: 0.0696 - val_acc: 0.9802 Epoch 28/30 7352/7352 [==============================] - 1s 168us/step - loss: 0.0194 - acc: 0.9921 - val_loss: 0.0743 - val_acc: 0.9765 Epoch 29/30 7352/7352 [==============================] - 1s 171us/step - loss: 0.0200 - acc: 0.9916 - val_loss: 0.0762 - val_acc: 0.9796 Epoch 30/30 7352/7352 [==============================] - 1s 171us/step - loss: 0.0185 - acc: 0.9923 - val_loss: 0.0598 - val_acc: 0.9821 Test loss: 0.05977197116488045 Test accuracy: 0.982072170686026
model_1 = Sequential()
model_1.add(Conv2D(128, kernel_size=(5, 5),activation='relu', padding='same', input_shape=input_shape))
model_1.add(Conv2D(128, (5, 5), activation='relu', padding='same'))
model_1.add(MaxPooling2D(pool_size=(2, 2), strides = 2))
model_1.add(Dropout(0.6))
model_1.add(Conv2D(64, (5, 5), activation='relu', padding='same'))
model_1.add(Conv2D(64, (5, 5), activation='relu', padding='same'))
model_1.add(MaxPooling2D(pool_size=(2, 2), strides = 2))
model_1.add(BatchNormalization())
model_1.add(Dropout(0.7))
model_1.add(Flatten())
model_1.add(Dense(100, activation='relu'))
model_1.add(BatchNormalization())
model_1.add(Dropout(0.5))
model_1.add(Dense(n_classes, activation='softmax'))
model_1.compile(loss="binary_crossentropy", optimizer="adam", metrics=['accuracy'])
model_1.summary()
history_1 = model_1.fit(X_train, Y_train, batch_size=256, epochs=30, verbose=1, validation_data=(X_test, Y_test))
score_1 = model_1.evaluate(X_test, Y_test, verbose=0)
print('Test loss:', score_1[0])
print('Test accuracy:', score_1[1])
fig,ax = plt.subplots(1,1)
ax.set_xlabel('epoch') ; ax.set_ylabel('Crossentropy Loss')
x = list(range(1,epochs+1))
vy = history_1.history['val_loss']
ty = history_1.history['loss']
plt_dynamic(x, vy, ty, ax)
_________________________________________________________________ Layer (type) Output Shape Param # ================================================================= conv2d_7 (Conv2D) (None, 128, 9, 128) 3328 _________________________________________________________________ conv2d_8 (Conv2D) (None, 128, 9, 128) 409728 _________________________________________________________________ max_pooling2d_4 (MaxPooling2 (None, 64, 4, 128) 0 _________________________________________________________________ dropout_4 (Dropout) (None, 64, 4, 128) 0 _________________________________________________________________ conv2d_9 (Conv2D) (None, 64, 4, 64) 204864 _________________________________________________________________ conv2d_10 (Conv2D) (None, 64, 4, 64) 102464 _________________________________________________________________ max_pooling2d_5 (MaxPooling2 (None, 32, 2, 64) 0 _________________________________________________________________ batch_normalization_3 (Batch (None, 32, 2, 64) 256 _________________________________________________________________ dropout_5 (Dropout) (None, 32, 2, 64) 0 _________________________________________________________________ flatten_2 (Flatten) (None, 4096) 0 _________________________________________________________________ dense_3 (Dense) (None, 100) 409700 _________________________________________________________________ batch_normalization_4 (Batch (None, 100) 400 _________________________________________________________________ dropout_6 (Dropout) (None, 100) 0 _________________________________________________________________ dense_4 (Dense) (None, 6) 606 ================================================================= Total params: 1,131,346 Trainable params: 1,131,018 Non-trainable params: 328 _________________________________________________________________ Train on 7352 samples, validate on 2947 samples Epoch 1/30 7352/7352 [==============================] - 9s 1ms/step - loss: 0.3429 - acc: 0.8551 - val_loss: 0.5128 - val_acc: 0.8299 Epoch 2/30 7352/7352 [==============================] - 4s 479us/step - loss: 0.1988 - acc: 0.9137 - val_loss: 0.2536 - val_acc: 0.9002 Epoch 3/30 7352/7352 [==============================] - 4s 483us/step - loss: 0.1391 - acc: 0.9409 - val_loss: 0.5219 - val_acc: 0.8365 Epoch 4/30 7352/7352 [==============================] - 4s 484us/step - loss: 0.0910 - acc: 0.9637 - val_loss: 1.0533 - val_acc: 0.7864 Epoch 5/30 7352/7352 [==============================] - 4s 484us/step - loss: 0.0639 - acc: 0.9745 - val_loss: 0.5508 - val_acc: 0.8123 Epoch 6/30 7352/7352 [==============================] - 4s 485us/step - loss: 0.0530 - acc: 0.9803 - val_loss: 0.6761 - val_acc: 0.8101 Epoch 7/30 7352/7352 [==============================] - 4s 485us/step - loss: 0.0468 - acc: 0.9821 - val_loss: 0.4613 - val_acc: 0.8294 Epoch 8/30 7352/7352 [==============================] - 4s 485us/step - loss: 0.0440 - acc: 0.9820 - val_loss: 0.2806 - val_acc: 0.8862 Epoch 9/30 7352/7352 [==============================] - 4s 482us/step - loss: 0.0445 - acc: 0.9822 - val_loss: 0.2801 - val_acc: 0.8947 Epoch 10/30 7352/7352 [==============================] - 4s 482us/step - loss: 0.0434 - acc: 0.9823 - val_loss: 0.1439 - val_acc: 0.9459 Epoch 11/30 7352/7352 [==============================] - 4s 485us/step - loss: 0.0406 - acc: 0.9837 - val_loss: 0.0835 - val_acc: 0.9700 Epoch 12/30 7352/7352 [==============================] - 4s 487us/step - loss: 0.0379 - acc: 0.9844 - val_loss: 0.1138 - val_acc: 0.9587 Epoch 13/30 7352/7352 [==============================] 
- 4s 486us/step - loss: 0.0369 - acc: 0.9850 - val_loss: 0.1021 - val_acc: 0.9615 Epoch 14/30 7352/7352 [==============================] - 4s 490us/step - loss: 0.0403 - acc: 0.9828 - val_loss: 0.0698 - val_acc: 0.9734 Epoch 15/30 7352/7352 [==============================] - 4s 488us/step - loss: 0.0346 - acc: 0.9857 - val_loss: 0.0787 - val_acc: 0.9734 Epoch 16/30 7352/7352 [==============================] - 4s 489us/step - loss: 0.0316 - acc: 0.9871 - val_loss: 0.1203 - val_acc: 0.9563 Epoch 17/30 7352/7352 [==============================] - 4s 490us/step - loss: 0.0303 - acc: 0.9875 - val_loss: 0.0951 - val_acc: 0.9649 Epoch 18/30 7352/7352 [==============================] - 4s 492us/step - loss: 0.0334 - acc: 0.9867 - val_loss: 0.0972 - val_acc: 0.9645 Epoch 19/30 7352/7352 [==============================] - 4s 492us/step - loss: 0.0269 - acc: 0.9888 - val_loss: 0.1251 - val_acc: 0.9470 Epoch 20/30 7352/7352 [==============================] - 4s 492us/step - loss: 0.0278 - acc: 0.9892 - val_loss: 0.0846 - val_acc: 0.9692 Epoch 21/30 7352/7352 [==============================] - 4s 491us/step - loss: 0.0273 - acc: 0.9890 - val_loss: 0.0752 - val_acc: 0.9759 Epoch 22/30 7352/7352 [==============================] - 4s 491us/step - loss: 0.0262 - acc: 0.9893 - val_loss: 0.0805 - val_acc: 0.9678 Epoch 23/30 7352/7352 [==============================] - 4s 494us/step - loss: 0.0250 - acc: 0.9900 - val_loss: 0.1074 - val_acc: 0.9615 Epoch 24/30 7352/7352 [==============================] - 4s 494us/step - loss: 0.0191 - acc: 0.9919 - val_loss: 0.0691 - val_acc: 0.9761 Epoch 25/30 7352/7352 [==============================] - 4s 497us/step - loss: 0.0194 - acc: 0.9927 - val_loss: 0.1342 - val_acc: 0.9674 Epoch 26/30 7352/7352 [==============================] - 4s 497us/step - loss: 0.0235 - acc: 0.9911 - val_loss: 0.1011 - val_acc: 0.9654 Epoch 27/30 7352/7352 [==============================] - 4s 493us/step - loss: 0.0202 - acc: 0.9922 - val_loss: 0.0650 - val_acc: 0.9792 Epoch 28/30 7352/7352 [==============================] - 4s 494us/step - loss: 0.0164 - acc: 0.9940 - val_loss: 0.0705 - val_acc: 0.9790 Epoch 29/30 7352/7352 [==============================] - 4s 499us/step - loss: 0.0156 - acc: 0.9938 - val_loss: 0.0663 - val_acc: 0.9768 Epoch 30/30 7352/7352 [==============================] - 4s 497us/step - loss: 0.0144 - acc: 0.9939 - val_loss: 0.0696 - val_acc: 0.9796 Test loss: 0.06964582370217265 Test accuracy: 0.9796403167123992
model_2 = Sequential()
model_2.add(Conv2D(64, kernel_size=(5, 5),activation='relu', padding='same', input_shape=input_shape))
model_2.add(Conv2D(64, (5, 5), activation='relu', padding='same'))
model_2.add(MaxPooling2D(pool_size=(2, 2), strides = 2))
model_2.add(Dropout(0.5))
model_2.add(Conv2D(32, (5, 5), activation='relu', padding='same'))
model_2.add(Conv2D(32, (5, 5), activation='relu', padding='same'))
model_2.add(MaxPooling2D(pool_size=(2, 2), strides = 2))
model_2.add(BatchNormalization())
model_2.add(Dropout(0.5))
model_2.add(Flatten())
model_2.add(Dense(100, activation='relu'))
model_2.add(BatchNormalization())
model_2.add(Dropout(0.5))
model_2.add(Dense(n_classes, activation='softmax'))
model_2.compile(loss="binary_crossentropy", optimizer="adam", metrics=['accuracy'])
model_2.summary()
history_2 = model_2.fit(X_train, Y_train, batch_size=256, epochs=30, verbose=1, validation_data=(X_test, Y_test))
score_2 = model_2.evaluate(X_test, Y_test, verbose=0)
print('Test loss:', score_2[0])
print('Test accuracy:', score_2[1])
fig,ax = plt.subplots(1,1)
ax.set_xlabel('epoch') ; ax.set_ylabel('Crossentropy Loss')
x = list(range(1,epochs+1))
vy = history_2.history['val_loss']
ty = history_2.history['loss']
plt_dynamic(x, vy, ty, ax)
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
conv2d_1 (Conv2D)            (None, 128, 9, 64)        1664
_________________________________________________________________
conv2d_2 (Conv2D)            (None, 128, 9, 64)        102464
_________________________________________________________________
max_pooling2d_1 (MaxPooling2 (None, 64, 4, 64)         0
_________________________________________________________________
dropout_1 (Dropout)          (None, 64, 4, 64)         0
_________________________________________________________________
conv2d_3 (Conv2D)            (None, 64, 4, 32)         51232
_________________________________________________________________
conv2d_4 (Conv2D)            (None, 64, 4, 32)         25632
_________________________________________________________________
max_pooling2d_2 (MaxPooling2 (None, 32, 2, 32)         0
_________________________________________________________________
batch_normalization_1 (Batch (None, 32, 2, 32)         128
_________________________________________________________________
dropout_2 (Dropout)          (None, 32, 2, 32)         0
_________________________________________________________________
flatten_1 (Flatten)          (None, 2048)              0
_________________________________________________________________
dense_1 (Dense)              (None, 100)               204900
_________________________________________________________________
batch_normalization_2 (Batch (None, 100)               400
_________________________________________________________________
dropout_3 (Dropout)          (None, 100)               0
_________________________________________________________________
dense_2 (Dense)              (None, 6)                 606
=================================================================
Total params: 387,026
Trainable params: 386,762
Non-trainable params: 264
_________________________________________________________________
Train on 7352 samples, validate on 2947 samples
Epoch 1/30
7352/7352 [==============================] - 9s 1ms/step - loss: 0.3185 - acc: 0.8711 - val_loss: 0.4499 - val_acc: 0.8771
Epoch 2/30
7352/7352 [==============================] - 2s 234us/step - loss: 0.1709 - acc: 0.9287 - val_loss: 0.3076 - val_acc: 0.9088
Epoch 3/30
7352/7352 [==============================] - 2s 236us/step - loss: 0.1008 - acc: 0.9624 - val_loss: 0.9853 - val_acc: 0.8225
... (epochs 4-28 omitted; best val_loss 0.0641 / val_acc 0.9811 at epoch 27)
Epoch 29/30
7352/7352 [==============================] - 2s 237us/step - loss: 0.0219 - acc: 0.9909 - val_loss: 0.1343 - val_acc: 0.9769
Epoch 30/30
7352/7352 [==============================] - 2s 236us/step - loss: 0.0166 - acc: 0.9931 - val_loss: 0.0986 - val_acc: 0.9768
Test loss: 0.09862902752178446
Test accuracy: 0.9767560301937004
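In the log above, val_loss swings between 0.0641 and 1.0780 across epochs, so the final-epoch weights are rarely the best ones. Below is a minimal sketch (not part of the original runs) of keeping the best checkpoint instead, using the standard Keras callbacks; the patience value and the 'best_har_cnn.h5' filename are illustrative, and 'model' stands for whichever model is being fitted. Note also that these runs monitor the test set as validation data, which leaks test information into model selection; a held-out validation split would be cleaner.

from keras.callbacks import EarlyStopping, ModelCheckpoint

callbacks = [
    # Stop once val_loss has not improved for 5 epochs, and roll back to the
    # best weights seen so far.
    EarlyStopping(monitor='val_loss', patience=5, restore_best_weights=True),
    # Also persist the best weights to disk (illustrative filename).
    ModelCheckpoint('best_har_cnn.h5', monitor='val_loss', save_best_only=True),
]
history = model.fit(X_train, Y_train, batch_size=256, epochs=30, verbose=1,
                    validation_data=(X_test, Y_test), callbacks=callbacks)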
# Model 2: a wider variant of the first CNN -- two 128-filter 5x5 conv layers,
# then two 64-filter 5x5 conv layers, with max-pooling, dropout and batch
# normalization between the blocks.
model_2 = Sequential()
model_2.add(Conv2D(128, kernel_size=(5, 5), activation='relu', padding='same', input_shape=input_shape))
model_2.add(Conv2D(128, (5, 5), activation='relu', padding='same'))
model_2.add(MaxPooling2D(pool_size=(2, 2), strides=2))
model_2.add(Dropout(0.8))
model_2.add(Conv2D(64, (5, 5), activation='relu', padding='same'))
model_2.add(Conv2D(64, (5, 5), activation='relu', padding='same'))
model_2.add(MaxPooling2D(pool_size=(2, 2), strides=2))
model_2.add(BatchNormalization())
model_2.add(Dropout(0.7))
model_2.add(Flatten())
model_2.add(Dense(100, activation='relu'))
model_2.add(BatchNormalization())
model_2.add(Dropout(0.5))
model_2.add(Dense(n_classes, activation='softmax'))
# NOTE: with a 6-way softmax and one-hot labels, the appropriate loss is
# categorical_crossentropy. binary_crossentropy scores each of the 6 outputs
# as an independent binary decision, so the 'accuracy' reported below is
# inflated relative to the categorical_crossentropy runs.
model_2.compile(loss="binary_crossentropy", optimizer="adam", metrics=['accuracy'])
model_2.summary()
history_2 = model_2.fit(X_train, Y_train, batch_size=256, epochs=25, verbose=1, validation_data=(X_test, Y_test))
score_2 = model_2.evaluate(X_test, Y_test, verbose=0)
print('Test loss:', score_2[0])
print('Test accuracy:', score_2[1])

# Plot train vs. validation loss over the 25 epochs.
fig, ax = plt.subplots(1, 1)
ax.set_xlabel('epoch')
ax.set_ylabel('Cross-entropy loss')
x = list(range(1, 25 + 1))
vy = history_2.history['val_loss']
ty = history_2.history['loss']
plt_dynamic(x, vy, ty, ax)
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
conv2d_19 (Conv2D)           (None, 128, 9, 128)       3328
_________________________________________________________________
conv2d_20 (Conv2D)           (None, 128, 9, 128)       409728
_________________________________________________________________
max_pooling2d_10 (MaxPooling (None, 64, 4, 128)        0
_________________________________________________________________
dropout_13 (Dropout)         (None, 64, 4, 128)        0
_________________________________________________________________
conv2d_21 (Conv2D)           (None, 64, 4, 64)         204864
_________________________________________________________________
conv2d_22 (Conv2D)           (None, 64, 4, 64)         102464
_________________________________________________________________
max_pooling2d_11 (MaxPooling (None, 32, 2, 64)         0
_________________________________________________________________
batch_normalization_9 (Batch (None, 32, 2, 64)         256
_________________________________________________________________
dropout_14 (Dropout)         (None, 32, 2, 64)         0
_________________________________________________________________
flatten_5 (Flatten)          (None, 4096)              0
_________________________________________________________________
dense_9 (Dense)              (None, 100)               409700
_________________________________________________________________
batch_normalization_10 (Batc (None, 100)               400
_________________________________________________________________
dropout_15 (Dropout)         (None, 100)               0
_________________________________________________________________
dense_10 (Dense)             (None, 6)                 606
=================================================================
Total params: 1,131,346
Trainable params: 1,131,018
Non-trainable params: 328
_________________________________________________________________
Train on 7352 samples, validate on 2947 samples
Epoch 1/25
7352/7352 [==============================] - 5s 677us/step - loss: 0.3749 - acc: 0.8484 - val_loss: 0.3813 - val_acc: 0.8604
Epoch 2/25
7352/7352 [==============================] - 4s 479us/step - loss: 0.2264 - acc: 0.9009 - val_loss: 0.3193 - val_acc: 0.8823
Epoch 3/25
7352/7352 [==============================] - 4s 480us/step - loss: 0.1582 - acc: 0.9309 - val_loss: 0.6222 - val_acc: 0.8375
... (epochs 4-23 omitted; best val_loss 0.0883 / val_acc 0.9739 at epoch 22)
Epoch 24/25
7352/7352 [==============================] - 4s 493us/step - loss: 0.0256 - acc: 0.9906 - val_loss: 0.1783 - val_acc: 0.9264
Epoch 25/25
7352/7352 [==============================] - 4s 494us/step - loss: 0.0278 - acc: 0.9889 - val_loss: 0.1811 - val_acc: 0.9507
Test loss: 0.18112290080562157
Test accuracy: 0.9507408769890378
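model_2, like most runs in this notebook, is compiled with binary_crossentropy even though this is a single-label 6-class problem. For comparison, here is a minimal sketch of the standard multi-class compile (same model object; only the loss string changes):

# With a 6-way softmax and one-hot labels, categorical_crossentropy is the
# standard loss; 'accuracy' then means true multi-class accuracy rather than
# the per-output binary accuracy reported by the binary_crossentropy runs.
model_2.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])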
# Model 3 (variant 1): a deeper, narrower stack -- three conv blocks
# (64 -> 32 -> 32 filters, two conv layers each) with dropout 0.5 after
# every block.
model_3 = Sequential()
model_3.add(Conv2D(64, kernel_size=(5, 5), activation='relu', padding='same', input_shape=input_shape))
model_3.add(Conv2D(64, (5, 5), activation='relu', padding='same'))
model_3.add(MaxPooling2D(pool_size=(2, 2), strides=2))
model_3.add(Dropout(0.5))
model_3.add(Conv2D(32, (3, 3), activation='relu', padding='same'))
model_3.add(Conv2D(32, (3, 3), activation='relu', padding='same'))
model_3.add(MaxPooling2D(pool_size=(2, 2), strides=2))
model_3.add(BatchNormalization())
model_3.add(Dropout(0.5))
model_3.add(Conv2D(32, (3, 3), activation='relu', padding='same'))
model_3.add(Conv2D(32, (3, 3), activation='relu', padding='same'))
model_3.add(MaxPooling2D(pool_size=(2, 2), strides=2))
model_3.add(BatchNormalization())
model_3.add(Dropout(0.5))
model_3.add(Flatten())
model_3.add(Dense(100, activation='relu'))
model_3.add(BatchNormalization())
model_3.add(Dropout(0.5))
model_3.add(Dense(n_classes, activation='softmax'))
# Same caveat as above: binary_crossentropy on a 6-class problem.
model_3.compile(loss="binary_crossentropy", optimizer="adam", metrics=['accuracy'])
model_3.summary()
history_3 = model_3.fit(X_train, Y_train, batch_size=256, epochs=25, verbose=1, validation_data=(X_test, Y_test))
score_3 = model_3.evaluate(X_test, Y_test, verbose=0)
print('Test loss:', score_3[0])
print('Test accuracy:', score_3[1])

# Plot train vs. validation loss over the 25 epochs.
fig, ax = plt.subplots(1, 1)
ax.set_xlabel('epoch')
ax.set_ylabel('Cross-entropy loss')
x = list(range(1, 25 + 1))
vy = history_3.history['val_loss']
ty = history_3.history['loss']
plt_dynamic(x, vy, ty, ax)
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
conv2d_23 (Conv2D)           (None, 128, 9, 64)        1664
_________________________________________________________________
conv2d_24 (Conv2D)           (None, 128, 9, 64)        102464
_________________________________________________________________
max_pooling2d_12 (MaxPooling (None, 64, 4, 64)         0
_________________________________________________________________
dropout_16 (Dropout)         (None, 64, 4, 64)         0
_________________________________________________________________
conv2d_25 (Conv2D)           (None, 64, 4, 32)         18464
_________________________________________________________________
conv2d_26 (Conv2D)           (None, 64, 4, 32)         9248
_________________________________________________________________
max_pooling2d_13 (MaxPooling (None, 32, 2, 32)         0
_________________________________________________________________
batch_normalization_11 (Batc (None, 32, 2, 32)         128
_________________________________________________________________
dropout_17 (Dropout)         (None, 32, 2, 32)         0
_________________________________________________________________
conv2d_27 (Conv2D)           (None, 32, 2, 32)         9248
_________________________________________________________________
conv2d_28 (Conv2D)           (None, 32, 2, 32)         9248
_________________________________________________________________
max_pooling2d_14 (MaxPooling (None, 16, 1, 32)         0
_________________________________________________________________
batch_normalization_12 (Batc (None, 16, 1, 32)         128
_________________________________________________________________
dropout_18 (Dropout)         (None, 16, 1, 32)         0
_________________________________________________________________
flatten_6 (Flatten)          (None, 512)               0
_________________________________________________________________
dense_11 (Dense)             (None, 100)               51300
_________________________________________________________________
batch_normalization_13 (Batc (None, 100)               400
_________________________________________________________________
dropout_19 (Dropout)         (None, 100)               0
_________________________________________________________________
dense_12 (Dense)             (None, 6)                 606
=================================================================
Total params: 202,898
Trainable params: 202,570
Non-trainable params: 328
_________________________________________________________________
Train on 7352 samples, validate on 2947 samples
Epoch 1/25
7352/7352 [==============================] - 5s 670us/step - loss: 0.4313 - acc: 0.8343 - val_loss: 0.2170 - val_acc: 0.8891
Epoch 2/25
7352/7352 [==============================] - 2s 221us/step - loss: 0.2883 - acc: 0.8726 - val_loss: 0.2458 - val_acc: 0.8788
Epoch 3/25
7352/7352 [==============================] - 2s 219us/step - loss: 0.2217 - acc: 0.8995 - val_loss: 0.2570 - val_acc: 0.8915
... (epochs 4-23 omitted; best val_loss 0.0624 / val_acc 0.9783 at epoch 22)
Epoch 24/25
7352/7352 [==============================] - 2s 221us/step - loss: 0.0358 - acc: 0.9854 - val_loss: 0.0936 - val_acc: 0.9627
Epoch 25/25
7352/7352 [==============================] - 2s 227us/step - loss: 0.0338 - acc: 0.9852 - val_loss: 0.1191 - val_acc: 0.9579
Test loss: 0.11909454988765167
Test accuracy: 0.9578667606059923
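The notebook plots only the loss curves via plt_dynamic. A small sketch that also plots the accuracy curves, using the 'acc'/'val_acc' keys this Keras version writes into history.history (visible in the logs above); x and history_3 are the variables from the cell above.

import matplotlib.pyplot as plt

# Train vs. validation accuracy for the run above.
fig, ax = plt.subplots(1, 1)
ax.plot(x, history_3.history['acc'], label='train accuracy')
ax.plot(x, history_3.history['val_acc'], label='validation accuracy')
ax.set_xlabel('epoch')
ax.set_ylabel('accuracy')
ax.legend()
plt.show()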
# Model 3 (variant 2): the same stack, but with heavier dropout
# (0.7 / 0.6 / 0.6 after the conv blocks) and trained for 50 epochs.
model_3 = Sequential()
model_3.add(Conv2D(64, kernel_size=(5, 5), activation='relu', padding='same', input_shape=input_shape))
model_3.add(Conv2D(64, (5, 5), activation='relu', padding='same'))
model_3.add(MaxPooling2D(pool_size=(2, 2), strides=2))
model_3.add(Dropout(0.7))
model_3.add(Conv2D(32, (3, 3), activation='relu', padding='same'))
model_3.add(Conv2D(32, (3, 3), activation='relu', padding='same'))
model_3.add(MaxPooling2D(pool_size=(2, 2), strides=2))
model_3.add(BatchNormalization())
model_3.add(Dropout(0.6))
model_3.add(Conv2D(32, (3, 3), activation='relu', padding='same'))
model_3.add(Conv2D(32, (3, 3), activation='relu', padding='same'))
model_3.add(MaxPooling2D(pool_size=(2, 2), strides=2))
model_3.add(BatchNormalization())
model_3.add(Dropout(0.6))
model_3.add(Flatten())
model_3.add(Dense(100, activation='relu'))
model_3.add(BatchNormalization())
model_3.add(Dropout(0.5))
model_3.add(Dense(n_classes, activation='softmax'))
# Same caveat as above: binary_crossentropy on a 6-class problem.
model_3.compile(loss="binary_crossentropy", optimizer="adam", metrics=['accuracy'])
model_3.summary()
history_3 = model_3.fit(X_train, Y_train, batch_size=256, epochs=50, verbose=1, validation_data=(X_test, Y_test))
score_3 = model_3.evaluate(X_test, Y_test, verbose=0)
print('Test loss:', score_3[0])
print('Test accuracy:', score_3[1])

# Plot train vs. validation loss over the 50 epochs.
fig, ax = plt.subplots(1, 1)
ax.set_xlabel('epoch')
ax.set_ylabel('Cross-entropy loss')
x = list(range(1, 50 + 1))
vy = history_3.history['val_loss']
ty = history_3.history['loss']
plt_dynamic(x, vy, ty, ax)
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
conv2d_35 (Conv2D)           (None, 128, 9, 64)        1664
_________________________________________________________________
conv2d_36 (Conv2D)           (None, 128, 9, 64)        102464
_________________________________________________________________
max_pooling2d_18 (MaxPooling (None, 64, 4, 64)         0
_________________________________________________________________
dropout_24 (Dropout)         (None, 64, 4, 64)         0
_________________________________________________________________
conv2d_37 (Conv2D)           (None, 64, 4, 32)         18464
_________________________________________________________________
conv2d_38 (Conv2D)           (None, 64, 4, 32)         9248
_________________________________________________________________
max_pooling2d_19 (MaxPooling (None, 32, 2, 32)         0
_________________________________________________________________
batch_normalization_17 (Batc (None, 32, 2, 32)         128
_________________________________________________________________
dropout_25 (Dropout)         (None, 32, 2, 32)         0
_________________________________________________________________
conv2d_39 (Conv2D)           (None, 32, 2, 32)         9248
_________________________________________________________________
conv2d_40 (Conv2D)           (None, 32, 2, 32)         9248
_________________________________________________________________
max_pooling2d_20 (MaxPooling (None, 16, 1, 32)         0
_________________________________________________________________
batch_normalization_18 (Batc (None, 16, 1, 32)         128
_________________________________________________________________
dropout_26 (Dropout)         (None, 16, 1, 32)         0
_________________________________________________________________
flatten_8 (Flatten)          (None, 512)               0
_________________________________________________________________
dense_15 (Dense)             (None, 100)               51300
_________________________________________________________________
batch_normalization_19 (Batc (None, 100)               400
_________________________________________________________________
dropout_27 (Dropout)         (None, 100)               0
_________________________________________________________________
dense_16 (Dense)             (None, 6)                 606
=================================================================
Total params: 202,898
Trainable params: 202,570
Non-trainable params: 328
_________________________________________________________________
Train on 7352 samples, validate on 2947 samples
Epoch 1/50
7352/7352 [==============================] - 4s 535us/step - loss: 0.4545 - acc: 0.8228 - val_loss: 0.8408 - val_acc: 0.7844
Epoch 2/50
7352/7352 [==============================] - 2s 223us/step - loss: 0.3092 - acc: 0.8599 - val_loss: 0.7267 - val_acc: 0.8009
Epoch 3/50
7352/7352 [==============================] - 2s 226us/step - loss: 0.2625 - acc: 0.8777 - val_loss: 1.1622 - val_acc: 0.7798
... (epochs 4-48 omitted; val_loss spikes to 2.3074 at epoch 9, then settles; best val_loss 0.0641 / val_acc 0.9768 at epoch 32)
Epoch 49/50
7352/7352 [==============================] - 2s 227us/step - loss: 0.0217 - acc: 0.9914 - val_loss: 0.0751 - val_acc: 0.9762
Epoch 50/50
7352/7352 [==============================] - 2s 228us/step - loss: 0.0218 - acc: 0.9919 - val_loss: 0.1061 - val_acc: 0.9683
Test loss: 0.10611134655401126
Test accuracy: 0.9682728295250185
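In the 50-epoch run above, val_loss climbs as high as 2.3074 before settling, which suggests the fixed Adam learning rate is too aggressive early on. A sketch (not part of the original runs) of damping this with Keras's ReduceLROnPlateau callback; the factor/patience/min_lr values are illustrative:

from keras.callbacks import ReduceLROnPlateau

# Halve the learning rate whenever val_loss has not improved for 3 epochs.
reduce_lr = ReduceLROnPlateau(monitor='val_loss', factor=0.5, patience=3,
                              min_lr=1e-5, verbose=1)
history_3 = model_3.fit(X_train, Y_train, batch_size=256, epochs=50, verbose=1,
                        validation_data=(X_test, Y_test), callbacks=[reduce_lr])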
# Model 3 (variant 3): the same conv stack, but with the dropout removed from
# the second and third conv blocks, and compiled with the correctly specified
# categorical_crossentropy. Its loss and accuracy below are therefore standard
# multi-class numbers, and they look much worse than the inflated
# binary_crossentropy figures of the other runs.
model_3 = Sequential()
model_3.add(Conv2D(64, kernel_size=(5, 5), activation='relu', padding='same', input_shape=input_shape))
model_3.add(Conv2D(64, (5, 5), activation='relu', padding='same'))
model_3.add(MaxPooling2D(pool_size=(2, 2), strides=2))
model_3.add(Dropout(0.7))
model_3.add(Conv2D(32, (3, 3), activation='relu', padding='same'))
model_3.add(Conv2D(32, (3, 3), activation='relu', padding='same'))
model_3.add(MaxPooling2D(pool_size=(2, 2), strides=2))
model_3.add(BatchNormalization())
model_3.add(Conv2D(32, (3, 3), activation='relu', padding='same'))
model_3.add(Conv2D(32, (3, 3), activation='relu', padding='same'))
model_3.add(MaxPooling2D(pool_size=(2, 2), strides=2))
model_3.add(BatchNormalization())
model_3.add(Flatten())
model_3.add(Dense(100, activation='relu'))
model_3.add(BatchNormalization())
model_3.add(Dropout(0.5))
model_3.add(Dense(n_classes, activation='softmax'))
model_3.compile(loss="categorical_crossentropy", optimizer="adam", metrics=['accuracy'])
model_3.summary()
history_3 = model_3.fit(X_train, Y_train, batch_size=256, epochs=30, verbose=1, validation_data=(X_test, Y_test))
score_3 = model_3.evaluate(X_test, Y_test, verbose=0)
print('Test loss:', score_3[0])
print('Test accuracy:', score_3[1])

# Plot train vs. validation loss over the 30 epochs.
fig, ax = plt.subplots(1, 1)
ax.set_xlabel('epoch')
ax.set_ylabel('Cross-entropy loss')
x = list(range(1, 30 + 1))
vy = history_3.history['val_loss']
ty = history_3.history['loss']
plt_dynamic(x, vy, ty, ax)
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
conv2d_41 (Conv2D)           (None, 128, 9, 64)        1664
_________________________________________________________________
conv2d_42 (Conv2D)           (None, 128, 9, 64)        102464
_________________________________________________________________
max_pooling2d_21 (MaxPooling (None, 64, 4, 64)         0
_________________________________________________________________
dropout_28 (Dropout)         (None, 64, 4, 64)         0
_________________________________________________________________
conv2d_43 (Conv2D)           (None, 64, 4, 32)         18464
_________________________________________________________________
conv2d_44 (Conv2D)           (None, 64, 4, 32)         9248
_________________________________________________________________
max_pooling2d_22 (MaxPooling (None, 32, 2, 32)         0
_________________________________________________________________
batch_normalization_20 (Batc (None, 32, 2, 32)         128
_________________________________________________________________
conv2d_45 (Conv2D)           (None, 32, 2, 32)         9248
_________________________________________________________________
conv2d_46 (Conv2D)           (None, 32, 2, 32)         9248
_________________________________________________________________
max_pooling2d_23 (MaxPooling (None, 16, 1, 32)         0
_________________________________________________________________
batch_normalization_21 (Batc (None, 16, 1, 32)         128
_________________________________________________________________
flatten_9 (Flatten)          (None, 512)               0
_________________________________________________________________
dense_17 (Dense)             (None, 100)               51300
_________________________________________________________________
batch_normalization_22 (Batc (None, 100)               400
_________________________________________________________________
dropout_29 (Dropout)         (None, 100)               0
_________________________________________________________________
dense_18 (Dense)             (None, 6)                 606
=================================================================
Total params: 202,898
Trainable params: 202,570
Non-trainable params: 328
_________________________________________________________________
Train on 7352 samples, validate on 2947 samples
Epoch 1/30
7352/7352 [==============================] - 4s 561us/step - loss: 1.2326 - acc: 0.5694 - val_loss: 2.6819 - val_acc: 0.4432
Epoch 2/30
7352/7352 [==============================] - 2s 218us/step - loss: 0.7567 - acc: 0.7053 - val_loss: 1.7428 - val_acc: 0.5517
Epoch 3/30
7352/7352 [==============================] - 2s 224us/step - loss: 0.5066 - acc: 0.7975 - val_loss: 2.5908 - val_acc: 0.5405
... (epochs 4-28 omitted; best val_acc 0.9321 at epoch 24, val_loss 0.2940)
Epoch 29/30
7352/7352 [==============================] - 2s 222us/step - loss: 0.0698 - acc: 0.9712 - val_loss: 0.4234 - val_acc: 0.8741
Epoch 30/30
7352/7352 [==============================] - 2s 222us/step - loss: 0.0757 - acc: 0.9682 - val_loss: 0.6220 - val_acc: 0.8096
Test loss: 0.6220432024602066
Test accuracy: 0.8096369189208024
# Model 3 (variant 4): the same architecture as variant 3, but compiled with
# binary_crossentropy again and trained for 40 epochs -- so its numbers are
# comparable to variants 1-2, not to the categorical run above.
model_3 = Sequential()
model_3.add(Conv2D(64, kernel_size=(5, 5), activation='relu', padding='same', input_shape=input_shape))
model_3.add(Conv2D(64, (5, 5), activation='relu', padding='same'))
model_3.add(MaxPooling2D(pool_size=(2, 2), strides=2))
model_3.add(Dropout(0.7))
model_3.add(Conv2D(32, (3, 3), activation='relu', padding='same'))
model_3.add(Conv2D(32, (3, 3), activation='relu', padding='same'))
model_3.add(MaxPooling2D(pool_size=(2, 2), strides=2))
model_3.add(BatchNormalization())
model_3.add(Conv2D(32, (3, 3), activation='relu', padding='same'))
model_3.add(Conv2D(32, (3, 3), activation='relu', padding='same'))
model_3.add(MaxPooling2D(pool_size=(2, 2), strides=2))
model_3.add(BatchNormalization())
model_3.add(Flatten())
model_3.add(Dense(100, activation='relu'))
model_3.add(BatchNormalization())
model_3.add(Dropout(0.5))
model_3.add(Dense(n_classes, activation='softmax'))
model_3.compile(loss="binary_crossentropy", optimizer="adam", metrics=['accuracy'])
model_3.summary()
history_3 = model_3.fit(X_train, Y_train, batch_size=256, epochs=40, verbose=1, validation_data=(X_test, Y_test))
score_3 = model_3.evaluate(X_test, Y_test, verbose=0)
print('Test loss:', score_3[0])
print('Test accuracy:', score_3[1])

# Plot train vs. validation loss over the 40 epochs.
fig, ax = plt.subplots(1, 1)
ax.set_xlabel('epoch')
ax.set_ylabel('Cross-entropy loss')
x = list(range(1, 40 + 1))
vy = history_3.history['val_loss']
ty = history_3.history['loss']
plt_dynamic(x, vy, ty, ax)
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
conv2d_47 (Conv2D)           (None, 128, 9, 64)        1664
_________________________________________________________________
conv2d_48 (Conv2D)           (None, 128, 9, 64)        102464
_________________________________________________________________
max_pooling2d_24 (MaxPooling (None, 64, 4, 64)         0
_________________________________________________________________
dropout_30 (Dropout)         (None, 64, 4, 64)         0
_________________________________________________________________
conv2d_49 (Conv2D)           (None, 64, 4, 32)         18464
_________________________________________________________________
conv2d_50 (Conv2D)           (None, 64, 4, 32)         9248
_________________________________________________________________
max_pooling2d_25 (MaxPooling (None, 32, 2, 32)         0
_________________________________________________________________
batch_normalization_23 (Batc (None, 32, 2, 32)         128
_________________________________________________________________
conv2d_51 (Conv2D)           (None, 32, 2, 32)         9248
_________________________________________________________________
conv2d_52 (Conv2D)           (None, 32, 2, 32)         9248
_________________________________________________________________
max_pooling2d_26 (MaxPooling (None, 16, 1, 32)         0
_________________________________________________________________
batch_normalization_24 (Batc (None, 16, 1, 32)         128
_________________________________________________________________
flatten_10 (Flatten)         (None, 512)               0
_________________________________________________________________
dense_19 (Dense)             (None, 100)               51300
_________________________________________________________________
batch_normalization_25 (Batc (None, 100)               400
_________________________________________________________________
dropout_31 (Dropout)         (None, 100)               0
_________________________________________________________________
dense_20 (Dense)             (None, 6)                 606
=================================================================
Total params: 202,898
Trainable params: 202,570
Non-trainable params: 328
_________________________________________________________________
Train on 7352 samples, validate on 2947 samples
Epoch 1/40
7352/7352 [==============================] - 4s 591us/step - loss: 0.3590 - acc: 0.8546 - val_loss: 0.5514 - val_acc: 0.8386
Epoch 2/40
7352/7352 [==============================] - 2s 216us/step - loss: 0.2096 - acc: 0.9087 - val_loss: 0.5065 - val_acc: 0.8540
Epoch 3/40
7352/7352 [==============================] - 2s 217us/step - loss: 0.1441 - acc: 0.9400 - val_loss: 0.8521 - val_acc: 0.8135
... (epochs 4-38 omitted; best val_loss 0.0694 / val_acc 0.9709 at epoch 17)
Epoch 39/40
7352/7352 [==============================] - 2s 227us/step - loss: 0.0207 - acc: 0.9915 - val_loss: 0.0999 - val_acc: 0.9691
Epoch 40/40
7352/7352 [==============================] - 2s 228us/step - loss: 0.0218 - acc: 0.9905 - val_loss: 0.0839 - val_acc: 0.9772
Test loss: 0.08391317656654564
Test accuracy: 0.9772084692109428
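Before tabulating the runs, it is worth looking at per-class errors rather than a single accuracy number; on this dataset the static postures (Sitting vs. Standing) are typically the hardest pair to separate. A minimal sketch for the last model, assuming Y_test is one-hot (as it is used by fit above) and that the class indices follow the dataset's 1-6 activity order:

import numpy as np
from sklearn.metrics import confusion_matrix

# Rows = true activity, columns = predicted activity.
y_pred = np.argmax(model_3.predict(X_test), axis=1)
y_true = np.argmax(Y_test, axis=1)
print(confusion_matrix(y_true, y_pred))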
from prettytable import PrettyTable

# Summary of all runs. NOTE: the loss column mixes two quantities -- the runs
# compiled with binary_crossentropy report binary cross-entropy, while model 9
# (the categorical_crossentropy run) reports the standard multi-class
# cross-entropy, so loss values are not directly comparable across all rows.
x = PrettyTable()
x.field_names = ["Model no.", "Architecture", "Test Loss (Cross-Entropy)", "Test Accuracy"]
x.add_row([1, "3 Conv Layers without Dropout, BN", 0.1105, 0.9740])
x.add_row([2, "3 Conv Layers with Dropout, BN", 0.0716, 0.977])
x.add_row([3, "3 Conv Layers with Dropout, BN, MaxPool", 0.0597, 0.982])
x.add_row([4, "4 Conv Layers with Dropout, BN, MaxPool", 0.0696, 0.9796])
x.add_row([5, "4 Conv Layers with Dropout, BN, MaxPool", 0.0986, 0.976])
x.add_row([6, "4 Conv Layers without Dropout, BN, MaxPool", 0.180, 0.951])
x.add_row([7, "6 Conv Layers with Dropout, BN, MaxPool", 0.119, 0.9578])
x.add_row([8, "6 Conv Layers with Dropout, BN, MaxPool", 0.1061, 0.968])
x.add_row([9, "6 Conv Layers without Dropout, BN, MaxPool", 0.622, 0.809])
x.add_row([10, "6 Conv Layers with Dropout, BN, MaxPool", 0.0839, 0.9772])
print(x)
+-----------+--------------------------------------------+---------------------------+---------------+
| Model no. |                 Architecture               | Test Loss (Cross-Entropy) | Test Accuracy |
+-----------+--------------------------------------------+---------------------------+---------------+
|     1     |     3 Conv Layers without Dropout, BN      |           0.1105          |     0.974     |
|     2     |       3 Conv Layers with Dropout, BN       |           0.0716          |     0.977     |
|     3     |  3 Conv Layers with Dropout, BN, MaxPool   |           0.0597          |     0.982     |
|     4     |  4 Conv Layers with Dropout, BN, MaxPool   |           0.0696          |     0.9796    |
|     5     |  4 Conv Layers with Dropout, BN, MaxPool   |           0.0986          |     0.976     |
|     6     | 4 Conv Layers without Dropout, BN, MaxPool |           0.18            |     0.951     |
|     7     |  6 Conv Layers with Dropout, BN, MaxPool   |           0.119           |     0.9578    |
|     8     |  6 Conv Layers with Dropout, BN, MaxPool   |           0.1061          |     0.968     |
|     9     | 6 Conv Layers without Dropout, BN, MaxPool |           0.622           |     0.809     |
|     10    |  6 Conv Layers with Dropout, BN, MaxPool   |           0.0839          |     0.9772    |
+-----------+--------------------------------------------+---------------------------+---------------+