Detection of Outliers
Outliers are events that do not conform to the rest of a dataset. Detecting such events can be important for denoising parts of signals, detecting unexpected events, or removing movement artifacts, among other applications.
The difficulty of this detection lies in the fact that generic algorithms may yield low accuracy for different signals and different kinds of outliers.
In this Jupyter Notebook we will present three different approaches to this problem and discuss how each may be more suited for different situations.
1 - Import of the required packages
These packages will facilitate outlier detection.
# Import the biosignalsnotebooks Python package
import biosignalsnotebooks as bsnb
# Import numpy
from numpy import array, histogram, mean, std, ptp, hstack
# Import scikit-learn
from sklearn.cluster import DBSCAN
# Import Kurtosis from scipy package
from scipy.stats import kurtosis
# Package used for forecasting tasks
from statsmodels.tsa.arima_model import ARIMA
# Function intended to estimate the Euclidean distance between forecast and the original signal
from scipy.spatial import distance
# Imports used on hidden cells
import warnings
warnings.filterwarnings('ignore')
from numpy import zeros, arange, concatenate
2 - Load the signals
In this notebook we will use a BVP signal that is contaminated with motion artifacts, so, in this case, the outliers are the motion artifacts.
# Load the signal and the header of the file, which contains information about the acquisition.
bvp, bvp_header = bsnb.load("../../signal_samples/bvp_motion_artifact.txt", get_header=True)
# The BVP signal was acquired in channel 2 of the hub, thus we must load it from the Python dictionary using the CH2 key.
bvp_signal_raw = array(bvp['CH2'])
# From the header we can extract information such as the sampling rate and resolution
sampling_frequency = bvp_header['sampling rate']
# The following line generates the time axis of the signal based on the utilized sampling frequency.
bvp_time = bsnb.generate_time(bvp_signal_raw, sample_rate=sampling_frequency)
The next plot shows the signal (filtered to remove low frequency noise), where the motion artifacts are identified by red vertical bands.
# Remove low frequency noise
bvp_signal_raw = bsnb.highpass(bvp_signal_raw, 0.2, use_filtfilt=True)
from bokeh.models import BoxAnnotation
from bokeh.plotting import show
# Define the box for the first artifact
first_box = BoxAnnotation(left=4.1, right=7.61, fill_alpha=0.1, fill_color='red')
# Define the box for the second artifact
second_box = BoxAnnotation(left=13.91, right=16.52, fill_alpha=0.1, fill_color='red')
# Plot the BVP signal
plot_artifacts = bsnb.plot(bvp_time, bvp_signal_raw, y_axis_label="BVP(a.u.)", get_fig_list=True, show_plot=False)[0]
# Add the boxes that define the outliers
plot_artifacts.add_layout(first_box)
plot_artifacts.add_layout(second_box)
# Show the plot
show(plot_artifacts)
3 - Outlier Detection Approaches
Outlier detection can be done using different approaches. In this section, we will demonstrate three of the most common, starting with a statistical approach, followed by a forecasting approach and finishing with an approach based on unsupervised machine learning.
3.1 - Statistical Approach
Statistically, outliers correspond to events that have a low probability of occurring. Thus, we will plot the histogram of the signal and try to identify the outliers from it. Note that, in this case, only points that deviate from the mean will be considered outliers.
from bokeh.plotting import figure
hist, edges = histogram(bvp_signal_raw, bins = 100)
p = figure(background_fill_color=(242, 242, 242))
p.quad(top=hist, bottom=0, left=edges[:-1], right=edges[1:], fill_color="#00893E", line_color="white", alpha=0.5);
mean_value = mean(bvp_signal_raw)
std_value = std(bvp_signal_raw)
# This cell serves only to style the histogram plot
from bokeh.plotting import show
from bokeh.models.tools import PanTool, ResetTool, BoxZoomTool, WheelZoomTool
from bokeh.models import Span
# Vertical line
vline = Span(location=mean_value, dimension='height', line_color='black', line_width=1)
vline1 = Span(location=mean_value+std_value, dimension='height', line_color='grey', line_width=2, line_dash='dashed')
vline2 = Span(location=mean_value-std_value, dimension='height', line_color='grey', line_width=2, line_dash='dashed')
p.renderers.extend([vline, vline1, vline2])
toolbar="right"
p.toolbar.active_scroll = p.select_one(WheelZoomTool)
p.sizing_mode = 'scale_width'
p.height = 200
p.toolbar.logo = None
p.toolbar_location = toolbar
p.xaxis.axis_label = "Value"
p.yaxis.axis_label = "# Points"
p.xgrid.grid_line_color = (150, 150, 150)
p.ygrid.grid_line_color = (150, 150, 150)
p.xgrid.grid_line_dash = [2, 2]
p.xaxis.major_tick_line_color = "white"
p.xaxis.minor_tick_line_color = "white"
p.xaxis.axis_line_color = "white"
p.yaxis.major_tick_in = 0
p.yaxis.major_tick_out = 0
p.yaxis.major_tick_line_color = "white"
p.yaxis.minor_tick_line_color = "white"
p.yaxis.minor_tick_in = 0
p.yaxis.minor_tick_out = 0
p.yaxis.axis_line_color = (150, 150, 150)
p.yaxis.axis_line_dash = [2, 2]
p.yaxis.major_label_text_color = (88, 88, 88)
p.xaxis.major_label_text_color = (88, 88, 88)
p.ygrid.grid_line_dash = [2, 2]
show(p)
The histogram shows the distribution of the signal's values. The black vertical line corresponds to the mean value and the dashed lines to one standard deviation above and below the mean. One approach could be to consider all values outside this range as outliers.
This idea can be graphically demonstrated below:
from bokeh.models import BoxAnnotation
from bokeh.plotting import show
sup_limit = mean_value + std_value
sub_limit = mean_value - std_value
# Plot the BVP signal
sup_std = zeros(len(bvp_time)) + sup_limit
sub_std = zeros(len(bvp_time)) + sub_limit
bsnb.plot([bvp_time, bvp_time, bvp_time], [bvp_signal_raw, sup_std, sub_std], y_axis_label="BVP(a.u.)")
In this case, outliers are all the points that do not lie between the two horizontal lines. Thus, we get some false positives because we are not taking into account the time domain of the signal and its context. While we could clearly set thresholds to identify the peaks that are outliers, the parts of those peaks that do not cross the thresholds but still correspond to artifacts would never be considered outliers.
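The thresholding rule described above can be sketched in a few lines of numpy. This is a minimal sketch on a synthetic signal with an injected artifact (not the notebook's own data), just to show the mean ± standard deviation criterion in code:

```python
import numpy as np

# Synthetic stand-in for bvp_signal_raw: Gaussian noise with an injected artifact
rng = np.random.default_rng(0)
signal = rng.normal(0.0, 1.0, 1000)
signal[100:110] += 8.0  # inject a large-amplitude "artifact"

mean_value = signal.mean()
std_value = signal.std()

# Boolean mask: True where a sample falls outside mean +/- one standard deviation
outlier_mask = np.abs(signal - mean_value) > std_value
```

As in the plot above, the mask flags the artifact but also flags ordinary samples that happen to deviate from the mean, illustrating the false positives of a purely statistical criterion.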
3.2 - Forecasting Approach
As we are dealing with time series, context is an important aspect to take into account for outlier detection. In this section, we will show how to take the time axis into consideration by using forecasting models for outlier detection.

Forecasting models attempt to capture the normal behaviour of a time series and then forecast that same behaviour into the future. For outlier detection, if the forecasting model is accurate enough, one can compare the actual data with the forecast made from previous data and, if the two are too different, the data is considered an outlier.
Note: In this case, we will not analyse the accuracy of the forecasting model and we will assume it is accurate enough.
There are numerous algorithms capable of forecasting time series, as enumerated in this link . We chose to apply the Autoregressive Integrated Moving Average (ARIMA) model because it combines two other models, autoregression (AR) and moving average (MA), after differencing the signal to make the time series stationary. For this, we will use the algorithm implemented in the statsmodels Python package.
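To make the autoregressive component concrete, here is a minimal numpy sketch of an AR(p) forecast fitted by ordinary least squares. This is only an illustration of the "AR" part; the notebook itself relies on statsmodels' ARIMA, which additionally handles the moving-average and differencing terms:

```python
import numpy as np

def ar_forecast(x, p, steps):
    """Fit an AR(p) model by ordinary least squares and forecast `steps` samples ahead."""
    # Design matrix: row t holds the p most recent lags [x[t-1], ..., x[t-p]]
    X = np.column_stack([x[p - k - 1 : len(x) - k - 1] for k in range(p)])
    y = x[p:]
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
    history = list(x[-p:])  # most recent value last
    forecast = []
    for _ in range(steps):
        lags = history[::-1][:p]           # most recent lag first
        nxt = float(np.dot(coeffs, lags))  # one-step-ahead prediction
        forecast.append(nxt)
        history.append(nxt)
    return np.array(forecast)

# A pure sinusoid follows an exact AR(2) recurrence, so the forecast is near-exact
x = np.sin(0.1 * np.arange(200))
prediction = ar_forecast(x[:150], p=2, steps=10)
```

On a signal with artifacts, the same fitted model cannot anticipate anomalous events, which is exactly what the outlier-detection strategy below exploits.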
3.2.1 - Comparing Outliers to Normal Results
In this subsection we will present the methodology followed to detect outliers and compare the results of this procedure for outliers and normal points.

First, we will present the case of an outlier: the motion artifact starting at around 14 seconds (hence j = 14*sampling_frequency). The order of the model was chosen by experimenting with various orders and keeping the one with the best score.
# We will train our model with the signal until the 14th second
j = 14*sampling_frequency
# In the next line we build the model with data until the 14th second and define the order of the model
model = ARIMA(bvp_signal_raw[:j], order=(3, 0, 3))
# In the next cell the model is fitted to the data that we gave as input
model_fit = model.fit(disp=False, method='css-mle')
The next step, since we have already trained (fitted) our model on the input data, is to compute the forecast for the next time steps. In this case, we chose to forecast the next 0.5 seconds. Note that the quality of the forecast degrades with its duration. Thus, typically, the longer the forecast, the worse its quality.
# Compute the forecast
yhat = model_fit.forecast(steps=int(0.5*sampling_frequency))[0]
In the next plot, we show the forecast (around the second 14 with a duration of 0.5 seconds). Then, we present the result of the Euclidean distance to the actual signal.
bsnb.plot([bvp_time, bvp_time[j:j+int(0.5*sampling_frequency)]], [bvp_signal_raw, yhat], y_axis_label="BVP(a.u.)")
print("For this particular segment, which is an outlier, the Euclidean distance between the forecast and the actual signal is {:0.2f}.".format(distance.euclidean(bvp_signal_raw[j:j+int(0.5*sampling_frequency)], yhat)))
For this particular segment, which is an outlier, the Euclidean distance between the forecast and the actual signal is 853.89.
Next, we will repeat the same procedure but for a normal interval. After this, we will compare the results of the Euclidean distances and discuss the suitability of this methodology.
In this case, we train on the signal up to 2 seconds, so make sure to check that part of the signal in the following plot.
# We will train our model with the signal until the 2nd second
j = 2*sampling_frequency
model = ARIMA(bvp_signal_raw[:j], order=(3, 0, 3))
model_fit = model.fit(disp=False, method='css-mle')
# make prediction
yhat = model_fit.forecast(steps=int(0.5*sampling_frequency))[0]
bsnb.plot([bvp_time, bvp_time[j:j+int(0.5*sampling_frequency)]], [bvp_signal_raw, yhat], y_axis_label="BVP(a.u.)")
print("For this particular segment, which is normal, the Euclidean distance between the forecast and the actual signal is {:0.2f}.".format(distance.euclidean(bvp_signal_raw[j:j+int(0.5*sampling_frequency)], yhat)))
For this particular segment, which is normal, the Euclidean distance between the forecast and the actual signal is 78.94.
As expected, considering that the forecast is accurate enough, the distance between the forecast values and the actual signal is lower for the normal segment than for the outlier, because the forecast is not able to predict an anomalous event. Thus, if we apply this to the whole signal, hopefully we will be able to detect the outliers in our signal.
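The decision rule implied by this comparison can be sketched with a toy example. The distance values and the threshold below are hypothetical placeholders (the real threshold has to be tuned, as the next subsection shows):

```python
import numpy as np

# Hypothetical Euclidean distances, one per forecast window
distances = np.array([78.9, 81.2, 853.9, 80.3, 640.1])

# Normalize by the range and flag windows whose distance exceeds a tuned threshold
threshold = 0.22
flags = distances / np.ptp(distances) > threshold
print(flags)  # [False False  True False  True]
```

Windows with a large forecast error relative to the overall range are labelled as outliers; the rest are considered normal.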
3.2.2 - Applying the Procedure to the whole Signal
Given the clear differences between the normal segment and the outlier, we propose to apply the same methodology to the whole signal. In this case, we will assume that the state (normal or outlier) does not change within a 1 second range, thus we will apply the forecast every second. Each time we forecast the next second of signal, we compute the Euclidean distance between the predicted signal and the actual signal and store it in a list.

Note: We will start at 2 seconds because the ARIMA model needs data to capture the normal behaviour of the signal.
# Start
num = 2*sampling_frequency
# Definition of the step
step = 1*sampling_frequency
# Initialise the list to store the values of the Euclidean distance between forecasts and the signal
diff = []
# Helper variable to keep track of the progress
j=0
# Loop over the signal, with a step of 1 second, to compute the forecasts
for i in range(num, len(bvp_signal_raw), step):
    # Print the progress of the loop
    print("{:0.2f}% completed".format((j*step*100)/(len(bvp_signal_raw)-num)))
    # Copy the BVP signal to a new variable
    data = bvp_signal_raw.copy()
    # Define the model (similar to the previous subsections)
    model = ARIMA(data[:i], order=(3, 0, 3))
    # try...except because the model could raise an unexpected error
    try:
        # Fit the model
        model_fit = model.fit(disp=False, method='css-mle')
        # Forecast the next 1 second of the signal
        yhat = model_fit.forecast(steps=step)[0]
        # Compute the Euclidean distance between the forecast and the signal
        euc_distance = distance.euclidean(data[i:i+step], yhat)
        # Store the Euclidean distance
        diff.append(euc_distance)
    except ValueError as e:
        # If a ValueError occurs, print the error
        print(e)
    finally:
        # Increment the helper variable to keep track of the progress
        j += 1
0.00% completed ... 96.95% completed
operands could not be broadcast together with shapes (850,) (1000,)
The next plot overlays the normalized signal with the normalized Euclidean distances, marks the chosen threshold with horizontal lines, and highlights in red the segments whose distance exceeds it (note that the final ValueError above is expected, since the last segment is shorter than a full second):
diff_vector = []
for value in diff:
diff_vector.append(zeros(step)+value)
diff_vector = concatenate(diff_vector)
threshold = 0.22
# Normalize both vectors
diff_vector_normalized = diff_vector / ptp(diff_vector)
bvp_signal_normalized = bvp_signal_raw / ptp(bvp_signal_raw)
start = len(bvp_time)-len(diff_vector_normalized)
plot_diff = bsnb.plot([bvp_time, bvp_time[start-step:-step]], [bvp_signal_normalized, diff_vector_normalized], hor_lines=[[threshold], [threshold]], get_fig_list=True, show_plot=False)[0]
for i in range(len(diff)):
if diff[i]/ptp(diff) > threshold:
_aux = BoxAnnotation(left=(start-step+i*step)/sampling_frequency, right=(start-step+(i+1)*step)/sampling_frequency, fill_alpha=0.2, fill_color='red')
# Add the boxes that define the outliers
plot_diff.add_layout(_aux)
# Show the plot
show(plot_diff)
Though this method is accurate, it has some limitations. There are numerous algorithms to choose from that may suit the data in different ways, and it might not be easy to determine which is best. Furthermore, for the ARIMA model we need to set the order of the model, which involves a time-consuming process. Moreover, defining where to start training the model, and the step size, is a process that needs to be repeated for every algorithm used in this type of application. Besides, setting a threshold may not be the best approach, as it may be too simplistic depending on the quality of the data. Finally, fitting the model is a time-consuming process, which should be done offline.
However, as demonstrated, it may be a reliable method to detect outliers in time series data.
3.3 - Unsupervised Machine Learning Approach
In this section, we will show how to use unsupervised machine learning for outlier detection, namely using the scikit-learn Python package. If you are not familiar with machine learning, we recommend reading the notebooks of the category Train and Classify .

For this application, we will segment our signal into equal parts. For that, we will use the biosignalsnotebooks Python package, in which the only parameter we need is the duration that each segment (or window) of the signal should have. This is an arbitrary parameter, so we chose 1 second.
time_window = 1 # second
BVP_segments = bsnb.windowing(bvp_signal_raw, sampling_frequency, time_window)
# Vertical line
segments = arange(0, bvp_time[-1], time_window)
vlines = []
for segment in segments:
vlines.append(Span(location=segment, dimension='height', line_color='grey', line_width=2, line_dash='dashed'))
p = bsnb.plot(bvp_time, bvp_signal_raw, get_fig_list=True, show_plot=False, y_axis_label="BVP(a.u.)")[0]
p.renderers.extend(vlines)
show(p)
After segmenting the signal, we will extract features that may separate the outliers from the normal segments. We chose to use only two features, standard deviation and kurtosis, for visualization and simplicity's sake, but we could use virtually any number of features. Note, however, that increasing the number of features also increases the complexity of the analysis. Next, we will plot the features of each segment: we can see that some points are far more sparse than others, which might correspond to outliers.
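Conceptually, the windowing and feature extraction steps can be sketched as follows. This is a simplified stand-in for bsnb.windowing and bsnb.features_extraction (whose actual implementations may differ), on a synthetic signal; the inline kurtosis mirrors Fisher's definition used by scipy.stats.kurtosis:

```python
import numpy as np

def window_signal(signal, sampling_rate, window_s):
    """Split a 1-D signal into consecutive, non-overlapping windows."""
    size = int(window_s * sampling_rate)
    n = len(signal) // size
    return signal[: n * size].reshape(n, size)

def kurt(seg):
    """Excess kurtosis (Fisher's definition, as in scipy.stats.kurtosis)."""
    centered = seg - seg.mean()
    return (centered ** 4).mean() / seg.std() ** 4 - 3.0

def extract_features(segments, funcs):
    """Apply each feature function to every segment -> (n_segments, n_features)."""
    return np.array([[f(seg) for f in funcs] for seg in segments])

# Example with a synthetic 10 s signal sampled at 100 Hz
sig = np.sin(np.linspace(0, 20 * np.pi, 1000))
segments = window_signal(sig, sampling_rate=100, window_s=1)
features = extract_features(segments, [np.std, kurt])
print(features.shape)  # (10, 2)
```

Each row of the resulting feature matrix describes one 1-second window, which is the representation fed to the clustering algorithm below.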
func = [std, kurtosis]
BVP_features = bsnb.features_extraction(BVP_segments, func)
p = figure(background_fill_color=(242, 242, 242))
# add a circle renderer with a size, color, and alpha
p.circle(BVP_features[:,0], BVP_features[:,1], size=15, color="#009EE3", alpha=.9)
toolbar="right"
p.toolbar.active_scroll = p.select_one(WheelZoomTool)
p.sizing_mode = 'scale_width'
p.height = 200
p.toolbar.logo = None
p.toolbar_location = toolbar
p.xaxis.axis_label = "Standard Deviation"
p.yaxis.axis_label = "Kurtosis"
p.xgrid.grid_line_color = (150, 150, 150)
p.ygrid.grid_line_color = (150, 150, 150)
p.xgrid.grid_line_dash = [2, 2]
p.xaxis.major_tick_line_color = "white"
p.xaxis.minor_tick_line_color = "white"
p.xaxis.axis_line_color = "white"
p.yaxis.major_tick_in = 0
p.yaxis.major_tick_out = 0
p.yaxis.major_tick_line_color = "white"
p.yaxis.minor_tick_line_color = "white"
p.yaxis.minor_tick_in = 0
p.yaxis.minor_tick_out = 0
p.yaxis.axis_line_color = (150, 150, 150)
p.yaxis.axis_line_dash = [2, 2]
p.yaxis.major_label_text_color = (88, 88, 88)
p.xaxis.major_label_text_color = (88, 88, 88)
p.ygrid.grid_line_dash = [2, 2]
# show the results
show(p)
For this example, we will use the DBSCAN (Density-Based Spatial Clustering of Applications with Noise) algorithm, which clusters data points based on their density relative to others. Essentially, this algorithm labels points as one of two types: belonging to a cluster (core points and their neighbours) or outliers. To belong to a cluster, a data point must have at least n points within a distance dist, or belong to the vicinity of a core point. All points that do not meet these criteria are considered outliers. Thus, this algorithm receives two parameters (n and dist) that need to be tuned so that it can be applied accurately.
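The core density criterion can be sketched in plain numpy. Note that this is a deliberate simplification: unlike full DBSCAN, the neighbour count below also flags border points (points close to a core point but not dense themselves), whereas DBSCAN keeps them in the cluster:

```python
import numpy as np

def density_outliers(points, dist, n):
    """Flag points with fewer than n neighbours within radius dist."""
    # Pairwise Euclidean distance matrix (n_points x n_points)
    diffs = points[:, None, :] - points[None, :, :]
    d = np.sqrt((diffs ** 2).sum(axis=-1))
    # Count neighbours within `dist`, excluding the point itself
    neighbour_counts = (d <= dist).sum(axis=1) - 1
    return neighbour_counts < n

# One tight cluster plus a single far-away point
pts = np.vstack([np.random.default_rng(1).normal(0, 0.005, (20, 2)),
                 [[5.0, 5.0]]])
flags = density_outliers(pts, dist=0.1, n=3)  # only the isolated point is flagged
```

The two parameters play the same role as `eps` and `min_samples` in scikit-learn's DBSCAN, which is used in the cell below and which additionally labels noise points with the cluster id -1.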
Note: We will first normalize the features, so that no feature is more important than the others.
# Normalise the features, so that they end up with a range of 1.
BVP_std_normalized = BVP_features[:, 0] / ptp(BVP_features[:, 0])
BVP_kurtosis_normalized = BVP_features[:, 1] / ptp(BVP_features[:, 1])
# Constructing the normalized features vector. The .reshape(-1, 1) is used to transpose the line arrays to column arrays.
BVP_features_normalized = hstack([BVP_std_normalized.reshape(-1, 1), BVP_kurtosis_normalized.reshape(-1, 1)])
# Definition of the parameters for the DBSCAN algorithm
dist = .05
n = 3
# Definition of the algorithm
outlier_detection = DBSCAN(min_samples = n, eps = dist)
# Application of the algorithm to our features matrix
clusters = outlier_detection.fit_predict(BVP_features_normalized)
from numpy import where
outliers = BVP_features_normalized[where(clusters == -1)[0]]
normal = BVP_features_normalized[where(clusters != -1)[0]]
p = figure(background_fill_color=(242, 242, 242))
# add a circle renderer with a size, color, and alpha
p.circle(normal[:,0], normal[:,1], size=15, color="#009EE3", alpha=.9)
toolbar="right"
p.toolbar.active_scroll = p.select_one(WheelZoomTool)
p.sizing_mode = 'scale_width'
p.height = 200
p.toolbar.logo = None
p.toolbar_location = toolbar
p.xaxis.axis_label = "Standard Deviation"
p.yaxis.axis_label = "Kurtosis"
p.xgrid.grid_line_color = (150, 150, 150)
p.ygrid.grid_line_color = (150, 150, 150)
p.xgrid.grid_line_dash = [2, 2]
p.xaxis.major_tick_line_color = "white"
p.xaxis.minor_tick_line_color = "white"
p.xaxis.axis_line_color = "white"
p.yaxis.major_tick_in = 0
p.yaxis.major_tick_out = 0
p.yaxis.major_tick_line_color = "white"
p.yaxis.minor_tick_line_color = "white"
p.yaxis.minor_tick_in = 0
p.yaxis.minor_tick_out = 0
p.yaxis.axis_line_color = (150, 150, 150)
p.yaxis.axis_line_dash = [2, 2]
p.yaxis.major_label_text_color = (88, 88, 88)
p.xaxis.major_label_text_color = (88, 88, 88)
p.ygrid.grid_line_dash = [2, 2]
p.circle(outliers[:, 0], outliers[:, 1], size = 15, color = 'red')
show(p)
The next plot shows which segments of the actual signal those outlier points correspond to.
# Plot the BVP signal
plot_outliers = bsnb.plot(bvp_time, bvp_signal_raw, get_fig_list=True, show_plot=False)[0]
for i in range(len(clusters)):
if clusters[i] < 0:
_aux = BoxAnnotation(left=segments[i], right=segments[i+1], fill_alpha=0.1, fill_color='red')
# Add the boxes that define the outliers
plot_outliers.add_layout(_aux)
# Show the plot
show(plot_outliers)
This method yielded accurate results, as it was able to identify the motion artifacts that we had previously determined. However, note that we had to tune all the parameters: the duration of the signal segments, the features used to distinguish outliers from normal segments and the parameters of the DBSCAN algorithm. Furthermore, it is not applicable in real-time applications, as it requires the whole dataset to calculate the density of each point.
In this notebook we gave a quick review of three different methods for detecting outliers in time series data. Though forecasting and unsupervised machine learning methods may achieve high accuracy when detecting outliers, they are also complex and involve determining numerous parameters that might affect the results, and thus must be applied carefully.
You are now ready to start exploring the fascinating world of outlier detection and all the algorithms that are commonly applied to this problem.
We hope that you have enjoyed this guide. biosignalsnotebooks is an environment in continuous expansion, so don't stop your journey and learn more with the remaining Notebooks !
Auxiliary Code Segment (should not be replicated by the user)
from biosignalsnotebooks.__notebook_support__ import css_style_apply
css_style_apply()