Copyright (c) Microsoft Corporation. All rights reserved.
Licensed under the MIT License.
In this tutorial, you will set up a data drift monitor on a weather dataset to:
☑ Analyze historical data for drift
☑ Set up a monitor to receive email alerts if data drift is detected going forward
If your workspace is Enterprise level, view and explore the results in the Azure Machine Learning studio. The video below shows the results from this tutorial.
If you are using an Azure Machine Learning Compute instance, you are all set. Otherwise, go through the configuration notebook if you haven't already established your connection to the AzureML Workspace.
# Check core SDK version number
import azureml.core
print('SDK version:', azureml.core.VERSION)
Initialize a workspace object from persisted configuration.
from azureml.core import Workspace
ws = Workspace.from_config()
ws
Set up the baseline and target datasets. Each time slice of the target dataset, sampled at a given frequency, will be compared against the baseline. For further details, see our documentation.
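As a rough illustration of the slicing idea only (not the SDK's actual drift computation, which uses richer statistical measures), the sketch below splits a toy target series into weekly slices and compares each slice's mean against a baseline statistic; the data and the baseline value are made up for the example.

```python
import numpy as np
import pandas as pd

# toy target dataset: hourly temperature readings over four weeks
rng = np.random.default_rng(0)
idx = pd.date_range('2019-09-01', periods=24 * 28, freq='h')
target = pd.DataFrame({'temperature': rng.normal(25, 2, len(idx))}, index=idx)

# baseline statistic (in the tutorial, January 2019 plays this role)
baseline_mean = 20.0

# sample the target at a weekly frequency and compare each slice to the baseline
weekly = target.resample('W')['temperature'].mean()
drift = (weekly - baseline_mean).abs()
print(drift)
```

Each row of the printed series corresponds to one weekly slice of the target, which is the granularity at which the monitor reports drift when frequency='Week'.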
The next few cells will upload the weather-data folder to the datastore and create the target dataset from it, setting datetime as the timestamp column.
The folder weather-data contains weather data from the NOAA Integrated Surface Data, filtered down to station names containing the string 'FLORIDA' to reduce the size of the data. See get_data.py for how this data is curated, and modify it as desired. The script may take a long time to run, hence the data is provided in the weather-data folder for this demo.
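The authoritative curation logic lives in get_data.py; as a minimal sketch of the kind of filter it applies, the snippet below keeps only rows whose station name contains 'FLORIDA'. The column name station_name and the sample rows are assumptions for illustration and may not match the real NOAA schema.

```python
import pandas as pd

# hypothetical slice of the NOAA Integrated Surface Data; real column
# names and values come from get_data.py and may differ
df = pd.DataFrame({
    'station_name': ['MIAMI FLORIDA INTL', 'SEATTLE TACOMA', 'TAMPA FLORIDA'],
    'temperature': [30.1, 12.4, 28.9],
})

# keep only rows whose station name contains the string 'FLORIDA'
florida = df[df['station_name'].str.contains('FLORIDA')]
print(florida)
```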
# use default datastore
dstore = ws.get_default_datastore()
# upload weather data
dstore.upload('weather-data', 'datadrift-data', overwrite=True, show_progress=True)
# import Dataset class
from azureml.core import Dataset
# create target dataset
target = Dataset.Tabular.from_parquet_files(dstore.path('datadrift-data/**/data.parquet'))
# set the timestamp column
target = target.with_timestamp_columns('datetime')
# register the target dataset
target = target.register(ws, 'target')
# retrieve the dataset from the workspace by name
target = Dataset.get_by_name(ws, 'target')
# import datetime
from datetime import datetime
# set baseline dataset as January 2019 weather data
baseline = Dataset.Tabular.from_parquet_files(dstore.path('datadrift-data/2019/01/data.parquet'))
# optionally, register the baseline dataset. if skipped, an unregistered dataset will be used
#baseline = baseline.register(ws, 'baseline')
Note that if you have an AzureML Data Scientist role, you will not have permission to create compute resources. Talk to your workspace or IT admin to create the compute targets described in this section, if they do not already exist.
Create an Azure Machine Learning compute cluster to run the data drift monitor and associated runs. The below cell will create a compute cluster named 'cpu-cluster'.
from azureml.core.compute import AmlCompute, ComputeTarget

compute_name = 'cpu-cluster'

if compute_name in ws.compute_targets:
    compute_target = ws.compute_targets[compute_name]
    if compute_target and type(compute_target) is AmlCompute:
        print('found compute target. just use it. ' + compute_name)
else:
    print('creating a new compute target...')
    provisioning_config = AmlCompute.provisioning_configuration(vm_size='STANDARD_D3_V2', min_nodes=0, max_nodes=2)

    # create the cluster
    compute_target = ComputeTarget.create(ws, compute_name, provisioning_config)

    # can poll for a minimum number of nodes and for a specific timeout.
    # if no min node count is provided it will use the scale settings for the cluster
    compute_target.wait_for_completion(show_output=True, min_node_count=None, timeout_in_minutes=20)

    # For a more detailed view of current AmlCompute status, use get_status()
    print(compute_target.get_status().serialize())
See our documentation for a complete description for all of the parameters.
from azureml.datadrift import DataDriftDetector, AlertConfiguration

alert_config = AlertConfiguration(['user@contoso.com'])  # replace with your email to receive alerts from the scheduled pipeline after enabling

monitor = DataDriftDetector.create_from_datasets(ws, 'weather-monitor', baseline, target,
                                                 compute_target='cpu-cluster',  # compute target for scheduled pipeline and backfills
                                                 frequency='Week',              # how often to analyze target data
                                                 feature_list=None,             # list of features to detect drift on
                                                 drift_threshold=None,          # threshold from 0 to 1 for email alerting
                                                 latency=0,                     # SLA in hours for target data to arrive in the dataset
                                                 alert_config=alert_config)     # email addresses to send alert
Many settings of the data drift monitor can be updated after creation. In this demo, we will update the drift_threshold and feature_list. See our documentation for details on which settings can be changed.
# get monitor by name
monitor = DataDriftDetector.get_by_name(ws, 'weather-monitor')
# create feature list - need to exclude columns that naturally drift or increment over time, such as year, day, index
columns = list(baseline.take(1).to_pandas_dataframe())
exclude = ['year', 'day', 'version', '__index_level_0__']
features = [col for col in columns if col not in exclude]
# update the feature list
monitor = monitor.update(feature_list=features)
You can use the backfill method to run the monitor over past periods of the target data. The cells below run backfills that produce data drift results for 2019 weather data, with January used as the baseline in the monitor. The output can be seen from the show method after the runs have completed, or viewed from the Azure Machine Learning studio for Enterprise workspaces.
Tip! When starting with the data drift capability, begin by backfilling on a small section of data to get initial results. Update the feature list as needed, removing columns that drift but can be ignored, and backfill that section of data until you are satisfied with the results. Then backfill on a larger slice of data and/or set the alert configuration and threshold, and enable the schedule to receive alerts to drift on your dataset. All of this can be done through the UI (Enterprise) or the Python SDK.
Although it depends on many factors, the below backfill should typically take less than 20 minutes to run. Results will show as soon as they become available, not when the backfill is completed, so you may begin to see some metrics in a few minutes.
# backfill for one month
backfill_start_date = datetime(2019, 9, 1)
backfill_end_date = datetime(2019, 10, 1)
backfill = monitor.backfill(backfill_start_date, backfill_end_date)
backfill
The below cell will plot some key data drift metrics, and can be used to query the results. Run help(monitor.get_output) for specifics on the object returned.
# make sure the backfill has completed
backfill.wait_for_completion(wait_post_processing=True)
# get results from Python SDK (wait for backfills or monitor runs to finish)
results, metrics = monitor.get_output(start_time=datetime(year=2019, month=9, day=1))
# plot the results from Python SDK
monitor.show(backfill_start_date, backfill_end_date)
Turn on a scheduled pipeline which will analyze the target dataset for drift at the monitor's frequency. Use the latency parameter to adjust the start time of the pipeline. For instance, if it takes up to 24 hours for your data processing pipelines to land data in the target dataset, set latency to 24.
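As a plain-datetime illustration of what latency does to the run window (this is arithmetic only, not an SDK call), assume weekly runs and a 24-hour latency as in the example above:

```python
from datetime import datetime, timedelta

# end of a target-data week that a weekly monitor run would analyze
week_end = datetime(2019, 9, 8)

# with latency=24, the scheduled run waits an extra 24 hours so that
# late-arriving data has landed in the target dataset before analysis
latency_hours = 24
run_time = week_end + timedelta(hours=latency_hours)
print(run_time)  # 2019-09-09 00:00:00
```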
# enable the pipeline schedule and receive email alerts
monitor.enable_schedule()
# disable the pipeline schedule
#monitor.disable_schedule()
Do not delete the compute target if you intend to keep using it for the data drift monitor scheduled runs or otherwise. If the minimum nodes are set to 0, it will scale down soon after jobs are completed, and scale up the next time the cluster is needed.
# optionally delete the compute target
#compute_target.delete()
Invoking the delete() method on the object deletes the drift monitor permanently and cannot be undone. You will no longer be able to find it in the UI or through the list() or get() methods. The object on which delete() was called will have its state set to 'deleted' and its name suffixed with 'deleted'. The baseline and target datasets and any model data that was collected are not deleted. The compute is not deleted. The DataDrift schedule pipeline is disabled and archived.
monitor.delete()