We are glad this package caught your attention, so let's briefly showcase its power.
In this tutorial, we will use a historical Rossmann sales dataset that we load via the get_sales_data utility.
A description of the dataset and the available columns is given in its docstring.
import pandas as pd
import matplotlib.pyplot as plt

plt.style.use('seaborn')
plt.rcParams['figure.figsize'] = [12, 6]
from hcrystalball.utils import get_sales_data

df = get_sales_data(n_dates=100, n_assortments=2, n_states=2, n_stores=2)
The next step is to define a ModelSelector, specifying the frequency to which the data will be resampled, how many steps ahead the forecast should run, and, optionally, the column that contains the ISO code of the country/region from which holiday information should be taken.
Once that is done, we create a grid search with possible exogenous columns and extend it with custom models.
from hcrystalball.model_selection import ModelSelector

ms = ModelSelector(
    horizon=10,
    frequency='D',
    country_code_column='HolidayCode',
)

ms.create_gridsearch(
    sklearn_models=True,
    n_splits=2,
    between_split_lag=None,
    sklearn_models_optimize_for_horizon=False,
    autosarimax_models=False,
    prophet_models=False,
    tbats_models=False,
    exp_smooth_models=False,
    average_ensembles=False,
    stacking_ensembles=False,
    exog_cols=['Open', 'Promo', 'SchoolHoliday', 'Promo2'],
    # holidays_days_before=2,
    # holidays_days_after=1,
    # holidays_bridge_days=True,
)
from sklearn.linear_model import LinearRegression
from hcrystalball.wrappers import get_sklearn_wrapper

ms.add_model_to_gridsearch(get_sklearn_wrapper(LinearRegression))
By default, the run will partition the data by partition_columns and loop over all partitions.
If your problem is big enough to make the parallelization overhead worth trying, you can also pass parallel_over_columns - a subset of partition_columns over which a parallel run (using prefect) will be started.
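Conceptually, the default behaviour resembles grouping the data frame by the partition columns and handling each group separately. A minimal sketch with plain pandas (the toy frame and the per-group computation are illustrative, not hcrystalball internals):

```python
import pandas as pd

# toy frame with partition-like columns, mimicking the tutorial data
df = pd.DataFrame({
    'Assortment': ['a', 'a', 'b', 'b'],
    'Store': [1, 2, 1, 2],
    'Sales': [10.0, 20.0, 30.0, 40.0],
})

partition_columns = ['Assortment', 'Store']

# the plain for-loop that a non-parallel run conceptually performs
results = {}
for keys, group in df.groupby(partition_columns):
    # a real run would fit and cross-validate models on `group` here;
    # we just take a mean as a stand-in computation
    results[keys] = group['Sales'].mean()

print(results)
```

Parallelizing over a subset of these columns simply means dispatching some of these groups to workers instead of iterating over them in one process.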
If you expect the run to take long, it might be good to store the results directly. Here the persist_ options might come in handy.
# from prefect.engine.executors import LocalDaskExecutor

ms.select_model(
    df=df,
    target_col_name='Sales',
    partition_columns=['Assortment', 'State', 'Store'],
    # parallel_over_columns=['Assortment'],
    # persist_model_selector_results=False,
    # output_path='my_results',
    # executor=LocalDaskExecutor(),
)
Naturally, we are interested in which models were chosen, so that we can strip from our parameter grid the ones that kept failing and extend it with more sophisticated models from the most frequently selected classes.
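That inspection step can be sketched as a plain tally of the winning model names. The list below is made up; in the session above one could collect something like each result's best_model_name across ms.results (assuming that attribute, as exposed by hcrystalball's result objects):

```python
from collections import Counter

# hypothetical winners collected from each partition's result object
best_model_names = [
    'LinearRegression', 'LinearRegression', 'RandomForestRegressor',
    'LinearRegression', 'RandomForestRegressor',
]

# tally how often each model class won its partition
winner_counts = Counter(best_model_names)
print(winner_counts.most_common())
```

Model classes that never win (or keep failing) are candidates for removal from the grid; the frequent winners suggest where adding more sophisticated variants could pay off.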
There is also a convenient method to plot the results over all (or a subset of) the data partitions, to see how well our models fitted the data during cross-validation.
To get more information, it is advisable to go from the all-partitions level (ModelSelector) to the single-partition level (ModelSelectorResult).
ModelSelector stores results as a list of ModelSelectorResult objects in self.results. These provide a rich __repr__ that hints at what information is available.
Another way to get a ModelSelectorResult is to use ModelSelector.get_result_for_partition, which ensures the same results even when loading previously stored results. Positional list access (ModelSelector.results) fails here, because each ModelSelectorResult is stored under its partition_hash name, and a later load ingests these files in alphabetical order.
Accessing the training data to see what is behind the model, checking cv_results for the fitting time or for how big a margin the best model had over the second-best one, or accessing the model definition and exploring its parameters are all handy things that we found useful.
res = ms.get_result_for_partition(partition=ms.results[0].partition)
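As a self-contained sketch of the margin check mentioned above (with made-up scores; the actual cv_results columns in hcrystalball differ):

```python
import pandas as pd

# made-up cross-validation summary, one row per candidate model
cv_results = pd.DataFrame({
    'model': ['LinearRegression', 'Prophet', 'TBATS'],
    'mean_error': [1.8, 2.4, 3.1],
})

# rank by error and read off the gap between best and runner-up
ranked = cv_results.sort_values('mean_error').reset_index(drop=True)
margin = ranked.loc[1, 'mean_error'] - ranked.loc[0, 'mean_error']
print(f"best: {ranked.loc[0, 'model']}, margin over runner-up: {margin:.1f}")
```

A large margin suggests the selection is robust; a tiny one means the runner-up might win under a slightly different split.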
On this level, we can also access the forecast plots - the one we already know, with cv_forecasts, and one that shows only the errors.
res.plot_result(plot_from='2015-06-01', title='forecasts');
To enable later reuse of the results we found, there are plenty of methods that store and load the results of model selection in a uniform way.
Some functions persist/load the whole objects (load_model_selector, load_model_selector_result), while others provide direct access to the single part we might care about when running in production with space limitations (load_best_model).
from hcrystalball.model_selection import load_model_selector
from hcrystalball.model_selection import load_model_selector_result
from hcrystalball.model_selection import load_best_model

res.persist(path='tmp')
res = load_model_selector_result(path='tmp', partition_label=ms.results[0].partition)

ms.persist_results(folder_path='tmp/results')
ms = load_model_selector(folder_path='tmp/results')
# cleanup
import shutil

try:
    shutil.rmtree('tmp')
except OSError:
    pass