Consider these tips for working with an auto-generated notebook:
This notebook contains a scikit-learn representation of an AutoAI pipeline. It introduces commands for getting data, training the model, and testing the model.
Some familiarity with Python is helpful. This notebook uses Python 3.7 and scikit-learn 0.23.1.
This notebook contains the following parts:
Setup
Package installation
AutoAI experiment metadata
Pipeline inspection
Read training data
Train and test data split
Make pipeline
Train pipeline model
Test pipeline model
Next steps
Copyrights
Before you use the sample code in this notebook, install the following packages:
!pip install ibm-watson-machine-learning | tail -n 1
!pip install -U autoai-libs==1.12.5 | tail -n 1
Successfully installed autoai-libs-1.12.5 gensim-3.8.3 smart-open-5.0.0
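Optionally, you can confirm that the runtime matches the versions listed above. The following check is a minimal sketch; the exact patch versions may differ in your environment.
# Optional sanity check of the runtime versions.
import sys
import sklearn
import pkg_resources
print(sys.version)            # the notebook assumes Python 3.7
print(sklearn.__version__)    # the notebook assumes scikit-learn 0.23.1
print(pkg_resources.get_distribution("autoai-libs").version)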
The following cell contains the training data connection details.
Note: The connection might contain authorization credentials, so be careful when sharing the notebook.
# @hidden_cell
from ibm_watson_machine_learning.helpers import DataConnection
from ibm_watson_machine_learning.helpers import S3Connection, S3Location
training_data_reference = [DataConnection(
connection=S3Connection(
api_key='YLrM_K3seFDkiGkQU-XEQNKug3KY5zZ-E4r8kHu4S_Dk',
auth_endpoint='https://iam.bluemix.net/oidc/token/',
endpoint_url='https://s3.eu-geo.objectstorage.softlayer.net'
),
location=S3Location(
bucket='diabetesprediction-donotdelete-pr-am3iyq6p2ccof4',
path='diabetes.csv'
)),
]
training_result_reference = DataConnection(
connection=S3Connection(
api_key='YLrM_K3seFDkiGkQU-XEQNKug3KY5zZ-E4r8kHu4S_Dk',
auth_endpoint='https://iam.bluemix.net/oidc/token/',
endpoint_url='https://s3.eu-geo.objectstorage.softlayer.net'
),
location=S3Location(
bucket='diabetesprediction-donotdelete-pr-am3iyq6p2ccof4',
path='auto_ml/6f1472e5-330e-45da-b5b8-6610bf8af6e3/wml_data/be48ec68-c597-464f-9532-3ad7f797302c/data/automl',
model_location='auto_ml/6f1472e5-330e-45da-b5b8-6610bf8af6e3/wml_data/be48ec68-c597-464f-9532-3ad7f797302c/data/automl/hpo_c_output/Pipeline9/model.pickle',
training_status='auto_ml/6f1472e5-330e-45da-b5b8-6610bf8af6e3/wml_data/be48ec68-c597-464f-9532-3ad7f797302c/training-status.json'
))
The following cell contains the input parameters provided to run the AutoAI experiment in Watson Studio.
experiment_metadata = dict(
prediction_type='classification',
prediction_column='Outcome',
holdout_size=0.1,
scoring='accuracy',
deployment_url='https://eu-gb.ml.cloud.ibm.com',
csv_separator=',',
random_state=33,
max_number_of_estimators=3,
daub_include_only_estimators=None,
training_data_reference=training_data_reference,
training_result_reference=training_result_reference,
project_id='65580c8e-1337-47e1-a99a-c8efbc6885fd',
positive_label=1
)
df = training_data_reference[0].read(csv_separator=experiment_metadata['csv_separator'])
df.dropna(axis=0, how='any', subset=[experiment_metadata['prediction_column']], inplace=True)
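Optionally, take a quick look at the loaded data before splitting it. This is a minimal inspection sketch and is not part of the generated pipeline.
# Optional: inspect the shape, column types, and first rows of the training data.
print(df.shape)
print(df.dtypes)
df.head()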
from sklearn.model_selection import train_test_split
df.drop_duplicates(inplace=True)
X = df.drop([experiment_metadata['prediction_column']], axis=1).values
y = df[experiment_metadata['prediction_column']].values
train_X, test_X, train_y, test_y = train_test_split(X, y, test_size=experiment_metadata['holdout_size'],
stratify=y, random_state=experiment_metadata['random_state'])
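Optionally, you can verify that the stratified split preserved the class proportions of the prediction column. This is a supplementary sketch, not part of the generated code.
# Optional: compare class proportions in the train and holdout subsets.
import numpy as np
for name, labels in [("train", train_y), ("holdout", test_y)]:
    values, counts = np.unique(labels, return_counts=True)
    print(name, dict(zip(values, counts / counts.sum())))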
In the next cell, you can find the Scikit-learn definition of the selected AutoAI pipeline.
Import statements.
from autoai_libs.transformers.exportable import NumpyColumnSelector
from autoai_libs.transformers.exportable import CompressStrings
from autoai_libs.transformers.exportable import NumpyReplaceMissingValues
from autoai_libs.transformers.exportable import NumpyReplaceUnknownValues
from autoai_libs.transformers.exportable import boolean2float
from autoai_libs.transformers.exportable import CatImputer
from autoai_libs.transformers.exportable import CatEncoder
import numpy as np
from autoai_libs.transformers.exportable import float32_transform
from sklearn.pipeline import make_pipeline
from autoai_libs.transformers.exportable import FloatStr2Float
from autoai_libs.transformers.exportable import NumImputer
from autoai_libs.transformers.exportable import OptStandardScaler
from sklearn.pipeline import make_union
from autoai_libs.transformers.exportable import NumpyPermuteArray
from autoai_libs.cognito.transforms.transform_utils import TAM
from sklearn.decomposition import PCA
from autoai_libs.cognito.transforms.transform_utils import FS1
from autoai_libs.cognito.transforms.transform_utils import TA1
import autoai_libs.utils.fc_methods
from sklearn.linear_model import LogisticRegression
numpy_column_selector_0 = NumpyColumnSelector(columns=[0, 2, 3, 7])
compress_strings = CompressStrings(
compress_type="hash",
dtypes_list=[
"float_int_num",
"float_int_num",
"float_int_num",
"float_int_num",
],
missing_values_reference_list=["", "-", "?", float("nan")],
misslist_list=[[], [], [], []],
)
numpy_replace_missing_values_0 = NumpyReplaceMissingValues(
missing_values=[], filling_values=100001
)
numpy_replace_unknown_values = NumpyReplaceUnknownValues(
filling_values=100001,
filling_values_list=[100001, 100001, 100001, 100001],
missing_values_reference_list=["", "-", "?", float("nan")],
)
cat_imputer = CatImputer(
strategy="most_frequent",
missing_values=100001,
sklearn_version_family="23",
)
cat_encoder = CatEncoder(
encoding="ordinal",
categories="auto",
dtype=np.float64,
handle_unknown="error",
sklearn_version_family="23",
)
pipeline_0 = make_pipeline(
numpy_column_selector_0,
compress_strings,
numpy_replace_missing_values_0,
numpy_replace_unknown_values,
boolean2float(),
cat_imputer,
cat_encoder,
float32_transform(),
)
numpy_column_selector_1 = NumpyColumnSelector(columns=[1, 4, 5, 6])
float_str2_float = FloatStr2Float(
dtypes_list=["float_int_num", "float_int_num", "float_num", "float_num"],
missing_values_reference_list=[],
)
numpy_replace_missing_values_1 = NumpyReplaceMissingValues(
missing_values=[], filling_values=float("nan")
)
num_imputer = NumImputer(strategy="median", missing_values=float("nan"))
opt_standard_scaler = OptStandardScaler(
num_scaler_copy=None,
num_scaler_with_mean=None,
num_scaler_with_std=None,
use_scaler_flag=False,
)
pipeline_1 = make_pipeline(
numpy_column_selector_1,
float_str2_float,
numpy_replace_missing_values_1,
num_imputer,
opt_standard_scaler,
float32_transform(),
)
union = make_union(pipeline_0, pipeline_1)
numpy_permute_array = NumpyPermuteArray(
axis=0, permutation_indices=[0, 2, 3, 7, 1, 4, 5, 6]
)
tam = TAM(
tans_class=PCA(),
name="pca",
col_names=[
"Pregnancies",
"Glucose",
"BloodPressure",
"SkinThickness",
"Insulin",
"BMI",
"DiabetesPedigreeFunction",
"Age",
],
col_dtypes=[
np.dtype("float32"),
np.dtype("float32"),
np.dtype("float32"),
np.dtype("float32"),
np.dtype("float32"),
np.dtype("float32"),
np.dtype("float32"),
np.dtype("float32"),
],
)
fs1_0 = FS1(
cols_ids_must_keep=range(0, 8),
additional_col_count_to_keep=8,
ptype="classification",
)
ta1 = TA1(
fun=np.sqrt,
name="sqrt",
datatypes=["numeric"],
feat_constraints=[
autoai_libs.utils.fc_methods.is_non_negative,
autoai_libs.utils.fc_methods.is_not_categorical,
],
col_names=[
"Pregnancies",
"Glucose",
"BloodPressure",
"SkinThickness",
"Insulin",
"BMI",
"DiabetesPedigreeFunction",
"Age",
"pca_0",
"pca_1",
"pca_2",
"pca_3",
"pca_4",
"pca_5",
"pca_6",
"pca_7",
],
col_dtypes=[
np.dtype("float32"),
np.dtype("float32"),
np.dtype("float32"),
np.dtype("float32"),
np.dtype("float32"),
np.dtype("float32"),
np.dtype("float32"),
np.dtype("float32"),
np.dtype("float32"),
np.dtype("float32"),
np.dtype("float32"),
np.dtype("float32"),
np.dtype("float32"),
np.dtype("float32"),
np.dtype("float32"),
np.dtype("float32"),
],
)
fs1_1 = FS1(
cols_ids_must_keep=range(0, 8),
additional_col_count_to_keep=8,
ptype="classification",
)
logistic_regression = LogisticRegression(
class_weight="balanced",
dual=True,
fit_intercept=False,
intercept_scaling=0.001256138140153018,
max_iter=948,
n_jobs=1,
random_state=33,
solver="liblinear",
tol=7.890895594583663e-05,
)
Pipeline.
pipeline = make_pipeline(
union, numpy_permute_array, tam, fs1_0, ta1, fs1_1, logistic_regression
)
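Optionally, you can list the steps of the assembled pipeline to verify its structure before training. This is a supplementary sketch.
# Optional: print each step of the scikit-learn pipeline.
for step_name, step in pipeline.steps:
    print(step_name, type(step).__name__)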
This cell constructs the scorer based on the experiment metadata.
from sklearn.metrics import get_scorer
scorer = get_scorer(experiment_metadata['scoring'])
pipeline.fit(train_X,train_y)
Pipeline(steps=[('featureunion', FeatureUnion(transformer_list=[('pipeline-1', Pipeline(steps=[('numpycolumnselector', NumpyColumnSelector(columns=[0, 2, 3, 7])), ('compressstrings', CompressStrings(compress_type='hash', dtypes_list=['float_int_num', 'float_int_num', 'float_int_num', 'float_int_num'], missing_values_reference_list=['', '-', '?', nan], misslist_list... autoai_libs.cognito.transforms.transform_utils.FS1(cols_ids_must_keep = range(0, 8), additional_col_count_to_keep = 8, ptype = 'classification')), ('logisticregression', LogisticRegression(class_weight='balanced', dual=True, fit_intercept=False, intercept_scaling=0.001256138140153018, max_iter=948, n_jobs=1, random_state=33, solver='liblinear', tol=7.890895594583663e-05))])
Score the fitted pipeline with the generated scorer using the holdout dataset.
score = scorer(pipeline, test_X, test_y)
print(score)
0.7532467532467533
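The experiment's scoring metric is accuracy, but you can compute additional holdout metrics with standard scikit-learn utilities. The following is a supplementary sketch, not part of the generated notebook flow.
# Optional: confusion matrix and per-class metrics on the holdout set.
from sklearn.metrics import classification_report, confusion_matrix
pred_y = pipeline.predict(test_X)
print(confusion_matrix(test_y, pred_y))
print(classification_report(test_y, pred_y))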
In this section, you will learn how to deploy and score the pipeline model as a web service using your Watson Machine Learning (WML) instance.
Authenticate the Watson Machine Learning service on IBM Cloud.
Tip: Your Cloud API key can be generated by going to the Users section of the Cloud console. From that page, click your name, scroll down to the API Keys section, and click Create an IBM Cloud API key. Give your key a name and click Create, then copy the created key and paste it below.
Note: You can also get a service-specific API key by going to the Service IDs section of the Cloud console. From that page, click Create, then copy the created key and paste it below.
Action: Enter your api_key in the following cell.
api_key = "QRPgKROIhFZE6KaYwvm-jhIHPQfAYoAUZE9-_1s0sWbj"
wml_credentials = {
"apikey": api_key,
"url": experiment_metadata["deployment_url"]
}
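Optionally, you can verify the credentials before deploying by creating an API client and listing the deployment spaces you have access to. This is a hedged sketch that assumes the standard ibm_watson_machine_learning APIClient.
# Optional: verify the credentials by listing available deployment spaces.
from ibm_watson_machine_learning import APIClient
client = APIClient(wml_credentials)
client.spaces.list()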
Action: If you want to deploy a refined pipeline, replace the pipeline passed as model with your refined pipeline (for example, new_pipeline). If you prefer, you can also change the deployment_name. To perform the deployment, specify a target_space_id.
target_space_id = "f2f33a84-ecdd-498b-b7b8-fae6ce42026c"
pipeline_name = "Diabetes_predictor_4m_nbk"
from ibm_watson_machine_learning.deployment import WebService
service = WebService(target_wml_credentials=wml_credentials,
target_space_id=target_space_id)
service.create(
model=pipeline,
metadata=experiment_metadata,
deployment_name=f'{pipeline_name}_webservice'
)
Preparing an AutoAI Deployment...
Depreciation Warning: Passing an object will no longer be supported. Please specify the AutoAI model name to deploy.
Published model uid: 9dc5b7d2-67bb-47e4-9838-dd68a1257023
Deploying model 9dc5b7d2-67bb-47e4-9838-dd68a1257023 using V4 client.

#######################################################################################
Synchronous deployment creation for uid: '9dc5b7d2-67bb-47e4-9838-dd68a1257023' started
#######################################################################################

initializing.........
ready

------------------------------------------------------------------------------------------------
Successfully finished deployment creation, deployment_uid='4e04fa53-ab66-4cab-a749-3739fb6ae7a4'
------------------------------------------------------------------------------------------------
The deployment object can be printed to show basic information:
print(service)
name: Diabetes_predictor_4m_nbk_webservice, id: 4e04fa53-ab66-4cab-a749-3739fb6ae7a4, scoring_url: https://eu-gb.ml.cloud.ibm.com/ml/v4/deployments/4e04fa53-ab66-4cab-a749-3739fb6ae7a4/predictions, asset_id: 9dc5b7d2-67bb-47e4-9838-dd68a1257023
To show all available information about the deployment, use the .get_params() method:
service.get_params()
{'entity': {'asset': {'id': '9dc5b7d2-67bb-47e4-9838-dd68a1257023'}, 'custom': {}, 'deployed_asset_type': 'model', 'hardware_spec': {'id': 'c076e82c-b2a7-4d20-9c0f-1f0c2fdf5a24', 'name': 'M', 'num_nodes': 1}, 'hybrid_pipeline_hardware_specs': [{'hardware_spec': {'name': 'S', 'num_nodes': 1}, 'node_runtime_id': 'auto_ai.kb'}], 'name': 'Diabetes_predictor_4m_nbk_webservice', 'online': {}, 'space_id': 'f2f33a84-ecdd-498b-b7b8-fae6ce42026c', 'status': {'online_url': {'url': 'https://eu-gb.ml.cloud.ibm.com/ml/v4/deployments/4e04fa53-ab66-4cab-a749-3739fb6ae7a4/predictions'}, 'state': 'ready'}}, 'metadata': {'created_at': '2021-04-23T06:25:36.630Z', 'id': '4e04fa53-ab66-4cab-a749-3739fb6ae7a4', 'modified_at': '2021-04-23T06:25:36.630Z', 'name': 'Diabetes_predictor_4m_nbk_webservice', 'owner': 'IBMid-55000A1BBE', 'space_id': 'f2f33a84-ecdd-498b-b7b8-fae6ce42026c'}}
You can make a scoring request by calling score() on the deployed pipeline.
test_df = df.sample(n=5).drop([experiment_metadata['prediction_column']], axis=1)
print(test_df)
     Pregnancies  Glucose  BloodPressure  SkinThickness  Insulin   BMI  \
374            2      122             52             43      158  36.2
709            2       93             64             32      160  38.0
15             7      100              0              0        0  30.0
441            2       83             66             23       50  32.2
185            7      194             68             28        0  35.9

     DiabetesPedigreeFunction  Age
374                     0.816   28
709                     0.674   23
15                      0.484   32
441                     0.497   22
185                     0.745   41
predictions = service.score(payload=test_df)
predictions
{'predictions': [{'fields': ['prediction', 'probability'], 'values': [[0, [0.872248659031778, 0.1277513409682221]], [0, [0.9914757704779724, 0.00852422952202755]], [0, [0.6592459444179399, 0.34075405558206]], [0, [0.9962777515691374, 0.0037222484308625284]], [1, [0.006635051966327077, 0.9933649480336729]]]}]}
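Optionally, you can reshape the scoring response into a DataFrame and compare it with local predictions from the fitted pipeline. This sketch assumes the response layout shown above ('fields' and 'values' under 'predictions').
# Optional: tabulate the web-service response and add local predictions for comparison.
import pandas as pd
result = predictions['predictions'][0]
remote_df = pd.DataFrame(result['values'], columns=result['fields'])
remote_df['local_prediction'] = pipeline.predict(test_df.values)
print(remote_df)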
Licensed Materials - Copyright © 2021 IBM. This notebook and its source code are released under the terms of the ILAN License. Use, duplication or disclosure restricted by GSA ADP Schedule Contract with IBM Corp.
Note: The auto-generated notebooks are subject to the International License Agreement for Non-Warranted Programs
(or equivalent) and the License Information document for Watson Studio Auto-generated Notebook (License Terms),
which are located at the link below. Specifically, the Source Components and Sample Materials clause
included in the License Information document for Watson Studio Auto-generated Notebook applies to the auto-generated notebooks.
By downloading, copying, accessing, or otherwise using the materials, you agree to the License Terms.