Getting started with SKIL from Python

This notebook is a quick tour of the Skymind Intelligence Layer (SKIL), a tool for managing your deep learning model life-cycle from prototype to production. You will first download and start SKIL, then define and train a simple Keras model, upload the model to SKIL, and start a production-ready service that you can use for model inference.

First, let's install the SKIL Python package, as well as Keras with the TensorFlow backend:

In [1]:
%%capture
! pip install skil --user
! pip install tensorflow keras --user

To use SKIL from this notebook (or any other Python environment) we need to install SKIL first. We do this with Docker here, but you have other options as well. Head over to https://docs.skymind.ai/docs/installation to get detailed installation instructions for your platform.

Pull the latest SKIL Community Edition (CE) image from Docker Hub as follows:

In [2]:
! docker pull skymindops/skil-ce
Using default tag: latest
latest: Pulling from skymindops/skil-ce
Digest: sha256:471fea6627c622ada3158ed5e61212b8ee337cb9bc639e49e3fdef1cefbc4558
Status: Image is up to date for skymindops/skil-ce:latest

Once the download has finished, start SKIL from the command line like this:

docker run --rm -it -p 9008:9008 -p 8080:8080 skymindops/skil-ce bash /start-skil.sh

To test this, open a browser at "localhost:9008" to see the SKIL login screen. The user name and password are both "admin". We won't be using the UI much right now, but everything we do in this notebook can also be run and managed from the SKIL UI.

Now that SKIL is running, we can return to the real focus: your deep learning models. Let's start by defining a simple Keras model that classifies MNIST handwritten digits.

In [3]:
import keras
from keras.datasets import mnist
from keras.models import Sequential
from keras.layers import Dense, Dropout

batch_size = 128
num_classes = 10
epochs = 5

(x_train, y_train), (x_test, y_test) = mnist.load_data()

x_train = x_train.reshape(60000, 784)
x_test = x_test.reshape(10000, 784)
x_train = x_train.astype('float32')
x_test = x_test.astype('float32')
x_train /= 255
x_test /= 255

y_train = keras.utils.to_categorical(y_train, num_classes)
y_test = keras.utils.to_categorical(y_test, num_classes)

model = Sequential()
model.add(Dense(512, activation='relu', input_shape=(784,)))
model.add(Dropout(0.2))
model.add(Dense(512, activation='relu'))
model.add(Dropout(0.2))
model.add(Dense(num_classes, activation='softmax'))

model.summary()
Using TensorFlow backend.
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
dense_1 (Dense)              (None, 512)               401920    
_________________________________________________________________
dropout_1 (Dropout)          (None, 512)               0         
_________________________________________________________________
dense_2 (Dense)              (None, 512)               262656    
_________________________________________________________________
dropout_2 (Dropout)          (None, 512)               0         
_________________________________________________________________
dense_3 (Dense)              (None, 10)                5130      
=================================================================
Total params: 669,706
Trainable params: 669,706
Non-trainable params: 0
_________________________________________________________________
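As a sanity check, the parameter counts in the summary follow directly from the layer shapes: each Dense layer holds an input × output weight matrix plus one bias per output unit, while Dropout layers have no parameters. A quick verification:

```python
# Parameter count per Dense layer: weights (inputs * outputs) plus biases (outputs).
dense_1 = 784 * 512 + 512   # 401,920
dense_2 = 512 * 512 + 512   # 262,656
dense_3 = 512 * 10 + 10     # 5,130

total = dense_1 + dense_2 + dense_3
print(total)  # 669706, matching model.summary()
```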

To deploy a model with SKIL, you first train it and then persist it with Keras' "save".

In [4]:
model.compile(loss='categorical_crossentropy',
              optimizer='sgd', metrics=['accuracy'])


history = model.fit(x_train, y_train,
                    batch_size=batch_size,
                    epochs=epochs,
                    verbose=1,
                    validation_data=(x_test, y_test))

model.save("model.h5")
Train on 60000 samples, validate on 10000 samples
Epoch 1/5
60000/60000 [==============================] - 4s 70us/step - loss: 1.2081 - acc: 0.6936 - val_loss: 0.5409 - val_acc: 0.8662
Epoch 2/5
60000/60000 [==============================] - 4s 69us/step - loss: 0.5309 - acc: 0.8489 - val_loss: 0.3808 - val_acc: 0.8954
Epoch 3/5
60000/60000 [==============================] - 4s 61us/step - loss: 0.4261 - acc: 0.8756 - val_loss: 0.3236 - val_acc: 0.9086
Epoch 4/5
60000/60000 [==============================] - 4s 70us/step - loss: 0.3741 - acc: 0.8908 - val_loss: 0.2927 - val_acc: 0.9164
Epoch 5/5
60000/60000 [==============================] - 4s 74us/step - loss: 0.3434 - acc: 0.8989 - val_loss: 0.2711 - val_acc: 0.9219
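The "loss" reported above is categorical cross-entropy, which for a single sample is the negative sum of the one-hot target times the log of the predicted probabilities; with one-hot targets this reduces to minus the log-probability assigned to the true class. A plain NumPy illustration (with made-up numbers, not taken from this model):

```python
import numpy as np

# One-hot target (true class is digit 7) and a softmax-like prediction.
y_true = np.zeros(10)
y_true[7] = 1.0
y_pred = np.full(10, 0.01)
y_pred[7] = 0.91  # probabilities sum to 1.0

# Categorical cross-entropy: -sum(y_true * log(y_pred)).
# With a one-hot target this is just -log(p[true_class]).
loss = -np.sum(y_true * np.log(y_pred))
print(round(loss, 4))  # 0.0943
```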

SKIL organizes your work in workspaces, which are the basis for all experiments you want to run. Once your experiment is set up, you can register your Keras model in it as a SKIL Model.

In [5]:
from skil import Skil, WorkSpace, Experiment, Model

skil_server = Skil()
work_space = WorkSpace(skil_server)
experiment = Experiment(work_space)
model = Model('model.h5', model_id="keras_model", experiment=experiment)
'>>> Authenticating SKIL...'
'>>> Done!'
'>>> Uploading model, this might take a while...'
[   {'file_content': None,
 'file_name': 'model.h5',
 'key': 'file',
 'path': '/opt/skil/plugins/files/MODEL/model.h5',
 'status': 'uploaded',
 'type': None}]

SKIL now has access to your model, and you can deploy it as a service like this. (The deployment might take a few seconds, but you'll be notified once the model server is up.)

In [6]:
from skil import Deployment, Service

deployment = Deployment(skil_server, "keras_deployment")
service = model.deploy(deployment)
{'created': 1539791800629,
 'deployment_id': 0,
 'extra_args': None,
 'file_location': None,
 'id': 0,
 'jvm_args': None,
 'labels_file_location': None,
 'launch_policy': {'@class': 'io.skymind.deployment.launchpolicy.DefaultLaunchPolicy',
                   'maxFailuresMs': 300000,
                   'maxFailuresQty': 3},
 'model_state': None,
 'model_type': 'model',
 'name': 'model.h5',
 'scale': 1.0,
 'state': 'stopped',
 'sub_type': None,
 'updated': None}
'>>> Starting to serve model...'
'>>> Waiting for deployment...'
'>>> Waiting for deployment...'
'>>> Model server started successfully!'

That's it! You can now get predictions from your deployed service. SKIL will make sure your service is up and running at all times.
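Note that inputs to the service must be preprocessed exactly as the training data was: flattened to 784 values, cast to float32, and scaled to [0, 1]. A minimal sketch for a single raw 28x28 image (random dummy pixels standing in for real data):

```python
import numpy as np

# Dummy 28x28 grayscale image with uint8 pixel values, standing in for real data.
raw_image = np.random.randint(0, 256, size=(28, 28), dtype=np.uint8)

# Same preprocessing as the training data: flatten, cast, scale to [0, 1].
x = raw_image.reshape(1, 784).astype('float32') / 255

print(x.shape)  # (1, 784) -- one row per image, ready to pass to predict
```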

In [7]:
print(service.predict(x_test[:10]))
[[1.8891458e-04 9.7403280e-06 2.8806008e-04 1.9406644e-03 9.7655430e-06
  1.8945061e-04 9.3743090e-07 9.9551290e-01 8.0545310e-05 1.7790272e-03]]
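Each row of the result is a softmax distribution over the ten digit classes; the predicted label is the index with the highest probability. For the row shown above (values copied from that output):

```python
import numpy as np

# Class probabilities for one test image, copied from the service output above.
probs = np.array([1.8891458e-04, 9.7403280e-06, 2.8806008e-04, 1.9406644e-03,
                  9.7655430e-06, 1.8945061e-04, 9.3743090e-07, 9.9551290e-01,
                  8.0545310e-05, 1.7790272e-03])

# The index of the largest probability is the predicted digit.
predicted_digit = int(np.argmax(probs))
print(predicted_digit)  # 7 -- the model assigns ~99.6% probability to this class
```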