A MOJO model can be deployed to production without access to a running H2O or Spark instance. In this example I am using Jupyter with the BeakerX Kotlin kernel.
In theory, all you need to do is add the following dependency:
<dependency>
    <groupId>ai.h2o</groupId>
    <artifactId>h2o-genmodel</artifactId>
    <version>3.10.4.2</version>
</dependency>
Further details can be found at http://docs.h2o.ai/h2o/latest-stable/h2o-docs/productionizing.html
Unfortunately, this does not work in BeakerX, so we need the following workaround:
%%bash
if [ -e genmodel.jar ]; then
    echo "genmodel.jar exists"
else
    # Download the standalone scoring jar from Maven Central (-L follows redirects)
    curl -L -o genmodel.jar http://central.maven.org/maven2/ai/h2o/h2o-genmodel/3.22.0.1/h2o-genmodel-3.22.0.1.jar
fi
genmodel.jar exists
%classpath add jar genmodel.jar
%%classpath add mvn
net.sf.opencsv:opencsv:2.3
com.google.code.gson:gson:2.6.2
ai.h2o:deepwater-backend-api:1.0.4
We load the MOJO model and execute the prediction with the help of the EasyPredictModelWrapper class:
import hex.genmodel.easy.RowData
import hex.genmodel.easy.EasyPredictModelWrapper
import hex.genmodel.MojoModel

// Fill a RowData with the feature values of a single observation
val row = RowData()
row.put("sepal.length", "5.0")
row.put("sepal.width", "3.4")
row.put("petal.length", "1.4")
row.put("petal.width", "0.2")

// Load the MOJO and wrap it for convenient row-by-row scoring
val easyModel = EasyPredictModelWrapper(MojoModel.load("model.mojo"))
val p = easyModel.predictMultinomial(row)
p.label
Versicolor
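Beyond the winning label, the prediction object returned by predictMultinomial also carries per-class probabilities (classProbabilities), indexed in the same order as the model's response domain. Conceptually, the label is just the argmax over those probabilities. Here is a minimal pure-Kotlin sketch of that last step; the domain and probability values below are made up for illustration and do not come from the model above:

```kotlin
// Hypothetical class domain and probabilities, standing in for
// easyModel.responseDomainValues and p.classProbabilities
val domain = arrayOf("Setosa", "Versicolor", "Virginica")
val probs = doubleArrayOf(0.02, 0.95, 0.03)

// The predicted label is the class with the highest probability
val label = domain[probs.indices.maxByOrNull { probs[it] }!!]
println(label)  // Versicolor
```

Printing the full probability vector alongside the label is often more useful in production than the label alone, e.g. to apply a custom confidence threshold before acting on a prediction.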