This first cell instantiates a Julia project environment, reproducing a collection of mutually compatible packages for use in this demonstration:

In [1]:

```
using Pkg
Pkg.activate(@__DIR__)
Pkg.instantiate()
```

In [2]:

```
print("Hello world!")
```

In [3]:

```
2 + 2
```

Out[3]:

In [4]:

```
typeof(42.0)
```

Out[4]:

You will never see anything like `A.add(B)` in Julia, because Julia is not a traditional object-oriented language. In Julia, function and structure are kept separate, with the help of abstract types and multiple dispatch, as we explain next.

In addition to regular concrete types, such as `Float64` and `String`, Julia has a built-in hierarchy of *abstract* types. These
generally have subtypes but no instances:
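As a quick self-contained sketch (the `Shape` and `Circle` types here are hypothetical, not built into Julia), an abstract type can be declared and subtyped, but not instantiated:

```julia
abstract type Shape end        # hypothetical abstract type: no fields, no instances

struct Circle <: Shape         # a concrete subtype, which *can* be instantiated
    radius::Float64
end

isabstracttype(Shape)          # true
Circle(1.0) isa Shape          # true
# Shape()                      # would error: abstract types cannot be instantiated
```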

In [11]:

```
typeof(42)
```

Out[11]:

In [12]:

```
supertype(Int64)
```

Out[12]:

In [13]:

```
supertype(Signed)
```

Out[13]:

In [14]:

```
subtypes(Integer)
```

Out[14]:

In [15]:

```
Bool <: Integer # is Bool a subtype of Integer?
```

Out[15]:

In [16]:

```
Bool <: String
```

Out[16]:

In Julia, which is optionally typed, one uses type annotations to adapt the behaviour of functions to the types of their arguments. If we define

In [17]:

```
divide(x, y) = x / y
```

Out[17]:

then `divide(x, y)` will make sense whenever `x / y` makes sense (for the built-in function `/`). For example, we can use it to divide two
integers, or two matrices:

In [18]:

```
divide(1, 2)
```

Out[18]:

In [19]:

```
divide([1 2; 3 4], [1 2; 3 7])
```

Out[19]:

To vary the behaviour for specific types we add type annotations:

In [20]:

```
divide(x::Integer, y::Integer) = floor(x/y)
divide(x::String, y::String) = join([x, y], " / ")
divide(1, 2)
```

Out[20]:

In [21]:

```
divide("Hello", "World!")
```

Out[21]:

In the case of `Float64` arguments the original "fallback" method still applies:

In [22]:

```
divide(1.0, 2.0)
```

Out[22]:

Users can define their own abstract types and composite types:

In [23]:

```
abstract type Organism end

struct Animal <: Organism
    name::String
    is_herbivore::Bool
end

struct Plant <: Organism
    name::String
    is_flowering::Bool
end

describe(o::Organism) = string(o.name) # fallback method

function describe(p::Plant)
    if p.is_flowering
        text = " is a flowering plant."
    else
        text = " is a non-flowering plant."
    end
    return p.name * text
end
```

Out[23]:

In [24]:

```
describe(Animal("Elephant", true))
```

Out[24]:

In [25]:

```
describe(Plant("Fern", false))
```

Out[25]:
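The dispatch rule at work in these examples: when several methods are applicable, Julia calls the most specific one. A minimal self-contained sketch (the function `f` is hypothetical):

```julia
f(x) = "fallback"             # applies to any argument
f(x::Integer) = "integer"     # more specific: preferred for integer arguments

f(1.5)   # "fallback" — no Integer method applies to a Float64
f(1)     # "integer"  — the most specific applicable method wins
```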

For more on multiple dispatch, see this blog post by Christopher Rackauckas.

Automatic differentiation of almost arbitrary programs with respect to their input. (source: @matbesancon)

In [26]:

```
using ForwardDiff

function sqrt_babylonian(s)
    x = s / 2
    while abs(x^2 - s) > 0.001
        x = (x + s/x) / 2
    end
    x
end
```

Out[26]:

In [27]:

```
sqrt_babylonian(2) - sqrt(2)
```

Out[27]:

In [28]:

```
@show ForwardDiff.derivative(sqrt_babylonian, 2);
@show ForwardDiff.derivative(sqrt, 2);
```
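As a sanity check (not part of the original notebook), we can compare the ForwardDiff result with the analytic derivative, d√s/ds = 1/(2√s):

```julia
using ForwardDiff

# Babylonian square-root iteration, as defined above
function sqrt_babylonian(s)
    x = s / 2
    while abs(x^2 - s) > 0.001
        x = (x + s/x) / 2
    end
    x
end

analytic = 1 / (2 * sqrt(2))                      # exact derivative of sqrt at s = 2
ad = ForwardDiff.derivative(sqrt_babylonian, 2)   # AD propagated through the while loop
abs(ad - analytic)                                # a small number
```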

Physicists' dreams finally come true. (source: @matbesancon)

In [29]:

```
using Unitful
using Unitful: J, kg, m, s
```

In [30]:

```
3J + 1kg * (1m / 1s)^2
```

Out[30]:
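Unitful checks dimensions at the type level, so adding incompatible quantities is an error rather than a silent bug. A small sketch:

```julia
using Unitful
using Unitful: J, kg, m, s

energy = 3J + 1kg * (1m / 1s)^2   # 4 J: a kg·m²/s² is dimensionally a joule
# 3J + 1kg                        # would throw a DimensionError: energy vs mass

uconvert(u"mJ", energy)           # explicit conversion to millijoules
```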

MLJ (Machine Learning in Julia) is a toolbox written in Julia providing a common interface and meta-algorithms for selecting, tuning, evaluating, composing and comparing machine learning models written in Julia and other languages. In particular MLJ wraps a large number of scikit-learn models.

MLJ's key goals:

- Offer a consistent way to use, compose and tune machine learning models in Julia.
- Promote the improvement of the Julia ML/Stats ecosystem by making it easier to use models from a wide range of packages.
- Unlock performance gains by exploiting Julia's support for parallelism, automatic differentiation, GPU, optimisation etc.

Key features:

- Data agnostic: train models on any data supported by the Tables.jl interface.
- Extensive support for model composition (*pipelines* and *learning networks*).
- Convenient syntax to tune and evaluate (composite) models.
- Consistent interface to handle probabilistic predictions.
- Extensible tuning interface, supporting a growing number of optimization strategies, and designed to play well with model composition.

More information is available from the MLJ design paper.

Here's how to generate the full list of models supported by MLJ:

In [31]:

```
using MLJ
models()
```

Out[31]:

The following example shows how to evaluate the performance of a supervised learning model in MLJ. We'll start by loading a well-known canned data set:

In [32]:

```
X, y = @load_iris;
```

Here `X` is a table of input features, and `y` the target observations (iris species).
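To see how MLJ interprets these columns, one can inspect their scientific types with `schema` and `scitype` (a sketch, continuing the session above):

```julia
schema(X)     # each feature column has scientific type Continuous
scitype(y)    # AbstractVector{Multiclass{3}} — three iris species
```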

Next, we can inspect a list of models that apply immediately to this data:

In [33]:

```
models(matching(X, y))
```

Out[33]:

We'll choose one and invoke the `@load` macro, which simultaneously loads the code for the chosen model and instantiates the model using default hyper-parameters:

In [34]:

```
tree_model = @load RandomForestClassifier pkg=DecisionTree
```

Out[34]:

Now we can evaluate its performance using, say, 6-fold cross-validation and the `cross_entropy` performance measure:

In [35]:

```
evaluate(tree_model, X, y, resampling=CV(nfolds=6, shuffle=true), measure=cross_entropy)
```

Out[35]:

We'll now evaluate the performance of our model by hand, using a simple holdout set, to illustrate a typical `fit!` and `predict` workflow.

First note that a *model* in MLJ is an object that serves only as a container for the model's hyper-parameters. A *machine* is an object binding a model to some data, and is where *learned* parameters are stored (among other things):

In [36]:

```
tree = machine(tree_model, X, y)
```

Out[36]:

To split the data into training and testing sets, you can use the function `partition` to obtain indices for data points that should be considered either as training or testing data:

In [37]:

```
train, test = partition(eachindex(y), 0.7, shuffle=true)
test[1:3]
```

Out[37]:

To fit the machine, you can use the function `fit!`, specifying the rows to be used for the training:

In [38]:

```
fit!(tree, rows=train)
```

Out[38]:

Note that this modifies the machine, which now contains the trained parameters of the decision tree. You can inspect the result of the fitting with the `fitted_params` method:

In [39]:

```
fitted_params(tree)
```

Out[39]:

You can now use the machine to make predictions with the `predict` function, specifying the rows to be used for the prediction:

In [40]:

```
ŷ = predict(tree, rows=test)
@show ŷ[1]
```

Out[40]:

Note that the output is probabilistic, effectively a vector with a score for each class. You could get the mode by using the `mode` function on `ŷ`, or by using `predict_mode`:

In [41]:

```
ȳ = predict_mode(tree, rows=test)
@show ȳ[1]
@show mode(ŷ[1])
```

Out[41]:

To measure the discrepancy between `ŷ` and `y` you could use the average cross entropy:

In [42]:

```
mce = cross_entropy(ŷ, y[test]) |> mean
round(mce, digits=4)
```

Out[42]:
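For a single observation, cross-entropy is −log of the probability the model assigns to the true class. A hand-rolled sketch, assuming `ŷ`, `y` and `test` from the cells above (unlike MLJ's `cross_entropy`, this version does not clamp zero probabilities):

```julia
# pdf(d, c) is the probability the distribution d assigns to class c
mce_by_hand = mean(-log(pdf(ŷ[i], y[test][i])) for i in eachindex(ŷ))
```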

As in other frameworks, MLJ also supports a variety of unsupervised models for pre-processing data, reducing dimensionality, etc. It also provides a wrapper for tuning model hyper-parameters in various ways. Data transformations, and supervised models are then typically combined into linear pipelines. However, a more advanced feature of MLJ not common in other frameworks allows you to combine models in more complicated ways. We give a simple demonstration of that next.
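As a sketch of a simple linear pipeline (the `Pipeline` constructor is assumed here from recent MLJ versions; it does not appear in the original notebook):

```julia
# standardize the inputs, then fit the classifier loaded earlier:
pipe = Pipeline(Standardizer(), tree_model)
mach = machine(pipe, X, y)
```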

We start by loading the model code we'll need:

In [43]:

```
@load RidgeRegressor pkg=MultivariateStats
@load RandomForestRegressor pkg=DecisionTree;
```

The next step is to define a "learning network" - a kind of blueprint for the new composite model type. Later we "export" the network as a new stand-alone model type.

Our learning network will:

- standardize the input data
- learn and apply a Box-Cox transformation to the target variable
- blend the predictions of two supervised learning models - a ridge regressor and a random forest regressor (we'll blend using a simple average; for a more sophisticated stacking example, see here)
- apply the *inverse* Box-Cox transformation to this blended prediction

The basic idea is to proceed as if one were composing the various steps "by hand", but to wrap the training data in "source nodes" first. In place of production data, one typically uses some dummy data, to test the network as it is built. When the learning network is "exported" as a new stand-alone model type, it will no longer be bound to any data. You bind the exported model to production data when you're ready to use your new model type (just like you would with any other MLJ model).

There is no need to `fit!` the machines you create, as this will happen automatically when you *call* the final node in the network (assuming you provide the dummy data).

*Input layer*

In [61]:

```
# define some synthetic data:
X, y = make_regression(100)
y = abs.(y)
train, test = partition(eachindex(y), 0.8);
# wrap as source nodes:
Xs = source(X)
ys = source(y)
```

Out[61]:

*First layer and target transformation*

In [62]:

```
std_model = Standardizer()
stand = machine(std_model, Xs)
W = MLJ.transform(stand, Xs)
box_model = UnivariateBoxCoxTransformer()
box = machine(box_model, ys)
z = MLJ.transform(box, ys)
```

Out[62]:

*Second layer*

In [63]:

```
ridge_model = RidgeRegressor(lambda=0.1)
ridge = machine(ridge_model, W, z)
forest_model = RandomForestRegressor(n_trees=50)
forest = machine(forest_model, W, z)
ẑ = 0.5*predict(ridge, W) + 0.5*predict(forest, W)
```

Out[63]:

*Output*

In [64]:

```
ŷ = inverse_transform(box, ẑ)
```

Out[64]:

No fitting has been done thus far; we have just defined a sequence of operations. We can test the network by fitting the final prediction node and then calling it to retrieve the prediction:

In [65]:

```
fit!(ŷ);
ŷ()[1:4]
```

Out[65]:

To "export" the network as a new stand-alone model type, we can use a macro:

In [66]:

```
@from_network machine(Deterministic(), Xs, ys, predict=ŷ) begin
    mutable struct CompositeModel
        rgs1 = ridge_model
        rgs2 = forest_model
    end
end
```

Here's an instance of our new type:

In [67]:

```
composite = CompositeModel()
```

Out[67]:

Since we made our model mutable, we could change the regressors for different ones.
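For example (a sketch, continuing the session above), one could swap in a differently regularized ridge regressor before re-evaluating:

```julia
composite.rgs1 = RidgeRegressor(lambda=1.0)   # hypothetical alternative hyper-parameter
```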

For now we'll evaluate this model on the famous Boston data set:

In [68]:

```
X, y = @load_boston
evaluate(composite, X, y, resampling=CV(nfolds=6, shuffle=true), measures=[rms, mae])
```

Out[68]:
