In this chapter we will review the two key concepts that make Julia stand out, namely multiple dispatch and its type system.

Defining functions follows a rather intuitive syntax. The value obtained by evaluating the last expression of a `function` block will be automatically returned:

In [ ]:

```
function mymult(x, y)
    x * y
end
```

For one-line functions one may also use a convenient short-hand:

In [ ]:

```
mysquare(x) = mymult(x, x)
```

Both such functions are fully generic in their argument types:

In [ ]:

```
@show mysquare(2) # Use integer arithmetic
@show mymult(-1, 3. + 2im) # Use complex arithmetic
@show mysquare(" abc "); # Use string concatenation
```

Notice that for each combination of argument types a separate piece of code will be compiled, even though we only *defined* the functionality a single time. This compilation takes place on the first use of a particular tuple of types.
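One can observe this compilation overhead directly by timing the first and the second call with a given type combination (a quick sketch; the exact timings will vary from machine to machine):

```julia
mymult(x, y) = x * y

t_first  = @elapsed mymult(2.0, 3.0)   # triggers JIT compilation for (Float64, Float64)
t_second = @elapsed mymult(4.0, 5.0)   # reuses the already compiled code
println("first call: $t_first s, second call: $t_second s")
```

Typically the first call is orders of magnitude slower than the second, since it includes the compilation time.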

Functions by themselves are objects

In [ ]:

```
mymult
```

and may be passed around to other functions, for example:

In [ ]:

```
"""The fold function, applying f from the left and accumulating the results"""
function myfold(f, x, y, z)
    f(f(x, y), z)
end
myfold(mymult, "Hello", " Julia ", "World")
```

Julia makes a distinction between **functions** and **methods**. Methods are concrete implementations, in the form of a list of Julia expressions to be executed when the function name is used in the code. Multiple methods may be defined for the same function name. They differ in the number of arguments or in the supported argument types (more on this in a second). The resulting function object represents the mapping from the function name (e.g. `myfold`) to the respective methods. Its job is to mediate between the different methods, namely to transparently select one method for each call of the function in other Julia code. This **dispatch** is done based on the passed arguments and their types, such that the best-fitting method is selected.

For our `myfold` example, one could easily imagine a few more method implementations, for example:

In [ ]:

```
myfold(f, x) = x
myfold(f, x, y) = f(x, y)
```

So now `myfold` works transparently with 1, 2 or 3 arguments:

In [ ]:

```
@show myfold(mymult, 2., 3.)
@show myfold(+, 1)
@show myfold(==, false, false, true)
```

As an aside we note that making `myfold` work with a variable number of arguments is also possible:

In [ ]:

```
myfold(f, x, y, rest...) = myfold(f, f(x, y), rest...)
```

If you want to know more on this: the `...` is known as an ellipsis in the function argument list and as the splatting operator in the call towards the end of the line.
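A quick illustration of both sides (the names `slurp` and `args` are our own):

In [ ]:

```julia
# Slurping: `rest...` in the argument list collects trailing arguments into a tuple
slurp(first, rest...) = rest

# Splatting: `args...` in a call expands a collection into individual arguments
args = (1, 2, 3)
@show slurp("a", "b", "c")   # rest becomes ("b", "c")
@show +(args...)             # equivalent to 1 + 2 + 3
```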

A small project, which we will tackle during the course, is to develop a simple code for simulating the classical dynamics of interacting particles.

Assume we are given a potential $V(x)$ for a single particle with unit mass at position $x$. We want to propagate the particle in time, that is, integrate Newton's famous equation of motion $$ -\frac{dV}{dx} = \frac{d^2x}{dt^2}$$ in time. Defining the acceleration map $$ A_V = -\frac{dV}{dx}$$ this may be written equivalently as $$\left\{\begin{array}{l} \frac{dv}{dt} = A_V(x) \\ \frac{dx}{dt} = v \end{array}\right. $$

At first we will just apply the forward-Euler scheme: we discretise in time using the interval $\Delta t$, which leads to sequences $t^{(n)} = n \Delta t + t^{(0)}$ and similarly $v^{(n)}$, $x^{(n)}$, etc.: $$\left\{\begin{array}{l} v^{(n+1)} = v^{(n)} + A_V(x^{(n)}) \Delta t\\ x^{(n+1)} = x^{(n)} + v^{(n)} \Delta t\\ \end{array}\right. .$$ This is transformed into a Julia function in a single line:

In [ ]:

```
euler(A, Δt, xn, vn) = xn + vn * Δt, vn + A(xn) * Δt
```

where `A` is a function object to be passed to `euler`, for example:

In [ ]:

```
A(x) = -x
euler(A, 0.1, 0, 1)
```

- Take timesteps of $\Delta t = 0.1$ and start at $x^{(0)} = 0$ and $v^{(0)} = 1$. Propagate the dynamics for 5 steps in the harmonic potential $V = x^2$.
- Now do the same thing using the confining potential: $$ V_\alpha(x) = \left\{ \begin{array}{ll} (|x| - 1)^\alpha & |x| > 1 \\ 0 & \text{else}\end{array}\right.$$ for $\alpha = 4$.

Before we discuss multiple dispatch of functions and dispatch by types, we briefly review Julia's type system. Types in Julia fall into two categories: **abstract** and **concrete**. Abstract types such as `Integer` or `Number` are supertypes of a bunch of other types, for example:

In [ ]:

```
Int32 <: Integer # Read Int32 is-a Integer
```

In [ ]:

```
UInt16 <: Integer
```

In [ ]:

```
Float32 <: Integer
```

In [ ]:

```
Float32 <: Number
```

In [ ]:

```
Integer <: Number
```

In [ ]:

```
# by transitivity:
@show Int32 <: Number
@show UInt16 <: Number
@show Number <: Number;
```

In Julia concrete types are always a leaf of the type tree, i.e. they cannot be inherited from each other. For a C++ or Python person (as I was before looking into Julia) this seems restrictive at first, but it takes away a lot of unnecessary complexity from the type hierarchy. In Julia the structure of a library or a problem is in many cases not converted into explicit type hierarchies, as it would be in OOP languages like Python or C++. Instead it builds up implicitly from conventions which are associated with abstract or concrete types.

For example, if you derive from the abstract type `AbstractMatrix` you are expected to implement a certain set of functions (e.g. a way to obtain an element for a given index, something which returns the size of the matrix, etc.). Otherwise not all of the standard library and other linear algebra packages will work. The difference to a hard enforcement of interfaces, as in C++ or Java, is, however, that *some things* will still work. This has disadvantages, as your code could break in the future, but it is extremely useful for rapidly trying something out. Afterwards one should of course still implement the full expected interface of an `AbstractMatrix`, consulting the Julia documentation to ensure the code is fully compatible.
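To make this concrete, here is a minimal sketch of such a type (the name `OnesMatrix` is our own invention): implementing just `size` and `getindex` is already enough for many generic routines from the standard library to work.

In [ ]:

```julia
using LinearAlgebra

# A toy n×n matrix of ones, stored implicitly (OnesMatrix is our own toy type).
struct OnesMatrix <: AbstractMatrix{Int}
    n::Int
end

# The essential part of the informal AbstractMatrix interface:
Base.size(A::OnesMatrix) = (A.n, A.n)
Base.getindex(A::OnesMatrix, i::Int, j::Int) = 1

A = OnesMatrix(3)
@show A * [1, 2, 3]   # the generic matrix-vector product already works
```

Without implementing anything else, display, iteration and generic linear algebra fall back to these two functions.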

In programming language theory, type systems traditionally fall into two categories. In **dynamically typed** languages the type of a value or expression is inferred only at runtime, which usually allows for more flexible code. Examples are Python or MATLAB. In contrast, so-called **statically typed** languages (think FORTRAN or C++) require types to be known before runtime, when the program is compiled. This allows both to check more thoroughly for errors (which can manifest in mismatched types) and usually brings a gain in performance, because more is known about the memory layout of the program at compile time. As a result, aspects such as vectorisation, contiguous alignment of data and preallocation of memory can be leveraged more easily.

Julia is kind of both. Strictly speaking it is dynamically typed, e.g. a variable can change its type at any point:

In [ ]:

```
a = 4
println(typeof(a))
a = "bla"
println(typeof(a))
```


Note, however, that the type of a *value* cannot change in Julia.^{1}

Still, Julia's strong emphasis on types is one of the reasons for its performance.
Unlike in statically typed languages, however, **type deduction in Julia** happens at runtime, right before JIT-compiling a function: the more narrowly the types for a particular piece of code can be deduced, the better the emitted machine code can be optimised. One can influence this using explicit type annotations in function arguments and intermediate expressions. Due to the excellent type inference capabilities of Julia, however, this is in general not needed.

This might sound a bit unusual at first, but the result is that it takes almost no effort to write generic code: just leave off all the type annotations. Notice that this only means the code carries no type annotations. At runtime types are still inferred as much as possible, such that aspects like vectorisation, contiguous alignment of data and preallocation of memory *can* be taken into account by the Julia compiler.

To make the Julia compiler appear less like a mysterious black box, let us check up on some of its inner workings. This is fortunately quite simple, since Julia is equipped with a number of helpful tools providing insight into the code generation step. One of these is the macro `@code_typed`:

In [ ]:

```
@code_typed euler(x -> -2x, 0.1, 0.0, 1.0)
```

In [ ]:

```
@code_typed euler(x -> -2x, 0.1f0, 0.0f0, 1.0f0) # Why not use single precision?
```

In [ ]:

```
@code_typed myfold(mymult, "Hello", " Julia ", " World!")
```

It shows exactly the types which are inferred during the JIT compilation procedure. In this sense Julia functions very much resemble the behaviour of fully templated code in C++, albeit with a much more condensed syntax. Additionally, compile-on-first-use implies that methods are only ever compiled for those combinations of types which are actually used in the code. Again a difference to C++, where heavily templated code often leads to a combinatorial growth in binary size and compile time.

Three more facts about Julia types:

- In Julia all types are on the same footing. For example, there is no fundamental difference between `Int32` and `String`, even though the first has a direct mapping to low-level instructions in the CPU and the latter has not (contrast this with e.g. C++).
- The `Nothing` type with the single instance `nothing` is the Julia equivalent to `void` in C++ or `None` in Python. It often represents that a function does not return anything or that a variable has not been initialised.
- `Any` is the root of the type tree: any type in Julia is a subtype of `Any`.
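These facts are easy to verify interactively:

In [ ]:

```julia
@show typeof(nothing)      # the singleton type Nothing
@show Nothing <: Any       # every type is a subtype of Any
@show isnothing(nothing)
```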

Which of the following types are subtypes of another?
Try to guess first and then verify by using the operator `<:`.

```
Float64 AbstractFloat Integer
Number AbstractArray Complex
Real Any Nothing
```

Let us return to the `mymult` function:

In [ ]:

```
mymult(x, y) = x * y
```

We were able to safely use this function with a number of type combinations, but some things do not yet work:

In [ ]:

```
mymult(2, " abc")
```

Let's say we wanted to concatenate the string `str` $n$ times on multiplication with an integer $n$. In Julia this functionality is already implemented by the exponentiation operator:

In [ ]:

```
"abc"^4
```

So our task is to implement `mymult("abc", 4)` and `mymult(4, "abc")` to behave the same way. We define two special methods:

In [ ]:

```
mymult(str::AbstractString, n::Integer) = str^n
mymult(n::Integer, str::AbstractString) = mymult(str, n)
```

In both of these, the syntax `str::AbstractString` and `n::Integer` means that the respective method is only considered during dispatch if the argument `str` is of type `AbstractString` or one of its concrete subtypes and similarly `n` is an `Integer` (or subtype). Since Julia always dispatches to the most specific method in case multiple methods match, this is all we need to do:

In [ ]:

```
@show mymult(2, " abc")
@show mymult("def ", UInt16(3));
```

Notice that the fully generic

```
mymult(x, y) = x * y
```

is actually an abbreviation for

```
mymult(x::Any, y::Any) = x * y
```

Let us recap. Roughly speaking, what we did to implement `mymult` was:

- The most generic method was used to establish the basic functionality, which works well for almost all cases.
- Only a few specialised methods were needed additionally to capture an extra class of special cases on top.

Arguably this was a very simple example, but the observation reflects a common paradigm in Julia. It is used both to extend existing functions by extra methods for custom types and to provide more performant implementations for special cases.

For example, consider the `det` function for computing the determinant of a matrix, which is implemented exactly in this spirit. One of its methods is kept as generic as possible, assuming nothing about the matrix and computing the determinant by visiting all rows and columns. The result is a slow fallback method which always works. On top of this, a few sensible specialisations are implemented, for example for `UpperTriangular` or `Tridiagonal` matrices. These exploit the additional structure, resulting in a much reduced cost.
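The pattern can be sketched with a toy determinant function (`mydet` is our own stand-in, not the real `det` from `LinearAlgebra`): a slow but fully generic fallback, plus a cheap specialisation for triangular matrices.

In [ ]:

```julia
using LinearAlgebra

# Generic fallback: Laplace expansion along the first column.
# Very slow (factorial cost), but works for any square AbstractMatrix.
function mydet(A::AbstractMatrix)
    n = size(A, 1)
    n == 1 && return A[1, 1]
    sum((-1)^(i + 1) * A[i, 1] * mydet(A[1:end .!= i, 2:end]) for i in 1:n)
end

# Specialisation: for a triangular matrix the determinant is just
# the product of the diagonal entries -- much cheaper.
mydet(A::UpperTriangular) = prod(diag(A))

@show mydet([1 2; 3 4])                    # uses the generic fallback
@show mydet(UpperTriangular([1 2; 0 4]))   # dispatches to the fast method
```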

Let's assume now we program an algorithm which involves a determinant, e.g. the simple normalisation function `normalise(A) = A / det(A)` (where of course we assume `A` to be nonsingular). We note:

- As a user, if I know the structure of my matrix is special, I mark it e.g. with the `Tridiagonal` type. When running `normalise` on it, the `det` method specialised for `Tridiagonal` will be used, so that the best possible performance results.
- As a user, if I do not know the structure of the matrix, I leave it general. Then the slow `det` will still give me a sensible result.
- As a programmer writing the `normalise` algorithm, I do not need to know what kind of matrix my users will pass me, since the dispatch of `det` will assure the best thing happens. Most notably, even if I have standard matrices in mind when writing the code, my `normalise` will still work if a passed matrix `A` is non-standard, because the fallback `det` will do the job.
- As a programmer writing a custom matrix `A` which adheres to all conventions of the `AbstractMatrix` interface, I instantly have a working `normalise` function, even if I could be more clever about it. This way I can rapidly prototype my fancy new matrix type and only later implement a specialised `det` once I see it is necessary to achieve better performance. Most importantly, however, I can specialise `det` (from `LinearAlgebra`) in the code where I define `A` and *still* the `normalise` function (which could be third-party code) will speed up (more on this below).
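The first two points can be tried out directly (a small sketch; `normalise` as defined above, `det` and `Tridiagonal` from the `LinearAlgebra` standard library):

In [ ]:

```julia
using LinearAlgebra

normalise(A) = A / det(A)   # fully generic; works for any matrix type with a det method

D = Tridiagonal([1.0, 1.0], [2.0, 2.0, 2.0], [1.0, 1.0])
@show normalise(D)           # dispatches to the det method specialised for Tridiagonal
@show normalise(Matrix(D))   # dense matrix: a more generic det method, same result
```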

Notice that methods can also be parametrised over the types of multiple arguments, for example:

In [ ]:

```
equal_type(x::T, y::T) where {T} = true
equal_type(x, y) = false
```

In [ ]:

```
equal_type(1, 1.2)
```

In [ ]:

```
equal_type("abc", "ef")
```

In [ ]:

```
equal_type_and_integer(x::T, y::T) where {T <: Integer} = true
equal_type_and_integer(x, y) = false
```

In [ ]:

```
@show equal_type_and_integer(1, 2)
@show equal_type_and_integer("abc", "ef");
```

Without a doubt designing the methods' type annotations needs some practice. For more details and plenty of more options to influence type-specific dispatch, see the Julia documentation on methods.

Consider the functions

In [ ]:

```
characterise(x::String, y::Nothing) = ""
characterise(x::Nothing, y::Complex) = "Only one of us is complex"
characterise(x::Number, ::Nothing) = "I'm a number, Jack, and I'm ok"
characterise(x::AbstractFloat, y::Integer) = "Pretty abstract"
characterise(x::BigFloat, y) = "I'm big"
characterise(x, y::Union{BigFloat, BigInt}) = "We're big"
characterise(x::Float32, y::Union{Int32, Int64}) = "pretty wide"
```

Here `Union{A, B}` means that the type may either be `A` or `B`.

For each of the following calls, try to determine which method will be called (if any) and verify by running the code:

`characterise(0.1, UInt32(3))`

`characterise(0.1, 3)`

`characterise(nothing, im)`


`characterise(Float32(0.1), UInt32(3))`

`characterise(Float32(0.1), 3)`

In [ ]:

```
# For example:
characterise("abc", 1.2)
```

Plenty of standard functions are already defined in Julia `Base`. This includes:

- All operators `+`, `*`, `-`, `≈` (`isapprox`)
- Standard functions such as `exp`, `sin`, `cos`, `abs`, ...
- `inv` to compute the multiplicative inverse (e.g. the matrix inverse)
- `min`, `max` and `minmax`

But also a few special cases worth mentioning:

- `cis` for $\exp(i x)$
- `sinpi` and `cospi` for computing $\sin(\pi x)$ and $\cos(\pi x)$ more accurately
- `cbrt` for computing the cube root
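A quick demonstration of the special cases:

In [ ]:

```julia
@show cis(π)         # exp(im * π) -- note the tiny imaginary residue
@show sinpi(1.0)     # exactly zero ...
@show sin(π * 1.0)   # ... while the naive version is only approximately zero
@show cbrt(27.0)
```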

Just as for our own functions, we can also define additional methods for functions from other packages. This extends to functions from `Base` Julia. So instead of defining our custom `mymult` function, we could equally well have overloaded the `*` operator from `Base` for integers and strings:

In [ ]:

```
import Base: *
*(str::AbstractString, n::Integer) = str^n
*(n::Integer, str::AbstractString) = str^n
```

In [ ]:

```
@show 4 * "abc"
```

which is actually equivalent to

In [ ]:

```
*(4, "abc")
```

For evaluating the latter expression, Julia needs to determine which method of the function `Base.*` to execute. For this, *both* argument types are taken into account, not just the first or the second. This is **multiple dispatch**, namely the fact that for dispatching to a method definition the types of *all* arguments matter.

This probably does not sound so impressive, but let us discuss it a little further, staying with the example of multiplication. We consider implementing a binary operator `*(n::Float64, str::SparseMatrixCSC)`, which multiplies a number with a sparse matrix in compressed-sparse-column storage format:

- In many languages such as Python, implementing such a binary operator translates to a special operator function. In the Python case this is `__mul__`, which needs to be implemented for the LHS type, i.e. `Float64`. In other words, which method to call for `*` is decided solely by considering the type of the *first* argument, hence the name *single dispatch*.
- This is not always the most clever thing to do. In our case one could well argue that `SparseMatrixCSC` should be the place where the operator is implemented, because the net effect will be to scale this matrix. Amongst other reasons, this is why Python rolled out a second set of functions, called `__rmul__`, which dispatch on the *second* argument. Problem solved, right?
- Yes, for `*(n::Float64, str::SparseMatrixCSC)` we are good, but a new conceptual problem results. Let's say I want to use a custom floating-point format `Double64` in my code and specifically implement `*(n::Double64, str::SparseMatrixCSC)`. Both `Double64` and `SparseMatrixCSC` are from third-party packages, which know nothing about one another. In Python I have roughly three options:
  - Implement `__rmul__` in `SparseMatrixCSC`. That's bad because it adds a dependency on `Double64` inside `SparseMatrixCSC`.
  - Implement `__mul__` in `Double64` ... same issue in reverse.
  - Derive off e.g. `SparseMatrixCSC` in my code and implement it there. Works, but I now have a part of my library that is closely coupled to the API of `SparseMatrixCSC`. If anything changes in `SparseMatrixCSC`, I have to follow along or my code breaks.
- In C++ this very problem of implementing multiplication is not so prominent, because `operator*` can also be a free function. For other binary operations (like type conversion) the problem is, however, exactly the same: as soon as two non-standard third-party types are involved and my code needs to connect them, I have to make one more special than the other.

The aforementioned problem is known as the *binary method problem* and is elegantly solved by multiple dispatch, because both arguments are treated on the same footing. So in Julia I just need to do

```
import Base: *
*(n::Double64, str::SparseMatrixCSC) = ...
```

in my code, without changing any `Double64` or `SparseMatrixCSC` code, because from Julia's point of view my method will now be the most specific for the type tuple `(Double64, SparseMatrixCSC)`.
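The same situation can be mimicked with two toy stand-in types (the names `MyNumber` and `MyMatrix` are our own; imagine they come from two unrelated packages):

In [ ]:

```julia
struct MyNumber            # imagine this comes from package A
    x::Float64
end

struct MyMatrix            # imagine this comes from package B
    data::Matrix{Float64}
end

# Our glue code connects the two without touching either "package":
import Base: *
*(n::MyNumber, A::MyMatrix) = MyMatrix(n.x * A.data)

(MyNumber(2.0) * MyMatrix([1.0 2.0; 3.0 4.0])).data
```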

For further reading:

- The PhD thesis of Jeff Bezanson
- https://www.youtube.com/watch?v=kc9HwsxE1OY
- https://docs.julialang.org/en/v1/manual/methods/

^{1}I ignore some dirty tricks for performance optimisation here, where one can reinterpret a bit of memory as some other type ...