This document provides an introduction to using `SymPy` within `Julia`. It owes an enormous debt to the tutorial for using SymPy within Python which may be found here. The overall structure and many examples are taken from that work, with adjustments and additions to illustrate the differences due to using `SymPy` within `Julia`.

This tutorial can be read as an `IJulia` notebook here.

After installing `SymPy`, which is discussed in the package's `README` file, we must first load it into `Julia` with the standard command `using`:

In [ ]:

```
using SymPy
```

The startup time is a bit lengthy.

The main feature of `SymPy` is the introduction of symbolic variables, which differ quite a bit from `Julia`'s variables. Symbolic variables do not immediately evaluate to a value; rather, the "symbolicness" propagates when they are interacted with. To keep things manageable, SymPy does some simplifications along the way.

Symbolic expressions are primarily of the `Sym` type and can be constructed in the standard way:

In [ ]:

```
x = Sym("x")
```

Out[ ]:

This creates a symbolic object `x`, which can be manipulated through further function calls.

There is the `@syms` macro, which makes creating multiple variables a bit less typing, as it creates variables in the local scope; no assignment is necessary. Compare these similar ways to create symbolic variables:

In [ ]:

```
@syms a b c
a,b,c = Sym("a,b,c")
```

Out[ ]:

(For now, `@vars` is also an alias for `@syms`.)

There is also a `symbols` constructor for producing symbolic objects. With `symbols` it is possible to pass assumptions onto the variables. A list of possible assumptions is here. Some examples are:

In [ ]:

```
u = symbols("u")
x = symbols("x", real=true)
y1, y2 = symbols("y1, y2", positive=true)
alpha = symbols("alpha", integer=true, positive=true)
```

Out[ ]:

As seen, the `symbols` function can be used to make one or more variables with zero, one, or more assumptions.

We jump ahead for a second to show how assumptions are used: `solve` will respect them, here by failing to find solutions to these equations:

In [ ]:

```
solve(x^2 + 1) # ±i are not real
```

Out[ ]:

In [ ]:

```
solve(y1 + 1) # -1 is not positive
```

Out[ ]:

The `@syms` macro can also have assumptions passed in as follows:

In [ ]:

```
@syms u1 positive=true u2 positive=true
solve(u1 + u2) # empty, though solving u1 - u2 is not.
```

Out[ ]:
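For comparison, the same interaction between assumptions and solving can be seen directly in Python's SymPy, the library underlying this package; a minimal sketch:

```python
from sympy import symbols, solve, I

# With a real assumption, x^2 + 1 = 0 has no solutions.
x = symbols("x", real=True)
print(solve(x**2 + 1))      # []

# Without assumptions, the complex roots +/- I are found.
u = symbols("u")
print(solve(u**2 + 1))
```

The assumption travels with the symbol itself, so every later call sees it.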

The `Sym` constructor can also be used to create a variable from a function name in `Base`.

Just as `Julia` has its math constants, like `pi` and `e`, so does `SymPy`. A few of these have `Julia` counterparts provided by `SymPy`. For example, these three constants are defined (where `oo` is for infinity):

In [ ]:

```
PI, E, oo
```

Out[ ]:

(The pretty printing of SymPy objects does not work for tuples.)

Numeric values themselves can be symbolic. This example shows the difference: the first `asin` call dispatches to `Julia`'s `asin` function, the second to `SymPy`'s:

In [ ]:

```
[asin(1), asin(Sym(1))]
```

Out[ ]:

SymPy provides a means to substitute values in for the symbolic expressions. The specification requires an expression, a variable in the expression to substitute in for, and a new value. For example, this is one way to make a polynomial in a new variable:

In [ ]:

```
@syms x y
ex = x^2 + 2x + 1
subs(ex, x, y)
```

Out[ ]:

Substitution can also be numeric:

In [ ]:

```
subs(ex, x, 0)
```

Out[ ]:

The output has no free variables, but is still symbolic.

Substitution for more than one variable can be done by passing in `(variable, value)` tuples:

In [ ]:

```
x,y,z = symbols("x,y,z")
ex = x + y + z
subs(ex, (x,1), (y,pi))
```

Out[ ]:

Pairs can be used for substitution with:

In [ ]:

```
subs(ex, x=>1, y=>pi)
```

Out[ ]:

Symbolic objects also have their `call` method overloaded to allow substitution:

In [ ]:

```
ex(x=>1, y=>pi)
```

Out[ ]:
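The pair notation mirrors Python SymPy's `subs` method, which accepts a list of `(old, new)` tuples; a sketch of the equivalent call there:

```python
from sympy import symbols, pi

x, y, z = symbols("x y z")
ex = x + y + z

# Substitute several variables at once via (old, new) pairs.
res = ex.subs([(x, 1), (y, pi)])
print(res)   # the free variable z remains, plus 1 and pi
```

As on the Julia side, the result still contains any variables that were not substituted for.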

A straight call is also possible, where the order of the variables is determined by `free_symbols`:

In [ ]:

```
ex(1, pi)
```

Out[ ]:

When chaining operations with `Julia`'s pipeline operator, `|>`, a curried form is convenient, as it allows the expression to be implicit:

In [ ]:

```
ex |> subs(x, 1)
```

Out[ ]:

As `subs` is very similar in spirit to `Julia`'s `replace` function, that alias is provided:

In [ ]:

```
ex |> replace(y, pi)
```

Out[ ]:

SymPy provides two means to convert symbolic values to numeric ones: one is `evalf`, the other `N`. Within `Julia` we decouple these, using `N` to also convert to a `Julia`n value and `evalf` to leave the conversion as a symbolic object. The `N` function converts symbolic integers, rationals, irrationals, and complex values, while attempting to find an appropriate `Julia` type for the value.

To see the difference, we use both on `PI`:

In [ ]:

```
N(PI) # floating-point value
```

Out[ ]:

Whereas, while this may look the same, it is still symbolic:

In [ ]:

```
evalf(PI)
```

Out[ ]:

Both `N` and `evalf` allow a precision argument to be passed through the second argument. This is how 30 digits of $\pi$ can be extracted:

In [ ]:

```
N(PI, 30)
```

Out[ ]:

Here `N` produces a `BigFloat` with a precision to match (basically) the specified number of digits. Whereas

In [ ]:

```
evalf(PI, 30)
```

Out[ ]:

leaves the value as a symbolic object with 30 digits of accuracy.
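In Python's SymPy, both behaviors come from `evalf`, whose argument is the number of significant digits; the `Julia` call `evalf(PI, 30)` corresponds to:

```python
from sympy import pi

# 30 significant digits of pi, still a SymPy Float object,
# not a machine float.
approx = pi.evalf(30)
print(approx)
```

Converting to a machine number would then be a separate, explicit step (`float(approx)`), which is the decoupling the `Julia` interface makes with `N`.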

Explicit conversion via `convert(T, ex)` can also be done, and is necessary at times if `N` does not give the desired type.

`SymPy` overloads many of `Julia`'s functions to work with symbolic objects, as seen above with `asin`. The usual mathematical operations such as `+`, `*`, `-`, `/`, etc. work through `Julia`'s promotion mechanism, where numbers are promoted to symbolic objects; others dispatch internally to related `SymPy` functions.

`Julia` values passed to `SymPy` functions are typically promoted to symbolic expressions. This conversion will take math constants to their corresponding `SymPy` counterpart, rational expressions to rational expressions, and floating point values to floating point values. However, there are edge cases. An expression like `1//2 * pi * x` will differ from the seemingly identical `1//2 * (pi * x)`. The former will produce a floating point value from `1//2 * pi` before being promoted to a symbolic instance. Using the symbolic value `PI` makes this expression work either way.

Most of `Julia`'s mathematical functions are overloaded to work with symbolic expressions, and `Julia`'s generic definitions are used as much as possible. This also introduces some edge cases. For example, `x^(-2)` will balk due to the negative, integer exponent, but either `x^(-2//1)` or `x^Sym(-2)` will work as expected, as the former call first dispatches to a generic definition, but the latter two expressions do not.

`SymPy` makes it very easy to work with polynomial and rational expressions. First we create some variables:

In [ ]:

```
@syms x y z
```

Out[ ]:

A polynomial can be represented in expanded or factored form; the functions `factor` and `expand` can move between the two.

For example,

In [ ]:

```
p = x^2 + 3x + 2
factor(p)
```

Out[ ]:

Or

In [ ]:

```
expand(prod([(x-i) for i in 1:5]))
```

Out[ ]:

The `factor` function factors over the rational numbers, so something like this, with obvious factors, is not finished:

In [ ]:

```
factor(x^2 - 2)
```

Out[ ]:

To illustrate some other features, we define a multivariate polynomial `q` by:

In [ ]:

```
q = x*y + x*y^2 + x^2*y + x
```

Out[ ]:

Then we can collect the terms by the variable `x`:

In [ ]:

```
collect(q, x)
```

Out[ ]:

or the variable `y`:

In [ ]:

```
collect(q, y)
```

Out[ ]:

These are identical expressions, though viewed differently.

A more broad-brush approach is to let `SymPy` simplify the values. In this case, the common value of `x` is factored out:

In [ ]:

```
simplify(q)
```

Out[ ]:

The `simplify` function attempts to apply the dozens of functions related to simplification that are part of SymPy. It is also possible to apply these functions one at a time; for example, `trigsimp` does trigonometric simplifications.

The SymPy tutorial illustrates that `expand` can also result in simplifications through this example:

In [ ]:

```
expand((x + 1)*(x - 2) - (x - 1)*x)
```

Out[ ]:

These functions are not restricted to polynomials in a variable. For example, `factor` identifies the following as a factorable object in terms of the variable `exp(x)`:

In [ ]:

```
factor(exp(2x) + 3exp(x) + 2)
```

Out[ ]:

Rational expressions can also be manipulated. For example, define `r` by:

In [ ]:

```
r = 1/x + 1/x^2
```

Out[ ]:

To put the terms of `r` over a common denominator, the `together` function is available:

In [ ]:

```
together(r)
```

Out[ ]:

The `apart` function does the reverse, creating a partial fraction decomposition from a ratio of polynomials:

In [ ]:

```
apart( (4x^3 + 21x^2 + 10x + 12) / (x^4 + 5x^3 + 5x^2 + 4x))
```

Out[ ]:
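A quick check that `apart` and `together` are inverses of each other, sketched in Python's SymPy on the same rational function:

```python
from sympy import symbols, apart, together, simplify

x = symbols("x")
r = (4*x**3 + 21*x**2 + 10*x + 12) / (x**4 + 5*x**3 + 5*x**2 + 4*x)

decomposition = apart(r)              # partial fraction decomposition
recombined = together(decomposition)  # back over a common denominator

# The round trip differs from r only by form, not value.
print(simplify(recombined - r))       # 0
```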

Sometimes SymPy will cancel factors, as here:

In [ ]:

```
top = (x-1)*(x-2)*(x-3)
bottom = (x-1)*(x-4)
top/bottom
```

Out[ ]:

(This might make math faculty a bit upset, but it is in line with student thinking.)

However, with expanded terms, the common factor of `(x-1)` is not cancelled:

In [ ]:

```
r = expand(top) / expand(bottom)
```

Out[ ]:

The `cancel` function instructs SymPy to perform cancellations. It takes rational functions and puts them in a canonical $p/q$ form with no common (rational) factors and leading terms which are integers:

In [ ]:

```
cancel(r)
```

Out[ ]:

For powers, there are three identities to consider:

- $x^a x^b = x^{a+b}$ is always true. However,

- $x^a y^a = (xy)^a$ is only true with assumptions, such as $x, y \geq 0$ and $a$ real, but not in general. For example, $x = y = -1$ and $a = 1/2$ has $x^a \cdot y^a = i \cdot i = -1$, whereas $(xy)^a = 1$.

- $(x^a)^b = x^{ab}$ is only true with assumptions. For example, $x = -1$, $a = 2$, and $b = 1/2$ gives $(x^a)^b = 1^{1/2} = 1$, whereas $x^{ab} = (-1)^1 = -1$.

We see that with assumptions, the following expression does simplify to $0$:

In [ ]:

```
@syms x y nonnegative=true a real=true
simplify(x^a * y^a - (x*y)^a)
```

Out[ ]:

However, without assumptions this is not the case

In [ ]:

```
x,y,a = symbols("x,y,a")
simplify(x^a * y^a - (x*y)^a)
```

Out[ ]:

The `simplify` function calls `powsimp` to simplify powers, as above. The `powsimp` function has the keyword argument `force=true` to force simplification even if assumptions are not specified:

In [ ]:

```
powsimp(x^a * y^a - (x*y)^a, force=true)
```

Out[ ]:
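The same `force` behavior exists in Python's SymPy, shown here for reference:

```python
from sympy import symbols, powsimp

x, y, a = symbols("x y a")
ex = x**a * y**a - (x*y)**a

# Without assumptions, the bases are not combined.
print(powsimp(ex))

# force=True combines bases regardless of assumptions, giving 0.
forced = powsimp(ex, force=True)
print(forced)   # 0
```

As the earlier counterexample with $x = y = -1$, $a = 1/2$ shows, forcing this identity is only safe when the assumptions actually hold.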

For trigonometric expressions, `simplify` will use `trigsimp` to simplify:

In [ ]:

```
theta = symbols("theta", real=true)
p = cos(theta)^2 + sin(theta)^2
```

Out[ ]:

Calling either `simplify` or `trigsimp` will apply the Pythagorean identity:

In [ ]:

```
simplify(p)
```

Out[ ]:

While often forgotten, the `trigsimp` function is, of course, aware of the double angle formulas:

In [ ]:

```
simplify(sin(2theta) - 2sin(theta)*cos(theta))
```

Out[ ]:

The `expand_trig` function will expand such expressions:

In [ ]:

```
expand_trig(sin(2theta))
```

Out[ ]:

The coefficients of a polynomial can be queried. To illustrate, define a general quadratic:

In [ ]:

```
a,b,c,x = symbols("a, b, c, x")
p = a*x^2 + b*x + c
```

Out[ ]:

The `coeff(ex, monom)` function will return the corresponding coefficient of the monomial:

In [ ]:

```
coeff(p, x^2) # a
coeff(p, x) # b
```

Out[ ]:

The constant can be found through substitution:

In [ ]:

```
p(x=>0)
```

Out[ ]:

Though one could use some trick like this to find all the coefficients:

In [ ]:

```
Sym[[coeff(p, x^i) for i in N(degree(p)):-1:1]; p(x=>0)]
```

Out[ ]:

There is a `coeffs` function, but it is defined for polynomial types, so it will fail on `p`:

In [ ]:

```
coeffs(p) # fails
```

Out[ ]:

For this task, the `Poly` constructor can be used. As there is more than one free variable in `p`, we specify the variable `x` below:

In [ ]:

```
q = Poly(p, x)
coeffs(q)
```

Out[ ]:
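In Python's SymPy the same is done by building a `Poly` in the chosen variable; a sketch:

```python
from sympy import symbols, Poly

a, b, c, x = symbols("a b c x")
p = a*x**2 + b*x + c

# Treat p as a polynomial in x only; a, b, c become coefficients.
q = Poly(p, x)
print(q.all_coeffs())   # [a, b, c]
```

Specifying the variable is what resolves the ambiguity: as a polynomial in `a`, say, the coefficient list would be entirely different.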

For a *univariate* polynomial expression (a single variable), the real roots, when available, are returned by `real_roots`. For example,

In [ ]:

```
real_roots(x^2 - 2)
```

Out[ ]:

Unlike `factor` – which only factors over rational factors – `real_roots` finds the two irrational roots here. It is well known (the Abel-Ruffini theorem) that for degree 5 polynomials, or higher, it is not always possible to express the roots in terms of radicals. However, when the roots are rational, `SymPy` can have success:

In [ ]:

```
p = (x-3)^2*(x-2)*(x-1)*x*(x+1)*(x^2 + x + 1)
real_roots(p)
```

Out[ ]:

The degree of `p` is 8, but only the 6 real roots are returned; the double root of $3$ is accounted for. The two complex roots of `x^2 + x + 1` are not considered by this function. The complete set of distinct roots can be found with `solve`:

In [ ]:

```
solve(p)
```

Out[ ]:

This finds the distinct roots, but does not report their multiplicities, as the `roots` function of SymPy does.

The `roots` function is not exported (it would conflict with the `roots` function from the `Polynomials` package), but we can still access it using `p[:roots]()` or its alias `polyroots`.

In general, `p[:roots](args...)` will call `roots(p, args...)` within SymPy. For methods of SymPy objects, the same is true, so if `roots` were a class method, then the call would resolve to `p.roots(args...)`.

The output of `polyroots` will be a dictionary whose keys are the roots and whose values are the multiplicities.

In [ ]:

```
polyroots(p)
```

Out[ ]:
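The underlying SymPy `roots` function, which `polyroots` calls, can be sketched directly in Python:

```python
from sympy import symbols, roots

x = symbols("x")
p = (x - 3)**2 * (x - 2) * (x - 1)

# Keys are the roots, values their multiplicities:
# the double root 3 maps to 2.
print(roots(p))
```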

When exact answers are not available, the `polyroots` call is contentless:

In [ ]:

```
p = x^5 - x + 1
polyroots(p)
```

Out[ ]:

Calling `solve` seems to produce very little as well:

In [ ]:

```
rts = solve(p)
```

Out[ ]:

But in fact, `rts` contains lots of information. We can extract numeric values quite easily with `N`:

In [ ]:

```
[N(r) for r in rts] # or map(N, rts)
```

Out[ ]:

The `nroots` function is also provided, though with this call the answers are still symbolic:

In [ ]:

```
nroots(p)
```

Out[ ]:

The `solve` function is more general purpose than just finding roots of univariate polynomials. The function tries to solve for when an expression is 0, or when a set of expressions are all 0.

For example, it can be used to solve when $\cos(x) = \sin(x)$:

In [ ]:

```
solve(cos(x) - sin(x))
```

Out[ ]:

Though there are infinitely many correct solutions, these are within a certain range.

The `solveset` function is envisioned as a replacement for `solve`. Here we see it gives all solutions:

In [ ]:

```
u = solveset(cos(x) - sin(x))
```

Out[ ]:

The output of `solveset` is a set, rather than a vector or dictionary. Getting at the values requires some work. For *finite sets* we collect the elements with `elements`:

In [ ]:

```
v = solveset(x^2 - 4)
elements(v)
```

Out[ ]:
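In Python's SymPy, `solveset` likewise returns a set object; for finite sets the elements can be compared and iterated directly, a sketch:

```python
from sympy import symbols, solveset, FiniteSet

x = symbols("x")
v = solveset(x**2 - 4, x)

print(v)           # the finite set {-2, 2}
print(sorted(v))   # FiniteSet is iterable: [-2, 2]
```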

The `elements` function does not work for more complicated sets, such as `u`. For these, the `contains` method may be useful.

Solving within SymPy has limits. For example, there is no symbolic solution here:

In [ ]:

```
solve(cos(x) - x)
```

Out[ ]:

For such, a numeric method would be needed, say:

In [ ]:

```
nsolve(cos(x) - x, 1)
```

Out[ ]:

The `solve` function can also solve equations of a more general type. For example, here it is used to derive the quadratic formula:

In [ ]:

```
a,b,c = symbols("a,b,c", real=true)
p = a*x^2 + b*x + c
solve(p, x)
```

Out[ ]:

The extra argument `x` is passed to `solve` so that `solve` knows which variable to solve for.

The `solveset` function is similar:

In [ ]:

```
solveset(p, x)
```

Out[ ]:

If the `x` value is not given, `solveset` will complain and `solve` will try to find a solution over all the free variables:

In [ ]:

```
solve(p)
```

Out[ ]:

Systems of equations can be solved as well. The system is specified as a vector of expressions, `[ex1, ex2, ..., exn]`, where a found solution is one where all the expressions are 0. For example, to solve this linear system: $2x + 3y = 6, 3x - 4y = 12$, we have:

In [ ]:

```
x, y = symbols("x,y", real=true)
exs = [2x+3y-6, 3x-4y-12]
d = solve(exs)
```

Out[ ]:

The answer is a dictionary. To check the solution, the `subs` function allows us to pass in a dictionary:

In [ ]:

```
map(ex -> subs(ex, d), exs)
```

Out[ ]:
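The same system solved in Python's SymPy, with the dictionary answer checked by substitution; a sketch:

```python
from sympy import symbols, solve, Rational

x, y = symbols("x y", real=True)
exs = [2*x + 3*y - 6, 3*x - 4*y - 12]

d = solve(exs)
print(d)   # {x: 60/17, y: -6/17}

# Substituting the solution back makes each expression 0.
print([e.subs(d) for e in exs])   # [0, 0]
```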

Here is a more involved example: we find the quadratic polynomial that agrees with $\cos$ at the values $x = 0$, $h$, and $2h$. The extra argument `[a,b,c]` tells `solve` which variables to solve for:

In [ ]:

```
a,b,c,h = symbols("a,b,c,h", real=true)
p = a*x^2 + b*x + c
fn = cos
exs = [fn(0*h)-p(x=>0), fn(h)-p(x => h), fn(2h)-p(x => 2h)]
d = solve(exs, [a,b,c])
```

Out[ ]:

The resulting dictionary has the solved values for `a`, `b`, and `c`. We can substitute these into the polynomial:

In [ ]:

```
quad_approx = subs(p, d)
```

Out[ ]:

(Taking the limit as $h$ goes to 0 produces the answer $1 - x^2/2$.)

As a last example with `solve`, we show one way to re-express the polynomial $a_2x^2 + a_1x + a_0$ as $b_2(x-c)^2 + b_1(x-c) + b_0$ using `solve` (and not, say, an expansion theorem):

In [ ]:

```
n = 3
x, c = symbols("x,c")
as = Sym["a$i" for i in 0:(n-1)]
bs = Sym["b$i" for i in 0:(n-1)]
p = sum([as[i+1]*x^i for i in 0:(n-1)])
q = sum([bs[i+1]*(x-c)^i for i in 0:(n-1)])
solve(p-q, bs)
```

Out[ ]:

The `solve` function does not need to just solve `ex = 0`. There are other means to specify an equation. Ideally, it would be nice to say `ex1 == ex2`, but the interpretation of `==` is not for this. Rather, `SymPy` introduces `Eq` for equality. So this expression

In [ ]:

```
solve(Eq(x, 1))
```

Out[ ]:

gives 1, as expected from solving `x == 1`

.

In addition to `Eq`, there are `Lt`, `Le`, `Ge`, and `Gt`. The Unicode operators are not aliased to these, but there are alternatives: `\ll[tab]`, `\leqq[tab]`, `\Equal[tab]`, `\geqq[tab]`, `\gg[tab]`, and `\neg[tab]` to negate.

Here the same equation is entered using `\Equal[tab]`:

In [ ]:

```
solve(x ⩵ 1)
```

Out[ ]:

Here is an alternative way of asking a previous question on a pair of linear equations:

In [ ]:

```
x, y = symbols("x,y", real=true)
exs = [2x+3y ⩵ 6, 3x-4y ⩵ 12] ## Using \Equal[tab]
d = solve(exs)
```

Out[ ]:

The `Plots` package allows many 2-dimensional plots of `SymPy` objects to be agnostic as to a backend plotting package. `SymPy` provides recipes that allow symbolic expressions to be used where functions are part of the `Plots` interface. [See the help page for `sympy_plotting`.]

In particular, the following methods of `plot` are defined:

- `plot(ex::Sym, a, b)` will plot the expression of a single variable over the interval `[a,b]`.
- `plot!(ex::Sym, a, b)` will add to the current plot a plot of the expression of a single variable over the interval `[a,b]`.
- `plot(exs::Vector{Sym}, a, b)` will plot each expression over `[a,b]`.
- `plot(ex1, ex2, a, b)` will plot a parametric plot of the two expressions over the interval `[a,b]`.
- `contour(xs, ys, ex::Sym)` will make a contour plot of the expression of two variables over the grid specified by the `xs` and `ys`.
- `surface(xs, ys, ex::Sym)` will make a surface plot of the expression of two variables over the grid specified by the `xs` and `ys`.

For example:

In [ ]:

```
x = symbols("x")
using Plots
#
plot(x^2 - 2, -2,2)
```

Out[ ]:

Or a parametric plot:

In [ ]:

```
plot(sin(2x), cos(3x), 0, 4pi)
```

Out[ ]:

To plot an expression with other packages, one can call `lambdify` on the expression and then generate `y` values with the resulting `Julia` function.

With `PyPlot`, a few other plotting functions from `SymPy` are available from its interface to `MatplotLib`:

- `plot_parametric_surface((ex1::Sym, ex2::Sym, ex3::Sym), (uvar, a0, b0), (vvar, a1, b1))` – make a surface plot of the expressions parameterized by the region `[a0,b0] x [a1,b1]`. The default region is `[-5,5]x[-5,5]`, where the ordering of the variables is given by `free_symbols(ex)`.
- `plot_implicit(predicate, (xvar, a0, b0), (yvar, a1, b1))` – make an implicit plot of the predicate over the region `[a0,b0] x [a1,b1]`. The default region is `[-5,5]x[-5,5]`, where the ordering of the variables is given by `free_symbols(ex)`. To create predicates from the variable, the functions `Lt`, `Le`, `Eq`, `Ge`, and `Gt` can be used, as with `Lt(x*y, 1)`. For infix notation, unicode operators can be used: `\ll<tab>`, `\leqq<tab>`, `\Equal<tab>`, `\geqq<tab>`, and `\gg<tab>`. For example, `x*y ≪ 1`. To combine terms, the unicode `\vee<tab>` (for "or") and `\wedge<tab>` (for "and") can be used.

`SymPy` has many of the basic operations of calculus provided through a relatively small handful of functions.

Limits are computed by the `limit` function, which takes an expression, a variable, and a value, and optionally a direction specified by either `dir="+"` or `dir="-"`.

For example, this shows Gauss was right:

In [ ]:

```
limit(sin(x)/x, x, 0)
```

Out[ ]:

Alternatively, the second and third arguments can be specified as a pair:

In [ ]:

```
limit(sin(x)/x, x=>0)
```

Out[ ]:

Limits at infinity are done by using `oo` for $\infty$:

In [ ]:

```
limit((1+1/x)^x, x => oo)
```

Out[ ]:

This example computes what L'Hopital reportedly paid a Bernoulli for:

In [ ]:

```
a = symbols("a", positive=true)
ex = (sqrt(2a^3*x-x^4) - a*(a^2*x)^(1//3)) / (a - (a*x^3)^(1//4))
```

Out[ ]:

Substituting $x=a$ gives an indeterminate form:

In [ ]:

```
ex(x=>a) # or subs(ex, x, a)
```

Out[ ]:

We can see it is of the form $0/0$:

In [ ]:

```
subs(denom(ex), x, a), subs(numer(ex), x, a)
```

Out[ ]:

And we get

In [ ]:

```
limit(ex, x => a)
```

Out[ ]:

In a previous example, we defined `quad_approx`:

In [ ]:

```
quad_approx
```

Out[ ]:

The limit as `h` goes to $0$ gives `1 - x^2/2`, as expected:

In [ ]:

```
limit(quad_approx, h => 0)
```

Out[ ]:

The `sign` function is $1$ for positive $x$, $-1$ for negative $x$, and $0$ when $x$ is $0$. It should not have a limit at $0$:

In [ ]:

```
limit(sign(x), x => 0)
```

Out[ ]:

Oops. Well, the left and right limits are different anyways:

In [ ]:

```
limit(sign(x), x => 0, dir="-"), limit(sign(x), x => 0, dir="+")
```

Out[ ]:
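The one-sided calls have the same shape in Python's SymPy, where `dir` also defaults to the right limit; a sketch:

```python
from sympy import symbols, limit, sign

x = symbols("x")

# Left and right limits of sign(x) at 0 disagree,
# so the two-sided limit does not exist.
left = limit(sign(x), x, 0, dir='-')
right = limit(sign(x), x, 0, dir='+')
print(left, right)   # -1 1
```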

(The `limit` function finds the *right* limit by default. To be careful, either plot or check that both the left and right limits exist and are equal.)

A function object can also be passed to `limit`, with the value `c` as the second argument (the variable is implicit, as `f` has only one):

In [ ]:

```
f(x) = sin(5x)/x
limit(f, 0)
```

Out[ ]:

The `limit` function uses the Gruntz algorithm. It is far more reliable than simple numeric attempts at limits. An example of Gruntz is the right limit at $0$ of the function:

In [ ]:

```
f(x) = 1/x^(log(log(log(log(1/x)))) - 1)
```

Out[ ]:

A numeric attempt might be done along these lines:

In [ ]:

```
hs = [10.0^(-i) for i in 6:16]
ys = [f(h) for h in hs]
[hs ys]
```

Out[ ]:

These numeric values are misleading; the symbolic limit tells a different story:

In [ ]:

```
limit(f(x), x, 0, dir="+")
```

Out[ ]:

One *could* use limits to implement the definition of a derivative:

In [ ]:

```
x, h = symbols("x,h")
f(x) = exp(x)*sin(x)
limit((f(x+h) - f(x)) / h, h, 0)
```

Out[ ]:

`SymPy` does a great job with derivatives. The `diff` function implements this. The basic syntax is `diff(ex, x)` to find the first derivative in `x` of the expression in `ex`, or its generalization to $k$th derivatives with `diff(ex, x, k)`.

The same derivative computed above by a limit could be found with:

In [ ]:

```
diff(f(x), x)
```

Out[ ]:

Similarly, we can compute other derivatives:

In [ ]:

```
diff(x^x, x)
```

Out[ ]:

Or

In [ ]:

```
diff(exp(-x^2), x, 2)
```

Out[ ]:

As an alternative to specifying the number of derivatives, multiple variables can be passed to `diff`:

In [ ]:

```
diff(exp(-x^2), x, x, x) # same as diff(..., x, 3)
```

Out[ ]:

This could include variables besides `x`.

The output of `diff` can be composed with other functions, such as `solve`. For example, here we find the critical points, where the derivative is $0$, of some rational function:

In [ ]:

```
f(x) = (12x^2 - 1) / (x^3)
diff(f(x), x) |> solve
```

Out[ ]:

`SymPy` provides an "operator" version of `diff` for univariate functions for convenience (`diff(f::Function, k=1) = diff(f(x), x, k)`):

In [ ]:

```
f(x) = exp(x)*cos(x)
diff(f, 2)
```

Out[ ]:

The `diff` function makes finding partial derivatives as easy as specifying the variable to differentiate in. This example computes the mixed partials of an expression in `x` and `y`:

In [ ]:

```
x,y = symbols("x,y")
ex = x^2*cos(y)
Sym[diff(ex,v1, v2) for v1 in [x,y], v2 in [x,y]]
```

Out[ ]:

The extra `Sym`, of the form `T[]`, helps `Julia` resolve the type of the output.

The `Derivative` function provides unevaluated derivatives, useful with differential equations and for output with unknown functions. Here is an example:

In [ ]:

```
ex = Derivative(exp(x*y), x, y, 2)
```

Out[ ]:

(The `y,2` is a replacement for `y,y`, which makes higher-order terms easier to type.) These expressions are evaluated with `doit`:
:

In [ ]:

```
doit(ex)
```

Out[ ]:

As an application, we use implicit differentiation on the curve defined by

$$
y^4 - x^4 - y^2 + 2x^2 = 0
$$

to find $dy/dx$ in terms of $x$ and $y$.

This task needs a symbolic function. Within `SymPy` we use `SymFunction` to avoid a name collision with one of `Julia`'s primary types. The constructor can be used as `SymFunction(:F)`:

In [ ]:

```
F, G = SymFunction("F"), SymFunction("G")
```

Out[ ]:

We can call these functions, but we get a function expression:

In [ ]:

```
F(x)
```

Out[ ]:

SymPy can differentiate symbolically, again with `diff`:

In [ ]:

```
diff(F(x))
```

Out[ ]:

Or, for symbolic functions, the more natural `F'(x)`.

To get back to our problem, we have our expression:

In [ ]:

```
x,y = symbols("x, y")
ex = y^4 - x^4 - y^2 + 2x^2
```

Out[ ]:

Now we substitute:

In [ ]:

```
ex1 = ex(y=>F(x))
```

Out[ ]:

Then we differentiate in `x`:

In [ ]:

```
ex2 = diff(ex1, x)
```

Out[ ]:

Now we collect terms and solve in terms of $F'(x)$

In [ ]:

```
ex3 = solve(ex2, F'(x))[1]
```

Out[ ]:

Finally, we substitute back into the solution for $F(x)$:

In [ ]:

```
ex4 = ex3(F(x) => y)
```

Out[ ]:

As an extended application, we maximize the area of a rectangle capped by a semicircle, with the total perimeter held fixed at `P`. We set up the area `A` and perimeter `p` in terms of the width `w` and height `h`:

In [ ]:

```
w, h, P = symbols("w, h, P", nonnegative=true)
r = w/2
A = w*h + 1//2 * (pi * r^2)
p = w + 2h + pi*r
```

Out[ ]:

(Using `1//2*pi*r^2` directly would lose exactness, as the products are done left to right and `1//2*pi` would be converted to an approximate floating point value before multiplying `r^2`; as such, we rewrite the terms. It may be easier to use `PI` instead of `pi`.)

We solve for `h` when `p=P` (our fixed value) and substitute back into `A`. We solve `P-p==0`:

In [ ]:

```
h0 = solve(P-p, h)[1]
A1 = A(h => h0)
```

Out[ ]:

Now `A1` is a quadratic polynomial in `w`, so any maximum will be at an endpoint or the vertex, provided the leading term is negative. The leading term can be found through:

In [ ]:

```
coeffs(Poly(A1, w))
```

Out[ ]:

Or without using the `Poly`

methods, we could do this:

In [ ]:

```
coeff(collect(expand(A1), w), w^2)
```

Out[ ]:

The leading coefficient is negative. One endpoint is $w=0$, where the area is $0$:

In [ ]:

```
A1(w => 0)
```

Out[ ]:

The other endpoint is when $h=0$, or

In [ ]:

```
b = solve(subs(P-p, h, 0), w)[1]
```

Out[ ]:

We will need to check the area at `b` and at the vertex.

To find the vertex, we can use calculus – it will be when the derivative in `w` is $0$:

In [ ]:

```
c = solve(diff(A1, w), w)[1]
```

Out[ ]:

The answer will be the larger of `A1` at `b` or `c`:

In [ ]:

```
atb = A1(w => b)
atc = A1(w => c)
```

Out[ ]:

A simple comparison isn't revealing:

In [ ]:

```
atc - atb
```

Out[ ]:

But after simplifying, we can see that this expression is positive if $P$ is:

In [ ]:

```
simplify(atc - atb)
```

Out[ ]:

With this observation, we conclude the maximum area happens at `c` with area `atc`.

Integration is implemented through the `integrate` function. There are two basic calls: `integrate(f(x), x)` will find the indefinite integral ($\int f(x) dx$) and, when endpoints are specified through `integrate(f(x), (x, a, b))`, the definite integral will be found ($\int_a^b f(x) dx$). The special form `integrate(ex, x, a, b)` can be used for single integrals, but the specification through a tuple is needed for multiple integrals.

Basic integrals are implemented:

In [ ]:

```
integrate(x^3, x)
```

Out[ ]:

Or in more generality:

In [ ]:

```
n = symbols("n", real=true)
ex = integrate(x^n, x)
```

Out[ ]:

The answer is a *piecewise function*; performing a substitution will choose a branch in this case:

In [ ]:

```
ex(n => 3)
```

Out[ ]:

Definite integrals are just as easy. Here is Archimedes' answer:

In [ ]:

```
integrate(x^2, (x, 0, 1))
```

Out[ ]:

Tedious problems, such as those needing multiple integration-by-parts steps can be done easily:

In [ ]:

```
integrate(x^5*sin(x), x)
```

Out[ ]:

The SymPy tutorial says:

"`integrate` uses powerful algorithms that are always improving to compute both definite and indefinite integrals, including heuristic pattern matching type algorithms, a partial implementation of the Risch algorithm, and an algorithm using Meijer G-functions that is useful for computing integrals in terms of special functions, especially definite integrals."

The tutorial gives the following example:

In [ ]:

```
f(x) = (x^4 + x^2 * exp(x) - x^2 - 2x*exp(x) - 2x - exp(x)) * exp(x) / ( (x-1)^2 * (x+1)^2 * (exp(x) + 1) )
f(x)
```

Out[ ]:

With indefinite integral:

In [ ]:

```
integrate(f(x), x)
```

Out[ ]:

The `integrate` function uses a tuple, `(var, a, b)`, to specify the limits of a definite integral. This syntax lends itself readily to multiple integration.

For example, the following computes the integral of $xy$ over the unit square:

In [ ]:

```
x, y = symbols("x,y")
integrate(x*y, (y, 0, 1), (x, 0, 1))
```

Out[ ]:
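The tuple specification carries over directly to Python's SymPy; the unit-square integral there reads:

```python
from sympy import symbols, integrate, Rational

x, y = symbols("x y")

# Inner integral in y first, then outer in x; each (var, a, b)
# tuple gives one set of limits.
val = integrate(x*y, (y, 0, 1), (x, 0, 1))
print(val)   # 1/4
```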

The inner limits can depend on the outer variables. For example, this integrates $x^2y$ over the upper half of the unit disc:

In [ ]:

```
integrate(x^2*y, (y, 0, sqrt(1 - x^2)), (x, -1, 1))
```

Out[ ]:

The `Integral` function can stage unevaluated integrals that will be evaluated by calling `doit`. It is also used when the output is unknown. This example comes from the tutorial:

In [ ]:

```
integ = Integral(sin(x^2), x)
```

Out[ ]:

In [ ]:

```
doit(integ)
```

Out[ ]:

For convenience, a function object may be passed in directly, and these calls – `integrate(f)` and `integrate(f, a, b)` – will perform the integrations.

In [ ]:

```
f(x) = exp(x) * cos(x)
integrate(f)
```

Out[ ]:

Or

In [ ]:

```
integrate(sin, 0, pi)
```

Out[ ]:

The `series` function can compute series expansions around a point to a specified order. For example, the following command finds 4 terms of the series expansion of `exp(sin(x))` in `x` about $c=0$:

In [ ]:

```
s1 = series(exp(sin(x)), x, 0, 4)
```

Out[ ]:

Consider what happens when we multiply series of different orders:

In [ ]:

```
s2 = series(cos(exp(x)), x, 0, 6)
```

Out[ ]:

In [ ]:

```
simplify(s1 * s2)
```

Out[ ]:

Only terms up to the smaller of the two orders are retained in the product. The big "O" notation is sometimes not desired, in which case the `removeO` function can be employed:

In [ ]:

```
removeO(s1)
```

Out[ ]:

`SymPy` can do sums, including some infinite ones. The `summation` function performs this task. For example, we have:

In [ ]:

```
i, n = symbols("i, n")
summation(i^2, (i, 1, n))
```

Out[ ]:
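The closed form can be checked against the familiar formula $\sum_{i=1}^n i^2 = n(n+1)(2n+1)/6$; a sketch in Python's SymPy, which also handles an infinite sum (the Basel problem):

```python
from sympy import symbols, summation, simplify, oo, pi

i, n = symbols("i n")

# Finite sum: closed form agrees with n(n+1)(2n+1)/6.
s = summation(i**2, (i, 1, n))
print(simplify(s - n*(n + 1)*(2*n + 1)/6))   # 0

# Infinite sum: sum of 1/i^2 is pi^2/6.
print(summation(1/i**2, (i, 1, oo)))
```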

As with `Integral` and `Derivative`, there is also a `Sum` function to stage the task until the `doit` function is called to initiate the sum.

Some famous sums can be computed:

In [ ]:

```
sn = Sum(1/i^2, (i, 1, n))
doit(sn)
```

Out[ ]:

And from this a limit is available:

In [ ]:

```
limit(doit(sn), n, oo)
```

Out[ ]:

This would have also been possible through `summation(1/i^2, (i, 1, oo))`.

Julia makes constructing a vector of symbolic objects easy:

In [ ]:

```
x,y = symbols("x,y")
v = [1,2,x]
w = [1,y,3]
```

Out[ ]:

The generic definitions of vector operations will work as expected with symbolic objects:

In [ ]:

```
dot(v,w)
```

Out[ ]:

Or

In [ ]:

```
cross(v,w)
```

Out[ ]:

Finding gradients can be done using a comprehension.

In [ ]:

```
ex = x^2*y - x*y^2
Sym[diff(ex,var) for var in [x,y]]
```

Out[ ]:

The mixed partials are similarly found by passing two variables to differentiate in to `diff`:

In [ ]:

```
Sym[diff(ex, v1, v2) for v1 in [x,y], v2 in [x,y]]
```

Out[ ]:

For this task, SymPy provides the `hessian`

function:

In [ ]:

```
hessian(ex)
```

Out[ ]:
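Python's SymPy `hessian` takes the variables explicitly; the example above corresponds to:

```python
from sympy import symbols, hessian, Matrix

x, y = symbols("x y")
ex = x**2*y - x*y**2

# Matrix of second partials in the order [x, y].
H = hessian(ex, [x, y])
print(H)   # Matrix([[2*y, 2*x - 2*y], [2*x - 2*y, -2*x]])
```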

(When the expression has more than two free variables, the variables are specified as `hessian(ex, vars)`.)

Symbolic matrices are also supported within `Julia`. With `SymPy`, matrices are just `Julia`n matrices with symbolic entries. The conversion to matrices that SymPy knows about is handled in the background.

Constructing matrices then follows `Julia`

's conventions:

In [ ]:

```
x,y = symbols("x,y")
M = [1 x; x 1]
```

Out[ ]:

As much as possible, generic `Julia` functions are utilized:

In [ ]:

```
diagm(ones(Sym, 5))
M^2
det(M)
```

Out[ ]:

Occasionally, the SymPy method has more content:

In [ ]:

```
eigvecs(M)
```

Out[ ]:

As compared to SymPy's `:eigenvects`, which yields:

In [ ]:

```
M[:eigenvects]()
```

Out[ ]:

This example from the tutorial shows the `nullspace` function:

In [ ]:

```
M = Sym[1 2 3 0 0; 4 10 0 0 1]
vs = nullspace(M)
```

Out[ ]:

And this shows that they are indeed in the null space of `M`:

In [ ]:

```
[M*vs[i] for i in 1:3]
```

Out[ ]:

Symbolic expressions can be included in the matrices:

In [ ]:

```
M = [1 x; x 1]
P, D = diagonalize(M) # M = PDP^-1
D, M - P*D*inv(P)
```

Out[ ]:

Solving differential equations requires a symbolic function, created with `SymFunction`. Again, this may be done through:

In [ ]:

```
F = SymFunction("F")
```

Out[ ]:

With this, we can specify a differential equation using `Eq`:

In [ ]:

```
diffeq = Eq(diff(F(x), x, 2) - 2*diff(F(x)) + F(x), sin(x))
```

Out[ ]:

With this, we just need the `dsolve` function. This is called as `dsolve(eq)`:

In [ ]:

```
ex = dsolve(diffeq)
```

Out[ ]:

The `dsolve` function in SymPy has an extensive list of named arguments to control the underlying algorithm. These can be passed through with the appropriate keyword arguments.

`SymFunction` objects have the `'` method defined to find a derivative, so the above could also have been:

In [ ]:

```
diffeq = F''(x) - 2F'(x) + F(x) - sin(x)
dsolve(diffeq)
```

Out[ ]:

Suppose the initial conditions are $F(0) = 0$ and $F'(0) = 1$. To work with the answer, the `rhs` function will return the right-hand side of a relation:

In [ ]:

```
ex1 = rhs(ex)
```

Out[ ]:

(The `args` function can also be used to break up the expression into parts.)

With this, we can solve for `C1` by substituting in $0$ for $x$:

In [ ]:

```
solve(ex1(x => 0), Sym("C1"))
```

Out[ ]:

We see that $C1=-1/2$, which we substitute in:

In [ ]:

```
ex2 = ex1(Sym("C1") => -1//2)
```

Out[ ]:

We know that $F'(0)=1$, so we solve for `C2` through

In [ ]:

```
solve( subs(diff(ex2, x), x, 0) - 1, Sym("C2") )
```

Out[ ]:

This gives `C2=3/2`. Again, we substitute in to get our answer:

In [ ]:

```
ex3 = ex2(Sym("C2") => 3//2)
```

Out[ ]:
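Rather than substituting constants by hand, Python's SymPy offers `checkodesol` to verify a general solution of an ODE; a sketch on the same equation:

```python
from sympy import Function, symbols, Eq, dsolve, checkodesol, sin

x = symbols("x")
F = Function("F")
diffeq = Eq(F(x).diff(x, 2) - 2*F(x).diff(x) + F(x), sin(x))

# General solution: (C1 + C2*x)*exp(x) plus the particular part cos(x)/2.
sol = dsolve(diffeq)
print(sol)

# checkodesol substitutes the solution back in; (True, 0) means it checks out.
print(checkodesol(diffeq, sol))
```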

We do one more example, this one borrowed from here.

Find the variation of speed with time of a parachutist subject to a drag force of $k\cdot v^2$.

The equation is

$$
\frac{m}{k} \frac{dv}{dt} = \alpha^2 - v^2.
$$

We proceed through:

In [ ]:

```
t, m,k,alpha = symbols("t,m,k,alpha")
v = SymFunction("v")
ex = Eq( (m/k)*v'(t), alpha^2 - v(t)^2 )
```

Out[ ]:

This equation can be classified with `classify_ode`. As this function is not exported, we call it using indexing:

In [ ]:

```
ex[:classify_ode]()
```

Out[ ]:

It is linear, but not solvable. Proceeding with `dsolve` gives:

In [ ]:

```
dsolve(ex)
```

Out[ ]:

Initial conditions can also be specified within `SymPy`. The first example above showed the steps done by hand. This is because the `ics` argument for SymPy's `dsolve` only works for a few types of equations. These do not include, by default, the familiar "book" examples, such as $y'(x) = a\cdot y(x)$.

To work around this, `SymPy.jl` extends the function `dsolve` to allow a specification of the initial conditions when solving. The new ingredients are the independent variable (`x` in the examples) and tuples to specify each condition. These are conditions on the values of `u`, `u'`, .... To illustrate, we follow an example from Wolfram.

In [ ]:

```
y = SymFunction("y")
a, x = symbols("a,x")
eqn = y'(x) - 3*x*y(x) - 1
```

Out[ ]:

We solve the initial value problem with $y(0) = 4$ as follows:

In [ ]:

```
x0, y0 = 0, 4
out = dsolve(eqn, x, (y, x0, y0))
```

Out[ ]:

Verifying this requires combining some operations:

In [ ]:

```
u = rhs(out)
diff(u, x) - 3*x*u - 1
```

Out[ ]:

To solve with a general initial condition is similar:

In [ ]:

```
x0, y0 = 0, a
out = dsolve(eqn, x, (y, x0, y0))
```

Out[ ]:

To plot this over a range of values for `a` we have:

In [ ]:

```
as = -2:0.6:2
ex = rhs(out)
p = plot(ex(a=>as[1]), -1.8, 1.8, ylims=(-4, 4))
for i in as[2:end]
plot!(p, ex(a=>i), -1.8, 1.8, ylims=(-4, 4))
end
p
```

Out[ ]:

The same pattern handles second-order equations. Consider:

In [ ]:

```
y = SymFunction("y")
x = symbols("x")
eqn = y''(x) + 5y'(x) + 6y(x)
```

Out[ ]:

To solve with $y(0) = 1$ and $y'(0) = 1$ we have:

In [ ]:

```
out = dsolve(eqn, x, (y, 0, 1), (y', 0, 1))
```

Out[ ]:

To make a plot, we only need the right-hand-side of the answer:

In [ ]:

```
plot(rhs(out), -1/3, 2)
```

Out[ ]:

Conditions given at two different values of $x$, as in a boundary value problem, can also be specified:

In [ ]:

```
eqn = y''(x) + y(x) - exp(x)
dsolve(eqn, x, (y, 0, 1), (y, 1, 1//2))
```

Out[ ]: