We will here demonstrate the relationship between a function expressed in spherical harmonics and its variance. To keep things simple, we will consider only a single harmonic, and note that the results are easily extended to more complicated functions given that the spherical harmonics are orthogonal.

We start by initializing a new coefficient class to zero and setting a single coefficient to 1.

In [1]:

```
%matplotlib inline
from __future__ import print_function # only necessary if using Python 2.x
import matplotlib.pyplot as plt
import numpy as np
from pyshtools.shclasses import SHCoeffs, SHGrid, SHWindow
lmax = 100
coeffs = SHCoeffs.from_zeros(lmax)
coeffs.set_coeffs(values=[1], ls=[5], ms=[2])
```
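For reference, the coefficient layout being set here can be sketched with plain NumPy. This mirrors (as an assumption about `SHCoeffs` internals) a `(2, lmax+1, lmax+1)` array whose first index selects the cosine (`m >= 0`) or sine (`m < 0`) coefficients:

```python
import numpy as np

# Hypothetical stand-in for the SHCoeffs storage layout (an assumption):
# index 0 holds the cosine (m >= 0) terms, index 1 the sine (m < 0) terms.
lmax = 100
c = np.zeros((2, lmax + 1, lmax + 1))
c[0, 5, 2] = 1.  # the l=5, m=2 cosine coefficient set above

print(int((c != 0).sum()))  # exactly one nonzero coefficient
```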

Given that we will perform some numerical integrations with this function below, we expand it onto a grid appropriate for integration by Gauss-Legendre quadrature:

In [2]:

```
grid = coeffs.expand('GLQ')
fig, ax = grid.plot()
```

Next, we would like to calculate the variance of this single spherical harmonic. Since each spherical harmonic has zero mean, the variance is equal to the integral of the function squared (i.e., its norm $N_{lm}$) divided by the surface area of the sphere ($4\pi$):

$$\mathrm{Var}\left(Y_{lm}\right) = \frac{1}{4\pi} \int_{\Omega} Y_{lm}^2(\theta, \phi)\, d\Omega = \frac{N_{lm}}{4\pi}.$$

When the spherical harmonics are $4\pi$ normalized, $N_{lm}$ is equal to $4\pi$ for all values of $l$ and $m$. Thus, by definition, the variance of each harmonic is 1 for $4\pi$ normalized harmonics.

We can verify the mathematical value of $N_{lm}$ by doing the integration manually. For this, we will perform a Gauss-Legendre quadrature, making use of the latitudinal weighting function that is stored in the `SHGrid` class instance.

In [3]:

```
N = ((grid.data**2) * grid.weights[np.newaxis,:].T).sum() * (2. * np.pi / grid.nlon)
print('N = ', N)
print('Variance of Ylm = ', N / (4. * np.pi))
```
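As an aside, the reason Gauss-Legendre quadrature is exact here can be seen with NumPy's `leggauss` nodes and weights alone: an $n$-point rule integrates polynomials up to degree $2n-1$ exactly, which is what makes the latitudinal integral of a band-limited function exact. A minimal sketch:

```python
import numpy as np

# 5-point Gauss-Legendre rule: exact for polynomials up to degree 9
nodes, weights = np.polynomial.legendre.leggauss(5)

# integrate x^8 over [-1, 1]; the exact value is 2/9
approx = (weights * nodes**8).sum()
print(approx, 2. / 9.)
```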

Alternatively, we could have done the integration with a 'DH' grid instead:

In [4]:

```
from pyshtools.utils import DHaj
grid_dh = coeffs.expand('DH')
weights = DHaj(grid_dh.nlat)
N = ((grid_dh.data**2) * weights[np.newaxis,:].T).sum() * 2. * np.sqrt(2.) * np.pi / grid_dh.nlon
print('N = ', N)
print('Variance of Ylm = ', N / (4. * np.pi))
```
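As an independent cross-check that does not use pyshtools at all, we can assemble the $4\pi$ normalization factor by hand from SciPy's associated Legendre function `lpmv` and integrate the squared harmonic. The longitudinal factor $\cos^2(m\phi)$ integrates to $\pi$ over $[0, 2\pi]$ for $m > 0$, and the latitudinal integrand is a polynomial in $\cos\theta$, so a Gauss-Legendre rule handles it exactly. This is a sketch under the usual real, $4\pi$-normalized convention:

```python
import numpy as np
from scipy.special import lpmv, factorial

l, m = 5, 2  # the same harmonic as above

# 4-pi normalization factor for a real harmonic with m > 0
norm = np.sqrt(2. * (2. * l + 1.) * factorial(l - m) / factorial(l + m))

# latitudinal integral of the squared normalized Legendre function over
# x = cos(theta) in [-1, 1], exact by Gauss-Legendre quadrature
x, w = np.polynomial.legendre.leggauss(50)
lat_integral = (w * (norm * lpmv(m, l, x))**2).sum()

# longitudinal integral of cos(m*phi)^2 over [0, 2*pi] equals pi for m > 0
N = lat_integral * np.pi
print(N, 4. * np.pi)
```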

We have seen in the previous section that a single $4\pi$ normalized spherical harmonic has unit variance. In spectral analysis, the word *power* is often used to mean the value of the function squared divided by the area it spans, and if the function has zero mean, power is equivalent to variance. Since the spherical harmonics are orthogonal functions on the sphere, there exists a simple relationship between the power of the function and its spherical harmonic coefficients:

$$\frac{1}{4\pi} \int_{\Omega} f^2(\theta, \phi)\, d\Omega = \sum_{l=0}^{\infty} \sum_{m=-l}^{l} f_{lm}^2 \, \frac{N_{lm}}{4\pi}.$$

This is Parseval's theorem for data on the sphere. For $4\pi$ normalized harmonics, the last fraction on the right hand side is unity, and the total variance (power) of the function is the sum of the coefficients squared. Knowing this, we can confirm the result of the previous section by showing that the total power of the $l=5$, $m=2$ harmonic is unity:

In [5]:

```
power = coeffs.spectrum()
print('Total power is ', power.sum())
```

If the coefficients of all the spherical harmonics are independent, the distribution of the function's values on the sphere will become Gaussian as predicted by the central limit theorem. If the individual coefficients were Gaussian in the first place, the distribution would naturally be Gaussian as well. We illustrate this below.
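Before doing this with spherical harmonics, the mechanism can be seen with plain NumPy: the sum of many independent, non-Gaussian (here uniform) terms is already close to Gaussian, with variance equal to the sum of the individual variances.

```python
import numpy as np

rng = np.random.default_rng(0)
n_terms, n_samples = 300, 5000

# each term is uniform on [-0.5, 0.5]: zero mean, variance 1/12
terms = rng.uniform(-0.5, 0.5, size=(n_samples, n_terms))
totals = terms.sum(axis=1)

# CLT prediction: the sum is ~Gaussian with variance n_terms / 12
print(totals.var(), n_terms / 12.)
```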

First, we create a random realization of normally distributed coefficients whose power spectrum follows a power law:

In [6]:

```
lmax = 200
a = 30
ls = np.arange(lmax+1, dtype=float)
power = 1. / (1. + (ls / a) ** 2) ** 1
coeffs = SHCoeffs.from_random(power)
power_random = coeffs.spectrum()
total_power = power_random.sum()
grid = coeffs.expand('GLQ')
fig, ax = grid.plot()
```
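A plain-NumPy sketch of what such a random realization does (assuming the convention that the input spectrum gives the expected total power per degree, spread evenly over the $2l+1$ coefficients of each degree) shows why the realized total power lands near the sum of the input spectrum:

```python
import numpy as np

rng = np.random.default_rng(1)
lmax, a = 200, 30
ls = np.arange(lmax + 1, dtype=float)
power = 1. / (1. + (ls / a)**2)

# draw each of the 2l+1 degree-l coefficients from N(0, power[l]/(2l+1)),
# then sum the squares to recover the realized degree spectrum
spectrum = np.array([
    (rng.normal(0., np.sqrt(p / (2 * l + 1)), size=2 * l + 1)**2).sum()
    for l, p in enumerate(power)
])

print(spectrum.sum(), power.sum())  # close, up to sampling scatter
```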

Next, we calculate a histogram of the data using the Gauss-Legendre quadrature points and weights:

In [7]:

```
weights = (grid.weights[np.newaxis,:].T).repeat(grid.nlon, axis=1) * (2. * np.pi / grid.nlon)
bins = np.linspace(-50, 50, 30)
center = 0.5 * (bins[:-1] + bins[1:])
dbin = center[1] - center[0]
hist, bins = np.histogram(grid.data, bins=bins, weights=weights, density=True)
```
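A quick sanity check on area weights built this way, sketched with NumPy's `leggauss` (whose latitudinal weights sum to 2, like those stored on a GLQ grid): multiplied by the constant longitudinal spacing, they must tile the full sphere, i.e., sum to $4\pi$.

```python
import numpy as np

nlat = 101
nlon = 2 * nlat - 1
_, lat_weights = np.polynomial.legendre.leggauss(nlat)

# latitudinal weights times the constant longitudinal spacing 2*pi/nlon
area = lat_weights[:, np.newaxis].repeat(nlon, axis=1) * (2. * np.pi / nlon)
print(area.sum(), 4. * np.pi)
```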

Finally, we compute the expected distribution and plot the two:

In [8]:

```
normal_distribution = np.exp( - center ** 2 / (2 * total_power))
normal_distribution /= dbin * normal_distribution.sum()
fig, ax = plt.subplots(1, 1)
ax.plot(center, hist, '-x', c='blue', label='computed distribution')
ax.plot(center, normal_distribution, c='red', label='predicted distribution')
ax.legend(loc=3);
```
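Instead of normalizing the predicted curve by its discrete sum, one can use the closed-form Gaussian normalization $1/\sqrt{2\pi\sigma^2}$. A sketch with a stand-in variance (the actual `total_power` from the realization above would take its place):

```python
import numpy as np

sigma2 = 40.  # hypothetical stand-in for total_power
center = np.linspace(-50., 50., 29)  # bin centers, as above
dbin = center[1] - center[0]

# analytically normalized zero-mean Gaussian density
pdf = np.exp(-center**2 / (2. * sigma2)) / np.sqrt(2. * np.pi * sigma2)
print((pdf * dbin).sum())  # integrates to ~1 over the plotted range
```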