In this example, we will train a neural network to reconstruct air-shower properties measured by a hypothetical cosmic-ray observatory located at a height of 1400 m. The observatory features a Cartesian array of 14 x 14 particle detectors with a spacing of 750 m. The measured data thus contain the simulated detector responses to the secondary particles of the cosmic-ray-induced air shower that reach the ground-based observatory.
Each particle detector measures two quantities that are stored in the form of a Cartesian image (2D array with 14 x 14 pixels):
- T: arrival time of the first particles
- S: integrated signal

In this task, we are interested in reconstructing the cosmic-ray energy in EeV (exaelectronvolt = $10^{18}~\mathrm{eV}$) using the measured particle footprint and a convolutional neural network.
Enable a GPU for this task by clicking:
Edit -> Notebook settings -> select GPU as Hardware accelerator
The tutorial is inspired by the simulation and the study performed in https://doi.org/10.1016/j.astropartphys.2017.10.006.
import tensorflow as tf
from tensorflow import keras
import numpy as np
import seaborn as sns
from matplotlib import pyplot as plt
layers = keras.layers
print("keras", keras.__version__, "tf", tf.__version__)
keras 2.6.0 tf 2.6.0
import os
import gdown
url = "https://drive.google.com/u/0/uc?export=download&confirm=HgGH&id=19gOSVQ0mhdjMkm5i2u5NsGrWNBdlhR4F"
output = 'airshowers_mnist.npz'
if not os.path.exists(output):
    gdown.download(url, output, quiet=False)
f = np.load(output)
Our first input is the arrival times of the first shower particles, measured at the various stations. We normalize the arrival times with respect to the mean and the standard deviation of the arrival time distribution.
Remember that each input has the shape of 14 x 14, as each station measures a specific arrival time.
# time map
T = f['time']
T -= np.nanmean(T)
T /= np.nanstd(T)
T[np.isnan(T)] = 0
print(T.shape)
(100000, 14, 14)
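The normalization recipe above (standardize over all stations and events, then map silent stations from NaN to zero) can be checked on a small synthetic array; the shapes and values here are made up for illustration:

```python
import numpy as np

# Toy "time map": 3 events of 4x4 stations; NaN marks a silent station.
rng = np.random.default_rng(0)
T = rng.normal(loc=5.0, scale=2.0, size=(3, 4, 4))
T[0, 0, 0] = np.nan

# Same recipe as above: standardize using nan-aware statistics,
# then set silent stations (NaN) to zero.
T = (T - np.nanmean(T)) / np.nanstd(T)
T[np.isnan(T)] = 0

print(np.isnan(T).any())  # -> False: no NaNs remain
```

The nan-aware functions are essential here: a plain `np.mean`/`np.std` would return NaN as soon as a single station is silent.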
To visualize a map of arrival times, we can plot a random example event. Here, dark blue indicates an early trigger and bright yellow a late one.
With this information, one can directly get an impression of the shower axis (arrival direction) of the arriving particle shower.
nsamples=len(T)
random_samples = np.random.choice(nsamples, 4)
def rectangular_array(n=14):
    """ Return x,y coordinates for rectangular array with n^2 stations. """
    n0 = (n - 1) / 2
    return (np.mgrid[0:n, 0:n].astype(float) - n0)

plt.figure(figsize=(9, 7))
for i, j in enumerate(random_samples):
    plt.subplot(2, 2, i + 1)
    footprint = T[j, ...]
    xd, yd = rectangular_array()
    mask = footprint != 0
    mask[5, 5] = True
    marker_size = 50 * footprint[mask]
    plot = plt.scatter(xd, yd, c='grey', s=10, alpha=0.3, label="silent")
    circles = plt.scatter(xd[mask], yd[mask], c=footprint[mask],
                          cmap="viridis", alpha=1, label="loud",
                          edgecolors="k", linewidths=0.3)
    cbar = plt.colorbar(circles)
    cbar.set_label('normalized time [a.u.]')
    plt.xlabel("x / km")
    plt.ylabel("y / km")
    plt.grid(True)
plt.tight_layout()
plt.show()
In the following, we inspect the measured signals of the detectors. Again, we have 14 x 14 measurements (a few are zero as no signal was measured) that form a single event. We process the signal using a logarithmic re-scaling.
# signal map
S = f['signal']
S = np.log10(1 + S)
S -= np.nanmin(S)
S /= np.nanmax(S)
S[np.isnan(S)] = 0
print(S.shape)
(100000, 14, 14)
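The logarithmic re-scaling compresses the large dynamic range of the signals and maps all values into [0, 1]. A minimal sketch on a made-up 2 x 2 signal map:

```python
import numpy as np

# Toy signal map: non-negative signals, NaN marks a silent station.
S = np.array([[0.0, 10.0], [100.0, np.nan]])
S = np.log10(1 + S)   # compress the dynamic range
S -= np.nanmin(S)     # shift so the smallest value is 0
S /= np.nanmax(S)     # scale so the largest value is 1
S[np.isnan(S)] = 0    # silent stations -> 0
print(S.round(3))
```

After this transformation a station with 10x the signal of another no longer dominates the input by a factor of 10, which typically helps the network converge.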
plt.figure(figsize=(9, 7))
for i, j in enumerate(random_samples):
    plt.subplot(2, 2, i + 1)
    footprint = S[j, ...]
    xd, yd = rectangular_array()
    mask = footprint != 0
    mask[5, 5] = True
    marker_size = 130 * footprint[mask] + 20
    plot = plt.scatter(xd, yd, c='grey', s=10, alpha=0.4, label="silent")
    circles = plt.scatter(xd[mask], yd[mask], c=footprint[mask], s=marker_size,
                          cmap="autumn_r", alpha=1, label="loud",
                          edgecolors="k", linewidths=0.4)
    cbar = plt.colorbar(circles)
    cbar.set_label('normalized signals [a.u.]')
    plt.xlabel("x / km")
    plt.ylabel("y / km")
    plt.grid(True)
plt.tight_layout()
plt.show()
Now, we can prepare the targets of our neural network training.
axis = f['showeraxis']
core = f['showercore'][:, 0:2]
core /= 750
# energy - log10(E/eV) in range 3 - 100 EeV
energy = f['energy']
print(energy)
[ 4.00461776 5.99159294 7.49367551 ... 10.81657107 5.07542798 43.32910954]
We further split the data into one large training set and a smaller test set. This test set is used only once, for the final evaluation of the model, to ensure an unbiased performance measure.
Before splitting, we build our input X by stacking the maps of arrival times and the maps of signals along a new channel axis.
X = np.stack([T, S], axis=-1)
X_train, X_test = np.split(X, [-20000])
energy_train, energy_test = np.split(energy, [-20000])
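The negative index passed to `np.split` counts from the end of the array, so the last 20,000 events become the test set. A minimal sketch of this splitting idiom:

```python
import numpy as np

data = np.arange(10)
train, test = np.split(data, [-3])  # everything but the last 3 / the last 3
print(train, test)  # -> [0 1 2 3 4 5 6] [7 8 9]
```

Both splits are views with a consistent order, so applying the same split to `X` and `energy` keeps inputs and labels aligned.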
Now, we will set up a neural network to reconstruct the energy of the particle shower. In this step, we define the architecture of the model.
Modification section
Feel free to modify the model. For example:
- Change the number of nodes (but remember that the number of weights scales with n x n; also, the final layer has to have exactly one node, as we are reconstructing the energy, which is a scalar).
- Change the activation function, e.g., use relu, tanh, sigmoid, softplus, elu (take care not to use an activation function for the final layer!).
- Add new convolutional layers.
- Add new pooling layers, followed by further convolutional layers.
- Add fully-connected layers (remember to use Flatten() before).
- Increase the Dropout fraction, or place Dropout between several layers if you observe overtraining (validation loss increases).
model = keras.models.Sequential(name="energy_regression_CNN")
kwargs = dict(kernel_initializer="he_normal", padding="same",)
model.add(layers.Conv2D(16, (3, 3), activation="relu", input_shape=X_train.shape[1:], **kwargs))
model.add(layers.AveragePooling2D((2, 2)))
model.add(layers.Conv2D(32, (3, 3), activation="relu", **kwargs))
model.add(layers.Flatten())
model.add(layers.Dropout(0.3))
model.add(layers.Dense(1))
We can have a deeper look at our designed model and inspect the number of adaptive parameters by printing the model summary.
print(model.summary())
Model: "energy_regression_CNN"
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
conv2d (Conv2D)              (None, 14, 14, 16)        304
_________________________________________________________________
average_pooling2d (AveragePo (None, 7, 7, 16)          0
_________________________________________________________________
conv2d_1 (Conv2D)            (None, 7, 7, 32)          4640
_________________________________________________________________
flatten (Flatten)            (None, 1568)              0
_________________________________________________________________
dropout (Dropout)            (None, 1568)              0
_________________________________________________________________
dense (Dense)                (None, 1)                 1569
=================================================================
Total params: 6,513
Trainable params: 6,513
Non-trainable params: 0
_________________________________________________________________
None
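The parameter counts in the summary can be reproduced by hand: a Conv2D layer has (kernel height x kernel width x input channels + 1 bias) parameters per filter, and a Dense layer has (inputs + 1 bias) parameters per output node.

```python
# Reproduce the parameter counts from the model summary by hand.
conv1 = (3 * 3 * 2 + 1) * 16    # 3x3 kernel, 2 input channels (time, signal), 16 filters
conv2 = (3 * 3 * 16 + 1) * 32   # 16 feature maps in, 32 filters
dense = (7 * 7 * 32 + 1) * 1    # flattened 7x7x32 feature map -> single energy output
print(conv1, conv2, dense, conv1 + conv2 + dense)  # -> 304 4640 1569 6513
```

Note that pooling, Flatten, and Dropout layers contribute no trainable parameters.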
We now compile the model to prepare it for the training. During the compile step, we set a loss/objective function (mean_squared_error, as energy reconstruction is a regression task) and an optimizer. In this case, we use the Adam optimizer with a learning rate of 0.001.
To monitor the resolution and the bias of the model, we add them as metrics.
def resolution(y_true, y_pred):
    """ Metric to monitor the standard deviation of the residuals """
    mean, var = tf.nn.moments((y_true - y_pred), axes=[0])
    return tf.sqrt(var)

def bias(y_true, y_pred):
    """ Metric to monitor the mean of the residuals """
    mean, var = tf.nn.moments((y_true - y_pred), axes=[0])
    return mean
model.compile(
    loss='mean_squared_error',
    metrics=[bias, resolution],
    optimizer=keras.optimizers.Adam(learning_rate=1e-3))
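The two metrics are simply the mean and the (population) standard deviation of the residuals. A NumPy sketch of the same statistics, on made-up toy values:

```python
import numpy as np

# Toy true/predicted energies (illustrative values only).
y_true = np.array([10.0, 20.0, 30.0])
y_pred = np.array([11.0, 19.0, 33.0])

residuals = y_true - y_pred   # same sign convention as the metrics above
bias = residuals.mean()       # systematic offset of the reconstruction
resolution = residuals.std()  # spread of the reconstruction errors
print(bias, resolution)
```

A model can have a small bias but a poor resolution (errors cancel on average), which is why both quantities are monitored separately.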
We can now run the training for 20 epochs (i.e., 20 iterations over the full dataset) using our training data X_train and our energy labels energy_train.
For each update step (calculating the gradients and updating the model parameters), we use 128 samples (batch_size).
We can furthermore set the fraction of validation data (initially set to 0.1) that is used to monitor overtraining.
Modification section
Feel free to modify the training procedure, for example:
- Change (increase) the number of epochs.
- Modify the batch size via batch_size.
fit = model.fit(
    X_train,
    energy_train,
    batch_size=128,
    epochs=20,
    verbose=2,
    validation_split=0.1,
)
Epoch 1/20
563/563 - 4s - loss: 325.0489 - bias: 0.1453 - resolution: 16.1507 - val_loss: 41.0953 - val_bias: -1.8066e+00 - val_resolution: 6.0990
Epoch 2/20
563/563 - 2s - loss: 26.2303 - bias: 0.2216 - resolution: 4.9965 - val_loss: 14.7234 - val_bias: 0.0060 - val_resolution: 3.7989
Epoch 3/20
563/563 - 2s - loss: 16.7429 - bias: 0.2401 - resolution: 4.0058 - val_loss: 10.8091 - val_bias: 0.5168 - val_resolution: 3.2092
Epoch 4/20
563/563 - 2s - loss: 14.0478 - bias: 0.1709 - resolution: 3.6538 - val_loss: 9.3699 - val_bias: 0.6542 - val_resolution: 2.9514
Epoch 5/20
563/563 - 2s - loss: 12.3605 - bias: 0.1044 - resolution: 3.4159 - val_loss: 8.2507 - val_bias: -3.3918e-01 - val_resolution: 2.8145
Epoch 6/20
563/563 - 2s - loss: 11.3349 - bias: 0.0742 - resolution: 3.2860 - val_loss: 7.8655 - val_bias: -5.4563e-01 - val_resolution: 2.7143
Epoch 7/20
563/563 - 2s - loss: 11.1673 - bias: 0.0648 - resolution: 3.2315 - val_loss: 7.1230 - val_bias: -2.4382e-01 - val_resolution: 2.6214
Epoch 8/20
563/563 - 2s - loss: 10.6519 - bias: 0.0579 - resolution: 3.1750 - val_loss: 6.8807 - val_bias: -2.6234e-01 - val_resolution: 2.5734
Epoch 9/20
563/563 - 2s - loss: 10.4120 - bias: 0.0506 - resolution: 3.1250 - val_loss: 7.0425 - val_bias: 0.1974 - val_resolution: 2.6064
Epoch 10/20
563/563 - 2s - loss: 10.0534 - bias: 0.0560 - resolution: 3.0791 - val_loss: 6.4224 - val_bias: -4.1669e-02 - val_resolution: 2.4973
Epoch 11/20
563/563 - 2s - loss: 9.6740 - bias: 0.0448 - resolution: 3.0231 - val_loss: 6.5501 - val_bias: -3.7965e-01 - val_resolution: 2.4976
Epoch 12/20
563/563 - 2s - loss: 9.5946 - bias: 0.0489 - resolution: 3.0021 - val_loss: 6.4533 - val_bias: 0.1407 - val_resolution: 2.5026
Epoch 13/20
563/563 - 2s - loss: 9.2457 - bias: 0.0452 - resolution: 2.9682 - val_loss: 6.3114 - val_bias: 0.0585 - val_resolution: 2.4759
Epoch 14/20
563/563 - 2s - loss: 9.2130 - bias: 0.0433 - resolution: 2.9524 - val_loss: 6.5115 - val_bias: 0.6209 - val_resolution: 2.4414
Epoch 15/20
563/563 - 2s - loss: 8.9590 - bias: 0.0444 - resolution: 2.9150 - val_loss: 6.8284 - val_bias: 0.1688 - val_resolution: 2.5692
Epoch 16/20
563/563 - 2s - loss: 9.0625 - bias: 0.0413 - resolution: 2.9099 - val_loss: 6.0376 - val_bias: -4.2200e-01 - val_resolution: 2.3852
Epoch 17/20
563/563 - 2s - loss: 8.6702 - bias: 0.0395 - resolution: 2.8638 - val_loss: 5.9992 - val_bias: 0.2017 - val_resolution: 2.4051
Epoch 18/20
563/563 - 2s - loss: 8.8129 - bias: 0.0379 - resolution: 2.8662 - val_loss: 7.5701 - val_bias: -1.1417e+00 - val_resolution: 2.4691
Epoch 19/20
563/563 - 2s - loss: 8.8486 - bias: 0.0373 - resolution: 2.8626 - val_loss: 5.8710 - val_bias: 0.3706 - val_resolution: 2.3622
Epoch 20/20
563/563 - 2s - loss: 8.5313 - bias: 0.0360 - resolution: 2.8242 - val_loss: 5.5853 - val_bias: -2.3838e-01 - val_resolution: 2.3197
fig, ax = plt.subplots(1, figsize=(8,5))
n = np.arange(len(fit.history['loss']))
ax.plot(n, fit.history['loss'], ls='--', c='k', label='loss')
ax.plot(n, fit.history['val_loss'], label='val_loss', c='k')
ax.set_xlabel('Epoch')
ax.set_ylabel('Loss')
ax.legend()
ax.semilogy()
ax.grid()
plt.show()
After training the model, we can use the test set X_test
to evaluate the performance of the DNN.
Particularly interesting are the resolution and the bias of the method. A bias close to 0 is desirable. A resolution below 3 EeV can be regarded as good.
energy_pred = model.predict(X_test, batch_size=128, verbose=1)[:,0]
157/157 [==============================] - 0s 2ms/step
diff = energy_pred - energy_test
resolution = np.std(diff)
plt.figure()
plt.hist(diff, bins=100)
plt.xlabel(r'$E_\mathrm{rec} - E_\mathrm{true}$')
plt.ylabel('# Events')
plt.text(0.95, 0.95, r'$\sigma = %.3f$ EeV' % resolution, ha='right', va='top', transform=plt.gca().transAxes)
plt.text(0.95, 0.85, r'$\mu = %.1f$ EeV' % diff.mean(), ha='right', va='top', transform=plt.gca().transAxes)
plt.grid()
plt.xlim(-10, 10)
plt.tight_layout()
After estimating the bias and the resolution, we can now inspect the reconstruction via a scatter plot.
Furthermore, we can study the energy dependence of the resolution. With increasing energy, the relative resolution improves due to the smaller sampling fluctuations of the ground-based particle detectors and the larger footprints.
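The binned relative resolution $\sigma_E/E$ can be sketched with plain NumPy; the data below are synthetic (an assumed error model where the absolute error grows slower than the energy), not the model's actual predictions:

```python
import numpy as np

# Synthetic stand-in for (true, predicted) energies in EeV.
rng = np.random.default_rng(1)
E_true = rng.uniform(3, 100, size=5000)
E_pred = E_true + rng.normal(0, 1 + 0.05 * E_true)  # error grows slower than E

bins = np.geomspace(3, 100, 6)        # log-spaced energy bins
idx = np.digitize(E_true, bins)
rel = []
for k in range(1, len(bins)):
    sel = idx == k
    rel.append(np.std(E_pred[sel] - E_true[sel]) / E_true[sel].mean())
print([round(r, 3) for r in rel])     # sigma/E per bin, falling with energy
```

This is the same per-bin standard deviation that `sns.regplot` with `x_estimator=np.std` computes in the plot below.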
x = [3, 10, 30, 100]
labels = ["3", "10", "30", "100"]
diff = energy_pred - energy_test
# Embedding plot
fig, axes = plt.subplots(1, 2, figsize=(20, 9))
axes[0].scatter(energy_test, energy_pred, s=3, alpha=0.60)
axes[0].set_xlabel(r"$E_{true}\;/\;\mathrm{EeV}$")
axes[0].set_ylabel(r"$E_{DNN}\;/\;\mathrm{EeV}$")
stat_box = r"$\mu = $ %.3f" % np.mean(diff) + " / EeV" + "\n" + r"$\sigma = $ %.3f" % np.std(diff) + " / EeV"
axes[0].text(0.95, 0.2, stat_box, verticalalignment="top", horizontalalignment="right",
transform=axes[0].transAxes, backgroundcolor="w")
axes[0].plot([np.min(energy_test), np.max(energy_test)],
[np.min(energy_test), np.max(energy_test)], color="red")
sns.regplot(x=energy_test, y=diff / energy_test, x_estimator=np.std, x_bins=12,
fit_reg=False, color="royalblue", ax=axes[1])
axes[1].tick_params(axis="both", which="major")
axes[1].set(xscale="log")
plt.xticks(x, labels)
axes[1].set_xlabel(r"$E_{true}\;/\;\mathrm{EeV}$")
axes[1].set_ylabel(r"$\sigma_{E}/E$")
axes[1].set_ylim(0, 0.2)
plt.show()