Demo: Neural network training for denoising of Tribolium castaneum

This notebook demonstrates training a CARE model for a 3D denoising task, assuming that training data has already been generated via 1_datagen.ipynb and saved to disk as data/my_training_data.npz.

Note that a network intended for actual use should be trained on more (and more representative) data and for a longer time.

More documentation is available at http://csbdeep.bioimagecomputing.com/doc/.

In [1]:
from __future__ import print_function, unicode_literals, absolute_import, division
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
%config InlineBackend.figure_format = 'retina'

from tifffile import imread
from csbdeep.utils import axes_dict, plot_some, plot_history
from csbdeep.utils.tf import limit_gpu_memory
from csbdeep.io import load_training_data
from csbdeep.models import Config, CARE
Using TensorFlow backend.

By default, the TensorFlow backend allocates all available GPU memory; hence, it can be useful to limit the fraction it may use:

In [2]:
# uncomment to restrict TensorFlow to (roughly) half of the GPU memory:
# limit_gpu_memory(fraction=1/2)

Training data

Load the training data generated via 1_datagen.ipynb and set aside 10% of it as validation data.

In [3]:
(X,Y), (X_val,Y_val), axes = load_training_data('data/my_training_data.npz', validation_split=0.1, verbose=True)

c = axes_dict(axes)['C']
n_channel_in, n_channel_out = X.shape[c], Y.shape[c]
number of training images:	 922
number of validation images:	 102
image size (3D):		 (16, 64, 64)
axes:				 SZYXC
channels in / out:		 1 / 1
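The patch counts above follow from the 10% validation split. A minimal sketch of the arithmetic, assuming the validation count is rounded to the nearest integer (check the csbdeep source for the exact rule):

```python
# Sketch of the validation_split=0.1 arithmetic for this dataset
# (assumption: the validation count is rounded to the nearest integer).
n_total = 922 + 102                               # patches in my_training_data.npz
validation_split = 0.1
n_val = int(round(validation_split * n_total))    # validation patches
n_train = n_total - n_val                         # remaining training patches
print(n_train, n_val)                             # -> 922 102
```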
In [4]:
plt.figure(figsize=(12,5))
plot_some(X_val[:5],Y_val[:5])
plt.suptitle('5 example validation patches (top row: source, bottom row: target)');

CARE model

Before we construct the actual CARE model, we have to define its configuration via a Config object, which includes

  • parameters of the underlying neural network,
  • the learning rate,
  • the number of parameter updates per epoch,
  • the loss function, and
  • whether the model is probabilistic or not.

The defaults should be sensible in many cases; changing them should only be necessary if the training process fails.


Important: Note that for this notebook we use a very small number of update steps per epoch for immediate feedback, whereas this number should be increased considerably (e.g. train_steps_per_epoch=400) to obtain a well-trained model.
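To put that in numbers, a quick back-of-the-envelope comparison using the defaults from this notebook's configuration (batch size 16, 100 epochs):

```python
# Compare the demo training budget with the recommended one
# (batch size and epoch count taken from the Config defaults in this notebook).
batch_size = 16
epochs = 100
demo_updates = epochs * 10           # train_steps_per_epoch=10  -> 1000 parameter updates
recommended_updates = epochs * 400   # train_steps_per_epoch=400 -> 40000 parameter updates
patches_seen_demo = demo_updates * batch_size         # 16000 patch draws in total
patches_seen_reco = recommended_updates * batch_size  # 640000 patch draws in total
print(demo_updates, recommended_updates)
```

With only 10 steps per epoch, the network draws about 16000 patches over the whole run, i.e. it revisits the 922 training patches only a handful of times.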

In [5]:
config = Config(axes, n_channel_in, n_channel_out, train_steps_per_epoch=10)
print(config)
vars(config)
Config(axes='ZYXC', n_channel_in=1, n_channel_out=1, n_dim=3, probabilistic=False, train_batch_size=16, train_checkpoint='weights_best.h5', train_checkpoint_epoch='weights_now.h5', train_checkpoint_last='weights_last.h5', train_epochs=100, train_learning_rate=0.0004, train_loss='mae', train_reduce_lr={'factor': 0.5, 'patience': 10, 'min_delta': 0}, train_steps_per_epoch=10, train_tensorboard=True, unet_input_shape=(None, None, None, 1), unet_kern_size=3, unet_last_activation='linear', unet_n_depth=2, unet_n_first=32, unet_residual=True)
Out[5]:
{'axes': 'ZYXC',
 'n_channel_in': 1,
 'n_channel_out': 1,
 'n_dim': 3,
 'probabilistic': False,
 'train_batch_size': 16,
 'train_checkpoint': 'weights_best.h5',
 'train_checkpoint_epoch': 'weights_now.h5',
 'train_checkpoint_last': 'weights_last.h5',
 'train_epochs': 100,
 'train_learning_rate': 0.0004,
 'train_loss': 'mae',
 'train_reduce_lr': {'factor': 0.5, 'min_delta': 0, 'patience': 10},
 'train_steps_per_epoch': 10,
 'train_tensorboard': True,
 'unet_input_shape': (None, None, None, 1),
 'unet_kern_size': 3,
 'unet_last_activation': 'linear',
 'unet_n_depth': 2,
 'unet_n_first': 32,
 'unet_residual': True}

We now create a CARE model with the chosen configuration:

In [6]:
model = CARE(config, 'my_model', basedir='models')

Training

Training the model will likely take some time. We recommend monitoring the progress with TensorBoard (example below), which allows you to inspect the losses during training. Furthermore, you can look at the predictions for some of the validation images, which can help you recognize problems early on.

You can start TensorBoard from the current working directory with the command tensorboard --logdir=. Then open http://localhost:6006/ in your browser.

In [7]:
history = model.train(X,Y, validation_data=(X_val,Y_val))
Epoch 1/100
10/10 [==============================] - 8s 777ms/step - loss: 0.1431 - mse: 0.0529 - mae: 0.1431 - val_loss: 0.1340 - val_mse: 0.0484 - val_mae: 0.1340
Epoch 2/100
10/10 [==============================] - 5s 454ms/step - loss: 0.1189 - mse: 0.0395 - mae: 0.1189 - val_loss: 0.1107 - val_mse: 0.0366 - val_mae: 0.1107
Epoch 3/100
10/10 [==============================] - 5s 538ms/step - loss: 0.1022 - mse: 0.0294 - mae: 0.1022 - val_loss: 0.0927 - val_mse: 0.0257 - val_mae: 0.0927
Epoch 4/100
10/10 [==============================] - 5s 457ms/step - loss: 0.0897 - mse: 0.0223 - mae: 0.0897 - val_loss: 0.0881 - val_mse: 0.0231 - val_mae: 0.0881
Epoch 5/100
10/10 [==============================] - 5s 459ms/step - loss: 0.0799 - mse: 0.0185 - mae: 0.0799 - val_loss: 0.0781 - val_mse: 0.0171 - val_mae: 0.0781
Epoch 6/100
10/10 [==============================] - 5s 469ms/step - loss: 0.0794 - mse: 0.0177 - mae: 0.0794 - val_loss: 0.0736 - val_mse: 0.0155 - val_mae: 0.0736
Epoch 7/100
10/10 [==============================] - 5s 490ms/step - loss: 0.0712 - mse: 0.0144 - mae: 0.0712 - val_loss: 0.0803 - val_mse: 0.0215 - val_mae: 0.0803
Epoch 8/100
10/10 [==============================] - 5s 505ms/step - loss: 0.0725 - mse: 0.0165 - mae: 0.0725 - val_loss: 0.0695 - val_mse: 0.0148 - val_mae: 0.0695
Epoch 9/100
10/10 [==============================] - 5s 505ms/step - loss: 0.0622 - mse: 0.0124 - mae: 0.0622 - val_loss: 0.0672 - val_mse: 0.0134 - val_mae: 0.0672
Epoch 10/100
10/10 [==============================] - 5s 505ms/step - loss: 0.0760 - mse: 0.0155 - mae: 0.0760 - val_loss: 0.0757 - val_mse: 0.0176 - val_mae: 0.0757
Epoch 11/100
10/10 [==============================] - 5s 511ms/step - loss: 0.0734 - mse: 0.0170 - mae: 0.0734 - val_loss: 0.0801 - val_mse: 0.0213 - val_mae: 0.0801
Epoch 12/100
10/10 [==============================] - 5s 505ms/step - loss: 0.0689 - mse: 0.0146 - mae: 0.0689 - val_loss: 0.0699 - val_mse: 0.0139 - val_mae: 0.0699
Epoch 13/100
10/10 [==============================] - 5s 506ms/step - loss: 0.0658 - mse: 0.0138 - mae: 0.0658 - val_loss: 0.0656 - val_mse: 0.0134 - val_mae: 0.0656
Epoch 14/100
10/10 [==============================] - 5s 505ms/step - loss: 0.0623 - mse: 0.0121 - mae: 0.0623 - val_loss: 0.0643 - val_mse: 0.0123 - val_mae: 0.0643
Epoch 15/100
10/10 [==============================] - 5s 488ms/step - loss: 0.0624 - mse: 0.0120 - mae: 0.0624 - val_loss: 0.0631 - val_mse: 0.0120 - val_mae: 0.0631
Epoch 16/100
10/10 [==============================] - 5s 505ms/step - loss: 0.0639 - mse: 0.0126 - mae: 0.0639 - val_loss: 0.0619 - val_mse: 0.0123 - val_mae: 0.0619
Epoch 17/100
10/10 [==============================] - 5s 504ms/step - loss: 0.0559 - mse: 0.0099 - mae: 0.0559 - val_loss: 0.0612 - val_mse: 0.0117 - val_mae: 0.0612
Epoch 18/100
10/10 [==============================] - 5s 505ms/step - loss: 0.0584 - mse: 0.0111 - mae: 0.0584 - val_loss: 0.0592 - val_mse: 0.0114 - val_mae: 0.0592
Epoch 19/100
10/10 [==============================] - 5s 512ms/step - loss: 0.0560 - mse: 0.0104 - mae: 0.0560 - val_loss: 0.0580 - val_mse: 0.0109 - val_mae: 0.0580
Epoch 20/100
10/10 [==============================] - 5s 504ms/step - loss: 0.0571 - mse: 0.0102 - mae: 0.0571 - val_loss: 0.0559 - val_mse: 0.0103 - val_mae: 0.0559
Epoch 21/100
10/10 [==============================] - 5s 507ms/step - loss: 0.0535 - mse: 0.0098 - mae: 0.0535 - val_loss: 0.0589 - val_mse: 0.0111 - val_mae: 0.0589
Epoch 22/100
10/10 [==============================] - 5s 509ms/step - loss: 0.0555 - mse: 0.0102 - mae: 0.0555 - val_loss: 0.0569 - val_mse: 0.0101 - val_mae: 0.0569
Epoch 23/100
10/10 [==============================] - 5s 490ms/step - loss: 0.0521 - mse: 0.0090 - mae: 0.0521 - val_loss: 0.0557 - val_mse: 0.0101 - val_mae: 0.0557
Epoch 24/100
10/10 [==============================] - 5s 505ms/step - loss: 0.0527 - mse: 0.0092 - mae: 0.0527 - val_loss: 0.0579 - val_mse: 0.0098 - val_mae: 0.0579
Epoch 25/100
10/10 [==============================] - 5s 502ms/step - loss: 0.0519 - mse: 0.0089 - mae: 0.0519 - val_loss: 0.0559 - val_mse: 0.0102 - val_mae: 0.0559
Epoch 26/100
10/10 [==============================] - 5s 503ms/step - loss: 0.0544 - mse: 0.0097 - mae: 0.0544 - val_loss: 0.0544 - val_mse: 0.0105 - val_mae: 0.0544
Epoch 27/100
10/10 [==============================] - 5s 503ms/step - loss: 0.0532 - mse: 0.0093 - mae: 0.0532 - val_loss: 0.0545 - val_mse: 0.0094 - val_mae: 0.0545
Epoch 28/100
10/10 [==============================] - 5s 488ms/step - loss: 0.0477 - mse: 0.0080 - mae: 0.0477 - val_loss: 0.0520 - val_mse: 0.0089 - val_mae: 0.0520
Epoch 29/100
10/10 [==============================] - 5s 503ms/step - loss: 0.0496 - mse: 0.0084 - mae: 0.0496 - val_loss: 0.0509 - val_mse: 0.0088 - val_mae: 0.0509
Epoch 30/100
10/10 [==============================] - 5s 503ms/step - loss: 0.0510 - mse: 0.0087 - mae: 0.0510 - val_loss: 0.0501 - val_mse: 0.0085 - val_mae: 0.0501
Epoch 31/100
10/10 [==============================] - 5s 490ms/step - loss: 0.0490 - mse: 0.0081 - mae: 0.0490 - val_loss: 0.0538 - val_mse: 0.0102 - val_mae: 0.0538
Epoch 32/100
10/10 [==============================] - 5s 505ms/step - loss: 0.0499 - mse: 0.0084 - mae: 0.0499 - val_loss: 0.0660 - val_mse: 0.0141 - val_mae: 0.0660
Epoch 33/100
10/10 [==============================] - 5s 505ms/step - loss: 0.0525 - mse: 0.0092 - mae: 0.0525 - val_loss: 0.0534 - val_mse: 0.0098 - val_mae: 0.0534
Epoch 34/100
10/10 [==============================] - 5s 504ms/step - loss: 0.0500 - mse: 0.0086 - mae: 0.0500 - val_loss: 0.0509 - val_mse: 0.0086 - val_mae: 0.0509
Epoch 35/100
10/10 [==============================] - 5s 503ms/step - loss: 0.0480 - mse: 0.0079 - mae: 0.0480 - val_loss: 0.0499 - val_mse: 0.0084 - val_mae: 0.0499
Epoch 36/100
10/10 [==============================] - 5s 504ms/step - loss: 0.0478 - mse: 0.0078 - mae: 0.0478 - val_loss: 0.0491 - val_mse: 0.0082 - val_mae: 0.0491
Epoch 37/100
10/10 [==============================] - 5s 505ms/step - loss: 0.0486 - mse: 0.0080 - mae: 0.0486 - val_loss: 0.0493 - val_mse: 0.0084 - val_mae: 0.0493
Epoch 38/100
10/10 [==============================] - 5s 488ms/step - loss: 0.0477 - mse: 0.0076 - mae: 0.0477 - val_loss: 0.0488 - val_mse: 0.0082 - val_mae: 0.0488
Epoch 39/100
10/10 [==============================] - 5s 502ms/step - loss: 0.0434 - mse: 0.0067 - mae: 0.0434 - val_loss: 0.0494 - val_mse: 0.0084 - val_mae: 0.0494
Epoch 40/100
10/10 [==============================] - 5s 504ms/step - loss: 0.0457 - mse: 0.0072 - mae: 0.0457 - val_loss: 0.0497 - val_mse: 0.0078 - val_mae: 0.0497
Epoch 41/100
10/10 [==============================] - 5s 503ms/step - loss: 0.0459 - mse: 0.0073 - mae: 0.0459 - val_loss: 0.0482 - val_mse: 0.0077 - val_mae: 0.0482
Epoch 42/100
10/10 [==============================] - 5s 504ms/step - loss: 0.0473 - mse: 0.0077 - mae: 0.0473 - val_loss: 0.0492 - val_mse: 0.0081 - val_mae: 0.0492
Epoch 43/100
10/10 [==============================] - 5s 505ms/step - loss: 0.0465 - mse: 0.0073 - mae: 0.0465 - val_loss: 0.0485 - val_mse: 0.0084 - val_mae: 0.0485
Epoch 44/100
10/10 [==============================] - 5s 489ms/step - loss: 0.0474 - mse: 0.0075 - mae: 0.0474 - val_loss: 0.0477 - val_mse: 0.0076 - val_mae: 0.0477
Epoch 45/100
10/10 [==============================] - 5s 503ms/step - loss: 0.0435 - mse: 0.0067 - mae: 0.0435 - val_loss: 0.0475 - val_mse: 0.0078 - val_mae: 0.0475
Epoch 46/100
10/10 [==============================] - 5s 503ms/step - loss: 0.0434 - mse: 0.0068 - mae: 0.0434 - val_loss: 0.0483 - val_mse: 0.0074 - val_mae: 0.0483
Epoch 47/100
10/10 [==============================] - 5s 505ms/step - loss: 0.0473 - mse: 0.0075 - mae: 0.0473 - val_loss: 0.0482 - val_mse: 0.0075 - val_mae: 0.0482
Epoch 48/100
10/10 [==============================] - 5s 503ms/step - loss: 0.0430 - mse: 0.0064 - mae: 0.0430 - val_loss: 0.0486 - val_mse: 0.0074 - val_mae: 0.0486
Epoch 49/100
10/10 [==============================] - 5s 488ms/step - loss: 0.0487 - mse: 0.0079 - mae: 0.0487 - val_loss: 0.0469 - val_mse: 0.0075 - val_mae: 0.0469
Epoch 50/100
10/10 [==============================] - 5s 504ms/step - loss: 0.0444 - mse: 0.0068 - mae: 0.0444 - val_loss: 0.0464 - val_mse: 0.0073 - val_mae: 0.0464
Epoch 51/100
10/10 [==============================] - 5s 505ms/step - loss: 0.0432 - mse: 0.0064 - mae: 0.0432 - val_loss: 0.0462 - val_mse: 0.0073 - val_mae: 0.0462
Epoch 52/100
10/10 [==============================] - 5s 503ms/step - loss: 0.0459 - mse: 0.0071 - mae: 0.0459 - val_loss: 0.0479 - val_mse: 0.0083 - val_mae: 0.0479
Epoch 53/100
10/10 [==============================] - 5s 503ms/step - loss: 0.0429 - mse: 0.0064 - mae: 0.0429 - val_loss: 0.0473 - val_mse: 0.0073 - val_mae: 0.0473
Epoch 54/100
10/10 [==============================] - 5s 503ms/step - loss: 0.0445 - mse: 0.0068 - mae: 0.0445 - val_loss: 0.0465 - val_mse: 0.0073 - val_mae: 0.0465
Epoch 55/100
10/10 [==============================] - 5s 489ms/step - loss: 0.0458 - mse: 0.0070 - mae: 0.0458 - val_loss: 0.0465 - val_mse: 0.0071 - val_mae: 0.0465
Epoch 56/100
10/10 [==============================] - 5s 510ms/step - loss: 0.0445 - mse: 0.0067 - mae: 0.0445 - val_loss: 0.0465 - val_mse: 0.0072 - val_mae: 0.0465
Epoch 57/100
10/10 [==============================] - 5s 505ms/step - loss: 0.0434 - mse: 0.0066 - mae: 0.0434 - val_loss: 0.0458 - val_mse: 0.0072 - val_mae: 0.0458
Epoch 58/100
10/10 [==============================] - 5s 503ms/step - loss: 0.0433 - mse: 0.0065 - mae: 0.0433 - val_loss: 0.0456 - val_mse: 0.0070 - val_mae: 0.0456
Epoch 59/100
10/10 [==============================] - 5s 505ms/step - loss: 0.0429 - mse: 0.0062 - mae: 0.0429 - val_loss: 0.0456 - val_mse: 0.0072 - val_mae: 0.0456
Epoch 60/100
10/10 [==============================] - 5s 510ms/step - loss: 0.0435 - mse: 0.0066 - mae: 0.0435 - val_loss: 0.0465 - val_mse: 0.0069 - val_mae: 0.0465
Epoch 61/100
10/10 [==============================] - 5s 504ms/step - loss: 0.0430 - mse: 0.0064 - mae: 0.0430 - val_loss: 0.0454 - val_mse: 0.0072 - val_mae: 0.0454
Epoch 62/100
10/10 [==============================] - 5s 505ms/step - loss: 0.0431 - mse: 0.0063 - mae: 0.0431 - val_loss: 0.0504 - val_mse: 0.0086 - val_mae: 0.0504
Epoch 63/100
10/10 [==============================] - 5s 504ms/step - loss: 0.0427 - mse: 0.0062 - mae: 0.0427 - val_loss: 0.0455 - val_mse: 0.0073 - val_mae: 0.0455
Epoch 64/100
10/10 [==============================] - 5s 496ms/step - loss: 0.0415 - mse: 0.0058 - mae: 0.0415 - val_loss: 0.0457 - val_mse: 0.0069 - val_mae: 0.0457
Epoch 65/100
10/10 [==============================] - 5s 504ms/step - loss: 0.0414 - mse: 0.0059 - mae: 0.0414 - val_loss: 0.0458 - val_mse: 0.0072 - val_mae: 0.0458
Epoch 66/100
10/10 [==============================] - 5s 505ms/step - loss: 0.0428 - mse: 0.0062 - mae: 0.0428 - val_loss: 0.0451 - val_mse: 0.0068 - val_mae: 0.0451
Epoch 67/100
10/10 [==============================] - 5s 504ms/step - loss: 0.0407 - mse: 0.0059 - mae: 0.0407 - val_loss: 0.0445 - val_mse: 0.0069 - val_mae: 0.0445
Epoch 68/100
10/10 [==============================] - 5s 503ms/step - loss: 0.0446 - mse: 0.0067 - mae: 0.0446 - val_loss: 0.0450 - val_mse: 0.0065 - val_mae: 0.0450
Epoch 69/100
10/10 [==============================] - 5s 503ms/step - loss: 0.0409 - mse: 0.0058 - mae: 0.0409 - val_loss: 0.0448 - val_mse: 0.0071 - val_mae: 0.0448
Epoch 70/100
10/10 [==============================] - 5s 502ms/step - loss: 0.0430 - mse: 0.0063 - mae: 0.0430 - val_loss: 0.0456 - val_mse: 0.0068 - val_mae: 0.0456
Epoch 71/100
10/10 [==============================] - 5s 504ms/step - loss: 0.0417 - mse: 0.0059 - mae: 0.0417 - val_loss: 0.0455 - val_mse: 0.0072 - val_mae: 0.0455
Epoch 72/100
10/10 [==============================] - 5s 503ms/step - loss: 0.0417 - mse: 0.0061 - mae: 0.0417 - val_loss: 0.0452 - val_mse: 0.0070 - val_mae: 0.0452
Epoch 73/100
10/10 [==============================] - 5s 504ms/step - loss: 0.0430 - mse: 0.0062 - mae: 0.0430 - val_loss: 0.0460 - val_mse: 0.0074 - val_mae: 0.0460
Epoch 74/100
10/10 [==============================] - 5s 503ms/step - loss: 0.0416 - mse: 0.0059 - mae: 0.0416 - val_loss: 0.0440 - val_mse: 0.0066 - val_mae: 0.0440
Epoch 75/100
10/10 [==============================] - 5s 501ms/step - loss: 0.0416 - mse: 0.0059 - mae: 0.0416 - val_loss: 0.0443 - val_mse: 0.0067 - val_mae: 0.0443
Epoch 76/100
10/10 [==============================] - 5s 488ms/step - loss: 0.0427 - mse: 0.0061 - mae: 0.0427 - val_loss: 0.0473 - val_mse: 0.0066 - val_mae: 0.0473
Epoch 77/100
10/10 [==============================] - 5s 489ms/step - loss: 0.0397 - mse: 0.0054 - mae: 0.0397 - val_loss: 0.0447 - val_mse: 0.0066 - val_mae: 0.0447
Epoch 78/100
10/10 [==============================] - 5s 502ms/step - loss: 0.0413 - mse: 0.0058 - mae: 0.0413 - val_loss: 0.0438 - val_mse: 0.0064 - val_mae: 0.0438
Epoch 79/100
10/10 [==============================] - 5s 506ms/step - loss: 0.0428 - mse: 0.0061 - mae: 0.0428 - val_loss: 0.0443 - val_mse: 0.0067 - val_mae: 0.0443
Epoch 80/100
10/10 [==============================] - 5s 505ms/step - loss: 0.0416 - mse: 0.0057 - mae: 0.0416 - val_loss: 0.0437 - val_mse: 0.0067 - val_mae: 0.0437
Epoch 81/100
10/10 [==============================] - 5s 503ms/step - loss: 0.0391 - mse: 0.0052 - mae: 0.0391 - val_loss: 0.0444 - val_mse: 0.0068 - val_mae: 0.0444
Epoch 82/100
10/10 [==============================] - 5s 506ms/step - loss: 0.0382 - mse: 0.0051 - mae: 0.0382 - val_loss: 0.0464 - val_mse: 0.0075 - val_mae: 0.0464
Epoch 83/100
10/10 [==============================] - 5s 505ms/step - loss: 0.0439 - mse: 0.0063 - mae: 0.0439 - val_loss: 0.0444 - val_mse: 0.0067 - val_mae: 0.0444
Epoch 84/100
10/10 [==============================] - 5s 508ms/step - loss: 0.0414 - mse: 0.0058 - mae: 0.0414 - val_loss: 0.0464 - val_mse: 0.0070 - val_mae: 0.0464
Epoch 85/100
10/10 [==============================] - 5s 505ms/step - loss: 0.0407 - mse: 0.0056 - mae: 0.0407 - val_loss: 0.0440 - val_mse: 0.0066 - val_mae: 0.0440
Epoch 86/100
10/10 [==============================] - 5s 504ms/step - loss: 0.0406 - mse: 0.0055 - mae: 0.0406 - val_loss: 0.0429 - val_mse: 0.0064 - val_mae: 0.0429
Epoch 87/100
10/10 [==============================] - 5s 491ms/step - loss: 0.0379 - mse: 0.0051 - mae: 0.0379 - val_loss: 0.0433 - val_mse: 0.0067 - val_mae: 0.0433
Epoch 88/100
10/10 [==============================] - 5s 507ms/step - loss: 0.0362 - mse: 0.0046 - mae: 0.0362 - val_loss: 0.0438 - val_mse: 0.0062 - val_mae: 0.0438
Epoch 89/100
10/10 [==============================] - 5s 504ms/step - loss: 0.0408 - mse: 0.0055 - mae: 0.0408 - val_loss: 0.0427 - val_mse: 0.0063 - val_mae: 0.0427
Epoch 90/100
10/10 [==============================] - 5s 504ms/step - loss: 0.0384 - mse: 0.0051 - mae: 0.0384 - val_loss: 0.0429 - val_mse: 0.0061 - val_mae: 0.0429
Epoch 91/100
10/10 [==============================] - 5s 489ms/step - loss: 0.0394 - mse: 0.0053 - mae: 0.0394 - val_loss: 0.0432 - val_mse: 0.0064 - val_mae: 0.0432
Epoch 92/100
10/10 [==============================] - 5s 506ms/step - loss: 0.0394 - mse: 0.0052 - mae: 0.0394 - val_loss: 0.0425 - val_mse: 0.0061 - val_mae: 0.0425
Epoch 93/100
10/10 [==============================] - 5s 503ms/step - loss: 0.0407 - mse: 0.0055 - mae: 0.0407 - val_loss: 0.0428 - val_mse: 0.0065 - val_mae: 0.0428
Epoch 94/100
10/10 [==============================] - 5s 489ms/step - loss: 0.0403 - mse: 0.0053 - mae: 0.0403 - val_loss: 0.0452 - val_mse: 0.0062 - val_mae: 0.0452
Epoch 95/100
10/10 [==============================] - 5s 504ms/step - loss: 0.0385 - mse: 0.0051 - mae: 0.0385 - val_loss: 0.0432 - val_mse: 0.0067 - val_mae: 0.0432
Epoch 96/100
10/10 [==============================] - 5s 504ms/step - loss: 0.0378 - mse: 0.0049 - mae: 0.0378 - val_loss: 0.0433 - val_mse: 0.0060 - val_mae: 0.0433
Epoch 97/100
10/10 [==============================] - 5s 503ms/step - loss: 0.0367 - mse: 0.0045 - mae: 0.0367 - val_loss: 0.0422 - val_mse: 0.0061 - val_mae: 0.0422
Epoch 98/100
10/10 [==============================] - 5s 503ms/step - loss: 0.0394 - mse: 0.0051 - mae: 0.0394 - val_loss: 0.0431 - val_mse: 0.0060 - val_mae: 0.0431
Epoch 99/100
10/10 [==============================] - 5s 508ms/step - loss: 0.0402 - mse: 0.0052 - mae: 0.0402 - val_loss: 0.0424 - val_mse: 0.0064 - val_mae: 0.0424
Epoch 100/100
10/10 [==============================] - 5s 505ms/step - loss: 0.0384 - mse: 0.0049 - mae: 0.0384 - val_loss: 0.0455 - val_mse: 0.0073 - val_mae: 0.0455

Loading network weights from 'weights_best.h5'.

Plot final training history (available in TensorBoard during training):

In [8]:
print(sorted(list(history.history.keys())))
plt.figure(figsize=(16,5))
plot_history(history,['loss','val_loss'],['mse','val_mse','mae','val_mae']);
['loss', 'lr', 'mae', 'mse', 'val_loss', 'val_mae', 'val_mse']

Evaluation

Example results for validation images.

In [9]:
plt.figure(figsize=(12,7))
_P = model.keras_model.predict(X_val[:5])
if config.probabilistic:
    _P = _P[...,:(_P.shape[-1]//2)]  # keep only the mean channels of the probabilistic output
plot_some(X_val[:5],Y_val[:5],_P,pmax=99.5)
plt.suptitle('5 example validation patches\n'      
             'top row: input (source),  '          
             'middle row: target (ground truth),  '
             'bottom row: predicted from source');
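The channel slicing above assumes that a probabilistic model concatenates per-pixel mean and scale parameters along the last axis, so the first half of the channels holds the means. A small numpy sketch of that split (the array here is a hypothetical stand-in for the network output):

```python
import numpy as np

# Hypothetical probabilistic output for 5 patches of shape (16, 64, 64)
# with 1 image channel: 2 output channels = (mean, scale) per pixel.
P = np.zeros((5, 16, 64, 64, 2))
mean = P[..., :P.shape[-1] // 2]   # first half of the channels: the means
scale = P[..., P.shape[-1] // 2:]  # second half: the scales
print(mean.shape, scale.shape)     # -> (5, 16, 64, 64, 1) (5, 16, 64, 64, 1)
```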

Export model to be used with CSBDeep Fiji plugins and KNIME workflows

See https://github.com/CSBDeep/CSBDeep_website/wiki/Your-Model-in-Fiji for details.

In [10]:
model.export_TF()
INFO:tensorflow:No assets to save.
INFO:tensorflow:No assets to write.
INFO:tensorflow:SavedModel written to: /tmp/tmpf6ymhkbo/model/saved_model.pb

Model exported in TensorFlow's SavedModel format:
/home/uschmidt/research/csbdeep/examples/examples/denoising3D/models/my_model/TF_SavedModel.zip