Demo: Neural network training for combined denoising and upsampling of synthetic 3D data

This notebook demonstrates training a CARE model for a combined denoising and upsampling task. It assumes that training data was already generated via 1_datagen.ipynb and saved to disk as data/my_training_data.npz. Note that the training procedure is exactly the same as for standard CARE; only the training data generation and the prediction step differ.

Note that training a neural network for actual use should be done on more (and more representative) data, and with more training time.

More documentation is available at http://csbdeep.bioimagecomputing.com/doc/.

In [1]:
from __future__ import print_function, unicode_literals, absolute_import, division
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
%config InlineBackend.figure_format = 'retina'

from tifffile import imread
from csbdeep.utils import axes_dict, plot_some, plot_history
from csbdeep.utils.tf import limit_gpu_memory
from csbdeep.io import load_training_data
from csbdeep.models import Config, UpsamplingCARE
Using TensorFlow backend.

The TensorFlow backend uses all available GPU memory by default, hence it can be useful to limit it:

In [2]:
# limit_gpu_memory(fraction=1/2)

Training data

Load the training data generated via 1_datagen.ipynb and use 10% of it as validation data.

In [3]:
(X,Y), (X_val,Y_val), axes = load_training_data('data/my_training_data.npz', validation_split=0.1, verbose=True)

c = axes_dict(axes)['C']
n_channel_in, n_channel_out = X.shape[c], Y.shape[c]
number of training images:	 1382
number of validation images:	 154
image size (3D):		 (32, 64, 64)
axes:				 SZYXC
channels in / out:		 1 / 1
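The validation_split argument holds out a fraction of the loaded patches for validation. A minimal numpy sketch of such a split (split_data is a hypothetical helper, not part of CSBDeep; the toy array shapes mirror the SZYXC patches above, with fewer samples for brevity):

```python
import numpy as np

def split_data(X, Y, validation_split=0.1):
    """Hypothetical helper: hold out the last fraction of patches for validation."""
    assert len(X) == len(Y) and 0 < validation_split < 1
    n_val = int(round(validation_split * len(X)))
    return (X[:-n_val], Y[:-n_val]), (X[-n_val:], Y[-n_val:])

# toy arrays shaped like the SZYXC patches loaded above
X = np.zeros((20, 32, 64, 64, 1), dtype=np.float32)
Y = np.zeros((20, 32, 64, 64, 1), dtype=np.float32)
(X_trn, Y_trn), (X_val_, Y_val_) = split_data(X, Y, validation_split=0.1)
print(len(X_trn), len(X_val_))  # 18 2
```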
In [4]:
plt.figure(figsize=(12,3))
plot_some(X_val[:5,...,0,0],Y_val[:5,...,0,0])
plt.suptitle('5 example validation patches (ZY slice, top row: source, bottom row: target)');

CARE model

Before we construct the actual CARE model, we have to define its configuration via a Config object, which includes

  • parameters of the underlying neural network,
  • the learning rate,
  • the number of parameter updates per epoch,
  • the loss function, and
  • whether the model is probabilistic or not.

The defaults should be sensible in many cases, so a change should only be necessary if the training process fails.


Important: Note that for this notebook we use a very small number of update steps per epoch for immediate feedback, whereas this number should be increased considerably (e.g. train_steps_per_epoch=400, train_batch_size=16) to obtain a well-trained model.
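For a serious training run, the same Config call with the larger values suggested above might look like this (a sketch that reuses the notebook's axes and channel variables; whether these values suffice depends on your data):

```python
# sketch of a configuration for a well-trained model
# (train_steps_per_epoch and train_batch_size as suggested above)
config = Config(axes, n_channel_in, n_channel_out,
                train_steps_per_epoch=400, train_batch_size=16)
```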

In [5]:
config = Config(axes, n_channel_in, n_channel_out, train_steps_per_epoch=25, train_batch_size=4)
print(config)
vars(config)
Config(axes='ZYXC', n_channel_in=1, n_channel_out=1, n_dim=3, probabilistic=False, train_batch_size=4, train_checkpoint='weights_best.h5', train_epochs=100, train_learning_rate=0.0004, train_loss='mae', train_reduce_lr={'factor': 0.5, 'patience': 10, 'min_delta': 0}, train_steps_per_epoch=25, train_tensorboard=True, unet_input_shape=(None, None, None, 1), unet_kern_size=3, unet_last_activation='linear', unet_n_depth=2, unet_n_first=32, unet_residual=True)
Out[5]:
{'axes': 'ZYXC',
 'n_channel_in': 1,
 'n_channel_out': 1,
 'n_dim': 3,
 'probabilistic': False,
 'train_batch_size': 4,
 'train_checkpoint': 'weights_best.h5',
 'train_epochs': 100,
 'train_learning_rate': 0.0004,
 'train_loss': 'mae',
 'train_reduce_lr': {'factor': 0.5, 'min_delta': 0, 'patience': 10},
 'train_steps_per_epoch': 25,
 'train_tensorboard': True,
 'unet_input_shape': (None, None, None, 1),
 'unet_kern_size': 3,
 'unet_last_activation': 'linear',
 'unet_n_depth': 2,
 'unet_n_first': 32,
 'unet_residual': True}

We now create an upsampling CARE model with the chosen configuration:

In [6]:
model = UpsamplingCARE(config, 'my_model', basedir='models')

Training

Training the model will likely take some time. We recommend monitoring the progress with TensorBoard (example below), which allows you to inspect the losses during training. Furthermore, you can look at the predictions for some of the validation images, which can be helpful to recognize problems early on.

You can start TensorBoard from the current working directory with the command tensorboard --logdir=. and then connect to http://localhost:6006/ with your browser.

In [7]:
history = model.train(X,Y, validation_data=(X_val,Y_val))
Epoch 1/100
25/25 [==============================] - 11s 424ms/step - loss: 0.1288 - mse: 0.0272 - mae: 0.1288 - val_loss: 0.1105 - val_mse: 0.0199 - val_mae: 0.1105
Epoch 2/100
25/25 [==============================] - 8s 306ms/step - loss: 0.0937 - mse: 0.0152 - mae: 0.0937 - val_loss: 0.0806 - val_mse: 0.0122 - val_mae: 0.0806
Epoch 3/100
25/25 [==============================] - 8s 339ms/step - loss: 0.0723 - mse: 0.0098 - mae: 0.0723 - val_loss: 0.0699 - val_mse: 0.0090 - val_mae: 0.0699
Epoch 4/100
25/25 [==============================] - 9s 367ms/step - loss: 0.0695 - mse: 0.0093 - mae: 0.0695 - val_loss: 0.0671 - val_mse: 0.0090 - val_mae: 0.0671
Epoch 5/100
25/25 [==============================] - 9s 348ms/step - loss: 0.0635 - mse: 0.0081 - mae: 0.0635 - val_loss: 0.0604 - val_mse: 0.0077 - val_mae: 0.0604
Epoch 6/100
25/25 [==============================] - 9s 346ms/step - loss: 0.0604 - mse: 0.0076 - mae: 0.0604 - val_loss: 0.0575 - val_mse: 0.0070 - val_mae: 0.0575
Epoch 7/100
25/25 [==============================] - 9s 345ms/step - loss: 0.0547 - mse: 0.0065 - mae: 0.0547 - val_loss: 0.0536 - val_mse: 0.0064 - val_mae: 0.0536
Epoch 8/100
25/25 [==============================] - 9s 345ms/step - loss: 0.0533 - mse: 0.0061 - mae: 0.0533 - val_loss: 0.0503 - val_mse: 0.0058 - val_mae: 0.0503
Epoch 9/100
25/25 [==============================] - 9s 344ms/step - loss: 0.0502 - mse: 0.0056 - mae: 0.0502 - val_loss: 0.0475 - val_mse: 0.0052 - val_mae: 0.0475
Epoch 10/100
25/25 [==============================] - 9s 344ms/step - loss: 0.0447 - mse: 0.0046 - mae: 0.0447 - val_loss: 0.0436 - val_mse: 0.0042 - val_mae: 0.0436
Epoch 11/100
25/25 [==============================] - 9s 344ms/step - loss: 0.0432 - mse: 0.0043 - mae: 0.0432 - val_loss: 0.0414 - val_mse: 0.0039 - val_mae: 0.0414
Epoch 12/100
25/25 [==============================] - 9s 345ms/step - loss: 0.0416 - mse: 0.0040 - mae: 0.0416 - val_loss: 0.0393 - val_mse: 0.0036 - val_mae: 0.0393
Epoch 13/100
25/25 [==============================] - 9s 344ms/step - loss: 0.0379 - mse: 0.0034 - mae: 0.0379 - val_loss: 0.0369 - val_mse: 0.0034 - val_mae: 0.0369
Epoch 14/100
25/25 [==============================] - 9s 344ms/step - loss: 0.0383 - mse: 0.0035 - mae: 0.0383 - val_loss: 0.0376 - val_mse: 0.0035 - val_mae: 0.0376
Epoch 15/100
25/25 [==============================] - 9s 344ms/step - loss: 0.0368 - mse: 0.0033 - mae: 0.0368 - val_loss: 0.0351 - val_mse: 0.0031 - val_mae: 0.0351
Epoch 16/100
25/25 [==============================] - 9s 344ms/step - loss: 0.0373 - mse: 0.0033 - mae: 0.0373 - val_loss: 0.0350 - val_mse: 0.0030 - val_mae: 0.0350
Epoch 17/100
25/25 [==============================] - 9s 345ms/step - loss: 0.0364 - mse: 0.0032 - mae: 0.0364 - val_loss: 0.0355 - val_mse: 0.0030 - val_mae: 0.0355
Epoch 18/100
25/25 [==============================] - 9s 341ms/step - loss: 0.0357 - mse: 0.0030 - mae: 0.0357 - val_loss: 0.0360 - val_mse: 0.0029 - val_mae: 0.0360
Epoch 19/100
25/25 [==============================] - 9s 344ms/step - loss: 0.0334 - mse: 0.0028 - mae: 0.0334 - val_loss: 0.0340 - val_mse: 0.0028 - val_mae: 0.0340
Epoch 20/100
25/25 [==============================] - 9s 344ms/step - loss: 0.0361 - mse: 0.0031 - mae: 0.0361 - val_loss: 0.0373 - val_mse: 0.0033 - val_mae: 0.0373
Epoch 21/100
25/25 [==============================] - 9s 344ms/step - loss: 0.0331 - mse: 0.0027 - mae: 0.0331 - val_loss: 0.0329 - val_mse: 0.0027 - val_mae: 0.0329
Epoch 22/100
25/25 [==============================] - 9s 343ms/step - loss: 0.0318 - mse: 0.0026 - mae: 0.0318 - val_loss: 0.0336 - val_mse: 0.0026 - val_mae: 0.0336
Epoch 23/100
25/25 [==============================] - 9s 345ms/step - loss: 0.0325 - mse: 0.0026 - mae: 0.0325 - val_loss: 0.0321 - val_mse: 0.0025 - val_mae: 0.0321
Epoch 24/100
25/25 [==============================] - 9s 345ms/step - loss: 0.0309 - mse: 0.0024 - mae: 0.0309 - val_loss: 0.0311 - val_mse: 0.0024 - val_mae: 0.0311
Epoch 25/100
25/25 [==============================] - 9s 344ms/step - loss: 0.0308 - mse: 0.0025 - mae: 0.0308 - val_loss: 0.0314 - val_mse: 0.0025 - val_mae: 0.0314
Epoch 26/100
25/25 [==============================] - 9s 344ms/step - loss: 0.0309 - mse: 0.0025 - mae: 0.0309 - val_loss: 0.0339 - val_mse: 0.0029 - val_mae: 0.0339
Epoch 27/100
25/25 [==============================] - 9s 344ms/step - loss: 0.0298 - mse: 0.0022 - mae: 0.0298 - val_loss: 0.0305 - val_mse: 0.0025 - val_mae: 0.0305
Epoch 28/100
25/25 [==============================] - 9s 344ms/step - loss: 0.0297 - mse: 0.0023 - mae: 0.0297 - val_loss: 0.0292 - val_mse: 0.0023 - val_mae: 0.0292
Epoch 29/100
25/25 [==============================] - 9s 344ms/step - loss: 0.0293 - mse: 0.0022 - mae: 0.0293 - val_loss: 0.0299 - val_mse: 0.0023 - val_mae: 0.0299
Epoch 30/100
25/25 [==============================] - 9s 344ms/step - loss: 0.0294 - mse: 0.0023 - mae: 0.0294 - val_loss: 0.0293 - val_mse: 0.0022 - val_mae: 0.0293
Epoch 31/100
25/25 [==============================] - 9s 345ms/step - loss: 0.0307 - mse: 0.0023 - mae: 0.0307 - val_loss: 0.0335 - val_mse: 0.0028 - val_mae: 0.0335
Epoch 32/100
25/25 [==============================] - 9s 343ms/step - loss: 0.0294 - mse: 0.0022 - mae: 0.0294 - val_loss: 0.0287 - val_mse: 0.0022 - val_mae: 0.0287
Epoch 33/100
25/25 [==============================] - 9s 346ms/step - loss: 0.0283 - mse: 0.0020 - mae: 0.0283 - val_loss: 0.0288 - val_mse: 0.0022 - val_mae: 0.0288
Epoch 34/100
25/25 [==============================] - 9s 345ms/step - loss: 0.0309 - mse: 0.0024 - mae: 0.0309 - val_loss: 0.0326 - val_mse: 0.0023 - val_mae: 0.0326
Epoch 35/100
25/25 [==============================] - 9s 344ms/step - loss: 0.0281 - mse: 0.0020 - mae: 0.0281 - val_loss: 0.0281 - val_mse: 0.0021 - val_mae: 0.0281
Epoch 36/100
25/25 [==============================] - 9s 345ms/step - loss: 0.0283 - mse: 0.0021 - mae: 0.0283 - val_loss: 0.0276 - val_mse: 0.0021 - val_mae: 0.0276
Epoch 37/100
25/25 [==============================] - 9s 345ms/step - loss: 0.0280 - mse: 0.0020 - mae: 0.0280 - val_loss: 0.0301 - val_mse: 0.0021 - val_mae: 0.0301
Epoch 38/100
25/25 [==============================] - 9s 344ms/step - loss: 0.0281 - mse: 0.0021 - mae: 0.0281 - val_loss: 0.0274 - val_mse: 0.0020 - val_mae: 0.0274
Epoch 39/100
25/25 [==============================] - 9s 345ms/step - loss: 0.0269 - mse: 0.0020 - mae: 0.0269 - val_loss: 0.0280 - val_mse: 0.0020 - val_mae: 0.0280
Epoch 40/100
25/25 [==============================] - 9s 345ms/step - loss: 0.0294 - mse: 0.0021 - mae: 0.0294 - val_loss: 0.0284 - val_mse: 0.0021 - val_mae: 0.0284
Epoch 41/100
25/25 [==============================] - 9s 344ms/step - loss: 0.0272 - mse: 0.0019 - mae: 0.0272 - val_loss: 0.0273 - val_mse: 0.0020 - val_mae: 0.0273
Epoch 42/100
25/25 [==============================] - 9s 344ms/step - loss: 0.0265 - mse: 0.0019 - mae: 0.0265 - val_loss: 0.0266 - val_mse: 0.0019 - val_mae: 0.0266
Epoch 43/100
25/25 [==============================] - 9s 344ms/step - loss: 0.0266 - mse: 0.0019 - mae: 0.0266 - val_loss: 0.0262 - val_mse: 0.0019 - val_mae: 0.0262
Epoch 44/100
25/25 [==============================] - 9s 343ms/step - loss: 0.0275 - mse: 0.0020 - mae: 0.0275 - val_loss: 0.0286 - val_mse: 0.0020 - val_mae: 0.0286
Epoch 45/100
25/25 [==============================] - 9s 344ms/step - loss: 0.0268 - mse: 0.0019 - mae: 0.0268 - val_loss: 0.0264 - val_mse: 0.0019 - val_mae: 0.0264
Epoch 46/100
25/25 [==============================] - 9s 344ms/step - loss: 0.0262 - mse: 0.0018 - mae: 0.0262 - val_loss: 0.0267 - val_mse: 0.0019 - val_mae: 0.0267
Epoch 47/100
25/25 [==============================] - 9s 344ms/step - loss: 0.0255 - mse: 0.0018 - mae: 0.0255 - val_loss: 0.0255 - val_mse: 0.0018 - val_mae: 0.0255
Epoch 48/100
25/25 [==============================] - 9s 343ms/step - loss: 0.0252 - mse: 0.0018 - mae: 0.0252 - val_loss: 0.0255 - val_mse: 0.0018 - val_mae: 0.0255
Epoch 49/100
25/25 [==============================] - 9s 344ms/step - loss: 0.0262 - mse: 0.0018 - mae: 0.0262 - val_loss: 0.0261 - val_mse: 0.0018 - val_mae: 0.0261
Epoch 50/100
25/25 [==============================] - 9s 340ms/step - loss: 0.0258 - mse: 0.0018 - mae: 0.0258 - val_loss: 0.0267 - val_mse: 0.0018 - val_mae: 0.0267
Epoch 51/100
25/25 [==============================] - 9s 344ms/step - loss: 0.0262 - mse: 0.0019 - mae: 0.0262 - val_loss: 0.0254 - val_mse: 0.0018 - val_mae: 0.0254
Epoch 52/100
25/25 [==============================] - 9s 343ms/step - loss: 0.0275 - mse: 0.0019 - mae: 0.0275 - val_loss: 0.0256 - val_mse: 0.0018 - val_mae: 0.0256
Epoch 53/100
25/25 [==============================] - 9s 344ms/step - loss: 0.0263 - mse: 0.0018 - mae: 0.0263 - val_loss: 0.0250 - val_mse: 0.0017 - val_mae: 0.0250
Epoch 54/100
25/25 [==============================] - 9s 344ms/step - loss: 0.0247 - mse: 0.0017 - mae: 0.0247 - val_loss: 0.0265 - val_mse: 0.0020 - val_mae: 0.0265
Epoch 55/100
25/25 [==============================] - 9s 344ms/step - loss: 0.0257 - mse: 0.0017 - mae: 0.0257 - val_loss: 0.0264 - val_mse: 0.0019 - val_mae: 0.0264
Epoch 56/100
25/25 [==============================] - 9s 344ms/step - loss: 0.0250 - mse: 0.0017 - mae: 0.0250 - val_loss: 0.0254 - val_mse: 0.0018 - val_mae: 0.0254
Epoch 57/100
25/25 [==============================] - 9s 344ms/step - loss: 0.0247 - mse: 0.0016 - mae: 0.0247 - val_loss: 0.0291 - val_mse: 0.0020 - val_mae: 0.0291
Epoch 58/100
25/25 [==============================] - 9s 346ms/step - loss: 0.0251 - mse: 0.0017 - mae: 0.0251 - val_loss: 0.0254 - val_mse: 0.0017 - val_mae: 0.0254
Epoch 59/100
25/25 [==============================] - 8s 340ms/step - loss: 0.0262 - mse: 0.0018 - mae: 0.0262 - val_loss: 0.0245 - val_mse: 0.0017 - val_mae: 0.0245
Epoch 60/100
25/25 [==============================] - 9s 344ms/step - loss: 0.0240 - mse: 0.0016 - mae: 0.0240 - val_loss: 0.0258 - val_mse: 0.0017 - val_mae: 0.0258
Epoch 61/100
25/25 [==============================] - 9s 343ms/step - loss: 0.0255 - mse: 0.0017 - mae: 0.0255 - val_loss: 0.0253 - val_mse: 0.0018 - val_mae: 0.0253
Epoch 62/100
25/25 [==============================] - 9s 345ms/step - loss: 0.0247 - mse: 0.0017 - mae: 0.0247 - val_loss: 0.0264 - val_mse: 0.0018 - val_mae: 0.0264
Epoch 63/100
25/25 [==============================] - 9s 344ms/step - loss: 0.0239 - mse: 0.0015 - mae: 0.0239 - val_loss: 0.0250 - val_mse: 0.0017 - val_mae: 0.0250
Epoch 64/100
25/25 [==============================] - 9s 345ms/step - loss: 0.0243 - mse: 0.0016 - mae: 0.0243 - val_loss: 0.0262 - val_mse: 0.0017 - val_mae: 0.0262
Epoch 65/100
25/25 [==============================] - 9s 344ms/step - loss: 0.0253 - mse: 0.0017 - mae: 0.0253 - val_loss: 0.0257 - val_mse: 0.0017 - val_mae: 0.0257
Epoch 66/100
25/25 [==============================] - 9s 344ms/step - loss: 0.0243 - mse: 0.0016 - mae: 0.0243 - val_loss: 0.0241 - val_mse: 0.0016 - val_mae: 0.0241
Epoch 67/100
25/25 [==============================] - 9s 344ms/step - loss: 0.0243 - mse: 0.0016 - mae: 0.0243 - val_loss: 0.0244 - val_mse: 0.0017 - val_mae: 0.0244
Epoch 68/100
25/25 [==============================] - 9s 344ms/step - loss: 0.0239 - mse: 0.0016 - mae: 0.0239 - val_loss: 0.0242 - val_mse: 0.0016 - val_mae: 0.0242
Epoch 69/100
25/25 [==============================] - 9s 344ms/step - loss: 0.0239 - mse: 0.0016 - mae: 0.0239 - val_loss: 0.0243 - val_mse: 0.0017 - val_mae: 0.0243
Epoch 70/100
25/25 [==============================] - 9s 344ms/step - loss: 0.0230 - mse: 0.0015 - mae: 0.0230 - val_loss: 0.0232 - val_mse: 0.0016 - val_mae: 0.0232
Epoch 71/100
25/25 [==============================] - 9s 344ms/step - loss: 0.0230 - mse: 0.0014 - mae: 0.0230 - val_loss: 0.0237 - val_mse: 0.0016 - val_mae: 0.0237
Epoch 72/100
25/25 [==============================] - 9s 343ms/step - loss: 0.0245 - mse: 0.0017 - mae: 0.0245 - val_loss: 0.0244 - val_mse: 0.0016 - val_mae: 0.0244
Epoch 73/100
25/25 [==============================] - 9s 344ms/step - loss: 0.0251 - mse: 0.0016 - mae: 0.0251 - val_loss: 0.0239 - val_mse: 0.0016 - val_mae: 0.0239
Epoch 74/100
25/25 [==============================] - 9s 345ms/step - loss: 0.0240 - mse: 0.0015 - mae: 0.0240 - val_loss: 0.0242 - val_mse: 0.0016 - val_mae: 0.0242
Epoch 75/100
25/25 [==============================] - 9s 345ms/step - loss: 0.0233 - mse: 0.0015 - mae: 0.0233 - val_loss: 0.0234 - val_mse: 0.0015 - val_mae: 0.0234
Epoch 76/100
25/25 [==============================] - 9s 344ms/step - loss: 0.0232 - mse: 0.0015 - mae: 0.0232 - val_loss: 0.0251 - val_mse: 0.0018 - val_mae: 0.0251
Epoch 77/100
25/25 [==============================] - 9s 344ms/step - loss: 0.0234 - mse: 0.0015 - mae: 0.0234 - val_loss: 0.0239 - val_mse: 0.0015 - val_mae: 0.0239
Epoch 78/100
25/25 [==============================] - 9s 344ms/step - loss: 0.0229 - mse: 0.0015 - mae: 0.0229 - val_loss: 0.0230 - val_mse: 0.0015 - val_mae: 0.0230
Epoch 79/100
25/25 [==============================] - 9s 343ms/step - loss: 0.0225 - mse: 0.0014 - mae: 0.0225 - val_loss: 0.0229 - val_mse: 0.0015 - val_mae: 0.0229
Epoch 80/100
25/25 [==============================] - 9s 340ms/step - loss: 0.0221 - mse: 0.0014 - mae: 0.0221 - val_loss: 0.0231 - val_mse: 0.0015 - val_mae: 0.0231
Epoch 81/100
25/25 [==============================] - 9s 344ms/step - loss: 0.0225 - mse: 0.0014 - mae: 0.0225 - val_loss: 0.0270 - val_mse: 0.0016 - val_mae: 0.0270
Epoch 82/100
25/25 [==============================] - 9s 343ms/step - loss: 0.0221 - mse: 0.0014 - mae: 0.0221 - val_loss: 0.0236 - val_mse: 0.0015 - val_mae: 0.0236
Epoch 83/100
25/25 [==============================] - 9s 345ms/step - loss: 0.0227 - mse: 0.0015 - mae: 0.0227 - val_loss: 0.0236 - val_mse: 0.0015 - val_mae: 0.0236
Epoch 84/100
25/25 [==============================] - 9s 343ms/step - loss: 0.0222 - mse: 0.0014 - mae: 0.0222 - val_loss: 0.0226 - val_mse: 0.0015 - val_mae: 0.0226
Epoch 85/100
25/25 [==============================] - 9s 343ms/step - loss: 0.0225 - mse: 0.0014 - mae: 0.0225 - val_loss: 0.0233 - val_mse: 0.0015 - val_mae: 0.0233
Epoch 86/100
25/25 [==============================] - 9s 344ms/step - loss: 0.0245 - mse: 0.0015 - mae: 0.0245 - val_loss: 0.0240 - val_mse: 0.0015 - val_mae: 0.0240
Epoch 87/100
25/25 [==============================] - 9s 343ms/step - loss: 0.0218 - mse: 0.0013 - mae: 0.0218 - val_loss: 0.0228 - val_mse: 0.0015 - val_mae: 0.0228
Epoch 88/100
25/25 [==============================] - 9s 343ms/step - loss: 0.0232 - mse: 0.0014 - mae: 0.0232 - val_loss: 0.0230 - val_mse: 0.0015 - val_mae: 0.0230
Epoch 89/100
25/25 [==============================] - 9s 344ms/step - loss: 0.0225 - mse: 0.0014 - mae: 0.0225 - val_loss: 0.0221 - val_mse: 0.0014 - val_mae: 0.0221
Epoch 90/100
25/25 [==============================] - 9s 343ms/step - loss: 0.0213 - mse: 0.0013 - mae: 0.0213 - val_loss: 0.0222 - val_mse: 0.0015 - val_mae: 0.0222
Epoch 91/100
25/25 [==============================] - 9s 343ms/step - loss: 0.0224 - mse: 0.0013 - mae: 0.0224 - val_loss: 0.0222 - val_mse: 0.0014 - val_mae: 0.0222
Epoch 92/100
25/25 [==============================] - 9s 344ms/step - loss: 0.0213 - mse: 0.0013 - mae: 0.0213 - val_loss: 0.0216 - val_mse: 0.0014 - val_mae: 0.0216
Epoch 93/100
25/25 [==============================] - 9s 344ms/step - loss: 0.0210 - mse: 0.0013 - mae: 0.0210 - val_loss: 0.0218 - val_mse: 0.0013 - val_mae: 0.0218
Epoch 94/100
25/25 [==============================] - 9s 343ms/step - loss: 0.0207 - mse: 0.0012 - mae: 0.0207 - val_loss: 0.0213 - val_mse: 0.0013 - val_mae: 0.0213
Epoch 95/100
25/25 [==============================] - 9s 343ms/step - loss: 0.0224 - mse: 0.0013 - mae: 0.0224 - val_loss: 0.0231 - val_mse: 0.0014 - val_mae: 0.0231
Epoch 96/100
25/25 [==============================] - 8s 339ms/step - loss: 0.0217 - mse: 0.0014 - mae: 0.0217 - val_loss: 0.0220 - val_mse: 0.0014 - val_mae: 0.0220
Epoch 97/100
25/25 [==============================] - 9s 343ms/step - loss: 0.0216 - mse: 0.0013 - mae: 0.0216 - val_loss: 0.0215 - val_mse: 0.0013 - val_mae: 0.0215
Epoch 98/100
25/25 [==============================] - 9s 344ms/step - loss: 0.0206 - mse: 0.0012 - mae: 0.0206 - val_loss: 0.0212 - val_mse: 0.0013 - val_mae: 0.0212
Epoch 99/100
25/25 [==============================] - 9s 344ms/step - loss: 0.0203 - mse: 0.0011 - mae: 0.0203 - val_loss: 0.0280 - val_mse: 0.0016 - val_mae: 0.0280
Epoch 100/100
25/25 [==============================] - 9s 343ms/step - loss: 0.0213 - mse: 0.0013 - mae: 0.0213 - val_loss: 0.0219 - val_mse: 0.0013 - val_mae: 0.0219

Loading network weights from 'weights_best.h5'.

Plot final training history (available in TensorBoard during training):

In [8]:
print(sorted(list(history.history.keys())))
plt.figure(figsize=(16,5))
plot_history(history,['loss','val_loss'],['mse','val_mse','mae','val_mae']);
['loss', 'lr', 'mae', 'mse', 'val_loss', 'val_mae', 'val_mse']
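Since history.history is a plain dict mapping metric names to per-epoch lists, it can also be queried directly, e.g. to find the epoch with the lowest validation loss. A sketch on a mock dict shaped like the real one (values taken from the first three epochs of the log above):

```python
# mock stand-in for history.history with the metric names printed above
history_dict = {
    'loss':     [0.1288, 0.0937, 0.0723],
    'val_loss': [0.1105, 0.0806, 0.0699],
}
# index of the smallest validation loss; epochs are reported 1-based
best_epoch = min(range(len(history_dict['val_loss'])),
                 key=lambda i: history_dict['val_loss'][i]) + 1
print(best_epoch, history_dict['val_loss'][best_epoch - 1])  # 3 0.0699
```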

Evaluation

Example results for validation images.

In [9]:
plt.figure(figsize=(12,4.5))
_P = model.keras_model.predict(X_val[:5])
if config.probabilistic:
    _P = _P[...,:(_P.shape[-1]//2)]
plot_some(X_val[:5,...,0,0],Y_val[:5,...,0,0],_P[...,0,0],pmax=99.5)
plt.suptitle('5 example validation patches (ZY slice)\n'      
             'top row: input (source),  '          
             'middle row: target (ground truth),  '
             'bottom row: predicted from source');
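The config.probabilistic branch above reflects that a probabilistic CARE network predicts two values per output channel, stacked along the channel axis, and that keeping the first half of the channels recovers the point estimate used for plotting. A small numpy illustration of that slicing (array shapes are illustrative only):

```python
import numpy as np

n_channel_out = 1
# a probabilistic network would emit 2 * n_channel_out channels
P = np.random.rand(5, 32, 64, 64, 2 * n_channel_out)
first_half = P[..., :P.shape[-1] // 2]   # same slicing as in the cell above
second_half = P[..., P.shape[-1] // 2:]
print(first_half.shape, second_half.shape)  # (5, 32, 64, 64, 1) (5, 32, 64, 64, 1)
```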

Export model to be used with CSBDeep Fiji plugins and KNIME workflows

See https://github.com/CSBDeep/CSBDeep_website/wiki/Your-Model-in-Fiji for details.

In [10]:
model.export_TF()
INFO:tensorflow:No assets to save.
INFO:tensorflow:No assets to write.
INFO:tensorflow:SavedModel written to: /tmp/tmpzk_ltu8w/model/saved_model.pb

Model exported in TensorFlow's SavedModel format:
/home/uschmidt/research/csbdeep/examples/examples/upsampling3D/models/my_model/TF_SavedModel.zip