TensorFlow Reproducibility
This notebook explains how to get fully reproducible code with TensorFlow.
Warning: this notebook accompanies the 1st edition of the book. Please visit https://github.com/ageron/handson-ml2 for the 2nd edition project, with up-to-date notebooks using the latest library versions. In particular, the 1st edition is based on TensorFlow 1, while the 2nd edition uses TensorFlow 2, which is much simpler to use.
Watch this video to understand the key ideas behind TensorFlow reproducibility:
from IPython.display import IFrame
IFrame(src="https://www.youtube.com/embed/Ys8ofBeR2kA", width=560, height=315, frameborder="0", allowfullscreen=True)
from __future__ import division, print_function, unicode_literals
try:
# %tensorflow_version only exists in Colab.
%tensorflow_version 1.x
except Exception:
pass
import numpy as np
import tensorflow as tf
from tensorflow import keras
Some operations (like tf.reduce_sum()) favor performance over precision, and their outputs may vary slightly across runs. To get reproducible results, make sure TensorFlow runs on the CPU:
import os
os.environ["CUDA_VISIBLE_DEVICES"]=""
Because floats have limited precision, the order of execution matters:
2. * 5. / 7.
1.4285714285714286
2. / 7. * 5.
1.4285714285714284
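The same effect explains why reductions such as tf.reduce_sum() can vary: summing many 32-bit floats in different orders may give slightly different totals. A small NumPy sketch (not from the original notebook) to illustrate:

values = np.random.rand(100000).astype(np.float32)
print(values.sum())        # accumulate in one order
print(values[::-1].sum())  # reversed order: may differ in the last digits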
You should make sure TensorFlow runs your ops on a single thread:
config = tf.ConfigProto(intra_op_parallelism_threads=1,
inter_op_parallelism_threads=1)
with tf.Session(config=config) as sess:
#... this will run single threaded
pass
The thread pools for all sessions are created when you create the first session, so all sessions in the rest of this notebook will be single-threaded:
with tf.Session() as sess:
#... also single-threaded!
pass
Python's hash() function

print(set("Try restarting the kernel and running this again"))
print(set("Try restarting the kernel and running this again"))
{'n', 'k', 'l', 'h', 'r', 'a', 'i', 't', 'd', 's', 'g', 'T', ' ', 'y', 'e', 'u'}
{'n', 'k', 'l', 'h', 'r', 'a', 'i', 't', 'd', 's', 'g', 'T', ' ', 'y', 'e', 'u'}
Since Python 3.3, the result will be different every time, unless you start Python with the PYTHONHASHSEED environment variable set to 0:
PYTHONHASHSEED=0 python
>>> print(set("Now the output is stable across runs"))
{'n', 'b', 'h', 'o', 'i', 'a', 'r', 't', 'p', 'N', 's', 'c', ' ', 'l', 'e', 'w', 'u'}
>>> exit()
PYTHONHASHSEED=0 python
>>> print(set("Now the output is stable across runs"))
{'n', 'b', 'h', 'o', 'i', 'a', 'r', 't', 'p', 'N', 's', 'c', ' ', 'l', 'e', 'w', 'u'}
Alternatively, you could set this environment variable system-wide, but that's probably not a good idea, because this automatic randomization was introduced for security reasons.
Unfortunately, setting the environment variable from within Python (e.g., using os.environ["PYTHONHASHSEED"]="0") will not work, because Python reads it upon startup. For Jupyter notebooks, you have to start the Jupyter server like this:
PYTHONHASHSEED=0 jupyter notebook
if os.environ.get("PYTHONHASHSEED") != "0":
raise Exception("You must set PYTHONHASHSEED=0 when starting the Jupyter server to get reproducible results.")
import random
random.seed(42)
print(random.random())
print(random.random())
print()
random.seed(42)
print(random.random())
print(random.random())
0.6394267984578837
0.025010755222666936

0.6394267984578837
0.025010755222666936
import numpy as np
np.random.seed(42)
print(np.random.rand())
print(np.random.rand())
print()
np.random.seed(42)
print(np.random.rand())
print(np.random.rand())
0.3745401188473625
0.9507143064099162

0.3745401188473625
0.9507143064099162
TensorFlow's behavior is more complex because of two things: first, you must set the seed before the random operations are created; and second, there are two seeds to deal with, the graph-level seed and the operation-level seed:
import tensorflow as tf
tf.set_random_seed(42)
rnd = tf.random_uniform(shape=[])
with tf.Session() as sess:
print(rnd.eval())
print(rnd.eval())
print()
with tf.Session() as sess:
print(rnd.eval())
print(rnd.eval())
0.63789964
0.8774011

0.63789964
0.8774011
Every time you reset the graph, you need to set the seed again:
tf.reset_default_graph()
tf.set_random_seed(42)
rnd = tf.random_uniform(shape=[])
with tf.Session() as sess:
print(rnd.eval())
print(rnd.eval())
print()
with tf.Session() as sess:
print(rnd.eval())
print(rnd.eval())
0.63789964
0.8774011

0.63789964
0.8774011
If you create your own graph, it will ignore the default graph's seed:
tf.reset_default_graph()
tf.set_random_seed(42)
graph = tf.Graph()
with graph.as_default():
rnd = tf.random_uniform(shape=[])
with tf.Session(graph=graph):
print(rnd.eval())
print(rnd.eval())
print()
with tf.Session(graph=graph):
print(rnd.eval())
print(rnd.eval())
0.5718187
0.6233171

0.32140207
0.46593904
Instead, you must set the new graph's own seed:
graph = tf.Graph()
with graph.as_default():
tf.set_random_seed(42)
rnd = tf.random_uniform(shape=[])
with tf.Session(graph=graph):
print(rnd.eval())
print(rnd.eval())
print()
with tf.Session(graph=graph):
print(rnd.eval())
print(rnd.eval())
0.63789964
0.8774011

0.63789964
0.8774011
If you set the seed after the random operation is created, the seed has no effect:
tf.reset_default_graph()
rnd = tf.random_uniform(shape=[])
tf.set_random_seed(42) # BAD, NO EFFECT!
with tf.Session() as sess:
print(rnd.eval())
print(rnd.eval())
print()
tf.set_random_seed(42) # BAD, NO EFFECT!
with tf.Session() as sess:
print(rnd.eval())
print(rnd.eval())
0.087068915
0.6322479

0.17158246
0.2868148
You can also set a seed for each individual random operation. When you do, it is combined with the graph seed into the final seed used by that op. The following table summarizes how this works:
Graph seed | Op seed | Resulting seed |
---|---|---|
None | None | Random |
graph_seed | None | f(graph_seed, op_index) |
None | op_seed | f(default_graph_seed, op_seed) |
graph_seed | op_seed | f(graph_seed, op_seed) |
* f() is a deterministic function.
* op_index = graph._last_id, so when there is a graph seed, different random ops without op seeds will have different outputs. However, each of them will have the same sequence of outputs at every run.

In eager mode, there is a global seed instead of a graph seed (since there is no graph in eager mode).
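To see the f(graph_seed, op_index) row in action, here is a small check (not in the original notebook): with only a graph seed set, two random ops without op seeds produce different values (their op_index differs), yet each op's sequence is the same at every run:

tf.reset_default_graph()
tf.set_random_seed(42)  # graph seed only, no op seeds
rnd1 = tf.random_uniform(shape=[])
rnd2 = tf.random_uniform(shape=[])
with tf.Session() as sess:
    print(rnd1.eval(), rnd2.eval())  # two different values
with tf.Session() as sess:
    print(rnd1.eval(), rnd2.eval())  # the same two values as in the first session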
tf.reset_default_graph()
rnd1 = tf.random_uniform(shape=[], seed=42)
rnd2 = tf.random_uniform(shape=[], seed=42)
rnd3 = tf.random_uniform(shape=[])
with tf.Session() as sess:
print(rnd1.eval())
print(rnd2.eval())
print(rnd3.eval())
print(rnd1.eval())
print(rnd2.eval())
print(rnd3.eval())
print()
with tf.Session() as sess:
print(rnd1.eval())
print(rnd2.eval())
print(rnd3.eval())
print(rnd1.eval())
print(rnd2.eval())
print(rnd3.eval())
0.95227146
0.95227146
0.55099714
0.8960779
0.8960779
0.54318357

0.95227146
0.95227146
0.6398845
0.8960779
0.8960779
0.24617589
In the following example, you may think that all random ops will have the same random seed, but rnd3 will actually have a different seed:
tf.reset_default_graph()
tf.set_random_seed(42)
rnd1 = tf.random_uniform(shape=[], seed=42)
rnd2 = tf.random_uniform(shape=[], seed=42)
rnd3 = tf.random_uniform(shape=[])
with tf.Session() as sess:
print(rnd1.eval())
print(rnd2.eval())
print(rnd3.eval())
print(rnd1.eval())
print(rnd2.eval())
print(rnd3.eval())
print()
with tf.Session() as sess:
print(rnd1.eval())
print(rnd2.eval())
print(rnd3.eval())
print(rnd1.eval())
print(rnd2.eval())
print(rnd3.eval())
0.4163028
0.4163028
0.96100175
0.033224702
0.033224702
0.17637014

0.4163028
0.4163028
0.96100175
0.033224702
0.033224702
0.17637014
Tip: in a Jupyter notebook, you probably want to set the random seeds regularly so that you can come back and run the notebook from there (instead of from the beginning) and still get reproducible outputs.
random.seed(42)
np.random.seed(42)
tf.set_random_seed(42)
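You could bundle these three calls into a small helper function (a hypothetical convenience, not from the book) and call it at the top of each section:

def reset_seeds(seed=42):
    """Reset Python's, NumPy's and TensorFlow's seeds for reproducible cells."""
    random.seed(seed)
    np.random.seed(seed)
    tf.set_random_seed(seed)

reset_seeds(42)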
If you use the Estimators API, make sure to create a RunConfig and set its tf_random_seed, then pass it to the constructor of your estimator:
my_config = tf.estimator.RunConfig(tf_random_seed=42)
feature_cols = [tf.feature_column.numeric_column("X", shape=[28 * 28])]
dnn_clf = tf.estimator.DNNClassifier(hidden_units=[300, 100], n_classes=10,
feature_columns=feature_cols,
config=my_config)
WARNING:tensorflow:Using temporary folder as model directory: /var/folders/wy/h39t6kb11pnbb0pzhksd_fqh0000gn/T/tmp2xxrubio INFO:tensorflow:Using config: {'_model_dir': '/var/folders/wy/h39t6kb11pnbb0pzhksd_fqh0000gn/T/tmp2xxrubio', '_tf_random_seed': 42, '_save_summary_steps': 100, '_save_checkpoints_steps': None, '_save_checkpoints_secs': 600, '_session_config': None, '_keep_checkpoint_max': 5, '_keep_checkpoint_every_n_hours': 10000, '_log_step_count_steps': 100, '_train_distribute': None, '_device_fn': None, '_service': None, '_cluster_spec': <tensorflow.python.training.server_lib.ClusterSpec object at 0x11dba7da0>, '_task_type': 'worker', '_task_id': 0, '_global_id_in_cluster': 0, '_master': '', '_evaluation_master': '', '_is_chief': True, '_num_ps_replicas': 0, '_num_worker_replicas': 1}
Let's try it on MNIST:
(X_train, y_train), (X_test, y_test) = tf.keras.datasets.mnist.load_data()
X_train = X_train.astype(np.float32).reshape(-1, 28*28) / 255.0
y_train = y_train.astype(np.int32)
Unfortunately, the numpy_input_fn() function does not allow us to set the seed when shuffle=True, so we must shuffle the data ourselves and set shuffle=False.
indices = np.random.permutation(len(X_train))
X_train_shuffled = X_train[indices]
y_train_shuffled = y_train[indices]
input_fn = tf.estimator.inputs.numpy_input_fn(
x={"X": X_train_shuffled}, y=y_train_shuffled, num_epochs=10, batch_size=32, shuffle=False)
dnn_clf.train(input_fn=input_fn)
INFO:tensorflow:Calling model_fn. INFO:tensorflow:Done calling model_fn. INFO:tensorflow:Create CheckpointSaverHook. INFO:tensorflow:Graph was finalized. INFO:tensorflow:Running local_init_op. INFO:tensorflow:Done running local_init_op. INFO:tensorflow:Saving checkpoints for 0 into /var/folders/wy/h39t6kb11pnbb0pzhksd_fqh0000gn/T/tmp2xxrubio/model.ckpt. INFO:tensorflow:loss = 73.945915, step = 1 INFO:tensorflow:global_step/sec: 348.999 INFO:tensorflow:loss = 21.020527, step = 101 (0.287 sec) INFO:tensorflow:global_step/sec: 431.365 INFO:tensorflow:loss = 8.926933, step = 201 (0.232 sec) INFO:tensorflow:global_step/sec: 438.11 INFO:tensorflow:loss = 2.3184745, step = 301 (0.228 sec) INFO:tensorflow:global_step/sec: 437.696 INFO:tensorflow:loss = 10.654381, step = 401 (0.228 sec) INFO:tensorflow:global_step/sec: 452.808 INFO:tensorflow:loss = 4.2829914, step = 501 (0.221 sec) INFO:tensorflow:global_step/sec: 450.062 INFO:tensorflow:loss = 2.497019, step = 601 (0.222 sec) INFO:tensorflow:global_step/sec: 451.86 INFO:tensorflow:loss = 3.9215999, step = 701 (0.221 sec) INFO:tensorflow:global_step/sec: 442.86 INFO:tensorflow:loss = 3.8031044, step = 801 (0.226 sec) INFO:tensorflow:global_step/sec: 444.581 INFO:tensorflow:loss = 3.9209557, step = 901 (0.225 sec) INFO:tensorflow:global_step/sec: 439.603 INFO:tensorflow:loss = 5.506338, step = 1001 (0.227 sec) INFO:tensorflow:global_step/sec: 444.545 INFO:tensorflow:loss = 2.6690354, step = 1101 (0.225 sec) INFO:tensorflow:global_step/sec: 445.176 INFO:tensorflow:loss = 6.559507, step = 1201 (0.225 sec) INFO:tensorflow:global_step/sec: 443.365 INFO:tensorflow:loss = 5.707597, step = 1301 (0.225 sec) INFO:tensorflow:global_step/sec: 447.822 <<314 more lines>> INFO:tensorflow:loss = 0.48648793, step = 17101 (0.227 sec) INFO:tensorflow:global_step/sec: 454.872 INFO:tensorflow:loss = 0.49331194, step = 17201 (0.220 sec) INFO:tensorflow:global_step/sec: 443.025 INFO:tensorflow:loss = 0.32060045, step = 17301 (0.226 sec) INFO:tensorflow:global_step/sec: 440.069 INFO:tensorflow:loss = 0.13167329, step = 17401 (0.227 sec) INFO:tensorflow:global_step/sec: 448.211 INFO:tensorflow:loss = 0.05688939, step = 17501 (0.223 sec) INFO:tensorflow:global_step/sec: 450.458 INFO:tensorflow:loss = 0.36213198, step = 17601 (0.222 sec) INFO:tensorflow:global_step/sec: 428.842 INFO:tensorflow:loss = 0.36243188, step = 17701 (0.233 sec) INFO:tensorflow:global_step/sec: 456.734 INFO:tensorflow:loss = 0.20977254, step = 17801 (0.219 sec) INFO:tensorflow:global_step/sec: 432.647 INFO:tensorflow:loss = 0.09754325, step = 17901 (0.231 sec) INFO:tensorflow:global_step/sec: 389.941 INFO:tensorflow:loss = 0.03494991, step = 18001 (0.256 sec) INFO:tensorflow:global_step/sec: 434.925 INFO:tensorflow:loss = 0.17031653, step = 18101 (0.230 sec) INFO:tensorflow:global_step/sec: 445.735 INFO:tensorflow:loss = 0.3200203, step = 18201 (0.224 sec) INFO:tensorflow:global_step/sec: 444.929 INFO:tensorflow:loss = 0.18385477, step = 18301 (0.225 sec) INFO:tensorflow:global_step/sec: 445.546 INFO:tensorflow:loss = 0.20921718, step = 18401 (0.225 sec) INFO:tensorflow:global_step/sec: 450.454 INFO:tensorflow:loss = 0.01868303, step = 18501 (0.222 sec) INFO:tensorflow:global_step/sec: 445.762 INFO:tensorflow:loss = 0.051421717, step = 18601 (0.224 sec) INFO:tensorflow:global_step/sec: 445.921 INFO:tensorflow:loss = 0.047041617, step = 18701 (0.224 sec) INFO:tensorflow:Saving checkpoints for 18750 into /var/folders/wy/h39t6kb11pnbb0pzhksd_fqh0000gn/T/tmp2xxrubio/model.ckpt. 
INFO:tensorflow:Loss for final step: 0.46282205.
<tensorflow.python.estimator.canned.dnn.DNNClassifier at 0x11711b748>
The final loss should be exactly 0.46282205.
Instead of using the numpy_input_fn() function (which cannot reproducibly shuffle the dataset at each epoch), you can create your own input function using the Data API and set its shuffling seed:
def create_dataset(X, y=None, n_epochs=1, batch_size=32,
buffer_size=1000, seed=None):
dataset = tf.data.Dataset.from_tensor_slices(({"X": X}, y))
dataset = dataset.repeat(n_epochs)
dataset = dataset.shuffle(buffer_size, seed=seed)
return dataset.batch(batch_size)
input_fn=lambda: create_dataset(X_train, y_train, seed=42)
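As a quick sanity check (not in the original notebook), you can verify that the seeded dataset shuffles identically across sessions; a minimal sketch:

with tf.Graph().as_default():
    dataset = create_dataset(X_train, y_train, n_epochs=1, seed=42)
    first_batch = dataset.make_one_shot_iterator().get_next()
    with tf.Session() as sess:
        _, labels1 = sess.run(first_batch)
    with tf.Session() as sess:
        _, labels2 = sess.run(first_batch)
    print(np.array_equal(labels1, labels2))  # True: same shuffling order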
random.seed(42)
np.random.seed(42)
tf.set_random_seed(42)
my_config = tf.estimator.RunConfig(tf_random_seed=42)
feature_cols = [tf.feature_column.numeric_column("X", shape=[28 * 28])]
dnn_clf = tf.estimator.DNNClassifier(hidden_units=[300, 100], n_classes=10,
feature_columns=feature_cols,
config=my_config)
dnn_clf.train(input_fn=input_fn)
WARNING:tensorflow:Using temporary folder as model directory: /var/folders/wy/h39t6kb11pnbb0pzhksd_fqh0000gn/T/tmpawwl1lf0 INFO:tensorflow:Using config: {'_model_dir': '/var/folders/wy/h39t6kb11pnbb0pzhksd_fqh0000gn/T/tmpawwl1lf0', '_tf_random_seed': 42, '_save_summary_steps': 100, '_save_checkpoints_steps': None, '_save_checkpoints_secs': 600, '_session_config': None, '_keep_checkpoint_max': 5, '_keep_checkpoint_every_n_hours': 10000, '_log_step_count_steps': 100, '_train_distribute': None, '_device_fn': None, '_service': None, '_cluster_spec': <tensorflow.python.training.server_lib.ClusterSpec object at 0x137e1c6d8>, '_task_type': 'worker', '_task_id': 0, '_global_id_in_cluster': 0, '_master': '', '_evaluation_master': '', '_is_chief': True, '_num_ps_replicas': 0, '_num_worker_replicas': 1} INFO:tensorflow:Calling model_fn. INFO:tensorflow:Done calling model_fn. INFO:tensorflow:Create CheckpointSaverHook. INFO:tensorflow:Graph was finalized. INFO:tensorflow:Running local_init_op. INFO:tensorflow:Done running local_init_op. INFO:tensorflow:Saving checkpoints for 0 into /var/folders/wy/h39t6kb11pnbb0pzhksd_fqh0000gn/T/tmpawwl1lf0/model.ckpt. INFO:tensorflow:loss = 80.279686, step = 1 INFO:tensorflow:global_step/sec: 161.253 INFO:tensorflow:loss = 16.09288, step = 101 (0.621 sec) INFO:tensorflow:global_step/sec: 433.582 INFO:tensorflow:loss = 5.605775, step = 201 (0.231 sec) INFO:tensorflow:global_step/sec: 447.561 INFO:tensorflow:loss = 12.584702, step = 301 (0.224 sec) INFO:tensorflow:global_step/sec: 442.148 INFO:tensorflow:loss = 2.089463, step = 401 (0.226 sec) INFO:tensorflow:global_step/sec: 434.492 INFO:tensorflow:loss = 9.2258215, step = 501 (0.230 sec) INFO:tensorflow:global_step/sec: 447.994 INFO:tensorflow:loss = 8.11821, step = 601 (0.223 sec) INFO:tensorflow:global_step/sec: 442.723 INFO:tensorflow:loss = 0.653025, step = 701 (0.226 sec) INFO:tensorflow:global_step/sec: 425.438 INFO:tensorflow:loss = 4.331424, step = 801 (0.235 sec) INFO:tensorflow:global_step/sec: 444.471 INFO:tensorflow:loss = 1.55325, step = 901 (0.225 sec) INFO:tensorflow:global_step/sec: 436.037 INFO:tensorflow:loss = 5.208349, step = 1001 (0.229 sec) INFO:tensorflow:global_step/sec: 433.071 INFO:tensorflow:loss = 0.80289483, step = 1101 (0.231 sec) INFO:tensorflow:global_step/sec: 436.717 INFO:tensorflow:loss = 3.1879468, step = 1201 (0.229 sec) INFO:tensorflow:global_step/sec: 452.687 INFO:tensorflow:loss = 5.55963, step = 1301 (0.221 sec) INFO:tensorflow:global_step/sec: 446.2 INFO:tensorflow:loss = 12.830038, step = 1401 (0.224 sec) INFO:tensorflow:global_step/sec: 450.525 INFO:tensorflow:loss = 6.8311796, step = 1501 (0.222 sec) INFO:tensorflow:global_step/sec: 452.967 INFO:tensorflow:loss = 1.635078, step = 1601 (0.221 sec) INFO:tensorflow:global_step/sec: 453.743 INFO:tensorflow:loss = 1.9616288, step = 1701 (0.220 sec) INFO:tensorflow:global_step/sec: 450.01 INFO:tensorflow:loss = 1.4227519, step = 1801 (0.222 sec) INFO:tensorflow:Saving checkpoints for 1875 into /var/folders/wy/h39t6kb11pnbb0pzhksd_fqh0000gn/T/tmpawwl1lf0/model.ckpt. INFO:tensorflow:Loss for final step: 1.0556093.
<tensorflow.python.estimator.canned.dnn.DNNClassifier at 0x11dba7a20>
The final loss should be exactly 1.0556093.
If you use the Keras API, all you need to do is set the random seed any time you clear the session:
keras.backend.clear_session()
random.seed(42)
np.random.seed(42)
tf.set_random_seed(42)
model = keras.models.Sequential([
keras.layers.Dense(300, activation="relu"),
keras.layers.Dense(100, activation="relu"),
keras.layers.Dense(10, activation="softmax"),
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="sgd",
metrics=["accuracy"])
model.fit(X_train, y_train, epochs=10)
Epoch 1/10 60000/60000 [==============================] - 5s 78us/step - loss: 0.5929 - acc: 0.8450 Epoch 2/10 60000/60000 [==============================] - 4s 75us/step - loss: 0.2804 - acc: 0.9199 Epoch 3/10 60000/60000 [==============================] - 4s 74us/step - loss: 0.2276 - acc: 0.9350 Epoch 4/10 60000/60000 [==============================] - 4s 74us/step - loss: 0.1933 - acc: 0.9449 Epoch 5/10 60000/60000 [==============================] - 4s 74us/step - loss: 0.1682 - acc: 0.9518 Epoch 6/10 60000/60000 [==============================] - 4s 74us/step - loss: 0.1490 - acc: 0.9573 Epoch 7/10 60000/60000 [==============================] - 4s 74us/step - loss: 0.1332 - acc: 0.9622 Epoch 8/10 60000/60000 [==============================] - 5s 75us/step - loss: 0.1202 - acc: 0.9658 Epoch 9/10 60000/60000 [==============================] - 4s 75us/step - loss: 0.1090 - acc: 0.9693 Epoch 10/10 60000/60000 [==============================] - 4s 75us/step - loss: 0.1000 - acc: 0.9716
<tensorflow.python.keras.callbacks.History at 0x1379fff98>
You should get exactly 97.16% accuracy on the training set at the end of training.
Random seeds are not the only thing to control for. For example, os.listdir() returns file names in an order that depends on how the files were indexed by the file system:
for i in range(10):
with open("my_test_foo_{}".format(i), "w"):
pass
[f for f in os.listdir() if f.startswith("my_test_foo_")]
['my_test_foo_1', 'my_test_foo_6', 'my_test_foo_8', 'my_test_foo_9', 'my_test_foo_7', 'my_test_foo_0', 'my_test_foo_5', 'my_test_foo_2', 'my_test_foo_3', 'my_test_foo_4']
for i in range(10):
with open("my_test_bar_{}".format(i), "w"):
pass
[f for f in os.listdir() if f.startswith("my_test_bar_")]
['my_test_bar_4', 'my_test_bar_3', 'my_test_bar_2', 'my_test_bar_5', 'my_test_bar_0', 'my_test_bar_7', 'my_test_bar_9', 'my_test_bar_8', 'my_test_bar_6', 'my_test_bar_1']
You should sort the file names before you use them:
filenames = os.listdir()
filenames.sort()
[f for f in filenames if f.startswith("my_test_foo_")]
['my_test_foo_0', 'my_test_foo_1', 'my_test_foo_2', 'my_test_foo_3', 'my_test_foo_4', 'my_test_foo_5', 'my_test_foo_6', 'my_test_foo_7', 'my_test_foo_8', 'my_test_foo_9']
for f in os.listdir():
if f.startswith("my_test_foo_") or f.startswith("my_test_bar_"):
os.remove(f)
I hope you enjoyed this notebook. If you do not get reproducible results, or if they are different from mine, then please file an issue on GitHub, specifying what version of Python, TensorFlow, and NumPy you are using, as well as your OS version. Thank you!
If you want to learn more about Deep Learning and TensorFlow, check out my book Hands-On Machine Learning with Scikit-Learn and TensorFlow (O'Reilly). You can also follow me on Twitter @aureliengeron or watch my videos on YouTube at youtube.com/c/AurelienGeron.