You no longer need to worry about...
The API for Variables will then change in the following ways for TF 2.0, as described in the Variables 2.0 RFC (a short sketch follows the setup below):
https://github.com/tensorflow/community/blob/master/rfcs/20180817-variables-20.md
from __future__ import absolute_import, division, print_function
import numpy as np
import tensorflow as tf
tf.enable_eager_execution()
print(tf.__version__)
1.12.0
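Before moving on, here is a minimal sketch of the Variables point above, using this eager setup: a tf.Variable is initialized on creation, with no initializer op and no session involved (the values are arbitrary).

v = tf.Variable(1.0)   # initialized immediately under eager execution
v.assign_add(2.0)      # updates the value in place
print(v.numpy())

3.0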
x = tf.placeholder(dtype = tf.float32, shape = [1, 1])
m = tf.matmul(x, x)
print(m)
with tf.Session() as sess:
    m_out = sess.run(m, feed_dict = {x : [[2.]]})
    print(m_out, m_out.shape)
Tensor("mul:0", shape=(1, 1), dtype=float32)
[[4.]] (1, 1)
When using tf.enable_eager_execution(), the boilerplate changes as below:
x = [[2.]]
m = tf.matmul(x, x)
print(m) # No session!
print(tf.get_default_graph().get_operations()) # No graphs!
tf.Tensor([[4.]], shape=(1, 1), dtype=float32)
[]
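To confirm the mode at runtime, tf.executing_eagerly() (available in TensorFlow 1.12) reports whether eager execution is enabled:

print(tf.executing_eagerly())

True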
Each iteration adds nodes to the graph
x = tf.constant(value = [[1,2],[3,4]], dtype = tf.int32)
with tf.Session() as sess:
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            print(sess.run(x[i, j]))
1
2
3
4
When using tf.enable_eager_execution(), no graph is built:
x = tf.constant(value = [[1,2],[3,4]], dtype = tf.int32)
for i in range(x.shape[0]):
    for j in range(x.shape[1]):
        print(x[i, j])
print(tf.get_default_graph().get_operations())
tf.Tensor(1, shape=(), dtype=int32)
tf.Tensor(2, shape=(), dtype=int32)
tf.Tensor(3, shape=(), dtype=int32)
tf.Tensor(4, shape=(), dtype=int32)
[]
The most obvious differences between NumPy arrays and TensorFlow Tensors are:
# Tensors are backed by NumPy arrays
# Tensors are compatible with NumPy functions
x = tf.constant(value = [[1., 2., 3.]])
assert type(x.numpy()) == np.ndarray
squared = np.square(x)
print(squared)
# Tensors are iterable!
for i in x[0]:
    print(i)
[[1. 4. 9.]]
tf.Tensor(1.0, shape=(), dtype=float32)
tf.Tensor(2.0, shape=(), dtype=float32)
tf.Tensor(3.0, shape=(), dtype=float32)
print(tf.add(1, 2))
print(tf.add([1, 2], [3, 4]))
print(tf.square(5))
print(tf.reduce_sum([1, 2, 3]))
print(tf.encode_base64("hello world"))
# Operator overloading is also supported
print(tf.square(2) + tf.square(3))
tf.Tensor(3, shape=(), dtype=int32)
tf.Tensor([4 6], shape=(2,), dtype=int32)
tf.Tensor(25, shape=(), dtype=int32)
tf.Tensor(6, shape=(), dtype=int32)
tf.Tensor(b'aGVsbG8gd29ybGQ', shape=(), dtype=string)
tf.Tensor(13, shape=(), dtype=int32)
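Beyond the single tf.square(2) + tf.square(3) line above, the usual Python operators are overloaded on Tensors; a quick sketch with arbitrary values:

a = tf.constant([[1, 2], [3, 4]])
b = tf.constant([[5, 6], [7, 8]])
print(a + b)  # element-wise tf.add
print(a * b)  # element-wise tf.multiply, not matrix multiplication
print(a @ b)  # matrix multiplication, equivalent to tf.matmul (Python 3.5+)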
# Each Tensor has a shape and a datatype
x = tf.matmul([[1]], [[2, 3]])
print(x.shape)
print(x.dtype)
(1, 2) <dtype: 'int32'>
*Tensors can be explicitly converted to NumPy ndarrays by invoking the .numpy() method on them.* These conversions are typically cheap as the array and Tensor share the underlying memory representation if possible. *However, sharing the underlying representation isn't always possible since the Tensor may be hosted in GPU memory while NumPy arrays are always backed by host memory, and the conversion will thus involve a copy from GPU to host memory.*
ndarray = np.ones([3,3], dtype = np.float32)
print("TensorFlow operations convert numpy arrays to Tensors automatically")
tensor = tf.multiply(ndarray, 42)
print(tensor)
print("And NumPy operations convert Tensors to numpy arrays automatically")
print(np.add(tensor, 1))
print("The .numpy() method explicitly converts a Tensor to a numpy array")
print(tensor.numpy())
TensorFlow operations convert numpy arrays to Tensors automatically
tf.Tensor(
[[42. 42. 42.]
 [42. 42. 42.]
 [42. 42. 42.]], shape=(3, 3), dtype=float32)
And NumPy operations convert Tensors to numpy arrays automatically
[[43. 43. 43.]
 [43. 43. 43.]
 [43. 43. 43.]]
The .numpy() method explicitly converts a Tensor to a numpy array
[[42. 42. 42.]
 [42. 42. 42.]
 [42. 42. 42.]]
x = tf.random_uniform(shape = [3, 3])
print("Is there a GPU available: "),
print(tf.test.is_gpu_available())
print("Is the Tensor on GPU #0: "),
print(x.device.endswith('GPU:0'))
Is there a GPU available:
False
Is the Tensor on GPU #0:
False
x.device
'/job:localhost/replica:0/task:0/device:CPU:0'
with tf.device("CPU:0"):
y = tf.ones([1,1])
print(y.device)
/job:localhost/replica:0/task:0/device:CPU:0
print(x.device)
z = x.cpu()
print(z.device)
/job:localhost/replica:0/task:0/device:CPU:0
/job:localhost/replica:0/task:0/device:CPU:0
print(x, '\n',z)
tf.Tensor(
[[0.5488272  0.9693705  0.47811544]
 [0.13793623 0.53724563 0.9553573 ]
 [0.9873563  0.27607608 0.21941674]], shape=(3, 3), dtype=float32)
tf.Tensor(
[[0.5488272  0.9693705  0.47811544]
 [0.13793623 0.53724563 0.9553573 ]
 [0.9873563  0.27607608 0.21941674]], shape=(3, 3), dtype=float32)
tf.equal(x, z)
<tf.Tensor: id=98, shape=(3, 3), dtype=bool, numpy= array([[ True, True, True], [ True, True, True], [ True, True, True]])>
We recommend using the Datasets API for building performant, complex input pipelines from simple, re-usable pieces that will feed your model's training or evaluation loops.
If you're familiar with TensorFlow graphs, the API for constructing the Dataset object remains exactly the same when eager execution is enabled, but the process of iterating over elements of the dataset is slightly simpler. *You can use Python iteration over the tf.data.Dataset object and do not need to explicitly create a tf.data.Iterator object.* As a result, the discussion on iterators in the TensorFlow Guide is not relevant when eager execution is enabled.
tensors = tf.data.Dataset.from_tensor_slices([1, 2, 3, 4, 5, 6])
tensors = tensors.map(np.square) # Numpy Compatibility magic!
tensors = tensors.shuffle(2).batch(2)
for mb_tensor in tensors:
    print(mb_tensor)
tf.Tensor([4 1], shape=(2,), dtype=int32)
tf.Tensor([16 9], shape=(2,), dtype=int32)
tf.Tensor([25 36], shape=(2,), dtype=int32)
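For contrast, here is a sketch of the same pipeline in graph mode (a fresh process without tf.enable_eager_execution()), where an explicit iterator is required; tf.square stands in for np.square since map traces TensorFlow ops when building a graph:

dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3, 4, 5, 6])
dataset = dataset.map(tf.square).shuffle(2).batch(2)
next_batch = dataset.make_one_shot_iterator().get_next()
with tf.Session() as sess:
    try:
        while True:
            print(sess.run(next_batch))
    except tf.errors.OutOfRangeError:
        pass  # the iterator signals exhaustion with OutOfRangeError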