Chapter 10 – Introduction to Artificial Neural Networks with Keras

This notebook contains all the sample code and solutions to the exercises in Chapter 10.

Setup

First, let's import a few modules, set up Matplotlib to plot figures inline, and prepare a function to save the figures. We also check that Python 3.5 or later is installed (although Python 2.x may still work, it is deprecated, so we strongly recommend using Python 3), as well as Scikit-Learn ≥ 0.20 and TensorFlow ≥ 2.0.

In [1]:
# Python ≥3.5 is required
import sys
assert sys.version_info >= (3, 5)

# Scikit-Learn ≥0.20 is required
import sklearn
assert sklearn.__version__ >= "0.20"

# TensorFlow ≥2.0 is required
import tensorflow as tf
assert tf.__version__ >= "2.0"

# Common imports
import numpy as np
import os

# To make this notebook's output stable across runs
np.random.seed(42)

# To plot pretty figures
%matplotlib inline
import matplotlib as mpl
import matplotlib.pyplot as plt
mpl.rc('axes', labelsize=14)
mpl.rc('xtick', labelsize=12)
mpl.rc('ytick', labelsize=12)

# Where to save the figures
PROJECT_ROOT_DIR = "."
CHAPTER_ID = "ann"
IMAGES_PATH = os.path.join(PROJECT_ROOT_DIR, "images", CHAPTER_ID)
os.makedirs(IMAGES_PATH, exist_ok=True)

def save_fig(fig_id, tight_layout=True, fig_extension="png", resolution=300):
    path = os.path.join(IMAGES_PATH, fig_id + "." + fig_extension)
    print("Saving figure:", fig_id)
    if tight_layout:
        plt.tight_layout()
    plt.savefig(path, format=fig_extension, dpi=resolution)

# Ignore useless warnings (see SciPy issue #5998)
import warnings
warnings.filterwarnings(action="ignore", message="^internal gelsd")

The Perceptron

Note: we set the max_iter and tol hyperparameters explicitly, to avoid a warning about the fact that their default values will change in a future version of Scikit-Learn.

In [2]:
import numpy as np
from sklearn.datasets import load_iris
from sklearn.linear_model import Perceptron

iris = load_iris()
X = iris.data[:, (2, 3)]  # petal length, petal width
y = (iris.target == 0).astype(int)

per_clf = Perceptron(max_iter=1000, tol=1e-3, random_state=42)
per_clf.fit(X, y)

y_pred = per_clf.predict([[2, 0.5]])
In [3]:
y_pred
Out[3]:
array([1])
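As a side note, a Perceptron can also be expressed as an SGDClassifier with a perceptron loss, a constant learning rate of 1, and no regularization. The sketch below restates the perceptron training above in those terms (the exact learned coefficients are not guaranteed to be identical, but both models easily separate the Setosa class):

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import SGDClassifier

iris = load_iris()
X = iris.data[:, (2, 3)]  # petal length, petal width
y = (iris.target == 0).astype(int)

# Same update rule as the Perceptron: perceptron loss, constant
# learning rate eta0=1, and no penalty (regularization)
sgd_clf = SGDClassifier(loss="perceptron", learning_rate="constant",
                        eta0=1, penalty=None, max_iter=1000, tol=1e-3,
                        random_state=42)
sgd_clf.fit(X, y)
print(sgd_clf.predict([[2, 0.5]]))
```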
In [4]:
a = -per_clf.coef_[0][0] / per_clf.coef_[0][1]
b = -per_clf.intercept_ / per_clf.coef_[0][1]

axes = [0, 5, 0, 2]

x0, x1 = np.meshgrid(
        np.linspace(axes[0], axes[1], 500).reshape(-1, 1),
        np.linspace(axes[2], axes[3], 200).reshape(-1, 1),
    )
X_new = np.c_[x0.ravel(), x1.ravel()]
y_predict = per_clf.predict(X_new)
zz = y_predict.reshape(x0.shape)

plt.figure(figsize=(10, 4))
plt.plot(X[y==0, 0], X[y==0, 1], "bs", label="Not Iris-Setosa")
plt.plot(X[y==1, 0], X[y==1, 1], "yo", label="Iris-Setosa")

plt.plot([axes[0], axes[1]], [a * axes[0] + b, a * axes[1] + b], "k-", linewidth=3)
from matplotlib.colors import ListedColormap
custom_cmap = ListedColormap(['#9898ff', '#fafab0'])

plt.contourf(x0, x1, zz, cmap=custom_cmap)
plt.xlabel("Petal length", fontsize=14)
plt.ylabel("Petal width", fontsize=14)
plt.legend(loc="lower right", fontsize=14)
plt.axis(axes)

save_fig("perceptron_iris_plot")
plt.show()
Saving figure: perceptron_iris_plot

Activation Functions

In [5]:
def sigmoid(z):
    return 1 / (1 + np.exp(-z))

def relu(z):
    return np.maximum(0, z)

def derivative(f, z, eps=0.000001):
    return (f(z + eps) - f(z - eps))/(2 * eps)
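As a quick sanity check (a small numerical sketch, not part of the notebook): the analytic derivative of the sigmoid is σ'(z) = σ(z)(1 − σ(z)), which peaks at 0.25 when z = 0, and the central finite-difference helper above agrees:

```python
import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

def derivative(f, z, eps=0.000001):
    # central finite difference approximation of f'(z)
    return (f(z + eps) - f(z - eps)) / (2 * eps)

z = np.array([0.0])
numeric = derivative(sigmoid, z)[0]
analytic = sigmoid(z)[0] * (1 - sigmoid(z)[0])  # sigma'(z) = sigma(z)(1 - sigma(z))
print(numeric, analytic)  # both ~0.25 at z = 0
```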
In [6]:
z = np.linspace(-5, 5, 200)

plt.figure(figsize=(11,4))

plt.subplot(121)
plt.plot(z, np.sign(z), "r-", linewidth=1, label="Step")
plt.plot(z, sigmoid(z), "g--", linewidth=2, label="Sigmoid")
plt.plot(z, np.tanh(z), "b-", linewidth=2, label="Tanh")
plt.plot(z, relu(z), "m-.", linewidth=2, label="ReLU")
plt.grid(True)
plt.legend(loc="center right", fontsize=14)
plt.title("Activation functions", fontsize=14)
plt.axis([-5, 5, -1.2, 1.2])

plt.subplot(122)
plt.plot(z, derivative(np.sign, z), "r-", linewidth=1, label="Step")
plt.plot(0, 0, "ro", markersize=5)
plt.plot(0, 0, "rx", markersize=10)
plt.plot(z, derivative(sigmoid, z), "g--", linewidth=2, label="Sigmoid")
plt.plot(z, derivative(np.tanh, z), "b-", linewidth=2, label="Tanh")
plt.plot(z, derivative(relu, z), "m-.", linewidth=2, label="ReLU")
plt.grid(True)
#plt.legend(loc="center right", fontsize=14)
plt.title("Derivatives", fontsize=14)
plt.axis([-5, 5, -0.2, 1.2])

save_fig("activation_functions_plot")
plt.show()
Saving figure: activation_functions_plot
In [7]:
def heaviside(z):
    return (z >= 0).astype(z.dtype)

def mlp_xor(x1, x2, activation=heaviside):
    return activation(-activation(x1 + x2 - 1.5) + activation(x1 + x2 - 0.5) - 0.5)
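The mlp_xor network computes XOR with a single hidden layer; checking it on the four corners of the unit square (a self-contained restatement of the functions above) reproduces the XOR truth table:

```python
import numpy as np

def heaviside(z):
    return (z >= 0).astype(z.dtype)

def mlp_xor(x1, x2, activation=heaviside):
    # hidden unit 1 fires when x1 + x2 >= 1.5 (AND); hidden unit 2 fires
    # when x1 + x2 >= 0.5 (OR); the output fires for OR-and-not-AND, i.e. XOR
    return activation(-activation(x1 + x2 - 1.5) + activation(x1 + x2 - 0.5) - 0.5)

x1 = np.array([0., 0., 1., 1.])
x2 = np.array([0., 1., 0., 1.])
print(mlp_xor(x1, x2))  # [0. 1. 1. 0.]
```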
In [8]:
x1s = np.linspace(-0.2, 1.2, 100)
x2s = np.linspace(-0.2, 1.2, 100)
x1, x2 = np.meshgrid(x1s, x2s)

z1 = mlp_xor(x1, x2, activation=heaviside)
z2 = mlp_xor(x1, x2, activation=sigmoid)

plt.figure(figsize=(10,4))

plt.subplot(121)
plt.contourf(x1, x2, z1)
plt.plot([0, 1], [0, 1], "gs", markersize=20)
plt.plot([0, 1], [1, 0], "y^", markersize=20)
plt.title("Activation function: heaviside", fontsize=14)
plt.grid(True)

plt.subplot(122)
plt.contourf(x1, x2, z2)
plt.plot([0, 1], [0, 1], "gs", markersize=20)
plt.plot([0, 1], [1, 0], "y^", markersize=20)
plt.title("Activation function: sigmoid", fontsize=14)
plt.grid(True)

Building an Image Classifier

First let's import TensorFlow and Keras.

In [9]:
import tensorflow as tf
from tensorflow import keras
In [10]:
tf.__version__
Out[10]:
'2.3.0'
In [11]:
keras.__version__
Out[11]:
'2.4.0'

Let's start by loading the Fashion MNIST dataset. Keras provides utility functions in keras.datasets to load popular datasets. This dataset already comes split into a training set and a test set, but it is useful to split the training set further to create a validation set:

In [12]:
fashion_mnist = keras.datasets.fashion_mnist
(X_train_full, y_train_full), (X_test, y_test) = fashion_mnist.load_data()
Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/train-labels-idx1-ubyte.gz
32768/29515 [=================================] - 0s 2us/step
Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/train-images-idx3-ubyte.gz
26427392/26421880 [==============================] - 2s 0us/step
Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/t10k-labels-idx1-ubyte.gz
8192/5148 [===============================================] - 0s 0us/step
Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/t10k-images-idx3-ubyte.gz
4423680/4422102 [==============================] - 0s 0us/step

The training set contains 60,000 grayscale images, each 28×28 pixels:

In [13]:
X_train_full.shape
Out[13]:
(60000, 28, 28)

Each pixel intensity is represented as a byte (0 to 255):

In [14]:
X_train_full.dtype
Out[14]:
dtype('uint8')

Let's split the full training set into a validation set and a (smaller) training set. We also scale the pixel intensities down to the 0–1 range of floats by dividing them by 255.

In [15]:
X_valid, X_train = X_train_full[:5000] / 255., X_train_full[5000:] / 255.
y_valid, y_train = y_train_full[:5000], y_train_full[5000:]
X_test = X_test / 255.

You can plot an image using Matplotlib's imshow() function with the 'binary' color map:

In [16]:
plt.imshow(X_train[0], cmap="binary")
plt.axis('off')
plt.show()

The labels are class IDs (represented as uint8), from 0 to 9:

In [17]:
y_train
Out[17]:
array([4, 0, 7, ..., 3, 0, 5], dtype=uint8)

Here are the corresponding class names:

In [18]:
class_names = ["T-shirt/top", "Trouser", "Pullover", "Dress", "Coat",
               "Sandal", "Shirt", "Sneaker", "Bag", "Ankle boot"]

So the first image in the training set is a coat:

In [19]:
class_names[y_train[0]]
Out[19]:
'Coat'

The validation set contains 5,000 images and the test set contains 10,000 images:

In [20]:
X_valid.shape
Out[20]:
(5000, 28, 28)
In [21]:
X_test.shape
Out[21]:
(10000, 28, 28)

Let's plot a sample of the images in this dataset:

In [22]:
n_rows = 4
n_cols = 10
plt.figure(figsize=(n_cols * 1.2, n_rows * 1.2))
for row in range(n_rows):
    for col in range(n_cols):
        index = n_cols * row + col
        plt.subplot(n_rows, n_cols, index + 1)
        plt.imshow(X_train[index], cmap="binary", interpolation="nearest")
        plt.axis('off')
        plt.title(class_names[y_train[index]], fontsize=12)
plt.subplots_adjust(wspace=0.2, hspace=0.5)
save_fig('fashion_mnist_plot', tight_layout=False)
plt.show()
Saving figure: fashion_mnist_plot
In [23]:
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[28, 28]))
model.add(keras.layers.Dense(300, activation="relu"))
model.add(keras.layers.Dense(100, activation="relu"))
model.add(keras.layers.Dense(10, activation="softmax"))
In [24]:
keras.backend.clear_session()
np.random.seed(42)
tf.random.set_seed(42)
In [25]:
model = keras.models.Sequential([
    keras.layers.Flatten(input_shape=[28, 28]),
    keras.layers.Dense(300, activation="relu"),
    keras.layers.Dense(100, activation="relu"),
    keras.layers.Dense(10, activation="softmax")
])
In [26]:
model.layers
Out[26]:
[<tensorflow.python.keras.layers.core.Flatten at 0x7f12e0603e48>,
 <tensorflow.python.keras.layers.core.Dense at 0x7f12e0603fd0>,
 <tensorflow.python.keras.layers.core.Dense at 0x7f12e060d278>,
 <tensorflow.python.keras.layers.core.Dense at 0x7f12e060d4e0>]
In [27]:
model.summary()
Model: "sequential"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
flatten (Flatten)            (None, 784)               0         
_________________________________________________________________
dense (Dense)                (None, 300)               235500    
_________________________________________________________________
dense_1 (Dense)              (None, 100)               30100     
_________________________________________________________________
dense_2 (Dense)              (None, 10)                1010      
=================================================================
Total params: 266,610
Trainable params: 266,610
Non-trainable params: 0
_________________________________________________________________
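The parameter counts in the summary can be verified by hand: a Dense layer with n_in inputs and n_out units has n_in × n_out connection weights plus one bias per unit (a quick arithmetic check, not notebook code):

```python
def dense_params(n_in, n_out):
    # one weight per (input, unit) pair, plus one bias per unit
    return n_in * n_out + n_out

print(dense_params(784, 300))   # 235500
print(dense_params(300, 100))   # 30100
print(dense_params(100, 10))    # 1010
total = dense_params(784, 300) + dense_params(300, 100) + dense_params(100, 10)
print(total)                    # 266610
```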
In [28]:
keras.utils.plot_model(model, "my_fashion_mnist_model.png", show_shapes=True)
Out[28]:
In [29]:
hidden1 = model.layers[1]
hidden1.name
Out[29]:
'dense'
In [30]:
model.get_layer(hidden1.name) is hidden1
Out[30]:
True
In [31]:
weights, biases = hidden1.get_weights()
In [32]:
weights
Out[32]:
array([[ 0.02448617, -0.00877795, -0.02189048, ..., -0.02766046,
         0.03859074, -0.06889391],
       [ 0.00476504, -0.03105379, -0.0586676 , ...,  0.00602964,
        -0.02763776, -0.04165364],
       [-0.06189284, -0.06901957,  0.07102345, ..., -0.04238207,
         0.07121518, -0.07331658],
       ...,
       [-0.03048757,  0.02155137, -0.05400612, ..., -0.00113463,
         0.00228987,  0.05581069],
       [ 0.07061854, -0.06960931,  0.07038955, ..., -0.00384101,
         0.00034875,  0.02878492],
       [-0.06022581,  0.01577859, -0.02585464, ..., -0.00527829,
         0.00272203, -0.06793761]], dtype=float32)
In [33]:
weights.shape
Out[33]:
(784, 300)
In [34]:
biases
Out[34]:
array([0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
       0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
       0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
       0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
       0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
       0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
       0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
       0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
       0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
       0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
       0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
       0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
       0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
       0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
       0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
       0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
       0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
       0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.], dtype=float32)
In [35]:
biases.shape
Out[35]:
(300,)
In [36]:
model.compile(loss="sparse_categorical_crossentropy",
              optimizer="sgd",
              metrics=["accuracy"])

This code is equivalent to:

model.compile(loss=keras.losses.sparse_categorical_crossentropy,
              optimizer=keras.optimizers.SGD(),
              metrics=[keras.metrics.sparse_categorical_accuracy])
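We use "sparse_categorical_crossentropy" because the labels are integer class IDs; with one-hot labels we would use "categorical_crossentropy" instead. The two compute the same quantity, as this small NumPy sketch shows (the probabilities below are illustrative values, not notebook output):

```python
import numpy as np

# Hypothetical predicted probabilities for 3 samples over 4 classes
proba = np.array([[0.7, 0.1, 0.1, 0.1],
                  [0.2, 0.5, 0.2, 0.1],
                  [0.1, 0.1, 0.1, 0.7]])
y_sparse = np.array([0, 1, 3])    # integer class IDs
y_onehot = np.eye(4)[y_sparse]    # the same labels, one-hot encoded

# sparse version: pick the probability of the true class directly
loss_sparse = -np.log(proba[np.arange(3), y_sparse])
# one-hot version: sum over classes (only the true class contributes)
loss_onehot = -(y_onehot * np.log(proba)).sum(axis=1)

print(np.allclose(loss_sparse, loss_onehot))  # True
```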
In [37]:
history = model.fit(X_train, y_train, epochs=30,
                    validation_data=(X_valid, y_valid))
Epoch 1/30
1719/1719 [==============================] - 5s 3ms/step - loss: 0.7237 - accuracy: 0.7644 - val_loss: 0.5207 - val_accuracy: 0.8234
Epoch 2/30
1719/1719 [==============================] - 5s 3ms/step - loss: 0.4843 - accuracy: 0.8318 - val_loss: 0.4345 - val_accuracy: 0.8538
Epoch 3/30
1719/1719 [==============================] - 5s 3ms/step - loss: 0.4393 - accuracy: 0.8455 - val_loss: 0.5288 - val_accuracy: 0.8002
Epoch 4/30
1719/1719 [==============================] - 6s 3ms/step - loss: 0.4126 - accuracy: 0.8567 - val_loss: 0.3914 - val_accuracy: 0.8648
Epoch 5/30
1719/1719 [==============================] - 6s 3ms/step - loss: 0.3938 - accuracy: 0.8618 - val_loss: 0.3754 - val_accuracy: 0.8676
Epoch 6/30
1719/1719 [==============================] - 6s 3ms/step - loss: 0.3752 - accuracy: 0.8676 - val_loss: 0.3706 - val_accuracy: 0.8714
Epoch 7/30
1719/1719 [==============================] - 6s 3ms/step - loss: 0.3634 - accuracy: 0.8713 - val_loss: 0.3617 - val_accuracy: 0.8716
Epoch 8/30
1719/1719 [==============================] - 6s 3ms/step - loss: 0.3519 - accuracy: 0.8751 - val_loss: 0.3836 - val_accuracy: 0.8624
Epoch 9/30
1719/1719 [==============================] - 6s 3ms/step - loss: 0.3417 - accuracy: 0.8791 - val_loss: 0.3589 - val_accuracy: 0.8710
Epoch 10/30
1719/1719 [==============================] - 5s 3ms/step - loss: 0.3322 - accuracy: 0.8821 - val_loss: 0.3445 - val_accuracy: 0.8774
Epoch 11/30
1719/1719 [==============================] - 6s 3ms/step - loss: 0.3244 - accuracy: 0.8834 - val_loss: 0.3431 - val_accuracy: 0.8780
Epoch 12/30
1719/1719 [==============================] - 6s 3ms/step - loss: 0.3151 - accuracy: 0.8866 - val_loss: 0.3311 - val_accuracy: 0.8834
Epoch 13/30
1719/1719 [==============================] - 5s 3ms/step - loss: 0.3083 - accuracy: 0.8887 - val_loss: 0.3266 - val_accuracy: 0.8882
Epoch 14/30
1719/1719 [==============================] - 5s 3ms/step - loss: 0.3024 - accuracy: 0.8914 - val_loss: 0.3385 - val_accuracy: 0.8796
Epoch 15/30
1719/1719 [==============================] - 6s 3ms/step - loss: 0.2949 - accuracy: 0.8937 - val_loss: 0.3210 - val_accuracy: 0.8862
Epoch 16/30
1719/1719 [==============================] - 6s 3ms/step - loss: 0.2892 - accuracy: 0.8972 - val_loss: 0.3100 - val_accuracy: 0.8904
Epoch 17/30
1719/1719 [==============================] - 6s 3ms/step - loss: 0.2840 - accuracy: 0.8974 - val_loss: 0.3571 - val_accuracy: 0.8736
Epoch 18/30
1719/1719 [==============================] - 6s 3ms/step - loss: 0.2781 - accuracy: 0.9001 - val_loss: 0.3148 - val_accuracy: 0.8898
Epoch 19/30
1719/1719 [==============================] - 6s 3ms/step - loss: 0.2732 - accuracy: 0.9022 - val_loss: 0.3120 - val_accuracy: 0.8892
Epoch 20/30
1719/1719 [==============================] - 6s 3ms/step - loss: 0.2675 - accuracy: 0.9036 - val_loss: 0.3282 - val_accuracy: 0.8832
Epoch 21/30
1719/1719 [==============================] - 6s 3ms/step - loss: 0.2624 - accuracy: 0.9055 - val_loss: 0.3048 - val_accuracy: 0.8910
Epoch 22/30
1719/1719 [==============================] - 5s 3ms/step - loss: 0.2577 - accuracy: 0.9071 - val_loss: 0.2972 - val_accuracy: 0.8968
Epoch 23/30
1719/1719 [==============================] - 5s 3ms/step - loss: 0.2535 - accuracy: 0.9087 - val_loss: 0.2981 - val_accuracy: 0.8940
Epoch 24/30
1719/1719 [==============================] - 5s 3ms/step - loss: 0.2485 - accuracy: 0.9104 - val_loss: 0.3079 - val_accuracy: 0.8874
Epoch 25/30
1719/1719 [==============================] - 4s 3ms/step - loss: 0.2444 - accuracy: 0.9119 - val_loss: 0.2974 - val_accuracy: 0.8966
Epoch 26/30
1719/1719 [==============================] - 4s 3ms/step - loss: 0.2408 - accuracy: 0.9135 - val_loss: 0.3050 - val_accuracy: 0.8884
Epoch 27/30
1719/1719 [==============================] - 6s 3ms/step - loss: 0.2361 - accuracy: 0.9155 - val_loss: 0.3010 - val_accuracy: 0.8954
Epoch 28/30
1719/1719 [==============================] - 6s 3ms/step - loss: 0.2329 - accuracy: 0.9164 - val_loss: 0.2978 - val_accuracy: 0.8954
Epoch 29/30
1719/1719 [==============================] - 6s 3ms/step - loss: 0.2283 - accuracy: 0.9186 - val_loss: 0.3053 - val_accuracy: 0.8908
Epoch 30/30
1719/1719 [==============================] - 6s 3ms/step - loss: 0.2250 - accuracy: 0.9190 - val_loss: 0.3003 - val_accuracy: 0.8922
In [38]:
history.params
Out[38]:
{'verbose': 1, 'epochs': 30, 'steps': 1719}
In [39]:
print(history.epoch)
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29]
In [40]:
history.history.keys()
Out[40]:
dict_keys(['loss', 'accuracy', 'val_loss', 'val_accuracy'])
In [41]:
import pandas as pd

pd.DataFrame(history.history).plot(figsize=(8, 5))
plt.grid(True)
plt.gca().set_ylim(0, 1)
save_fig("keras_learning_curves_plot")
plt.show()
Saving figure: keras_learning_curves_plot
In [42]:
model.evaluate(X_test, y_test)
313/313 [==============================] - 1s 3ms/step - loss: 0.3347 - accuracy: 0.8829
Out[42]:
[0.3346880376338959, 0.8828999996185303]
In [43]:
X_new = X_test[:3]
y_proba = model.predict(X_new)
y_proba.round(2)
Out[43]:
array([[0.  , 0.  , 0.  , 0.  , 0.  , 0.01, 0.  , 0.03, 0.  , 0.96],
       [0.  , 0.  , 0.99, 0.  , 0.01, 0.  , 0.  , 0.  , 0.  , 0.  ],
       [0.  , 1.  , 0.  , 0.  , 0.  , 0.  , 0.  , 0.  , 0.  , 0.  ]],
      dtype=float32)
In [44]:
y_pred = np.argmax(model.predict(X_new), axis=-1)
y_pred
Out[44]:
array([9, 2, 1])
In [45]:
np.array(class_names)[y_pred]
Out[45]:
array(['Ankle boot', 'Pullover', 'Trouser'], dtype='<U11')
In [46]:
y_new = y_test[:3]
y_new
Out[46]:
array([9, 2, 1], dtype=uint8)
In [47]:
plt.figure(figsize=(7.2, 2.4))
for index, image in enumerate(X_new):
    plt.subplot(1, 3, index + 1)
    plt.imshow(image, cmap="binary", interpolation="nearest")
    plt.axis('off')
    plt.title(class_names[y_test[index]], fontsize=12)
plt.subplots_adjust(wspace=0.2, hspace=0.5)
save_fig('fashion_mnist_images_plot', tight_layout=False)
plt.show()
Saving figure: fashion_mnist_images_plot

Regression MLP

Let's load, split, and scale the California housing dataset (the original one, not the modified version used in Chapter 2):

In [48]:
from sklearn.datasets import fetch_california_housing
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

housing = fetch_california_housing()

X_train_full, X_test, y_train_full, y_test = train_test_split(housing.data, housing.target, random_state=42)
X_train, X_valid, y_train, y_valid = train_test_split(X_train_full, y_train_full, random_state=42)

scaler = StandardScaler()
X_train = scaler.fit_transform(X_train)
X_valid = scaler.transform(X_valid)
X_test = scaler.transform(X_test)
Downloading Cal. housing from https://ndownloader.figshare.com/files/5976036 to /home/work/scikit_learn_data
In [49]:
np.random.seed(42)
tf.random.set_seed(42)
In [50]:
model = keras.models.Sequential([
    keras.layers.Dense(30, activation="relu", input_shape=X_train.shape[1:]),
    keras.layers.Dense(1)
])
model.compile(loss="mean_squared_error", optimizer=keras.optimizers.SGD(learning_rate=1e-3))
history = model.fit(X_train, y_train, epochs=20, validation_data=(X_valid, y_valid))
mse_test = model.evaluate(X_test, y_test)
X_new = X_test[:3]
y_pred = model.predict(X_new)
Epoch 1/20
363/363 [==============================] - 1s 4ms/step - loss: 1.6419 - val_loss: 0.8560
Epoch 2/20
363/363 [==============================] - 1s 3ms/step - loss: 0.7047 - val_loss: 0.6531
Epoch 3/20
363/363 [==============================] - 1s 3ms/step - loss: 0.6345 - val_loss: 0.6099
Epoch 4/20
363/363 [==============================] - 1s 3ms/step - loss: 0.5977 - val_loss: 0.5658
Epoch 5/20
363/363 [==============================] - 1s 3ms/step - loss: 0.5706 - val_loss: 0.5355
Epoch 6/20
363/363 [==============================] - 1s 3ms/step - loss: 0.5472 - val_loss: 0.5173
Epoch 7/20
363/363 [==============================] - 1s 3ms/step - loss: 0.5288 - val_loss: 0.5081
Epoch 8/20
363/363 [==============================] - 1s 3ms/step - loss: 0.5130 - val_loss: 0.4799
Epoch 9/20
363/363 [==============================] - 1s 3ms/step - loss: 0.4992 - val_loss: 0.4690
Epoch 10/20
363/363 [==============================] - 1s 3ms/step - loss: 0.4875 - val_loss: 0.4656
Epoch 11/20
363/363 [==============================] - 1s 3ms/step - loss: 0.4777 - val_loss: 0.4482
Epoch 12/20
363/363 [==============================] - 1s 3ms/step - loss: 0.4688 - val_loss: 0.4479
Epoch 13/20
363/363 [==============================] - 1s 3ms/step - loss: 0.4615 - val_loss: 0.4296
Epoch 14/20
363/363 [==============================] - 1s 3ms/step - loss: 0.4547 - val_loss: 0.4233
Epoch 15/20
363/363 [==============================] - 1s 3ms/step - loss: 0.4488 - val_loss: 0.4176
Epoch 16/20
363/363 [==============================] - 1s 3ms/step - loss: 0.4435 - val_loss: 0.4123
Epoch 17/20
363/363 [==============================] - 1s 3ms/step - loss: 0.4389 - val_loss: 0.4071
Epoch 18/20
363/363 [==============================] - 1s 3ms/step - loss: 0.4347 - val_loss: 0.4037
Epoch 19/20
363/363 [==============================] - 1s 3ms/step - loss: 0.4306 - val_loss: 0.4000
Epoch 20/20
363/363 [==============================] - 1s 3ms/step - loss: 0.4273 - val_loss: 0.3969
162/162 [==============================] - 0s 2ms/step - loss: 0.4212
In [51]:
plt.plot(pd.DataFrame(history.history))
plt.grid(True)
plt.gca().set_ylim(0, 1)
plt.show()
In [52]:
y_pred
Out[52]:
array([[0.38856634],
       [1.6792021 ],
       [3.1022797 ]], dtype=float32)

Functional API

Not all neural network models are simply sequential; some have much more complex topologies, with multiple inputs and/or multiple outputs. For example, a Wide & Deep neural network (see the paper) connects all or part of the inputs directly to the output layer.

In [53]:
np.random.seed(42)
tf.random.set_seed(42)
In [54]:
input_ = keras.layers.Input(shape=X_train.shape[1:])
hidden1 = keras.layers.Dense(30, activation="relu")(input_)
hidden2 = keras.layers.Dense(30, activation="relu")(hidden1)
concat = keras.layers.concatenate([input_, hidden2])
output = keras.layers.Dense(1)(concat)
model = keras.models.Model(inputs=[input_], outputs=[output])
In [55]:
model.summary()
Model: "functional_1"
__________________________________________________________________________________________________
Layer (type)                    Output Shape         Param #     Connected to                     
==================================================================================================
input_1 (InputLayer)            [(None, 8)]          0                                            
__________________________________________________________________________________________________
dense_5 (Dense)                 (None, 30)           270         input_1[0][0]                    
__________________________________________________________________________________________________
dense_6 (Dense)                 (None, 30)           930         dense_5[0][0]                    
__________________________________________________________________________________________________
concatenate (Concatenate)       (None, 38)           0           input_1[0][0]                    
                                                                 dense_6[0][0]                    
__________________________________________________________________________________________________
dense_7 (Dense)                 (None, 1)            39          concatenate[0][0]                
==================================================================================================
Total params: 1,239
Trainable params: 1,239
Non-trainable params: 0
__________________________________________________________________________________________________
In [56]:
model.compile(loss="mean_squared_error", optimizer=keras.optimizers.SGD(learning_rate=1e-3))
history = model.fit(X_train, y_train, epochs=20,
                    validation_data=(X_valid, y_valid))
mse_test = model.evaluate(X_test, y_test)
y_pred = model.predict(X_new)
Epoch 1/20
363/363 [==============================] - 1s 4ms/step - loss: 1.2611 - val_loss: 3.3940
Epoch 2/20
363/363 [==============================] - 1s 4ms/step - loss: 0.6580 - val_loss: 0.9360
Epoch 3/20
363/363 [==============================] - 1s 4ms/step - loss: 0.5878 - val_loss: 0.5649
Epoch 4/20
363/363 [==============================] - 1s 4ms/step - loss: 0.5582 - val_loss: 0.5712
Epoch 5/20
363/363 [==============================] - 1s 4ms/step - loss: 0.5347 - val_loss: 0.5045
Epoch 6/20
363/363 [==============================] - 1s 4ms/step - loss: 0.5158 - val_loss: 0.4831
Epoch 7/20
363/363 [==============================] - 1s 4ms/step - loss: 0.5002 - val_loss: 0.4639
Epoch 8/20
363/363 [==============================] - 1s 4ms/step - loss: 0.4876 - val_loss: 0.4638
Epoch 9/20
363/363 [==============================] - 1s 4ms/step - loss: 0.4760 - val_loss: 0.4421
Epoch 10/20
363/363 [==============================] - 1s 3ms/step - loss: 0.4659 - val_loss: 0.4313
Epoch 11/20
363/363 [==============================] - 1s 4ms/step - loss: 0.4577 - val_loss: 0.4345
Epoch 12/20
363/363 [==============================] - 1s 4ms/step - loss: 0.4498 - val_loss: 0.4168
Epoch 13/20
363/363 [==============================] - 1s 4ms/step - loss: 0.4428 - val_loss: 0.4230
Epoch 14/20
363/363 [==============================] - 1s 4ms/step - loss: 0.4366 - val_loss: 0.4047
Epoch 15/20
363/363 [==============================] - 1s 4ms/step - loss: 0.4307 - val_loss: 0.4078
Epoch 16/20
363/363 [==============================] - 1s 4ms/step - loss: 0.4257 - val_loss: 0.3938
Epoch 17/20
363/363 [==============================] - 1s 4ms/step - loss: 0.4210 - val_loss: 0.3952
Epoch 18/20
363/363 [==============================] - 1s 4ms/step - loss: 0.4167 - val_loss: 0.3860
Epoch 19/20
363/363 [==============================] - 1s 4ms/step - loss: 0.4121 - val_loss: 0.3827
Epoch 20/20
363/363 [==============================] - 1s 4ms/step - loss: 0.4088 - val_loss: 0.4054
162/162 [==============================] - 0s 2ms/step - loss: 0.4032

What if we want to send different input features through the wide and deep paths? We will send 5 features through the wide path (features 0 to 4) and 6 features through the deep path (features 2 to 7). Note that 3 features (features 2, 3, and 4) will go through both paths.

In [57]:
np.random.seed(42)
tf.random.set_seed(42)
In [58]:
input_A = keras.layers.Input(shape=[5], name="wide_input")
input_B = keras.layers.Input(shape=[6], name="deep_input")
hidden1 = keras.layers.Dense(30, activation="relu")(input_B)
hidden2 = keras.layers.Dense(30, activation="relu")(hidden1)
concat = keras.layers.concatenate([input_A, hidden2])
output = keras.layers.Dense(1, name="output")(concat)
model = keras.models.Model(inputs=[input_A, input_B], outputs=[output])
In [59]:
model.compile(loss="mse", optimizer=keras.optimizers.SGD(learning_rate=1e-3))

X_train_A, X_train_B = X_train[:, :5], X_train[:, 2:]
X_valid_A, X_valid_B = X_valid[:, :5], X_valid[:, 2:]
X_test_A, X_test_B = X_test[:, :5], X_test[:, 2:]
X_new_A, X_new_B = X_test_A[:3], X_test_B[:3]

history = model.fit((X_train_A, X_train_B), y_train, epochs=20,
                    validation_data=((X_valid_A, X_valid_B), y_valid))
mse_test = model.evaluate((X_test_A, X_test_B), y_test)
y_pred = model.predict((X_new_A, X_new_B))
Epoch 1/20
363/363 [==============================] - 2s 4ms/step - loss: 1.8145 - val_loss: 0.8072
Epoch 2/20
363/363 [==============================] - 1s 4ms/step - loss: 0.6771 - val_loss: 0.6658
Epoch 3/20
363/363 [==============================] - 1s 4ms/step - loss: 0.5979 - val_loss: 0.5687
Epoch 4/20
363/363 [==============================] - 1s 4ms/step - loss: 0.5584 - val_loss: 0.5296
Epoch 5/20
363/363 [==============================] - 1s 4ms/step - loss: 0.5334 - val_loss: 0.4993
Epoch 6/20
363/363 [==============================] - 1s 4ms/step - loss: 0.5120 - val_loss: 0.4811
Epoch 7/20
363/363 [==============================] - 1s 4ms/step - loss: 0.4970 - val_loss: 0.4696
Epoch 8/20
363/363 [==============================] - 1s 4ms/step - loss: 0.4843 - val_loss: 0.4496
Epoch 9/20
363/363 [==============================] - 1s 4ms/step - loss: 0.4730 - val_loss: 0.4404
Epoch 10/20
363/363 [==============================] - 1s 4ms/step - loss: 0.4644 - val_loss: 0.4315
Epoch 11/20
363/363 [==============================] - 1s 4ms/step - loss: 0.4570 - val_loss: 0.4268
Epoch 12/20
363/363 [==============================] - 1s 4ms/step - loss: 0.4510 - val_loss: 0.4166
Epoch 13/20
363/363 [==============================] - 1s 4ms/step - loss: 0.4462 - val_loss: 0.4125
Epoch 14/20
363/363 [==============================] - 1s 4ms/step - loss: 0.4421 - val_loss: 0.4074
Epoch 15/20
363/363 [==============================] - 1s 4ms/step - loss: 0.4385 - val_loss: 0.4044
Epoch 16/20
363/363 [==============================] - 1s 4ms/step - loss: 0.4356 - val_loss: 0.4007
Epoch 17/20
363/363 [==============================] - 1s 4ms/step - loss: 0.4322 - val_loss: 0.4013
Epoch 18/20
363/363 [==============================] - 1s 4ms/step - loss: 0.4305 - val_loss: 0.3987
Epoch 19/20
363/363 [==============================] - 1s 4ms/step - loss: 0.4274 - val_loss: 0.3934
Epoch 20/20
363/363 [==============================] - 1s 4ms/step - loss: 0.4261 - val_loss: 0.4204
162/162 [==============================] - 0s 2ms/step - loss: 0.4219

Adding an auxiliary output for regularization:

In [60]:
np.random.seed(42)
tf.random.set_seed(42)
In [61]:
input_A = keras.layers.Input(shape=[5], name="wide_input")
input_B = keras.layers.Input(shape=[6], name="deep_input")
hidden1 = keras.layers.Dense(30, activation="relu")(input_B)
hidden2 = keras.layers.Dense(30, activation="relu")(hidden1)
concat = keras.layers.concatenate([input_A, hidden2])
output = keras.layers.Dense(1, name="main_output")(concat)
aux_output = keras.layers.Dense(1, name="aux_output")(hidden2)
model = keras.models.Model(inputs=[input_A, input_B],
                           outputs=[output, aux_output])
In [62]:
model.compile(loss=["mse", "mse"], loss_weights=[0.9, 0.1], optimizer=keras.optimizers.SGD(learning_rate=1e-3))
In [63]:
history = model.fit([X_train_A, X_train_B], [y_train, y_train], epochs=20,
                    validation_data=([X_valid_A, X_valid_B], [y_valid, y_valid]))
Epoch 1/20
363/363 [==============================] - 2s 6ms/step - loss: 2.1365 - main_output_loss: 1.9196 - aux_output_loss: 4.0890 - val_loss: 1.6233 - val_main_output_loss: 0.8468 - val_aux_output_loss: 8.6117
Epoch 2/20
363/363 [==============================] - 2s 5ms/step - loss: 0.8905 - main_output_loss: 0.6969 - aux_output_loss: 2.6326 - val_loss: 1.5163 - val_main_output_loss: 0.6836 - val_aux_output_loss: 9.0109
Epoch 3/20
363/363 [==============================] - 2s 5ms/step - loss: 0.7429 - main_output_loss: 0.6088 - aux_output_loss: 1.9499 - val_loss: 1.4639 - val_main_output_loss: 0.6229 - val_aux_output_loss: 9.0326
Epoch 4/20
363/363 [==============================] - 2s 5ms/step - loss: 0.6771 - main_output_loss: 0.5691 - aux_output_loss: 1.6485 - val_loss: 1.3388 - val_main_output_loss: 0.5481 - val_aux_output_loss: 8.4552
Epoch 5/20
363/363 [==============================] - 2s 5ms/step - loss: 0.6381 - main_output_loss: 0.5434 - aux_output_loss: 1.4911 - val_loss: 1.2177 - val_main_output_loss: 0.5194 - val_aux_output_loss: 7.5030
Epoch 6/20
363/363 [==============================] - 2s 5ms/step - loss: 0.6079 - main_output_loss: 0.5207 - aux_output_loss: 1.3923 - val_loss: 1.0935 - val_main_output_loss: 0.5106 - val_aux_output_loss: 6.3396
Epoch 7/20
363/363 [==============================] - 2s 5ms/step - loss: 0.5853 - main_output_loss: 0.5040 - aux_output_loss: 1.3175 - val_loss: 0.9918 - val_main_output_loss: 0.5115 - val_aux_output_loss: 5.3151
Epoch 8/20
363/363 [==============================] - 2s 5ms/step - loss: 0.5666 - main_output_loss: 0.4898 - aux_output_loss: 1.2572 - val_loss: 0.8733 - val_main_output_loss: 0.4733 - val_aux_output_loss: 4.4740
Epoch 9/20
363/363 [==============================] - 2s 5ms/step - loss: 0.5504 - main_output_loss: 0.4771 - aux_output_loss: 1.2101 - val_loss: 0.7832 - val_main_output_loss: 0.4555 - val_aux_output_loss: 3.7323
Epoch 10/20
363/363 [==============================] - 2s 5ms/step - loss: 0.5373 - main_output_loss: 0.4671 - aux_output_loss: 1.1695 - val_loss: 0.7170 - val_main_output_loss: 0.4604 - val_aux_output_loss: 3.0262
Epoch 11/20
363/363 [==============================] - 2s 5ms/step - loss: 0.5266 - main_output_loss: 0.4591 - aux_output_loss: 1.1344 - val_loss: 0.6510 - val_main_output_loss: 0.4293 - val_aux_output_loss: 2.6468
Epoch 12/20
363/363 [==============================] - 2s 5ms/step - loss: 0.5173 - main_output_loss: 0.4520 - aux_output_loss: 1.1048 - val_loss: 0.6051 - val_main_output_loss: 0.4310 - val_aux_output_loss: 2.1722
Epoch 13/20
363/363 [==============================] - 2s 5ms/step - loss: 0.5095 - main_output_loss: 0.4465 - aux_output_loss: 1.0765 - val_loss: 0.5644 - val_main_output_loss: 0.4161 - val_aux_output_loss: 1.8992
Epoch 14/20
363/363 [==============================] - 2s 5ms/step - loss: 0.5027 - main_output_loss: 0.4417 - aux_output_loss: 1.0511 - val_loss: 0.5354 - val_main_output_loss: 0.4119 - val_aux_output_loss: 1.6466
Epoch 15/20
363/363 [==============================] - 2s 5ms/step - loss: 0.4967 - main_output_loss: 0.4376 - aux_output_loss: 1.0280 - val_loss: 0.5124 - val_main_output_loss: 0.4047 - val_aux_output_loss: 1.4812
Epoch 16/20
363/363 [==============================] - 2s 5ms/step - loss: 0.4916 - main_output_loss: 0.4343 - aux_output_loss: 1.0070 - val_loss: 0.4934 - val_main_output_loss: 0.4034 - val_aux_output_loss: 1.3035
Epoch 17/20
363/363 [==============================] - 2s 4ms/step - loss: 0.4867 - main_output_loss: 0.4311 - aux_output_loss: 0.9872 - val_loss: 0.4801 - val_main_output_loss: 0.3984 - val_aux_output_loss: 1.2150
Epoch 18/20
363/363 [==============================] - 2s 4ms/step - loss: 0.4829 - main_output_loss: 0.4289 - aux_output_loss: 0.9686 - val_loss: 0.4694 - val_main_output_loss: 0.3962 - val_aux_output_loss: 1.1279
Epoch 19/20
363/363 [==============================] - 2s 5ms/step - loss: 0.4785 - main_output_loss: 0.4260 - aux_output_loss: 0.9510 - val_loss: 0.4580 - val_main_output_loss: 0.3936 - val_aux_output_loss: 1.0372
Epoch 20/20
363/363 [==============================] - 2s 5ms/step - loss: 0.4756 - main_output_loss: 0.4246 - aux_output_loss: 0.9344 - val_loss: 0.4655 - val_main_output_loss: 0.4048 - val_aux_output_loss: 1.0118
In [64]:
total_loss, main_loss, aux_loss = model.evaluate(
    [X_test_A, X_test_B], [y_test, y_test])
y_pred_main, y_pred_aux = model.predict([X_new_A, X_new_B])
162/162 [==============================] - 0s 3ms/step - loss: 0.4668 - main_output_loss: 0.4178 - aux_output_loss: 0.9082
WARNING:tensorflow:5 out of the last 6 calls to <function Model.make_predict_function.<locals>.predict_function at 0x7f1364d939d8> triggered tf.function retracing. Tracing is expensive and the excessive number of tracings could be due to (1) creating @tf.function repeatedly in a loop, (2) passing tensors with different shapes, (3) passing Python objects instead of tensors. For (1), please define your @tf.function outside of the loop. For (2), @tf.function has experimental_relax_shapes=True option that relaxes argument shapes that can avoid unnecessary retracing. For (3), please refer to https://www.tensorflow.org/tutorials/customization/performance#python_or_tensor_args and https://www.tensorflow.org/api_docs/python/tf/function for  more details.
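
As a pure-Python sanity check, the `loss` value Keras reports is the weighted sum of the per-output losses, using the `loss_weights=[0.9, 0.1]` passed to `compile()`. Plugging in the epoch-20 numbers from the log above:

```python
# Keras reports total loss = sum of loss_weights[i] * output_loss[i].
main_loss, aux_loss = 0.4246, 0.9344   # main_output_loss, aux_output_loss (epoch 20)
weights = [0.9, 0.1]                   # loss_weights passed to compile()
total = weights[0] * main_loss + weights[1] * aux_loss
assert abs(total - 0.4756) < 1e-3      # matches the reported loss: 0.4756
```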

Subclassing API

In [65]:
class WideAndDeepModel(keras.models.Model):
    def __init__(self, units=30, activation="relu", **kwargs):
        super().__init__(**kwargs)
        self.hidden1 = keras.layers.Dense(units, activation=activation)
        self.hidden2 = keras.layers.Dense(units, activation=activation)
        self.main_output = keras.layers.Dense(1)
        self.aux_output = keras.layers.Dense(1)
        
    def call(self, inputs):
        input_A, input_B = inputs
        hidden1 = self.hidden1(input_B)
        hidden2 = self.hidden2(hidden1)
        concat = keras.layers.concatenate([input_A, hidden2])
        main_output = self.main_output(concat)
        aux_output = self.aux_output(hidden2)
        return main_output, aux_output

model = WideAndDeepModel(30, activation="relu")
In [66]:
model.compile(loss="mse", loss_weights=[0.9, 0.1], optimizer=keras.optimizers.SGD(learning_rate=1e-3))
history = model.fit((X_train_A, X_train_B), (y_train, y_train), epochs=10,
                    validation_data=((X_valid_A, X_valid_B), (y_valid, y_valid)))
total_loss, main_loss, aux_loss = model.evaluate((X_test_A, X_test_B), (y_test, y_test))
y_pred_main, y_pred_aux = model.predict((X_new_A, X_new_B))
Epoch 1/10
363/363 [==============================] - 2s 6ms/step - loss: 2.3298 - output_1_loss: 2.2186 - output_2_loss: 3.3304 - val_loss: 2.1435 - val_output_1_loss: 1.1581 - val_output_2_loss: 11.0117
Epoch 2/10
363/363 [==============================] - 2s 5ms/step - loss: 0.9714 - output_1_loss: 0.8543 - output_2_loss: 2.0252 - val_loss: 1.7567 - val_output_1_loss: 0.8205 - val_output_2_loss: 10.1825
Epoch 3/10
363/363 [==============================] - 2s 5ms/step - loss: 0.8268 - output_1_loss: 0.7289 - output_2_loss: 1.7082 - val_loss: 1.5664 - val_output_1_loss: 0.7913 - val_output_2_loss: 8.5419
Epoch 4/10
363/363 [==============================] - 2s 5ms/step - loss: 0.7636 - output_1_loss: 0.6764 - output_2_loss: 1.5477 - val_loss: 1.3088 - val_output_1_loss: 0.6549 - val_output_2_loss: 7.1933
Epoch 5/10
363/363 [==============================] - 2s 5ms/step - loss: 0.7211 - output_1_loss: 0.6402 - output_2_loss: 1.4489 - val_loss: 1.1357 - val_output_1_loss: 0.5964 - val_output_2_loss: 5.9898
Epoch 6/10
363/363 [==============================] - 2s 5ms/step - loss: 0.6895 - output_1_loss: 0.6124 - output_2_loss: 1.3833 - val_loss: 1.0036 - val_output_1_loss: 0.5937 - val_output_2_loss: 4.6933
Epoch 7/10
363/363 [==============================] - 2s 6ms/step - loss: 0.6632 - output_1_loss: 0.5894 - output_2_loss: 1.3274 - val_loss: 0.8904 - val_output_1_loss: 0.5591 - val_output_2_loss: 3.8714
Epoch 8/10
363/363 [==============================] - 2s 5ms/step - loss: 0.6410 - output_1_loss: 0.5701 - output_2_loss: 1.2796 - val_loss: 0.8009 - val_output_1_loss: 0.5243 - val_output_2_loss: 3.2903
Epoch 9/10
363/363 [==============================] - 2s 5ms/step - loss: 0.6204 - output_1_loss: 0.5514 - output_2_loss: 1.2416 - val_loss: 0.7357 - val_output_1_loss: 0.5144 - val_output_2_loss: 2.7275
Epoch 10/10
363/363 [==============================] - 2s 5ms/step - loss: 0.6024 - output_1_loss: 0.5355 - output_2_loss: 1.2043 - val_loss: 0.6849 - val_output_1_loss: 0.5014 - val_output_2_loss: 2.3370
162/162 [==============================] - 0s 3ms/step - loss: 0.5841 - output_1_loss: 0.5188 - output_2_loss: 1.1722
WARNING:tensorflow:6 out of the last 7 calls to <function Model.make_predict_function.<locals>.predict_function at 0x7f1364adbae8> triggered tf.function retracing. Tracing is expensive and the excessive number of tracings could be due to (1) creating @tf.function repeatedly in a loop, (2) passing tensors with different shapes, (3) passing Python objects instead of tensors. For (1), please define your @tf.function outside of the loop. For (2), @tf.function has experimental_relax_shapes=True option that relaxes argument shapes that can avoid unnecessary retracing. For (3), please refer to https://www.tensorflow.org/tutorials/customization/performance#python_or_tensor_args and https://www.tensorflow.org/api_docs/python/tf/function for  more details.
In [67]:
model = WideAndDeepModel(30, activation="relu")

Saving and Restoring

In [68]:
np.random.seed(42)
tf.random.set_seed(42)
In [69]:
model = keras.models.Sequential([
    keras.layers.Dense(30, activation="relu", input_shape=[8]),
    keras.layers.Dense(30, activation="relu"),
    keras.layers.Dense(1)
])
In [70]:
model.compile(loss="mse", optimizer=keras.optimizers.SGD(learning_rate=1e-3))
history = model.fit(X_train, y_train, epochs=10, validation_data=(X_valid, y_valid))
mse_test = model.evaluate(X_test, y_test)
Epoch 1/10
363/363 [==============================] - 1s 3ms/step - loss: 1.8866 - val_loss: 0.7126
Epoch 2/10
363/363 [==============================] - 1s 3ms/step - loss: 0.6577 - val_loss: 0.6880
Epoch 3/10
363/363 [==============================] - 1s 3ms/step - loss: 0.5934 - val_loss: 0.5803
Epoch 4/10
363/363 [==============================] - 1s 4ms/step - loss: 0.5557 - val_loss: 0.5166
Epoch 5/10
363/363 [==============================] - 1s 4ms/step - loss: 0.5272 - val_loss: 0.4895
Epoch 6/10
363/363 [==============================] - 1s 4ms/step - loss: 0.5033 - val_loss: 0.4951
Epoch 7/10
363/363 [==============================] - 1s 4ms/step - loss: 0.4854 - val_loss: 0.4861
Epoch 8/10
363/363 [==============================] - 1s 3ms/step - loss: 0.4709 - val_loss: 0.4554
Epoch 9/10
363/363 [==============================] - 1s 4ms/step - loss: 0.4578 - val_loss: 0.4413
Epoch 10/10
363/363 [==============================] - 1s 4ms/step - loss: 0.4474 - val_loss: 0.4379
162/162 [==============================] - 0s 2ms/step - loss: 0.4382
In [71]:
model.save("my_keras_model.h5")
In [72]:
model = keras.models.load_model("my_keras_model.h5")
In [73]:
model.predict(X_new)
WARNING:tensorflow:7 out of the last 8 calls to <function Model.make_predict_function.<locals>.predict_function at 0x7f136512ad90> triggered tf.function retracing. Tracing is expensive and the excessive number of tracings could be due to (1) creating @tf.function repeatedly in a loop, (2) passing tensors with different shapes, (3) passing Python objects instead of tensors. For (1), please define your @tf.function outside of the loop. For (2), @tf.function has experimental_relax_shapes=True option that relaxes argument shapes that can avoid unnecessary retracing. For (3), please refer to https://www.tensorflow.org/tutorials/customization/performance#python_or_tensor_args and https://www.tensorflow.org/api_docs/python/tf/function for  more details.
Out[73]:
array([[0.54002357],
       [1.6505971 ],
       [3.0098243 ]], dtype=float32)
In [74]:
model.save_weights("my_keras_weights.ckpt")
In [75]:
model.load_weights("my_keras_weights.ckpt")
Out[75]:
<tensorflow.python.training.tracking.util.CheckpointLoadStatus at 0x7f13651182e8>

Using Callbacks During Training

In [76]:
keras.backend.clear_session()
np.random.seed(42)
tf.random.set_seed(42)
In [77]:
model = keras.models.Sequential([
    keras.layers.Dense(30, activation="relu", input_shape=[8]),
    keras.layers.Dense(30, activation="relu"),
    keras.layers.Dense(1)
])
In [78]:
model.compile(loss="mse", optimizer=keras.optimizers.SGD(learning_rate=1e-3))
checkpoint_cb = keras.callbacks.ModelCheckpoint("my_keras_model.h5", save_best_only=True)
history = model.fit(X_train, y_train, epochs=10,
                    validation_data=(X_valid, y_valid),
                    callbacks=[checkpoint_cb])
model = keras.models.load_model("my_keras_model.h5") # roll back to the best model
mse_test = model.evaluate(X_test, y_test)
Epoch 1/10
363/363 [==============================] - 1s 4ms/step - loss: 1.8866 - val_loss: 0.7126
Epoch 2/10
363/363 [==============================] - 1s 4ms/step - loss: 0.6577 - val_loss: 0.6880
Epoch 3/10
363/363 [==============================] - 1s 4ms/step - loss: 0.5934 - val_loss: 0.5803
Epoch 4/10
363/363 [==============================] - 1s 4ms/step - loss: 0.5557 - val_loss: 0.5166
Epoch 5/10
363/363 [==============================] - 1s 4ms/step - loss: 0.5272 - val_loss: 0.4895
Epoch 6/10
363/363 [==============================] - 1s 4ms/step - loss: 0.5033 - val_loss: 0.4951
Epoch 7/10
363/363 [==============================] - 1s 4ms/step - loss: 0.4854 - val_loss: 0.4861
Epoch 8/10
363/363 [==============================] - 1s 4ms/step - loss: 0.4709 - val_loss: 0.4554
Epoch 9/10
363/363 [==============================] - 1s 4ms/step - loss: 0.4578 - val_loss: 0.4413
Epoch 10/10
363/363 [==============================] - 1s 4ms/step - loss: 0.4474 - val_loss: 0.4379
162/162 [==============================] - 0s 2ms/step - loss: 0.4382
In [79]:
model.compile(loss="mse", optimizer=keras.optimizers.SGD(learning_rate=1e-3))
early_stopping_cb = keras.callbacks.EarlyStopping(patience=10,
                                                  restore_best_weights=True)
history = model.fit(X_train, y_train, epochs=100,
                    validation_data=(X_valid, y_valid),
                    callbacks=[checkpoint_cb, early_stopping_cb])
mse_test = model.evaluate(X_test, y_test)
Epoch 1/100
363/363 [==============================] - 1s 4ms/step - loss: 0.4393 - val_loss: 0.4110
Epoch 2/100
363/363 [==============================] - 1s 3ms/step - loss: 0.4315 - val_loss: 0.4266
Epoch 3/100
363/363 [==============================] - 1s 4ms/step - loss: 0.4259 - val_loss: 0.3996
Epoch 4/100
363/363 [==============================] - 1s 4ms/step - loss: 0.4201 - val_loss: 0.3939
Epoch 5/100
363/363 [==============================] - 1s 4ms/step - loss: 0.4154 - val_loss: 0.3889
Epoch 6/100
363/363 [==============================] - 1s 4ms/step - loss: 0.4111 - val_loss: 0.3866
Epoch 7/100
363/363 [==============================] - 1s 4ms/step - loss: 0.4074 - val_loss: 0.3860
Epoch 8/100
363/363 [==============================] - 1s 4ms/step - loss: 0.4040 - val_loss: 0.3793
Epoch 9/100
363/363 [==============================] - 1s 4ms/step - loss: 0.4008 - val_loss: 0.3746
Epoch 10/100
363/363 [==============================] - 1s 4ms/step - loss: 0.3976 - val_loss: 0.3723
Epoch 11/100
363/363 [==============================] - 1s 4ms/step - loss: 0.3950 - val_loss: 0.3697
Epoch 12/100
363/363 [==============================] - 1s 4ms/step - loss: 0.3923 - val_loss: 0.3669
Epoch 13/100
363/363 [==============================] - 1s 4ms/step - loss: 0.3897 - val_loss: 0.3661
Epoch 14/100
363/363 [==============================] - 1s 4ms/step - loss: 0.3874 - val_loss: 0.3631
Epoch 15/100
363/363 [==============================] - 1s 4ms/step - loss: 0.3851 - val_loss: 0.3660
Epoch 16/100
363/363 [==============================] - 1s 4ms/step - loss: 0.3829 - val_loss: 0.3625
Epoch 17/100
363/363 [==============================] - 1s 4ms/step - loss: 0.3810 - val_loss: 0.3592
Epoch 18/100
363/363 [==============================] - 1s 4ms/step - loss: 0.3788 - val_loss: 0.3563
Epoch 19/100
363/363 [==============================] - 1s 4ms/step - loss: 0.3766 - val_loss: 0.3535
Epoch 20/100
363/363 [==============================] - 1s 3ms/step - loss: 0.3750 - val_loss: 0.3709
Epoch 21/100
363/363 [==============================] - 1s 4ms/step - loss: 0.3732 - val_loss: 0.3512
Epoch 22/100
363/363 [==============================] - 1s 4ms/step - loss: 0.3715 - val_loss: 0.3699
Epoch 23/100
363/363 [==============================] - 1s 4ms/step - loss: 0.3700 - val_loss: 0.3476
Epoch 24/100
363/363 [==============================] - 1s 3ms/step - loss: 0.3685 - val_loss: 0.3561
Epoch 25/100
363/363 [==============================] - 1s 4ms/step - loss: 0.3671 - val_loss: 0.3527
Epoch 26/100
363/363 [==============================] - 1s 3ms/step - loss: 0.3658 - val_loss: 0.3700
Epoch 27/100
363/363 [==============================] - 1s 4ms/step - loss: 0.3647 - val_loss: 0.3432
Epoch 28/100
363/363 [==============================] - 1s 4ms/step - loss: 0.3635 - val_loss: 0.3592
Epoch 29/100
363/363 [==============================] - 1s 4ms/step - loss: 0.3625 - val_loss: 0.3521
Epoch 30/100
363/363 [==============================] - 1s 4ms/step - loss: 0.3613 - val_loss: 0.3626
Epoch 31/100
363/363 [==============================] - 1s 4ms/step - loss: 0.3601 - val_loss: 0.3431
Epoch 32/100
363/363 [==============================] - 1s 3ms/step - loss: 0.3589 - val_loss: 0.3766
Epoch 33/100
363/363 [==============================] - 1s 4ms/step - loss: 0.3584 - val_loss: 0.3374
Epoch 34/100
363/363 [==============================] - 1s 3ms/step - loss: 0.3572 - val_loss: 0.3407
Epoch 35/100
363/363 [==============================] - 1s 3ms/step - loss: 0.3563 - val_loss: 0.3614
Epoch 36/100
363/363 [==============================] - 1s 4ms/step - loss: 0.3555 - val_loss: 0.3348
Epoch 37/100
363/363 [==============================] - 1s 4ms/step - loss: 0.3546 - val_loss: 0.3573
Epoch 38/100
363/363 [==============================] - 1s 4ms/step - loss: 0.3538 - val_loss: 0.3367
Epoch 39/100
363/363 [==============================] - 1s 3ms/step - loss: 0.3530 - val_loss: 0.3425
Epoch 40/100
363/363 [==============================] - 1s 4ms/step - loss: 0.3523 - val_loss: 0.3368
Epoch 41/100
363/363 [==============================] - 1s 4ms/step - loss: 0.3515 - val_loss: 0.3514
Epoch 42/100
363/363 [==============================] - 1s 4ms/step - loss: 0.3511 - val_loss: 0.3426
Epoch 43/100
363/363 [==============================] - 1s 3ms/step - loss: 0.3500 - val_loss: 0.3677
Epoch 44/100
363/363 [==============================] - 1s 3ms/step - loss: 0.3496 - val_loss: 0.3563
Epoch 45/100
363/363 [==============================] - 1s 4ms/step - loss: 0.3490 - val_loss: 0.3336
Epoch 46/100
363/363 [==============================] - 1s 4ms/step - loss: 0.3481 - val_loss: 0.3456
Epoch 47/100
363/363 [==============================] - 1s 4ms/step - loss: 0.3478 - val_loss: 0.3433
Epoch 48/100
363/363 [==============================] - 1s 4ms/step - loss: 0.3471 - val_loss: 0.3658
Epoch 49/100
363/363 [==============================] - 1s 4ms/step - loss: 0.3466 - val_loss: 0.3286
Epoch 50/100
363/363 [==============================] - 1s 4ms/step - loss: 0.3460 - val_loss: 0.3268
Epoch 51/100
363/363 [==============================] - 1s 3ms/step - loss: 0.3454 - val_loss: 0.3438
Epoch 52/100
363/363 [==============================] - 1s 4ms/step - loss: 0.3449 - val_loss: 0.3262
Epoch 53/100
363/363 [==============================] - 1s 3ms/step - loss: 0.3444 - val_loss: 0.3909
Epoch 54/100
363/363 [==============================] - 1s 4ms/step - loss: 0.3439 - val_loss: 0.3275
Epoch 55/100
363/363 [==============================] - 1s 4ms/step - loss: 0.3435 - val_loss: 0.3559
Epoch 56/100
363/363 [==============================] - 1s 4ms/step - loss: 0.3430 - val_loss: 0.3237
Epoch 57/100
363/363 [==============================] - 1s 4ms/step - loss: 0.3423 - val_loss: 0.3242
Epoch 58/100
363/363 [==============================] - 1s 4ms/step - loss: 0.3419 - val_loss: 0.3765
Epoch 59/100
363/363 [==============================] - 1s 3ms/step - loss: 0.3417 - val_loss: 0.3289
Epoch 60/100
363/363 [==============================] - 1s 4ms/step - loss: 0.3410 - val_loss: 0.3502
Epoch 61/100
363/363 [==============================] - 1s 4ms/step - loss: 0.3404 - val_loss: 0.3456
Epoch 62/100
363/363 [==============================] - 1s 3ms/step - loss: 0.3402 - val_loss: 0.3444
Epoch 63/100
363/363 [==============================] - 1s 4ms/step - loss: 0.3392 - val_loss: 0.3290
Epoch 64/100
363/363 [==============================] - 1s 4ms/step - loss: 0.3393 - val_loss: 0.3217
Epoch 65/100
363/363 [==============================] - 1s 4ms/step - loss: 0.3387 - val_loss: 0.3351
Epoch 66/100
363/363 [==============================] - 1s 3ms/step - loss: 0.3383 - val_loss: 0.3232
Epoch 67/100
363/363 [==============================] - 1s 3ms/step - loss: 0.3376 - val_loss: 0.3568
Epoch 68/100
363/363 [==============================] - 1s 3ms/step - loss: 0.3374 - val_loss: 0.3256
Epoch 69/100
363/363 [==============================] - 1s 3ms/step - loss: 0.3370 - val_loss: 0.3349
Epoch 70/100
363/363 [==============================] - 1s 3ms/step - loss: 0.3365 - val_loss: 0.3559
Epoch 71/100
363/363 [==============================] - 1s 3ms/step - loss: 0.3361 - val_loss: 0.3582
Epoch 72/100
363/363 [==============================] - 1s 4ms/step - loss: 0.3357 - val_loss: 0.3287
Epoch 73/100
363/363 [==============================] - 1s 3ms/step - loss: 0.3351 - val_loss: 0.3203
Epoch 74/100
363/363 [==============================] - 1s 3ms/step - loss: 0.3350 - val_loss: 0.3839
Epoch 75/100
363/363 [==============================] - 1s 3ms/step - loss: 0.3347 - val_loss: 0.3233
Epoch 76/100
363/363 [==============================] - 1s 3ms/step - loss: 0.3342 - val_loss: 0.3475
Epoch 77/100
363/363 [==============================] - 1s 3ms/step - loss: 0.3338 - val_loss: 0.3409
Epoch 78/100
363/363 [==============================] - 1s 4ms/step - loss: 0.3335 - val_loss: 0.3462
Epoch 79/100
363/363 [==============================] - 1s 3ms/step - loss: 0.3332 - val_loss: 0.3347
Epoch 80/100
363/363 [==============================] - 1s 3ms/step - loss: 0.3329 - val_loss: 0.3355
Epoch 81/100
363/363 [==============================] - 1s 3ms/step - loss: 0.3324 - val_loss: 0.3276
Epoch 82/100
363/363 [==============================] - 1s 3ms/step - loss: 0.3320 - val_loss: 0.3167
Epoch 83/100
363/363 [==============================] - 1s 3ms/step - loss: 0.3317 - val_loss: 0.3280
Epoch 84/100
363/363 [==============================] - 1s 3ms/step - loss: 0.3312 - val_loss: 0.3637
Epoch 85/100
363/363 [==============================] - 1s 3ms/step - loss: 0.3310 - val_loss: 0.3176
Epoch 86/100
363/363 [==============================] - 1s 3ms/step - loss: 0.3308 - val_loss: 0.3156
Epoch 87/100
363/363 [==============================] - 1s 3ms/step - loss: 0.3305 - val_loss: 0.3528
Epoch 88/100
363/363 [==============================] - 1s 3ms/step - loss: 0.3299 - val_loss: 0.3256
Epoch 89/100
363/363 [==============================] - 1s 3ms/step - loss: 0.3294 - val_loss: 0.3625
Epoch 90/100
363/363 [==============================] - 1s 3ms/step - loss: 0.3296 - val_loss: 0.3379
Epoch 91/100
363/363 [==============================] - 1s 3ms/step - loss: 0.3292 - val_loss: 0.3211
Epoch 92/100
363/363 [==============================] - 1s 4ms/step - loss: 0.3287 - val_loss: 0.3456
Epoch 93/100
363/363 [==============================] - 1s 3ms/step - loss: 0.3285 - val_loss: 0.3158
Epoch 94/100
363/363 [==============================] - 1s 3ms/step - loss: 0.3281 - val_loss: 0.3410
Epoch 95/100
363/363 [==============================] - 1s 3ms/step - loss: 0.3277 - val_loss: 0.3377
Epoch 96/100
363/363 [==============================] - 1s 3ms/step - loss: 0.3273 - val_loss: 0.3214
162/162 [==============================] - 0s 2ms/step - loss: 0.3310
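
Training stopped at epoch 96 rather than running all 100 epochs: with `patience=10`, `EarlyStopping` interrupts training once 10 consecutive epochs pass without the monitored `val_loss` improving on its best value (0.3156 at epoch 86 in the log above), and `restore_best_weights=True` then rolls the model back to that epoch's weights. A small pure-Python check against the logged values:

```python
# val_loss per epoch, taken from epochs 82-96 of the log above
val_losses = {82: 0.3167, 83: 0.3280, 84: 0.3637, 85: 0.3176, 86: 0.3156,
              87: 0.3528, 88: 0.3256, 89: 0.3625, 90: 0.3379, 91: 0.3211,
              92: 0.3456, 93: 0.3158, 94: 0.3410, 95: 0.3377, 96: 0.3214}
best_epoch = min(val_losses, key=val_losses.get)
assert best_epoch == 86                     # best val_loss: 0.3156
last_epoch = max(val_losses)
assert last_epoch - best_epoch == 10        # patience epochs without improvement
```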
In [80]:
class PrintValTrainRatioCallback(keras.callbacks.Callback):
    def on_epoch_end(self, epoch, logs=None):
        print("\nval/train: {:.2f}".format(logs["val_loss"] / logs["loss"]))
In [81]:
val_train_ratio_cb = PrintValTrainRatioCallback()
history = model.fit(X_train, y_train, epochs=1,
                    validation_data=(X_valid, y_valid),
                    callbacks=[val_train_ratio_cb])
346/363 [===========================>..] - ETA: 0s - loss: 0.3272
val/train: 1.08
363/363 [==============================] - 1s 4ms/step - loss: 0.3302 - val_loss: 0.3559
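
The printed ratio can be reproduced from that epoch's `logs` dict, which Keras passes to `on_epoch_end()` with the metric values shown in the progress bar:

```python
# Reproducing the callback's printout from the epoch's logs dict
logs = {"loss": 0.3302, "val_loss": 0.3559}   # values from the fit output above
ratio = logs["val_loss"] / logs["loss"]
assert "{:.2f}".format(ratio) == "1.08"       # matches "val/train: 1.08"
```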

TensorBoard

In [82]:
root_logdir = os.path.join(os.curdir, "my_logs")
In [83]:
def get_run_logdir():
    import time
    run_id = time.strftime("run_%Y_%m_%d-%H_%M_%S")
    return os.path.join(root_logdir, run_id)

run_logdir = get_run_logdir()
run_logdir
Out[83]:
'./my_logs/run_2020_08_09-01_43_31'
In [84]:
keras.backend.clear_session()
np.random.seed(42)
tf.random.set_seed(42)
In [85]:
model = keras.models.Sequential([
    keras.layers.Dense(30, activation="relu", input_shape=[8]),
    keras.layers.Dense(30, activation="relu"),
    keras.layers.Dense(1)
])
model.compile(loss="mse", optimizer=keras.optimizers.SGD(learning_rate=1e-3))
In [86]:
tensorboard_cb = keras.callbacks.TensorBoard(run_logdir)
history = model.fit(X_train, y_train, epochs=30,
                    validation_data=(X_valid, y_valid),
                    callbacks=[checkpoint_cb, tensorboard_cb])
Epoch 1/30
  1/363 [..............................] - ETA: 0s - loss: 7.8215WARNING:tensorflow:From /home/work/.local/lib/python3.6/site-packages/tensorflow/python/ops/summary_ops_v2.py:1277: stop (from tensorflow.python.eager.profiler) is deprecated and will be removed after 2020-07-01.
Instructions for updating:
use `tf.profiler.experimental.stop` instead.
  2/363 [..............................] - ETA: 18s - loss: 7.0195WARNING:tensorflow:Callbacks method `on_train_batch_end` is slow compared to the batch time (batch time: 0.0047s vs `on_train_batch_end` time: 0.0984s). Check your callbacks.
363/363 [==============================] - 1s 4ms/step - loss: 1.8866 - val_loss: 0.7126
Epoch 2/30
363/363 [==============================] - 1s 3ms/step - loss: 0.6577 - val_loss: 0.6880
Epoch 3/30
363/363 [==============================] - 1s 4ms/step - loss: 0.5934 - val_loss: 0.5803
Epoch 4/30
363/363 [==============================] - 1s 3ms/step - loss: 0.5557 - val_loss: 0.5166
Epoch 5/30
363/363 [==============================] - 1s 3ms/step - loss: 0.5272 - val_loss: 0.4895
Epoch 6/30
363/363 [==============================] - 1s 4ms/step - loss: 0.5033 - val_loss: 0.4951
Epoch 7/30
363/363 [==============================] - 1s 4ms/step - loss: 0.4854 - val_loss: 0.4861
Epoch 8/30
363/363 [==============================] - 1s 3ms/step - loss: 0.4709 - val_loss: 0.4554
Epoch 9/30
363/363 [==============================] - 1s 3ms/step - loss: 0.4578 - val_loss: 0.4413
Epoch 10/30
363/363 [==============================] - 1s 3ms/step - loss: 0.4474 - val_loss: 0.4379
Epoch 11/30
363/363 [==============================] - 1s 4ms/step - loss: 0.4393 - val_loss: 0.4396
Epoch 12/30
363/363 [==============================] - 1s 3ms/step - loss: 0.4318 - val_loss: 0.4507
Epoch 13/30
363/363 [==============================] - 1s 3ms/step - loss: 0.4261 - val_loss: 0.3997
Epoch 14/30
363/363 [==============================] - 1s 3ms/step - loss: 0.4202 - val_loss: 0.3956
Epoch 15/30
363/363 [==============================] - 1s 4ms/step - loss: 0.4155 - val_loss: 0.3916
Epoch 16/30
363/363 [==============================] - 1s 3ms/step - loss: 0.4112 - val_loss: 0.3937
Epoch 17/30
363/363 [==============================] - 1s 4ms/step - loss: 0.4077 - val_loss: 0.3809
Epoch 18/30
363/363 [==============================] - 1s 4ms/step - loss: 0.4040 - val_loss: 0.3793
Epoch 19/30
363/363 [==============================] - 1s 3ms/step - loss: 0.4004 - val_loss: 0.3850
Epoch 20/30
363/363 [==============================] - 1s 3ms/step - loss: 0.3980 - val_loss: 0.3809
Epoch 21/30
363/363 [==============================] - 1s 3ms/step - loss: 0.3949 - val_loss: 0.3701
Epoch 22/30
363/363 [==============================] - 1s 3ms/step - loss: 0.3924 - val_loss: 0.3781
Epoch 23/30
363/363 [==============================] - 1s 4ms/step - loss: 0.3898 - val_loss: 0.3650
Epoch 24/30
363/363 [==============================] - 1s 3ms/step - loss: 0.3874 - val_loss: 0.3655
Epoch 25/30
363/363 [==============================] - 1s 4ms/step - loss: 0.3851 - val_loss: 0.3611
Epoch 26/30
363/363 [==============================] - 1s 3ms/step - loss: 0.3829 - val_loss: 0.3626
Epoch 27/30
363/363 [==============================] - 1s 3ms/step - loss: 0.3809 - val_loss: 0.3564
Epoch 28/30
363/363 [==============================] - 1s 3ms/step - loss: 0.3788 - val_loss: 0.3579
Epoch 29/30
363/363 [==============================] - 1s 3ms/step - loss: 0.3769 - val_loss: 0.3561
Epoch 30/30
363/363 [==============================] - 1s 4ms/step - loss: 0.3750 - val_loss: 0.3548

One way to start the TensorBoard server is to run it directly from a terminal. Open a terminal, activate the virtual environment where TensorBoard is installed, go to this notebook's directory, and type the following command:

$ tensorboard --logdir=./my_logs --port=6006

Then open a web browser and go to localhost:6006 to use TensorBoard. When you are done, press Ctrl-C in the terminal to shut down the TensorBoard server.

Alternatively, you can use TensorBoard's Jupyter extension as follows (this requires TensorBoard to be installed on the local machine):

In [87]:
%load_ext tensorboard
%tensorboard --logdir=./my_logs --port=6006
In [88]:
run_logdir2 = get_run_logdir()
run_logdir2
Out[88]:
'./my_logs/run_2020_08_09-01_44_15'
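
A small design point worth noting: because `get_run_logdir()` uses zero-padded `strftime` fields, the run directory names sort lexicographically in the same order as chronologically, so runs line up in order in TensorBoard's run list. A quick stdlib check with the two directory names created here:

```python
import time

fmt = "run_%Y_%m_%d-%H_%M_%S"
run1 = "run_2020_08_09-01_43_31"   # the two run directories created above
run2 = "run_2020_08_09-01_44_15"
# Zero-padded fields: lexicographic order == chronological order
assert time.strptime(run1, fmt) < time.strptime(run2, fmt)
assert run1 < run2
```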
In [89]:
keras.backend.clear_session()
np.random.seed(42)
tf.random.set_seed(42)
In [90]:
model = keras.models.Sequential([
    keras.layers.Dense(30, activation="relu", input_shape=[8]),
    keras.layers.Dense(30, activation="relu"),
    keras.layers.Dense(1)
])    
model.compile(loss="mse", optimizer=keras.optimizers.SGD(learning_rate=0.05))
In [91]:
tensorboard_cb = keras.callbacks.TensorBoard(run_logdir2)
history = model.fit(X_train, y_train, epochs=30,
                    validation_data=(X_valid, y_valid),
                    callbacks=[checkpoint_cb, tensorboard_cb])
Epoch 1/30
  2/363 [..............................] - ETA: 23s - loss: 5.0901WARNING:tensorflow:Callbacks method `on_train_batch_end` is slow compared to the batch time (batch time: 0.0058s vs `on_train_batch_end` time: 0.1254s). Check your callbacks.
363/363 [==============================] - 2s 4ms/step - loss: 0.5530 - val_loss: 302.8519
Epoch 2/30
363/363 [==============================] - 1s 3ms/step - loss: nan - val_loss: nan
Epoch 3/30
 56/363 [===>..........................] - ETA: 0s - loss: nan
/home/work/.local/lib/python3.6/site-packages/tensorflow/python/keras/callbacks.py:1291: RuntimeWarning: invalid value encountered in less
  if self.monitor_op(current, self.best):
363/363 [==============================] - 1s 4ms/step - loss: nan - val_loss: nan
Epoch 4/30
363/363 [==============================] - 1s 4ms/step - loss: nan - val_loss: nan
Epoch 5/30
363/363 [==============================] - 1s 4ms/step - loss: nan - val_loss: nan
Epoch 6/30
363/363 [==============================] - 1s 3ms/step - loss: nan - val_loss: nan
Epoch 7/30
363/363 [==============================] - 1s 3ms/step - loss: nan - val_loss: nan
Epoch 8/30
363/363 [==============================] - 1s 4ms/step - loss: nan - val_loss: nan
Epoch 9/30
363/363 [==============================] - 1s 4ms/step - loss: nan - val_loss: nan
Epoch 10/30
363/363 [==============================] - 1s 3ms/step - loss: nan - val_loss: nan
Epoch 11/30
363/363 [==============================] - 1s 4ms/step - loss: nan - val_loss: nan
Epoch 12/30
363/363 [==============================] - 1s 3ms/step - loss: nan - val_loss: nan
Epoch 13/30
363/363 [==============================] - 1s 3ms/step - loss: nan - val_loss: nan
Epoch 14/30
363/363 [==============================] - 1s 3ms/step - loss: nan - val_loss: nan
Epoch 15/30
363/363 [==============================] - 1s 4ms/step - loss: nan - val_loss: nan
Epoch 16/30
363/363 [==============================] - 1s 4ms/step - loss: nan - val_loss: nan
Epoch 17/30
363/363 [==============================] - 1s 3ms/step - loss: nan - val_loss: nan
Epoch 18/30
363/363 [==============================] - 1s 3ms/step - loss: nan - val_loss: nan
Epoch 19/30
363/363 [==============================] - 1s 3ms/step - loss: nan - val_loss: nan
Epoch 20/30
363/363 [==============================] - 1s 4ms/step - loss: nan - val_loss: nan
Epoch 21/30
363/363 [==============================] - 1s 3ms/step - loss: nan - val_loss: nan
Epoch 22/30
363/363 [==============================] - 1s 3ms/step - loss: nan - val_loss: nan
Epoch 23/30
363/363 [==============================] - 1s 4ms/step - loss: nan - val_loss: nan
Epoch 24/30
363/363 [==============================] - 1s 4ms/step - loss: nan - val_loss: nan
Epoch 25/30
363/363 [==============================] - 1s 3ms/step - loss: nan - val_loss: nan
Epoch 26/30
363/363 [==============================] - 1s 4ms/step - loss: nan - val_loss: nan
Epoch 27/30
363/363 [==============================] - 1s 4ms/step - loss: nan - val_loss: nan
Epoch 28/30
363/363 [==============================] - 1s 3ms/step - loss: nan - val_loss: nan
Epoch 29/30
363/363 [==============================] - 1s 3ms/step - loss: nan - val_loss: nan
Epoch 30/30
363/363 [==============================] - 1s 3ms/step - loss: nan - val_loss: nan

There are now two runs in TensorBoard. Try comparing their learning curves.

Let's check the available logging options:

In [92]:
help(keras.callbacks.TensorBoard.__init__)
Help on function __init__ in module tensorflow.python.keras.callbacks:

__init__(self, log_dir='logs', histogram_freq=0, write_graph=True, write_images=False, update_freq='epoch', profile_batch=2, embeddings_freq=0, embeddings_metadata=None, **kwargs)
    Initialize self.  See help(type(self)) for accurate signature.
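For instance, several of the options listed above can be combined in one callback. This is a minimal sketch (assuming TF 2.x; the `./my_logs/run_profile` directory name is hypothetical):

```python
from tensorflow import keras

# Record weight histograms every epoch, skip graph logging to keep
# the log files small, and profile batch 10 so the Profiler tab can
# show where training time is spent.
tensorboard_cb = keras.callbacks.TensorBoard(
    log_dir="./my_logs/run_profile",  # hypothetical run directory
    histogram_freq=1,
    write_graph=False,
    update_freq="epoch",
    profile_batch=10,
)
```

The callback is then passed to `model.fit(..., callbacks=[tensorboard_cb])` like any other callback.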

Hyperparameter Tuning

In [93]:
keras.backend.clear_session()
np.random.seed(42)
tf.random.set_seed(42)
In [94]:
def build_model(n_hidden=1, n_neurons=30, learning_rate=3e-3, input_shape=[8]):
    model = keras.models.Sequential()
    model.add(keras.layers.InputLayer(input_shape=input_shape))
    for layer in range(n_hidden):
        model.add(keras.layers.Dense(n_neurons, activation="relu"))
    model.add(keras.layers.Dense(1))
    optimizer = keras.optimizers.SGD(learning_rate=learning_rate)  # "lr" is deprecated in recent TF versions
    model.compile(loss="mse", optimizer=optimizer)
    return model
In [95]:
keras_reg = keras.wrappers.scikit_learn.KerasRegressor(build_model)
In [96]:
keras_reg.fit(X_train, y_train, epochs=100,
              validation_data=(X_valid, y_valid),
              callbacks=[keras.callbacks.EarlyStopping(patience=10)])
Epoch 1/100
363/363 [==============================] - 1s 4ms/step - loss: 1.0896 - val_loss: 20.7721
Epoch 2/100
363/363 [==============================] - 1s 3ms/step - loss: 0.7606 - val_loss: 5.0266
Epoch 3/100
363/363 [==============================] - 1s 3ms/step - loss: 0.5456 - val_loss: 0.5490
Epoch 4/100
363/363 [==============================] - 1s 3ms/step - loss: 0.4732 - val_loss: 0.4529
Epoch 5/100
363/363 [==============================] - 1s 3ms/step - loss: 0.4503 - val_loss: 0.4188
Epoch 6/100
363/363 [==============================] - 1s 3ms/step - loss: 0.4338 - val_loss: 0.4129
Epoch 7/100
363/363 [==============================] - 1s 3ms/step - loss: 0.4241 - val_loss: 0.4004
Epoch 8/100
363/363 [==============================] - 1s 3ms/step - loss: 0.4168 - val_loss: 0.3944
Epoch 9/100
363/363 [==============================] - 1s 3ms/step - loss: 0.4108 - val_loss: 0.3961
Epoch 10/100
363/363 [==============================] - 1s 3ms/step - loss: 0.4060 - val_loss: 0.4071
Epoch 11/100
363/363 [==============================] - 1s 3ms/step - loss: 0.4021 - val_loss: 0.3855
Epoch 12/100
363/363 [==============================] - 1s 3ms/step - loss: 0.3984 - val_loss: 0.4136
Epoch 13/100
363/363 [==============================] - 1s 3ms/step - loss: 0.3951 - val_loss: 0.3997
Epoch 14/100
363/363 [==============================] - 1s 3ms/step - loss: 0.3921 - val_loss: 0.3818
Epoch 15/100
363/363 [==============================] - 1s 3ms/step - loss: 0.3894 - val_loss: 0.3829
Epoch 16/100
363/363 [==============================] - 1s 3ms/step - loss: 0.3869 - val_loss: 0.3739
Epoch 17/100
363/363 [==============================] - 1s 3ms/step - loss: 0.3848 - val_loss: 0.4022
Epoch 18/100
363/363 [==============================] - 1s 3ms/step - loss: 0.3829 - val_loss: 0.3873
Epoch 19/100
363/363 [==============================] - 1s 3ms/step - loss: 0.3807 - val_loss: 0.3768
Epoch 20/100
363/363 [==============================] - 1s 3ms/step - loss: 0.3791 - val_loss: 0.4191
Epoch 21/100
363/363 [==============================] - 1s 3ms/step - loss: 0.3774 - val_loss: 0.3927
Epoch 22/100
363/363 [==============================] - 1s 3ms/step - loss: 0.3756 - val_loss: 0.4237
Epoch 23/100
363/363 [==============================] - 1s 3ms/step - loss: 0.3742 - val_loss: 0.3523
Epoch 24/100
363/363 [==============================] - 1s 3ms/step - loss: 0.3725 - val_loss: 0.3842
Epoch 25/100
363/363 [==============================] - 1s 3ms/step - loss: 0.3710 - val_loss: 0.4162
Epoch 26/100
363/363 [==============================] - 1s 3ms/step - loss: 0.3700 - val_loss: 0.3980
Epoch 27/100
363/363 [==============================] - 1s 3ms/step - loss: 0.3691 - val_loss: 0.3474
Epoch 28/100
363/363 [==============================] - 1s 3ms/step - loss: 0.3677 - val_loss: 0.3920
Epoch 29/100
363/363 [==============================] - 1s 3ms/step - loss: 0.3670 - val_loss: 0.3566
Epoch 30/100
363/363 [==============================] - 1s 3ms/step - loss: 0.3653 - val_loss: 0.4191
Epoch 31/100
363/363 [==============================] - 1s 3ms/step - loss: 0.3647 - val_loss: 0.3721
Epoch 32/100
363/363 [==============================] - 1s 3ms/step - loss: 0.3633 - val_loss: 0.3948
Epoch 33/100
363/363 [==============================] - 1s 3ms/step - loss: 0.3632 - val_loss: 0.3423
Epoch 34/100
363/363 [==============================] - 1s 3ms/step - loss: 0.3617 - val_loss: 0.3453
Epoch 35/100
363/363 [==============================] - 1s 3ms/step - loss: 0.3610 - val_loss: 0.4068
Epoch 36/100
363/363 [==============================] - 1s 3ms/step - loss: 0.3608 - val_loss: 0.3417
Epoch 37/100
363/363 [==============================] - 1s 3ms/step - loss: 0.3596 - val_loss: 0.3787
Epoch 38/100
363/363 [==============================] - 1s 3ms/step - loss: 0.3589 - val_loss: 0.3379
Epoch 39/100
363/363 [==============================] - 1s 3ms/step - loss: 0.3582 - val_loss: 0.3419
Epoch 40/100
363/363 [==============================] - 1s 3ms/step - loss: 0.3572 - val_loss: 0.3705
Epoch 41/100
363/363 [==============================] - 1s 3ms/step - loss: 0.3570 - val_loss: 0.3659
Epoch 42/100
363/363 [==============================] - 1s 3ms/step - loss: 0.3563 - val_loss: 0.3803
Epoch 43/100
363/363 [==============================] - 1s 3ms/step - loss: 0.3552 - val_loss: 0.3765
Epoch 44/100
363/363 [==============================] - 1s 3ms/step - loss: 0.3548 - val_loss: 0.3814
Epoch 45/100
363/363 [==============================] - 1s 3ms/step - loss: 0.3543 - val_loss: 0.3326
Epoch 46/100
363/363 [==============================] - 1s 3ms/step - loss: 0.3532 - val_loss: 0.3385
Epoch 47/100
363/363 [==============================] - 1s 3ms/step - loss: 0.3527 - val_loss: 0.3655
Epoch 48/100
363/363 [==============================] - 1s 3ms/step - loss: 0.3521 - val_loss: 0.3579
Epoch 49/100
363/363 [==============================] - 1s 3ms/step - loss: 0.3525 - val_loss: 0.3360
Epoch 50/100
363/363 [==============================] - 1s 3ms/step - loss: 0.3510 - val_loss: 0.3318
Epoch 51/100
363/363 [==============================] - 1s 3ms/step - loss: 0.3504 - val_loss: 0.3562
Epoch 52/100
363/363 [==============================] - 1s 3ms/step - loss: 0.3502 - val_loss: 0.3520
Epoch 53/100
363/363 [==============================] - 1s 3ms/step - loss: 0.3496 - val_loss: 0.4579
Epoch 54/100
363/363 [==============================] - 1s 3ms/step - loss: 0.3497 - val_loss: 0.3808
Epoch 55/100
363/363 [==============================] - 1s 3ms/step - loss: 0.3490 - val_loss: 0.3539
Epoch 56/100
363/363 [==============================] - 1s 3ms/step - loss: 0.3485 - val_loss: 0.3723
Epoch 57/100
363/363 [==============================] - 1s 3ms/step - loss: 0.3479 - val_loss: 0.3336
Epoch 58/100
363/363 [==============================] - 1s 3ms/step - loss: 0.3469 - val_loss: 0.4011
Epoch 59/100
363/363 [==============================] - 1s 3ms/step - loss: 0.3475 - val_loss: 0.3264
Epoch 60/100
363/363 [==============================] - 1s 3ms/step - loss: 0.3465 - val_loss: 0.3271
Epoch 61/100
363/363 [==============================] - 1s 3ms/step - loss: 0.3452 - val_loss: 0.3346
Epoch 62/100
363/363 [==============================] - 1s 3ms/step - loss: 0.3453 - val_loss: 0.3493
Epoch 63/100
363/363 [==============================] - 1s 3ms/step - loss: 0.3444 - val_loss: 0.3402
Epoch 64/100
363/363 [==============================] - 1s 3ms/step - loss: 0.3450 - val_loss: 0.3275
Epoch 65/100
363/363 [==============================] - 1s 3ms/step - loss: 0.3437 - val_loss: 0.3296
Epoch 66/100
363/363 [==============================] - 1s 3ms/step - loss: 0.3431 - val_loss: 0.3307
Epoch 67/100
363/363 [==============================] - 1s 3ms/step - loss: 0.3428 - val_loss: 0.3252
Epoch 68/100
363/363 [==============================] - 1s 3ms/step - loss: 0.3423 - val_loss: 0.3242
Epoch 69/100
363/363 [==============================] - 1s 3ms/step - loss: 0.3419 - val_loss: 0.3254
Epoch 70/100
363/363 [==============================] - 1s 3ms/step - loss: 0.3413 - val_loss: 0.3672
Epoch 71/100
363/363 [==============================] - 1s 3ms/step - loss: 0.3414 - val_loss: 0.3375
Epoch 72/100
363/363 [==============================] - 1s 3ms/step - loss: 0.3405 - val_loss: 0.3271
Epoch 73/100
363/363 [==============================] - 1s 3ms/step - loss: 0.3399 - val_loss: 0.3242
Epoch 74/100
363/363 [==============================] - 1s 3ms/step - loss: 0.3402 - val_loss: 0.3666
Epoch 75/100
363/363 [==============================] - 1s 3ms/step - loss: 0.3397 - val_loss: 0.3282
Epoch 76/100
363/363 [==============================] - 1s 3ms/step - loss: 0.3395 - val_loss: 0.3241
Epoch 77/100
363/363 [==============================] - 1s 3ms/step - loss: 0.3383 - val_loss: 0.3380
Epoch 78/100
363/363 [==============================] - 1s 3ms/step - loss: 0.3384 - val_loss: 0.3357
Epoch 79/100
363/363 [==============================] - 1s 3ms/step - loss: 0.3383 - val_loss: 0.3223
Epoch 80/100
363/363 [==============================] - 1s 3ms/step - loss: 0.3376 - val_loss: 0.3595
Epoch 81/100
363/363 [==============================] - 1s 3ms/step - loss: 0.3383 - val_loss: 0.3433
Epoch 82/100
363/363 [==============================] - 1s 3ms/step - loss: 0.3370 - val_loss: 0.3211
Epoch 83/100
363/363 [==============================] - 1s 3ms/step - loss: 0.3368 - val_loss: 0.3344
Epoch 84/100
363/363 [==============================] - 1s 3ms/step - loss: 0.3362 - val_loss: 0.4145
Epoch 85/100
363/363 [==============================] - 1s 3ms/step - loss: 0.3369 - val_loss: 0.3286
Epoch 86/100
363/363 [==============================] - 1s 3ms/step - loss: 0.3359 - val_loss: 0.3441
Epoch 87/100
363/363 [==============================] - 1s 3ms/step - loss: 0.3357 - val_loss: 0.3729
Epoch 88/100
363/363 [==============================] - 1s 3ms/step - loss: 0.3355 - val_loss: 0.3188
Epoch 89/100
363/363 [==============================] - 1s 3ms/step - loss: 0.3346 - val_loss: 0.3497
Epoch 90/100
363/363 [==============================] - 1s 3ms/step - loss: 0.3348 - val_loss: 0.3175
Epoch 91/100
363/363 [==============================] - 1s 3ms/step - loss: 0.3339 - val_loss: 0.3617
Epoch 92/100
363/363 [==============================] - 1s 3ms/step - loss: 0.3343 - val_loss: 0.3176
Epoch 93/100
363/363 [==============================] - 1s 3ms/step - loss: 0.3334 - val_loss: 0.3568
Epoch 94/100
363/363 [==============================] - 1s 3ms/step - loss: 0.3343 - val_loss: 0.4945
Epoch 95/100
363/363 [==============================] - 1s 3ms/step - loss: 0.3338 - val_loss: 0.7208
Epoch 96/100
363/363 [==============================] - 1s 3ms/step - loss: 0.3366 - val_loss: 0.4792
Epoch 97/100
363/363 [==============================] - 1s 3ms/step - loss: 0.3337 - val_loss: 0.7577
Epoch 98/100
363/363 [==============================] - 1s 3ms/step - loss: 0.3351 - val_loss: 0.9920
Epoch 99/100
363/363 [==============================] - 1s 3ms/step - loss: 0.3344 - val_loss: 1.5706
Epoch 100/100
363/363 [==============================] - 1s 3ms/step - loss: 0.3481 - val_loss: 1.7792
Out[96]:
<tensorflow.python.keras.callbacks.History at 0x7f1364dc5e10>
In [97]:
mse_test = keras_reg.score(X_test, y_test)
162/162 [==============================] - 0s 2ms/step - loss: 0.3402
In [98]:
y_pred = keras_reg.predict(X_new)
WARNING:tensorflow:8 out of the last 9 calls to <function Model.make_predict_function.<locals>.predict_function at 0x7f1364b0d730> triggered tf.function retracing. Tracing is expensive and the excessive number of tracings could be due to (1) creating @tf.function repeatedly in a loop, (2) passing tensors with different shapes, (3) passing Python objects instead of tensors. For (1), please define your @tf.function outside of the loop. For (2), @tf.function has experimental_relax_shapes=True option that relaxes argument shapes that can avoid unnecessary retracing. For (3), please refer to https://www.tensorflow.org/tutorials/customization/performance#python_or_tensor_args and https://www.tensorflow.org/api_docs/python/tf/function for  more details.
In [99]:
np.random.seed(42)
tf.random.set_seed(42)

Warning: the following cell raises an error at the end of training. This is caused by Keras issue #13586, which was triggered by a recent change in Scikit-Learn. Pull request #13598 addresses it, so this should be fixed soon.

In [100]:
from scipy.stats import reciprocal
from sklearn.model_selection import RandomizedSearchCV

param_distribs = {
    "n_hidden": [0, 1, 2, 3],
    "n_neurons": np.arange(1, 100),
    "learning_rate": reciprocal(3e-4, 3e-2),
}

rnd_search_cv = RandomizedSearchCV(keras_reg, param_distribs, n_iter=10, cv=3, verbose=2)
rnd_search_cv.fit(X_train, y_train, epochs=100,
                  validation_data=(X_valid, y_valid),
                  callbacks=[keras.callbacks.EarlyStopping(patience=10)])
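The `reciprocal(3e-4, 3e-2)` distribution used above for `learning_rate` is log-uniform: every order of magnitude between the bounds is equally likely, which is what you want when you don't know the right scale in advance. A quick sketch:

```python
import numpy as np
from scipy.stats import reciprocal

# Draw 10,000 candidate learning rates from the same distribution
# used in param_distribs above.
samples = reciprocal(3e-4, 3e-2).rvs(10000, random_state=np.random.RandomState(42))

# All samples stay within the bounds...
assert samples.min() >= 3e-4 and samples.max() <= 3e-2

# ...and roughly half of them fall in the lower decade [3e-4, 3e-3),
# the other half in [3e-3, 3e-2) -- equal probability per decade.
low_decade = np.sum(samples < 3e-3)
print(low_decade / len(samples))  # roughly 0.5
```

By contrast, a plain uniform distribution over the same range would almost never sample values near `3e-4`.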
Fitting 3 folds for each of 10 candidates, totalling 30 fits
[CV] learning_rate=0.001683454924600351, n_hidden=0, n_neurons=15 ....
Epoch 1/100
[Parallel(n_jobs=1)]: Using backend SequentialBackend with 1 concurrent workers.
242/242 [==============================] - 1s 4ms/step - loss: 3.5557 - val_loss: 1.8752
Epoch 2/100
242/242 [==============================] - 1s 3ms/step - loss: 1.3347 - val_loss: 0.9522
Epoch 3/100
242/242 [==============================] - 1s 3ms/step - loss: 0.8591 - val_loss: 0.7820
Epoch 4/100
242/242 [==============================] - 1s 3ms/step - loss: 0.7360 - val_loss: 0.7249
Epoch 5/100
242/242 [==============================] - 1s 3ms/step - loss: 0.6930 - val_loss: 0.6994
Epoch 6/100
242/242 [==============================] - 1s 3ms/step - loss: 0.6668 - val_loss: 0.9118
Epoch 7/100
242/242 [==============================] - 1s 3ms/step - loss: 0.6514 - val_loss: 0.8495
Epoch 8/100
242/242 [==============================] - 1s 3ms/step - loss: 0.6381 - val_loss: 0.8605
Epoch 9/100
242/242 [==============================] - 1s 3ms/step - loss: 0.6276 - val_loss: 0.6524
Epoch 10/100
242/242 [==============================] - 1s 3ms/step - loss: 0.6125 - val_loss: 0.8619
Epoch 11/100
242/242 [==============================] - 1s 3ms/step - loss: 0.6057 - val_loss: 0.8659
Epoch 12/100
242/242 [==============================] - 1s 3ms/step - loss: 0.5993 - val_loss: 0.5962
Epoch 13/100
242/242 [==============================] - 1s 3ms/step - loss: 0.5859 - val_loss: 0.9062
Epoch 14/100
242/242 [==============================] - 1s 3ms/step - loss: 0.5828 - val_loss: 0.9541
Epoch 15/100
242/242 [==============================] - 1s 3ms/step - loss: 0.5799 - val_loss: 0.6402
Epoch 16/100
242/242 [==============================] - 1s 3ms/step - loss: 0.5706 - val_loss: 0.7806
Epoch 17/100
242/242 [==============================] - 1s 3ms/step - loss: 0.5670 - val_loss: 0.7985
Epoch 18/100
242/242 [==============================] - 1s 3ms/step - loss: 0.5620 - val_loss: 0.8756
Epoch 19/100
242/242 [==============================] - 1s 3ms/step - loss: 0.5585 - val_loss: 0.8958
Epoch 20/100
242/242 [==============================] - 1s 3ms/step - loss: 0.5564 - val_loss: 0.8657
Epoch 21/100
242/242 [==============================] - 1s 3ms/step - loss: 0.5559 - val_loss: 0.5940
Epoch 22/100
242/242 [==============================] - 1s 3ms/step - loss: 0.5476 - val_loss: 0.8007
Epoch 23/100
242/242 [==============================] - 1s 4ms/step - loss: 0.5484 - val_loss: 0.7792
Epoch 24/100
242/242 [==============================] - 1s 3ms/step - loss: 0.5459 - val_loss: 0.7622
Epoch 25/100
242/242 [==============================] - 1s 3ms/step - loss: 0.5453 - val_loss: 0.6476
Epoch 26/100
242/242 [==============================] - 1s 3ms/step - loss: 0.5431 - val_loss: 0.5424
Epoch 27/100
242/242 [==============================] - 1s 3ms/step - loss: 0.5373 - val_loss: 0.8687
Epoch 28/100
242/242 [==============================] - 1s 3ms/step - loss: 0.5424 - val_loss: 0.5390
Epoch 29/100
242/242 [==============================] - 1s 3ms/step - loss: 0.5365 - val_loss: 0.7179
Epoch 30/100
242/242 [==============================] - 1s 3ms/step - loss: 0.5384 - val_loss: 0.6029
Epoch 31/100
242/242 [==============================] - 1s 3ms/step - loss: 0.5362 - val_loss: 0.5947
Epoch 32/100
242/242 [==============================] - 1s 3ms/step - loss: 0.5359 - val_loss: 0.5305
Epoch 33/100
242/242 [==============================] - 1s 3ms/step - loss: 0.5334 - val_loss: 0.6601
Epoch 34/100
242/242 [==============================] - 1s 3ms/step - loss: 0.5341 - val_loss: 0.6326
Epoch 35/100
242/242 [==============================] - 1s 3ms/step - loss: 0.5344 - val_loss: 0.5072
Epoch 36/100
242/242 [==============================] - 1s 3ms/step - loss: 0.5304 - val_loss: 0.7270
Epoch 37/100
242/242 [==============================] - 1s 3ms/step - loss: 0.5341 - val_loss: 0.5055
Epoch 38/100
242/242 [==============================] - 1s 3ms/step - loss: 0.5284 - val_loss: 0.7985
Epoch 39/100
242/242 [==============================] - 1s 3ms/step - loss: 0.5338 - val_loss: 0.5176
Epoch 40/100
242/242 [==============================] - 1s 3ms/step - loss: 0.5305 - val_loss: 0.5823
Epoch 41/100
242/242 [==============================] - 1s 3ms/step - loss: 0.5293 - val_loss: 0.7114
Epoch 42/100
242/242 [==============================] - 1s 3ms/step - loss: 0.5322 - val_loss: 0.5059
Epoch 43/100
242/242 [==============================] - 1s 3ms/step - loss: 0.5302 - val_loss: 0.5008
Epoch 44/100
242/242 [==============================] - 1s 3ms/step - loss: 0.5274 - val_loss: 0.7397
Epoch 45/100
242/242 [==============================] - 1s 3ms/step - loss: 0.5309 - val_loss: 0.6169
Epoch 46/100
242/242 [==============================] - 1s 3ms/step - loss: 0.5303 - val_loss: 0.5264
Epoch 47/100
242/242 [==============================] - 1s 3ms/step - loss: 0.5276 - val_loss: 0.6916
Epoch 48/100
242/242 [==============================] - 1s 3ms/step - loss: 0.5298 - val_loss: 0.6554
Epoch 49/100
242/242 [==============================] - 1s 3ms/step - loss: 0.5290 - val_loss: 0.6607
Epoch 50/100
242/242 [==============================] - 1s 3ms/step - loss: 0.5267 - val_loss: 0.8497
Epoch 51/100
242/242 [==============================] - 1s 3ms/step - loss: 0.5310 - val_loss: 0.6664
Epoch 52/100
242/242 [==============================] - 1s 3ms/step - loss: 0.5294 - val_loss: 0.5996
Epoch 53/100
242/242 [==============================] - 1s 3ms/step - loss: 0.5282 - val_loss: 0.6414
121/121 [==============================] - 0s 2ms/step - loss: 0.5368
[CV]  learning_rate=0.001683454924600351, n_hidden=0, n_neurons=15, total=  43.4s
[CV] learning_rate=0.001683454924600351, n_hidden=0, n_neurons=15 ....
Epoch 1/100
[Parallel(n_jobs=1)]: Done   1 out of   1 | elapsed:   43.4s remaining:    0.0s
242/242 [==============================] - 1s 4ms/step - loss: 3.5605 - val_loss: 23.0855
Epoch 2/100
242/242 [==============================] - 1s 4ms/step - loss: 1.4777 - val_loss: 10.8387
Epoch 3/100
242/242 [==============================] - 1s 3ms/step - loss: 1.0149 - val_loss: 4.4392
Epoch 4/100
242/242 [==============================] - 1s 3ms/step - loss: 0.8729 - val_loss: 1.5338
Epoch 5/100
242/242 [==============================] - 1s 3ms/step - loss: 0.8027 - val_loss: 0.7192
Epoch 6/100
242/242 [==============================] - 1s 3ms/step - loss: 0.7542 - val_loss: 1.2046
Epoch 7/100
242/242 [==============================] - 1s 3ms/step - loss: 0.7160 - val_loss: 2.4524
Epoch 8/100
242/242 [==============================] - 1s 3ms/step - loss: 0.6847 - val_loss: 4.1421
Epoch 9/100
242/242 [==============================] - 1s 3ms/step - loss: 0.6588 - val_loss: 5.9820
Epoch 10/100
242/242 [==============================] - 1s 3ms/step - loss: 0.6371 - val_loss: 7.7654
Epoch 11/100
242/242 [==============================] - 1s 3ms/step - loss: 0.6187 - val_loss: 9.6230
Epoch 12/100
242/242 [==============================] - 1s 3ms/step - loss: 0.6029 - val_loss: 11.3609
Epoch 13/100
242/242 [==============================] - 1s 3ms/step - loss: 0.5896 - val_loss: 12.9821
Epoch 14/100
242/242 [==============================] - 1s 3ms/step - loss: 0.5781 - val_loss: 14.2266
Epoch 15/100
242/242 [==============================] - 1s 3ms/step - loss: 0.5683 - val_loss: 15.4321
121/121 [==============================] - 0s 2ms/step - loss: 0.9198
[CV]  learning_rate=0.001683454924600351, n_hidden=0, n_neurons=15, total=  12.6s
[CV] learning_rate=0.001683454924600351, n_hidden=0, n_neurons=15 ....
Epoch 1/100
242/242 [==============================] - 1s 4ms/step - loss: 3.2972 - val_loss: 1.3307
Epoch 2/100
242/242 [==============================] - 1s 3ms/step - loss: 0.9648 - val_loss: 0.6934
Epoch 3/100
242/242 [==============================] - 1s 3ms/step - loss: 0.6150 - val_loss: 0.5469
Epoch 4/100
242/242 [==============================] - 1s 3ms/step - loss: 0.5468 - val_loss: 0.7322
Epoch 5/100
242/242 [==============================] - 1s 3ms/step - loss: 0.5372 - val_loss: 0.4963
Epoch 6/100
242/242 [==============================] - 1s 3ms/step - loss: 0.5330 - val_loss: 0.5539
Epoch 7/100
242/242 [==============================] - 1s 3ms/step - loss: 0.5320 - val_loss: 0.5729
Epoch 8/100
242/242 [==============================] - 1s 3ms/step - loss: 0.5297 - val_loss: 0.7873
Epoch 9/100
242/242 [==============================] - 1s 3ms/step - loss: 0.5337 - val_loss: 0.5968
Epoch 10/100
242/242 [==============================] - 1s 3ms/step - loss: 0.5314 - val_loss: 0.4951
Epoch 11/100
242/242 [==============================] - 1s 3ms/step - loss: 0.5286 - val_loss: 0.7591
Epoch 12/100
242/242 [==============================] - 1s 3ms/step - loss: 0.5333 - val_loss: 0.5368
Epoch 13/100
242/242 [==============================] - 1s 3ms/step - loss: 0.5305 - val_loss: 0.4968
Epoch 14/100
242/242 [==============================] - 1s 3ms/step - loss: 0.5305 - val_loss: 0.5778
Epoch 15/100
242/242 [==============================] - 1s 3ms/step - loss: 0.5313 - val_loss: 0.5117
Epoch 16/100
242/242 [==============================] - 1s 3ms/step - loss: 0.5282 - val_loss: 0.7055
Epoch 17/100
242/242 [==============================] - 1s 3ms/step - loss: 0.5320 - val_loss: 0.5399
Epoch 18/100
242/242 [==============================] - 1s 3ms/step - loss: 0.5307 - val_loss: 0.5257
Epoch 19/100
242/242 [==============================] - 1s 3ms/step - loss: 0.5275 - val_loss: 0.7902
Epoch 20/100
242/242 [==============================] - 1s 3ms/step - loss: 0.5327 - val_loss: 0.5852
121/121 [==============================] - 0s 2ms/step - loss: 0.5317
[CV]  learning_rate=0.001683454924600351, n_hidden=0, n_neurons=15, total=  17.0s
[CV] learning_rate=0.008731907739399206, n_hidden=0, n_neurons=21 ....
Epoch 1/100
242/242 [==============================] - 1s 4ms/step - loss: 1.4256 - val_loss: 66.5657
Epoch 2/100
242/242 [==============================] - 1s 3ms/step - loss: 0.9941 - val_loss: 137.1490
Epoch 3/100
242/242 [==============================] - 1s 3ms/step - loss: 2.2587 - val_loss: 716.1611
Epoch 4/100
242/242 [==============================] - 1s 3ms/step - loss: 4.3545 - val_loss: 2297.8604
Epoch 5/100
242/242 [==============================] - 1s 3ms/step - loss: 17.0750 - val_loss: 9988.3398
Epoch 6/100
242/242 [==============================] - 1s 3ms/step - loss: 198.7058 - val_loss: 39231.9727
Epoch 7/100
242/242 [==============================] - 1s 3ms/step - loss: 424.9947 - val_loss: 155196.9219
Epoch 8/100
242/242 [==============================] - 1s 3ms/step - loss: 2992.7771 - val_loss: 612492.9375
Epoch 9/100
242/242 [==============================] - 1s 3ms/step - loss: 7662.3345 - val_loss: 2435756.2500
Epoch 10/100
242/242 [==============================] - 1s 3ms/step - loss: 53693.9375 - val_loss: 10128962.0000
Epoch 11/100
242/242 [==============================] - 1s 3ms/step - loss: 135637.9062 - val_loss: 39694520.0000
121/121 [==============================] - 0s 2ms/step - loss: 105477.4922
[CV]  learning_rate=0.008731907739399206, n_hidden=0, n_neurons=21, total=   9.5s
[CV] learning_rate=0.008731907739399206, n_hidden=0, n_neurons=21 ....
Epoch 1/100
242/242 [==============================] - 1s 4ms/step - loss: 1.1573 - val_loss: 23.1193
Epoch 2/100
242/242 [==============================] - 1s 3ms/step - loss: 0.5349 - val_loss: 22.1675
Epoch 3/100
242/242 [==============================] - 1s 3ms/step - loss: 0.5192 - val_loss: 22.3752
Epoch 4/100
242/242 [==============================] - 1s 3ms/step - loss: 0.5148 - val_loss: 21.3891
Epoch 5/100
242/242 [==============================] - 1s 3ms/step - loss: 0.5108 - val_loss: 20.8855
Epoch 6/100
242/242 [==============================] - 1s 3ms/step - loss: 0.5082 - val_loss: 20.6379
Epoch 7/100
242/242 [==============================] - 1s 3ms/step - loss: 0.5070 - val_loss: 20.0736
Epoch 8/100
242/242 [==============================] - 1s 3ms/step - loss: 0.5050 - val_loss: 20.7178
Epoch 9/100
242/242 [==============================] - 1s 3ms/step - loss: 0.5029 - val_loss: 20.0844
Epoch 10/100
242/242 [==============================] - 1s 3ms/step - loss: 0.5032 - val_loss: 17.0622
Epoch 11/100
242/242 [==============================] - 1s 3ms/step - loss: 0.5036 - val_loss: 19.1666
Epoch 12/100
242/242 [==============================] - 1s 3ms/step - loss: 0.5017 - val_loss: 20.8246
Epoch 13/100
242/242 [==============================] - 1s 3ms/step - loss: 0.5023 - val_loss: 22.0298
Epoch 14/100
242/242 [==============================] - 1s 3ms/step - loss: 0.5049 - val_loss: 17.6022
Epoch 15/100
242/242 [==============================] - 1s 3ms/step - loss: 0.5024 - val_loss: 18.6171
Epoch 16/100
242/242 [==============================] - 1s 3ms/step - loss: 0.5024 - val_loss: 20.0451
Epoch 17/100
242/242 [==============================] - 1s 3ms/step - loss: 0.5005 - val_loss: 17.5898
Epoch 18/100
242/242 [==============================] - 1s 3ms/step - loss: 0.5040 - val_loss: 17.4526
Epoch 19/100
242/242 [==============================] - 1s 3ms/step - loss: 0.5012 - val_loss: 19.5015
Epoch 20/100
242/242 [==============================] - 1s 3ms/step - loss: 0.5011 - val_loss: 17.3223
121/121 [==============================] - 0s 2ms/step - loss: 0.9327
[CV]  learning_rate=0.008731907739399206, n_hidden=0, n_neurons=21, total=  16.8s
[CV] learning_rate=0.008731907739399206, n_hidden=0, n_neurons=21 ....
Epoch 1/100
242/242 [==============================] - 1s 4ms/step - loss: 1.4616 - val_loss: 0.5742
Epoch 2/100
242/242 [==============================] - 1s 3ms/step - loss: 0.6113 - val_loss: 6.7367
Epoch 3/100
242/242 [==============================] - 1s 3ms/step - loss: 0.5784 - val_loss: 6.5227
Epoch 4/100
242/242 [==============================] - 1s 3ms/step - loss: 0.5820 - val_loss: 19.7082
Epoch 5/100
242/242 [==============================] - 1s 3ms/step - loss: 0.6738 - val_loss: 205.7214
Epoch 6/100
242/242 [==============================] - 1s 3ms/step - loss: 1.6846 - val_loss: 282.6046
Epoch 7/100
242/242 [==============================] - 1s 3ms/step - loss: 2.5718 - val_loss: 656.3253
Epoch 8/100
242/242 [==============================] - 1s 3ms/step - loss: 12.3829 - val_loss: 1380.0121
Epoch 9/100
242/242 [==============================] - 1s 3ms/step - loss: 14.8443 - val_loss: 2817.4539
Epoch 10/100
242/242 [==============================] - 1s 3ms/step - loss: 7.4320 - val_loss: 4499.3813
Epoch 11/100
242/242 [==============================] - 1s 3ms/step - loss: 121.3308 - val_loss: 8457.8711
121/121 [==============================] - 0s 2ms/step - loss: 11.0521
[CV]  learning_rate=0.008731907739399206, n_hidden=0, n_neurons=21, total=   9.7s
[CV] learning_rate=0.0006154014789262348, n_hidden=2, n_neurons=87 ...
Epoch 1/100
242/242 [==============================] - 1s 4ms/step - loss: 2.5089 - val_loss: 2.6033
Epoch 2/100
242/242 [==============================] - 1s 4ms/step - loss: 1.0793 - val_loss: 1.0424
Epoch 3/100
242/242 [==============================] - 1s 4ms/step - loss: 0.8038 - val_loss: 0.7507
Epoch 4/100
242/242 [==============================] - 1s 4ms/step - loss: 0.7203 - val_loss: 0.6758
Epoch 5/100
242/242 [==============================] - 1s 4ms/step - loss: 0.6785 - val_loss: 0.6484
Epoch 6/100
242/242 [==============================] - 1s 4ms/step - loss: 0.6498 - val_loss: 0.6241
Epoch 7/100
242/242 [==============================] - 1s 4ms/step - loss: 0.6261 - val_loss: 0.6073
Epoch 8/100
242/242 [==============================] - 1s 4ms/step - loss: 0.6055 - val_loss: 0.5826
Epoch 9/100
242/242 [==============================] - 1s 4ms/step - loss: 0.5870 - val_loss: 0.5597
Epoch 10/100
242/242 [==============================] - 1s 4ms/step - loss: 0.5700 - val_loss: 0.5445
Epoch 11/100
242/242 [==============================] - 1s 4ms/step - loss: 0.5547 - val_loss: 0.5314
Epoch 12/100
242/242 [==============================] - 1s 4ms/step - loss: 0.5408 - val_loss: 0.5147
Epoch 13/100
242/242 [==============================] - 1s 4ms/step - loss: 0.5278 - val_loss: 0.5030
Epoch 14/100
242/242 [==============================] - 1s 4ms/step - loss: 0.5159 - val_loss: 0.4904
Epoch 15/100
242/242 [==============================] - 1s 4ms/step - loss: 0.5051 - val_loss: 0.4791
Epoch 16/100
242/242 [==============================] - 1s 4ms/step - loss: 0.4948 - val_loss: 0.4695
...
Epoch 53/100
242/242 [==============================] - 1s 4ms/step - loss: 0.3725 - val_loss: 0.3859
121/121 [==============================] - 0s 2ms/step - loss: 0.3865
[CV]  learning_rate=0.0006154014789262348, n_hidden=2, n_neurons=87, total=  49.3s
[CV] learning_rate=0.0006154014789262348, n_hidden=2, n_neurons=87 ...
Epoch 1/100
242/242 [==============================] - 1s 4ms/step - loss: 2.7762 - val_loss: 17.5435
...
Epoch 38/100
242/242 [==============================] - 1s 4ms/step - loss: 0.3951 - val_loss: 0.5135
121/121 [==============================] - 0s 2ms/step - loss: 0.4088
[CV]  learning_rate=0.0006154014789262348, n_hidden=2, n_neurons=87, total=  35.7s
[CV] learning_rate=0.0006154014789262348, n_hidden=2, n_neurons=87 ...
Epoch 1/100
242/242 [==============================] - 1s 4ms/step - loss: 2.8501 - val_loss: 2.0961
...
Epoch 64/100
242/242 [==============================] - 1s 4ms/step - loss: 0.3749 - val_loss: 0.4028
121/121 [==============================] - 0s 2ms/step - loss: 0.3737
[CV]  learning_rate=0.0006154014789262348, n_hidden=2, n_neurons=87, total=  59.8s
[CV] learning_rate=0.0003920021771415983, n_hidden=3, n_neurons=24 ...
Epoch 1/100
242/242 [==============================] - 1s 4ms/step - loss: 2.4720 - val_loss: 7.9723
...
Epoch 70/100
242/242 [==============================] - 1s 4ms/step - loss: 0.3846 - val_loss: 0.4720
121/121 [==============================] - 0s 2ms/step - loss: 0.4001
[CV]  learning_rate=0.0003920021771415983, n_hidden=3, n_neurons=24, total= 1.2min
[CV] learning_rate=0.0003920021771415983, n_hidden=3, n_neurons=24 ...
Epoch 1/100
242/242 [==============================] - 1s 5ms/step - loss: 3.7641 - val_loss: 28.0492
...
Epoch 30/100
242/242 [==============================] - 1s 4ms/step - loss: 0.5332 - val_loss: 1.0158
121/121 [==============================] - 0s 2ms/step - loss: 0.5490
[CV]  learning_rate=0.0003920021771415983, n_hidden=3, n_neurons=24, total=  30.0s
[CV] learning_rate=0.0003920021771415983, n_hidden=3, n_neurons=24 ...
Epoch 1/100
242/242 [==============================] - 1s 4ms/step - loss: 2.9218 - val_loss: 4.3285
...
Epoch 100/100
242/242 [==============================] - 1s 4ms/step - loss: 0.3910 - val_loss: 0.4242
121/121 [==============================] - 0s 2ms/step - loss: 0.3897
[CV]  learning_rate=0.0003920021771415983, n_hidden=3, n_neurons=24, total= 1.6min
[CV] learning_rate=0.006010328378268217, n_hidden=0, n_neurons=2 .....
Epoch 1/100
242/242 [==============================] - 1s 4ms/step - loss: 2.1013 - val_loss: 5.2312
...
Epoch 11/100
242/242 [==============================] - 1s 3ms/step - loss: 159.4888 - val_loss: 31633.9141
121/121 [==============================] - 0s 2ms/step - loss: 81.5957
[CV]  learning_rate=0.006010328378268217, n_hidden=0, n_neurons=2, total=   9.7s
[CV] learning_rate=0.006010328378268217, n_hidden=0, n_neurons=2 .....
Epoch 1/100
242/242 [==============================] - 1s 4ms/step - loss: 1.4769 - val_loss: 14.0701
...
Epoch 11/100
242/242 [==============================] - 1s 3ms/step - loss: 0.5056 - val_loss: 19.4013
121/121 [==============================] - 0s 2ms/step - loss: 0.9640
[CV]  learning_rate=0.006010328378268217, n_hidden=0, n_neurons=2, total=   9.6s
[CV] learning_rate=0.006010328378268217, n_hidden=0, n_neurons=2 .....
Epoch 1/100
242/242 [==============================] - 1s 4ms/step - loss: 2.0333 - val_loss: 13.7380
...
Epoch 12/100
242/242 [==============================] - 1s 3ms/step - loss: 10.2613 - val_loss: 1957.3098
121/121 [==============================] - 0s 2ms/step - loss: 2.0491
[CV]  learning_rate=0.006010328378268217, n_hidden=0, n_neurons=2, total=  10.5s
[CV] learning_rate=0.008339092654580042, n_hidden=1, n_neurons=38 ....
Epoch 1/100
242/242 [==============================] - 1s 4ms/step - loss: 1.2457 - val_loss: 22.8634
...
Epoch 30/100
242/242 [==============================] - 1s 4ms/step - loss: 0.3347 - val_loss: 0.3408
121/121 [==============================] - 0s 2ms/step - loss: 0.3580
[CV]  learning_rate=0.008339092654580042, n_hidden=1, n_neurons=38, total=  27.1s
[CV] learning_rate=0.008339092654580042, n_hidden=1, n_neurons=38 ....
Epoch 1/100
242/242 [==============================] - 1s 4ms/step - loss: 0.8950 - val_loss: 3.0949
...
Epoch 14/100
242/242 [==============================] - 1s 4ms/step - loss: 0.3805 - val_loss: 1.1186
121/121 [==============================] - 0s 2ms/step - loss: 0.4037
[CV]  learning_rate=0.008339092654580042, n_hidden=1, n_neurons=38, total=  13.2s
[CV] learning_rate=0.008339092654580042, n_hidden=1, n_neurons=38 ....
Epoch 1/100
242/242 [==============================] - 1s 4ms/step - loss: 0.9047 - val_loss: 1.2874
...
Epoch 35/100
242/242 [==============================] - 1s 4ms/step - loss: 0.3479 - val_loss: 0.3467
121/121 [==============================] - 0s 2ms/step - loss: 0.3439
[CV]  learning_rate=0.008339092654580042, n_hidden=1, n_neurons=38, total=  31.7s
[CV] learning_rate=0.00030107783636342726, n_hidden=3, n_neurons=21 ..
Epoch 1/100
242/242 [==============================] - 1s 4ms/step - loss: 4.0446 - val_loss: 7.0502
...
Epoch 100/100
242/242 [==============================] - 1s 4ms/step - loss: 0.3876 - val_loss: 0.3796
121/121 [==============================] - 0s 2ms/step - loss: 0.3993
[CV]  learning_rate=0.00030107783636342726, n_hidden=3, n_neurons=21, total= 1.6min
[CV] learning_rate=0.00030107783636342726, n_hidden=3, n_neurons=21 ..
Epoch 1/100
242/242 [==============================] - 1s 5ms/step - loss: 5.0701 - val_loss: 2.9725
...
Epoch 26/100
242/242 [==============================] - 1s 4ms/step - loss: 0.6115 - val_loss: 2.8640
121/121 [==============================] - 0s 2ms/step - loss: 0.6770
[CV]  learning_rate=0.00030107783636342726, n_hidden=3, n_neurons=21, total=  26.0s
[CV] learning_rate=0.00030107783636342726, n_hidden=3, n_neurons=21 ..
Epoch 1/100
242/242 [==============================] - 1s 5ms/step - loss: 4.4059 - val_loss: 3.5308
...
Epoch 57/100
242/242 [==============================] - 1s 4ms/step - loss: 0.4247 - val_loss: 0.4493
121/121 [==============================] - 0s 2ms/step - loss: 0.4256
[CV]  learning_rate=0.00030107783636342726, n_hidden=3, n_neurons=21, total=  56.8s
[CV] learning_rate=0.005153286333701512, n_hidden=1, n_neurons=22 ....
Epoch 1/100
242/242 [==============================] - 1s 4ms/step - loss: 1.3002 - val_loss: 38.2652
...
Epoch 100/100
242/242 [==============================] - 1s 4ms/step - loss: 0.3458 - val_loss: 0.3552
121/121 [==============================] - 0s 2ms/step - loss: 0.3645
[CV]  learning_rate=0.005153286333701512, n_hidden=1, n_neurons=22, total= 1.5min
[CV] learning_rate=0.005153286333701512, n_hidden=1, n_neurons=22 ....
Epoch 1/100
242/242 [==============================] - 1s 4ms/step - loss: 1.2613 - val_loss: 0.6451
...
Epoch 19/100
242/242 [==============================] - 1s 4ms/step - loss: 0.3817 - val_loss: 0.6834
121/121 [==============================] - 0s 2ms/step - loss: 0.3963
[CV]  learning_rate=0.005153286333701512, n_hidden=1, n_neurons=22, total=  17.5s
[CV] learning_rate=0.005153286333701512, n_hidden=1, n_neurons=22 ....
Epoch 1/100
242/242 [==============================] - 1s 4ms/step - loss: 1.2040 - val_loss: 71.0120
...
Epoch 22/100
242/242 [==============================] - 1s 4ms/step - loss: 0.4001 - val_loss: 0.4415
121/121 [==============================] - 0s 2ms/step - loss: 0.3947
[CV]  learning_rate=0.005153286333701512, n_hidden=1, n_neurons=22, total=  20.2s
[CV] learning_rate=0.0003099230412972121, n_hidden=0, n_neurons=49 ...
Epoch 1/100
242/242 [==============================] - 1s 3ms/step - loss: 7.7197 - val_loss: 43.0907
...
Epoch 23/100
242/242 [==============================] - 1s 3ms/step - loss: 0.6664 - val_loss: 0.9536
121/121 [==============================] - 0s 2ms/step - loss: 0.6673
[CV]  learning_rate=0.0003099230412972121, n_hidden=0, n_neurons=49, total=  19.0s
[CV] learning_rate=0.0003099230412972121, n_hidden=0, n_neurons=49 ...
Epoch 1/100
242/242 [==============================] - 1s 4ms/step - loss: 7.6328 - val_loss: 25.5463
...
Epoch 22/100
242/242 [==============================] - 1s 3ms/step - loss: 0.6066 - val_loss: 20.5061
121/121 [==============================] - 0s 2ms/step - loss: 1.0973
[CV]  learning_rate=0.0003099230412972121, n_hidden=0, n_neurons=49, total=  18.5s
[CV] learning_rate=0.0003099230412972121, n_hidden=0, n_neurons=49 ...
Epoch 1/100
242/242 [==============================] - 1s 4ms/step - loss: 6.1564 - val_loss: 7.6683
...
Epoch 25/100
242/242 [==============================] - 1s 3ms/step - loss: 0.6419 - val_loss: 0.9405
121/121 [==============================] - 0s 2ms/step - loss: 0.6455
[CV]  learning_rate=0.0003099230412972121, n_hidden=0, n_neurons=49, total=  21.0s
[CV] learning_rate=0.0033625641252688094, n_hidden=2, n_neurons=42 ...
Epoch 1/100
242/242 [==============================] - 1s 4ms/step - loss: 1.3724 - val_loss: 19.2760
...
Epoch 90/100
242/242 [==============================] - 1s 4ms/step - loss: 0.2948 - val_loss: 0.3195
Epoch 91/100
242/242 [==============================] - 1s 4ms/step - loss: 0.2950 - val_loss: 0.2951
Epoch 92/100
242/242 [==============================] - 1s 4ms/step - loss: 0.2942 - val_loss: 0.3156
Epoch 93/100
242/242 [==============================] - 1s 4ms/step - loss: 0.2940 - val_loss: 0.2992
Epoch 94/100
242/242 [==============================] - 1s 4ms/step - loss: 0.2939 - val_loss: 0.2938
Epoch 95/100
242/242 [==============================] - 1s 4ms/step - loss: 0.2927 - val_loss: 0.3066
Epoch 96/100
242/242 [==============================] - 1s 4ms/step - loss: 0.2914 - val_loss: 0.2978
Epoch 97/100
242/242 [==============================] - 1s 3ms/step - loss: 0.2918 - val_loss: 0.3262
Epoch 98/100
242/242 [==============================] - 1s 4ms/step - loss: 0.2914 - val_loss: 0.2901
Epoch 99/100
242/242 [==============================] - 1s 4ms/step - loss: 0.2907 - val_loss: 0.3582
Epoch 100/100
242/242 [==============================] - 1s 3ms/step - loss: 0.2906 - val_loss: 0.2986
121/121 [==============================] - 0s 2ms/step - loss: 0.3201
[CV]  learning_rate=0.0033625641252688094, n_hidden=2, n_neurons=42, total= 1.5min
[CV] learning_rate=0.0033625641252688094, n_hidden=2, n_neurons=42 ...
Epoch 1/100
242/242 [==============================] - 2s 6ms/step - loss: 1.2201 - val_loss: 0.8642
Epoch 2/100
242/242 [==============================] - 1s 4ms/step - loss: 0.6048 - val_loss: 0.7994
Epoch 3/100
242/242 [==============================] - 1s 4ms/step - loss: 0.5340 - val_loss: 1.0803
Epoch 4/100
242/242 [==============================] - 1s 4ms/step - loss: 0.4912 - val_loss: 1.1494
Epoch 5/100
242/242 [==============================] - 1s 4ms/step - loss: 0.4606 - val_loss: 0.9498
Epoch 6/100
242/242 [==============================] - 1s 4ms/step - loss: 0.4414 - val_loss: 0.6208
Epoch 7/100
242/242 [==============================] - 1s 4ms/step - loss: 0.4270 - val_loss: 0.4657
Epoch 8/100
242/242 [==============================] - 1s 4ms/step - loss: 0.4148 - val_loss: 0.3888
Epoch 9/100
242/242 [==============================] - 1s 4ms/step - loss: 0.4041 - val_loss: 0.4084
Epoch 10/100
242/242 [==============================] - 1s 4ms/step - loss: 0.3972 - val_loss: 0.4312
Epoch 11/100
242/242 [==============================] - 1s 4ms/step - loss: 0.3899 - val_loss: 0.5341
Epoch 12/100
242/242 [==============================] - 1s 4ms/step - loss: 0.3833 - val_loss: 0.6081
Epoch 13/100
242/242 [==============================] - 1s 4ms/step - loss: 0.3796 - val_loss: 0.7209
Epoch 14/100
242/242 [==============================] - 1s 4ms/step - loss: 0.3760 - val_loss: 0.8821
Epoch 15/100
242/242 [==============================] - 1s 4ms/step - loss: 0.3715 - val_loss: 0.9049
Epoch 16/100
242/242 [==============================] - 1s 4ms/step - loss: 0.3680 - val_loss: 0.9791
Epoch 17/100
242/242 [==============================] - 1s 4ms/step - loss: 0.3647 - val_loss: 0.9533
Epoch 18/100
242/242 [==============================] - 1s 4ms/step - loss: 0.3628 - val_loss: 1.0399
121/121 [==============================] - 0s 2ms/step - loss: 0.3909
[CV]  learning_rate=0.0033625641252688094, n_hidden=2, n_neurons=42, total=  17.9s
[CV] learning_rate=0.0033625641252688094, n_hidden=2, n_neurons=42 ...
Epoch 1/100
242/242 [==============================] - 1s 4ms/step - loss: 1.1300 - val_loss: 2.2824
Epoch 2/100
242/242 [==============================] - 1s 4ms/step - loss: 0.6910 - val_loss: 2.5063
Epoch 3/100
242/242 [==============================] - 1s 4ms/step - loss: 0.5904 - val_loss: 1.3345
Epoch 4/100
242/242 [==============================] - 1s 4ms/step - loss: 0.5360 - val_loss: 1.8303
Epoch 5/100
242/242 [==============================] - 1s 4ms/step - loss: 0.4879 - val_loss: 1.1690
Epoch 6/100
242/242 [==============================] - 1s 4ms/step - loss: 0.4597 - val_loss: 1.0937
Epoch 7/100
242/242 [==============================] - 1s 4ms/step - loss: 0.4408 - val_loss: 0.5393
Epoch 8/100
242/242 [==============================] - 1s 4ms/step - loss: 0.4235 - val_loss: 0.5528
Epoch 9/100
242/242 [==============================] - 1s 4ms/step - loss: 0.4125 - val_loss: 0.4217
Epoch 10/100
242/242 [==============================] - 1s 4ms/step - loss: 0.4079 - val_loss: 0.3978
Epoch 11/100
242/242 [==============================] - 1s 4ms/step - loss: 0.3958 - val_loss: 0.7642
Epoch 12/100
242/242 [==============================] - 1s 4ms/step - loss: 0.3934 - val_loss: 0.3953
Epoch 13/100
242/242 [==============================] - 1s 4ms/step - loss: 0.3853 - val_loss: 0.3690
Epoch 14/100
242/242 [==============================] - 1s 4ms/step - loss: 0.3805 - val_loss: 0.6782
Epoch 15/100
242/242 [==============================] - 1s 4ms/step - loss: 0.3793 - val_loss: 0.5137
Epoch 16/100
242/242 [==============================] - 1s 4ms/step - loss: 0.3750 - val_loss: 1.5716
Epoch 17/100
242/242 [==============================] - 1s 4ms/step - loss: 0.3863 - val_loss: 1.5438
Epoch 18/100
242/242 [==============================] - 1s 4ms/step - loss: 0.3814 - val_loss: 2.5256
Epoch 19/100
242/242 [==============================] - 1s 4ms/step - loss: 0.3989 - val_loss: 1.2077
Epoch 20/100
242/242 [==============================] - 1s 4ms/step - loss: 0.3812 - val_loss: 0.8839
Epoch 21/100
242/242 [==============================] - 1s 4ms/step - loss: 0.3663 - val_loss: 0.3408
Epoch 22/100
242/242 [==============================] - 1s 4ms/step - loss: 0.3590 - val_loss: 0.3928
Epoch 23/100
242/242 [==============================] - 1s 4ms/step - loss: 0.3572 - val_loss: 0.3411
Epoch 24/100
242/242 [==============================] - 1s 4ms/step - loss: 0.3572 - val_loss: 0.4823
Epoch 25/100
242/242 [==============================] - 1s 4ms/step - loss: 0.3555 - val_loss: 0.3589
Epoch 26/100
242/242 [==============================] - 1s 4ms/step - loss: 0.3527 - val_loss: 0.3810
Epoch 27/100
242/242 [==============================] - 1s 4ms/step - loss: 0.3504 - val_loss: 0.4593
Epoch 28/100
242/242 [==============================] - 1s 4ms/step - loss: 0.3500 - val_loss: 0.3360
Epoch 29/100
242/242 [==============================] - 1s 4ms/step - loss: 0.3499 - val_loss: 0.4983
Epoch 30/100
242/242 [==============================] - 1s 4ms/step - loss: 0.3480 - val_loss: 0.3747
Epoch 31/100
242/242 [==============================] - 1s 4ms/step - loss: 0.3470 - val_loss: 0.4128
Epoch 32/100
242/242 [==============================] - 1s 4ms/step - loss: 0.3445 - val_loss: 0.5464
Epoch 33/100
242/242 [==============================] - 1s 4ms/step - loss: 0.3461 - val_loss: 0.3827
Epoch 34/100
242/242 [==============================] - 1s 4ms/step - loss: 0.3426 - val_loss: 0.5037
Epoch 35/100
242/242 [==============================] - 1s 4ms/step - loss: 0.3434 - val_loss: 0.3439
Epoch 36/100
242/242 [==============================] - 1s 4ms/step - loss: 0.3398 - val_loss: 0.4822
Epoch 37/100
242/242 [==============================] - 1s 4ms/step - loss: 0.3404 - val_loss: 0.3598
Epoch 38/100
242/242 [==============================] - 1s 4ms/step - loss: 0.3414 - val_loss: 0.6269
121/121 [==============================] - 0s 2ms/step - loss: 0.3388
[CV]  learning_rate=0.0033625641252688094, n_hidden=2, n_neurons=42, total=  35.9s
Epoch 1/100
[Parallel(n_jobs=1)]: Done  30 out of  30 | elapsed: 17.8min finished
363/363 [==============================] - 1s 4ms/step - loss: 0.9562 - val_loss: 7.9910
Epoch 2/100
363/363 [==============================] - 1s 4ms/step - loss: 0.6345 - val_loss: 4.4949
Epoch 3/100
363/363 [==============================] - 1s 3ms/step - loss: 0.5106 - val_loss: 0.4376
Epoch 4/100
363/363 [==============================] - 1s 3ms/step - loss: 0.4426 - val_loss: 0.4602
Epoch 5/100
363/363 [==============================] - 1s 3ms/step - loss: 0.4217 - val_loss: 0.4209
Epoch 6/100
363/363 [==============================] - 1s 4ms/step - loss: 0.4041 - val_loss: 0.4768
Epoch 7/100
363/363 [==============================] - 1s 3ms/step - loss: 0.3956 - val_loss: 0.4360
Epoch 8/100
363/363 [==============================] - 1s 3ms/step - loss: 0.3872 - val_loss: 0.3768
Epoch 9/100
363/363 [==============================] - 1s 3ms/step - loss: 0.3793 - val_loss: 0.4160
Epoch 10/100
363/363 [==============================] - 1s 4ms/step - loss: 0.3748 - val_loss: 0.4245
Epoch 11/100
363/363 [==============================] - 1s 3ms/step - loss: 0.3705 - val_loss: 0.3532
Epoch 12/100
363/363 [==============================] - 1s 3ms/step - loss: 0.3669 - val_loss: 0.4497
Epoch 13/100
363/363 [==============================] - 1s 3ms/step - loss: 0.3636 - val_loss: 0.3536
Epoch 14/100
363/363 [==============================] - 1s 3ms/step - loss: 0.3601 - val_loss: 0.3505
Epoch 15/100
363/363 [==============================] - 1s 3ms/step - loss: 0.3581 - val_loss: 0.3511
Epoch 16/100
363/363 [==============================] - 1s 3ms/step - loss: 0.3547 - val_loss: 0.3395
Epoch 17/100
363/363 [==============================] - 1s 3ms/step - loss: 0.3521 - val_loss: 0.3906
Epoch 18/100
363/363 [==============================] - 1s 3ms/step - loss: 0.3502 - val_loss: 0.3524
Epoch 19/100
363/363 [==============================] - 1s 3ms/step - loss: 0.3484 - val_loss: 0.3348
Epoch 20/100
363/363 [==============================] - 1s 3ms/step - loss: 0.3466 - val_loss: 0.4462
Epoch 21/100
363/363 [==============================] - 1s 3ms/step - loss: 0.3446 - val_loss: 0.3289
Epoch 22/100
363/363 [==============================] - 1s 4ms/step - loss: 0.3426 - val_loss: 0.4406
Epoch 23/100
363/363 [==============================] - 1s 3ms/step - loss: 0.3406 - val_loss: 0.3500
Epoch 24/100
363/363 [==============================] - 1s 3ms/step - loss: 0.3384 - val_loss: 0.3954
Epoch 25/100
363/363 [==============================] - 1s 3ms/step - loss: 0.3374 - val_loss: 0.3432
Epoch 26/100
363/363 [==============================] - 1s 4ms/step - loss: 0.3356 - val_loss: 0.3878
Epoch 27/100
363/363 [==============================] - 1s 3ms/step - loss: 0.3347 - val_loss: 0.3461
Epoch 28/100
363/363 [==============================] - 1s 3ms/step - loss: 0.3344 - val_loss: 0.4356
Epoch 29/100
363/363 [==============================] - 1s 3ms/step - loss: 0.3324 - val_loss: 0.3181
Epoch 30/100
363/363 [==============================] - 1s 3ms/step - loss: 0.3307 - val_loss: 0.4408
Epoch 31/100
363/363 [==============================] - 1s 3ms/step - loss: 0.3294 - val_loss: 0.3225
Epoch 32/100
363/363 [==============================] - 1s 4ms/step - loss: 0.3278 - val_loss: 0.4005
Epoch 33/100
363/363 [==============================] - 1s 3ms/step - loss: 0.3276 - val_loss: 0.3155
Epoch 34/100
363/363 [==============================] - 1s 4ms/step - loss: 0.3251 - val_loss: 0.3346
Epoch 35/100
363/363 [==============================] - 1s 3ms/step - loss: 0.3250 - val_loss: 0.3570
Epoch 36/100
363/363 [==============================] - 1s 3ms/step - loss: 0.3245 - val_loss: 0.3343
Epoch 37/100
363/363 [==============================] - 1s 3ms/step - loss: 0.3227 - val_loss: 0.4012
Epoch 38/100
363/363 [==============================] - 1s 4ms/step - loss: 0.3213 - val_loss: 0.3107
Epoch 39/100
363/363 [==============================] - 1s 3ms/step - loss: 0.3226 - val_loss: 0.3925
Epoch 40/100
363/363 [==============================] - 1s 3ms/step - loss: 0.3203 - val_loss: 0.3260
Epoch 41/100
363/363 [==============================] - 1s 3ms/step - loss: 0.3251 - val_loss: 0.5559
Epoch 42/100
363/363 [==============================] - 1s 4ms/step - loss: 0.3215 - val_loss: 0.3476
Epoch 43/100
363/363 [==============================] - 1s 3ms/step - loss: 0.3182 - val_loss: 0.4974
Epoch 44/100
363/363 [==============================] - 1s 3ms/step - loss: 0.3192 - val_loss: 0.3820
Epoch 45/100
363/363 [==============================] - 1s 3ms/step - loss: 0.3168 - val_loss: 1.1209
Epoch 46/100
363/363 [==============================] - 1s 3ms/step - loss: 0.3182 - val_loss: 0.9039
Epoch 47/100
363/363 [==============================] - 1s 3ms/step - loss: 0.3186 - val_loss: 1.3434
Epoch 48/100
363/363 [==============================] - 1s 4ms/step - loss: 0.3245 - val_loss: 0.6242
Out[100]:
RandomizedSearchCV(cv=3, error_score='raise-deprecating',
                   estimator=<tensorflow.python.keras.wrappers.scikit_learn.KerasRegressor object at 0x7f12e03d2b70>,
                   iid='warn', n_iter=10, n_jobs=None,
                   param_distributions={'learning_rate': <scipy.stats._distn_infrastructure.rv_frozen object at 0x7f127464db38>,
                                        'n_hidden': [0, 1, 2, 3],
                                        'n_neurons': array([ 1,  2,  3,  4,  5,  6,  7,  8,  9, 10,...
       18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34,
       35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51,
       52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68,
       69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85,
       86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99])},
                   pre_dispatch='2*n_jobs', random_state=None, refit=True,
                   return_train_score=False, scoring=None, verbose=2)
In [101]:
rnd_search_cv.best_params_
Out[101]:
{'learning_rate': 0.0033625641252688094, 'n_hidden': 2, 'n_neurons': 42}
In [102]:
rnd_search_cv.best_score_
Out[102]:
-0.34994881351788837
In [103]:
rnd_search_cv.best_estimator_
Out[103]:
<tensorflow.python.keras.wrappers.scikit_learn.KerasRegressor at 0x7f12746571d0>
In [104]:
rnd_search_cv.score(X_test, y_test)
162/162 [==============================] - 0s 2ms/step - loss: 0.3227
Out[104]:
-0.32268083095550537
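The score is negative because Scikit-Learn's scoring API always maximizes, so a loss such as the MSE is reported negated. A minimal sketch of converting it back (the numeric value is copied from the output above):

```python
import math

# Scikit-Learn maximizes scores, so rnd_search_cv.score() reports the
# negated test loss (MSE here); flipping the sign recovers the loss
# that model.evaluate() prints.
neg_score = -0.32268083095550537  # rnd_search_cv.score(X_test, y_test) above
test_mse = -neg_score
test_rmse = math.sqrt(test_mse)   # RMSE is often easier to interpret
print(round(test_mse, 4), round(test_rmse, 4))
```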
In [105]:
model = rnd_search_cv.best_estimator_.model
model
Out[105]:
<tensorflow.python.keras.engine.sequential.Sequential at 0x7f13010f6588>
In [106]:
model.evaluate(X_test, y_test)
162/162 [==============================] - 0s 2ms/step - loss: 0.3227
Out[106]:
0.32268083095550537

Exercise Solutions

1. to 9.

See Appendix A.

10.

Exercise: Train a deep MLP on the MNIST dataset (you can load it using keras.datasets.mnist.load_data()). See if you can get over 98% accuracy. Try searching for the optimal learning rate using the approach presented in this chapter (i.e., grow the learning rate exponentially while plotting the loss, then find the point where the loss shoots back up). Then add all the bells and whistles: save checkpoints, use early stopping, and plot learning curves using TensorBoard.

Let's load the dataset:

In [107]:
(X_train_full, y_train_full), (X_test, y_test) = keras.datasets.mnist.load_data()
Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/mnist.npz
11493376/11490434 [==============================] - 1s 0us/step

Just like the Fashion MNIST dataset, the MNIST training set contains 60,000 grayscale images of 28x28 pixels:

In [108]:
X_train_full.shape
Out[108]:
(60000, 28, 28)

Each pixel intensity is represented as a byte (0 to 255):

In [109]:
X_train_full.dtype
Out[109]:
dtype('uint8')

Let's split the full training set into a validation set and a (smaller) training set. As with Fashion MNIST, we scale the pixel intensities by dividing them by 255, converting them to floats in the 0-1 range:

In [110]:
X_valid, X_train = X_train_full[:5000] / 255., X_train_full[5000:] / 255.
y_valid, y_train = y_train_full[:5000], y_train_full[5000:]
X_test = X_test / 255.

Let's display an image using Matplotlib's imshow() function with the 'binary' color map:

In [111]:
plt.imshow(X_train[0], cmap="binary")
plt.axis('off')
plt.show()

The labels are class IDs (represented as uint8), from 0 to 9. Conveniently, each class ID matches the digit the image represents, so we don't need to create a class_names array:

In [112]:
y_train
Out[112]:
array([7, 3, 4, ..., 5, 6, 8], dtype=uint8)

The validation set contains 5,000 images and the test set contains 10,000 images:

In [113]:
X_valid.shape
Out[113]:
(5000, 28, 28)
In [114]:
X_test.shape
Out[114]:
(10000, 28, 28)

Let's look at a few sample images from the dataset:

In [115]:
n_rows = 4
n_cols = 10
plt.figure(figsize=(n_cols * 1.2, n_rows * 1.2))
for row in range(n_rows):
    for col in range(n_cols):
        index = n_cols * row + col
        plt.subplot(n_rows, n_cols, index + 1)
        plt.imshow(X_train[index], cmap="binary", interpolation="nearest")
        plt.axis('off')
        plt.title(y_train[index], fontsize=12)
plt.subplots_adjust(wspace=0.2, hspace=0.5)
plt.show()

Let's build a simple dense network and find the optimal learning rate. We need a callback that grows the learning rate at each iteration; it also records the learning rate and the loss at each iteration:

In [116]:
K = keras.backend

class ExponentialLearningRate(keras.callbacks.Callback):
    def __init__(self, factor):
        super().__init__()
        self.factor = factor
        self.rates = []   # learning rate recorded at the end of each batch
        self.losses = []  # loss recorded at the end of each batch
    def on_batch_end(self, batch, logs):
        self.rates.append(K.get_value(self.model.optimizer.learning_rate))
        self.losses.append(logs["loss"])
        K.set_value(self.model.optimizer.learning_rate,
                    self.model.optimizer.learning_rate * self.factor)
In [117]:
keras.backend.clear_session()
np.random.seed(42)
tf.random.set_seed(42)
In [118]:
model = keras.models.Sequential([
    keras.layers.Flatten(input_shape=[28, 28]),
    keras.layers.Dense(300, activation="relu"),
    keras.layers.Dense(100, activation="relu"),
    keras.layers.Dense(10, activation="softmax")
])

We start with a small learning rate of 1e-3 and grow it by 0.5% at each iteration:

In [119]:
model.compile(loss="sparse_categorical_crossentropy",
              optimizer=keras.optimizers.SGD(learning_rate=1e-3),
              metrics=["accuracy"])
expon_lr = ExponentialLearningRate(factor=1.005)
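A quick sanity check on why a single epoch is enough for this sweep (the 55,000-sample training-set size and Keras's default batch size of 32 come from earlier cells):

```python
import math

# One epoch over 55,000 training images with the default batch size of 32
# runs ceil(55000 / 32) = 1719 batches.
batches_per_epoch = math.ceil(55000 / 32)

# Multiplying the learning rate by 1.005 after every batch takes it from
# 1e-3 to above 5 by the end of the epoch, far past the divergence point.
final_lr = 1e-3 * 1.005 ** batches_per_epoch
print(batches_per_epoch, final_lr)
```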

Let's train the model for just 1 epoch:

In [120]:
history = model.fit(X_train, y_train, epochs=1,
                    validation_data=(X_valid, y_valid),
                    callbacks=[expon_lr])
1719/1719 [==============================] - 8s 5ms/step - loss: nan - accuracy: 0.5968 - val_loss: nan - val_accuracy: 0.0958

We can now plot the loss as a function of the learning rate:

In [121]:
plt.plot(expon_lr.rates, expon_lr.losses)
plt.gca().set_xscale('log')
plt.hlines(min(expon_lr.losses), min(expon_lr.rates), max(expon_lr.rates))
plt.axis([min(expon_lr.rates), max(expon_lr.rates), 0, expon_lr.losses[0]])
plt.xlabel("Learning rate")
plt.ylabel("Loss")
Out[121]:
Text(0, 0.5, 'Loss')

The loss starts shooting back up violently around 3e-1, so let's try using 2e-1 as our learning rate:
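This choice — a rate somewhat below the point where the loss shoots up — can also be automated. A minimal sketch of one such heuristic (halving the rate at which the recorded loss was lowest); `suggest_learning_rate` and the synthetic curve are illustrative, standing in for `expon_lr.rates`/`expon_lr.losses`:

```python
import numpy as np

def suggest_learning_rate(rates, losses):
    """Halve the learning rate at which the recorded loss was lowest.

    The loss usually blows up shortly after its minimum, so half the
    rate at the minimum is a conservative pick (one simple heuristic;
    the text above eyeballs the plot instead).
    """
    best = int(np.argmin(losses))
    return rates[best] / 2

# Synthetic stand-in for expon_lr.rates / expon_lr.losses: the loss falls,
# bottoms out near lr ~ 0.32, then climbs again.
rates = np.logspace(-3, 0, 61)               # 1e-3 ... 1, exponential growth
losses = (np.log10(rates) + 0.5) ** 2 + 0.1  # minimum at lr = 10**-0.5
suggested = suggest_learning_rate(rates, losses)
print(suggested)  # ~ 0.158
```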

In [122]:
keras.backend.clear_session()
np.random.seed(42)
tf.random.set_seed(42)
In [123]:
model = keras.models.Sequential([
    keras.layers.Flatten(input_shape=[28, 28]),
    keras.layers.Dense(300, activation="relu"),
    keras.layers.Dense(100, activation="relu"),
    keras.layers.Dense(10, activation="softmax")
])
In [124]:
model.compile(loss="sparse_categorical_crossentropy",
              optimizer=keras.optimizers.SGD(learning_rate=2e-1),
              metrics=["accuracy"])
In [125]:
run_index = 1 # increment this at every run
run_logdir = os.path.join(os.curdir, "my_mnist_logs", "run_{:03d}".format(run_index))
run_logdir
Out[125]:
'./my_mnist_logs/run_001'
In [126]:
early_stopping_cb = keras.callbacks.EarlyStopping(patience=20)
checkpoint_cb = keras.callbacks.ModelCheckpoint("my_mnist_model.h5", save_best_only=True)
tensorboard_cb = keras.callbacks.TensorBoard(run_logdir)

history = model.fit(X_train, y_train, epochs=100,
                    validation_data=(X_valid, y_valid),
                    callbacks=[early_stopping_cb, checkpoint_cb, tensorboard_cb])
Epoch 1/100
   2/1719 [..............................] - ETA: 14:52 - loss: 2.4497 - accuracy: 0.1250WARNING:tensorflow:Callbacks method `on_train_batch_end` is slow compared to the batch time (batch time: 0.0039s vs `on_train_batch_end` time: 1.0330s). Check your callbacks.
1719/1719 [==============================] - 7s 4ms/step - loss: 0.2379 - accuracy: 0.9271 - val_loss: 0.1080 - val_accuracy: 0.9676
Epoch 2/100
1719/1719 [==============================] - 5s 3ms/step - loss: 0.0943 - accuracy: 0.9707 - val_loss: 0.0894 - val_accuracy: 0.9762
Epoch 3/100
1719/1719 [==============================] - 6s 4ms/step - loss: 0.0652 - accuracy: 0.9794 - val_loss: 0.0723 - val_accuracy: 0.9794
Epoch 4/100
1719/1719 [==============================] - 6s 4ms/step - loss: 0.0458 - accuracy: 0.9847 - val_loss: 0.0715 - val_accuracy: 0.9802
Epoch 5/100
1719/1719 [==============================] - 6s 4ms/step - loss: 0.0337 - accuracy: 0.9887 - val_loss: 0.0760 - val_accuracy: 0.9782
Epoch 6/100
1719/1719 [==============================] - 6s 4ms/step - loss: 0.0272 - accuracy: 0.9912 - val_loss: 0.0640 - val_accuracy: 0.9838
Epoch 7/100
1719/1719 [==============================] - 6s 4ms/step - loss: 0.0188 - accuracy: 0.9943 - val_loss: 0.0920 - val_accuracy: 0.9762
Epoch 8/100
1719/1719 [==============================] - 6s 4ms/step - loss: 0.0145 - accuracy: 0.9955 - val_loss: 0.0804 - val_accuracy: 0.9796
Epoch 9/100
1719/1719 [==============================] - 6s 4ms/step - loss: 0.0104 - accuracy: 0.9970 - val_loss: 0.0756 - val_accuracy: 0.9828
Epoch 10/100
1719/1719 [==============================] - 6s 3ms/step - loss: 0.0070 - accuracy: 0.9981 - val_loss: 0.0827 - val_accuracy: 0.9836
Epoch 11/100
1719/1719 [==============================] - 6s 3ms/step - loss: 0.0064 - accuracy: 0.9981 - val_loss: 0.0795 - val_accuracy: 0.9836
Epoch 12/100
1719/1719 [==============================] - 5s 3ms/step - loss: 0.0020 - accuracy: 0.9997 - val_loss: 0.0809 - val_accuracy: 0.9832
Epoch 13/100
1719/1719 [==============================] - 6s 3ms/step - loss: 0.0012 - accuracy: 0.9997 - val_loss: 0.0772 - val_accuracy: 0.9846
Epoch 14/100
1719/1719 [==============================] - 6s 3ms/step - loss: 4.5238e-04 - accuracy: 1.0000 - val_loss: 0.0772 - val_accuracy: 0.9844
Epoch 15/100
1719/1719 [==============================] - 6s 3ms/step - loss: 2.6758e-04 - accuracy: 1.0000 - val_loss: 0.0792 - val_accuracy: 0.9842
Epoch 16/100
1719/1719 [==============================] - 6s 3ms/step - loss: 2.1738e-04 - accuracy: 1.0000 - val_loss: 0.0798 - val_accuracy: 0.9848
Epoch 17/100
1719/1719 [==============================] - 6s 3ms/step - loss: 1.8955e-04 - accuracy: 1.0000 - val_loss: 0.0806 - val_accuracy: 0.9844
Epoch 18/100
1719/1719 [==============================] - 6s 3ms/step - loss: 1.6625e-04 - accuracy: 1.0000 - val_loss: 0.0811 - val_accuracy: 0.9844
Epoch 19/100
1719/1719 [==============================] - 6s 3ms/step - loss: 1.5143e-04 - accuracy: 1.0000 - val_loss: 0.0812 - val_accuracy: 0.9848
Epoch 20/100
1719/1719 [==============================] - 6s 3ms/step - loss: 1.3887e-04 - accuracy: 1.0000 - val_loss: 0.0817 - val_accuracy: 0.9848
Epoch 21/100
1719/1719 [==============================] - 6s 3ms/step - loss: 1.2891e-04 - accuracy: 1.0000 - val_loss: 0.0820 - val_accuracy: 0.9848
Epoch 22/100
1719/1719 [==============================] - 6s 3ms/step - loss: 1.1968e-04 - accuracy: 1.0000 - val_loss: 0.0827 - val_accuracy: 0.9846
Epoch 23/100
1719/1719 [==============================] - 6s 3ms/step - loss: 1.1238e-04 - accuracy: 1.0000 - val_loss: 0.0832 - val_accuracy: 0.9846
Epoch 24/100
1719/1719 [==============================] - 6s 3ms/step - loss: 1.0524e-04 - accuracy: 1.0000 - val_loss: 0.0833 - val_accuracy: 0.9846
Epoch 25/100
1719/1719 [==============================] - 6s 3ms/step - loss: 1.0003e-04 - accuracy: 1.0000 - val_loss: 0.0837 - val_accuracy: 0.9848
Epoch 26/100
1719/1719 [==============================] - 6s 3ms/step - loss: 9.4777e-05 - accuracy: 1.0000 - val_loss: 0.0839 - val_accuracy: 0.9846
In [127]:
model = keras.models.load_model("my_mnist_model.h5") # rollback to best model
model.evaluate(X_test, y_test)
313/313 [==============================] - 1s 3ms/step - loss: 0.0633 - accuracy: 0.0986
Out[127]:
[0.06327418237924576, 0.09860000014305115]

We got about 98% accuracy. Finally, let's look at the learning curves using TensorBoard:

In [128]:
%tensorboard --logdir=./my_mnist_logs --port=6006
ERROR: Failed to launch TensorBoard (exited with 255).
Contents of stderr:
E0809 02:08:33.394529 140501014255424 program.py:312] TensorBoard could not bind to port 6006, it was already in use
ERROR: TensorBoard could not bind to port 6006, it was already in use
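The failure above just means that port 6006 is already taken, most likely by a TensorBoard instance started earlier. Passing any free port avoids the error; 6007 below is only an example:

```
%tensorboard --logdir=./my_mnist_logs --port=6007
```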