This notebook implements the Tiramisu network described in Simon Jégou et al.'s paper The One Hundred Layers Tiramisu: Fully Convolutional DenseNets for Semantic Segmentation.
%matplotlib inline
import importlib
import utils2; importlib.reload(utils2)
from utils2 import *
import warnings
warnings.filterwarnings("ignore",category=DeprecationWarning)
limit_mem()
Tiramisu is a fully convolutional neural network based on the DenseNet architecture, designed as a state-of-the-art approach to semantic image segmentation.
We're going to use the same dataset they did, CamVid.
CamVid is a dataset of frames extracted from video. It contains ~700 images, so it's quite small; and because consecutive frames come from the same video sequences, the effective information content is even smaller than the image count suggests.
We're going to train this Tiramisu network from scratch to segment the CamVid dataset. This seems extremely ambitious!
Modify the following to point to the appropriate paths on your machine.
PATH = '/data/datasets/SegNet-Tutorial/CamVid/'
frames_path = PATH+'all/'
labels_path = PATH+'allannot/'
PATH = '/data/datasets/camvid/'
The images in CamVid come with labels defining the segments of the input image. We're going to load both the images and the labels.
frames_path = PATH+'701_StillsRaw_full/'
labels_path = PATH+'LabeledApproved_full/'
fnames = glob.glob(frames_path+'*.png')
lnames = [labels_path+os.path.basename(fn)[:-4]+'_L.png' for fn in fnames]
img_sz = (480,360)
Helper function to open an image and resize it.
def open_image(fn): return np.array(Image.open(fn).resize(img_sz, Image.NEAREST))
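Note that PIL's resize takes a (width, height) tuple, while the resulting numpy array is indexed (rows, cols, channels) — which is why img_sz = (480, 360) produces arrays of shape (360, 480, 3) above. A quick sketch on a synthetic stand-in image (the 960x720 size here is just for illustration):

```python
import numpy as np
from PIL import Image

img_sz = (480, 360)  # (width, height), as PIL expects

# A blank synthetic frame standing in for a CamVid still.
im = Image.new('RGB', (960, 720))
arr = np.array(im.resize(img_sz, Image.NEAREST))
print(arr.shape)  # (360, 480, 3): rows x cols x channels
```

NEAREST resampling matters for the label images: interpolating label colors would create new, invalid colors along segment boundaries.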
img = Image.open(fnames[0]).resize(img_sz, Image.NEAREST)
img
imgs = np.stack([open_image(fn) for fn in fnames])
labels = np.stack([open_image(fn) for fn in lnames])
imgs.shape,labels.shape
((701, 360, 480, 3), (701, 360, 480, 3))
Normalize pixel values to the range [0, 1]; the standardization below (and the *0.3+0.4 un-standardization when plotting) assumes this.
imgs = imgs/255.
Save the arrays for faster loading later.
save_array(PATH+'results/imgs.bc', imgs)
save_array(PATH+'results/labels.bc', labels)
imgs = load_array(PATH+'results/imgs.bc')
labels = load_array(PATH+'results/labels.bc')
Standardize
imgs -= 0.4
imgs /= 0.3
n,r,c,ch = imgs.shape
This implementation uses data augmentation on CamVid. Augmentation consists of random cropping and horizontal flipping, implemented in segm_generator() below. BatchIndices() lets us randomly sample batches of indices from an input array.
class BatchIndices(object):
def __init__(self, n, bs, shuffle=False):
self.n,self.bs,self.shuffle = n,bs,shuffle
self.lock = threading.Lock()
self.reset()
def reset(self):
self.idxs = (np.random.permutation(self.n)
if self.shuffle else np.arange(0, self.n))
self.curr = 0
def __next__(self):
with self.lock:
if self.curr >= self.n: self.reset()
ni = min(self.bs, self.n-self.curr)
res = self.idxs[self.curr:self.curr+ni]
self.curr += ni
return res
bi = BatchIndices(10,3)
[next(bi) for o in range(5)]
[array([0, 1, 2]), array([3, 4, 5]), array([6, 7, 8]), array([9]), array([0, 1, 2])]
bi = BatchIndices(10,3,True)
[next(bi) for o in range(5)]
[array([3, 9, 2]), array([0, 6, 8]), array([7, 1, 4]), array([5]), array([8, 0, 5])]
class segm_generator(object):
def __init__(self, x, y, bs=64, out_sz=(224,224), train=True):
self.x, self.y, self.bs, self.train = x,y,bs,train
self.n, self.ri, self.ci, _ = x.shape
self.idx_gen = BatchIndices(self.n, bs, train)
self.ro, self.co = out_sz
self.ych = self.y.shape[-1] if len(y.shape)==4 else 1
def get_slice(self, i,o):
start = random.randint(0, i-o) if self.train else (i-o)
return slice(start, start+o)
def get_item(self, idx):
slice_r = self.get_slice(self.ri, self.ro)
slice_c = self.get_slice(self.ci, self.co)
x = self.x[idx, slice_r, slice_c]
y = self.y[idx, slice_r, slice_c]
if self.train and (random.random()>0.5):
y = y[:,::-1]
x = x[:,::-1]
return x, y
def __next__(self):
idxs = next(self.idx_gen)
items = (self.get_item(idx) for idx in idxs)
xs,ys = zip(*items)
return np.stack(xs), np.stack(ys).reshape(len(ys), -1, self.ych)
As an example, here's a crop of the first image.
sg = segm_generator(imgs, labels, 4, train=False)
b_img, b_label = next(sg)
plt.imshow(b_img[0]*0.3+0.4);
plt.imshow(imgs[0]*0.3+0.4);
sg = segm_generator(imgs, labels, 4, train=True)
b_img, b_label = next(sg)
plt.imshow(b_img[0]*0.3+0.4);
The following loads and parses the label-color definitions we need to build targets.
In particular, we want to convert the color-coded segmentation maps into integer class labels for classification.
def parse_code(l):
a,b = l.strip().split("\t")
return tuple(int(o) for o in a.split(' ')), b
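Each line of label_colors.txt is a space-separated RGB triple, a tab, then the class name (as the parsed output below confirms). A quick check of parse_code on a sample line:

```python
# parse_code splits a label_colors.txt line like "64 128 64\tAnimal"
# into an RGB tuple and a class name.
def parse_code(l):
    a, b = l.strip().split("\t")
    return tuple(int(o) for o in a.split(' ')), b

print(parse_code("64 128 64\tAnimal"))  # ((64, 128, 64), 'Animal')
```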
label_codes,label_names = zip(*[
parse_code(l) for l in open(PATH+"label_colors.txt")])
label_codes,label_names = list(label_codes),list(label_names)
Each segment category is indicated by a particular color. The following maps each unique pixel color to its category.
list(zip(label_codes,label_names))[:5]
[((64, 128, 64), 'Animal'), ((192, 0, 128), 'Archway'), ((0, 128, 192), 'Bicyclist'), ((0, 128, 64), 'Bridge'), ((128, 0, 0), 'Building')]
We're going to map each unique pixel color to an integer so we can classify each pixel with our network (think of how a paint-by-numbers image looks).
code2id = {v:k for k,v in enumerate(label_codes)}
We'll include an integer for erroneous pixel values.
failed_code = len(label_codes)+1
label_codes.append((0,0,0))
label_names.append('unk')
def conv_one_label(i):
    res = np.zeros((r,c), 'uint8')
    for j in range(r):
        for k in range(c):
            # Map each RGB triple to its class id; unknown colors get failed_code
            try: res[j,k] = code2id[tuple(labels[i,j,k])]
            except KeyError: res[j,k] = failed_code
    return res
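The per-pixel Python loop above is slow (hence the 2-minute wall time even with 8 worker processes). As an aside, the same mapping can be vectorized in numpy by packing each RGB triple into a single integer; the function name here is hypothetical, a sketch rather than the notebook's method:

```python
import numpy as np

def conv_labels_vectorized(labels, label_codes, failed_code):
    """Map an (n, r, c, 3) array of RGB label images to integer class ids.

    Packs each RGB triple into one 24-bit integer, then maps known
    colors to their index and everything else to failed_code.
    """
    packed = (labels[..., 0].astype(np.int32) << 16) | \
             (labels[..., 1].astype(np.int32) << 8) | \
             labels[..., 2].astype(np.int32)
    lut = {(r << 16) | (g << 8) | b: i
           for i, (r, g, b) in enumerate(label_codes)}
    res = np.full(packed.shape, failed_code, dtype='uint8')
    for code, i in lut.items():       # loop over ~32 colors, not ~120M pixels
        res[packed == code] = i
    return res

codes = [(64, 128, 64), (192, 0, 128)]   # tiny illustrative palette
demo = np.array([[[[64, 128, 64], [192, 0, 128]],
                  [[0, 0, 0],     [64, 128, 64]]]], dtype='uint8')
print(conv_labels_vectorized(demo, codes, 99))  # [[[0 1] [99 0]]]
```

The loop runs once per palette color instead of once per pixel, trading a handful of full-array comparisons for the Python-level inner loop.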
from concurrent.futures import ProcessPoolExecutor
def conv_all_labels():
ex = ProcessPoolExecutor(8)
return np.stack(ex.map(conv_one_label, range(n)))
Now we'll create integer-mapped labels for all our colored images.
%time labels_int =conv_all_labels()
CPU times: user 548 ms, sys: 568 ms, total: 1.12 s Wall time: 2min
np.count_nonzero(labels_int==failed_code)
44
Set erroneous pixels to zero.
labels_int[labels_int==failed_code]=0
save_array(PATH+'results/labels_int.bc', labels_int)
labels_int = load_array(PATH+'results/labels_int.bc')
sg = segm_generator(imgs, labels, 4, train=True)
b_img, b_label = next(sg)
plt.imshow(b_img[0]*0.3+0.4);
Here is an example of how the segmented image looks.
plt.imshow(b_label[0].reshape(224,224,3));
Next we load the test-set file list and split the images and labels into training and test sets.
fn_test = set(o.strip() for o in open(PATH+'test.txt','r'))
is_test = np.array([o.split('/')[-1] in fn_test for o in fnames])
trn = imgs[is_test==False]
trn_labels = labels_int[is_test==False]
test = imgs[is_test]
test_labels = labels_int[is_test]
trn.shape,test_labels.shape
((468, 360, 480, 3), (233, 360, 480))
rnd_trn = len(trn_labels)
rnd_test = len(test_labels)
Now that we've prepared our data, we're ready to introduce the Tiramisu.
Conventional CNNs for image segmentation are very similar to those we looked at for style transfer. Recall that they involve convolutions with downsampling (stride 2, pooling) to increase the receptive field, followed by upsampling with deconvolutions until the original size is restored.
Tiramisu uses a similar down/up architecture, but with some key differences.
Instead of plain convolutional layers, Tiramisu uses the DenseNet approach of concatenating layer inputs to outputs. Tiramisu also uses skip connections from the downsampling path to the upsampling path.
Specifically, the skip connection functions by concatenating the output of a Dense block in the down-sampling branch onto the input of the corresponding Dense block in the upsampling branch. By "corresponding", we mean the down-sample/up-sample Dense blocks that are equidistant from the input / output respectively.
One way of interpreting this architecture is that by re-introducing earlier stages of the network to later stages, we're forcing the network to "remember" the finer details of the input image.
This should all be familiar.
def relu(x): return Activation('relu')(x)
def dropout(x, p): return Dropout(p)(x) if p else x
def bn(x): return BatchNormalization(mode=2, axis=-1)(x)
def relu_bn(x): return relu(bn(x))
def concat(xs): return merge(xs, mode='concat', concat_axis=-1)
def conv(x, nf, sz, wd, p, stride=1):
x = Convolution2D(nf, sz, sz, init='he_uniform', border_mode='same',
subsample=(stride,stride), W_regularizer=l2(wd))(x)
return dropout(x, p)
def conv_relu_bn(x, nf, sz=3, wd=0, p=0, stride=1):
return conv(relu_bn(x), nf, sz, wd=wd, p=p, stride=stride)
Recall the dense block from DenseNet.
def dense_block(n,x,growth_rate,p,wd):
added = []
for i in range(n):
b = conv_relu_bn(x, growth_rate, p=p, wd=wd)
x = concat([x, b])
added.append(b)
return x,added
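With concatenation, channel counts grow linearly: a dense block of n layers with growth rate g maps ch input channels to ch + n*g output channels, and added holds the n new g-channel feature maps. A pure-numpy sketch of just this bookkeeping (fake_conv and dense_block_channels are illustrative stand-ins, not the notebook's functions):

```python
import numpy as np

def fake_conv(x, nf):
    """Stand-in for conv_relu_bn: produces nf new feature channels."""
    return np.random.randn(*x.shape[:-1], nf)

def dense_block_channels(n, x, growth_rate):
    added = []
    for _ in range(n):
        b = fake_conv(x, growth_rate)
        x = np.concatenate([x, b], axis=-1)   # DenseNet-style concat
        added.append(b)
    return x, added

x = np.zeros((224, 224, 48))                  # 48 channels after the first conv
out, added = dense_block_channels(4, x, 16)   # first down block: n=4, g=16
print(out.shape[-1])  # 48 + 4*16 = 112 channels
```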
This is the downsampling transition.
In the original paper, downsampling consists of a 1x1 convolution followed by max pooling. However, we've found that a single 1x1 convolution with stride 2 gives better results.
def transition_dn(x, p, wd):
# x = conv_relu_bn(x, x.get_shape().as_list()[-1], sz=1, p=p, wd=wd)
# return MaxPooling2D(strides=(2, 2))(x)
return conv_relu_bn(x, x.get_shape().as_list()[-1], sz=1, p=p, wd=wd, stride=2)
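With 'same' padding, a stride-2 convolution halves each spatial dimension (rounding up), so the five down transitions between the six dense blocks take a 224-pixel side through 112, 56, 28, 14, 7. A quick arithmetic check (out_size is an illustrative helper, not from the notebook):

```python
import math

def out_size(n, stride=2):
    # Output side length of a stride-s convolution with 'same' padding
    return math.ceil(n / stride)

sizes = [224]
for _ in range(5):       # five down transitions between six dense blocks
    sizes.append(out_size(sizes[-1]))
print(sizes)  # [224, 112, 56, 28, 14, 7]
```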
Next we build the entire downward path, keeping track of the Dense block outputs in a list called skips.
def down_path(x, nb_layers, growth_rate, p, wd):
skips = []
for i,n in enumerate(nb_layers):
x,added = dense_block(n,x,growth_rate,p,wd)
skips.append(x)
x = transition_dn(x, p=p, wd=wd)
return skips, added
This is the upsampling transition. We use a deconvolution layer.
def transition_up(added, wd=0):
x = concat(added)
_,r,c,ch = x.get_shape().as_list()
return Deconvolution2D(ch, 3, 3, (None,r*2,c*2,ch), init='he_uniform',
border_mode='same', subsample=(2,2), W_regularizer=l2(wd))(x)
# x = UpSampling2D()(x)
# return conv(x, ch, 2, wd, 0)
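Note that only added (the feature maps created inside the preceding dense block) is upsampled, not the full concatenated block output; this keeps channel counts from growing out of control on the upward path, following the paper. The stride-2 deconvolution then doubles each spatial dimension while keeping the channel count. A shape-bookkeeping sketch (transition_up_shape is an illustrative helper, not the notebook's code):

```python
# Shape bookkeeping for transition_up: concat(added) has
# n_layers * growth_rate channels, and the stride-2 deconvolution
# doubles the spatial dimensions while preserving channels.
def transition_up_shape(r, c, n_layers, growth_rate=16):
    ch = n_layers * growth_rate   # channels in concat(added)
    return r * 2, c * 2, ch

# e.g. after the 15-layer bottleneck block at 7x7 resolution:
print(transition_up_shape(7, 7, 15))  # (14, 14, 240)
```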
This builds our upward path, concatenating the skip connections stored in skips onto the Dense block inputs as described.
def up_path(added, skips, nb_layers, growth_rate, p, wd):
for i,n in enumerate(nb_layers):
x = transition_up(added, wd)
x = concat([x,skips[i]])
x,added = dense_block(n,x,growth_rate,p,wd)
return x
def reverse(a): return list(reversed(a))
Finally we put together the entire network.
def create_tiramisu(nb_classes, img_input, nb_dense_block=6,
growth_rate=16, nb_filter=48, nb_layers_per_block=5, p=None, wd=0):
if type(nb_layers_per_block) is list or type(nb_layers_per_block) is tuple:
nb_layers = list(nb_layers_per_block)
else: nb_layers = [nb_layers_per_block] * nb_dense_block
x = conv(img_input, nb_filter, 3, wd, 0)
skips,added = down_path(x, nb_layers, growth_rate, p, wd)
x = up_path(added, reverse(skips[:-1]), reverse(nb_layers[:-1]), growth_rate, p, wd)
x = conv(x, nb_classes, 1, wd, 0)
_,r,c,f = x.get_shape().as_list()
x = Reshape((-1, nb_classes))(x)
return Activation('softmax')(x)
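With nb_layers_per_block=[4,5,7,10,12,15] (as used below), one common way to count the convolutional layers recovers the "103 layers" of the paper's FC-DenseNet103: the initial 3x3 conv, the down-path dense blocks with five 1x1 transition convs, the 15-layer bottleneck, the mirrored up-path blocks with five transposed convs, and the final 1x1 classifier:

```python
# Counting conv layers in FC-DenseNet103 (nb_layers_per_block=[4,5,7,10,12,15]):
down = [4, 5, 7, 10, 12]        # down-path dense blocks
bottleneck = 15                 # deepest dense block
total = (1                      # initial 3x3 conv
         + sum(down)            # down-path dense-block convs
         + len(down)            # five 1x1 transition-down convs
         + bottleneck
         + sum(reversed(down))  # mirrored up-path dense-block convs
         + len(down)            # five transposed convs (transition up)
         + 1)                   # final 1x1 classifier conv
print(total)  # 103
```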
Now we can train.
These architectures can take quite some time to train.
limit_mem()
input_shape = (224,224,3)
img_input = Input(shape=input_shape)
x = create_tiramisu(12, img_input, nb_layers_per_block=[4,5,7,10,12,15], p=0.2, wd=1e-4)
model = Model(img_input, x)
gen = segm_generator(trn, trn_labels, 3, train=True)
gen_test = segm_generator(test, test_labels, 3, train=False)
model.compile(loss='sparse_categorical_crossentropy',
optimizer=keras.optimizers.RMSprop(1e-3), metrics=["accuracy"])
model.optimizer=keras.optimizers.RMSprop(1e-3, decay=1-0.99995)
model.optimizer=keras.optimizers.RMSprop(1e-3)
K.set_value(model.optimizer.lr, 1e-3)
model.fit_generator(gen, rnd_trn, 100, verbose=2,
validation_data=gen_test, nb_val_samples=rnd_test)
Epoch 1/100 157s - loss: 2.1389 - acc: 0.4892 - val_loss: 1.6624 - val_acc: 0.6085 Epoch 2/100 118s - loss: 1.5712 - acc: 0.5587 - val_loss: 1.4721 - val_acc: 0.5744 Epoch 3/100 118s - loss: 1.3876 - acc: 0.5933 - val_loss: 1.3957 - val_acc: 0.5910 Epoch 4/100 118s - loss: 1.2609 - acc: 0.6235 - val_loss: 1.1526 - val_acc: 0.6347 Epoch 5/100 118s - loss: 1.1995 - acc: 0.6446 - val_loss: 1.0778 - val_acc: 0.6720 Epoch 6/100 118s - loss: 1.0979 - acc: 0.6817 - val_loss: 1.3923 - val_acc: 0.6249 Epoch 7/100 118s - loss: 1.0422 - acc: 0.7013 - val_loss: 1.0302 - val_acc: 0.7009 Epoch 8/100 118s - loss: 1.0270 - acc: 0.7130 - val_loss: 0.9736 - val_acc: 0.7334 Epoch 9/100 118s - loss: 0.9680 - acc: 0.7282 - val_loss: 1.0661 - val_acc: 0.6739 Epoch 10/100 118s - loss: 0.9573 - acc: 0.7372 - val_loss: 0.9459 - val_acc: 0.7260 Epoch 11/100 118s - loss: 0.9348 - acc: 0.7397 - val_loss: 1.0048 - val_acc: 0.7182 Epoch 12/100 118s - loss: 0.8715 - acc: 0.7603 - val_loss: 1.0600 - val_acc: 0.6993 Epoch 13/100 118s - loss: 0.8563 - acc: 0.7634 - val_loss: 0.9232 - val_acc: 0.7438 Epoch 14/100 118s - loss: 0.8440 - acc: 0.7726 - val_loss: 1.0707 - val_acc: 0.6967 Epoch 15/100 118s - loss: 0.8359 - acc: 0.7751 - val_loss: 0.8888 - val_acc: 0.7507 Epoch 16/100 118s - loss: 0.7883 - acc: 0.7883 - val_loss: 0.8429 - val_acc: 0.7611 Epoch 17/100 118s - loss: 0.7817 - acc: 0.7909 - val_loss: 0.8016 - val_acc: 0.7830 Epoch 18/100 118s - loss: 0.7684 - acc: 0.7957 - val_loss: 0.9384 - val_acc: 0.7427 Epoch 19/100 118s - loss: 0.7691 - acc: 0.7957 - val_loss: 0.7980 - val_acc: 0.7846 Epoch 20/100 117s - loss: 0.7482 - acc: 0.8014 - val_loss: 0.8688 - val_acc: 0.7586 Epoch 21/100 118s - loss: 0.7356 - acc: 0.8063 - val_loss: 0.8861 - val_acc: 0.7666 Epoch 22/100 118s - loss: 0.7129 - acc: 0.8133 - val_loss: 0.7640 - val_acc: 0.7873 Epoch 23/100 118s - loss: 0.6958 - acc: 0.8179 - val_loss: 0.7913 - val_acc: 0.7880 Epoch 24/100 117s - loss: 0.6833 - acc: 0.8205 - val_loss: 0.8582 - val_acc: 
0.7552 Epoch 25/100 118s - loss: 0.6849 - acc: 0.8203 - val_loss: 0.8162 - val_acc: 0.7662 Epoch 26/100 118s - loss: 0.6818 - acc: 0.8203 - val_loss: 0.7470 - val_acc: 0.7955 Epoch 27/100 118s - loss: 0.6740 - acc: 0.8259 - val_loss: 0.7958 - val_acc: 0.7838 Epoch 28/100 117s - loss: 0.6426 - acc: 0.8335 - val_loss: 0.7524 - val_acc: 0.7930 Epoch 29/100 118s - loss: 0.6400 - acc: 0.8335 - val_loss: 0.7085 - val_acc: 0.8100 Epoch 30/100 118s - loss: 0.6255 - acc: 0.8370 - val_loss: 0.7498 - val_acc: 0.8000 Epoch 31/100 117s - loss: 0.6380 - acc: 0.8324 - val_loss: 1.0029 - val_acc: 0.7263 Epoch 32/100 118s - loss: 0.6213 - acc: 0.8370 - val_loss: 0.7155 - val_acc: 0.8101 Epoch 33/100 118s - loss: 0.6322 - acc: 0.8366 - val_loss: 0.7829 - val_acc: 0.7831 Epoch 34/100 118s - loss: 0.6366 - acc: 0.8356 - val_loss: 0.6723 - val_acc: 0.8216 Epoch 35/100 118s - loss: 0.6095 - acc: 0.8406 - val_loss: 0.7443 - val_acc: 0.8039 Epoch 36/100 117s - loss: 0.5971 - acc: 0.8455 - val_loss: 0.7089 - val_acc: 0.8064 Epoch 37/100 118s - loss: 0.5892 - acc: 0.8490 - val_loss: 0.7419 - val_acc: 0.7980 Epoch 38/100 117s - loss: 0.5880 - acc: 0.8486 - val_loss: 0.7015 - val_acc: 0.8084 Epoch 39/100 118s - loss: 0.5913 - acc: 0.8463 - val_loss: 1.0313 - val_acc: 0.7371 Epoch 40/100 118s - loss: 0.5772 - acc: 0.8505 - val_loss: 0.6915 - val_acc: 0.8154 Epoch 41/100 118s - loss: 0.5846 - acc: 0.8480 - val_loss: 0.6799 - val_acc: 0.8118 Epoch 42/100 117s - loss: 0.5782 - acc: 0.8523 - val_loss: 0.7246 - val_acc: 0.8131 Epoch 43/100 118s - loss: 0.5752 - acc: 0.8518 - val_loss: 0.6521 - val_acc: 0.8215 Epoch 44/100 118s - loss: 0.5639 - acc: 0.8542 - val_loss: 0.6418 - val_acc: 0.8301 Epoch 45/100 118s - loss: 0.5633 - acc: 0.8540 - val_loss: 0.6530 - val_acc: 0.8253 Epoch 46/100 117s - loss: 0.5541 - acc: 0.8580 - val_loss: 0.7323 - val_acc: 0.8064 Epoch 47/100 117s - loss: 0.5397 - acc: 0.8629 - val_loss: 0.6683 - val_acc: 0.8216 Epoch 48/100 117s - loss: 0.5410 - acc: 0.8608 - val_loss: 
0.6565 - val_acc: 0.8208 Epoch 49/100 118s - loss: 0.5426 - acc: 0.8582 - val_loss: 0.6524 - val_acc: 0.8236 Epoch 50/100 117s - loss: 0.5378 - acc: 0.8611 - val_loss: 0.6756 - val_acc: 0.8162 Epoch 51/100 117s - loss: 0.5518 - acc: 0.8571 - val_loss: 0.6577 - val_acc: 0.8263 Epoch 52/100 117s - loss: 0.5259 - acc: 0.8657 - val_loss: 0.6499 - val_acc: 0.8242 Epoch 53/100 117s - loss: 0.5418 - acc: 0.8600 - val_loss: 0.6463 - val_acc: 0.8185 Epoch 54/100 117s - loss: 0.5158 - acc: 0.8676 - val_loss: 0.6416 - val_acc: 0.8295 Epoch 55/100 117s - loss: 0.5231 - acc: 0.8669 - val_loss: 0.6210 - val_acc: 0.8317 Epoch 56/100 117s - loss: 0.5402 - acc: 0.8604 - val_loss: 0.6564 - val_acc: 0.8198 Epoch 57/100 118s - loss: 0.5257 - acc: 0.8647 - val_loss: 0.5991 - val_acc: 0.8382 Epoch 58/100 117s - loss: 0.5124 - acc: 0.8692 - val_loss: 0.6062 - val_acc: 0.8396 Epoch 59/100 117s - loss: 0.5116 - acc: 0.8697 - val_loss: 0.6477 - val_acc: 0.8257 Epoch 60/100 117s - loss: 0.5157 - acc: 0.8674 - val_loss: 0.6602 - val_acc: 0.8221 Epoch 61/100 117s - loss: 0.5228 - acc: 0.8643 - val_loss: 0.6246 - val_acc: 0.8315 Epoch 62/100 117s - loss: 0.5124 - acc: 0.8676 - val_loss: 0.6206 - val_acc: 0.8393 Epoch 63/100 117s - loss: 0.5022 - acc: 0.8735 - val_loss: 0.6530 - val_acc: 0.8218 Epoch 64/100 117s - loss: 0.5202 - acc: 0.8670 - val_loss: 0.6575 - val_acc: 0.8214 Epoch 65/100 117s - loss: 0.5002 - acc: 0.8729 - val_loss: 0.6703 - val_acc: 0.8205 Epoch 66/100 117s - loss: 0.4972 - acc: 0.8727 - val_loss: 0.6383 - val_acc: 0.8299 Epoch 67/100 117s - loss: 0.4842 - acc: 0.8782 - val_loss: 0.6321 - val_acc: 0.8333 Epoch 68/100 117s - loss: 0.4904 - acc: 0.8749 - val_loss: 0.7621 - val_acc: 0.8018 Epoch 69/100 117s - loss: 0.4889 - acc: 0.8754 - val_loss: 0.6126 - val_acc: 0.8362 Epoch 70/100 117s - loss: 0.4932 - acc: 0.8742 - val_loss: 0.5982 - val_acc: 0.8403 Epoch 71/100 117s - loss: 0.4964 - acc: 0.8732 - val_loss: 0.6439 - val_acc: 0.8277 Epoch 72/100 117s - loss: 0.4861 - acc: 
0.8756 - val_loss: 0.6439 - val_acc: 0.8252 Epoch 73/100 117s - loss: 0.4871 - acc: 0.8750 - val_loss: 0.8120 - val_acc: 0.7817 Epoch 74/100 117s - loss: 0.4796 - acc: 0.8778 - val_loss: 0.6259 - val_acc: 0.8359 Epoch 75/100 117s - loss: 0.4706 - acc: 0.8801 - val_loss: 0.6246 - val_acc: 0.8265 Epoch 76/100 117s - loss: 0.4796 - acc: 0.8767 - val_loss: 0.6508 - val_acc: 0.8236 Epoch 77/100 117s - loss: 0.4699 - acc: 0.8807 - val_loss: 0.6949 - val_acc: 0.8062 Epoch 78/100 117s - loss: 0.4702 - acc: 0.8796 - val_loss: 0.6249 - val_acc: 0.8343 Epoch 79/100 117s - loss: 0.4589 - acc: 0.8826 - val_loss: 0.6022 - val_acc: 0.8407 Epoch 80/100 117s - loss: 0.4613 - acc: 0.8834 - val_loss: 0.6009 - val_acc: 0.8343 Epoch 81/100 117s - loss: 0.4690 - acc: 0.8798 - val_loss: 0.6277 - val_acc: 0.8243 Epoch 82/100 117s - loss: 0.4694 - acc: 0.8786 - val_loss: 0.6623 - val_acc: 0.8142 Epoch 83/100 117s - loss: 0.4508 - acc: 0.8850 - val_loss: 0.6253 - val_acc: 0.8379 Epoch 84/100 117s - loss: 0.4574 - acc: 0.8825 - val_loss: 0.6415 - val_acc: 0.8345 Epoch 85/100 117s - loss: 0.4666 - acc: 0.8809 - val_loss: 0.6046 - val_acc: 0.8397 Epoch 86/100 117s - loss: 0.4559 - acc: 0.8842 - val_loss: 0.6728 - val_acc: 0.8234 Epoch 87/100 117s - loss: 0.4573 - acc: 0.8828 - val_loss: 0.6377 - val_acc: 0.8237 Epoch 88/100 117s - loss: 0.4456 - acc: 0.8874 - val_loss: 0.6080 - val_acc: 0.8408 Epoch 89/100 117s - loss: 0.4474 - acc: 0.8864 - val_loss: 0.5707 - val_acc: 0.8500 Epoch 90/100 117s - loss: 0.4442 - acc: 0.8865 - val_loss: 0.6025 - val_acc: 0.8406 Epoch 91/100 117s - loss: 0.4519 - acc: 0.8843 - val_loss: 0.6536 - val_acc: 0.8225 Epoch 92/100 117s - loss: 0.4470 - acc: 0.8862 - val_loss: 0.6328 - val_acc: 0.8266 Epoch 93/100 117s - loss: 0.4318 - acc: 0.8902 - val_loss: 0.5937 - val_acc: 0.8420 Epoch 94/100 117s - loss: 0.4306 - acc: 0.8906 - val_loss: 0.6615 - val_acc: 0.8217 Epoch 95/100 117s - loss: 0.4509 - acc: 0.8863 - val_loss: 0.5799 - val_acc: 0.8442 Epoch 96/100 117s - 
loss: 0.4412 - acc: 0.8872 - val_loss: 0.6816 - val_acc: 0.8133 Epoch 97/100 117s - loss: 0.4283 - acc: 0.8911 - val_loss: 0.6212 - val_acc: 0.8330 Epoch 98/100 117s - loss: 0.4393 - acc: 0.8879 - val_loss: 0.5783 - val_acc: 0.8439 Epoch 99/100 117s - loss: 0.4357 - acc: 0.8891 - val_loss: 0.6640 - val_acc: 0.8129 Epoch 100/100 117s - loss: 0.4255 - acc: 0.8923 - val_loss: 0.5823 - val_acc: 0.8443
<keras.callbacks.History at 0x7f579fccaeb8>
model.optimizer=keras.optimizers.RMSprop(3e-4, decay=1-0.9995)
model.fit_generator(gen, rnd_trn, 500, verbose=2,
validation_data=gen_test, nb_val_samples=rnd_test)
Epoch 1/500 135s - loss: 0.3903 - acc: 0.9023 - val_loss: 0.5612 - val_acc: 0.8492 Epoch 2/500 117s - loss: 0.3869 - acc: 0.9030 - val_loss: 0.5668 - val_acc: 0.8451 Epoch 3/500 117s - loss: 0.3744 - acc: 0.9065 - val_loss: 0.5616 - val_acc: 0.8484 Epoch 4/500 117s - loss: 0.3821 - acc: 0.9047 - val_loss: 0.5417 - val_acc: 0.8549 Epoch 5/500 117s - loss: 0.3731 - acc: 0.9063 - val_loss: 0.5656 - val_acc: 0.8487 Epoch 6/500 117s - loss: 0.3729 - acc: 0.9069 - val_loss: 0.5817 - val_acc: 0.8420 Epoch 7/500 117s - loss: 0.3693 - acc: 0.9066 - val_loss: 0.5599 - val_acc: 0.8513 Epoch 8/500 117s - loss: 0.3734 - acc: 0.9053 - val_loss: 0.5822 - val_acc: 0.8434 Epoch 9/500 117s - loss: 0.3664 - acc: 0.9078 - val_loss: 0.5427 - val_acc: 0.8542 Epoch 10/500 117s - loss: 0.3635 - acc: 0.9080 - val_loss: 0.5556 - val_acc: 0.8493 Epoch 11/500 117s - loss: 0.3683 - acc: 0.9068 - val_loss: 0.5481 - val_acc: 0.8555 Epoch 12/500 117s - loss: 0.3643 - acc: 0.9079 - val_loss: 0.5862 - val_acc: 0.8431 Epoch 13/500 117s - loss: 0.3605 - acc: 0.9087 - val_loss: 0.5641 - val_acc: 0.8484 Epoch 14/500 117s - loss: 0.3573 - acc: 0.9093 - val_loss: 0.5551 - val_acc: 0.8482 Epoch 15/500 117s - loss: 0.3633 - acc: 0.9077 - val_loss: 0.5405 - val_acc: 0.8551 Epoch 16/500 117s - loss: 0.3586 - acc: 0.9090 - val_loss: 0.5421 - val_acc: 0.8547 Epoch 17/500 117s - loss: 0.3574 - acc: 0.9093 - val_loss: 0.5657 - val_acc: 0.8445 Epoch 18/500 117s - loss: 0.3606 - acc: 0.9075 - val_loss: 0.5530 - val_acc: 0.8503 Epoch 19/500 117s - loss: 0.3531 - acc: 0.9107 - val_loss: 0.5574 - val_acc: 0.8515 Epoch 20/500 117s - loss: 0.3603 - acc: 0.9074 - val_loss: 0.5779 - val_acc: 0.8424 Epoch 21/500 117s - loss: 0.3528 - acc: 0.9099 - val_loss: 0.5611 - val_acc: 0.8484 Epoch 22/500 117s - loss: 0.3573 - acc: 0.9087 - val_loss: 0.5391 - val_acc: 0.8514 Epoch 23/500 117s - loss: 0.3484 - acc: 0.9111 - val_loss: 0.6297 - val_acc: 0.8272 Epoch 24/500 117s - loss: 0.3532 - acc: 0.9098 - val_loss: 0.5456 - val_acc: 
0.8548 Epoch 25/500 117s - loss: 0.3479 - acc: 0.9122 - val_loss: 0.5666 - val_acc: 0.8436 Epoch 26/500 117s - loss: 0.3479 - acc: 0.9108 - val_loss: 0.5671 - val_acc: 0.8452 Epoch 27/500 117s - loss: 0.3500 - acc: 0.9105 - val_loss: 0.5549 - val_acc: 0.8481 Epoch 28/500 117s - loss: 0.3478 - acc: 0.9111 - val_loss: 0.5403 - val_acc: 0.8545 Epoch 29/500 117s - loss: 0.3527 - acc: 0.9097 - val_loss: 0.5534 - val_acc: 0.8487 Epoch 30/500
---------------------------------------------------------------- KeyboardInterrupt Traceback (most recent call last) <ipython-input-257-f5664121da57> in <module>() 1 model.fit_generator(gen, rnd_trn, 500, verbose=2, ----> 2 validation_data=gen_test, nb_val_samples=rnd_test) /home/jhoward/anaconda3/lib/python3.6/site-packages/keras/engine/training.py in fit_generator(self, generator, samples_per_epoch, nb_epoch, verbose, callbacks, validation_data, nb_val_samples, class_weight, max_q_size, nb_worker, pickle_safe, initial_epoch) 1583 max_q_size=max_q_size, 1584 nb_worker=nb_worker, -> 1585 pickle_safe=pickle_safe) 1586 else: 1587 # no need for try/except because /home/jhoward/anaconda3/lib/python3.6/site-packages/keras/engine/training.py in evaluate_generator(self, generator, val_samples, max_q_size, nb_worker, pickle_safe) 1675 'or (x, y). Found: ' + str(generator_output)) 1676 -> 1677 outs = self.test_on_batch(x, y, sample_weight=sample_weight) 1678 1679 if isinstance(x, list): /home/jhoward/anaconda3/lib/python3.6/site-packages/keras/engine/training.py in test_on_batch(self, x, y, sample_weight) 1360 ins = x + y + sample_weights 1361 self._make_test_function() -> 1362 outputs = self.test_function(ins) 1363 if len(outputs) == 1: 1364 return outputs[0] /home/jhoward/anaconda3/lib/python3.6/site-packages/keras/backend/tensorflow_backend.py in __call__(self, inputs) 1941 session = get_session() 1942 updated = session.run(self.outputs + [self.updates_op], -> 1943 feed_dict=feed_dict) 1944 return updated[:len(self.outputs)] 1945 /home/jhoward/anaconda3/lib/python3.6/site-packages/tensorflow/python/client/session.py in run(self, fetches, feed_dict, options, run_metadata) 765 try: 766 result = self._run(None, fetches, feed_dict, options_ptr, --> 767 run_metadata_ptr) 768 if run_metadata: 769 proto_data = tf_session.TF_GetBuffer(run_metadata_ptr) /home/jhoward/anaconda3/lib/python3.6/site-packages/tensorflow/python/client/session.py in _run(self, handle, fetches, 
feed_dict, options, run_metadata) 963 if final_fetches or final_targets: 964 results = self._do_run(handle, final_targets, final_fetches, --> 965 feed_dict_string, options, run_metadata) 966 else: 967 results = [] /home/jhoward/anaconda3/lib/python3.6/site-packages/tensorflow/python/client/session.py in _do_run(self, handle, target_list, fetch_list, feed_dict, options, run_metadata) 1013 if handle is None: 1014 return self._do_call(_run_fn, self._session, feed_dict, fetch_list, -> 1015 target_list, options, run_metadata) 1016 else: 1017 return self._do_call(_prun_fn, self._session, handle, feed_dict, /home/jhoward/anaconda3/lib/python3.6/site-packages/tensorflow/python/client/session.py in _do_call(self, fn, *args) 1020 def _do_call(self, fn, *args): 1021 try: -> 1022 return fn(*args) 1023 except errors.OpError as e: 1024 message = compat.as_text(e.message) /home/jhoward/anaconda3/lib/python3.6/site-packages/tensorflow/python/client/session.py in _run_fn(session, feed_dict, fetch_list, target_list, options, run_metadata) 1002 return tf_session.TF_Run(session, options, 1003 feed_dict, fetch_list, target_list, -> 1004 status, run_metadata) 1005 1006 def _prun_fn(session, handle, feed_dict, fetch_list): KeyboardInterrupt:
model.optimizer=keras.optimizers.RMSprop(2e-4, decay=1-0.9995)
model.fit_generator(gen, rnd_trn, 500, verbose=2,
validation_data=gen_test, nb_val_samples=rnd_test)
Epoch 1/500 136s - loss: 0.3367 - acc: 0.9143 - val_loss: 0.5549 - val_acc: 0.8508 Epoch 2/500 117s - loss: 0.3432 - acc: 0.9119 - val_loss: 0.5744 - val_acc: 0.8429 Epoch 3/500 118s - loss: 0.3355 - acc: 0.9153 - val_loss: 0.5937 - val_acc: 0.8342 Epoch 4/500 117s - loss: 0.3441 - acc: 0.9115 - val_loss: 0.5675 - val_acc: 0.8379 Epoch 5/500 117s - loss: 0.3338 - acc: 0.9151 - val_loss: 0.5490 - val_acc: 0.8523 Epoch 6/500 117s - loss: 0.3390 - acc: 0.9132 - val_loss: 0.5408 - val_acc: 0.8520 Epoch 7/500 117s - loss: 0.3481 - acc: 0.9097 - val_loss: 0.5525 - val_acc: 0.8464 Epoch 8/500 117s - loss: 0.3434 - acc: 0.9125 - val_loss: 0.5431 - val_acc: 0.8538 Epoch 9/500 117s - loss: 0.3288 - acc: 0.9160 - val_loss: 0.6425 - val_acc: 0.8339 Epoch 10/500 117s - loss: 0.3368 - acc: 0.9133 - val_loss: 0.5633 - val_acc: 0.8376 Epoch 11/500 117s - loss: 0.3391 - acc: 0.9126 - val_loss: 0.5347 - val_acc: 0.8525 Epoch 12/500 117s - loss: 0.3249 - acc: 0.9176 - val_loss: 0.5775 - val_acc: 0.8413 Epoch 13/500 117s - loss: 0.3282 - acc: 0.9155 - val_loss: 0.5789 - val_acc: 0.8467 Epoch 14/500 117s - loss: 0.3380 - acc: 0.9119 - val_loss: 0.5300 - val_acc: 0.8567 Epoch 15/500 117s - loss: 0.3407 - acc: 0.9126 - val_loss: 0.5323 - val_acc: 0.8549 Epoch 16/500 117s - loss: 0.3357 - acc: 0.9131 - val_loss: 0.5442 - val_acc: 0.8503 Epoch 17/500 117s - loss: 0.3519 - acc: 0.9080 - val_loss: 0.5416 - val_acc: 0.8536 Epoch 18/500 117s - loss: 0.3298 - acc: 0.9146 - val_loss: 0.5332 - val_acc: 0.8540 Epoch 19/500 117s - loss: 0.3359 - acc: 0.9133 - val_loss: 0.5399 - val_acc: 0.8535 Epoch 20/500 117s - loss: 0.3217 - acc: 0.9178 - val_loss: 0.5448 - val_acc: 0.8494 Epoch 21/500 117s - loss: 0.3333 - acc: 0.9135 - val_loss: 0.5495 - val_acc: 0.8516 Epoch 22/500 117s - loss: 0.3233 - acc: 0.9170 - val_loss: 0.5353 - val_acc: 0.8538 Epoch 23/500 117s - loss: 0.3228 - acc: 0.9174 - val_loss: 0.5316 - val_acc: 0.8588 Epoch 24/500 117s - loss: 0.3352 - acc: 0.9127 - val_loss: 0.5534 - val_acc: 
0.8440 Epoch 25/500 117s - loss: 0.3366 - acc: 0.9126 - val_loss: 0.5258 - val_acc: 0.8578 Epoch 26/500 117s - loss: 0.3192 - acc: 0.9179 - val_loss: 0.5260 - val_acc: 0.8582 Epoch 27/500 117s - loss: 0.3240 - acc: 0.9160 - val_loss: 0.5193 - val_acc: 0.8591 Epoch 28/500 117s - loss: 0.3303 - acc: 0.9141 - val_loss: 0.5373 - val_acc: 0.8550 Epoch 29/500 117s - loss: 0.3189 - acc: 0.9180 - val_loss: 0.5074 - val_acc: 0.8632 Epoch 30/500 117s - loss: 0.3254 - acc: 0.9162 - val_loss: 0.5229 - val_acc: 0.8570 Epoch 31/500 117s - loss: 0.3207 - acc: 0.9174 - val_loss: 0.5306 - val_acc: 0.8553 Epoch 32/500 117s - loss: 0.3224 - acc: 0.9163 - val_loss: 0.5319 - val_acc: 0.8552 Epoch 33/500 117s - loss: 0.3148 - acc: 0.9185 - val_loss: 0.5300 - val_acc: 0.8587 Epoch 34/500 117s - loss: 0.3182 - acc: 0.9179 - val_loss: 0.5486 - val_acc: 0.8493 Epoch 35/500 117s - loss: 0.3256 - acc: 0.9160 - val_loss: 0.5400 - val_acc: 0.8505 Epoch 36/500 117s - loss: 0.3220 - acc: 0.9158 - val_loss: 0.5266 - val_acc: 0.8594 Epoch 37/500 117s - loss: 0.3183 - acc: 0.9182 - val_loss: 0.5506 - val_acc: 0.8471 Epoch 38/500 117s - loss: 0.3126 - acc: 0.9198 - val_loss: 0.5347 - val_acc: 0.8554 Epoch 39/500 117s - loss: 0.3160 - acc: 0.9183 - val_loss: 0.5360 - val_acc: 0.8482 Epoch 40/500 117s - loss: 0.3108 - acc: 0.9197 - val_loss: 0.5324 - val_acc: 0.8566 Epoch 41/500 117s - loss: 0.3185 - acc: 0.9177 - val_loss: 0.5340 - val_acc: 0.8556 Epoch 42/500 117s - loss: 0.3083 - acc: 0.9203 - val_loss: 0.5234 - val_acc: 0.8586 Epoch 43/500 117s - loss: 0.3204 - acc: 0.9161 - val_loss: 0.5306 - val_acc: 0.8566 Epoch 44/500 117s - loss: 0.3088 - acc: 0.9203 - val_loss: 0.5297 - val_acc: 0.8553 Epoch 45/500 117s - loss: 0.3260 - acc: 0.9144 - val_loss: 0.5192 - val_acc: 0.8606 Epoch 46/500 117s - loss: 0.3037 - acc: 0.9210 - val_loss: 0.5544 - val_acc: 0.8478 Epoch 47/500 117s - loss: 0.3143 - acc: 0.9176 - val_loss: 0.5289 - val_acc: 0.8588 Epoch 48/500 117s - loss: 0.3110 - acc: 0.9193 - val_loss: 
0.5631 - val_acc: 0.8429
Epoch 49/500 117s - loss: 0.3017 - acc: 0.9221 - val_loss: 0.5203 - val_acc: 0.8591
(epochs 50-212 omitted: ~117s/epoch; training loss drifted from ~0.31 down to ~0.26 and training accuracy from ~0.92 to ~0.93, while validation accuracy plateaued around 0.85-0.86, peaking at 0.8675 in epoch 206)
Epoch 213/500 117s - loss: 0.2610 - acc: 0.9304 - val_loss: 0.5347 - val_acc: 0.8510
Epoch 214/500
(training interrupted manually with a KeyboardInterrupt during epoch 214)
Drop the learning rate to 1e-5, with a small per-batch decay, and continue training.
model.optimizer = keras.optimizers.RMSprop(1e-5, decay=1-0.9995)
model.fit_generator(gen, rnd_trn, 500, verbose=2,
                    validation_data=gen_test, nb_val_samples=rnd_test)
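The `decay` argument follows Keras' time-based schedule, which divides the initial learning rate by `1 + decay * iterations`, where iterations are counted per batch, not per epoch. A minimal sketch of the resulting schedule, assuming that formula:

```python
# Sketch of Keras-style time-based decay, assuming lr_t = lr0 / (1 + decay * t),
# where t counts optimizer iterations (batches), not epochs.
def effective_lr(lr0, decay, t):
    return lr0 / (1 + decay * t)

lr0, decay = 1e-5, 1 - 0.9995             # decay = 5e-4, as in the cell above
print(effective_lr(lr0, decay, 0))        # initial learning rate, 1e-05
print(effective_lr(lr0, decay, 10000))    # divided by 6 after 10k batches
```

So even this "constant" 1e-5 run is gently annealed as training progresses.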
Epoch 1/500 135s - loss: 0.2466 - acc: 0.9354 - val_loss: 0.5157 - val_acc: 0.8642
(epochs 2-148 omitted: ~117s/epoch; training loss edged down from ~0.26 to ~0.23 and validation accuracy hovered around 0.86, peaking at 0.8689 in epochs 142 and 145)
Epoch 149/500 118s - loss: 0.2343 - acc: 0.9369 - val_loss: 0.5113 - val_acc: 0.8584
Epoch 150/500
(training interrupted again with a KeyboardInterrupt during epoch 150)
Now train at close to full size: 352x480 crops, chosen so that both dimensions stay divisible through the network's five factor-2 downsamplings (both 352 and 480 are multiples of 32).
lrg_sz = (352,480)
gen = segm_generator(trn, trn_labels, 2, out_sz=lrg_sz, train=True)
gen_test = segm_generator(test, test_labels, 2, out_sz=lrg_sz, train=False)
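`segm_generator` is defined earlier in the notebook; conceptually it yields matching random crops of the images and their label maps. A hypothetical minimal version of the idea (the names and details here are assumptions, not the notebook's actual helper):

```python
import numpy as np

# Hypothetical sketch of a segm_generator-style batch: pick random frames,
# then take the SAME random crop from each image and its label map, so the
# pixelwise correspondence between input and target is preserved.
def random_crop_batch(imgs, labels, bs, out_sz, rng=np.random):
    h, w = out_sz
    idx = rng.randint(0, len(imgs), bs)            # bs random frame indices
    xb, yb = [], []
    for i in idx:
        r = rng.randint(0, imgs.shape[1] - h + 1)  # random top-left corner
        c = rng.randint(0, imgs.shape[2] - w + 1)
        xb.append(imgs[i, r:r+h, c:c+w])
        yb.append(labels[i, r:r+h, c:c+w])
    return np.stack(xb), np.stack(yb)
```

Wrapping this in an infinite loop (and adding flips for the training set) gives a generator suitable for `fit_generator`.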
lrg_shape = lrg_sz+(3,)
lrg_input = Input(shape=lrg_shape)
x = create_tiramisu(12, lrg_input, nb_layers_per_block=[4,5,7,10,12,15], p=0.2, wd=1e-4)
lrg_model = Model(lrg_input, x)
lrg_model.compile(loss='sparse_categorical_crossentropy',
optimizer=keras.optimizers.RMSprop(1e-4), metrics=["accuracy"])
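The `nb_layers_per_block` list `[4,5,7,10,12,15]` is the FC-DenseNet103 configuration from the paper. A quick sanity check of where the "103" comes from, assuming the paper's counting (input conv, every layer inside each dense block, one conv per transition down/up, final 1x1 conv):

```python
# Where does "103" in FC-DenseNet103 come from?
down = [4, 5, 7, 10, 12]   # dense blocks on the down path
bottleneck = 15            # the middle dense block
up = down[::-1]            # the up path mirrors the down path
depth = 1 + sum(down) + len(down) + bottleneck + len(up) + sum(up) + 1
print(depth)  # 103
```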
lrg_model.fit_generator(gen, rnd_trn, 100, verbose=2,
validation_data=gen_test, nb_val_samples=rnd_test)
Epoch 1/100 393s - loss: 0.2116 - acc: 0.9453 - val_loss: 0.5031 - val_acc: 0.8720
Epoch 2/100 346s - loss: 0.2048 - acc: 0.9471 - val_loss: 0.4961 - val_acc: 0.8741
Epoch 3/100 346s - loss: 0.2032 - acc: 0.9476 - val_loss: 0.5058 - val_acc: 0.8703
Epoch 4/100 346s - loss: 0.1988 - acc: 0.9487 - val_loss: 0.5030 - val_acc: 0.8678
Epoch 5/100 346s - loss: 0.1985 - acc: 0.9487 - val_loss: 0.5042 - val_acc: 0.8727
Epoch 6/100 346s - loss: 0.1981 - acc: 0.9489 - val_loss: 0.4969 - val_acc: 0.8742
Epoch 7/100 346s - loss: 0.1955 - acc: 0.9494 - val_loss: 0.5067 - val_acc: 0.8693
Epoch 8/100 346s - loss: 0.1953 - acc: 0.9497 - val_loss: 0.5183 - val_acc: 0.8720
Epoch 9/100
(training interrupted with a KeyboardInterrupt during epoch 9)
lrg_model.fit_generator(gen, rnd_trn, 100, verbose=2,
validation_data=gen_test, nb_val_samples=rnd_test)
Epoch 1/100 377s - loss: 0.1894 - acc: 0.9511 - val_loss: 0.5339 - val_acc: 0.8654
Epoch 2/100 341s - loss: 0.1925 - acc: 0.9497 - val_loss: 0.5348 - val_acc: 0.8595
Epoch 3/100 342s - loss: 0.1865 - acc: 0.9521 - val_loss: 0.4922 - val_acc: 0.8751
Epoch 4/100 341s - loss: 0.1875 - acc: 0.9514 - val_loss: 0.5100 - val_acc: 0.8751
Epoch 5/100 341s - loss: 0.1902 - acc: 0.9509 - val_loss: 0.4954 - val_acc: 0.8755
Epoch 6/100 341s - loss: 0.1869 - acc: 0.9516 - val_loss: 0.5136 - val_acc: 0.8686
Epoch 7/100 342s - loss: 0.1899 - acc: 0.9504 - val_loss: 0.5182 - val_acc: 0.8734
Epoch 8/100 343s - loss: 0.1884 - acc: 0.9506 - val_loss: 0.5827 - val_acc: 0.8533
Epoch 9/100 344s - loss: 0.1843 - acc: 0.9529 - val_loss: 0.5236 - val_acc: 0.8715
Epoch 10/100 344s - loss: 0.1887 - acc: 0.9509 - val_loss: 0.5538 - val_acc: 0.8613
Epoch 11/100 344s - loss: 0.1843 - acc: 0.9523 - val_loss: 0.5364 - val_acc: 0.8686
Epoch 12/100 344s - loss: 0.1871 - acc: 0.9511 - val_loss: 0.5568 - val_acc: 0.8641
Epoch 13/100 344s - loss: 0.1873 - acc: 0.9511 - val_loss: 0.4732 - val_acc: 0.8813
Epoch 14/100 344s - loss: 0.1804 - acc: 0.9533 - val_loss: 0.5401 - val_acc: 0.8676
Epoch 15/100
(Training run interrupted manually; KeyboardInterrupt traceback omitted.)
lrg_model.optimizer=keras.optimizers.RMSprop(1e-5)
lrg_model.fit_generator(gen, rnd_trn, 2, verbose=2,
validation_data=gen_test, nb_val_samples=rnd_test)
Epoch 1/2 367s - loss: 0.1873 - acc: 0.9510 - val_loss: 0.5111 - val_acc: 0.8729
Epoch 2/2 344s - loss: 0.1825 - acc: 0.9525 - val_loss: 0.4921 - val_acc: 0.8758
<keras.callbacks.History at 0x7f56e8e8bf28>
lrg_model.save_weights(PATH+'results/8758.h5')
Let's take a look at some of the results we achieved.
colors = [(128, 128, 128), (128, 0, 0), (192, 192, 128), (128, 64, 128), (0, 0, 192),
(128, 128, 0), (192, 128, 128), (64, 64, 128), (64, 0, 128), (64, 64, 0),
(0, 128, 192), (0, 0, 0)]
names = ['sky', 'building', 'column_pole', 'road', 'sidewalk', 'tree',
'sign', 'fence', 'car', 'pedestrian', 'bicyclist', 'void']
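CamVid ships its labels as RGB images, with each of the 12 colors above denoting one class. A minimal sketch of how such an RGB label image could be converted to a map of class indices using this palette (the helper name `rgb_to_class` is illustrative, not from the notebook's utils):

```python
import numpy as np

colors = [(128, 128, 128), (128, 0, 0), (192, 192, 128), (128, 64, 128), (0, 0, 192),
          (128, 128, 0), (192, 128, 128), (64, 64, 128), (64, 0, 128), (64, 64, 0),
          (0, 128, 192), (0, 0, 0)]

def rgb_to_class(label_img):
    """Convert an (H, W, 3) RGB label image to an (H, W) array of class indices."""
    out = np.full(label_img.shape[:2], -1, dtype=np.int32)
    for i, c in enumerate(colors):
        # pixels matching this class color on all three channels
        out[(label_img == c).all(axis=-1)] = i
    return out

# tiny example: one sky pixel and one void pixel
classes = rgb_to_class(np.array([[[128, 128, 128], [0, 0, 0]]], dtype=np.uint8))
```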
gen_test = segm_generator(test, test_labels, 2, out_sz=lrg_sz, train=False)
preds = lrg_model.predict_generator(gen_test, rnd_test)
preds = np.argmax(preds, axis=-1)
preds = preds.reshape((-1,352,480))
target = test_labels.reshape((233,360,480))[:,8:]
(target == preds).mean()
0.87580886148231241
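The `[:,8:]` crop in the `target` line above aligns the shapes: the labels are 360×480, while the large model predicts on 352-row inputs, so the top 8 rows are dropped before comparing. A quick sketch of the shape arithmetic (using a zero array as a stand-in for `test_labels`):

```python
import numpy as np

labels = np.zeros((233, 360, 480), dtype=np.int32)  # stand-in for the reshaped test_labels
target = labels[:, 8:]  # drop the top 8 rows: 360 - 8 = 352
target.shape
```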
non_void = target != 11
(target[non_void] == preds[non_void]).mean()
0.89557539296419497
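Beyond overall and non-void pixel accuracy, it can be informative to break accuracy down by class, since small classes like `column_pole` are easily swamped in the global average. A minimal sketch (the helper `per_class_accuracy` is illustrative, not from the notebook's utils):

```python
import numpy as np

names = ['sky', 'building', 'column_pole', 'road', 'sidewalk', 'tree',
         'sign', 'fence', 'car', 'pedestrian', 'bicyclist', 'void']

def per_class_accuracy(target, preds, n_classes=12):
    """Fraction of correctly predicted pixels for each class present in target."""
    accs = {}
    for c in range(n_classes):
        mask = target == c
        if mask.sum() > 0:
            accs[names[c]] = (preds[mask] == c).mean()
    return accs

# tiny synthetic example
target = np.array([[0, 0, 3], [3, 11, 11]])
preds  = np.array([[0, 1, 3], [3, 11, 2]])
accs = per_class_accuracy(target, preds)
```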
idx=1
p=lrg_model.predict(np.expand_dims(test[idx,8:],0))
p = np.argmax(p[0],-1).reshape(352,480)
pred = color_label(p)
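`color_label` comes from the notebook's utils; assuming it simply maps each predicted class index back to the RGB palette defined earlier, a minimal sketch might look like this (NumPy fancy indexing does the per-pixel lookup):

```python
import numpy as np

colors = [(128, 128, 128), (128, 0, 0), (192, 192, 128), (128, 64, 128), (0, 0, 192),
          (128, 128, 0), (192, 128, 128), (64, 64, 128), (64, 0, 128), (64, 64, 0),
          (0, 128, 192), (0, 0, 0)]

def color_label_sketch(class_map):
    """Map an (H, W) array of class indices to an (H, W, 3) RGB image."""
    palette = np.array(colors, dtype=np.uint8)
    return palette[class_map]  # index the palette with the whole class map at once

# tiny example: sky, road, void, car
rgb = color_label_sketch(np.array([[0, 3], [11, 8]]))
```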
This is pretty good! The model has some difficulty with the road surface between the light posts, but we would expect a model pre-trained on a much larger dataset to perform better.
plt.imshow(pred);
plt.figure(figsize=(9,9))
plt.imshow(test[idx]*0.3+0.4)
<matplotlib.image.AxesImage at 0x7f5620adceb8>