Training for 100000 steps ...
922/100000: episode: 1, duration: 29.546s, episode steps: 922, steps per second: 31, episode reward: 370.000, mean reward: 0.401 [0.000, 10.000], mean action: 3.359 [0.000, 8.000], mean observation: 72.595 [0.000, 228.000], loss: 2.750501, mean_absolute_error: 0.159018, acc: 0.409802, mean_q: 0.655422
1355/100000: episode: 2, duration: 12.858s, episode steps: 433, steps per second: 34, episode reward: 60.000, mean reward: 0.139 [0.000, 10.000], mean action: 3.009 [1.000, 6.000], mean observation: 72.910 [0.000, 228.000], loss: 1.932692, mean_absolute_error: 0.100334, acc: 0.736865, mean_q: 0.939665
1794/100000: episode: 3, duration: 12.995s, episode steps: 439, steps per second: 34, episode reward: 60.000, mean reward: 0.137 [0.000, 10.000], mean action: 3.030 [0.000, 7.000], mean observation: 72.906 [0.000, 228.000], loss: 1.599364, mean_absolute_error: 0.077912, acc: 0.806948, mean_q: 0.964618
2230/100000: episode: 4, duration: 13.087s, episode steps: 436, steps per second: 33, episode reward: 60.000, mean reward: 0.138 [0.000, 10.000], mean action: 3.018 [0.000, 7.000], mean observation: 72.914 [0.000, 228.000], loss: 1.493305, mean_absolute_error: 0.067161, acc: 0.844252, mean_q: 0.977191
2695/100000: episode: 5, duration: 19.184s, episode steps: 465, steps per second: 24, episode reward: 110.000, mean reward: 0.237 [0.000, 10.000], mean action: 3.015 [0.000, 8.000], mean observation: 72.873 [0.000, 228.000], loss: 1.364146, mean_absolute_error: 0.060264, acc: 0.862769, mean_q: 0.984469
3244/100000: episode: 6, duration: 17.063s, episode steps: 549, steps per second: 32, episode reward: 60.000, mean reward: 0.109 [0.000, 10.000], mean action: 3.002 [0.000, 8.000], mean observation: 72.894 [0.000, 228.000], loss: 1.354492, mean_absolute_error: 0.054776, acc: 0.885986, mean_q: 0.989604
3764/100000: episode: 7, duration: 16.936s, episode steps: 520, steps per second: 31, episode reward: 110.000, mean reward: 0.212 [0.000, 10.000], mean action: 3.033 [2.000, 8.000], mean observation: 72.857 [0.000, 228.000], loss: 1.285808, mean_absolute_error: 0.049860, acc: 0.902103, mean_q: 0.992924
4286/100000: episode: 8, duration: 16.436s, episode steps: 522, steps per second: 32, episode reward: 110.000, mean reward: 0.211 [0.000, 10.000], mean action: 3.021 [0.000, 8.000], mean observation: 72.854 [0.000, 228.000], loss: 1.222823, mean_absolute_error: 0.046015, acc: 0.913374, mean_q: 0.995000
4818/100000: episode: 9, duration: 16.453s, episode steps: 532, steps per second: 32, episode reward: 110.000, mean reward: 0.207 [0.000, 10.000], mean action: 2.994 [0.000, 6.000], mean observation: 72.854 [0.000, 228.000], loss: 1.269515, mean_absolute_error: 0.045624, acc: 0.919290, mean_q: 0.996452
5240/100000: episode: 10, duration: 13.077s, episode steps: 422, steps per second: 32, episode reward: 60.000, mean reward: 0.142 [0.000, 10.000], mean action: 3.012 [2.000, 8.000], mean observation: 72.910 [0.000, 228.000], loss: 1.165863, mean_absolute_error: 0.041262, acc: 0.929354, mean_q: 0.997315
5660/100000: episode: 11, duration: 12.973s, episode steps: 420, steps per second: 32, episode reward: 60.000, mean reward: 0.143 [0.000, 10.000], mean action: 3.007 [0.000, 8.000], mean observation: 72.908 [0.000, 228.000], loss: 1.207981, mean_absolute_error: 0.041799, acc: 0.931696, mean_q: 0.997878
6188/100000: episode: 12, duration: 17.165s, episode steps: 528, steps per second: 31, episode reward: 110.000, mean reward: 0.208 [0.000, 10.000], mean action: 3.019 [0.000, 8.000], mean observation: 72.858 [0.000, 228.000], loss: 1.079953, mean_absolute_error: 0.037732, acc: 0.937855, mean_q: 0.998365
7318/100000: episode: 13, duration: 36.109s, episode steps: 1130, steps per second: 31, episode reward: 180.000, mean reward: 0.159 [0.000, 50.000], mean action: 3.014 [1.000, 8.000], mean observation: 72.859 [0.000, 228.000], loss: 1.126516, mean_absolute_error: 0.036324, acc: 0.942893, mean_q: 0.998921
7749/100000: episode: 14, duration: 13.283s, episode steps: 431, steps per second: 32, episode reward: 60.000, mean reward: 0.139 [0.000, 10.000], mean action: 3.021 [3.000, 7.000], mean observation: 72.914 [0.000, 228.000], loss: 1.242681, mean_absolute_error: 0.034535, acc: 0.947288, mean_q: 0.999283
8193/100000: episode: 15, duration: 13.465s, episode steps: 444, steps per second: 33, episode reward: 60.000, mean reward: 0.135 [0.000, 10.000], mean action: 3.000 [3.000, 3.000], mean observation: 72.910 [0.000, 228.000], loss: 1.276073, mean_absolute_error: 0.034827, acc: 0.949747, mean_q: 0.999426
8626/100000: episode: 16, duration: 13.100s, episode steps: 433, steps per second: 33, episode reward: 60.000, mean reward: 0.139 [0.000, 10.000], mean action: 3.005 [3.000, 5.000], mean observation: 72.912 [0.000, 228.000], loss: 1.219086, mean_absolute_error: 0.032687, acc: 0.953161, mean_q: 0.999536
9058/100000: episode: 17, duration: 13.008s, episode steps: 432, steps per second: 33, episode reward: 60.000, mean reward: 0.139 [0.000, 10.000], mean action: 3.014 [1.000, 8.000], mean observation: 72.917 [0.000, 228.000], loss: 0.966979, mean_absolute_error: 0.031524, acc: 0.955295, mean_q: 0.999627
9568/100000: episode: 18, duration: 15.549s, episode steps: 510, steps per second: 33, episode reward: 110.000, mean reward: 0.216 [0.000, 10.000], mean action: 2.994 [1.000, 4.000], mean observation: 72.859 [0.000, 228.000], loss: 0.906498, mean_absolute_error: 0.029293, acc: 0.959559, mean_q: 0.999698
9996/100000: episode: 19, duration: 13.345s, episode steps: 428, steps per second: 32, episode reward: 60.000, mean reward: 0.140 [0.000, 10.000], mean action: 2.993 [0.000, 5.000], mean observation: 72.908 [0.000, 228.000], loss: 0.958007, mean_absolute_error: 0.029338, acc: 0.957579, mean_q: 0.999756
10419/100000: episode: 20, duration: 13.323s, episode steps: 423, steps per second: 32, episode reward: 60.000, mean reward: 0.142 [0.000, 10.000], mean action: 3.005 [0.000, 8.000], mean observation: 72.906 [0.000, 228.000], loss: 1.070048, mean_absolute_error: 0.031253, acc: 0.959146, mean_q: 0.999802
10851/100000: episode: 21, duration: 13.485s, episode steps: 432, steps per second: 32, episode reward: 60.000, mean reward: 0.139 [0.000, 10.000], mean action: 3.002 [2.000, 5.000], mean observation: 72.909 [0.000, 228.000], loss: 1.149229, mean_absolute_error: 0.031137, acc: 0.961010, mean_q: 0.999839
11429/100000: episode: 22, duration: 17.292s, episode steps: 578, steps per second: 33, episode reward: 110.000, mean reward: 0.190 [0.000, 10.000], mean action: 3.012 [0.000, 8.000], mean observation: 72.896 [0.000, 228.000], loss: 1.168064, mean_absolute_error: 0.029637, acc: 0.962695, mean_q: 0.999876
11944/100000: episode: 23, duration: 15.214s, episode steps: 515, steps per second: 34, episode reward: 110.000, mean reward: 0.214 [0.000, 10.000], mean action: 2.992 [0.000, 4.000], mean observation: 72.853 [0.000, 228.000], loss: 1.067216, mean_absolute_error: 0.030874, acc: 0.961650, mean_q: 0.999906
12384/100000: episode: 24, duration: 13.070s, episode steps: 440, steps per second: 34, episode reward: 60.000, mean reward: 0.136 [0.000, 10.000], mean action: 2.993 [0.000, 3.000], mean observation: 72.912 [0.000, 228.000], loss: 0.935310, mean_absolute_error: 0.026834, acc: 0.966264, mean_q: 0.999924
12965/100000: episode: 25, duration: 17.462s, episode steps: 581, steps per second: 33, episode reward: 70.000, mean reward: 0.120 [0.000, 10.000], mean action: 3.014 [2.000, 8.000], mean observation: 72.888 [0.000, 228.000], loss: 1.057993, mean_absolute_error: 0.028709, acc: 0.966652, mean_q: 0.999940
13391/100000: episode: 26, duration: 12.881s, episode steps: 426, steps per second: 33, episode reward: 60.000, mean reward: 0.141 [0.000, 10.000], mean action: 3.012 [3.000, 8.000], mean observation: 72.909 [0.000, 228.000], loss: 0.865644, mean_absolute_error: 0.026880, acc: 0.966623, mean_q: 0.999953
13919/100000: episode: 27, duration: 15.738s, episode steps: 528, steps per second: 34, episode reward: 110.000, mean reward: 0.208 [0.000, 10.000], mean action: 3.002 [3.000, 4.000], mean observation: 72.854 [0.000, 228.000], loss: 1.009845, mean_absolute_error: 0.028659, acc: 0.966974, mean_q: 0.999963
14350/100000: episode: 28, duration: 12.788s, episode steps: 431, steps per second: 34, episode reward: 60.000, mean reward: 0.139 [0.000, 10.000], mean action: 3.009 [3.000, 7.000], mean observation: 72.912 [0.000, 228.000], loss: 0.837868, mean_absolute_error: 0.025444, acc: 0.970780, mean_q: 0.999970
14775/100000: episode: 29, duration: 12.599s, episode steps: 425, steps per second: 34, episode reward: 60.000, mean reward: 0.141 [0.000, 10.000], mean action: 3.012 [3.000, 6.000], mean observation: 72.913 [0.000, 228.000], loss: 0.885915, mean_absolute_error: 0.027345, acc: 0.966765, mean_q: 0.999976
15202/100000: episode: 30, duration: 12.630s, episode steps: 427, steps per second: 34, episode reward: 60.000, mean reward: 0.141 [0.000, 10.000], mean action: 3.000 [0.000, 7.000], mean observation: 72.907 [0.000, 228.000], loss: 1.067898, mean_absolute_error: 0.027376, acc: 0.970653, mean_q: 0.999980
15628/100000: episode: 31, duration: 12.637s, episode steps: 426, steps per second: 34, episode reward: 60.000, mean reward: 0.141 [0.000, 10.000], mean action: 2.991 [1.000, 3.000], mean observation: 72.907 [0.000, 228.000], loss: 0.926451, mean_absolute_error: 0.027283, acc: 0.971097, mean_q: 0.999984
16057/100000: episode: 32, duration: 12.722s, episode steps: 429, steps per second: 34, episode reward: 60.000, mean reward: 0.140 [0.000, 10.000], mean action: 2.998 [2.000, 3.000], mean observation: 72.911 [0.000, 228.000], loss: 0.804952, mean_absolute_error: 0.024675, acc: 0.971372, mean_q: 0.999987
16588/100000: episode: 33, duration: 15.744s, episode steps: 531, steps per second: 34, episode reward: 110.000, mean reward: 0.207 [0.000, 10.000], mean action: 3.013 [2.000, 8.000], mean observation: 72.854 [0.000, 228.000], loss: 0.925594, mean_absolute_error: 0.027047, acc: 0.972575, mean_q: 0.999990
17105/100000: episode: 34, duration: 15.288s, episode steps: 517, steps per second: 34, episode reward: 60.000, mean reward: 0.116 [0.000, 10.000], mean action: 3.008 [3.000, 5.000], mean observation: 72.920 [0.000, 228.000], loss: 0.892844, mean_absolute_error: 0.025035, acc: 0.971893, mean_q: 0.999992
17536/100000: episode: 35, duration: 12.722s, episode steps: 431, steps per second: 34, episode reward: 60.000, mean reward: 0.139 [0.000, 10.000], mean action: 3.023 [3.000, 8.000], mean observation: 72.909 [0.000, 228.000], loss: 0.973951, mean_absolute_error: 0.025876, acc: 0.975348, mean_q: 0.999994
17974/100000: episode: 36, duration: 13.189s, episode steps: 438, steps per second: 33, episode reward: 60.000, mean reward: 0.137 [0.000, 10.000], mean action: 3.002 [3.000, 4.000], mean observation: 72.910 [0.000, 228.000], loss: 0.845903, mean_absolute_error: 0.023325, acc: 0.974244, mean_q: 0.999995
18405/100000: episode: 37, duration: 12.907s, episode steps: 431, steps per second: 33, episode reward: 60.000, mean reward: 0.139 [0.000, 10.000], mean action: 3.005 [0.000, 8.000], mean observation: 72.917 [0.000, 228.000], loss: 0.885737, mean_absolute_error: 0.025633, acc: 0.974985, mean_q: 0.999996
18834/100000: episode: 38, duration: 12.750s, episode steps: 429, steps per second: 34, episode reward: 60.000, mean reward: 0.140 [0.000, 10.000], mean action: 3.005 [2.000, 7.000], mean observation: 72.911 [0.000, 228.000], loss: 0.966934, mean_absolute_error: 0.028101, acc: 0.971882, mean_q: 0.999997
19299/100000: episode: 39, duration: 13.883s, episode steps: 465, steps per second: 33, episode reward: 110.000, mean reward: 0.237 [0.000, 10.000], mean action: 3.004 [1.000, 6.000], mean observation: 72.894 [0.000, 228.000], loss: 0.950698, mean_absolute_error: 0.025855, acc: 0.973790, mean_q: 0.999997
19725/100000: episode: 40, duration: 12.708s, episode steps: 426, steps per second: 34, episode reward: 60.000, mean reward: 0.141 [0.000, 10.000], mean action: 3.016 [3.000, 7.000], mean observation: 72.907 [0.000, 228.000], loss: 0.934569, mean_absolute_error: 0.025027, acc: 0.975719, mean_q: 0.999998
20161/100000: episode: 41, duration: 12.911s, episode steps: 436, steps per second: 34, episode reward: 60.000, mean reward: 0.138 [0.000, 10.000], mean action: 3.000 [3.000, 3.000], mean observation: 72.913 [0.000, 228.000], loss: 0.966219, mean_absolute_error: 0.025600, acc: 0.975989, mean_q: 0.999998
20590/100000: episode: 42, duration: 12.721s, episode steps: 429, steps per second: 34, episode reward: 60.000, mean reward: 0.140 [0.000, 10.000], mean action: 3.012 [3.000, 8.000], mean observation: 72.912 [0.000, 228.000], loss: 0.831869, mean_absolute_error: 0.024158, acc: 0.976034, mean_q: 0.999999
21022/100000: episode: 43, duration: 12.844s, episode steps: 432, steps per second: 34, episode reward: 60.000, mean reward: 0.139 [0.000, 10.000], mean action: 3.021 [3.000, 8.000], mean observation: 72.912 [0.000, 228.000], loss: 1.055637, mean_absolute_error: 0.027234, acc: 0.977937, mean_q: 0.999999
21455/100000: episode: 44, duration: 12.819s, episode steps: 433, steps per second: 34, episode reward: 60.000, mean reward: 0.139 [0.000, 10.000], mean action: 3.021 [3.000, 7.000], mean observation: 72.909 [0.000, 228.000], loss: 0.836770, mean_absolute_error: 0.023797, acc: 0.978493, mean_q: 0.999999
21889/100000: episode: 45, duration: 12.940s, episode steps: 434, steps per second: 34, episode reward: 60.000, mean reward: 0.138 [0.000, 10.000], mean action: 3.002 [3.000, 4.000], mean observation: 72.912 [0.000, 228.000], loss: 0.836639, mean_absolute_error: 0.023856, acc: 0.977823, mean_q: 0.999999
22343/100000: episode: 46, duration: 13.496s, episode steps: 454, steps per second: 34, episode reward: 110.000, mean reward: 0.242 [0.000, 10.000], mean action: 3.009 [2.000, 8.000], mean observation: 72.888 [0.000, 228.000], loss: 0.805641, mean_absolute_error: 0.023271, acc: 0.977905, mean_q: 0.999999
22777/100000: episode: 47, duration: 12.854s, episode steps: 434, steps per second: 34, episode reward: 60.000, mean reward: 0.138 [0.000, 10.000], mean action: 3.005 [0.000, 8.000], mean observation: 72.914 [0.000, 228.000], loss: 1.006345, mean_absolute_error: 0.026670, acc: 0.974942, mean_q: 1.000000
23214/100000: episode: 48, duration: 12.924s, episode steps: 437, steps per second: 34, episode reward: 60.000, mean reward: 0.137 [0.000, 10.000], mean action: 2.998 [2.000, 3.000], mean observation: 72.911 [0.000, 228.000], loss: 0.844489, mean_absolute_error: 0.023875, acc: 0.979334, mean_q: 1.000000
23641/100000: episode: 49, duration: 12.627s, episode steps: 427, steps per second: 34, episode reward: 60.000, mean reward: 0.141 [0.000, 10.000], mean action: 2.991 [1.000, 3.000], mean observation: 72.910 [0.000, 228.000], loss: 0.983037, mean_absolute_error: 0.026869, acc: 0.979435, mean_q: 1.000000
24067/100000: episode: 50, duration: 12.623s, episode steps: 426, steps per second: 34, episode reward: 60.000, mean reward: 0.141 [0.000, 10.000], mean action: 3.009 [3.000, 6.000], mean observation: 72.912 [0.000, 228.000], loss: 0.934281, mean_absolute_error: 0.025823, acc: 0.979020, mean_q: 1.000000
24494/100000: episode: 51, duration: 12.634s, episode steps: 427, steps per second: 34, episode reward: 60.000, mean reward: 0.141 [0.000, 10.000], mean action: 2.988 [0.000, 3.000], mean observation: 72.909 [0.000, 228.000], loss: 0.891939, mean_absolute_error: 0.025250, acc: 0.977532, mean_q: 1.000000
24932/100000: episode: 52, duration: 12.988s, episode steps: 438, steps per second: 34, episode reward: 60.000, mean reward: 0.137 [0.000, 10.000], mean action: 2.998 [2.000, 3.000], mean observation: 72.911 [0.000, 228.000], loss: 0.876506, mean_absolute_error: 0.024347, acc: 0.979880, mean_q: 1.000000
25362/100000: episode: 53, duration: 12.794s, episode steps: 430, steps per second: 34, episode reward: 60.000, mean reward: 0.140 [0.000, 10.000], mean action: 2.998 [2.000, 3.000], mean observation: 72.915 [0.000, 228.000], loss: 0.817384, mean_absolute_error: 0.022920, acc: 0.980959, mean_q: 1.000000
25784/100000: episode: 54, duration: 12.561s, episode steps: 422, steps per second: 34, episode reward: 60.000, mean reward: 0.142 [0.000, 10.000], mean action: 3.002 [1.000, 7.000], mean observation: 72.907 [0.000, 228.000], loss: 0.859725, mean_absolute_error: 0.023460, acc: 0.982968, mean_q: 1.000000
26222/100000: episode: 55, duration: 12.969s, episode steps: 438, steps per second: 34, episode reward: 60.000, mean reward: 0.137 [0.000, 10.000], mean action: 2.993 [0.000, 3.000], mean observation: 72.908 [0.000, 228.000], loss: 0.877860, mean_absolute_error: 0.020972, acc: 0.981949, mean_q: 1.000000
26654/100000: episode: 56, duration: 12.859s, episode steps: 432, steps per second: 34, episode reward: 60.000, mean reward: 0.139 [0.000, 10.000], mean action: 3.012 [3.000, 8.000], mean observation: 72.914 [0.000, 228.000], loss: 0.893530, mean_absolute_error: 0.025140, acc: 0.978516, mean_q: 1.000000
27184/100000: episode: 57, duration: 15.685s, episode steps: 530, steps per second: 34, episode reward: 110.000, mean reward: 0.208 [0.000, 10.000], mean action: 3.002 [3.000, 4.000], mean observation: 72.849 [0.000, 228.000], loss: 0.871484, mean_absolute_error: 0.024115, acc: 0.980837, mean_q: 1.000000
27647/100000: episode: 58, duration: 13.676s, episode steps: 463, steps per second: 34, episode reward: 110.000, mean reward: 0.238 [0.000, 10.000], mean action: 2.998 [1.000, 4.000], mean observation: 72.896 [0.000, 228.000], loss: 0.882798, mean_absolute_error: 0.022712, acc: 0.981776, mean_q: 1.000000
28083/100000: episode: 59, duration: 12.928s, episode steps: 436, steps per second: 34, episode reward: 60.000, mean reward: 0.138 [0.000, 10.000], mean action: 3.005 [3.000, 5.000], mean observation: 72.912 [0.000, 228.000], loss: 0.924782, mean_absolute_error: 0.024951, acc: 0.982870, mean_q: 1.000000
28515/100000: episode: 60, duration: 13.170s, episode steps: 432, steps per second: 33, episode reward: 60.000, mean reward: 0.139 [0.000, 10.000], mean action: 3.000 [1.000, 5.000], mean observation: 72.909 [0.000, 228.000], loss: 0.950630, mean_absolute_error: 0.025596, acc: 0.982784, mean_q: 1.000000
28957/100000: episode: 61, duration: 13.095s, episode steps: 442, steps per second: 34, episode reward: 60.000, mean reward: 0.136 [0.000, 10.000], mean action: 3.009 [3.000, 7.000], mean observation: 72.911 [0.000, 228.000], loss: 1.017386, mean_absolute_error: 0.025839, acc: 0.980345, mean_q: 1.000000
29391/100000: episode: 62, duration: 12.831s, episode steps: 434, steps per second: 34, episode reward: 60.000, mean reward: 0.138 [0.000, 10.000], mean action: 3.007 [3.000, 6.000], mean observation: 72.917 [0.000, 228.000], loss: 0.819406, mean_absolute_error: 0.023005, acc: 0.980703, mean_q: 1.000000
29824/100000: episode: 63, duration: 12.812s, episode steps: 433, steps per second: 34, episode reward: 60.000, mean reward: 0.139 [0.000, 10.000], mean action: 3.009 [3.000, 7.000], mean observation: 72.914 [0.000, 228.000], loss: 0.993344, mean_absolute_error: 0.026588, acc: 0.982318, mean_q: 1.000000
30267/100000: episode: 64, duration: 13.512s, episode steps: 443, steps per second: 33, episode reward: 60.000, mean reward: 0.135 [0.000, 10.000], mean action: 2.989 [0.000, 3.000], mean observation: 72.913 [0.000, 228.000], loss: 0.812653, mean_absolute_error: 0.022780, acc: 0.980954, mean_q: 1.000000
30707/100000: episode: 65, duration: 13.171s, episode steps: 440, steps per second: 33, episode reward: 60.000, mean reward: 0.136 [0.000, 10.000], mean action: 3.005 [3.000, 5.000], mean observation: 72.907 [0.000, 228.000], loss: 0.804780, mean_absolute_error: 0.023003, acc: 0.979119, mean_q: 1.000000
31137/100000: episode: 66, duration: 12.918s, episode steps: 430, steps per second: 33, episode reward: 60.000, mean reward: 0.140 [0.000, 10.000], mean action: 3.005 [3.000, 5.000], mean observation: 72.913 [0.000, 228.000], loss: 0.958653, mean_absolute_error: 0.022369, acc: 0.983358, mean_q: 1.000000
31567/100000: episode: 67, duration: 13.041s, episode steps: 430, steps per second: 33, episode reward: 60.000, mean reward: 0.140 [0.000, 10.000], mean action: 3.000 [3.000, 3.000], mean observation: 72.911 [0.000, 228.000], loss: 0.842303, mean_absolute_error: 0.023308, acc: 0.981904, mean_q: 1.000000
31999/100000: episode: 68, duration: 12.833s, episode steps: 432, steps per second: 34, episode reward: 60.000, mean reward: 0.139 [0.000, 10.000], mean action: 2.993 [0.000, 3.000], mean observation: 72.915 [0.000, 228.000], loss: 0.969986, mean_absolute_error: 0.024267, acc: 0.983362, mean_q: 1.000000
32434/100000: episode: 69, duration: 12.959s, episode steps: 435, steps per second: 34, episode reward: 60.000, mean reward: 0.138 [0.000, 10.000], mean action: 3.018 [3.000, 7.000], mean observation: 72.910 [0.000, 228.000], loss: 0.943249, mean_absolute_error: 0.023929, acc: 0.981968, mean_q: 1.000000
32860/100000: episode: 70, duration: 12.618s, episode steps: 426, steps per second: 34, episode reward: 60.000, mean reward: 0.141 [0.000, 10.000], mean action: 3.005 [1.000, 7.000], mean observation: 72.914 [0.000, 228.000], loss: 0.853826, mean_absolute_error: 0.022960, acc: 0.984815, mean_q: 1.000000
33301/100000: episode: 71, duration: 13.135s, episode steps: 441, steps per second: 34, episode reward: 60.000, mean reward: 0.136 [0.000, 10.000], mean action: 2.995 [1.000, 3.000], mean observation: 72.909 [0.000, 228.000], loss: 0.945814, mean_absolute_error: 0.023591, acc: 0.983985, mean_q: 1.000000
33738/100000: episode: 72, duration: 12.972s, episode steps: 437, steps per second: 34, episode reward: 60.000, mean reward: 0.137 [0.000, 10.000], mean action: 2.995 [2.000, 3.000], mean observation: 72.911 [0.000, 228.000], loss: 0.853012, mean_absolute_error: 0.023421, acc: 0.982194, mean_q: 1.000000
34169/100000: episode: 73, duration: 12.839s, episode steps: 431, steps per second: 34, episode reward: 60.000, mean reward: 0.139 [0.000, 10.000], mean action: 3.002 [0.000, 7.000], mean observation: 72.910 [0.000, 228.000], loss: 0.837158, mean_absolute_error: 0.022482, acc: 0.985426, mean_q: 1.000000
34595/100000: episode: 74, duration: 12.850s, episode steps: 426, steps per second: 33, episode reward: 60.000, mean reward: 0.141 [0.000, 10.000], mean action: 3.005 [2.000, 6.000], mean observation: 72.906 [0.000, 228.000], loss: 0.903527, mean_absolute_error: 0.022658, acc: 0.983715, mean_q: 1.000000
35022/100000: episode: 75, duration: 12.748s, episode steps: 427, steps per second: 33, episode reward: 60.000, mean reward: 0.141 [0.000, 10.000], mean action: 2.991 [0.000, 4.000], mean observation: 72.911 [0.000, 228.000], loss: 0.792673, mean_absolute_error: 0.021522, acc: 0.984778, mean_q: 1.000000
35448/100000: episode: 76, duration: 12.698s, episode steps: 426, steps per second: 34, episode reward: 60.000, mean reward: 0.141 [0.000, 10.000], mean action: 3.000 [0.000, 6.000], mean observation: 72.908 [0.000, 228.000], loss: 0.867522, mean_absolute_error: 0.023318, acc: 0.983788, mean_q: 1.000000
35875/100000: episode: 77, duration: 12.782s, episode steps: 427, steps per second: 33, episode reward: 60.000, mean reward: 0.141 [0.000, 10.000], mean action: 2.993 [0.000, 3.000], mean observation: 72.913 [0.000, 228.000], loss: 0.835425, mean_absolute_error: 0.021034, acc: 0.984265, mean_q: 1.000000
36306/100000: episode: 78, duration: 12.901s, episode steps: 431, steps per second: 33, episode reward: 60.000, mean reward: 0.139 [0.000, 10.000], mean action: 3.007 [3.000, 5.000], mean observation: 72.917 [0.000, 228.000], loss: 0.875452, mean_absolute_error: 0.022229, acc: 0.982961, mean_q: 1.000000
36740/100000: episode: 79, duration: 12.985s, episode steps: 434, steps per second: 33, episode reward: 60.000, mean reward: 0.138 [0.000, 10.000], mean action: 3.000 [3.000, 3.000], mean observation: 72.909 [0.000, 228.000], loss: 0.947098, mean_absolute_error: 0.025119, acc: 0.984087, mean_q: 1.000000
37166/100000: episode: 80, duration: 12.678s, episode steps: 426, steps per second: 34, episode reward: 60.000, mean reward: 0.141 [0.000, 10.000], mean action: 3.000 [3.000, 3.000], mean observation: 72.910 [0.000, 228.000], loss: 0.748542, mean_absolute_error: 0.020739, acc: 0.984522, mean_q: 1.000000
37612/100000: episode: 81, duration: 13.315s, episode steps: 446, steps per second: 33, episode reward: 60.000, mean reward: 0.135 [0.000, 10.000], mean action: 2.996 [1.000, 3.000], mean observation: 72.920 [0.000, 228.000], loss: 0.752258, mean_absolute_error: 0.020643, acc: 0.984725, mean_q: 1.000000
38046/100000: episode: 82, duration: 12.966s, episode steps: 434, steps per second: 33, episode reward: 60.000, mean reward: 0.138 [0.000, 10.000], mean action: 2.993 [0.000, 3.000], mean observation: 72.915 [0.000, 228.000], loss: 0.746647, mean_absolute_error: 0.020482, acc: 0.985383, mean_q: 1.000000
38482/100000: episode: 83, duration: 12.969s, episode steps: 436, steps per second: 34, episode reward: 60.000, mean reward: 0.138 [0.000, 10.000], mean action: 3.007 [2.000, 7.000], mean observation: 72.911 [0.000, 228.000], loss: 0.842648, mean_absolute_error: 0.022776, acc: 0.984088, mean_q: 1.000000
38998/100000: episode: 84, duration: 15.467s, episode steps: 516, steps per second: 33, episode reward: 110.000, mean reward: 0.213 [0.000, 10.000], mean action: 3.010 [0.000, 8.000], mean observation: 72.853 [0.000, 228.000], loss: 0.791431, mean_absolute_error: 0.021996, acc: 0.982316, mean_q: 1.000000
39430/100000: episode: 85, duration: 12.932s, episode steps: 432, steps per second: 33, episode reward: 60.000, mean reward: 0.139 [0.000, 10.000], mean action: 3.000 [3.000, 3.000], mean observation: 72.912 [0.000, 228.000], loss: 0.770487, mean_absolute_error: 0.021347, acc: 0.983507, mean_q: 1.000000
39870/100000: episode: 86, duration: 13.094s, episode steps: 440, steps per second: 34, episode reward: 60.000, mean reward: 0.136 [0.000, 10.000], mean action: 3.007 [3.000, 6.000], mean observation: 72.912 [0.000, 228.000], loss: 0.851223, mean_absolute_error: 0.022746, acc: 0.985653, mean_q: 1.000000
40298/100000: episode: 87, duration: 12.801s, episode steps: 428, steps per second: 33, episode reward: 60.000, mean reward: 0.140 [0.000, 10.000], mean action: 3.000 [3.000, 3.000], mean observation: 72.913 [0.000, 228.000], loss: 0.866558, mean_absolute_error: 0.023239, acc: 0.984959, mean_q: 1.000000
40738/100000: episode: 88, duration: 13.130s, episode steps: 440, steps per second: 34, episode reward: 60.000, mean reward: 0.136 [0.000, 10.000], mean action: 3.002 [3.000, 4.000], mean observation: 72.913 [0.000, 228.000], loss: 0.814577, mean_absolute_error: 0.021974, acc: 0.985369, mean_q: 1.000000
41180/100000: episode: 89, duration: 13.200s, episode steps: 442, steps per second: 33, episode reward: 60.000, mean reward: 0.136 [0.000, 10.000], mean action: 2.993 [1.000, 6.000], mean observation: 72.913 [0.000, 228.000], loss: 0.791810, mean_absolute_error: 0.021386, acc: 0.985506, mean_q: 1.000000
41702/100000: episode: 90, duration: 15.572s, episode steps: 522, steps per second: 34, episode reward: 110.000, mean reward: 0.211 [0.000, 10.000], mean action: 3.002 [3.000, 4.000], mean observation: 72.853 [0.000, 228.000], loss: 0.927495, mean_absolute_error: 0.024034, acc: 0.987428, mean_q: 1.000000
42138/100000: episode: 91, duration: 13.054s, episode steps: 436, steps per second: 33, episode reward: 60.000, mean reward: 0.138 [0.000, 10.000], mean action: 2.995 [1.000, 3.000], mean observation: 72.918 [0.000, 228.000], loss: 0.935920, mean_absolute_error: 0.023190, acc: 0.985163, mean_q: 1.000000
42568/100000: episode: 92, duration: 12.831s, episode steps: 430, steps per second: 34, episode reward: 60.000, mean reward: 0.140 [0.000, 10.000], mean action: 3.009 [3.000, 7.000], mean observation: 72.907 [0.000, 228.000], loss: 0.749676, mean_absolute_error: 0.020661, acc: 0.984448, mean_q: 1.000000
43039/100000: episode: 93, duration: 13.984s, episode steps: 471, steps per second: 34, episode reward: 110.000, mean reward: 0.234 [0.000, 10.000], mean action: 3.011 [3.000, 8.000], mean observation: 72.888 [0.000, 228.000], loss: 0.832054, mean_absolute_error: 0.022190, acc: 0.985801, mean_q: 1.000000
43473/100000: episode: 94, duration: 12.932s, episode steps: 434, steps per second: 34, episode reward: 60.000, mean reward: 0.138 [0.000, 10.000], mean action: 3.023 [3.000, 8.000], mean observation: 72.909 [0.000, 228.000], loss: 0.718739, mean_absolute_error: 0.019606, acc: 0.986679, mean_q: 1.000000
43904/100000: episode: 95, duration: 12.903s, episode steps: 431, steps per second: 33, episode reward: 60.000, mean reward: 0.139 [0.000, 10.000], mean action: 3.000 [3.000, 3.000], mean observation: 72.906 [0.000, 228.000], loss: 0.790879, mean_absolute_error: 0.020783, acc: 0.988399, mean_q: 1.000000
44329/100000: episode: 96, duration: 12.874s, episode steps: 425, steps per second: 33, episode reward: 60.000, mean reward: 0.141 [0.000, 10.000], mean action: 3.000 [3.000, 3.000], mean observation: 72.914 [0.000, 228.000], loss: 0.912065, mean_absolute_error: 0.022168, acc: 0.987132, mean_q: 1.000000
44763/100000: episode: 97, duration: 13.024s, episode steps: 434, steps per second: 33, episode reward: 60.000, mean reward: 0.138 [0.000, 10.000], mean action: 3.000 [2.000, 5.000], mean observation: 72.913 [0.000, 228.000], loss: 0.837879, mean_absolute_error: 0.020969, acc: 0.984375, mean_q: 1.000000
45290/100000: episode: 98, duration: 15.742s, episode steps: 527, steps per second: 33, episode reward: 110.000, mean reward: 0.209 [0.000, 10.000], mean action: 3.009 [3.000, 8.000], mean observation: 72.863 [0.000, 228.000], loss: 0.899594, mean_absolute_error: 0.023656, acc: 0.986717, mean_q: 1.000000
45714/100000: episode: 99, duration: 13.066s, episode steps: 424, steps per second: 32, episode reward: 60.000, mean reward: 0.142 [0.000, 10.000], mean action: 3.007 [3.000, 6.000], mean observation: 72.913 [0.000, 228.000], loss: 0.795141, mean_absolute_error: 0.020977, acc: 0.988060, mean_q: 1.000000
46150/100000: episode: 100, duration: 13.063s, episode steps: 436, steps per second: 33, episode reward: 60.000, mean reward: 0.138 [0.000, 10.000], mean action: 3.005 [3.000, 5.000], mean observation: 72.913 [0.000, 228.000], loss: 0.718090, mean_absolute_error: 0.019679, acc: 0.985737, mean_q: 1.000000
46580/100000: episode: 101, duration: 12.913s, episode steps: 430, steps per second: 33, episode reward: 60.000, mean reward: 0.140 [0.000, 10.000], mean action: 3.009 [3.000, 7.000], mean observation: 72.912 [0.000, 228.000], loss: 0.697030, mean_absolute_error: 0.019125, acc: 0.986773, mean_q: 1.000000
47013/100000: episode: 102, duration: 13.298s, episode steps: 433, steps per second: 33, episode reward: 60.000, mean reward: 0.139 [0.000, 10.000], mean action: 3.000 [3.000, 3.000], mean observation: 72.915 [0.000, 228.000], loss: 0.739714, mean_absolute_error: 0.019914, acc: 0.986937, mean_q: 1.000000
47448/100000: episode: 103, duration: 13.779s, episode steps: 435, steps per second: 32, episode reward: 60.000, mean reward: 0.138 [0.000, 10.000], mean action: 3.009 [3.000, 7.000], mean observation: 72.911 [0.000, 228.000], loss: 0.738085, mean_absolute_error: 0.020341, acc: 0.984770, mean_q: 1.000000
47889/100000: episode: 104, duration: 13.802s, episode steps: 441, steps per second: 32, episode reward: 60.000, mean reward: 0.136 [0.000, 10.000], mean action: 3.000 [3.000, 3.000], mean observation: 72.915 [0.000, 228.000], loss: 0.832484, mean_absolute_error: 0.021954, acc: 0.986820, mean_q: 1.000000
48319/100000: episode: 105, duration: 12.800s, episode steps: 430, steps per second: 34, episode reward: 60.000, mean reward: 0.140 [0.000, 10.000], mean action: 3.012 [3.000, 8.000], mean observation: 72.909 [0.000, 228.000], loss: 0.798077, mean_absolute_error: 0.021110, acc: 0.987718, mean_q: 1.000000
48752/100000: episode: 106, duration: 12.868s, episode steps: 433, steps per second: 34, episode reward: 60.000, mean reward: 0.139 [0.000, 10.000], mean action: 3.000 [3.000, 3.000], mean observation: 72.914 [0.000, 228.000], loss: 0.888920, mean_absolute_error: 0.021588, acc: 0.987009, mean_q: 1.000000
49181/100000: episode: 107, duration: 12.792s, episode steps: 429, steps per second: 34, episode reward: 60.000, mean reward: 0.140 [0.000, 10.000], mean action: 3.000 [3.000, 3.000], mean observation: 72.910 [0.000, 228.000], loss: 0.817413, mean_absolute_error: 0.021793, acc: 0.986233, mean_q: 1.000000
49614/100000: episode: 108, duration: 12.875s, episode steps: 433, steps per second: 34, episode reward: 60.000, mean reward: 0.139 [0.000, 10.000], mean action: 3.014 [3.000, 7.000], mean observation: 72.911 [0.000, 228.000], loss: 0.804071, mean_absolute_error: 0.021243, acc: 0.987803, mean_q: 1.000000
50042/100000: episode: 109, duration: 13.319s, episode steps: 428, steps per second: 32, episode reward: 60.000, mean reward: 0.140 [0.000, 10.000], mean action: 2.998 [2.000, 3.000], mean observation: 72.914 [0.000, 228.000], loss: 0.796233, mean_absolute_error: 0.021130, acc: 0.987077, mean_q: 1.000000
50472/100000: episode: 110, duration: 13.205s, episode steps: 430, steps per second: 33, episode reward: 60.000, mean reward: 0.140 [0.000, 10.000], mean action: 3.012 [3.000, 6.000], mean observation: 72.909 [0.000, 228.000], loss: 0.958833, mean_absolute_error: 0.023197, acc: 0.987064, mean_q: 1.000000
51182/100000: episode: 111, duration: 21.496s, episode steps: 710, steps per second: 33, episode reward: 780.000, mean reward: 1.099 [0.000, 400.000], mean action: 3.023 [3.000, 8.000], mean observation: 72.878 [0.000, 228.000], loss: 1.750635, mean_absolute_error: 0.022866, acc: 0.987192, mean_q: 1.000000
51616/100000: episode: 112, duration: 12.981s, episode steps: 434, steps per second: 33, episode reward: 60.000, mean reward: 0.138 [0.000, 10.000], mean action: 3.000 [3.000, 3.000], mean observation: 72.912 [0.000, 228.000], loss: 2.289796, mean_absolute_error: 0.022587, acc: 0.986247, mean_q: 1.000000
52052/100000: episode: 113, duration: 13.128s, episode steps: 436, steps per second: 33, episode reward: 60.000, mean reward: 0.138 [0.000, 10.000], mean action: 3.018 [3.000, 7.000], mean observation: 72.909 [0.000, 228.000], loss: 6.869392, mean_absolute_error: 0.026987, acc: 0.988174, mean_q: 1.000000
52486/100000: episode: 114, duration: 12.987s, episode steps: 434, steps per second: 33, episode reward: 60.000, mean reward: 0.138 [0.000, 10.000], mean action: 3.005 [3.000, 5.000], mean observation: 72.911 [0.000, 228.000], loss: 0.908648, mean_absolute_error: 0.021901, acc: 0.987975, mean_q: 1.000000
53028/100000: episode: 115, duration: 16.227s, episode steps: 542, steps per second: 33, episode reward: 110.000, mean reward: 0.203 [0.000, 10.000], mean action: 3.015 [1.000, 8.000], mean observation: 72.861 [0.000, 228.000], loss: 0.927577, mean_absolute_error: 0.021164, acc: 0.988872, mean_q: 1.000000
53549/100000: episode: 116, duration: 15.605s, episode steps: 521, steps per second: 33, episode reward: 110.000, mean reward: 0.211 [0.000, 10.000], mean action: 3.002 [3.000, 4.000], mean observation: 72.854 [0.000, 228.000], loss: 6.778060, mean_absolute_error: 0.024810, acc: 0.987224, mean_q: 1.000000
53976/100000: episode: 117, duration: 12.945s, episode steps: 427, steps per second: 33, episode reward: 60.000, mean reward: 0.141 [0.000, 10.000], mean action: 2.998 [2.000, 3.000], mean observation: 72.906 [0.000, 228.000], loss: 0.895132, mean_absolute_error: 0.023102, acc: 0.988876, mean_q: 1.000000
54417/100000: episode: 118, duration: 13.326s, episode steps: 441, steps per second: 33, episode reward: 60.000, mean reward: 0.136 [0.000, 10.000], mean action: 2.998 [2.000, 3.000], mean observation: 72.907 [0.000, 228.000], loss: 0.815021, mean_absolute_error: 0.021397, acc: 0.987883, mean_q: 1.000000
54848/100000: episode: 119, duration: 12.849s, episode steps: 431, steps per second: 34, episode reward: 60.000, mean reward: 0.139 [0.000, 10.000], mean action: 3.000 [3.000, 3.000], mean observation: 72.914 [0.000, 228.000], loss: 6.555128, mean_absolute_error: 0.023074, acc: 0.989342, mean_q: 1.000000
55278/100000: episode: 120, duration: 12.983s, episode steps: 430, steps per second: 33, episode reward: 60.000, mean reward: 0.140 [0.000, 10.000], mean action: 3.000 [3.000, 3.000], mean observation: 72.913 [0.000, 228.000], loss: 0.788291, mean_absolute_error: 0.020740, acc: 0.988517, mean_q: 1.000000
55712/100000: episode: 121, duration: 13.138s, episode steps: 434, steps per second: 33, episode reward: 60.000, mean reward: 0.138 [0.000, 10.000], mean action: 3.002 [3.000, 4.000], mean observation: 72.905 [0.000, 228.000], loss: 0.701904, mean_absolute_error: 0.018827, acc: 0.988767, mean_q: 1.000000
56144/100000: episode: 122, duration: 12.976s, episode steps: 432, steps per second: 33, episode reward: 60.000, mean reward: 0.139 [0.000, 10.000], mean action: 3.016 [3.000, 8.000], mean observation: 72.912 [0.000, 228.000], loss: 6.674213, mean_absolute_error: 0.026572, acc: 0.986400, mean_q: 1.000000
56584/100000: episode: 123, duration: 13.134s, episode steps: 440, steps per second: 34, episode reward: 60.000, mean reward: 0.136 [0.000, 10.000], mean action: 3.000 [3.000, 3.000], mean observation: 72.907 [0.000, 228.000], loss: 6.481470, mean_absolute_error: 0.024026, acc: 0.989205, mean_q: 1.000000
57010/100000: episode: 124, duration: 12.779s, episode steps: 426, steps per second: 33, episode reward: 60.000, mean reward: 0.141 [0.000, 10.000], mean action: 3.000 [3.000, 3.000], mean observation: 72.911 [0.000, 228.000], loss: 2.567588, mean_absolute_error: 0.024750, acc: 0.987089, mean_q: 1.000000
57440/100000: episode: 125, duration: 12.885s, episode steps: 430, steps per second: 33, episode reward: 60.000, mean reward: 0.140 [0.000, 10.000], mean action: 3.000 [0.000, 6.000], mean observation: 72.911 [0.000, 228.000], loss: 2.430678, mean_absolute_error: 0.023412, acc: 0.988154, mean_q: 1.000000
57878/100000: episode: 126, duration: 13.035s, episode steps: 438, steps per second: 34, episode reward: 60.000, mean reward: 0.137 [0.000, 10.000], mean action: 3.009 [3.000, 5.000], mean observation: 72.909 [0.000, 228.000], loss: 2.234050, mean_absolute_error: 0.021098, acc: 0.989155, mean_q: 1.000000
58316/100000: episode: 127, duration: 13.131s, episode steps: 438, steps per second: 33, episode reward: 60.000, mean reward: 0.137 [0.000, 10.000], mean action: 3.000 [3.000, 3.000], mean observation: 72.915 [0.000, 228.000], loss: 1.070799, mean_absolute_error: 0.022651, acc: 0.986658, mean_q: 1.000000
58750/100000: episode: 128, duration: 13.015s, episode steps: 434, steps per second: 33, episode reward: 60.000, mean reward: 0.138 [0.000, 10.000], mean action: 3.000 [3.000, 3.000], mean observation: 72.919 [0.000, 228.000], loss: 3.657577, mean_absolute_error: 0.023798, acc: 0.988119, mean_q: 1.000000
59176/100000: episode: 129, duration: 12.747s, episode steps: 426, steps per second: 33, episode reward: 60.000, mean reward: 0.141 [0.000, 10.000], mean action: 3.005 [3.000, 5.000], mean observation: 72.913 [0.000, 228.000], loss: 2.296508, mean_absolute_error: 0.023142, acc: 0.989583, mean_q: 1.000000
59653/100000: episode: 130, duration: 14.280s, episode steps: 477, steps per second: 33, episode reward: 110.000, mean reward: 0.231 [0.000, 10.000], mean action: 3.010 [3.000, 8.000], mean observation: 72.867 [0.000, 228.000], loss: 0.889277, mean_absolute_error: 0.021522, acc: 0.988273, mean_q: 1.000000
60084/100000: episode: 131, duration: 12.940s, episode steps: 431, steps per second: 33, episode reward: 60.000, mean reward: 0.139 [0.000, 10.000], mean action: 3.000 [3.000, 3.000], mean observation: 72.917 [0.000, 228.000], loss: 0.812250, mean_absolute_error: 0.021473, acc: 0.987602, mean_q: 1.000000
60510/100000: episode: 132, duration: 12.855s, episode steps: 426, steps per second: 33, episode reward: 60.000, mean reward: 0.141 [0.000, 10.000], mean action: 3.000 [3.000, 3.000], mean observation: 72.910 [0.000, 228.000], loss: 6.668528, mean_absolute_error: 0.024226, acc: 0.987969, mean_q: 1.000000
60951/100000: episode: 133, duration: 13.184s, episode steps: 441, steps per second: 33, episode reward: 60.000, mean reward: 0.136 [0.000, 10.000], mean action: 3.011 [3.000, 6.000], mean observation: 72.905 [0.000, 228.000], loss: 6.516801, mean_absolute_error: 0.023675, acc: 0.988450, mean_q: 1.000000
61377/100000: episode: 134, duration: 12.783s, episode steps: 426, steps per second: 33, episode reward: 60.000, mean reward: 0.141 [0.000, 10.000], mean action: 3.000 [3.000, 3.000], mean observation: 72.915 [0.000, 228.000], loss: 0.774365, mean_absolute_error: 0.020130, acc: 0.990023, mean_q: 1.000000
61807/100000: episode: 135, duration: 12.900s, episode steps: 430, steps per second: 33, episode reward: 60.000, mean reward: 0.140 [0.000, 10.000], mean action: 3.012 [3.000, 6.000], mean observation: 72.918 [0.000, 228.000], loss: 0.730819, mean_absolute_error: 0.019347, acc: 0.988808, mean_q: 1.000000
62241/100000: episode: 136, duration: 12.967s, episode steps: 434, steps per second: 33, episode reward: 60.000, mean reward: 0.138 [0.000, 10.000], mean action: 3.000 [3.000, 3.000], mean observation: 72.908 [0.000, 228.000], loss: 0.739101, mean_absolute_error: 0.019514, acc: 0.989127, mean_q: 1.000000
62688/100000: episode: 137, duration: 13.364s, episode steps: 447, steps per second: 33, episode reward: 60.000, mean reward: 0.134 [0.000, 10.000], mean action: 3.000 [3.000, 3.000], mean observation: 72.911 [0.000, 228.000], loss: 0.831790, mean_absolute_error: 0.021591, acc: 0.988465, mean_q: 1.000000
63121/100000: episode: 138, duration: 13.000s, episode steps: 433, steps per second: 33, episode reward: 60.000, mean reward: 0.139 [0.000, 10.000], mean action: 3.000 [3.000, 3.000], mean observation: 72.914 [0.000, 228.000], loss: 7.999557, mean_absolute_error: 0.025455, acc: 0.988380, mean_q: 1.000000
63546/100000: episode: 139, duration: 13.515s, episode steps: 425, steps per second: 31, episode reward: 60.000, mean reward: 0.141 [0.000, 10.000], mean action: 3.007 [3.000, 6.000], mean observation: 72.910 [0.000, 228.000], loss: 6.665014, mean_absolute_error: 0.023986, acc: 0.988235, mean_q: 1.000000
63987/100000: episode: 140, duration: 13.334s, episode steps: 441, steps per second: 33, episode reward: 60.000, mean reward: 0.136 [0.000, 10.000], mean action: 2.995 [1.000, 3.000], mean observation: 72.906 [0.000, 228.000], loss: 0.847174, mean_absolute_error: 0.021910, acc: 0.988662, mean_q: 1.000000
64433/100000: episode: 141, duration: 13.484s, episode steps: 446, steps per second: 33, episode reward: 60.000, mean reward: 0.135 [0.000, 10.000], mean action: 3.004 [3.000, 5.000], mean observation: 72.907 [0.000, 228.000], loss: 0.801544, mean_absolute_error: 0.021092, acc: 0.988299, mean_q: 1.000000
64862/100000: episode: 142, duration: 12.891s, episode steps: 429, steps per second: 33, episode reward: 60.000, mean reward: 0.140 [0.000, 10.000], mean action: 3.000 [3.000, 3.000], mean observation: 72.908 [0.000, 228.000], loss: 6.596956, mean_absolute_error: 0.023347, acc: 0.989948, mean_q: 1.000000
65301/100000: episode: 143, duration: 13.241s, episode steps: 439, steps per second: 33, episode reward: 60.000, mean reward: 0.137 [0.000, 10.000], mean action: 2.995 [1.000, 3.000], mean observation: 72.907 [0.000, 228.000], loss: 0.756605, mean_absolute_error: 0.020061, acc: 0.988468, mean_q: 1.000000
65740/100000: episode: 144, duration: 13.186s, episode steps: 439, steps per second: 33, episode reward: 60.000, mean reward: 0.137 [0.000, 10.000], mean action: 3.014 [3.000, 8.000], mean observation: 72.913 [0.000, 228.000], loss: 0.754340, mean_absolute_error: 0.019724, acc: 0.990248, mean_q: 1.000000
66175/100000: episode: 145, duration: 13.118s, episode steps: 435, steps per second: 33, episode reward: 60.000, mean reward: 0.138 [0.000, 10.000], mean action: 2.998 [0.000, 5.000], mean observation: 72.912 [0.000, 228.000], loss: 13.629837, mean_absolute_error: 0.026191, acc: 0.991738, mean_q: 1.000000
66613/100000: episode: 146, duration: 13.218s, episode steps: 438, steps per second: 33, episode reward: 60.000, mean reward: 0.137 [0.000, 10.000], mean action: 3.000 [3.000, 3.000], mean observation: 72.911 [0.000, 228.000], loss: 0.728253, mean_absolute_error: 0.019346, acc: 0.988870, mean_q: 1.000000
67050/100000: episode: 147, duration: 13.268s, episode steps: 437, steps per second: 33, episode reward: 60.000, mean reward: 0.137 [0.000, 10.000], mean action: 3.009 [3.000, 7.000], mean observation: 72.912 [0.000, 228.000], loss: 0.771391, mean_absolute_error: 0.020079, acc: 0.990132, mean_q: 1.000000
67487/100000: episode: 148, duration: 13.208s, episode steps: 437, steps per second: 33, episode reward: 60.000, mean reward: 0.137 [0.000, 10.000], mean action: 2.998 [2.000, 3.000], mean observation: 72.911 [0.000, 228.000], loss: 1.055708, mean_absolute_error: 0.023504, acc: 0.988630, mean_q: 1.000000
67917/100000: episode: 149, duration: 13.018s, episode steps: 430, steps per second: 33, episode reward: 60.000, mean reward: 0.140 [0.000, 10.000], mean action: 3.000 [3.000, 3.000], mean observation: 72.904 [0.000, 228.000], loss: 0.734914, mean_absolute_error: 0.019412, acc: 0.989390, mean_q: 1.000000
68349/100000: episode: 150, duration: 13.049s, episode steps: 432, steps per second: 33, episode reward: 60.000, mean reward: 0.139 [0.000, 10.000], mean action: 3.000 [3.000, 3.000], mean observation: 72.908 [0.000, 228.000], loss: 0.879950, mean_absolute_error: 0.020989, acc: 0.988860, mean_q: 1.000000
68776/100000: episode: 151, duration: 12.993s, episode steps: 427, steps per second: 33, episode reward: 60.000, mean reward: 0.141 [0.000, 10.000], mean action: 3.002 [2.000, 5.000], mean observation: 72.910 [0.000, 228.000], loss: 0.810485, mean_absolute_error: 0.021195, acc: 0.988583, mean_q: 1.000000
69200/100000: episode: 152, duration: 13.099s, episode steps: 424, steps per second: 32, episode reward: 60.000, mean reward: 0.142 [0.000, 10.000], mean action: 3.000 [3.000, 3.000], mean observation: 72.911 [0.000, 228.000], loss: 6.769201, mean_absolute_error: 0.024035, acc: 0.989608, mean_q: 1.000000
69634/100000: episode: 153, duration: 13.483s, episode steps: 434, steps per second: 32, episode reward: 60.000, mean reward: 0.138 [0.000, 10.000], mean action: 3.007 [3.000, 6.000], mean observation: 72.911 [0.000, 228.000], loss: 6.727956, mean_absolute_error: 0.025991, acc: 0.990351, mean_q: 1.000000
70065/100000: episode: 154, duration: 13.497s, episode steps: 431, steps per second: 32, episode reward: 60.000, mean reward: 0.139 [0.000, 10.000], mean action: 3.000 [3.000, 3.000], mean observation: 72.912 [0.000, 228.000], loss: 0.831990, mean_absolute_error: 0.019917, acc: 0.988907, mean_q: 1.000000
70500/100000: episode: 155, duration: 13.260s, episode steps: 435, steps per second: 33, episode reward: 60.000, mean reward: 0.138 [0.000, 10.000], mean action: 3.000 [3.000, 3.000], mean observation: 72.908 [0.000, 228.000], loss: 0.903836, mean_absolute_error: 0.021813, acc: 0.987644, mean_q: 1.000000
70924/100000: episode: 156, duration: 12.851s, episode steps: 424, steps per second: 33, episode reward: 60.000, mean reward: 0.142 [0.000, 10.000], mean action: 3.000 [3.000, 3.000], mean observation: 72.912 [0.000, 228.000], loss: 0.813252, mean_absolute_error: 0.020972, acc: 0.990050, mean_q: 1.000000
71351/100000: episode: 157, duration: 12.881s, episode steps: 427, steps per second: 33, episode reward: 60.000, mean reward: 0.141 [0.000, 10.000], mean action: 3.000 [3.000, 3.000], mean observation: 72.906 [0.000, 228.000], loss: 0.762409, mean_absolute_error: 0.019593, acc: 0.991437, mean_q: 1.000000
71898/100000: episode: 158, duration: 16.541s, episode steps: 547, steps per second: 33, episode reward: 60.000, mean reward: 0.110 [0.000, 10.000], mean action: 3.004 [3.000, 5.000], mean observation: 72.901 [0.000, 228.000], loss: 5.462224, mean_absolute_error: 0.022918, acc: 0.989431, mean_q: 1.000000
72325/100000: episode: 159, duration: 12.927s, episode steps: 427, steps per second: 33, episode reward: 60.000, mean reward: 0.141 [0.000, 10.000], mean action: 3.002 [3.000, 4.000], mean observation: 72.913 [0.000, 228.000], loss: 0.815619, mean_absolute_error: 0.020754, acc: 0.991584, mean_q: 1.000000
72749/100000: episode: 160, duration: 12.806s, episode steps: 424, steps per second: 33, episode reward: 60.000, mean reward: 0.142 [0.000, 10.000], mean action: 3.000 [3.000, 3.000], mean observation: 72.910 [0.000, 228.000], loss: 0.805345, mean_absolute_error: 0.020817, acc: 0.989829, mean_q: 1.000000
73176/100000: episode: 161, duration: 12.953s, episode steps: 427, steps per second: 33, episode reward: 60.000, mean reward: 0.141 [0.000, 10.000], mean action: 3.000 [3.000, 3.000], mean observation: 72.909 [0.000, 228.000], loss: 6.730695, mean_absolute_error: 0.024181, acc: 0.989095, mean_q: 1.000000
73615/100000: episode: 162, duration: 13.336s, episode steps: 439, steps per second: 33, episode reward: 60.000, mean reward: 0.137 [0.000, 10.000], mean action: 3.000 [3.000, 3.000], mean observation: 72.915 [0.000, 228.000], loss: 0.837817, mean_absolute_error: 0.020025, acc: 0.989749, mean_q: 1.000000
74051/100000: episode: 163, duration: 13.276s, episode steps: 436, steps per second: 33, episode reward: 60.000, mean reward: 0.138 [0.000, 10.000], mean action: 3.000 [3.000, 3.000], mean observation: 72.911 [0.000, 228.000], loss: 6.441117, mean_absolute_error: 0.021877, acc: 0.989822, mean_q: 1.000000
74488/100000: episode: 164, duration: 13.053s, episode steps: 437, steps per second: 33, episode reward: 60.000, mean reward: 0.137 [0.000, 10.000], mean action: 3.000 [3.000, 3.000], mean observation: 72.909 [0.000, 228.000], loss: 0.819825, mean_absolute_error: 0.021008, acc: 0.990346, mean_q: 1.000000
74921/100000: episode: 165, duration: 13.294s, episode steps: 433, steps per second: 33, episode reward: 60.000, mean reward: 0.139 [0.000, 10.000], mean action: 2.998 [0.000, 5.000], mean observation: 72.913 [0.000, 228.000], loss: 0.865574, mean_absolute_error: 0.022025, acc: 0.990257, mean_q: 1.000000
75354/100000: episode: 166, duration: 13.565s, episode steps: 433, steps per second: 32, episode reward: 60.000, mean reward: 0.139 [0.000, 10.000], mean action: 3.000 [3.000, 3.000], mean observation: 72.912 [0.000, 228.000], loss: 0.708961, mean_absolute_error: 0.018218, acc: 0.992350, mean_q: 1.000000
75781/100000: episode: 167, duration: 12.828s, episode steps: 427, steps per second: 33, episode reward: 60.000, mean reward: 0.141 [0.000, 10.000], mean action: 3.007 [3.000, 6.000], mean observation: 72.911 [0.000, 228.000], loss: 8.054317, mean_absolute_error: 0.024153, acc: 0.989681, mean_q: 1.000000
76295/100000: episode: 168, duration: 15.366s, episode steps: 514, steps per second: 33, episode reward: 110.000, mean reward: 0.214 [0.000, 10.000], mean action: 3.010 [3.000, 8.000], mean observation: 72.859 [0.000, 228.000], loss: 5.656568, mean_absolute_error: 0.023127, acc: 0.990333, mean_q: 1.000000
76710/100000: episode: 169, duration: 12.637s, episode steps: 415, steps per second: 33, episode reward: 60.000, mean reward: 0.145 [0.000, 10.000], mean action: 3.012 [3.000, 8.000], mean observation: 72.909 [0.000, 228.000], loss: 0.841335, mean_absolute_error: 0.019893, acc: 0.990361, mean_q: 1.000000
77142/100000: episode: 170, duration: 13.082s, episode steps: 432, steps per second: 33, episode reward: 60.000, mean reward: 0.139 [0.000, 10.000], mean action: 3.000 [3.000, 3.000], mean observation: 72.911 [0.000, 228.000], loss: 0.767345, mean_absolute_error: 0.017985, acc: 0.991753, mean_q: 1.000000
77592/100000: episode: 171, duration: 13.450s, episode steps: 450, steps per second: 33, episode reward: 110.000, mean reward: 0.244 [0.000, 10.000], mean action: 3.002 [3.000, 4.000], mean observation: 72.899 [0.000, 228.000], loss: 0.689204, mean_absolute_error: 0.018016, acc: 0.991042, mean_q: 1.000000
78114/100000: episode: 172, duration: 15.622s, episode steps: 522, steps per second: 33, episode reward: 110.000, mean reward: 0.211 [0.000, 10.000], mean action: 3.002 [3.000, 4.000], mean observation: 72.850 [0.000, 228.000], loss: 0.714887, mean_absolute_error: 0.018603, acc: 0.990960, mean_q: 1.000000
78544/100000: episode: 173, duration: 12.919s, episode steps: 430, steps per second: 33, episode reward: 60.000, mean reward: 0.140 [0.000, 10.000], mean action: 3.000 [3.000, 3.000], mean observation: 72.913 [0.000, 228.000], loss: 0.737498, mean_absolute_error: 0.019308, acc: 0.989971, mean_q: 1.000000
78975/100000: episode: 174, duration: 12.894s, episode steps: 431, steps per second: 33, episode reward: 60.000, mean reward: 0.139 [0.000, 10.000], mean action: 3.007 [3.000, 6.000], mean observation: 72.918 [0.000, 228.000], loss: 0.797951, mean_absolute_error: 0.018914, acc: 0.990502, mean_q: 1.000000
79417/100000: episode: 175, duration: 13.310s, episode steps: 442, steps per second: 33, episode reward: 60.000, mean reward: 0.136 [0.000, 10.000], mean action: 3.000 [3.000, 3.000], mean observation: 72.908 [0.000, 228.000], loss: 12.094293, mean_absolute_error: 0.026394, acc: 0.991233, mean_q: 1.000000
79849/100000: episode: 176, duration: 12.968s, episode steps: 432, steps per second: 33, episode reward: 60.000, mean reward: 0.139 [0.000, 10.000], mean action: 3.005 [3.000, 5.000], mean observation: 72.907 [0.000, 228.000], loss: 0.746510, mean_absolute_error: 0.019266, acc: 0.991247, mean_q: 1.000000
80281/100000: episode: 177, duration: 13.057s, episode steps: 432, steps per second: 33, episode reward: 60.000, mean reward: 0.139 [0.000, 10.000], mean action: 3.012 [3.000, 8.000], mean observation: 72.914 [0.000, 228.000], loss: 0.878578, mean_absolute_error: 0.020813, acc: 0.990162, mean_q: 1.000000
80717/100000: episode: 178, duration: 12.996s, episode steps: 436, steps per second: 34, episode reward: 60.000, mean reward: 0.138 [0.000, 10.000], mean action: 2.995 [1.000, 3.000], mean observation: 72.914 [0.000, 228.000], loss: 0.747418, mean_absolute_error: 0.017835, acc: 0.990396, mean_q: 1.000000
81150/100000: episode: 179, duration: 13.030s, episode steps: 433, steps per second: 33, episode reward: 60.000, mean reward: 0.139 [0.000, 10.000], mean action: 3.000 [3.000, 3.000], mean observation: 72.909 [0.000, 228.000], loss: 2.231029, mean_absolute_error: 0.021677, acc: 0.991700, mean_q: 1.000000
81577/100000: episode: 180, duration: 12.772s, episode steps: 427, steps per second: 33, episode reward: 60.000, mean reward: 0.141 [0.000, 10.000], mean action: 3.000 [3.000, 3.000], mean observation: 72.911 [0.000, 228.000], loss: 0.783079, mean_absolute_error: 0.020112, acc: 0.990925, mean_q: 1.000000
82008/100000: episode: 181, duration: 12.885s, episode steps: 431, steps per second: 33, episode reward: 60.000, mean reward: 0.139 [0.000, 10.000], mean action: 3.000 [3.000, 3.000], mean observation: 72.910 [0.000, 228.000], loss: 0.696581, mean_absolute_error: 0.018047, acc: 0.991734, mean_q: 1.000000
82440/100000: episode: 182, duration: 13.183s, episode steps: 432, steps per second: 33, episode reward: 60.000, mean reward: 0.139 [0.000, 10.000], mean action: 3.000 [3.000, 3.000], mean observation: 72.909 [0.000, 228.000], loss: 0.836997, mean_absolute_error: 0.019772, acc: 0.991030, mean_q: 1.000000
82871/100000: episode: 183, duration: 13.619s, episode steps: 431, steps per second: 32, episode reward: 60.000, mean reward: 0.139 [0.000, 10.000], mean action: 3.000 [3.000, 3.000], mean observation: 72.916 [0.000, 228.000], loss: 13.865776, mean_absolute_error: 0.028900, acc: 0.991372, mean_q: 1.000000
83293/100000: episode: 184, duration: 12.943s, episode steps: 422, steps per second: 33, episode reward: 60.000, mean reward: 0.142 [0.000, 10.000], mean action: 2.995 [0.000, 4.000], mean observation: 72.904 [0.000, 228.000], loss: 0.801700, mean_absolute_error: 0.020705, acc: 0.989929, mean_q: 1.000000
83729/100000: episode: 185, duration: 13.499s, episode steps: 436, steps per second: 32, episode reward: 60.000, mean reward: 0.138 [0.000, 10.000], mean action: 3.009 [3.000, 7.000], mean observation: 72.912 [0.000, 228.000], loss: 6.508937, mean_absolute_error: 0.023269, acc: 0.990181, mean_q: 1.000000
84173/100000: episode: 186, duration: 13.531s, episode steps: 444, steps per second: 33, episode reward: 60.000, mean reward: 0.135 [0.000, 10.000], mean action: 3.000 [3.000, 3.000], mean observation: 72.912 [0.000, 228.000], loss: 0.848594, mean_absolute_error: 0.019877, acc: 0.991695, mean_q: 1.000000
84615/100000: episode: 187, duration: 13.393s, episode steps: 442, steps per second: 33, episode reward: 60.000, mean reward: 0.136 [0.000, 10.000], mean action: 2.998 [2.000, 3.000], mean observation: 72.915 [0.000, 228.000], loss: 0.853347, mean_absolute_error: 0.021670, acc: 0.990950, mean_q: 1.000000
85049/100000: episode: 188, duration: 13.252s, episode steps: 434, steps per second: 33, episode reward: 60.000, mean reward: 0.138 [0.000, 10.000], mean action: 3.000 [3.000, 3.000], mean observation: 72.913 [0.000, 228.000], loss: 6.567101, mean_absolute_error: 0.023859, acc: 0.991215, mean_q: 1.000000
85488/100000: episode: 189, duration: 13.414s, episode steps: 439, steps per second: 33, episode reward: 60.000, mean reward: 0.137 [0.000, 10.000], mean action: 3.007 [3.000, 6.000], mean observation: 72.911 [0.000, 228.000], loss: 12.108674, mean_absolute_error: 0.024965, acc: 0.991315, mean_q: 1.000000
85917/100000: episode: 190, duration: 13.049s, episode steps: 429, steps per second: 33, episode reward: 60.000, mean reward: 0.140 [0.000, 10.000], mean action: 2.998 [2.000, 3.000], mean observation: 72.909 [0.000, 228.000], loss: 0.760346, mean_absolute_error: 0.019699, acc: 0.989948, mean_q: 1.000000
86352/100000: episode: 191, duration: 13.272s, episode steps: 435, steps per second: 33, episode reward: 60.000, mean reward: 0.138 [0.000, 10.000], mean action: 3.002 [3.000, 4.000], mean observation: 72.912 [0.000, 228.000], loss: 0.691576, mean_absolute_error: 0.017892, acc: 0.991882, mean_q: 1.000000
86777/100000: episode: 192, duration: 12.884s, episode steps: 425, steps per second: 33, episode reward: 60.000, mean reward: 0.141 [0.000, 10.000], mean action: 2.998 [2.000, 3.000], mean observation: 72.913 [0.000, 228.000], loss: 0.688200, mean_absolute_error: 0.018250, acc: 0.989853, mean_q: 1.000000
87199/100000: episode: 193, duration: 12.817s, episode steps: 422, steps per second: 33, episode reward: 60.000, mean reward: 0.142 [0.000, 10.000], mean action: 3.000 [3.000, 3.000], mean observation: 72.910 [0.000, 228.000], loss: 0.798901, mean_absolute_error: 0.020241, acc: 0.992595, mean_q: 1.000000
87631/100000: episode: 194, duration: 13.218s, episode steps: 432, steps per second: 33, episode reward: 60.000, mean reward: 0.139 [0.000, 10.000], mean action: 3.000 [3.000, 3.000], mean observation: 72.906 [0.000, 228.000], loss: 0.758069, mean_absolute_error: 0.017800, acc: 0.991536, mean_q: 1.000000
88076/100000: episode: 195, duration: 13.552s, episode steps: 445, steps per second: 33, episode reward: 60.000, mean reward: 0.135 [0.000, 10.000], mean action: 3.002 [3.000, 4.000], mean observation: 72.907 [0.000, 228.000], loss: 0.755982, mean_absolute_error: 0.019846, acc: 0.989045, mean_q: 1.000000
88513/100000: episode: 196, duration: 13.312s, episode steps: 437, steps per second: 33, episode reward: 60.000, mean reward: 0.137 [0.000, 10.000], mean action: 3.000 [3.000, 3.000], mean observation: 72.915 [0.000, 228.000], loss: 0.821040, mean_absolute_error: 0.020977, acc: 0.990775, mean_q: 1.000000
88975/100000: episode: 197, duration: 14.064s, episode steps: 462, steps per second: 33, episode reward: 110.000, mean reward: 0.238 [0.000, 10.000], mean action: 3.000 [2.000, 4.000], mean observation: 72.892 [0.000, 228.000], loss: 0.816265, mean_absolute_error: 0.019451, acc: 0.990530, mean_q: 1.000000
89401/100000: episode: 198, duration: 12.976s, episode steps: 426, steps per second: 33, episode reward: 60.000, mean reward: 0.141 [0.000, 10.000], mean action: 3.007 [3.000, 6.000], mean observation: 72.912 [0.000, 228.000], loss: 0.783816, mean_absolute_error: 0.020237, acc: 0.990244, mean_q: 1.000000
89827/100000: episode: 199, duration: 12.980s, episode steps: 426, steps per second: 33, episode reward: 60.000, mean reward: 0.141 [0.000, 10.000], mean action: 3.000 [3.000, 3.000], mean observation: 72.907 [0.000, 228.000], loss: 3.702346, mean_absolute_error: 0.023155, acc: 0.990390, mean_q: 1.000000
90270/100000: episode: 200, duration: 13.735s, episode steps: 443, steps per second: 32, episode reward: 60.000, mean reward: 0.135 [0.000, 10.000], mean action: 3.020 [3.000, 7.000], mean observation: 72.914 [0.000, 228.000], loss: 6.430346, mean_absolute_error: 0.023296, acc: 0.991112, mean_q: 1.000000
90691/100000: episode: 201, duration: 12.984s, episode steps: 421, steps per second: 32, episode reward: 60.000, mean reward: 0.143 [0.000, 10.000], mean action: 2.995 [1.000, 3.000], mean observation: 72.908 [0.000, 228.000], loss: 6.711028, mean_absolute_error: 0.023351, acc: 0.990202, mean_q: 1.000000
91130/100000: episode: 202, duration: 14.014s, episode steps: 439, steps per second: 31, episode reward: 60.000, mean reward: 0.137 [0.000, 10.000], mean action: 3.000 [3.000, 3.000], mean observation: 72.908 [0.000, 228.000], loss: 0.857046, mean_absolute_error: 0.020169, acc: 0.991173, mean_q: 1.000000
91578/100000: episode: 203, duration: 13.709s, episode steps: 448, steps per second: 33, episode reward: 60.000, mean reward: 0.134 [0.000, 10.000], mean action: 3.000 [3.000, 3.000], mean observation: 72.908 [0.000, 228.000], loss: 7.718023, mean_absolute_error: 0.023832, acc: 0.991211, mean_q: 1.000000
92011/100000: episode: 204, duration: 13.224s, episode steps: 433, steps per second: 33, episode reward: 60.000, mean reward: 0.139 [0.000, 10.000], mean action: 2.993 [0.000, 3.000], mean observation: 72.912 [0.000, 228.000], loss: 0.776836, mean_absolute_error: 0.018400, acc: 0.990834, mean_q: 1.000000
done, took 2826.234 seconds
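Each episode line above packs its metrics (cumulative step, episode reward, loss, `mean_q`, etc.) into one fixed-format string. To work with this log further, e.g. to plot episode reward or loss over training, the lines can be parsed back into numbers. The sketch below is not part of keras-rl or any official tooling; the regex and field names are assumptions read off the log format shown above.

```python
import re

# Field layout assumed from the log lines above; [min, max] brackets are
# matched but discarded. All values in this log are non-negative.
LINE_RE = re.compile(
    r"(?P<step>\d+)/(?P<total>\d+): episode: (?P<episode>\d+), "
    r"duration: (?P<duration>[\d.]+)s, episode steps: (?P<steps>\d+), "
    r"steps per second: (?P<sps>\d+), episode reward: (?P<reward>[\d.]+), "
    r"mean reward: (?P<mean_reward>[\d.]+) \[[\d.]+, [\d.]+\], "
    r"mean action: (?P<mean_action>[\d.]+) \[[\d.]+, [\d.]+\], "
    r"mean observation: (?P<mean_obs>[\d.]+) \[[\d.]+, [\d.]+\], "
    r"loss: (?P<loss>[\d.]+), mean_absolute_error: (?P<mae>[\d.]+), "
    r"acc: (?P<acc>[\d.]+), mean_q: (?P<mean_q>[\d.]+)"
)

# Integer-valued fields; everything else stays a float.
INT_FIELDS = ("step", "total", "episode", "steps", "sps")

def parse_log_line(line):
    """Parse one episode line into a dict of metrics, or None if it
    is not an episode line (e.g. the final 'done, took ...' line)."""
    m = LINE_RE.match(line.strip())
    if m is None:
        return None
    rec = {k: float(v) for k, v in m.groupdict().items()}
    for k in INT_FIELDS:
        rec[k] = int(rec[k])
    return rec
```

With records in hand, reward and loss curves are one list comprehension away, e.g. `[(r["episode"], r["reward"]) for r in records if r]`.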