Finding the most appropriate value of a here is the goal of Machine Learning.
How the value of a is updated during training:
| data | label |
|---|---|
| 1 | 2 |
| 2 | 4 |
| 3 | 6 |
| 4 | 8 |
- Initialize a with a suitable random number → y = 0.5x
- Feed in data = 1 to get a prediction → 0.5 = 0.5 × 1
- Compare the prediction with the label to compute the change for a → −0.6 = grad(2, 0.5)
- Update a → 1.1 = 0.5 − (−0.6)
- Model learned so far → y = 1.1x
- Repeat with data = 2, label = 4: y = 1.1x → y = 1.6x
- Repeat with data = 3, label = 6: y = 1.6x → y = 1.9x
- Repeat with data = 4, label = 8: y = 1.9x → the model approaches the true relation y = 2x
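A minimal sketch of this loop in plain Python. The learning rate lr and the squared-error gradient 2·(y − label)·x are assumptions for illustration, since the walkthrough above leaves grad unspecified, so the numbers it prints will differ from the ones above:

```python
# Sketch of the update loop above; lr and the gradient formula are assumed.
pairs = [(1, 2), (2, 4), (3, 6), (4, 8)]
a = 0.5   # initialize a
lr = 0.1  # assumed learning rate

for data, label in pairs:
    y = a * data                   # prediction
    grad = 2 * (y - label) * data  # gradient of (y - label)^2 w.r.t. a
    a -= lr * grad                 # update a
    print('a = {:.3f}'.format(a))
```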
Image('./day1/neurons1.png')
Image('./day1/neurons2.png')
import tensorflow as tf
const = tf.constant('hello tensorflow!!')
with tf.Session() as sess:
    _const = sess.run(const)
    print(_const)
b'hello tensorflow!!'
Makes effective use of the GPU, guaranteeing computation speed while minimizing performance degradation → the framework's competitive edge
Provides various wrappers so that researchers can produce code easily
Very flexible development environment → portable to Mobile, Backend, Cloud, Raspberry Pi, etc.; models can be deployed and served much like ordinary applications
Even easier to use on Google Cloud Platform
Define-by-Run in Python
_list = 'hello python!'
print(_list)
hello python!
Define-and-Run in TensorFlow
"""Define and Run at tensorflow"""
import tensorflow as tf
# Define
const = tf.constant('hello tensorflow!!')
# Run
with tf.Session() as sess:
    _const = sess.run(const)
    print(_const)
b'hello tensorflow!!'
import tensorflow as tf
tf_string = tf.constant('hello tensorflow')
tf_int = tf.constant(10)
tf_float = tf.constant(3.14)
print(tf_string)
print(tf_int)
print(tf_float)
Tensor("Const_2:0", shape=(), dtype=string)
Tensor("Const_3:0", shape=(), dtype=int32)
Tensor("Const_4:0", shape=(), dtype=float32)
import tensorflow as tf
one = tf.constant([[1, 1, 1]])
two = tf.constant([[2], [2], [2]])
matmul = tf.matmul(one, two)
print(matmul)
Tensor("MatMul:0", shape=(1, 1), dtype=int32)
import tensorflow as tf
tf_string = tf.constant('hello tensorflow')
tf_int = tf.constant(10)
tf_float = tf.constant(3.14)
with tf.Session() as sess:
    _tf_string = sess.run(tf_string)
    _tf_int = sess.run(tf_int)
    _tf_float = sess.run(tf_float)
    print(_tf_string)
    print(_tf_int)
    print(_tf_float)
b'hello tensorflow'
10
3.14
import tensorflow as tf
one = tf.constant([[1, 1, 1]])
two = tf.constant([[2], [2], [2]])
matmul = tf.matmul(one, two)
with tf.Session() as sess:
    _matmul = sess.run(matmul)
    print(_matmul)
[[6]]
import tensorflow as tf
one = tf.constant([[1, 1, 1]])
two = tf.constant([[2], [2], [2]])
matmul = tf.matmul(one, two)
with tf.Session() as sess:
    _one, _two, _matmul = sess.run([one, two, matmul])
    print(_one)
    print(_two)
    print(_matmul)
[[1 1 1]]
[[2]
 [2]
 [2]]
[[6]]
import tensorflow as tf
const = tf.constant(10)
with tf.Session() as sess:
    print(sess.run(const))
10
import tensorflow as tf
data = [1, 2, 3, 4, 5]
pl_data = tf.placeholder(tf.float32)
with tf.Session() as sess:
    print(sess.run(pl_data, {pl_data: data}))
[ 1. 2. 3. 4. 5.]
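A placeholder only reserves a node in the graph; the actual value is supplied at run time through feed_dict, so the same graph can be run with different inputs. A minimal sketch (input values assumed):

```python
import tensorflow as tf

pl = tf.placeholder(tf.float32)
doubled = pl * 2  # a node that depends on the placeholder

with tf.Session() as sess:
    # the same graph runs with whatever data is fed in
    print(sess.run(doubled, {pl: [1.0, 2.0]}))    # [2. 4.]
    print(sess.run(doubled, {pl: [10.0, 20.0]}))  # [20. 40.]
```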
import tensorflow as tf
var = tf.Variable(10)
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    print(sess.run(var))
10
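Unlike a constant, a Variable holds mutable state that persists across sess.run calls. A minimal sketch using tf.assign (the increment is an assumed example):

```python
import tensorflow as tf

var = tf.Variable(10)
add_one = tf.assign(var, var + 1)  # op that overwrites the variable

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for _ in range(3):
        print(sess.run(add_one))  # 11, 12, 13: state persists between runs
```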
Generate a simulated dataset for training that resembles real-world data
Image('./day1/dataset.png')
Image('./day1/ax_b_1.png')
Image('./day1/ax_b_2.png')
Image('./day1/loss_fn_1.gif.png')
Finding the optimal value of w = finding the w at which the loss function reaches its minimum
Image('./day1/loss_fn_2.png')
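To make this concrete, here is a small sketch (toy data following y = 2x, assumed for illustration) that scans candidate w values and computes the MSE loss for each; the smallest loss sits at the true slope:

```python
import numpy as np

x = np.arange(10, dtype=np.float32)
y_true = 2.0 * x  # assumed ground truth: y = 2x

for w in [0.0, 1.0, 1.5, 2.0, 2.5, 3.0]:
    loss = np.mean((w * x - y_true) ** 2)  # MSE at this w
    print('w = {:.1f}, loss = {:.2f}'.format(w, loss))
```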
How to find that minimum → Gradient Descent Optimizer
$$w = w - \alpha \frac{\partial L}{\partial w}$$

Image('./day1/gradientdes.gif.png')
If the gradient is too large, w → ∞
Image('./day1/infw.png')
Multiply the gradient by a small number to control how much the weight changes → learning rate
Image('./day1/learning_rate.png')
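A quick sketch of why this matters, with the same assumed toy data: an analytic gradient step diverges when the learning rate is too large and converges when it is small:

```python
import numpy as np

x = np.arange(10, dtype=np.float32)
y_true = 2.0 * x  # assumed ground truth: y = 2x

for lr in [1e-1, 1e-3]:
    w = 0.0
    for _ in range(20):
        grad = np.mean(2 * (w * x - y_true) * x)  # d(MSE)/dw
        w -= lr * grad
    print('lr = {}, w after 20 steps = {:.3f}'.format(lr, w))
# lr = 0.1 overshoots and |w| explodes; lr = 0.001 creeps toward w = 2
```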
import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
samples = 1000
data = np.array([1e-3*float(i) for i in range(samples)])
label = 4.2 * data + 2.2 + np.random.randn(samples)
target = 4.2 * data + 2.2
plt.figure(11)
plt.ylim(-2, 10)
plt.scatter(data, label, 1, 'r')
plt.figure(21)
plt.ylim(-2, 10)
plt.scatter(data, target, 1, 'g')
<matplotlib.collections.PathCollection at 0x118462780>
x = tf.placeholder(tf.float32)
y_ = tf.placeholder(tf.float32)
w = tf.Variable(0.0)
b = tf.Variable(0.0)
y = w*x + b
loss = tf.losses.mean_squared_error(y_, y)
train_op = tf.train.GradientDescentOptimizer(1e-2).minimize(loss)
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for i in range(100):
        _, _loss = sess.run([train_op, loss], feed_dict={x: data, y_: label})
        if i%10 == 0:
            print('step: {}, loss: {}'.format(i, _loss))
step: 0, loss: 20.681596755981445
step: 10, loss: 12.917558670043945
step: 20, loss: 8.266731262207031
step: 30, loss: 5.479213714599609
step: 40, loss: 3.8069612979888916
step: 50, loss: 2.802278995513916
step: 60, loss: 2.1972241401672363
step: 70, loss: 1.831439733505249
step: 80, loss: 1.6089481115341187
step: 90, loss: 1.4723080396652222
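To see what was learned, w and b can also be fetched inside the session after the loop. This sketch re-runs the training above with extra fetch lines (not part of the original cell); with enough steps the values approach the true coefficients 4.2 and 2.2:

```python
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for i in range(1000):  # more steps than above, to get closer
        sess.run(train_op, feed_dict={x: data, y_: label})
    _w, _b = sess.run([w, b])  # fetch the trained parameters
    print('w: {}, b: {} (true: 4.2, 2.2)'.format(_w, _b))
```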
Generate a simulated dataset for training that resembles real-world data
Image('./day1/logistic_data.png')
Image('./day1/./sigmoid.png')
Image('./day1/sig_ax_b.png')
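For reference, the sigmoid squashes ax + b into the interval (0, 1), which is what lets the output be read as a probability. A quick numpy sketch of σ(z) = 1 / (1 + e^(−z)), with sample z values assumed:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for z in [-5.0, 0.0, 5.0]:
    print('sigmoid({}) = {:.4f}'.format(z, sigmoid(z)))
# large negative -> near 0, zero -> 0.5, large positive -> near 1
```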
Use a loss function to compute the difference between the label and the prediction
$$L(y', y) = \frac{1}{m}\sum_{n=1}^{m}\left[-y'\log(y) - (1-y')\log(1-y)\right] \cdots \text{cross entropy}$$

Image('./day1/cross_entropy2.png')
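A numeric sketch of the cross-entropy term for a single example (the probability values are assumed): the loss is small when the prediction agrees with the label and grows sharply when it does not:

```python
import numpy as np

def cross_entropy(y_label, y_pred):
    # -y' log(y) - (1 - y') log(1 - y) for one example
    return -y_label * np.log(y_pred) - (1 - y_label) * np.log(1 - y_pred)

print(cross_entropy(1, 0.9))  # confident and correct -> ~0.105
print(cross_entropy(1, 0.1))  # confident and wrong   -> ~2.303
```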
import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
samples = 1000
data = [float(i)*0.01 for i in range(-samples, samples)]
label = [1 if i>2.5 else 0 for i in data]
plt.plot(label)
[<matplotlib.lines.Line2D at 0x1183dc1d0>]
x = tf.placeholder(tf.float32)
y_ = tf.placeholder(tf.float32)
a = tf.Variable(0.0)
b = tf.Variable(0.0)
y = a*x + b
loss = tf.losses.sigmoid_cross_entropy(y_, y)
train_op = tf.train.GradientDescentOptimizer(1e-2).minimize(loss)
INFO:tensorflow:logits.dtype=<dtype: 'float32'>.
INFO:tensorflow:multi_class_labels.dtype=<dtype: 'float32'>.
INFO:tensorflow:losses.dtype=<dtype: 'float32'>.
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for i in range(100):
        _, _loss = sess.run([train_op, loss], feed_dict={x: data, y_: label})
        if i%10 == 0:
            print('step: {}, loss: {}'.format(i, _loss))
step: 0, loss: 0.6931459307670593
step: 10, loss: 0.40895897150039673
step: 20, loss: 0.32846373319625854
step: 30, loss: 0.2922522723674774
step: 40, loss: 0.27161192893981934
step: 50, loss: 0.25815078616142273
step: 60, loss: 0.24857492744922638
step: 70, loss: 0.24133384227752686
step: 80, loss: 0.23560315370559692
step: 90, loss: 0.23090486228466034
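To check the learned decision boundary, one could fetch a and b and compare thresholded sigmoid predictions against the labels. This evaluation sketch reuses the graph defined above and is an assumed addition, not part of the original cell:

```python
import numpy as np

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for i in range(100):
        sess.run(train_op, feed_dict={x: data, y_: label})
    _a, _b = sess.run([a, b])  # fetch the trained parameters

probs = 1.0 / (1.0 + np.exp(-(_a * np.array(data) + _b)))  # sigmoid(ax + b)
pred = (probs > 0.5).astype(int)
print('accuracy: {:.3f}'.format(np.mean(pred == np.array(label))))
```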