[Machine Learning Intro] 02. Implementing Linear Regression with TensorFlow
Contents
- Build graph using TensorFlow operations
- Placeholders
Build graph using TensorFlow operations
To implement linear regression and build its graph, let's review the previous material while looking at the figure above.
The hypothesis and cost function are shown in [Figure 1], and TensorFlow's mechanism is shown in [Figure 2].
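For reference, the hypothesis and cost function shown in [Figure 1] are the usual linear-regression definitions (restated here in case the figure does not render):

```latex
H(x) = Wx + b
\qquad
\operatorname{cost}(W, b) = \frac{1}{m} \sum_{i=1}^{m} \bigl( H(x^{(i)}) - y^{(i)} \bigr)^2
```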
# X and Y data
x_train = [1, 2, 3]
y_train = [1, 2, 3]
W = tf.Variable(tf.random_normal([1]), name='weight')
b = tf.Variable(tf.random_normal([1]), name='bias')
# Our hypothesis XW+b
hypothesis = x_train * W + b
The hypothesis expressed in Python code looks like the above.
x_train and y_train form the training set we will learn from,
while W and b are declared as tf.Variable, which is a slightly different concept from a variable in ordinary programming.
Simply put, a TensorFlow Variable is a value that TensorFlow itself updates (in order to train the model).
Since we don't know the right values for W and b, we initialize them with tf.random_normal, passing [1] as the shape argument because each is a one-dimensional array holding a single value.
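To make the shapes concrete, here is a rough NumPy analogue (an illustration, not TensorFlow itself) of what tf.random_normal([1]) and the hypothesis expression do:

```python
import numpy as np

rng = np.random.default_rng(0)

# Rough analogue of tf.random_normal([1]): a single draw from a
# standard normal distribution, stored as a shape-(1,) array.
W = rng.standard_normal(1)
b = rng.standard_normal(1)

x_train = np.array([1.0, 2.0, 3.0])

# x_train * W + b broadcasts the one-element W and b across
# every training example, giving one prediction per example.
hypothesis = x_train * W + b

print(W.shape, b.shape, hypothesis.shape)  # (1,) (1,) (3,)
```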
cost = tf.reduce_mean(tf.square(hypothesis - y_train))
The cost function expressed in code looks like the above. tf.square squares a value, and tf.reduce_mean computes the mean of the values.
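As a sanity check, the same cost can be computed by hand in plain Python. Here W = 2 and b = 0 are hypothetical values chosen only to make the arithmetic concrete:

```python
x_train = [1, 2, 3]
y_train = [1, 2, 3]

# Hypothetical fixed parameters, just to make the arithmetic concrete.
W, b = 2.0, 0.0

# Equivalent of tf.reduce_mean(tf.square(hypothesis - y_train)):
# square each residual, then average.
squared_errors = [((x * W + b) - y) ** 2 for x, y in zip(x_train, y_train)]
cost = sum(squared_errors) / len(squared_errors)
print(cost)  # ((2-1)^2 + (4-2)^2 + (6-3)^2) / 3 = 14/3 ≈ 4.667
```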
# Minimize
optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.01)
train = optimizer.minimize(cost)
The next step is to minimize the cost from before, which is written in code as above.
When we call minimize like this, TensorFlow adjusts W and b on its own to minimize the cost. For now, just accept this as magic!
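If you do want a peek behind the magic: gradient descent repeatedly subtracts the learning rate times the gradient of the cost from each parameter. A minimal hand-rolled sketch of the same loop in plain Python (the analytic gradients here are specific to this mean-squared-error cost):

```python
x_train = [1, 2, 3]
y_train = [1, 2, 3]
W, b = 0.0, 0.0   # arbitrary starting values (TensorFlow starts from random ones)
lr = 0.01         # same learning rate as GradientDescentOptimizer above
m = len(x_train)

for step in range(2001):
    errors = [(W * x + b) - y for x, y in zip(x_train, y_train)]
    # Analytic gradients of cost = mean((W*x + b - y)^2).
    dW = 2.0 / m * sum(e * x for e, x in zip(errors, x_train))
    db = 2.0 / m * sum(errors)
    W -= lr * dW
    b -= lr * db

print(round(W, 3), round(b, 3))  # W approaches 1, b approaches 0
```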
# Launch the graph in a session.
sess = tf.Session()
# Initializes global variables in the graph
sess.run(tf.global_variables_initializer())
# Fit the line
for step in range(2001):
    sess.run(train)
    if step % 20 == 0:
        print(step, sess.run(cost), sess.run(W), sess.run(b))
Next we create a session. Since we declared the Variables W and b, we must run tf.global_variables_initializer() first (only then does TensorFlow actually initialize the Variables).
And since we created the train node to minimize the cost, we run train through the session.
Printing the result every 20 steps gives the output below.
[Full source code]
import tensorflow as tf
# X and Y data
x_train = [1, 2, 3]
y_train = [1, 2, 3]
W = tf.Variable(tf.random_normal([1]), name='weight')
b = tf.Variable(tf.random_normal([1]), name='bias')
# Our hypothesis XW+b
hypothesis = x_train * W + b
# cost/loss function
cost = tf.reduce_mean(tf.square(hypothesis - y_train))
# Minimize
optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.01)
train = optimizer.minimize(cost)
# Launch the graph in a session.
sess = tf.Session()
# Initializes global variables in the graph
sess.run(tf.global_variables_initializer())
# Fit the line
for step in range(2001):
    sess.run(train)
    if step % 20 == 0:
        print(step, sess.run(cost), sess.run(W), sess.run(b))
...
...
1800 6.571583e-06 [0.9970226] [0.00676825]
1820 5.9684476e-06 [0.9971626] [0.00645017]
1840 5.420529e-06 [0.9972959] [0.00614703]
1860 4.922903e-06 [0.997423] [0.00585814]
1880 4.4713174e-06 [0.9975441] [0.00558284]
1900 4.060744e-06 [0.9976595] [0.00532046]
1920 3.688065e-06 [0.9977695] [0.00507043]
1940 3.3496133e-06 [0.9978743] [0.00483215]
1960 3.0423025e-06 [0.9979742] [0.00460506]
1980 2.7630122e-06 [0.9980694] [0.00438865]
2000 2.5094093e-06 [0.9981602] [0.0041824]
Looking at the results, we can see W approaching 1 and b approaching 0.
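As a quick check, plugging the final values from the log back into the hypothesis reproduces the training targets almost exactly:

```python
W, b = 0.9981602, 0.0041824   # final values from the log above
x_train = [1, 2, 3]
predictions = [x * W + b for x in x_train]
print(predictions)  # each prediction lands within ~0.01 of 1, 2, 3
```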
Placeholders
Above, we hard-coded the training set (x_train, y_train) before training, but with placeholders we can supply arbitrary values at run time.
# X and Y data
X = tf.placeholder(tf.float32, shape=[None])
Y = tf.placeholder(tf.float32, shape=[None])
Here shape=[None] means a one-dimensional array that can take any number of values.
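A rough NumPy analogy for this (the parameter values below are assumptions for illustration; TensorFlow itself performs the shape checking):

```python
import numpy as np

W, b = np.array([2.0]), np.array([0.5])  # hypothetical parameter values

# shape=[None] leaves the length unspecified, so the same
# hypothesis expression works for 1-D inputs of any length.
short = np.array([1.0, 2.0, 3.0])
longer = np.array([1.0, 2.0, 3.0, 4.0, 5.0])

print((short * W + b).shape)   # (3,)
print((longer * W + b).shape)  # (5,)
```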
# Fit the line
for step in range(2001):
    cost_val, W_val, b_val, _ = sess.run([cost, W, b, train],
                                         feed_dict={X: [1, 2, 3, 4, 5],
                                                    Y: [2.1, 3.1, 4.1, 5.1, 6.1]})
    if step % 20 == 0:
        print(step, cost_val, W_val, b_val)
This looks a bit complicated, but instead of calling run separately on each node, we can pass them all at once as a list, as above.
Since we used placeholders, we pass the values of X and Y through feed_dict.
[Full source code]
import tensorflow as tf
# X and Y data
X = tf.placeholder(tf.float32, shape=[None])
Y = tf.placeholder(tf.float32, shape=[None])
W = tf.Variable(tf.random_normal([1]), name='weight')
b = tf.Variable(tf.random_normal([1]), name='bias')
# Our hypothesis XW+b
hypothesis = X * W + b
# cost/loss function
cost = tf.reduce_mean(tf.square(hypothesis - Y))
# Minimize
optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.01)
train = optimizer.minimize(cost)
# Launch the graph in a session.
sess = tf.Session()
# Initializes global variables in the graph
sess.run(tf.global_variables_initializer())
# Fit the line
for step in range(2001):
    cost_val, W_val, b_val, _ = sess.run([cost, W, b, train],
                                         feed_dict={X: [1, 2, 3, 4, 5],
                                                    Y: [2.1, 3.1, 4.1, 5.1, 6.1]})
    if step % 20 == 0:
        print(step, cost_val, W_val, b_val)
...
...
1620 1.1853394e-05 [1.0022277] [1.0919573] None
1640 1.0352453e-05 [1.0020819] [1.092484] None
1660 9.040323e-06 [1.0019455] [1.0929762] None
1680 7.894941e-06 [1.0018181] [1.0934364] None
1700 6.8945096e-06 [1.001699] [1.0938662] None
1720 6.0213283e-06 [1.0015877] [1.0942678] None
1740 5.257885e-06 [1.0014838] [1.0946432] None
1760 4.592922e-06 [1.0013866] [1.0949937] None
1780 4.010986e-06 [1.0012959] [1.0953215] None
1800 3.5028606e-06 [1.0012109] [1.0956279] None
1820 3.0593212e-06 [1.0011318] [1.0959142] None
1840 2.6715702e-06 [1.0010576] [1.0961818] None
1860 2.3329264e-06 [1.0009884] [1.0964319] None
1880 2.0374566e-06 [1.0009236] [1.0966655] None
1900 1.7793939e-06 [1.0008631] [1.0968839] None
1920 1.5542213e-06 [1.0008067] [1.0970877] None
1940 1.3571125e-06 [1.0007539] [1.0972784] None
1960 1.1851548e-06 [1.0007045] [1.0974566] None
1980 1.0354879e-06 [1.0006584] [1.0976231] None
2000 9.042254e-07 [1.0006152] [1.0977786] None
Since the training data this time follows y = x + 1.1, W converges toward 1 and b toward 1.1.
References
https://www.youtube.com/watch?v=mQGwjrStQgg
Sung Kim - ML lab 02: Implementing simple linear regression with TensorFlow
'Development > Machine Learning' ์นดํ ๊ณ ๋ฆฌ์ ๋ค๋ฅธ ๊ธ
๋๊ธ
์ด ๊ธ ๊ณต์ ํ๊ธฐ
-
๊ตฌ๋
ํ๊ธฐ
๊ตฌ๋ ํ๊ธฐ
-
์นด์นด์คํก
์นด์นด์คํก
-
๋ผ์ธ
๋ผ์ธ
-
ํธ์ํฐ
ํธ์ํฐ
-
Facebook
Facebook
-
์นด์นด์ค์คํ ๋ฆฌ
์นด์นด์ค์คํ ๋ฆฌ
-
๋ฐด๋
๋ฐด๋
-
๋ค์ด๋ฒ ๋ธ๋ก๊ทธ
๋ค์ด๋ฒ ๋ธ๋ก๊ทธ
-
Pocket
Pocket
-
Evernote
Evernote
๋ค๋ฅธ ๊ธ
-
[๋จธ์ ๋ฌ๋ ์ ๋ฌธ] 03. Linear Regression ์ cost ์ต์ํ์ TensorFlow ๊ตฌํ
[๋จธ์ ๋ฌ๋ ์ ๋ฌธ] 03. Linear Regression ์ cost ์ต์ํ์ TensorFlow ๊ตฌํ
2019.08.08 -
[๋จธ์ ๋ฌ๋ ์ ๋ฌธ] 03. Linear Regression์ cost ์ต์ํ ์๊ณ ๋ฆฌ์ฆ์ ์๋ฆฌ ์ค๋ช
[๋จธ์ ๋ฌ๋ ์ ๋ฌธ] 03. Linear Regression์ cost ์ต์ํ ์๊ณ ๋ฆฌ์ฆ์ ์๋ฆฌ ์ค๋ช
2019.08.07 -
[๋จธ์ ๋ฌ๋ ์ ๋ฌธ] 02. Linear Regression์ Hypothesis์ Cost
[๋จธ์ ๋ฌ๋ ์ ๋ฌธ] 02. Linear Regression์ Hypothesis์ Cost
2019.07.04 -
[๋จธ์ ๋ฌ๋ ์ ๋ฌธ] 01. TensorFlow์ ๊ธฐ๋ณธ์ ์ธ operations
[๋จธ์ ๋ฌ๋ ์ ๋ฌธ] 01. TensorFlow์ ๊ธฐ๋ณธ์ ์ธ operations
2019.07.01