[Tensorflow] Multi-Variable linear regression (use matrix)
BadSchool
2017. 7. 30. 18:12
Recap
- Hypothesis (how predictions are made)
- Cost function (how the cost is computed)
- Gradient descent algorithm (how the cost is minimized)
As the number of variables grows, writing out every term separately becomes unwieldy, so we use a matrix to compute them all at once.
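To see why the matrix form helps, here is a minimal NumPy sketch (NumPy stands in for TensorFlow here, and the weight and bias values are made up purely for illustration): one matrix multiply evaluates the hypothesis for every sample at once instead of summing w1*x1 + w2*x2 + w3*x3 per sample.

```python
import numpy as np

# Five samples, three features each (same data as the TensorFlow code below)
X = np.array([[73., 80., 75.],
              [93., 88., 93.],
              [89., 91., 90.],
              [96., 98., 100.],
              [73., 66., 70.]])          # shape (5, 3)

# Hypothetical weights and bias -- in practice these are learned
W = np.array([[1.0], [0.5], [0.5]])      # shape (3, 1)
b = 1.0

# H = XW + b computes the hypothesis for all five samples in one shot
H = X @ W + b
print(H.shape)  # (5, 1)
```

The shapes are the key: (5, 3) times (3, 1) yields (5, 1), one prediction per sample, which is exactly what the placeholder shapes in the TensorFlow code encode.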
# Multi-variable linear regression using a matrix
import tensorflow as tf

x_data = [[73., 80., 75.], [93., 88., 93.], [89., 91., 90.],
          [96., 98., 100.], [73., 66., 70.]]
y_data = [[152.], [185.], [180.], [196.], [142.]]

# Placeholders: any number of samples, 3 features in, 1 target out
X = tf.placeholder(tf.float32, shape=[None, 3])
Y = tf.placeholder(tf.float32, shape=[None, 1])

W = tf.Variable(tf.random_normal([3, 1]), name='weight')
b = tf.Variable(tf.random_normal([1]), name='bias')

# Hypothesis in matrix form: H(X) = XW + b
hypothesis = tf.matmul(X, W) + b

# Mean-squared-error cost, minimized by gradient descent
cost = tf.reduce_mean(tf.square(hypothesis - Y))
optimizer = tf.train.GradientDescentOptimizer(learning_rate=1e-5)
train = optimizer.minimize(cost)

sess = tf.Session()
sess.run(tf.global_variables_initializer())
for step in range(2001):
    cost_val, hy_val, _ = sess.run([cost, hypothesis, train],
                                   feed_dict={X: x_data, Y: y_data})
    if step % 10 == 0:
        print(step, "Cost: ", cost_val, "\nPrediction:\n", hy_val)
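For readers curious what GradientDescentOptimizer is doing under the hood, the same training loop can be written by hand in NumPy. This is only a sketch, not TensorFlow's actual implementation; the 2/m factor comes from differentiating the mean-squared-error cost, and the random seed is chosen arbitrarily.

```python
import numpy as np

# Same training data as the TensorFlow code above
x_data = np.array([[73., 80., 75.], [93., 88., 93.], [89., 91., 90.],
                   [96., 98., 100.], [73., 66., 70.]])
y_data = np.array([[152.], [185.], [180.], [196.], [142.]])

rng = np.random.default_rng(0)
W = rng.normal(size=(3, 1))   # random init, like tf.random_normal([3, 1])
b = rng.normal(size=(1,))
lr = 1e-5                     # same learning rate as the TF version
m = x_data.shape[0]

for step in range(2001):
    hypothesis = x_data @ W + b                 # H = XW + b
    error = hypothesis - y_data
    cost = np.mean(error ** 2)                  # mean-squared-error cost
    # Gradients of the MSE cost with respect to W and b
    dW = (2.0 / m) * x_data.T @ error
    db = (2.0 / m) * np.sum(error)
    W -= lr * dW                                # gradient descent update
    b -= lr * db

print("final cost:", cost)
```

With three highly correlated exam-score features, this converges to a small cost within 2001 steps, mirroring the falling Cost values the TensorFlow loop prints.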
==================================================================