# TensorFlow Getting Started

## Basic Example

TensorFlow is more than a deep learning framework. It is a general computation framework for performing arbitrary mathematical operations in a parallel and distributed way. One such example is described below.

## Linear Regression

The main steps of a TensorFlow script are:

1. Declare placeholders (`x_ph`, `y_ph`) and variables (`W`, `b`)
2. Define the initialization operator (`init`)
3. Declare operations on the placeholders and variables (`y_pred`, `loss`, `train_op`)
4. Create a session (`sess`)
5. Run the initialization operator (`sess.run(init)`)
6. Run some graph operations (e.g. `sess.run([train_op, loss], feed_dict={x_ph: x, y_ph: y})`)

```python
'''
Create a linear model that tries to fit the line
y = x + 2 using an SGD optimizer to minimize
a mean-squared-error loss function.
'''
import tensorflow as tf
import numpy as np

# number of epochs
num_epoch = 100

# training data x and labels y
x = np.array([0., 1., 2., 3.], dtype=np.float32)
y = np.array([2., 3., 4., 5.], dtype=np.float32)

# convert x and y to 4x1 matrices
x = np.reshape(x, [4, 1])
y = np.reshape(y, [4, 1])

# test set (using a little trick)
x_test = x + 0.5
y_test = y + 0.5

# This part of the script builds the TensorFlow graph using the Python API

# First declare placeholders for input x and label y.
# Placeholders are TensorFlow nodes that must be explicitly fed with
# input data at run time.
x_ph = tf.placeholder(tf.float32, shape=[None, 1])
y_ph = tf.placeholder(tf.float32, shape=[None, 1])

# Variables (unless specified otherwise) will be learnt as the
# GradientDescentOptimizer is run.
# Declare the weight variable initialized from a truncated normal distribution
W = tf.Variable(tf.truncated_normal([1, 1], stddev=0.1))
# Declare the bias variable initialized to the constant 0.1
b = tf.Variable(tf.constant(0.1, shape=[1]))

# Operator that initializes the variables just declared
init = tf.global_variables_initializer()

# In this part of the script, we build operators storing operations
# on the previous variables and placeholders.
# model: y = W * x + b
y_pred = x_ph * W + b

# loss function: half the mean squared error
loss = tf.reduce_mean(tf.square(y_pred - y_ph)) / 2
# create the training operator
train_op = tf.train.GradientDescentOptimizer(0.1).minimize(loss)

# This part of the script runs the TensorFlow graph (variables and
# operators) just built.
with tf.Session() as sess:
    # initialize all the variables by running the initializer operator
    sess.run(init)
    for epoch in range(num_epoch):
        # Run sequentially the train_op and loss operators with
        # x_ph and y_ph placeholders fed by the arrays x and y
        _, loss_val = sess.run([train_op, loss], feed_dict={x_ph: x, y_ph: y})
        print('epoch %d: loss is %.4f' % (epoch, loss_val))

    # see what the model does on the test set
    # by evaluating the y_pred operator using the x_test data
    test_val = sess.run(y_pred, feed_dict={x_ph: x_test})
    print('ground truth y is: %s' % y_test.flatten())
    print('predicted y is   : %s' % test_val.flatten())
```
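The same gradient-descent fit can be sketched without TensorFlow at all. This minimal NumPy version makes the loop explicit; the learning rate of 0.1 and the 500 epochs are illustrative choices, not values from the script above:

```python
import numpy as np

# same training data: y = x + 2
x = np.array([0., 1., 2., 3.], dtype=np.float32).reshape(4, 1)
y = x + 2.

W, b = 0.1, 0.1   # initial parameter guesses (illustrative)
lr = 0.1          # learning rate (illustrative)

for epoch in range(500):
    y_pred = W * x + b
    err = y_pred - y
    # gradients of the half-mean-squared-error loss
    grad_W = np.mean(err * x)
    grad_b = np.mean(err)
    W -= lr * grad_W
    b -= lr * grad_b

print(W, b)  # converges towards W = 1, b = 2
```

Each iteration performs exactly what one `sess.run(train_op, ...)` call does in the TensorFlow script: compute the loss gradient and take one descent step.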

## Counting to 10

```python
import tensorflow as tf

# create a variable, refer to it as 'state' and set it to 0
state = tf.Variable(0)

# set 'one' to a constant equal to 1
one = tf.constant(1)

# the update op adds 'state' and 'one' and then assigns the result to 'state'
update = tf.assign(state, tf.add(state, one))

# create a session
with tf.Session() as sess:
    # initialize the session variables
    sess.run(tf.global_variables_initializer())

    print("The starting state is", sess.run(state))

    print("Run the update 10 times...")
    for count in range(10):
        # execute the update op
        sess.run(update)

    print("The end state is", sess.run(state))
```

## Installation or Setup

```
pip install --upgrade tensorflow      # for Python 2.7
pip3 install --upgrade tensorflow     # for Python 3.n
```

```
pip install --upgrade tensorflow-gpu  # for Python 2.7 and GPU
pip3 install --upgrade tensorflow-gpu # for Python 3.n and GPU
```

You can test the installation by running:

```python
import tensorflow
```

(Note that this refers to the master branch; this reference can be changed at the link above to point to the current stable release.)

## TensorFlow Basics

TensorFlow works on the principle of dataflow graphs. To perform some computation there are two steps:

1. Represent the computation as a graph.
2. Execute the graph.

Data flows through the graph as tensors; every tensor has a rank, a shape, and a type:

* Rank is the number of dimensions of the tensor (a cube or box has rank 3).
* Shape gives the sizes of those dimensions (a box can have shape 1x1x1 or 2x5x7).
* Type is the data type of each coordinate of the tensor.
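These three attributes can be illustrated with a NumPy array; TensorFlow tensors expose equivalent properties:

```python
import numpy as np

# a "box" of numbers with shape 2x5x7
t = np.zeros((2, 5, 7), dtype=np.float32)

print(t.ndim)   # rank: 3
print(t.shape)  # shape: (2, 5, 7)
print(t.dtype)  # type: float32
```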

To execute the graph:

1. Create a new session.
2. Run any Op in the graph. Usually we run the final Op, whose output is the result we expect from the computation.

An incoming edge on an Op is a data dependency on another Op. Therefore, when we run any Op, all of its incoming edges are traced back and the Ops on the other end are run as well.
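This dependency-driven execution can be sketched with a toy `Op` class (a hypothetical illustration, not TensorFlow's actual implementation): running an Op first runs every Op on its incoming edges.

```python
import numpy as np

class Op:
    """A graph node: a function plus the ops it depends on."""
    def __init__(self, fn, *inputs):
        self.fn = fn
        self.inputs = inputs

    def run(self):
        # Trace every incoming edge, run the upstream ops first,
        # then apply this op's function to their outputs.
        return self.fn(*(op.run() for op in self.inputs))

# constants have no incoming edges
matrix1 = Op(lambda: np.array([[3., 3.]]))
matrix2 = Op(lambda: np.array([[2.], [2.]]))
# the matmul op depends on both constants
product = Op(np.matmul, matrix1, matrix2)

print(product.run())  # runs matrix1 and matrix2, then the matmul
```

The real TensorFlow equivalent of this example follows below.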

```python
import tensorflow as tf

# Create a Constant op that produces a 1x2 matrix.  The op is
# added as a node to the default graph.
#
# The value returned by the constructor represents the output
# of the Constant op.
matrix1 = tf.constant([[3., 3.]])

# Create another Constant that produces a 2x1 matrix.
matrix2 = tf.constant([[2.],[2.]])

# Create a Matmul op that takes 'matrix1' and 'matrix2' as inputs.
# The returned value, 'product', represents the result of the matrix
# multiplication.
product = tf.matmul(matrix1, matrix2)

# Launch the default graph.
sess = tf.Session()

# To run the matmul op we call the session 'run()' method, passing 'product'
# which represents the output of the matmul op.  This indicates to the call
# that we want to get the output of the matmul op back.
#
# All inputs needed by the op are run automatically by the session.  They
# typically are run in parallel.
#
# The call 'run(product)' thus causes the execution of three ops in the
# graph: the two constants and matmul.
#
# The output of the op is returned in 'result' as a numpy `ndarray` object.
result = sess.run(product)
print(result)
# ==> [[ 12.]]

# Close the Session when we're done.
sess.close()
```