Neural Network Optimization (2): Learning Rate

Notes on the China University MOOC course "人工智能实践:Tensorflow笔记" (AI in Practice: TensorFlow Notes) by Cao Jian, Lecture 4: Neural Network Optimization. Course link

Learning rate

The learning rate (learning_rate) controls the size of each parameter update. If the learning rate is too large, the parameters being optimized oscillate around the minimum and fail to converge; if it is too small, they converge very slowly.

During training, the parameters are updated in the direction that decreases the loss function, i.e. along the negative gradient. The gradient is the derivative of the loss function loss with respect to the parameters.

(Figure: prt17.png)

(Figure: prt.png)

As the figure shows, the loss function reaches its minimum at (-1, 0), where its derivative is 0, so the final parameter is w = -1.
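As a quick illustration (my own sketch, not part of the course code), the gradient-descent update used throughout this example is simply w ← w - learning_rate * d(loss)/dw, where d(loss)/dw = 2(w + 1). In plain Python, without TensorFlow:

w = 5.0
learning_rate = 0.2
for i in range(40):
    grad = 2 * (w + 1)          # derivative of loss = (w + 1)^2
    w -= learning_rate * grad   # step against the gradient
print(w)  # approaches -1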

Code:

# Let the loss function be loss = (w+1)², with w initialized to 5.
# Back-propagation finds the optimal w, i.e. the w that minimizes loss.
import tensorflow as tf
# Define the parameter to optimize, w, initialized to 5
w = tf.Variable(tf.constant(5, dtype=tf.float32))
# Define the loss
loss = tf.square(w+1)
# Define the back-propagation method
train_step = tf.train.GradientDescentOptimizer(0.2).minimize(loss)
# Create a session and train for 40 steps
with tf.Session() as sess:
    init_op = tf.global_variables_initializer()
    sess.run(init_op)
    for i in range(40):
        sess.run(train_step)
        w_val = sess.run(w)
        loss_val = sess.run(loss)
        print("After %s steps:w is %f , loss is %f." % (i, w_val, loss_val))

Results:

After 0 steps:w is 2.600000 , loss is 12.959999.
After 1 steps:w is 1.160000 , loss is 4.665599.
After 2 steps:w is 0.296000 , loss is 1.679616.
After 3 steps:w is -0.222400 , loss is 0.604662.
After 4 steps:w is -0.533440 , loss is 0.217678.
After 5 steps:w is -0.720064 , loss is 0.078364.
After 6 steps:w is -0.832038 , loss is 0.028211.
After 7 steps:w is -0.899223 , loss is 0.010156.
After 8 steps:w is -0.939534 , loss is 0.003656.
After 9 steps:w is -0.963720 , loss is 0.001316.
After 10 steps:w is -0.978232 , loss is 0.000474.
After 11 steps:w is -0.986939 , loss is 0.000171.
After 12 steps:w is -0.992164 , loss is 0.000061.
After 13 steps:w is -0.995298 , loss is 0.000022.
After 14 steps:w is -0.997179 , loss is 0.000008.
After 15 steps:w is -0.998307 , loss is 0.000003.
After 16 steps:w is -0.998984 , loss is 0.000001.
After 17 steps:w is -0.999391 , loss is 0.000000.
After 18 steps:w is -0.999634 , loss is 0.000000.
After 19 steps:w is -0.999781 , loss is 0.000000.
After 20 steps:w is -0.999868 , loss is 0.000000.
After 21 steps:w is -0.999921 , loss is 0.000000.
After 22 steps:w is -0.999953 , loss is 0.000000.
After 23 steps:w is -0.999972 , loss is 0.000000.
After 24 steps:w is -0.999983 , loss is 0.000000.
After 25 steps:w is -0.999990 , loss is 0.000000.
After 26 steps:w is -0.999994 , loss is 0.000000.
After 27 steps:w is -0.999996 , loss is 0.000000.
After 28 steps:w is -0.999998 , loss is 0.000000.
After 29 steps:w is -0.999999 , loss is 0.000000.
After 30 steps:w is -0.999999 , loss is 0.000000.
After 31 steps:w is -1.000000 , loss is 0.000000.
After 32 steps:w is -1.000000 , loss is 0.000000.
After 33 steps:w is -1.000000 , loss is 0.000000.
After 34 steps:w is -1.000000 , loss is 0.000000.
After 35 steps:w is -1.000000 , loss is 0.000000.
After 36 steps:w is -1.000000 , loss is 0.000000.
After 37 steps:w is -1.000000 , loss is 0.000000.
After 38 steps:w is -1.000000 , loss is 0.000000.
After 39 steps:w is -1.000000 , loss is 0.000000.

The results show that as the loss decreases, w approaches -1 ever more closely: the model finds the optimal parameter w = -1.
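A quick check on the convergence rate (my own arithmetic, not from the course): with learning rate 0.2 the update is w ← w - 0.2 * 2(w + 1), so the distance to the minimum, w + 1, shrinks by a factor of 1 - 0.4 = 0.6 every step. Starting from w + 1 = 6, one step gives w + 1 = 3.6, i.e. w = 2.6, which matches the first line of the output.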

Setting the learning rate

If the learning rate is too large, the parameters being optimized oscillate around the minimum and do not converge; if it is too small, they converge very slowly.

1. For the loss function above, loss = (w + 1)², change the learning rate in the code to 1 and leave everything else unchanged. The experimental result is as follows:

(Figure: prt18.png)

The output shows that the optimization does not converge: w oscillates between 5 and -7.
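To see why (my own derivation): with learning rate 1 the update becomes w ← w - 1 * 2(w + 1) = -w - 2, so starting from w = 5 the iterates alternate 5 → -7 → 5 → -7 → ..., and the loss stays stuck at 36.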

2. For the same loss function, loss = (w + 1)², change the learning rate in the code to 0.0001 and leave everything else unchanged. The experimental result is as follows:

(Figure: prt19.png)

The output shows that the loss decreases only slowly and w changes only slightly: convergence is slow.
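The step size makes this plausible (my own arithmetic): near w = 5 each update changes w by only about 0.0001 * 2 * (5 + 1) = 0.0012, so after 40 steps w is still roughly 4.95, nowhere near -1.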

Exponentially decaying learning rate

Exponentially decaying learning rate: the learning rate is updated dynamically as the number of training steps grows, decaying over time.

(Figure: prt20.png)
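For reference, tf.train.exponential_decay computes learning_rate = LEARNING_RATE_BASE * LEARNING_RATE_DECAY^(global_step / LEARNING_RATE_STEP); with staircase=True the exponent is truncated to an integer, so the rate decays in discrete steps. A minimal plain-Python sketch of that calculation (my own illustration, not the TensorFlow source):

def exponential_decay(base_lr, global_step, decay_steps, decay_rate, staircase=True):
    # decayed_lr = base_lr * decay_rate ** (global_step / decay_steps)
    exponent = global_step // decay_steps if staircase else global_step / decay_steps
    return base_lr * decay_rate ** exponent

print(exponential_decay(0.1, 1, 1, 0.99))  # 0.099, matching the first line of the output below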

For example:

In this example the model is not trained with a fixed learning rate but with an exponentially decaying one: the initial learning rate is set to 0.1, the decay rate to 0.99, and BATCH_SIZE to 1.

# Let the loss function be loss = (w+1)², with w initialized to 5.
# Back-propagation finds the optimal w, i.e. the w that minimizes loss.
# An exponentially decaying learning rate gives large steps early in training,
# so effective convergence can be reached in fewer training steps.

import tensorflow as tf

LEARNING_RATE_BASE = 0.1  # initial learning rate
LEARNING_RATE_DECAY = 0.99  # learning-rate decay rate
LEARNING_RATE_STEP = 1  # how many batches of BATCH_SIZE are fed before the learning rate is updated; usually total samples / BATCH_SIZE

# Counter for how many batches have been run; starts at 0 and is marked not trainable
global_step = tf.Variable(0, trainable=False)
# Define the exponentially decaying learning rate
learning_rate = tf.train.exponential_decay(LEARNING_RATE_BASE, global_step, LEARNING_RATE_STEP, LEARNING_RATE_DECAY,
                                           staircase=True)
# Define the parameter to optimize, initialized to 5
w = tf.Variable(tf.constant(5, dtype=tf.float32))
# Define the loss function
loss = tf.square(w+1)
# Define the back-propagation method
train_step = tf.train.GradientDescentOptimizer(learning_rate).minimize(loss, global_step=global_step)

# Create a session and train for 40 steps
with tf.Session() as sess:
    init_op = tf.global_variables_initializer()
    sess.run(init_op)
    for i in range(40):
        sess.run(train_step)
        learning_rate_val = sess.run(learning_rate)
        global_step_val = sess.run(global_step)
        w_val = sess.run(w)
        loss_val = sess.run(loss)
        print("After %s steps: global_step is %f ,w is %f,learning rate is %f,loss is %f" % (i, global_step_val, w_val, learning_rate_val,loss_val))

Results:

After 0 steps: global_step is 1.000000 ,w is 3.800000,learning rate is 0.099000,loss is 23.040001
After 1 steps: global_step is 2.000000 ,w is 2.849600,learning rate is 0.098010,loss is 14.819419
After 2 steps: global_step is 3.000000 ,w is 2.095001,learning rate is 0.097030,loss is 9.579033
After 3 steps: global_step is 4.000000 ,w is 1.494386,learning rate is 0.096060,loss is 6.221961
After 4 steps: global_step is 5.000000 ,w is 1.015167,learning rate is 0.095099,loss is 4.060896
After 5 steps: global_step is 6.000000 ,w is 0.631886,learning rate is 0.094148,loss is 2.663051
After 6 steps: global_step is 7.000000 ,w is 0.324608,learning rate is 0.093207,loss is 1.754587
After 7 steps: global_step is 8.000000 ,w is 0.077684,learning rate is 0.092274,loss is 1.161403
After 8 steps: global_step is 9.000000 ,w is -0.121202,learning rate is 0.091352,loss is 0.772287
After 9 steps: global_step is 10.000000 ,w is -0.281761,learning rate is 0.090438,loss is 0.515867
After 10 steps: global_step is 11.000000 ,w is -0.411674,learning rate is 0.089534,loss is 0.346128
After 11 steps: global_step is 12.000000 ,w is -0.517024,learning rate is 0.088638,loss is 0.233266
After 12 steps: global_step is 13.000000 ,w is -0.602644,learning rate is 0.087752,loss is 0.157891
After 13 steps: global_step is 14.000000 ,w is -0.672382,learning rate is 0.086875,loss is 0.107334
After 14 steps: global_step is 15.000000 ,w is -0.729305,learning rate is 0.086006,loss is 0.073276
After 15 steps: global_step is 16.000000 ,w is -0.775868,learning rate is 0.085146,loss is 0.050235
After 16 steps: global_step is 17.000000 ,w is -0.814036,learning rate is 0.084294,loss is 0.034583
After 17 steps: global_step is 18.000000 ,w is -0.845387,learning rate is 0.083451,loss is 0.023905
After 18 steps: global_step is 19.000000 ,w is -0.871193,learning rate is 0.082617,loss is 0.016591
After 19 steps: global_step is 20.000000 ,w is -0.892476,learning rate is 0.081791,loss is 0.011561
After 20 steps: global_step is 21.000000 ,w is -0.910065,learning rate is 0.080973,loss is 0.008088
After 21 steps: global_step is 22.000000 ,w is -0.924629,learning rate is 0.080163,loss is 0.005681
After 22 steps: global_step is 23.000000 ,w is -0.936713,learning rate is 0.079361,loss is 0.004005
After 23 steps: global_step is 24.000000 ,w is -0.946758,learning rate is 0.078568,loss is 0.002835
After 24 steps: global_step is 25.000000 ,w is -0.955125,learning rate is 0.077782,loss is 0.002014
After 25 steps: global_step is 26.000000 ,w is -0.962106,learning rate is 0.077004,loss is 0.001436
After 26 steps: global_step is 27.000000 ,w is -0.967942,learning rate is 0.076234,loss is 0.001028
After 27 steps: global_step is 28.000000 ,w is -0.972830,learning rate is 0.075472,loss is 0.000738
After 28 steps: global_step is 29.000000 ,w is -0.976931,learning rate is 0.074717,loss is 0.000532
After 29 steps: global_step is 30.000000 ,w is -0.980378,learning rate is 0.073970,loss is 0.000385
After 30 steps: global_step is 31.000000 ,w is -0.983281,learning rate is 0.073230,loss is 0.000280
After 31 steps: global_step is 32.000000 ,w is -0.985730,learning rate is 0.072498,loss is 0.000204
After 32 steps: global_step is 33.000000 ,w is -0.987799,learning rate is 0.071773,loss is 0.000149
After 33 steps: global_step is 34.000000 ,w is -0.989550,learning rate is 0.071055,loss is 0.000109
After 34 steps: global_step is 35.000000 ,w is -0.991035,learning rate is 0.070345,loss is 0.000080
After 35 steps: global_step is 36.000000 ,w is -0.992297,learning rate is 0.069641,loss is 0.000059
After 36 steps: global_step is 37.000000 ,w is -0.993369,learning rate is 0.068945,loss is 0.000044
After 37 steps: global_step is 38.000000 ,w is -0.994284,learning rate is 0.068255,loss is 0.000033
After 38 steps: global_step is 39.000000 ,w is -0.995064,learning rate is 0.067573,loss is 0.000024
After 39 steps: global_step is 40.000000 ,w is -0.995731,learning rate is 0.066897,loss is 0.000018

The results show that the learning rate keeps decreasing as the number of training steps grows.
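As a final check (my own arithmetic): with staircase decay and LEARNING_RATE_STEP = 1, the learning rate after 40 steps is 0.1 * 0.99^40 ≈ 0.0669, which matches the 0.066897 printed on the last line.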