Python Deep Learning: First Steps

Image Downscaling

  1. The resize method in opencv-python (a usage sketch follows this list)
    1. INTER_NEAREST - nearest-neighbor interpolation
    2. INTER_LINEAR - bilinear interpolation (the default)
    3. INTER_AREA - resampling using pixel area relation. This may be a better method for image decimation, but when enlarging an image it gives results similar to INTER_NEAREST.
    4. INTER_CUBIC - bicubic interpolation over a 4x4 pixel neighborhood
    5. INTER_LANCZOS4 - Lanczos interpolation over an 8x8 pixel neighborhood
  2. The resize method in numpy
    • The difference between numpy's resize and reshape (see the second sketch below)
  3. Shrinking an image with a convolution
    • How to compute the output image size (see the formula below)
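
A minimal sketch of cv2.resize with the interpolation flags listed above; the file name "input.png" and the scale factors are placeholder assumptions.

import cv2

# Load any image; "input.png" is a placeholder name.
img = cv2.imread("input.png")

# Shrink to half size; INTER_AREA is usually the best choice when shrinking.
small = cv2.resize(img, None, fx=0.5, fy=0.5, interpolation=cv2.INTER_AREA)

# Enlarge to double size; INTER_CUBIC (or INTER_LANCZOS4) gives smoother results.
big = cv2.resize(img, None, fx=2.0, fy=2.0, interpolation=cv2.INTER_CUBIC)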
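
A small sketch of the resize/reshape difference from item 2: reshape requires the element count to stay the same, np.resize may change the total size and repeats the data to fill it, and ndarray.resize works in place and zero-fills new slots.

import numpy as np

a = np.arange(6)                      # [0 1 2 3 4 5]

# reshape: total element count must match (6 = 2 * 3); returns a view when possible.
b = a.reshape(2, 3)

# np.resize: the total size may change; the data is repeated to fill the new shape.
c = np.resize(a, (3, 3))              # [[0 1 2], [3 4 5], [0 1 2]]

# ndarray.resize: modifies the array in place; new slots are filled with zeros.
d = np.arange(6)
d.resize((3, 3), refcheck=False)      # [[0 1 2], [3 4 5], [0 0 0]]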
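
For item 3, the output size of a convolution follows the standard formula: with input width W, kernel size K, padding P, and stride S, the output width is floor((W - K + 2P) / S) + 1. A tiny check in Python:

def conv_output_size(w, k, p, s):
    # floor((W - K + 2P) / S) + 1
    return (w - k + 2 * p) // s + 1

# A 224x224 input, 3x3 kernel, padding 1, stride 2 -> 112x112 output.
print(conv_output_size(224, 3, 1, 2))  # 112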

Learning Rate

Fixed Learning Rate

  • A fixed learning rate poses a trade-off: if it is too small, training takes too long; if it is too large, the loss tends to oscillate and fail to converge (a sketch of both failure modes follows).
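
A minimal plain-Python sketch of both failure modes, using the same loss as the example below, loss = (w + 1)^2 (the specific learning-rate values here are illustrative assumptions):

# Gradient descent on loss = (w + 1)^2, whose gradient is 2 * (w + 1).
def descend(lr, steps=40, w=5.0):
    for _ in range(steps):
        w -= lr * 2 * (w + 1)
    return w

print(descend(0.0001))  # too small: after 40 steps w has barely moved from 5
print(descend(1.1))     # too large: w oscillates with growing amplitude and diverges
print(descend(0.1))     # reasonable: w ends up close to the minimum at w = -1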

Exponentially Decaying Learning Rate
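
With staircase=True, tf.train.exponential_decay computes

learning_rate = LEARNING_RATE_BASE * LEARNING_RATE_DECAY ^ floor(global_step / LEARNING_RATE_STEP)

(without staircase the exponent is the raw ratio global_step / LEARNING_RATE_STEP). In the run below, global_step has already advanced to i + 1 by the time the rate is printed, which is why step 0 shows 0.1 * 0.99 = 0.099.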

import tensorflow as tf

# Hyperparameters recovered from the printed output below:
# initial rate 0.1, decayed by a factor of 0.99 after every training step.
LEARNING_RATE_BASE = 0.1
LEARNING_RATE_DECAY = 0.99
LEARNING_RATE_STEP = 1

w = tf.Variable(tf.constant(5, dtype=tf.float32))
# Step counter; must not be trainable
global_step = tf.Variable(0, trainable=False)
# Define the loss function
loss = tf.square(w + 1)
# Exponentially decaying learning rate
learning_rate = tf.train.exponential_decay(LEARNING_RATE_BASE, global_step,
                                           LEARNING_RATE_STEP, LEARNING_RATE_DECAY,
                                           staircase=True)
# Define the backpropagation (training) step
train_step = tf.train.GradientDescentOptimizer(learning_rate).minimize(loss, global_step=global_step)
with tf.Session() as sess:
    init_op = tf.global_variables_initializer()
    sess.run(init_op)
    for i in range(40):
        sess.run(train_step)
        w_val = sess.run(w)
        loss_val = sess.run(loss)
        learning_rate_val = sess.run(learning_rate)
        print("After %s steps: learning rate is %f, w is %f, loss is %f" % (i, learning_rate_val, w_val, loss_val))

Output:
After 0 steps: learning rate is 0.099000, w is 3.800000, loss is 23.040001
After 1 steps: learning rate is 0.098010, w is 2.849600, loss is 14.819419
After 2 steps: learning rate is 0.097030, w is 2.095001, loss is 9.579033
After 3 steps: learning rate is 0.096060, w is 1.494386, loss is 6.221961
After 4 steps: learning rate is 0.095099, w is 1.015167, loss is 4.060896
After 5 steps: learning rate is 0.094148, w is 0.631886, loss is 2.663051
After 6 steps: learning rate is 0.093207, w is 0.324608, loss is 1.754587
After 7 steps: learning rate is 0.092274, w is 0.077684, loss is 1.161403
After 8 steps: learning rate is 0.091352, w is -0.121202, loss is 0.772287
After 9 steps: learning rate is 0.090438, w is -0.281761, loss is 0.515867
After 10 steps: learning rate is 0.089534, w is -0.411674, loss is 0.346128
After 11 steps: learning rate is 0.088638, w is -0.517024, loss is 0.233266
After 12 steps: learning rate is 0.087752, w is -0.602644, loss is 0.157891
After 13 steps: learning rate is 0.086875, w is -0.672382, loss is 0.107334
After 14 steps: learning rate is 0.086006, w is -0.729305, loss is 0.073276
After 15 steps: learning rate is 0.085146, w is -0.775868, loss is 0.050235
After 16 steps: learning rate is 0.084294, w is -0.814036, loss is 0.034583
After 17 steps: learning rate is 0.083451, w is -0.845387, loss is 0.023905
After 18 steps: learning rate is 0.082617, w is -0.871193, loss is 0.016591
After 19 steps: learning rate is 0.081791, w is -0.892476, loss is 0.011561
After 20 steps: learning rate is 0.080973, w is -0.910065, loss is 0.008088
After 21 steps: learning rate is 0.080163, w is -0.924629, loss is 0.005681
After 22 steps: learning rate is 0.079361, w is -0.936713, loss is 0.004005
After 23 steps: learning rate is 0.078568, w is -0.946758, loss is 0.002835
After 24 steps: learning rate is 0.077782, w is -0.955125, loss is 0.002014
After 25 steps: learning rate is 0.077004, w is -0.962106, loss is 0.001436
After 26 steps: learning rate is 0.076234, w is -0.967942, loss is 0.001028
After 27 steps: learning rate is 0.075472, w is -0.972830, loss is 0.000738
After 28 steps: learning rate is 0.074717, w is -0.976931, loss is 0.000532
After 29 steps: learning rate is 0.073970, w is -0.980378, loss is 0.000385
After 30 steps: learning rate is 0.073230, w is -0.983281, loss is 0.000280
After 31 steps: learning rate is 0.072498, w is -0.985730, loss is 0.000204
After 32 steps: learning rate is 0.071773, w is -0.987799, loss is 0.000149
After 33 steps: learning rate is 0.071055, w is -0.989550, loss is 0.000109
After 34 steps: learning rate is 0.070345, w is -0.991035, loss is 0.000080
After 35 steps: learning rate is 0.069641, w is -0.992297, loss is 0.000059
After 36 steps: learning rate is 0.068945, w is -0.993369, loss is 0.000044
After 37 steps: learning rate is 0.068255, w is -0.994284, loss is 0.000033
After 38 steps: learning rate is 0.067573, w is -0.995064, loss is 0.000024
After 39 steps: learning rate is 0.066897, w is -0.995731, loss is 0.000018

Loss Functions

  • Without regularization:
    loss_mse = tf.reduce_mean(tf.square(y - y_))
  • With regularization, to guard against overfitting (a fuller sketch follows the figures below):
    tf.add_to_collection('losses', tf.contrib.layers.l2_regularizer(regularizer)(w))
    loss_mse = tf.reduce_mean(tf.square(y - y_))
    loss_total = loss_mse + tf.add_n(tf.get_collection('losses'))
  • Fitted curve of the network trained with regularization (figure)
  • Fitted curve of the network trained without regularization (figure)
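
A minimal self-contained version of the regularized-loss pattern above, in the same TF1 style; the layer shapes, the L2 weight 0.01, and the variable names are illustrative assumptions:

import tensorflow as tf

REGULARIZER = 0.01  # illustrative L2 penalty weight (an assumption)

x = tf.placeholder(tf.float32, shape=(None, 2))
y_ = tf.placeholder(tf.float32, shape=(None, 1))

# One linear layer; its weight matrix gets an L2 penalty in the 'losses' collection.
w = tf.Variable(tf.random_normal([2, 1], stddev=1.0))
b = tf.Variable(tf.zeros([1]))
tf.add_to_collection('losses', tf.contrib.layers.l2_regularizer(REGULARIZER)(w))
y = tf.matmul(x, w) + b

# Total loss = mean squared error + the sum of all collected regularization terms.
loss_mse = tf.reduce_mean(tf.square(y - y_))
loss_total = loss_mse + tf.add_n(tf.get_collection('losses'))
train_step = tf.train.GradientDescentOptimizer(0.01).minimize(loss_total)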