Building a Neural Network with NumPy — notes from Baidu's free course
This section uses the Python language and the NumPy library to build a neural network model, showing readers the basic concepts and workflow of neural networks.
Basic steps for building a neural network / deep learning model
As introduced earlier, deep learning models for different application scenarios share a common structure: all of them can be built and trained through the five steps below.
- Data processing: read data from local files or a network address and preprocess it, e.g. validate its correctness.
- Model design: design the network structure (model element 1), which amounts to the model's hypothesis space — the set of relationships the model can express.
- Training configuration: choose the model's solver (model element 2), i.e. the optimizer, and specify the compute resources.
- Training loop: run the training iterations; each round consists of three steps: forward computation, the loss function (the optimization objective, model element 3), and backpropagation.
- Model saving: save the trained model so it can be loaded for prediction.
Below we follow these same five steps to write a Boston housing price model in Python.
It is precisely because this modeling-and-training process is generic — different models differ only in the three model elements while the other parts of the five steps stay the same — that deep learning frameworks are so useful.
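As a sketch, the five steps can be condensed into a minimal NumPy training skeleton. The toy data and names here are illustrative placeholders, not the exact code developed later in this section:

```python
import numpy as np

# 1. Data processing: fabricate a tiny dataset (stands in for file loading)
np.random.seed(0)
X = np.random.rand(100, 3)
y = X @ np.array([[1.0], [2.0], [3.0]]) + 0.5

# 2. Model design: a linear hypothesis z = Xw + b
w = np.zeros((3, 1))
b = 0.0

# 3. Training configuration: pick the optimizer settings
eta, iterations = 0.5, 500  # learning rate and number of steps

# 4. Training loop: forward pass, loss gradients, parameter update
for _ in range(iterations):
    z = X @ w + b                                 # forward computation
    grad_w = ((z - y) * X).mean(axis=0)[:, None]  # dL/dw for L = mean((z-y)^2)/2
    grad_b = float((z - y).mean())                # dL/db
    w -= eta * grad_w
    b -= eta * grad_b

# 5. Model saving: keep the learned parameters for later prediction
params = {'w': w, 'b': b}  # in practice: np.save / pickle to disk
print(((X @ w + b - y) ** 2).mean())  # final mean squared error, near 0
```

The rest of this section fills in each step concretely for the Boston housing data.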
Boston housing price prediction
Boston housing price prediction is a classic machine learning problem, something like the "Hello World" of the field. Housing prices in the Boston area are influenced by many factors; this dataset records 13 factors that may affect the price together with the median price of that type of house, and the goal is to build a model that predicts the price from those 13 factors. Prediction problems are divided into regression tasks and classification tasks according to whether the output is a continuous real value or a discrete label. Since the price is a continuous value, price prediction is clearly a regression task. Below we tackle it with the simplest linear regression model, implemented as a neural network.
Linear regression model
Assume the price and the influencing factors can be described by a linear relationship (similar to the Newton's-second-law example): $y = \sum_{j=1}^{13} x_j w_j + b$.
Solving the model means fitting each $w_j$ and $b$ from the data, where the $w_j$ and $b$ are the weights and bias of the linear model. In the one-dimensional case, $w$ and $b$ are the slope and intercept of a straight line.
Data processing
Before building the model, let us load the data and inspect it. The housing data is stored in the local file housing.data and can be loaded and viewed with the following code.
# Import the required packages
import numpy as np
import json
# Load the training data
datafile = './home/housing.data'
data = np.fromfile(datafile, sep=' ')
data
array([6.320e-03, 1.800e+01, 2.310e+00, ..., 3.969e+02, 7.880e+00,
1.190e+01])
Because the raw data is read in as a 1-D array, all values run together. We therefore reshape it into a 2-D matrix in which each row is one sample (14 values): 13 X values (features affecting the price) and one Y value (the median price of that house type).
# After loading, the data is a 1-D array in which
# items 0-13 are the first record, items 14-27 the second, ...
# Reshape the raw data into an N x 14 matrix
feature_names = [ 'CRIM', 'ZN', 'INDUS', 'CHAS', 'NOX', 'RM', 'AGE','DIS',
'RAD', 'TAX', 'PTRATIO', 'B', 'LSTAT', 'MEDV' ]
feature_num = len(feature_names)
data = data.reshape([data.shape[0] // feature_num, feature_num])
# " / " is float division and returns a float;
# " // " is floor division and returns the largest integer not greater than the exact result
# Inspect the data
x = data[0]
print(x.shape)
print(x)
(14,)
[6.320e-03 1.800e+01 2.310e+00 0.000e+00 5.380e-01 6.575e+00 6.520e+01
4.090e+00 1.000e+00 2.960e+02 1.530e+01 3.969e+02 4.980e+00 2.400e+01]
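As the comments in the code above note, `/` and `//` behave differently; a quick check (the flat array has 506 × 14 = 7084 entries, so floor division recovers the row count):

```python
print(7084 / 14)   # float division: 506.0
print(7084 // 14)  # floor division: 506
print(-7 // 2)     # floors toward negative infinity: -4
```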
We take 80% of the data as the training set and hold out 20% for testing the model's predictions (the gap between the trained model's predictions and the actual prices). Printing the training set's shape shows 404 samples, each with 13 features and 1 target value.
print(data.shape)
print(data.shape[0])
print(data.shape[1])
print(type(data))
(506, 14)
506
14
<class 'numpy.ndarray'>
ratio = 0.8
offset = int(data.shape[0] * ratio)
training_data = data[:offset]
training_data.shape
(404, 14)
Normalize each feature so that all features take values on a comparable scale (here, each feature is mean-centered and divided by its range). This has two benefits:
- Model training is more efficient.
- The magnitude of each feature's weight then reflects that variable's contribution to the prediction (because all feature values share the same range).
# Compute the maximum, minimum, and mean of the training set
maximums, minimums, avgs = \
training_data.max(axis=0), \
training_data.min(axis=0), \
training_data.sum(axis=0) / training_data.shape[0]
# Normalize the data
for i in range(feature_num):
#print(maximums[i], minimums[i], avgs[i])
data[:, i] = (data[:, i] - avgs[i]) / (maximums[i] - minimums[i])
Now combine the data-processing operations above into a load_data function and verify that it works.
def load_data():
    # Load the data from file
    datafile = './home/housing.data'
    data = np.fromfile(datafile, sep=' ')
    # Each record has 14 items: the first 13 are influencing factors,
    # the 14th is the median price of that house type
    feature_names = [ 'CRIM', 'ZN', 'INDUS', 'CHAS', 'NOX', 'RM', 'AGE', \
                      'DIS', 'RAD', 'TAX', 'PTRATIO', 'B', 'LSTAT', 'MEDV' ]
    feature_num = len(feature_names)
    # Reshape the raw data into shape [N, 14]
    data = data.reshape([data.shape[0] // feature_num, feature_num])
    # Split the dataset into a training set and a test set
    # Here 80% of the data is used for training and 20% for testing
    # The two sets must not overlap
    ratio = 0.8
    offset = int(data.shape[0] * ratio)
    training_data = data[:offset]
    # Compute the maximum, minimum, and mean of the training set
    maximums, minimums, avgs = training_data.max(axis=0), training_data.min(axis=0), \
        training_data.sum(axis=0) / training_data.shape[0]
    # Normalize the data (using training-set statistics only)
    for i in range(feature_num):
        data[:, i] = (data[:, i] - avgs[i]) / (maximums[i] - minimums[i])
    # Split into training and test sets
    training_data = data[:offset]
    test_data = data[offset:]
    return training_data, test_data
# Load the data
training_data, test_data = load_data()
x = training_data[:, :-1]
y = training_data[:, -1:]
# Inspect the data
print(x[0])
print(y[0])
[-0.02146321 0.03767327 -0.28552309 -0.08663366 0.01289726 0.04634817
0.00795597 -0.00765794 -0.25172191 -0.11881188 -0.29002528 0.0519112
-0.17590923]
[-0.00390539]
If we represent the input features and the predicted output as vectors, the input x has 13 components and y has 1 component, so the weight parameter should have shape [13, 1]. Suppose we initialize the parameters with the following arbitrary values:
w = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, -0.1, -0.2, -0.3, -0.4, 0.0]
w = np.array(w).reshape([13, 1])
Take the first sample and look at the result of multiplying its feature vector by the parameter vector.
x1=x[0]
t = np.dot(x1, w)
print(t)
[0.03395597]
In addition, the complete linear regression formula needs an initial bias term b, which we likewise set arbitrarily, to -0.2.
The complete output of the linear regression model is then z = t + b. This process of computing the output value from the features and parameters is called "forward computation".
b = -0.2
z = t + b
print(z)
[-0.16604403]
Building the neural network
Describing the prediction computation above in terms of "classes and objects" leads to the implementation below. The class has member variables for the parameters w and b, and a forward function (for "forward computation") that carries out the computation from features and parameters to the predicted output.
class Network(object):
    def __init__(self, num_of_weights):
        # Initialize w randomly
        # A fixed random seed keeps results reproducible
        # across runs
        np.random.seed(0)
        self.w = np.random.randn(num_of_weights, 1)
        self.b = 0.
    def forward(self, x):
        z = np.dot(x, self.w) + self.b
        return z
With the Network class defined, the model's computation can be carried out as follows.
net = Network(13)
x1 = x[0]
y1 = y[0]
z = net.forward(x1)
print(z)
[-0.63182506]
The model says the price corresponding to the factors in x1 should be z, but the actual data tells us the price is y1. We therefore need some metric of the gap between predicted and true values. For regression problems the most common choice is the mean squared error, defined as
$$L = \frac{1}{N}\sum_{i=1}^{N}(y_i - z_i)^2$$
The quantity $L$ above is usually called the loss function; it measures how good the model is. Mean squared error is a common form for regression problems, while classification problems usually use the cross-entropy loss, covered in more detail in later chapters.
The code to compute the loss for a single sample is:
Loss = (y1 - z)*(y1 - z)
print(Loss)
[0.39428312]
Because the loss must account for every sample, we sum the per-sample losses and divide by the number of samples.
Adjusting the code accordingly, we add the loss computation to the Network class as follows.
class Network(object):
    def __init__(self, num_of_weights):
        # Initialize w randomly
        # A fixed random seed keeps results reproducible across runs
        np.random.seed(0)
        self.w = np.random.randn(num_of_weights, 1)
        self.b = 0.
    def forward(self, x):
        z = np.dot(x, self.w) + self.b
        return z
    def loss(self, z, y):
        error = z - y
        cost = error * error
        cost = np.mean(cost)
        return cost
With this Network class, predictions and the loss are easy to compute.
Note that the variables x, w, b, z, and error in the class are all arrays. Take x for example: it has two dimensions, one for the number of features (13) and one for the number of samples (demonstrated below).
net = Network(13)
# Predictions and the loss can be computed for several samples at once
x1 = x[0:3]
y1 = y[0:3]
z = net.forward(x1)
print('predict: ', z)
loss = net.loss(z, y1)
print('loss:', loss)
predict: [[-0.63182506]
[-0.55793096]
[-1.00062009]]
loss: 0.7229825055441156
Training the neural network
The process above shows how to build the network and use it to compute predictions and the loss. Next we solve for the values of the parameters w and b; this process is called model training. The goal of training is to make the loss function as small as possible, i.e. to find parameters w and b at which the loss attains a minimum.
Minimizing the loss function
From basic calculus, a function's derivative is 0 at an extremum. So the w and b that minimize the loss should solve the system
$$\frac{\partial L}{\partial w_j} = 0 \ (j = 0, \dots, 12), \qquad \frac{\partial L}{\partial b} = 0.$$
Substituting the sample data into this system would indeed yield w and b, but this approach only works for simple cases like linear regression. If the model contains nonlinear transformations, or the loss is not a simple form like mean squared error, the system is very hard to solve this way. To avoid this, we next introduce a more universally applicable numerical method.
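For reference, in this purely linear case the system can indeed be solved in closed form via the normal equations, e.g. with numpy's least-squares solver. The toy data below is illustrative, not the housing set:

```python
import numpy as np

np.random.seed(1)
# Toy data with an exact linear relation: y = 2*x0 - 1*x1 + 0.3
X = np.random.rand(5, 2)
y = X @ np.array([2.0, -1.0]) + 0.3

# Append a column of ones so the bias is solved together with the weights;
# lstsq solves min ||Xa w - y||^2, i.e. the normal equations, stably
Xa = np.hstack([X, np.ones((5, 1))])
w_hat, *_ = np.linalg.lstsq(Xa, y, rcond=None)
print(w_hat)  # recovers [2, -1, 0.3] on this noise-free data
```

As the text says, this shortcut exists only because the model is linear and the loss quadratic; gradient descent, introduced next, needs neither assumption.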
Gradient descent
The key to training is finding a set of (w, b) that minimizes the loss. Let us first look at the simple case where the loss varies with only two parameters, to get some intuition about the search.
Here we fix all components of w other than w5 and w9, so we can plot L as a function of (w5, w9).
net = Network(13)
losses = []
# Plot only the part of the surface with w5 and w9 in [-160, 160],
# which already contains the minimum of the loss
w5 = np.arange(-160.0, 160.0, 1.0)
w9 = np.arange(-160.0, 160.0, 1.0)
losses = np.zeros([len(w5), len(w9)])
# Compute the loss for every parameter setting in the region
for i in range(len(w5)):
    for j in range(len(w9)):
        net.w[5] = w5[i]
        net.w[9] = w9[j]
        z = net.forward(x)
        loss = net.loss(z, y)
        losses[i, j] = loss
# Make a 3D plot of the loss over the two variables
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
fig = plt.figure()
ax = Axes3D(fig)
w5, w9 = np.meshgrid(w5, w9)
ax.plot_surface(w5, w9, losses, rstride=1, cstride=1, cmap='rainbow')
plt.show()
(Figure: 3D surface of the loss as a function of w5 and w9; original image output_35_0.png)
The simple case: only two parameters, w5 and w9
For this simple case, the program above plots the loss surface over the two parameters in 3D, and the figure shows regions whose values are clearly lower than their surroundings. Why choose w5 and w9 for the plot? Because with these two parameters, the existence of a minimum is easy to see on the loss surface; for other parameter pairs, the minimum is less visually obvious.
As mentioned above, directly solving the derivative equations is hard in most cases. The underlying reason is that such equations are usually easy to evaluate forward (given X, compute Y) but hard to invert (given Y, recover X). Equations with this property are common in cryptography, and behave like an everyday lock: given a key, the lock easily checks whether it fits; given only the lock, reconstructing the key's shape is hard.
The situation closely resembles a blind hiker trying to walk from a peak down into a valley. He cannot see where the valley is (cannot solve in reverse for the parameters where the derivative of the Loss is 0), but he can probe the slope around him with his feet (the derivative at the current point, also called the gradient). The minimum of the Loss can therefore be found by "starting from the current parameter values and stepping downhill, one step at a time, until reaching the bottom". One might call this the "blind-downhill method" — or, more formally, "gradient descent".
We now want a set of (w, b) that minimizes the loss. The gradient descent scheme is:
- Randomly pick an initial point $(w_0, b_0)$.
- Pick the next point $(w_1, b_1)$ such that $L(w_1, b_1) < L(w_0, b_0)$.
- Repeat step 2 until the loss barely decreases any more.
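The three steps above can be illustrated on a toy one-dimensional loss $L(w) = (w - 3)^2$, whose minimum sits at $w = 3$ (this stand-in function is for illustration only, not the housing loss):

```python
# Gradient descent on L(w) = (w - 3)^2; dL/dw = 2*(w - 3)
w = 10.0                # step 1: an arbitrary starting point
eta = 0.1               # step size
for i in range(100):    # step 3: repeat until the loss barely decreases
    grad = 2 * (w - 3)  # slope at the current point
    w = w - eta * grad  # step 2: move against the gradient
print(w)  # approaches 3.0, the minimizer
```

The step size eta plays the same role as in the Network code later: too large and the steps overshoot, too small and convergence is slow.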
Figure 1-2-1: schematic of the gradient descent direction
Choosing the next point $(w_1, b_1)$ is crucial: first, the loss must actually decrease; second, it should decrease as fast as possible. Basic calculus tells us that the direction opposite to the gradient is the direction in which the function value decreases fastest. As the figure shows, the gradient at the current point is the direction of the arrow; moving a small step along the arrow, we can observe how the loss changes.
At the starting point shown, the gradient can be computed, and the loss there is around 1300.
Computing the gradient
We described how to compute the loss above; here we rewrite it slightly by introducing a factor of 1/2:
$$L = \frac{1}{2N}\sum_{i=1}^{N}(z_i - y_i)^2$$
where $z_i$ is the network's prediction for the $i$-th sample:
$$z_i = \sum_{j=0}^{12} x_i^j \cdot w_j + b$$
The partial derivatives of $L$ with respect to $w$ and $b$ are
$$\frac{\partial L}{\partial w_j} = \frac{1}{N}\sum_{i=1}^{N}(z_i - y_i)\, x_i^j, \qquad \frac{\partial L}{\partial b} = \frac{1}{N}\sum_{i=1}^{N}(z_i - y_i)$$
As the derivation shows, the factor 1/2 cancels: differentiating the square produces a factor of 2, which is exactly why we rewrote the loss this way.
Since what we want are $\partial L/\partial w$ and $\partial L/\partial b$, we can define a gradient function in the Network class from the formulas above.
With numpy's matrix operations we can compute the gradients of all 13 parameters at once.
First consider the case of a single sample, where the formula reduces to $\frac{\partial L}{\partial w_j} = (z - y)\, x^j$.
The data and shape of each variable can be inspected with the program below.
x1 = x[0]
y1 = y[0]
z1 = net.forward(x1)
print('x1 {}, shape {}'.format(x1, x1.shape))
print('y1 {}, shape {}'.format(y1, y1.shape))
print('z1 {}, shape {}'.format(z1, z1.shape))
x1 [-0.02146321 0.03767327 -0.28552309 -0.08663366 0.01289726 0.04634817
0.00795597 -0.00765794 -0.25172191 -0.11881188 -0.29002528 0.0519112
-0.17590923], shape (13,)
y1 [-0.00390539], shape (1,)
z1 [-12.05947643], shape (1,)
With a single sample, the formula above gives the gradient of any particular $w_j$, for example $w_0$:
gradient_w0 = (z1 - y1) * x1[0]
print('gradient_w0 {}'.format(gradient_w0))
gradient_w0 [0.25875126]
Similarly we can compute the gradient of $w_1$:
gradient_w1 = (z1 - y1) * x1[1]
print('gradient_w1 {}'.format(gradient_w1))
gradient_w1 [-0.45417275]
And likewise the gradient of $w_2$:
gradient_w2 = (z1 - y1) * x1[2]
print('gradient_w2 {}'.format(gradient_w2))
gradient_w2 [3.44214394]
The attentive reader will have noticed that a for loop over $w_0$ through $w_{12}$ would compute all the weight gradients; this is left as an exercise.
Numpy offers a simpler way: matrix operations. Writing (z1 - y1) * x1 directly in the gradient code yields a 13-dimensional vector whose components are the per-dimension gradients. Numpy's broadcasting (computing on vectors and matrices as if they were single scalars) is exactly why we use it.
gradient_w = (z1 - y1) * x1
print('gradient_w_by_sample1 {}, gradient.shape {}'.format(gradient_w, gradient_w.shape))
gradient_w_by_sample1 [ 0.25875126 -0.45417275 3.44214394 1.04441828 -0.15548386 -0.55875363
-0.09591377 0.09232085 3.03465138 1.43234507 3.49642036 -0.62581917
2.12068622], gradient.shape (13,)
Returning to the gradient formula above: the input contains multiple samples, each of which contributes to the gradient. The code above computed the gradient from sample 1 alone; the same computation gives the contributions of samples 2 and 3.
x2 = x[1]
y2 = y[1]
z2 = net.forward(x2)
gradient_w = (z2 - y2) * x2
print('gradient_w_by_sample2 {}, gradient.shape {}'.format(gradient_w, gradient_w.shape))
gradient_w_by_sample2 [ 0.7329239 4.91417754 3.33394253 2.9912385 4.45673435 -0.58146277
-5.14623287 -2.4894594 7.19011988 7.99471607 0.83100061 -1.79236081
2.11028056], gradient.shape (13,)
x3 = x[2]
y3 = y[2]
z3 = net.forward(x3)
gradient_w = (z3 - y3) * x3
print('gradient_w_by_sample3 {}, gradient.shape {}'.format(gradient_w, gradient_w.shape))
gradient_w_by_sample3 [ 0.25138584 1.68549775 1.14349809 1.02595515 1.5286008 -1.93302947
0.4058236 -0.85385157 2.46611579 2.74208162 0.28502219 -0.46695229
2.39363651], gradient.shape (13,)
Some readers may again think of using a for loop to compute every sample's contribution and then averaging.
But we do not need to: numpy's matrix operations simplify the computation, e.g. for the three-sample case.
# Note: this takes the first 3 samples at once, not the 3rd sample
x3samples = x[0:3]
y3samples = y[0:3]
z3samples = net.forward(x3samples)
print('x {}, shape {}'.format(x3samples, x3samples.shape))
print('y {}, shape {}'.format(y3samples, y3samples.shape))
print('z {}, shape {}'.format(z3samples, z3samples.shape))
x [[-0.02146321 0.03767327 -0.28552309 -0.08663366 0.01289726 0.04634817
0.00795597 -0.00765794 -0.25172191 -0.11881188 -0.29002528 0.0519112
-0.17590923]
[-0.02122729 -0.14232673 -0.09655922 -0.08663366 -0.12907805 0.0168406
0.14904763 0.0721009 -0.20824365 -0.23154675 -0.02406783 0.0519112
-0.06111894]
[-0.02122751 -0.14232673 -0.09655922 -0.08663366 -0.12907805 0.1632288
-0.03426854 0.0721009 -0.20824365 -0.23154675 -0.02406783 0.03943037
-0.20212336]], shape (3, 13)
y [[-0.00390539]
[-0.05723872]
[ 0.23387239]], shape (3, 1)
z [[-12.05947643]
[-34.58467747]
[-11.60858134]], shape (3, 1)
The first dimension of x3samples, y3samples, and z3samples above is 3, i.e. 3 samples. Below we compute these 3 samples' contributions to the gradient.
gradient_w = (z3samples - y3samples) * x3samples
print('gradient_w {}, gradient.shape {}'.format(gradient_w, gradient_w.shape))
gradient_w [[ 0.25875126 -0.45417275 3.44214394 1.04441828 -0.15548386 -0.55875363
-0.09591377 0.09232085 3.03465138 1.43234507 3.49642036 -0.62581917
2.12068622]
[ 0.7329239 4.91417754 3.33394253 2.9912385 4.45673435 -0.58146277
-5.14623287 -2.4894594 7.19011988 7.99471607 0.83100061 -1.79236081
2.11028056]
[ 0.25138584 1.68549775 1.14349809 1.02595515 1.5286008 -1.93302947
0.4058236 -0.85385157 2.46611579 2.74208162 0.28502219 -0.46695229
2.39363651]], gradient.shape (3, 13)
Here the computed gradient_w has shape (3, 13): its 1st row matches gradient_w_by_sample1 computed from sample 1 above, its 2nd row matches gradient_w_by_sample2 from sample 2, and its 3rd row matches gradient_w_by_sample3 from sample 3. Matrix operations make it convenient to compute each of the 3 samples' gradient contributions at once.
For the general case of N samples, then, we can compute every sample's contribution directly as follows — this is the convenience of numpy's broadcasting.
z = net.forward(x)
gradient_w = (z - y) * x
print('gradient_w shape {}'.format(gradient_w.shape))
print(gradient_w)
gradient_w shape (404, 13)
[[ 0.25875126 -0.45417275 3.44214394 ... 3.49642036 -0.62581917
2.12068622]
[ 0.7329239 4.91417754 3.33394253 ... 0.83100061 -1.79236081
2.11028056]
[ 0.25138584 1.68549775 1.14349809 ... 0.28502219 -0.46695229
2.39363651]
...
[ 14.70025543 -15.10890735 36.23258734 ... 24.54882966 5.51071122
26.26098922]
[ 9.29832217 -15.33146159 36.76629344 ... 24.91043398 -1.27564923
26.61808955]
[ 19.55115919 -10.8177237 25.94192351 ... 17.5765494 3.94557661
17.64891012]]
Each row of gradient_w above is one sample's contribution to the gradient. By the gradient formula, the total gradient is the average of the per-sample contributions.
We can use numpy's mean function to do this:
# axis=0 averages over the sample dimension, i.e. down each column
gradient_w = np.mean(gradient_w, axis=0)
print('gradient_w ', gradient_w.shape)
print('w ', net.w.shape)
print(gradient_w)
print(net.w)
gradient_w (13,)
w (13, 1)
[ 1.59697064 -0.92928123 4.72726926 1.65712204 4.96176389 1.18068454
4.55846519 -3.37770889 9.57465893 10.29870662 1.3900257 -0.30152215
1.09276043]
[[ 1.76405235e+00]
[ 4.00157208e-01]
[ 9.78737984e-01]
[ 2.24089320e+00]
[ 1.86755799e+00]
[ 1.59000000e+02]
[ 9.50088418e-01]
[-1.51357208e-01]
[-1.03218852e-01]
[ 1.59000000e+02]
[ 1.44043571e-01]
[ 1.45427351e+00]
[ 7.61037725e-01]]
Numpy's matrix operations computed the gradient conveniently, but introduced a problem: gradient_w has shape (13,) while w has shape (13, 1). The cause is that np.mean eliminated dimension 0. For convenient elementwise arithmetic, gradient_w and w must have the same shape, so we reshape gradient_w to (13, 1) as well:
gradient_w = gradient_w[:, np.newaxis]
print('gradient_w shape', gradient_w.shape)
gradient_w shape (13, 1)
Putting the discussion together, the gradient computation code is:
z = net.forward(x)
gradient_w = (z - y) * x
gradient_w = np.mean(gradient_w, axis=0)
gradient_w = gradient_w[:, np.newaxis]
gradient_w
array([[ 1.59697064],
[-0.92928123],
[ 4.72726926],
[ 1.65712204],
[ 4.96176389],
[ 1.18068454],
[ 4.55846519],
[-3.37770889],
[ 9.57465893],
[10.29870662],
[ 1.3900257 ],
[-0.30152215],
[ 1.09276043]])
The code above computes the gradient of w very concisely. The gradient of b is computed on the same principle.
gradient_b = (z - y)
gradient_b = np.mean(gradient_b)
# b is a single number here, so np.mean directly yields a scalar
gradient_b
-1.0918438870293816e-13
We now write the computation of the w and b gradients above as a gradient function of the Network class, as shown below.
class Network(object):
    def __init__(self, num_of_weights):
        # Initialize w randomly
        # A fixed random seed keeps results reproducible across runs
        np.random.seed(0)
        self.w = np.random.randn(num_of_weights, 1)
        self.b = 0.
    def forward(self, x):
        z = np.dot(x, self.w) + self.b
        return z
    def loss(self, z, y):
        error = z - y
        num_samples = error.shape[0]
        cost = error * error
        cost = np.sum(cost) / num_samples
        return cost
    def gradient(self, x, y):
        z = self.forward(x)
        gradient_w = (z - y) * x
        gradient_w = np.mean(gradient_w, axis=0)
        gradient_w = gradient_w[:, np.newaxis]
        gradient_b = (z - y)
        gradient_b = np.mean(gradient_b)
        return gradient_w, gradient_b
# Call the gradient function defined above to compute the gradient
# Initialize the network
net = Network(13)
# Set [w5, w9] = [-100., -100.]
net.w[5] = -100.0
net.w[9] = -100.0
z = net.forward(x)
loss = net.loss(z, y)
gradient_w, gradient_b = net.gradient(x, y)
gradient_w5 = gradient_w[5][0]
gradient_w9 = gradient_w[9][0]
print('point {}, loss {}'.format([net.w[5][0], net.w[9][0]], loss))
print('gradient {}'.format([gradient_w5, gradient_w9]))
point [-100.0, -100.0], loss 686.300500817916
gradient [-0.850073323995813, -6.138412364807848]
Finding a point with a smaller loss
Now let us look at how to update the parameters using the gradient: first move a small step against the gradient direction to the next point P1 and observe how the loss changes.
# In the [w5, w9] plane, move against the gradient to the next point P1
# Define the step size eta
eta = 0.1
# Update parameters w5 and w9
net.w[5] = net.w[5] - eta * gradient_w5
net.w[9] = net.w[9] - eta * gradient_w9
# Recompute z and the loss
z = net.forward(x)
loss = net.loss(z, y)
gradient_w, gradient_b = net.gradient(x, y)
gradient_w5 = gradient_w[5][0]
gradient_w9 = gradient_w[9][0]
print('point {}, loss {}'.format([net.w[5][0], net.w[9][0]], loss))
print('gradient {}'.format([gradient_w5, gradient_w9]))
point [-99.91499266760042, -99.38615876351922], loss 678.6472185028844
gradient [-0.855635617864529, -6.093226863406581]
Running the code shows that one small step against the gradient does reduce the loss at the next point.
- Readers can re-run the code block above repeatedly and watch whether the loss keeps shrinking.
The iterative computation above is encapsulated in train and update functions, as shown below.
class Network(object):
    def __init__(self, num_of_weights):
        # Initialize w randomly
        # A fixed random seed keeps results reproducible across runs
        np.random.seed(0)
        self.w = np.random.randn(num_of_weights, 1)
        self.w[5] = -100.
        self.w[9] = -100.
        self.b = 0.
    def forward(self, x):
        z = np.dot(x, self.w) + self.b
        return z
    def loss(self, z, y):
        error = z - y
        num_samples = error.shape[0]
        cost = error * error
        cost = np.sum(cost) / num_samples
        return cost
    def gradient(self, x, y):
        z = self.forward(x)
        gradient_w = (z - y) * x
        gradient_w = np.mean(gradient_w, axis=0)
        gradient_w = gradient_w[:, np.newaxis]
        gradient_b = (z - y)
        gradient_b = np.mean(gradient_b)
        return gradient_w, gradient_b
    def update(self, gradient_w5, gradient_w9, eta=0.01):
        self.w[5] = self.w[5] - eta * gradient_w5
        self.w[9] = self.w[9] - eta * gradient_w9
    def train(self, x, y, iterations=100, eta=0.01):
        points = []
        losses = []
        for i in range(iterations):
            points.append([self.w[5][0], self.w[9][0]])
            z = self.forward(x)
            L = self.loss(z, y)
            gradient_w, gradient_b = self.gradient(x, y)
            gradient_w5 = gradient_w[5][0]
            gradient_w9 = gradient_w[9][0]
            self.update(gradient_w5, gradient_w9, eta)
            losses.append(L)
            if i % 50 == 0:
                print('iter {}, point {}, loss {}'.format(i, [self.w[5][0], self.w[9][0]], L))
        return points, losses
# Load the data
train_data, test_data = load_data()
x = train_data[:, :-1]
y = train_data[:, -1:]
# Create the network
net = Network(13)
num_iterations=2000
# Start training
points, losses = net.train(x, y, iterations=num_iterations, eta=0.01)
# Plot the loss curve
plot_x = np.arange(num_iterations)
plot_y = np.array(losses)
plt.plot(plot_x, plot_y)
plt.show()
iter 0, point [-99.99144364382136, -99.93861587635192], loss 686.300500817916
iter 50, point [-99.56362583488914, -96.92631128470325], loss 649.2213468309388
iter 100, point [-99.13580802595692, -94.02279509580971], loss 614.6970095624063
iter 150, point [-98.7079902170247, -91.22404911807594], loss 582.543755023494
iter 200, point [-98.28017240809248, -88.52620357520894], loss 552.5911329872217
iter 250, point [-97.85235459916026, -85.9255316243737], loss 524.6810152322887
iter 300, point [-97.42453679022805, -83.41844407682491], loss 498.6667034691001
iter 350, point [-96.99671898129583, -81.00148431353688], loss 474.4121018974464
iter 400, point [-96.56890117236361, -78.67132338862874], loss 451.7909497114133
iter 450, point [-96.14108336343139, -76.42475531364933], loss 430.6861092067028
iter 500, point [-95.71326555449917, -74.25869251604028], loss 410.988905460488
iter 550, point [-95.28544774556696, -72.17016146534513], loss 392.5985138460825
iter 600, point [-94.85762993663474, -70.15629846096763], loss 375.4213919156372
iter 650, point [-94.42981212770252, -68.21434557551346], loss 359.3707524354014
iter 700, point [-94.0019943187703, -66.34164674796719], loss 344.36607459115214
iter 750, point [-93.57417650983808, -64.53564402117185], loss 330.33265059761464
iter 800, point [-93.14635870090586, -62.793873918279786], loss 317.2011651461846
iter 850, point [-92.71854089197365, -61.11396395304264], loss 304.907305311265
iter 900, point [-92.29072308304143, -59.49362926899678], loss 293.3913987080144
iter 950, point [-91.86290527410921, -57.930669402782904], loss 282.5980778542974
iter 1000, point [-91.43508746517699, -56.4229651670156], loss 272.47596883802515
iter 1050, point [-91.00726965624477, -54.968475648286564], loss 262.9774025287022
iter 1100, point [-90.57945184731255, -53.56523531604897], loss 254.05814669965383
iter 1150, point [-90.15163403838034, -52.21135123828792], loss 245.6771575458149
iter 1200, point [-89.72381622944812, -50.90500040003218], loss 237.796349191773
iter 1250, point [-89.2959984205159, -49.6444271209092], loss 230.3803798866218
iter 1300, point [-88.86818061158368, -48.42794056808474], loss 223.39645367664923
iter 1350, point [-88.44036280265146, -47.2539123610643], loss 216.81413643451378
iter 1400, point [-88.01254499371925, -46.12077426496303], loss 210.60518520483126
iter 1450, point [-87.58472718478703, -45.027015968976976], loss 204.74338990147896
iter 1500, point [-87.15690937585481, -43.9711829469081], loss 199.20442646183585
iter 1550, point [-86.72909156692259, -42.95187439671279], loss 193.96572062803054
iter 1600, point [-86.30127375799037, -41.96774125615467], loss 189.00632158541163
iter 1650, point [-85.87345594905815, -41.017484291751295], loss 184.30678474424633
iter 1700, point [-85.44563814012594, -40.0998522583068], loss 179.84906300239203
iter 1750, point [-85.01782033119372, -39.21364012642417], loss 175.61640587468244
iter 1800, point [-84.5900025222615, -38.35768737548557], loss 171.59326591927962
iter 1850, point [-84.16218471332928, -37.530876349682856], loss 167.76521193253296
iter 1900, point [-83.73436690439706, -36.73213067476985], loss 164.11884842217904
iter 1950, point [-83.30654909546485, -35.96041373329276], loss 160.64174090423475
(Figure: loss curve over 2000 iterations of two-parameter training; original image output_70_1.png)
Computing gradients and updating all parameters
To give the reader an intuitive feel, the gradient descent demonstration above covered only the two parameters w5 and w9. The full house-price model must solve for all parameters w and b, which requires modifying Network's update and train functions. Since the computation is no longer restricted to selected parameters (all parameters participate), the modified code is actually simpler.
class Network(object):
    def __init__(self, num_of_weights):
        # Initialize w randomly
        # A fixed random seed keeps results reproducible across runs
        np.random.seed(0)
        self.w = np.random.randn(num_of_weights, 1)
        self.b = 0.
    def forward(self, x):
        z = np.dot(x, self.w) + self.b
        return z
    def loss(self, z, y):
        error = z - y
        num_samples = error.shape[0]
        cost = error * error
        cost = np.sum(cost) / num_samples
        return cost
    def gradient(self, x, y):
        z = self.forward(x)
        gradient_w = (z - y) * x
        gradient_w = np.mean(gradient_w, axis=0)
        gradient_w = gradient_w[:, np.newaxis]
        gradient_b = (z - y)
        gradient_b = np.mean(gradient_b)
        return gradient_w, gradient_b
    def update(self, gradient_w, gradient_b, eta=0.01):
        self.w = self.w - eta * gradient_w
        self.b = self.b - eta * gradient_b
    def train(self, x, y, iterations=100, eta=0.01):
        losses = []
        for i in range(iterations):
            z = self.forward(x)
            L = self.loss(z, y)
            gradient_w, gradient_b = self.gradient(x, y)
            self.update(gradient_w, gradient_b, eta)
            losses.append(L)
            if (i+1) % 10 == 0:
                print('iter {}, loss {}'.format(i, L))
        return losses
# Load the data
train_data, test_data = load_data()
x = train_data[:, :-1]
y = train_data[:, -1:]
# Create the network
net = Network(13)
num_iterations=1000
# Start training
losses = net.train(x,y, iterations=num_iterations, eta=0.01)
# Plot the loss curve
plot_x = np.arange(num_iterations)
plot_y = np.array(losses)
plt.plot(plot_x, plot_y)
plt.show()
iter 9, loss 1.898494731457622
iter 19, loss 1.8031783384598723
iter 29, loss 1.7135517565541092
iter 39, loss 1.6292649416831266
iter 49, loss 1.5499895293373234
iter 59, loss 1.4754174896452612
iter 69, loss 1.4052598659324693
iter 79, loss 1.3392455915676866
iter 89, loss 1.2771203802372915
iter 99, loss 1.218645685090292
iter 109, loss 1.1635977224791534
iter 119, loss 1.111766556287068
iter 129, loss 1.0629552390811503
iter 139, loss 1.0169790065644477
iter 149, loss 0.9736645220185994
iter 159, loss 0.9328491676343147
iter 169, loss 0.8943803798194311
iter 179, loss 0.8581150257549611
iter 189, loss 0.8239188186389671
iter 199, loss 0.7916657692169988
iter 209, loss 0.761237671346902
iter 219, loss 0.7325236194855752
iter 229, loss 0.7054195561163928
iter 239, loss 0.6798278472589763
iter 249, loss 0.6556568843183528
iter 259, loss 0.6328207106387195
iter 269, loss 0.6112386712285091
iter 279, loss 0.59083508421862
iter 289, loss 0.5715389327049418
iter 299, loss 0.5532835757100347
iter 309, loss 0.5360064770773407
iter 319, loss 0.5196489511849665
iter 329, loss 0.5041559244351539
iter 339, loss 0.48947571154034963
iter 349, loss 0.47555980568755696
iter 359, loss 0.46236268171965056
iter 369, loss 0.44984161152579916
iter 379, loss 0.43795649088328303
iter 389, loss 0.42666967704002257
iter 399, loss 0.41594583637124666
iter 409, loss 0.4057518014851036
iter 419, loss 0.3960564371908221
iter 429, loss 0.38683051477942226
iter 439, loss 0.3780465941011246
iter 449, loss 0.3696789129556087
iter 459, loss 0.36170328334131785
iter 469, loss 0.3540969941381648
iter 479, loss 0.3468387198244131
iter 489, loss 0.3399084348532937
iter 499, loss 0.33328733333814486
iter 509, loss 0.32695775371667785
iter 519, loss 0.32090310808539985
iter 529, loss 0.31510781591441284
iter 539, loss 0.30955724187078903
iter 549, loss 0.3042376374955925
iter 559, loss 0.29913608649543905
iter 569, loss 0.29424045342432864
iter 579, loss 0.2895393355454012
iter 589, loss 0.28502201767532415
iter 599, loss 0.28067842982626157
iter 609, loss 0.27649910747186535
iter 619, loss 0.2724751542744919
iter 629, loss 0.2685982071209627
iter 639, loss 0.26486040332365085
iter 649, loss 0.2612543498525749
iter 659, loss 0.2577730944725093
iter 669, loss 0.2544100986669443
iter 679, loss 0.2511592122380609
iter 689, loss 0.2480146494787638
iter 699, loss 0.24497096681926714
iter 709, loss 0.2420230418567801
iter 719, loss 0.23916605368251415
iter 729, loss 0.23639546442555456
iter 739, loss 0.23370700193813698
iter 749, loss 0.23109664355154746
iter 759, loss 0.2285606008362593
iter 769, loss 0.22609530530403904
iter 779, loss 0.2236973949936189
iter 789, loss 0.22136370188515428
iter 799, loss 0.21909124009208833
iter 809, loss 0.21687719478222933
iter 819, loss 0.21471891178284028
iter 829, loss 0.21261388782734392
iter 839, loss 0.2105597614038757
iter 849, loss 0.20855430416838638
iter 859, loss 0.20659541288730932
iter 869, loss 0.20468110187697833
iter 879, loss 0.2028094959090178
iter 889, loss 0.20097882355283644
iter 899, loss 0.19918741092814593
iter 909, loss 0.1974336758421087
iter 919, loss 0.1957161222872899
iter 929, loss 0.19403333527807176
iter 939, loss 0.19238397600456975
iter 949, loss 0.19076677728439415
iter 959, loss 0.18918053929381623
iter 969, loss 0.18762412556104593
iter 979, loss 0.18609645920539716
iter 989, loss 0.18459651940712488
iter 999, loss 0.18312333809366155
(Figure: loss curve over 1000 iterations of full-parameter training; original image output_72_1.png)
Mini-batch Stochastic Gradient Descent
In the program above, every iteration computes with the entire dataset. Real-world datasets are often very large, and using all the data for every loss-and-gradient computation is very inefficient. A reasonable remedy is to randomly draw a small part of the data to represent the whole, compute the gradient and loss on that part, and update the parameters. This method is called mini-batch stochastic gradient descent, SGD for short. Each batch of data drawn in an iteration is called a mini-batch, and the number of samples in a mini-batch is the batch_size. As the program iterates, it draws mini-batches one after another; once the whole dataset has been traversed, one round of training — an epoch — is complete. When launching training, the number of epochs num_epochs and the batch_size can be passed in as arguments.
The concrete implementation is described alongside the code below.
# Load the data
train_data, test_data = load_data()
train_data.shape
(404, 14)
train_data contains 404 records in total. With batch_size=10, samples 0-9 form the first mini-batch, named train_data1.
train_data1 = train_data[0:10]
train_data1.shape
(10, 14)
Use train_data1 (samples 0-9) to compute the gradient and update the network parameters.
net = Network(13)
x = train_data1[:, :-1]
y = train_data1[:, -1:]
loss = net.train(x, y, iterations=1, eta=0.01)
loss
[0.9001866101467376]
Then take samples 10-19 as the second mini-batch, and compute the gradient and update the parameters again.
train_data2 = train_data[10:20]
x = train_data2[:, :-1]
y = train_data2[:, -1:]
loss = net.train(x, y, iterations=1, eta=0.01)
loss
[0.8903272433979659]
Continuing this way, new mini-batches are drawn and the network parameters are updated step by step.
The program below splits train_data into multiple mini-batches of size batch_size.
batch_size = 10
n = len(train_data)
mini_batches = [train_data[k:k+batch_size] for k in range(0, n, batch_size)]
print('total number of mini_batches is ', len(mini_batches))
print('first mini_batch shape ', mini_batches[0].shape)
print('last mini_batch shape ', mini_batches[-1].shape)
total number of mini_batches is 41
first mini_batch shape (10, 14)
last mini_batch shape (4, 14)
The code above splits train_data into 41 mini-batches: the first 40 contain 10 samples each, and the last contains only 4.
Note that we took the mini-batches in order here, while SGD draws random samples to represent the whole. To get random sampling, we first shuffle the order of the samples in train_data and then draw mini-batches. Shuffling uses the np.random.shuffle function, whose usage is introduced below.
# Create an array
a = np.array([1,2,3,4,5,6,7,8,9,10,11,12])
print('before shuffle', a)
np.random.shuffle(a)
print('after shuffle', a)
before shuffle [ 1 2 3 4 5 6 7 8 9 10 11 12]
after shuffle [ 7 2 11 3 8 6 12 1 4 5 10 9]
Run the code above several times and you will find the order of the numbers differs after each shuffle.
That was a 1-D example; now observe the effect of shuffling a 2-D array.
# Create an array
a = np.array([1,2,3,4,5,6,7,8,9,10,11,12])
a = a.reshape([6, 2])
print('before shuffle\n', a)
np.random.shuffle(a)
print('after shuffle\n', a)
before shuffle
[[ 1 2]
[ 3 4]
[ 5 6]
[ 7 8]
[ 9 10]
[11 12]]
after shuffle
[[ 1 2]
[ 3 4]
[ 5 6]
[ 9 10]
[11 12]
[ 7 8]]
The result shows that the array's elements are shuffled along dimension 0, while the order within dimension 1 is preserved: 2 still immediately follows 1 and 8 still immediately follows 7, but whole rows are reordered, with [7, 8] moved to the end.
Combining the shuffling and mini-batch extraction steps, the training process can be rewritten as follows: each randomly drawn mini-batch is fed to the model for parameter training.
# Load the data
train_data, test_data = load_data()
# Shuffle the sample order
np.random.shuffle(train_data)
# Split train_data into multiple mini-batches
batch_size = 10
n = len(train_data)
mini_batches = [train_data[k:k+batch_size] for k in range(0, n, batch_size)]
# Create the network
net = Network(13)
# Use each mini_batch in turn
for mini_batch in mini_batches:
    x = mini_batch[:, :-1]
    y = mini_batch[:, -1:]
    loss = net.train(x, y, iterations=1)
Integrating this SGD code into the train function of the Network class gives the final complete code below.
import numpy as np

class Network(object):
    def __init__(self, num_of_weights):
        # Initialize w randomly
        # (uncomment the seed to keep results reproducible across runs)
        #np.random.seed(0)
        self.w = np.random.randn(num_of_weights, 1)
        self.b = 0.
    def forward(self, x):
        z = np.dot(x, self.w) + self.b
        return z
    def loss(self, z, y):
        error = z - y
        num_samples = error.shape[0]
        cost = error * error
        cost = np.sum(cost) / num_samples
        return cost
    def gradient(self, x, y):
        z = self.forward(x)
        N = x.shape[0]
        gradient_w = 1. / N * np.sum((z - y) * x, axis=0)
        gradient_w = gradient_w[:, np.newaxis]
        gradient_b = 1. / N * np.sum(z - y)
        return gradient_w, gradient_b
    def update(self, gradient_w, gradient_b, eta=0.01):
        self.w = self.w - eta * gradient_w
        self.b = self.b - eta * gradient_b
    def train(self, training_data, num_epoches, batch_size=10, eta=0.01):
        n = len(training_data)
        losses = []
        for epoch_id in range(num_epoches):
            # Before each epoch, shuffle the training data,
            # then draw batch_size records at a time
            np.random.shuffle(training_data)
            # Split the training data into mini-batches of batch_size records each
            mini_batches = [training_data[k:k+batch_size] for k in range(0, n, batch_size)]
            for iter_id, mini_batch in enumerate(mini_batches):
                x = mini_batch[:, :-1]
                y = mini_batch[:, -1:]
                a = self.forward(x)
                loss = self.loss(a, y)
                gradient_w, gradient_b = self.gradient(x, y)
                self.update(gradient_w, gradient_b, eta)
                losses.append(loss)
                print('Epoch {:3d} / iter {:3d}, loss = {:.4f}'.
                                 format(epoch_id, iter_id, loss))
        return losses
# Load the data
train_data, test_data = load_data()
# Create the network
net = Network(13)
# Start training
losses = net.train(train_data, num_epoches=50, batch_size=100, eta=0.1)
# Plot the loss curve
plot_x = np.arange(len(losses))
plot_y = np.array(losses)
plt.plot(plot_x, plot_y)
plt.show()
Epoch 0 / iter 0, loss = 0.6273
Epoch 0 / iter 1, loss = 0.4835
Epoch 0 / iter 2, loss = 0.5830
Epoch 0 / iter 3, loss = 0.5466
Epoch 0 / iter 4, loss = 0.2147
Epoch 1 / iter 0, loss = 0.6645
Epoch 1 / iter 1, loss = 0.4875
Epoch 1 / iter 2, loss = 0.4707
Epoch 1 / iter 3, loss = 0.4153
Epoch 1 / iter 4, loss = 0.1402
[output for epochs 2–48 elided; the per-iteration loss falls steadily from roughly 0.6 toward 0.1, with occasional spikes on the small final mini-batch of each epoch]
Epoch 49 / iter 0, loss = 0.0724
Epoch 49 / iter 1, loss = 0.0804
Epoch 49 / iter 2, loss = 0.0919
Epoch 49 / iter 3, loss = 0.1233
Epoch 49 / iter 4, loss = 0.1849
[Figure: training loss versus iteration (output_92_1.png), showing the loss decreasing over the 50 epochs]
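The plotted curve is noisy, partly because the fifth mini-batch of each epoch holds only 4 samples, so its loss (iter 4 in the log) jumps around. One way to smooth the curve is to average the losses per epoch; a sketch, assuming `losses` has one entry per iteration as returned by `train` above (filled here with synthetic values so the snippet runs on its own):

```python
import numpy as np

# Synthetic stand-in for the per-iteration losses returned by net.train:
# 50 epochs x 5 iterations, trending downward
losses = list(np.linspace(0.6, 0.1, 250))

iters_per_epoch = 5   # 404 samples / batch_size 100, rounded up
epoch_loss = np.array(losses).reshape(-1, iters_per_epoch).mean(axis=1)
print(epoch_loss.shape)   # one averaged value per epoch: (50,)
```

Plotting `epoch_loss` instead of the raw list gives one point per epoch and hides the small-batch spikes.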
Summary
In this section we walked through implementing gradient descent with numpy, building and training a simple linear model to predict Boston housing prices. Modeling the house-price prediction with a neural network comes down to three key points:
- Build the network: initialize the parameters w and b, and define how the prediction and the loss function are computed.
- Pick a random starting point, and define how gradients are computed and how the parameters are updated.
- Draw a portion of the full dataset as a mini-batch, compute the gradient and update the parameters, iterating until the loss barely decreases any further.
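The three points above can be condensed into a stand-alone sketch on synthetic data (a hypothetical stand-in for the Boston features; the real pipeline uses `load_data()` and the `Network` class, and uses mini-batches rather than the full batch shown here):

```python
import numpy as np

# Synthetic data: 200 samples, 13 features, an exact linear relation plus a bias
np.random.seed(1)
true_w = np.arange(1.0, 14.0).reshape(13, 1)
x = np.random.randn(200, 13)
y = np.dot(x, true_w) + 0.5

# 1. Build the network: initialize w and b, define forward pass and loss
w = np.random.randn(13, 1)
b = 0.0
eta = 0.1

for step in range(500):
    z = np.dot(x, w) + b              # forward: predicted values
    loss = np.mean((z - y) ** 2)      # mean squared error
    # 2. Gradients of the MSE loss with respect to w and b
    grad_w = 2.0 * np.dot(x.T, z - y) / len(x)
    grad_b = 2.0 * np.mean(z - y)
    # 3. Update the parameters along the negative gradient
    w -= eta * grad_w
    b -= eta * grad_b

print(round(float(loss), 6))   # loss after training, close to zero
```

Because the synthetic targets are exactly linear in the features, the recovered `w` approaches `true_w` and `b` approaches 0.5, so the loss drives toward zero; on real, noisy data it plateaus at a nonzero floor instead.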