Python Machine Learning: Linear Regression
1. Linear Regression and Gradient Descent
When I see linear regression, the first thing that comes to mind is those linear regression exercises from high school: a whole class period spent computing a single slope k, on a data set of only a few points.
But now, as the meme goes, "Sir, the times have changed."
As for what machine learning is, I won't take notes on that here; explanations online are far clearer than anything I could write. I've been reading Zhou Zhihua's "watermelon book" (Machine Learning). Because of the pandemic I can't return to campus, so some of the math I couldn't chew through carefully, which is a real pity.
Now to the main topic: linear regression.
Simply put, linear regression fits a straight line through the data and uses it to predict unknown values. I'm not sure how precise that is, but it captures the idea.
The classical way to obtain theta_0 and theta_1 is the least-squares method: minimize the squared Euclidean distance between the predictions and the observed values, which yields a closed-form solution.
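For reference, here are the model and the closed-form least-squares solution; this is exactly what ParameterSolve computes in the code below:

f(x) = \theta_1 x + \theta_0

\theta_1 = \frac{\sum_{i=1}^{m} y_i\,(x_i - \bar{x})}{\sum_{i=1}^{m} x_i^2 - \frac{1}{m}\left(\sum_{i=1}^{m} x_i\right)^2}
\qquad
\theta_0 = \frac{1}{m}\sum_{i=1}^{m}\left(y_i - \theta_1 x_i\right)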
The gradient descent optimization algorithm:
To refine theta_1 and theta_0 further, we define a loss function J over the parameters (theta_1, theta_0).
Taking the partial derivatives of J with respect to theta_1 and theta_0 gives the gradient along which we descend.
Here alpha is the learning rate, which controls the step size of each descent.
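Written out, the loss and the update rules (these correspond to LossFormula and PartialTheta in the code below) are:

J(\theta_1, \theta_0) = \frac{1}{2m}\sum_{i=1}^{m}\left(\theta_1 x_i + \theta_0 - y_i\right)^2

\theta_1 \leftarrow \theta_1 - \alpha\,\frac{1}{m}\sum_{i=1}^{m}\left(\theta_1 x_i + \theta_0 - y_i\right)x_i
\qquad
\theta_0 \leftarrow \theta_0 - \alpha\,\frac{1}{m}\sum_{i=1}^{m}\left(\theta_1 x_i + \theta_0 - y_i\right)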
2. Code Demonstration
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
"""
the linear regression model for machine study
f(x) = theta_1 * x + theta_0
"""
def ReadingDataSets():
    Datainputstream = np.array(pd.read_csv(r"D:\桌面\data.csv"))  # read the data file
    DataX = Datainputstream[:, 0:-1].ravel()  # split the columns: features...
    DataY = Datainputstream[:, -1]  # ...and labels
    DataSetShape = Datainputstream.shape  # record the size of the data set
    return DataX, DataY, DataSetShape
def average(sets):  # compute the mean of a sequence
    aver = sum(sets) / np.array(sets).shape[0]
    return aver
def ParameterSolve(x, y, m):  # solve y = theta_1 * x + theta_0
    # Minimize the squared Euclidean distance by setting the partial
    # derivatives to zero, giving the closed-form optimum for each theta.
    theta_1, theta_0 = 0, 0  # initial values
    parameter_1, parameter_2, parameter_3, parameter_4 = 0, 0, 0, 0
    x_mean = average(x)  # hoisted out of the loop; x does not change inside it
    for i in range(m):
        parameter_1 += y[i] * (x[i] - x_mean)
        parameter_2 += x[i] ** 2
        parameter_3 += x[i]
    theta_1 = parameter_1 / (parameter_2 - (1 / m) * (parameter_3 ** 2))  # closed form for theta_1
    for i in range(m):
        parameter_4 += y[i] - theta_1 * x[i]
    theta_0 = (1 / m) * parameter_4  # closed form for theta_0
    return theta_1, theta_0
def LossFormula(x, y, m, theta_1, theta_0):  # mean squared error loss
    J = 0
    for i in range(m):
        h = theta_1 * x[i] + theta_0
        J += (h - y[i]) ** 2
    J /= (2 * m)
    return J
def PartialTheta(x, y, m, theta_1, theta_0):  # partial derivatives of the loss
    theta_1Partial = 0
    theta_0Partial = 0
    for i in range(m):
        theta_1Partial += (theta_1 * x[i] + theta_0 - y[i]) * x[i]
    theta_1Partial /= m  # was "/= (1/m)", which multiplies by m instead of averaging
    for i in range(m):
        theta_0Partial += theta_1 * x[i] + theta_0 - y[i]
    theta_0Partial /= m
    return [theta_1Partial, theta_0Partial]
def GradientDescent(x, y, m, alpha=0.01, theta_1=0, theta_0=0):  # optimize the parameters
    MaxIteration = 1000  # iteration cap
    counter = 0  # iteration counter
    Mindiffer = 0.0000000000001  # convergence threshold on the change in loss
    c = LossFormula(x, y, m, theta_1, theta_0)
    differ = c + 10  # initialize so the first comparison passes
    theta_1sets = [theta_1]
    theta_0sets = [theta_0]
    Loss = [c]
    """
    Iterate while the difference between the previous loss and the current
    loss is still above the threshold (and the iteration cap is not reached).
    Each iteration computes the gradient, takes one descent step from the
    current parameters, and records the new loss for the next comparison.
    """
    while np.abs(differ - c) > Mindiffer and counter < MaxIteration:
        differ = c
        gradient = PartialTheta(x, y, m, theta_1, theta_0)  # compute both partials once
        theta_1 -= alpha * gradient[0]  # descend along the gradient
        theta_0 -= alpha * gradient[1]
        theta_1sets.append(theta_1)
        theta_0sets.append(theta_0)
        c = LossFormula(x, y, m, theta_1, theta_0)
        Loss.append(c)
        counter += 1
    return {"theta_1": theta_1, "theta_1sets": theta_1sets, "theta_0": theta_0, "theta_0sets": theta_0sets, "losssets": Loss}
def DrawScatterandPredictionModel(x, y, theta_1, theta_0, newtheta):
    plt.figure("linear regression")
    plt.scatter(x, y)
    plt.plot(x, theta_1 * x + theta_0, lw=2, label="initial linear regression")
    plt.plot(x, newtheta["theta_1"] * x + newtheta["theta_0"], ls="--", lw=0.5, label="optimized linear regression")
    plt.legend()
    plt.show()
if __name__ == '__main__':
    x, y, shape = ReadingDataSets()
    th1, th0 = ParameterSolve(x, y, shape[0])
    result = GradientDescent(x, y, shape[0], alpha=0.01, theta_1=th1, theta_0=th0)
    DrawScatterandPredictionModel(x, y, th1, th0, result)
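As an aside, the per-sample loops above can be collapsed into NumPy vector operations. Below is a minimal sketch of one descent step, assuming x and y are 1-D NumPy arrays; it implements the same math as PartialTheta, just without explicit Python loops (VectorizedStep is my own illustrative helper, not part of the listing above):

def VectorizedStep(x, y, theta_1, theta_0, alpha=0.01):
    # One gradient-descent step using vectorized NumPy operations
    # (same gradients as PartialTheta, computed without a loop).
    m = x.shape[0]
    residual = theta_1 * x + theta_0 - y  # h(x_i) - y_i for every sample at once
    theta_1 -= alpha * (residual * x).sum() / m  # dJ/dtheta_1
    theta_0 -= alpha * residual.sum() / m        # dJ/dtheta_0
    return theta_1, theta_0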
This is genuinely fun, hahaha.
The output is as follows:
{'theta_1': 1.2873573697963243,
'theta_1sets': [1.287357370010957, 1.2873573697963243],
'theta_0': 9.908606190325537,
'theta_0sets': [9.908606190325276, 9.908606190325537],
'losssets': [53.73521850475449, 53.73521850475453]}
Without optimization: (figure)
With gradient descent optimization: (figure)
The difference is barely visible, because the loss only changes around the 1e-13 level: gradient descent was initialized from the closed-form optimum obtained by setting the partial derivatives of theta to zero (which is precisely the least-squares solution), so there is almost nothing left to optimize and the two plotted lines nearly coincide.
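To actually watch the loss fall, one can skip the closed-form initialization and let gradient descent start from zero (the defaults of GradientDescent). A hypothetical run reusing the functions above; note that whether alpha = 0.01 converges depends on the scale of the data:

x, y, shape = ReadingDataSets()
result = GradientDescent(x, y, shape[0], alpha=0.01, theta_1=0, theta_0=0)  # start from scratch
print(result["losssets"][0], "->", result["losssets"][-1])  # the loss should now drop visibly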