Linear Regression 2

1 Matrix solution for multiple linear regression

$J(\theta)=\frac{1}{2M}\sum\limits_{i=1}^{M}(y_i-(a+bx_i))^2=\frac{1}{2M}(y-x\theta)^T(y-x\theta)$,
and taking the first-order partial derivative of $J(\theta)$ with respect to $\theta$ gives the gradient:

$$
\begin{array}{lll}
\nabla_{\theta}J(\theta)&=&\nabla_{\theta}\left(\frac{1}{2M}(y-x\theta)^T(y-x\theta)\right)\\
&=&\nabla_\theta\left(\frac{1}{2M}(y^T-\theta^Tx^T)(y-x\theta)\right)\\
&=&\frac{1}{2M}\nabla_\theta(y^Ty-y^Tx\theta-\theta^Tx^Ty+\theta^Tx^Tx\theta)\\
&=&\frac{1}{2M}(-(y^Tx)^T-x^Ty+2x^Tx\theta)\\
&=&\frac{1}{M}(x^Tx\theta-x^Ty)=0
\end{array}
$$

Solving gives $\theta=(x^Tx)^{-1}x^Ty$; this result also applies to multiple linear regression.

The code is as follows:

import numpy as np
import pandas as pd
data = pd.read_csv('c:/users/administrator/desktop/Advertising.csv')    # TV,Radio,Newspaper,Sales
data['intercept'] = 1
x = data[['intercept','TV', 'radio', 'newspaper']]
y = data['sales']
# first 150 rows as train, last 50 rows as test
train_x = np.array(x.loc[1:150,])
test_x = np.array(x.loc[151:,])
train_y = np.array(y.loc[1:150,])
test_y = np.array(y.loc[151:,])
# beta = (X^T X)^(-1) X^T y, compute the parameters
Xt = np.transpose(train_x)
XtX = np.dot(Xt,train_x)
Xty = np.dot(Xt,train_y)
beta = np.linalg.solve(XtX,Xty)
print(beta)

The result is:
[ 3.07875053e+00 4.65616125e-02 1.80900459e-01 -2.55988893e-03]

Next, predict on the last 50 rows as follows:

# predict on the last 50 rows
pred=[]
for data, actual in zip(test_x, test_y):
    test = np.transpose(data)
    prediction = np.dot(test, beta)
    pred.append(prediction)
    #print('prediction = ' + str(prediction) + ' actual = ' + str(actual))
pred

Finally, plot the predicted values against the actual values:

import matplotlib.pyplot as plt
import matplotlib as mpl
mpl.rcParams['font.sans-serif'] = ['simHei']
t = np.arange(len(test_x))
plt.plot(t, test_y, 'r-', linewidth=2, label='actual')
plt.plot(t, pred, 'g-', linewidth=2, label='predicted')
plt.legend(loc='upper left')

(Figure: predicted vs. actual values on the 50 test samples)

Note: below we do the same thing with linear_model from scikit-learn.

(Figure: result obtained with scikit-learn's linear_model)
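
Since the screenshot is not reproduced here, the following is a minimal sketch of the scikit-learn version, assuming the same train_x/train_y split defined earlier (with the manually added intercept column in position 0):

from sklearn import linear_model
# fit on the same training split; drop the manual intercept column,
# since LinearRegression fits its own intercept by default
reg = linear_model.LinearRegression()
reg.fit(train_x[:, 1:], train_y)
print(reg.intercept_, reg.coef_)

The fitted intercept and coefficients should agree with the beta obtained from the normal equation above, up to numerical precision.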

2 Gradient descent solution for multiple linear regression

2.1 Determine the hypothesis function and loss function of the model to be optimized.

For linear regression, for example, the hypothesis function is $h_{\theta}(x_1,x_2,\cdots,x_n)=\theta_0+\theta_1x_1+\cdots+\theta_nx_n$, where $\theta_i\ (i=0,1,2,\cdots,n)$ are the model parameters and $x_i\ (i=1,2,\cdots,n)$ are the $n$ feature values of each sample. The notation can be simplified by adding a feature $x_0=1$, so that $h_{\theta}(x_0,x_1,\cdots,x_n)=\sum\limits_{i=0}^{n}\theta_ix_i$.

Still for linear regression, and corresponding to the hypothesis function above, the loss function is:

$J(\theta_0,\theta_1,\cdots,\theta_n)=\frac{1}{2M}\sum\limits_{j=1}^{M}(h_{\theta}(x_0^{(j)},x_1^{(j)},\cdots,x_n^{(j)})-y_j)^2$.

2.2 Algorithm procedure:

  1. Compute the gradient of the loss function at the current position; for $\theta_i$ the gradient is $\frac{\partial}{\partial \theta_i}J(\theta_0,\theta_1,\cdots,\theta_n)$.

  2. Multiply the gradient by the step size to get the descent distance at the current position, i.e. $\alpha\frac{\partial}{\partial \theta_i}J(\theta_0,\theta_1,\cdots,\theta_n)$.

  3. Check whether the descent distance is smaller than $\epsilon$ for every $\theta_i$; if so, the algorithm terminates, otherwise go to step 4.

  4. Update all the $\theta$; for $\theta_i$ the update expression is $\theta_i=\theta_i-\alpha\frac{\partial}{\partial \theta_i}J(\theta_0,\theta_1,\cdots,\theta_n)$. After updating, return to step 1.

The following uses linear regression to describe gradient descent concretely. Suppose our samples are $(x_1^{(1)},x_2^{(1)},\cdots,x_n^{(1)},y_1),(x_1^{(2)},x_2^{(2)},\cdots,x_n^{(2)},y_2),\cdots,(x_1^{(M)},x_2^{(M)},\cdots,x_n^{(M)},y_M)$, and the loss function is as given above:

$J(\theta_0,\theta_1,\cdots,\theta_n)=\frac{1}{2M}\sum\limits_{j=1}^{M}(h_{\theta}(x_0^{(j)},x_1^{(j)},\cdots,x_n^{(j)})-y_j)^2$.

Then, in step 1 of the algorithm, the partial derivative with respect to $\theta_i$ is computed as:

$\frac{\partial}{\partial \theta_i}J(\theta_0,\theta_1,\cdots,\theta_n)=\frac{1}{M}\sum\limits_{j=1}^{M}(h_{\theta}(x_0^{(j)},x_1^{(j)},\cdots,x_n^{(j)})-y_j)x_i^{(j)}$.

All the $x_0^{(j)}$ in the expression above equal 1.

The update expression for $\theta_i$ in step 4 is:

$\theta_i=\theta_i-\alpha\frac{1}{M}\sum\limits_{j=1}^{M}(h_{\theta}(x_0^{(j)},x_1^{(j)},\cdots,x_n^{(j)})-y_j)x_i^{(j)}$.
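
As an illustration only (not code from this lecture), a minimal sketch of this component-wise update, assuming a NumPy design matrix X of shape (M, n+1) whose first column is the added $x_0=1$ feature and a target vector y of shape (M,):

import numpy as np

def gradient_descent_componentwise(X, y, alpha=0.01, num_iter=1000):
    # X: (M, n+1) design matrix, first column all ones (x_0 = 1)
    # y: (M,) vector of targets
    M, n_plus_1 = X.shape
    theta = np.zeros(n_plus_1)
    for _ in range(num_iter):
        # residuals h_theta(x^(j)) - y_j, evaluated at the current theta
        residuals = X.dot(theta) - y
        # update every theta_i simultaneously using the same residuals
        for i in range(n_plus_1):
            grad_i = (1.0 / M) * np.sum(residuals * X[:, i])
            theta[i] -= alpha * grad_i
    return theta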

2.3 Matrix formulation of gradient descent

1 The hypothesis function can be written as $h_\theta(X)=X\theta$, where $h_\theta(X)$ is an $M\times 1$ vector, $\theta$ is an $(n+1)\times 1$ vector, and $X$ is an $M\times(n+1)$ matrix. Here $M$ is the number of samples and $n+1$ is the number of features per sample.

The loss function is $J(\theta)=\frac{1}{2M}(X\theta-Y)^T(X\theta-Y)$, where $Y$ is the vector of sample outputs, of dimension $M\times 1$.

2 Algorithm procedure:

  1. Compute the gradient of the loss function at the current position; for $\theta_i$ the gradient is $\frac{\partial}{\partial \theta_i}J(\theta)$.

  2. Multiply the gradient by the step size to get the descent distance at the current position, i.e. $\alpha\frac{\partial}{\partial \theta_i}J(\theta)$.

  3. Check whether the descent distance is smaller than $\epsilon$ for every $\theta_i$; if so, the algorithm terminates, otherwise go to step 4.

  4. Update all the $\theta$; for $\theta_i$ the update expression is given below. After updating, return to step 1.

$\theta_i=\theta_i-\alpha\frac{\partial}{\partial \theta_i}J(\theta)$.

The partial derivative of the loss function with respect to the vector $\theta$ is:
$\frac{\partial}{\partial \theta}J(\theta)=\frac{1}{M}X^T(X\theta-Y)$, so the update in step 4 can be written as $\theta=\theta-\alpha\frac{1}{M}X^T(X\theta-Y)$.

The concrete code is as follows:

import copy
import numpy as np

# gradient descent for simple linear regression y = a + b*x
def optimizer(data,init_a,init_b,learning_rate,num_iter):
    b=init_b
    a=init_a
    
    #gradient descent
    for i in range(num_iter):
        a,b=compute_gradient(a,b,data,learning_rate)
        #if i%100==0:
            #print('iter{0}:error={1}'.format(i,computer_error(a,b,data)))
    return [a,b]
# one full-batch update of (a, b) using the gradients of the loss
def compute_gradient(a_current,b_current,data,learning_rate=0.001):

    a_gradient=0
    b_gradient=0
    
    M=float(len(data))
    for i in range(len(data)):
        x=data[i,0]
        y=data[i,1]
        
        a_gradient+= -(1/M)*(y-(a_current+b_current*x))
        b_gradient+= -(1/M)*x*(y-(a_current+b_current*x))
    #print('a_gradient=%f,b_gradient=%f'%(a_gradient,b_gradient))
    new_b=b_current-(learning_rate*b_gradient)
    new_a=a_current-(learning_rate*a_gradient)
    return [new_a,new_b]
# one vectorized update: theta <- theta - alpha * (1/M) * X^T (X theta - y)
def compute_gradient_vector_version(theta,X,y,learning_rate=0.00001):
    theta_c=copy.copy(theta);theta_c=np.matrix(theta_c)
    X_c=copy.copy(X);X_c=np.matrix(X_c)
    y_c=copy.copy(y);y_c=np.matrix(y_c)
    
    M,N=X_c.shape
    M=float(len(X_c))
    theta_gradient=np.matrix(np.zeros([N,1]))
    #for j in range(N):
        #theta_gradient[j,0]=-(1/M)*(y_c-X_c*theta_c).T*X_c[:,j]
    theta_gradient=-(1/M)*X_c.T*(y_c-X_c*theta_c)
    theta_c = theta_c-(learning_rate*theta_gradient)
    #print(theta_c)
    return theta_c
# run the vectorized update for num_iter iterations starting from theta = 0
def optimizer_vector_version(X,y,learning_rate=0.00001,num_iter=10000):
    X_c=copy.copy(X);X_c=np.matrix(X_c)
    y_c=copy.copy(y);y_c=np.matrix(y_c)

    M,N=X_c.shape
    theta=np.matrix(np.zeros([N,1]))
    #gradient descent
    for i in range(num_iter):
        theta=compute_gradient_vector_version(theta,X_c,y_c,learning_rate)
        #if i%100==0:
            #print('iter{0}:error={1}'.format(i,computer_error(a,b,data)))
    return theta
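
Note that optimizer and optimizer_vector_version above simply run a fixed number of iterations; the $\epsilon$-based stopping rule from step 3 is not implemented. A rough sketch of that variant (names chosen here for illustration), assuming y has shape (M, 1) as in optimizer_vector_version:

def optimizer_with_tolerance(X, y, learning_rate=0.00001, epsilon=1e-6, max_iter=100000):
    # same matrix update as compute_gradient_vector_version, but stop
    # once every component of the update step is smaller than epsilon
    X_c = np.matrix(X); y_c = np.matrix(y)
    M, N = X_c.shape
    theta = np.matrix(np.zeros([N, 1]))
    for _ in range(max_iter):
        gradient = (1.0 / M) * X_c.T * (X_c * theta - y_c)
        step = learning_rate * gradient
        theta = theta - step
        if np.all(np.abs(step) < epsilon):
            break
    return theta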

For Example 2 from the previous lecture notes, the procedure is as follows:

# load the data
res=[]
with open('d:/shuju1.txt','r') as f:
    lines=f.readlines()
    for line in lines:
        res.append(list(map(float,line.strip('\n').split(','))))
res=np.array(res)
data=res
# initialize the parameters
learning_rate=0.001
initial_b=0
initial_a=0
num_iter=1000
# the two methods give the following results

[a,b]=optimizer(data,initial_a,initial_b,learning_rate,num_iter)
>>(0.24852432182905326, 0.7411262595522877)

import copy
x0=np.ones((67,1))
data.shape,x0.shape
X=np.hstack((x0,data[:,0].reshape(67,1)))
y1=data[:,1].reshape(67,1)
theta_c=optimizer_vector_version(X,y1)

Example 1. Below we look at an example of multiple linear regression.


First, implement it with the matrix method; the code is as follows:

import numpy as np
import pandas as pd
data1 = pd.read_csv('c:/users/administrator/desktop/Advertising.csv')    # TV,Radio,Newspaper,Sales
data1['intercept'] = 1
x = data1[['intercept','TV', 'radio', 'newspaper']]
y = data1['sales']
# first 150 rows as train, last 50 rows as test
train_x = np.array(x.loc[1:150,])
test_x = np.array(x.loc[151:,])
train_y = np.array(y.loc[1:150,])
test_y = np.array(y.loc[151:,])
# beta = (X^T X)^(-1) X^T y, compute the parameters
Xt = np.transpose(train_x)
XtX = np.dot(Xt,train_x)
Xty = np.dot(Xt,train_y)
beta = np.linalg.solve(XtX,Xty)
print(beta)

The result is: [ 3.07875053e+00 4.65616125e-02 1.80900459e-01 -2.55988893e-03]

The plotting code is as follows:

pred=[]
for data, actual in zip(test_x, test_y):
    test = np.transpose(data)
    prediction = np.dot(test, beta)
    pred.append(prediction)
    #print('prediction = ' + str(prediction) + ' actual = ' + str(actual))
import matplotlib.pyplot as plt
import matplotlib as mpl
mpl.rcParams['font.sans-serif'] = ['simHei']
t = np.arange(len(test_x))
plt.plot(t, test_y, 'r-', linewidth=2, label='actual')
plt.plot(t, pred, 'g-', linewidth=2, label='predicted')
plt.legend(loc='upper left')

(Figure: predicted vs. actual values on the test set, matrix method)

Now let us solve the same problem with gradient descent.

train_y=train_y.reshape((len(train_y),1))
theta_c=optimizer_vector_version(train_x,train_y)
theta_c=theta_c.tolist()

pred=[]
for data, actual in zip(test_x, test_y):
    test = np.transpose(data)
    prediction = np.dot(test, theta_c)
    pred.append(prediction)
    #print('prediction = ' + str(prediction) + ' actual = ' + str(actual))
    
import matplotlib.pyplot as plt
import matplotlib as mpl
mpl.rcParams['font.sans-serif'] = ['simHei']
t = np.arange(len(test_x))
plt.plot(t, test_y, 'r-', linewidth=2, label='actual')
plt.plot(t, pred, 'g-', linewidth=2, label='predicted')
plt.legend(loc='upper left')

(Figure: predicted vs. actual values on the test set, gradient descent)
