ML in Practice - Adaline with stochastic gradient descent

Principles

stochastic gradient descent

The biggest drawback of the original Adaline is that it needs the complete set of x and y to compute the weights, which is impractical in real big-data applications: data on the web grows exponentially, and new samples arrive continuously. This motivates the idea of batch-wise gradient descent, i.e. updating the weights from subsets of the data rather than from the full set.

Below is the basic gradient descent loop from the previous chapter: the entire X is fed into the network for every weight update.

for i in range(self.n_iter):
    output = self.net_input(X)                   # net inputs for all samples at once
    errors = (y - output)                        # errors over the full dataset
    self.w_[1:] += self.eta * X.T.dot(errors)    # one update uses every sample
    self.w_[0] += self.eta * errors.sum()
    cost = (errors**2).sum() / 2.0               # sum-of-squared-errors cost
    self.cost_.append(cost)

Stochastic gradient descent is a special case of this batch-wise gradient descent: it draws a sample $x^{(i)}$ at random and updates the weights via

$\Delta w = \eta \, \bigl(y^{(i)} - \phi(z^{(i)})\bigr) \, x^{(i)}$

where $\phi(z^{(i)})$ is the (linear) activation of the net input for sample $x^{(i)}$. What makes it special is that the batch size is 1, i.e. samples are processed one at a time.
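
A minimal NumPy sketch of this single-sample update (the names eta, w, xi and target are illustrative; the AdalineSGD class below wraps the same rule in its _update_weights method):

import numpy as np

eta = 0.01                           # learning rate
w = np.zeros(1 + 2)                  # bias plus two feature weights
xi = np.array([0.5, -1.2])           # one randomly drawn sample x^(i)
target = 1                           # its label y^(i)

output = np.dot(xi, w[1:]) + w[0]    # linear activation phi(z^(i))
error = target - output
w[1:] += eta * xi * error            # delta-w for the feature weights
w[0] += eta * error                  # delta-w for the bias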

This batch-wise processing also lends itself to real-time training: once a model has been trained on the existing data, newly arriving samples can be fed in one at a time to keep refining it. (In the code below, the fit method trains on the existing data, while the partial_fit method is called to train on data that arrives later.)

adaptive learning rate

In stochastic gradient descent, a changing learning rate is often used instead of a fixed one, for example one that decays with the number of iterations:

$\eta = \dfrac{c_1}{\text{n\_iter} + c_2}$

where $c_1$ and $c_2$ are constants.
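
A small sketch of such a schedule (the AdalineSGD class below keeps eta fixed; c1 and c2 are illustrative constants):

c1, c2 = 1.0, 10.0
for n_iter in range(1, 16):
    eta = c1 / (n_iter + c2)    # learning rate shrinks as iterations accumulate
    print(n_iter, round(eta, 4))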

mini-batch learning

Stochastic gradient descent processes one sample at a time, whereas mini-batch learning is the more general batch-wise form, for example with a batch size of 50:

$\Delta w = \eta \sum_{j=i}^{i+50} \bigl(y^{(j)} - \phi(z^{(j)})\bigr) \, x^{(j)}$

The advantage is faster convergence.
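
A rough sketch of a single mini-batch update of size 50 (X, y, eta and w here are illustrative stand-ins on toy data, not the class attributes defined below):

import numpy as np

rng = np.random.RandomState(1)
X = rng.randn(200, 2)                         # toy feature matrix
y = np.where(X[:, 0] + X[:, 1] >= 0, 1, -1)   # toy labels

eta = 0.01
w = np.zeros(1 + X.shape[1])
batch = slice(0, 50)                          # samples i .. i+50
output = np.dot(X[batch], w[1:]) + w[0]       # net inputs for the whole mini-batch
errors = y[batch] - output
w[1:] += eta * X[batch].T.dot(errors)         # update summed over the mini-batch
w[0] += eta * errors.sum()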

Implementation

Building on the Adaline model from the previous post, this version adds:
1. A _shuffle method, which selects samples in random order.
2. A partial_fit method, used to train on data that arrives later.

from numpy.random import seed
import numpy as np
class AdalineSGD(object):
    """ADAptive LInear NEuron classifier.
    Parameters
    ------------
    eta : float
    Learning rate (between 0.0 and 1.0)
    n_iter : int
    Passes over the training dataset.
    Attributes
    -----------
    w_ : 1d-array
    Weights after fitting.
    errors_ : list
    Number of misclassifications in every epoch.
    shuffle : bool (default: True)
    Shuffles training data every epoch if True, so that samples are
    seen in random order and update cycles are avoided.
    random_state : int (default: None)
    Set random state for shuffling
    and initializing the weights.
    """
    def __init__(self, eta=0.01, n_iter=10,
        shuffle=True, random_state=None):
        self.eta = eta
        self.n_iter = n_iter
        self.w_initialized = False
        self.shuffle = shuffle
        if random_state:
            seed(random_state)
    def fit(self, X, y):
        """ Fit training data.
        Parameters
        ----------
        X : {array-like}, shape = [n_samples, n_features]
        Training vectors, where n_samples
        is the number of samples and
        n_features is the number of features.
        y : array-like, shape = [n_samples]
        Target values.
        Returns
        -------
        self : object
        """
        self._initialize_weights(X.shape[1])
        self.cost_ = []
        for i in range(self.n_iter):
            if self.shuffle:
                X, y = self._shuffle(X, y)
            cost = []
            #online processing
            for xi, target in zip(X, y):
                cost.append(self._update_weights(xi, target))
            # average cost over the epoch, computed once all samples are seen
            avg_cost = sum(cost) / len(y)
            self.cost_.append(avg_cost)
        return self
    def partial_fit(self, X, y):
        """
        Fit training data without reinitializing the weights
        If we want to update our model—for example, in an on-line learning scenario with
        streaming data—we could simply call the partial_fit method on individual
        samples—for instance, ada.partial_fit(X_std[0, :], y[0]).
        """
        if not self.w_initialized:
            self._initialize_weights(X.shape[1])
        # ravel() flattens a multi-dimensional array to 1-D (row-major);
        # shape[0] > 1 means several samples were passed at once
        if y.ravel().shape[0] > 1:
            for xi, target in zip(X, y):
                self._update_weights(xi, target)
        else:
            self._update_weights(X, y)
        return self
    def _shuffle(self, X, y):
        """Shuffle training data"""
        r = np.random.permutation(len(y))
        return X[r], y[r]
    def _initialize_weights(self, m):
        """Initialize weights to zeros"""
        self.w_ = np.zeros(1 + m)
        self.w_initialized = True
    def _update_weights(self, xi, target):
        """Apply Adaline learning rule to update the weights"""
        output = self.net_input(xi)
        error = (target - output)
        self.w_[1:] += self.eta * xi.dot(error)
        self.w_[0] += self.eta * error
        cost = 0.5 * error**2
        return cost
    def net_input(self, X):
        """Calculate net input"""
        return np.dot(X, self.w_[1:]) + self.w_[0]
    def activation(self, X):
        """Compute linear activation"""
        return self.net_input(X)
    def predict(self, X):
        """Return class label after unit step"""
        return np.where(self.activation(X) >= 0.0, 1, -1)

Testing

Train the model on the standardized feature matrix X_std and labels y prepared in the previous post, then plot the average cost per epoch:

>>> import matplotlib.pyplot as plt
>>> ada = AdalineSGD(n_iter=15, eta=0.01, random_state=1)
>>> ada.fit(X_std, y)
>>> plt.plot(range(1, len(ada.cost_) + 1), ada.cost_, marker='o')
>>> plt.xlabel('Epochs')
>>> plt.ylabel('Average Cost')
>>> plt.show()

[Figure: average cost per epoch]
If new data arrives later, it can be used to update the model without reinitializing the weights:

ada.partial_fit(X_std[0, :], y[0])
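
To keep updating on a stream, partial_fit can be called in a loop; X_new and y_new here stand for hypothetical newly arrived samples, standardized in the same way as X_std:

# X_new, y_new: hypothetical stream of new, already-standardized samples
for xi, target in zip(X_new, y_new):
    ada.partial_fit(xi, target)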