Stanford CS231n assignment1: two_layer_net (with annotated code)

First, a very handy website for drawing neural-network diagrams: http://alexlenail.me/NN-SVG/index.html

I was also a little unsure about where exactly the names of the various layers should be attached; only after reading a few pieces of code did I sort it out, so I have marked them on the diagram below (using ReLU as the example activation function). Corrections are welcome if anything is wrong.

Figure: names of the layers of the neural network

The network in the figure above can be called a two-layer network (two fully connected layers) or a one-hidden-layer network. The forward pass works as follows: the input is multiplied by the weight matrix W1 and the bias b1 is added, giving the input to the hidden layer; that value is passed through the ReLU function to produce the hidden layer's output; the same step is then repeated, multiplying by W2 and adding b2 to obtain the output-layer scores. Throughout this process you have to be careful about the orientation of the matrix dimensions; it is easy to get them transposed, which leads to shape mismatches in the additions or multiplications.
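As a quick shape sanity check, here is a minimal sketch of that forward pass on random data; the sizes N, D, H, C below are made up purely for illustration:

import numpy as np

N, D, H, C = 5, 4, 10, 3                 # batch size, input dim, hidden dim, number of classes
X = np.random.randn(N, D)                # input, shape (N, D)
W1, b1 = np.random.randn(D, H), np.zeros(H)
W2, b2 = np.random.randn(H, C), np.zeros(C)

z2 = X.dot(W1) + b1                      # hidden-layer input, shape (N, H)
a2 = np.maximum(z2, 0)                   # ReLU, hidden-layer output, shape (N, H)
scores = a2.dot(W2) + b2                 # output-layer scores, shape (N, C)
print(z2.shape, a2.shape, scores.shape)  # (5, 10) (5, 10) (5, 3)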

1. neural_net.py

1.1 Q1

The first problem: at the first loss check, I got the result below.

My code gave a difference of 0.018 on every run, which felt too large: the solutions I found online all report differences on the order of 1e-13. I searched my code for a bug for a long time without finding one, and running other people's code also produced the same 0.018. Finally I saw in this post https://blog.csdn.net/kammyisthebest/article/details/80377613 that everyone else uses reg = 0.1, while our notebook passes reg = 0.05. After changing reg to 0.1 and rerunning, the difference dropped to the expected order, sure enough.
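The mismatch makes sense once you look at where reg enters the loss: the data loss does not depend on reg, so the gap against the reference value is exactly the difference in the L2 regularization term. A small hedged illustration (the weights here are random placeholders, not the notebook's toy model):

import numpy as np

np.random.seed(0)
W1 = 1e-1 * np.random.randn(4, 10)   # placeholder weights, only for illustration
W2 = 1e-1 * np.random.randn(10, 3)

def reg_loss(reg):
    # L2 regularization term used in this assignment: 0.5 * reg * (||W1||^2 + ||W2||^2)
    return 0.5 * reg * (np.sum(W1 * W1) + np.sum(W2 * W2))

# Computing the loss with reg=0.05 while the reference value was produced with reg=0.1
# shifts the total loss by exactly this amount:
print(reg_loss(0.1) - reg_loss(0.05))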

1.2 Q2

The second problem came up in the code for the backward pass. The gradients of W1/W2 are not hard to understand, but the gradients of b1/b2 gave me trouble. The forward pass in the code is written like this:

# Dot product of the input with W1; this becomes the input to the next layer
z2 = X.dot(W1) + b1
# Activation function (ReLU), giving the hidden layer's output
a2 = np.maximum(z2, 0)
# The hidden layer's output feeds into the output layer
scores = a2.dot(W2) + b2

At first glance it looks as if the partial derivative with respect to b1/b2 is simply the number 1, which is why my first attempt set the bias gradients to np.ones_like(b1) / np.ones_like(b2). But that is not right: the lines above are just NumPy's broadcasting shorthand. Written out explicitly, they really mean something like this:

# The bias is really added to every row, i.e. it behaves like a full matrix;
# NumPy broadcasting simply hides that. Written out explicitly:
z2 = X.dot(W1) + np.ones((N, 1)).dot(b1.reshape(1, -1))
a2 = np.maximum(z2, 0)
scores = a2.dot(W2) + np.ones((N, 1)).dot(b2.reshape(1, -1))
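To convince yourself that broadcasting and the explicit ones-vector form agree, here is a minimal hedged check on random data (the sizes N and H are made up for illustration):

import numpy as np

N, H = 6, 10
z = np.random.randn(N, H)                                # pretend pre-bias hidden values
b1 = np.random.randn(H)

broadcast = z + b1                                       # what the assignment code actually does
explicit = z + np.ones((N, 1)).dot(b1.reshape(1, -1))    # the ones-vector form written out
print(np.allclose(broadcast, explicit))                  # True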

In other words, when differentiating with respect to b1/b2, what the local derivative really gives you is an np.ones(N) vector. Then, following the chain rule as taught in cs231n, multiplying the upstream gradient flowing back by this local value yields the bias gradient, which is what the code below does:

# First compute the derivative of the softmax loss with respect to the output-layer scores;
# this is the starting point of backpropagation and is the same as in the softmax/SVM exercise.
# The softmax loss is L = -s[yi] + ln(∑_j e^s[j]), so dL/ds[yi] = -1 + e^s[yi]/∑_j e^s[j],
# i.e. -1 + prob for the correct class (and just prob for every other class), as in the code below.
# Since the output layer has no activation (a == z), dL/da == dL/dz here.
output = np.zeros_like(scores)
output[range(N), y] = -1
output += prob
# Gradient of W2: upstream gradient multiplied by the local input (formula BP4)
grads['W2'] = (a2.T).dot(output)
grads['W2'] = grads['W2'] / N + reg * W2
# Gradient of b2, same idea: the ones vector sums over the batch dimension
grads['b2'] = np.ones(N).dot(output) / N
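One way to see that this column sum (np.ones(N).dot(output) is just output.sum(axis=0)) really is the bias gradient is a quick finite-difference check on a tiny random problem. This is only a hedged sketch with made-up sizes, not part of the assignment code:

import numpy as np

np.random.seed(1)
N, H, C = 5, 4, 3
a2 = np.random.randn(N, H)            # pretend hidden-layer activations
W2 = np.random.randn(H, C)
b2 = np.random.randn(C)
y = np.random.randint(C, size=N)

def softmax_loss(b):
    scores = a2.dot(W2) + b
    scores -= scores.max(axis=1, keepdims=True)
    prob = np.exp(scores) / np.exp(scores).sum(axis=1, keepdims=True)
    return -np.log(prob[range(N), y]).mean()

# Analytic gradient: column sum of the upstream gradient, divided by N
scores = a2.dot(W2) + b2
scores -= scores.max(axis=1, keepdims=True)
prob = np.exp(scores) / np.exp(scores).sum(axis=1, keepdims=True)
output = prob.copy()
output[range(N), y] -= 1
db2 = np.ones(N).dot(output) / N      # same as output.sum(axis=0) / N

# Numerical gradient via central differences
num_db2 = np.zeros_like(b2)
h = 1e-5
for i in range(C):
    b_plus, b_minus = b2.copy(), b2.copy()
    b_plus[i] += h
    b_minus[i] -= h
    num_db2[i] = (softmax_loss(b_plus) - softmax_loss(b_minus)) / (2 * h)

print(np.max(np.abs(db2 - num_db2)))  # should be around 1e-9 or smaller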

1.3 Code

from __future__ import print_function

from builtins import range
from builtins import object
import numpy as np
import matplotlib.pyplot as plt
from past.builtins import xrange

class TwoLayerNet(object):
    """
    A two-layer fully-connected neural network. The net has an input dimension of
    N, a hidden layer dimension of H, and performs classification over C classes.
    We train the network with a softmax loss function and L2 regularization on the
    weight matrices. The network uses a ReLU nonlinearity after the first fully
    connected layer.

    In other words, the network has the following architecture:

    input - fully connected layer - ReLU - fully connected layer - softmax

    The outputs of the second fully-connected layer are the scores for each class.
    """

    def __init__(self, input_size, hidden_size, output_size, std=1e-4):
        """
        Initialize the model. Weights are initialized to small random values and
        biases are initialized to zero. Weights and biases are stored in the
        variable self.params, which is a dictionary with the following keys:

        W1: First layer weights; has shape (D, H)
        b1: First layer biases; has shape (H,)
        W2: Second layer weights; has shape (H, C)
        b2: Second layer biases; has shape (C,)

        Inputs:
        - input_size: The dimension D of the input data.
        - hidden_size: The number of neurons H in the hidden layer.
        - output_size: The number of classes C.
        """
        self.params = {}
        self.params['W1'] = std * np.random.randn(input_size, hidden_size)
        self.params['b1'] = np.zeros(hidden_size)
        self.params['W2'] = std * np.random.randn(hidden_size, output_size)
        self.params['b2'] = np.zeros(output_size)

    def loss(self, X, y=None, reg=0.0):
        """
        Compute the loss and gradients for a two layer fully connected neural
        network.

        Inputs:
        - X: Input data of shape (N, D). Each X[i] is a training sample.
        - y: Vector of training labels. y[i] is the label for X[i], and each y[i] is
          an integer in the range 0 <= y[i] < C. This parameter is optional; if it
          is not passed then we only return scores, and if it is passed then we
          instead return the loss and gradients.
        - reg: Regularization strength.

        Returns:
        If y is None, return a matrix scores of shape (N, C) where scores[i, c] is
        the score for class c on input X[i].

        If y is not None, instead return a tuple of:
        - loss: Loss (data loss and regularization loss) for this batch of training
          samples.
        - grads: Dictionary mapping parameter names to gradients of those parameters
          with respect to the loss function; has the same keys as self.params.
        """
        # Unpack variables from the params dictionary
        W1, b1 = self.params['W1'], self.params['b1']
        W2, b2 = self.params['W2'], self.params['b2']
        N, D = X.shape

        # Compute the forward pass
        scores = None
        #############################################################################
        # TODO: Perform the forward pass, computing the class scores for the input. #
        # Store the result in the scores variable, which should be an array of      #
        # shape (N, C).                                                             #
        #############################################################################
        # *****START OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
        
        # Dot product of the input with W1; this becomes the input to the next layer
        z2 = X.dot(W1) + b1
        # Activation function (ReLU), giving the hidden layer's output
        a2 = np.maximum(z2, 0)
        # The hidden layer's output feeds into the output layer
        scores = a2.dot(W2) + b2
        pass

        # *****END OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****

        # If the targets are not given then jump out, we're done
        if y is None:
            return scores

        # Compute the loss
        loss = None
        #############################################################################
        # TODO: Finish the forward pass, and compute the loss. This should include  #
        # both the data loss and L2 regularization for W1 and W2. Store the result  #
        # in the variable loss, which should be a scalar. Use the Softmax           #
        # classifier loss.                                                          #
        #############################################################################
        # *****START OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****

        # Compute the loss from the definition of the softmax loss function
        # Subtract the per-row maximum first to avoid numerical overflow
        scores -= np.max(scores, axis=1, keepdims=True)
        # Exponentiate all the scores
        exp_scores = np.exp(scores)
        # Probability matrix
        prob = exp_scores / np.sum(exp_scores, axis=1, keepdims=True)
        # Pick out the probability of the correct class for each sample
        correct_items = prob[range(N), y]
        # Data loss according to the softmax loss function
        data_loss = -np.sum(np.log(correct_items)) / N
        reg_loss = 0.5 * reg * (np.sum(W1 * W1) + np.sum(W2 * W2))
        loss = data_loss + reg_loss
        pass

        # *****END OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****

        # Backward pass: compute gradients
        grads = {}
        #############################################################################
        # TODO: Compute the backward pass, computing the derivatives of the weights #
        # and biases. Store the results in the grads dictionary. For example,       #
        # grads['W1'] should store the gradient on W1, and be a matrix of same size #
        #############################################################################
        # *****START OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****

        # First compute the derivative of the softmax loss with respect to the output-layer scores;
        # this is the starting point of backpropagation and is the same as in the softmax/SVM exercise.
        # The softmax loss is L = -s[yi] + ln(∑_j e^s[j]), so dL/ds[yi] = -1 + e^s[yi]/∑_j e^s[j],
        # i.e. -1 + prob for the correct class (and just prob for every other class).
        # Since the output layer has no activation (a == z), dL/da == dL/dz here.
        output = np.zeros_like(scores)
        output[range(N), y] = -1
        output += prob
        # Gradient of W2: upstream gradient multiplied by the local input (formula BP4)
        grads['W2'] = (a2.T).dot(output)
        grads['W2'] = grads['W2'] / N + reg * W2
        # Gradient of b2, same idea: the ones vector sums over the batch dimension
        grads['b2'] = np.ones(N).dot(output) / N
        # Backpropagate the output-layer error to the hidden layer
        hidden = output.dot(W2.T)
        # Because of ReLU, only positions where z2 > 0 pass gradient through;
        # this builds the derivative mask of the ReLU function
        mask = np.zeros_like(z2)
        mask[z2 > 0] = 1
        hidden = hidden * mask  # (N, H); this is formula BP2 from the "How the backpropagation algorithm works" chapter
        # Backpropagate from the hidden layer to W1
        grads['W1'] = (X.T).dot(hidden)  # formula BP4 again
        grads['W1'] = grads['W1'] / N + reg * W1
        # Gradient of b1, same as b2 above
        grads['b1'] = np.ones(N).dot(hidden) / N
        pass

        # *****END OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****

        return loss, grads

    def train(self, X, y, X_val, y_val,
              learning_rate=1e-3, learning_rate_decay=0.95,
              reg=5e-6, num_iters=100,
              batch_size=200, verbose=False):
        """
        Train this neural network using stochastic gradient descent.

        Inputs:
        - X: A numpy array of shape (N, D) giving training data.
        - y: A numpy array of shape (N,) giving training labels; y[i] = c means that
          X[i] has label c, where 0 <= c < C.
        - X_val: A numpy array of shape (N_val, D) giving validation data.
        - y_val: A numpy array of shape (N_val,) giving validation labels.
        - learning_rate: Scalar giving learning rate for optimization.
        - learning_rate_decay: Scalar giving factor used to decay the learning rate
          after each epoch.
        - reg: Scalar giving regularization strength.
        - num_iters: Number of steps to take when optimizing.
        - batch_size: Number of training examples to use per step.
        - verbose: boolean; if true print progress during optimization.
        """
        num_train = X.shape[0]
        iterations_per_epoch = max(num_train / batch_size, 1)

        # Use SGD to optimize the parameters in self.model
        loss_history = []
        train_acc_history = []
        val_acc_history = []

        for it in range(num_iters):
            X_batch = None
            y_batch = None

            #########################################################################
            # TODO: Create a random minibatch of training data and labels, storing  #
            # them in X_batch and y_batch respectively.                             #
            #########################################################################
            # *****START OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****

            # With replace=False this raised "Cannot take a larger sample than population when 'replace=False'"
            # whenever batch_size > num_train, so sampling is done with replacement
            random_index = np.random.choice(num_train, batch_size)
            X_batch = X[random_index, :]
            y_batch = y[random_index]
            pass

            # *****END OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****

            # Compute loss and gradients using the current minibatch
            loss, grads = self.loss(X_batch, y=y_batch, reg=reg)
            loss_history.append(loss)

            #########################################################################
            # TODO: Use the gradients in the grads dictionary to update the         #
            # parameters of the network (stored in the dictionary self.params)      #
            # using stochastic gradient descent. You'll need to use the gradients   #
            # stored in the grads dictionary defined above.                         #
            #########################################################################
            # *****START OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****

            self.params['W1'] -= grads['W1'] * learning_rate
            self.params['W2'] -= grads['W2'] * learning_rate
            self.params['b1'] -= grads['b1'] * learning_rate
            self.params['b2'] -= grads['b2'] * learning_rate
            pass

            # *****END OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****

            if verbose and it % 100 == 0:
                print('iteration %d / %d: loss %f' % (it, num_iters, loss))

            # Every epoch, check train and val accuracy and decay learning rate.
            if it % iterations_per_epoch == 0:
                # Check accuracy
                train_acc = (self.predict(X_batch) == y_batch).mean()
                val_acc = (self.predict(X_val) == y_val).mean()
                train_acc_history.append(train_acc)
                val_acc_history.append(val_acc)

                # Decay learning rate
                learning_rate *= learning_rate_decay

        return {
          'loss_history': loss_history,
          'train_acc_history': train_acc_history,
          'val_acc_history': val_acc_history,
        }

    def predict(self, X):
        """
        Use the trained weights of this two-layer network to predict labels for
        data points. For each data point we predict scores for each of the C
        classes, and assign each data point to the class with the highest score.

        Inputs:
        - X: A numpy array of shape (N, D) giving N D-dimensional data points to
          classify.

        Returns:
        - y_pred: A numpy array of shape (N,) giving predicted labels for each of
          the elements of X. For all i, y_pred[i] = c means that X[i] is predicted
          to have class c, where 0 <= c < C.
        """
        y_pred = None

        ###########################################################################
        # TODO: Implement this function; it should be VERY simple!                #
        ###########################################################################
        # *****START OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****

        # Forward pass to compute the scores
        z2 = X.dot(self.params['W1']) + self.params['b1']
        a2 = np.maximum(z2, 0)
        scores = a2.dot(self.params['W2']) + self.params['b2']
        # The index of the largest score in each row is the predicted class
        y_pred = np.argmax(scores, axis=1)
        pass

        # *****END OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****

        return y_pred
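For reference, here is a minimal hedged usage sketch of the class above; the sizes and data are made up and only roughly mirror the notebook's toy setup:

import numpy as np

# Toy problem: 5 samples, 4 input dimensions, 10 hidden units, 3 classes
np.random.seed(0)
net = TwoLayerNet(input_size=4, hidden_size=10, output_size=3, std=1e-1)
X = 10 * np.random.randn(5, 4)
y = np.array([0, 1, 2, 2, 1])

scores = net.loss(X)                      # forward pass only, shape (5, 3)
loss, grads = net.loss(X, y, reg=0.05)    # loss plus gradients for every parameter
print(scores.shape, loss, sorted(grads.keys()))

stats = net.train(X, y, X, y,             # reusing X, y as "validation" data for the toy case
                  learning_rate=1e-1, reg=5e-6,
                  num_iters=100, verbose=False)
print('final training loss:', stats['loss_history'][-1])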

 

 

2. two_layer_net.ipynb

best_net = None # store the best model into this 

#################################################################################
# TODO: Tune hyperparameters using the validation set. Store your best trained  #
# model in best_net.                                                            #
#                                                                               #
# To help debug your network, it may help to use visualizations similar to the  #
# ones we used above; these visualizations will have significant qualitative    #
# differences from the ones we saw above for the poorly tuned network.          #
#                                                                               #
# Tweaking hyperparameters by hand can be fun, but you might find it useful to  #
# write code to sweep through possible combinations of hyperparameters          #
# automatically like we did on the previous exercises.                          #
#################################################################################
# *****START OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
best_acc = 0
learning_rates = [1e-4, 5e-4, 1e-3]
regularization_strengths = [0.2, 0.25, 0.3, 0.35]
for lr in learning_rates:
    for reg in regularization_strengths:
        # Train a fresh network for every combination; reusing one `net` object means
        # best_net is only a reference to a model that keeps being trained afterwards.
        # (input_size, hidden_size and num_classes come from the earlier notebook cells.)
        net = TwoLayerNet(input_size, hidden_size, num_classes)
        stats = net.train(X_train, y_train, X_val, y_val,
                          num_iters=1500, batch_size=200,
                          learning_rate=lr, learning_rate_decay=0.95,
                          reg=reg, verbose=True)
        val_acc = (net.predict(X_val) == y_val).mean()
        if val_acc > best_acc:
            best_acc = val_acc
            best_net = net
            print('lr = ', lr, ' reg = ', reg, ' acc = ', best_acc)
pass

# *****END OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****

 

When I ran this last night the best validation accuracy reached 0.527, but the final parameters blew up, apparently because too large a learning rate produced NaNs, and for some reason the 0.527 best_net was not saved either. Running it again today, the best accuracy was only 0.52. In hindsight the missing best_net was because the original sweep reused a single net object, so best_net was merely a reference to a model that kept being trained (and eventually diverged); creating a fresh TwoLayerNet for every combination, as above, avoids that.

Then the final result on the test set:
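For completeness, the final number is just the prediction accuracy on the held-out test split, assuming X_test and y_test from the notebook's data-loading cell:

test_acc = (best_net.predict(X_test) == y_test).mean()
print('Test accuracy: ', test_acc)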

 
