PyTorch Deep Learning (Part 2)

Recap of the Previous Lecture

In the previous lecture we built up the basic workflow of PyTorch from scratch.

  1. First, we hand-wrote a linear-regression program in NumPy that minimizes the MSE loss (Example 1);
  2. Then, approaching the problem from the angle of gradient descent, we rewrote Example 1 into a gradient-descent-based linear-regression program (Example 2);
  3. By introducing the torch library, we replaced the hand-written gradient computation with autograd (via .backward()), giving Example 3;
  4. We further rewrote Example 3 by defining a model class and using MSELoss with the SGD optimizer, standardizing the torch-based neural-network program into a four-part framework (a minimal sketch of this framework follows this list):
    4.1 Prepare the dataset;
    4.2 Design the model class;
    4.3 Choose the loss function and optimizer;
    4.4 Train the model (forward, backward, update).
    This gave Example 4.
  5. On top of Example 4, we handled a binary-classification problem, using the cross-entropy loss $loss = -\big(y\log\hat{y} + (1-y)\log(1-\hat{y})\big)$ as the criterion. In the Python program this is torch.nn.BCELoss, giving Example 5.
  6. In Example 5 we then stacked several layers in series to form Example 6; this only requires chaining the layers when building the model class.
  7. Example 7 addresses the case where the dataset is too large to hold in memory by introducing the concept of a batch: a Dataset class is defined in the data-preparation step and wrapped in a DataLoader. See the example program for details.
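For reference, here is a minimal sketch of that four-part framework. It is a reconstruction using a toy linear-regression model, not one of the original example programs; the data and hyperparameters are assumed for illustration only.

import torch

# 1. Prepare the dataset (toy data: y = 2x)
x_data = torch.tensor([[1.0], [2.0], [3.0]])
y_data = torch.tensor([[2.0], [4.0], [6.0]])

# 2. Design the model class
class LinearModel(torch.nn.Module):
    def __init__(self):
        super(LinearModel, self).__init__()
        self.linear = torch.nn.Linear(1, 1)

    def forward(self, x):
        return self.linear(x)

model = LinearModel()

# 3. Choose the loss function and optimizer
criterion = torch.nn.MSELoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

# 4. Train the model (forward, backward, update)
for epoch in range(100):
    y_pred = model(x_data)            # forward
    loss = criterion(y_pred, y_data)
    optimizer.zero_grad()
    loss.backward()                   # backward
    optimizer.step()                  # update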

Now we continue:

The following example trains and tests a handwritten-digit classifier on MNIST. Compared with the previous example programs, it introduces separate training and test sets (downloaded from the internet for the first time). No gradients are needed on the test set, so evaluation is wrapped in with torch.no_grad():. The activation function is changed to ReLU, and the loss is computed with CrossEntropyLoss, which applies softmax internally.
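As a quick aside, here is a minimal sketch (not part of the original program) showing that CrossEntropyLoss is equivalent to LogSoftmax followed by NLLLoss, which is why the model below returns raw logits from its last linear layer:

import torch

logits = torch.randn(3, 10)          # raw scores for 3 samples, 10 classes
target = torch.tensor([0, 4, 9])     # ground-truth class indices

ce = torch.nn.CrossEntropyLoss()(logits, target)
nll = torch.nn.NLLLoss()(torch.log_softmax(logits, dim=1), target)
print(ce.item(), nll.item())         # the two values match (up to floating-point error)

Now, the complete program: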

import torch
from torchvision import  transforms
from torchvision import datasets
from torch.utils.data import DataLoader
import torch.nn.functional as F 
import torch.optim as optim

batch_size = 64
transform = transforms.Compose([transforms.ToTensor(),
            transforms.Normalize((0.1307,), (0.3081,))])

train_dataset = datasets.MNIST(root = '../dataset/mnist/',
                train=True,download=True,transform=transform)
train_loader = DataLoader(train_dataset,
                shuffle=True,batch_size=batch_size)
test_dataset = datasets.MNIST(root ='../dataset/mnist/',
                train=False,download=True,transform=transform)
test_loader = DataLoader(test_dataset,
                shuffle=False,batch_size=batch_size)

class Model(torch.nn.Module):  # inherits from nn.Module
    def __init__(self):     # constructor
        super(Model, self).__init__()  # call the parent class constructor
        self.linear1 = torch.nn.Linear(784, 512)  # nn.Linear is itself a Module
        # with learnable weight and bias parameters
        self.linear2 = torch.nn.Linear(512, 256)
        self.linear3 = torch.nn.Linear(256, 128)
        self.linear4 = torch.nn.Linear(128, 64)
        self.linear5 = torch.nn.Linear(64, 10)
        # self.sigmoid = torch.nn.Sigmoid()


    def forward(self, x):   # must be named forward: it overrides forward() of the parent class
        x = x.view(-1, 784)  # flatten each 28x28 image into a 784-dimensional vector
        x = F.relu(self.linear1(x))
        x = F.relu(self.linear2(x))
        x = F.relu(self.linear3(x))
        x = F.relu(self.linear4(x))
        # y_pred = torch.sigmoid(self.linear(x))
        return self.linear5(x)

model = Model()
epoch_list = []
loss_list = []
criterion = torch.nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.5)
            # model.parameters() collects all of the model's learnable weights; lr is the learning rate

def train(epoch):
    running_loss = 0.0
    for batch_idx,data in enumerate(train_loader,0):
        inputs, target = data
        optimizer.zero_grad()

        outputs = model(inputs)
        loss = criterion(outputs,target)
        loss.backward()
        optimizer.step()

        running_loss += loss.item()
        if batch_idx % 300 == 299:
            print('[%d, %5d] loss: %.3f' %(epoch+1,batch_idx+1,running_loss/300))
            running_loss = 0.0
def test():
    correct = 0
    total = 0
    with torch.no_grad():
        for data in test_loader:
            images, labels = data
            outputs = model(images)
            _,predicted = torch.max(outputs.data, dim=1)
            total += labels.size(0)
            correct += (predicted ==labels).sum().item()
    print('Accuracy on test set: %d %%' %(100*correct/total))
if __name__=='__main__':
    for epoch in range(10):
        train(epoch)
        test()

Training results:

[1,   300] loss: 2.214
[1,   600] loss: 0.947
[1,   900] loss: 0.419
Accuracy on test set: 88 %
[2,   300] loss: 0.313
[2,   600] loss: 0.271
[2,   900] loss: 0.232
Accuracy on test set: 94 %
[3,   300] loss: 0.188
[3,   600] loss: 0.170
[3,   900] loss: 0.163
Accuracy on test set: 95 %
[4,   300] loss: 0.131
[4,   600] loss: 0.127
[4,   900] loss: 0.118
Accuracy on test set: 96 %
[5,   300] loss: 0.099
[5,   600] loss: 0.092
[5,   900] loss: 0.099
Accuracy on test set: 96 %
[6,   300] loss: 0.084
[6,   600] loss: 0.078
[6,   900] loss: 0.071
Accuracy on test set: 97 %
[7,   300] loss: 0.060
[7,   600] loss: 0.063
[7,   900] loss: 0.064
Accuracy on test set: 97 %
[8,   300] loss: 0.048
[8,   600] loss: 0.052
[8,   900] loss: 0.050
Accuracy on test set: 97 %
[9,   300] loss: 0.044
[9,   600] loss: 0.041
[9,   900] loss: 0.039
Accuracy on test set: 97 %
[10,   300] loss: 0.031
[10,   600] loss: 0.034
[10,   900] loss: 0.038
Accuracy on test set: 97 %

The next example uses a convolutional neural network with pooling layers and moves training onto the GPU, so the CPU no longer has to run at 100% load.
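Before the full program, here is a minimal sketch (not part of the original code) tracing how the tensor shape evolves through the two conv/pool stages; it explains why the final fully connected layer takes 320 input features (20 channels x 4 x 4):

import torch
import torch.nn.functional as F

x = torch.randn(1, 1, 28, 28)                 # one MNIST image: (N, C, H, W)
conv1 = torch.nn.Conv2d(1, 10, kernel_size=5)
conv2 = torch.nn.Conv2d(10, 20, kernel_size=5)
pool = torch.nn.MaxPool2d(2)

x = F.relu(pool(conv1(x)))   # 28 -> 24 after conv, 24 -> 12 after pooling: (1, 10, 12, 12)
x = F.relu(pool(conv2(x)))   # 12 -> 8 after conv, 8 -> 4 after pooling:   (1, 20, 4, 4)
print(x.view(1, -1).shape)   # torch.Size([1, 320]) -> matches Linear(320, 10)

The complete program: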

import torch
from torchvision import  transforms
from torchvision import datasets
from torch.utils.data import DataLoader
import torch.nn.functional as F 
import torch.optim as optim

batch_size = 64
transform = transforms.Compose([transforms.ToTensor(),
            transforms.Normalize((0.1307,), (0.3081,))])

train_dataset = datasets.MNIST(root = '../dataset/mnist/',
                train=True,download=True,transform=transform)
train_loader = DataLoader(train_dataset,
                shuffle=True,batch_size=batch_size)
test_dataset = datasets.MNIST(root ='../dataset/mnist/',
                train=False,download=True,transform=transform)
test_loader = DataLoader(test_dataset,
                shuffle=False,batch_size=batch_size)

class Model(torch.nn.Module):  # inherits from nn.Module
    def __init__(self):     # constructor
        super(Model, self).__init__()  # call the parent class constructor
        self.conv1 = torch.nn.Conv2d(1, 10, kernel_size=5)
        self.conv2 = torch.nn.Conv2d(10, 20, kernel_size=5)
        self.pooling = torch.nn.MaxPool2d(2)
        self.fc = torch.nn.Linear(320, 10)
        # self.sigmoid = torch.nn.Sigmoid()


    def forward(self, x):   # must be named forward: it overrides forward() of the parent class
        batch_size = x.size(0)
        x = F.relu(self.pooling(self.conv1(x)))
        x = F.relu(self.pooling(self.conv2(x)))
        x = x.view(batch_size,-1)
        x = self.fc(x)
        # y_pred = torch.sigmoid(self.linear(x))
        return x

model = Model()
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
model.to(device)
epoch_list = []
loss_list = []
criterion = torch.nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.5)
            # model.parameters() collects all of the model's learnable weights; lr is the learning rate

def train(epoch):
    running_loss = 0.0
    for batch_idx,data in enumerate(train_loader,0):
        inputs, target = data
        inputs, target = inputs.to(device),target.to(device)
        optimizer.zero_grad()

        outputs = model(inputs)
        loss = criterion(outputs,target)
        loss.backward()
        optimizer.step()

        running_loss += loss.item()
        if batch_idx % 300 == 299:
            print('[%d, %5d] loss: %.3f' %(epoch+1,batch_idx+1,running_loss/300))
            running_loss = 0.0
def test():
    correct = 0
    total = 0
    with torch.no_grad():
        for data in test_loader:
            inputs, target = data
            inputs,target = inputs.to(device),target.to(device)
            outputs = model(inputs)
            _,predicted = torch.max(outputs.data, dim=1)
            total += target.size(0)
            correct += (predicted ==target).sum().item()
    print('Accuracy on test set: %d %%' %(100*correct/total))
if __name__=='__main__':
    for epoch in range(10):
        train(epoch)
        test()
Training results:

[1,   300] loss: 0.649
[1,   600] loss: 0.203
[1,   900] loss: 0.145
Accuracy on test set: 96 %
[2,   300] loss: 0.109
[2,   600] loss: 0.101
[2,   900] loss: 0.094
Accuracy on test set: 97 %
[3,   300] loss: 0.078
[3,   600] loss: 0.076
[3,   900] loss: 0.076
Accuracy on test set: 98 %
[4,   300] loss: 0.067
[4,   600] loss: 0.064
[4,   900] loss: 0.062
Accuracy on test set: 98 %
[5,   300] loss: 0.051
[5,   600] loss: 0.065
[5,   900] loss: 0.051
Accuracy on test set: 98 %
[6,   300] loss: 0.050
[6,   600] loss: 0.050
[6,   900] loss: 0.048
Accuracy on test set: 98 %
[7,   300] loss: 0.047
[7,   600] loss: 0.045
[7,   900] loss: 0.045
Accuracy on test set: 98 %
[8,   300] loss: 0.040
[8,   600] loss: 0.043
[8,   900] loss: 0.041
Accuracy on test set: 98 %
[9,   300] loss: 0.040
[9,   600] loss: 0.038
[9,   900] loss: 0.038
Accuracy on test set: 98 %
[10,   300] loss: 0.035
[10,   600] loss: 0.035
[10,   900] loss: 0.039
Accuracy on test set: 98 %

As we can see, the accuracy has improved.
Later, the lecture raised the problem of vanishing gradients, which appears as the number of network layers grows: with activations such as sigmoid, each layer contributes a gradient factor between 0 and 1, so after being multiplied through many layers during backpropagation the gradient approaches 0 and the weights of the early layers effectively stop updating. The solution introduced is the Residual Block.
(Figure: structure of a Residual Block)
With the residual connection, the block outputs F(x) + x instead of F(x), so the gradient factor now varies around 1 instead of around 0, which restores the ability to train the earlier layers. I will not include the full code here (I am not in the computer-vision field and have not tried it myself; judging from the results the instructor showed, the accuracy rises again, reaching 99%). Note that more layers and more training epochs are not always better; the right amount has to be found in practice by watching the training curves.
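For reference only, a minimal sketch of a residual block (this is not the code from the lecture; it assumes two 3x3 convolutions that keep the channel count unchanged so the input can be added directly to the output):

import torch
import torch.nn.functional as F

class ResidualBlock(torch.nn.Module):
    def __init__(self, channels):
        super(ResidualBlock, self).__init__()
        # padding=1 keeps the spatial size, so x and the block output can be added
        self.conv1 = torch.nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.conv2 = torch.nn.Conv2d(channels, channels, kernel_size=3, padding=1)

    def forward(self, x):
        y = F.relu(self.conv1(x))
        y = self.conv2(y)
        return F.relu(x + y)   # the skip connection: output = F(x) + x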
