PyTorch Learning Basics: LSTM from Training to Testing

The previous post, PyTorch Learning Basics: LeNet from Training to Testing, walked through recognizing the MNIST dataset with the simple LeNet network. For comparison, this post uses an LSTM to classify the same MNIST dataset.

Implementation steps:

  • Import the required packages and set the hyperparameters:
import torch
import torchvision
from torch import nn
import torchvision.datasets as dsets
import torchvision.transforms as transforms
import matplotlib.pyplot as plt

#define hyperparameters
EPOCH = 1
BATCH_SIZE = 64
TIME_STEP = 28     #time_step = image height (28 rows per image)
INPUT_SIZE = 28    #input_size = image width (28 pixels per row)
LR = 0.01
DOWNLOAD = True    #set to False once the MNIST dataset has been downloaded
  • Download and load the MNIST dataset (if the dataset has already been downloaded, simply set DOWNLOAD=False):
#get the mnist dataset
train_data = dsets.MNIST(root='./', train=True, transform=torchvision.transforms.ToTensor(), download=DOWNLOAD)
test_data = dsets.MNIST(root='./', train=False, transform=torchvision.transforms.ToTensor())
test_x = test_data.data.type(torch.FloatTensor)[:2000]/255   #shape (2000, 28, 28), scaled to [0, 1]
test_y = test_data.targets.numpy()[:2000]
#use a DataLoader to batch the training dataset
train_loader = torch.utils.data.DataLoader(dataset=train_data, batch_size=BATCH_SIZE, shuffle=True)

Note the difference from data loading for a CNN: the LSTM treats each image as a sequence of rows, and the recurrent network is trained and tested on those row sequences, as illustrated in the sketch below.
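
As a quick illustration of this reshaping (a minimal sketch using the train_loader defined above; sample_batch is just a throwaway name):

#a CNN such as LeNet consumes a batch as (batch, channel, height, width),
#while the LSTM expects (batch, time_step, input_size), i.e. 28 rows of 28 pixels each
sample_batch, _ = next(iter(train_loader))
print(sample_batch.shape)                   #torch.Size([64, 1, 28, 28])
print(sample_batch.view(-1, 28, 28).shape)  #torch.Size([64, 28, 28])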

  • Define and instantiate the LSTM network:
#define the RNN class
class RNN(nn.Module):
    #override the __init__() method
    def __init__(self):
        super(RNN, self).__init__()
        
        self.rnn = nn.LSTM(
            input_size=28,      #each row of the image provides 28 pixel values
            hidden_size=64,     #number of hidden units
            num_layers=1,
            batch_first=True,   #input/output tensors have shape (batch, time_step, input_size)
        )
        self.out = nn.Linear(64, 10)   #map the last hidden state to the 10 digit classes
        
    #override the forward() method
    def forward(self, x):
        #x shape: (batch, time_step, input_size)
        #r_out shape: (batch, time_step, hidden_size)
        r_out, (h_n, h_c) = self.rnn(x, None)   #None means a zero initial hidden/cell state
        out = self.out(r_out[:, -1, :])         #classify using the output of the last time step
        return out
rnn = RNN()
print(rnn)
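
Before training, a quick shape check can be run on a random dummy batch (a minimal sketch, not part of the original tutorial; the batch size of 4 is arbitrary):

#feed a random dummy batch through the network to confirm the output shape
dummy = torch.randn(4, TIME_STEP, INPUT_SIZE)   #(batch, time_step, input_size)
print(rnn(dummy).shape)                         #expected: torch.Size([4, 10])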
  • Define the optimizer and the loss function:
#define optimizer with Adam optim
optimizer = torch.optim.Adam(rnn.parameters(), lr=LR)
#define cross entropy loss function
loss_func = nn.CrossEntropyLoss()
  • Train the model:
#training and testing
for epoch in range(EPOCH):
    for step, (b_x, b_y) in enumerate(train_loader):
        #reshape x to (batch, time_step, input_size)
        b_x = b_x.view(-1, 28, 28)
        
        output = rnn(b_x)
        loss = loss_func(output, b_y)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        
        if step % 50 == 0:
            #evaluate the current model on the test set
            test_output = rnn(test_x)
            #take the class with the highest score as the prediction
            pred_y = torch.max(test_output, 1)[1].data.numpy()
            #compute accuracy over the 2000 test samples
            acc = float((pred_y == test_y).astype(int).sum()) / float(test_y.size)
            print('Epoch: ', epoch, 'train loss: %.3f' % loss.data.numpy(), 'test acc: %.3f' % acc)
  • Test the model:
# print 100 predictions from test data
numTest = 100
test_output = rnn(test_x[:numTest].view(-1, 28, 28))
pred_y = torch.max(test_output, 1)[1].data.numpy()
print(pred_y, 'prediction number')
print(test_y[:numTest], 'real number')
ErrorCount = 0.0
for i in range(numTest):
    if pred_y[i] != test_y[i]:
        ErrorCount += 1
print('ErrorRate : %.3f' % (ErrorCount / numTest))
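
Since matplotlib is already imported, a single test sample can also be inspected visually (a minimal sketch; the index 0 is arbitrary):

#show one test image together with its predicted and true label
idx = 0
plt.imshow(test_x[idx].numpy(), cmap='gray')
plt.title('prediction: %d, label: %d' % (pred_y[idx], test_y[idx]))
plt.show()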

Experimental results:

As the results show, an LSTM network is not only useful for sequence data such as speech; it can also perform image classification, where the "image" is abstracted into a sequence of rows. The test on the MNIST dataset shows that the LSTM learns to recognize handwritten digits within a fairly short training time.
