[Image Recognition] A PyTorch starter demo: CIFAR-10 classification and visualization

Contents

Environment setup

1. Dataset

2. Model training

3. Training results

4. The effect of batch_size

5. References


 

        PyTorch uses dynamic computation graphs, which match ordinary programming logic. It has absorbed Caffe2, is easy to pick up, flexible and convenient, and makes GPU acceleration and automatic differentiation straightforward, so it is more popular in academia. TensorFlow uses static computation graphs: the graph must be defined up front and then executed, which makes it inconvenient to inspect intermediate variables during a run; on the other hand, its ecosystem is mature and deployment is easy, so it is better suited to industry. PyTorch's natural language processing package is AllenNLP; its computer vision package is Torchvision.
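As a small illustration of the dynamic-graph style (a minimal sketch, not from the original post): intermediate values are ordinary tensors that can be printed at any point, and gradients are computed on the fly.

import torch

x = torch.ones(2, 2, requires_grad=True)
y = x * 3          # an intermediate value, inspectable immediately
print(y)           # no session or pre-built graph is needed
z = y.mean()
z.backward()       # automatic differentiation
print(x.grad)      # dz/dx, each entry 3/4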

Environment setup

             Win10 + GTX 1660 Ti + Anaconda3 + Spyder + PyTorch 1.0

              Setting up PyTorch is simple and friendly: go to the official site, https://pytorch.org/, select your environment, and run the generated command.
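For a Win10 + CUDA 10 setup like the one above, the generated conda command at the time looked roughly like the following (an assumption; always copy the exact command from the site):

conda install pytorch torchvision cudatoolkit=10.0 -c pytorch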

     To set up OpenCV for Spyder, enter the following in the Anaconda Prompt:

conda install -c https://conda.binstar.org/menpo opencv

1. Dataset

     CIFAR-10 and CIFAR-100 are labeled subsets of the Tiny Images dataset (details: http://groups.csail.mit.edu/vision/TinyImages/).

     The CIFAR-10 dataset contains 60,000 color images of size 32*32*3, divided into 10 classes (see the figure below) with 6,000 images per class.

     Training set: 50,000 images, shipped as five training batches of 10,000 images each.

     Test set: 10,000 images in a single batch, containing exactly 1,000 randomly selected images from each class (10 classes * 1,000 = 10,000).

(Figure: the 10 CIFAR-10 classes)
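For reference, the official python-version archive stores each batch as a pickled dict of raw pixel rows; a minimal sketch (assuming the archive has been extracted to ./cifar-10-batches-py) for inspecting one training batch:

import pickle

# each batch file is a dict with b'data' (10000 x 3072 uint8) and b'labels'
with open('cifar-10-batches-py/data_batch_1', 'rb') as f:
    batch = pickle.load(f, encoding='bytes')

data = batch[b'data']                  # each row is 1024 R + 1024 G + 1024 B values
labels = batch[b'labels']              # 10000 ints in [0, 9]
images = data.reshape(-1, 3, 32, 32)   # back to (N, C, H, W)
print(images.shape, len(labels))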

PyTorch also ships with many built-in datasets under torchvision.datasets:

class torchvision.datasets.MNIST(root, train=True, transform=None, target_transform=None, download=False)
class torchvision.datasets.FashionMNIST(root, train=True, transform=None, target_transform=None, download=False)
class torchvision.datasets.EMNIST(root, split, **kwargs)
class torchvision.datasets.CocoCaptions(root, annFile, transform=None, target_transform=None)
class torchvision.datasets.CocoDetection(root, annFile, transform=None, target_transform=None)
class torchvision.datasets.LSUN(root, classes='train', transform=None, target_transform=None)
class torchvision.datasets.ImageFolder(root, transform=None, target_transform=None, loader=default_loader)
class torchvision.datasets.DatasetFolder(root, loader, extensions, transform=None, target_transform=None)
class torchvision.datasets.CIFAR10(root, train=True, transform=None, target_transform=None, download=False)
class torchvision.datasets.CIFAR100(root, train=True, transform=None, target_transform=None, download=False)
class torchvision.datasets.STL10(root, split='train', transform=None, target_transform=None, download=False)
class torchvision.datasets.SVHN(root, split='train', transform=None, target_transform=None, download=False)
class torchvision.datasets.PhotoTour(root, name, train=True, transform=None, download=False)
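As a usage note, ImageFolder is the one to reach for with custom data: it maps a root/class_name/image.png directory layout to a labeled dataset. A hedged sketch (the path ./my_images is hypothetical):

from torchvision import datasets, transforms

dataset = datasets.ImageFolder(root='./my_images',
                               transform=transforms.ToTensor())
print(dataset.classes)  # sub-directory names become the class labels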

2. Model training

     2.1 Model selection:

             You can define your own Net, or use one of the models provided by PyTorch's torchvision.models.

import torchvision.models as models
resnet18 = models.resnet18(pretrained=True)
alexnet = models.alexnet(pretrained=True)
squeezenet = models.squeezenet1_0(pretrained=True)
vgg16 = models.vgg16(pretrained=True)
densenet = models.densenet161(pretrained=True)
inception = models.inception_v3(pretrained=True)
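These pretrained models output 1,000 ImageNet classes, so reusing one for CIFAR-10 typically means swapping the final layer. A minimal sketch, not part of the original post:

import torch.nn as nn
import torchvision.models as models

resnet18 = models.resnet18(pretrained=True)
resnet18.fc = nn.Linear(resnet18.fc.in_features, 10)  # re-head for the 10 CIFAR-10 classes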

      In addition, PyTorch has just released the hub feature; see https://pytorch.org/hub

model = torch.hub.load('pytorch/vision', 'resnet18', pretrained=True)  # (repo, model name, kwargs)

   2.2 Model visualization

   The code below was collected from the web. PS: the model can be visualized with the Netron tool by opening cifar10.pkl directly.

    Tool link: https://github.com/lutzroeder/Netron (the rendered model graph appeared here in the original post).
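Netron renders ONNX files particularly well; a minimal sketch (assuming net is the trained model from section 2.4) that exports the network for viewing:

import torch

dummy_input = torch.randn(1, 3, 32, 32)        # one CIFAR-10-sized input
torch.onnx.export(net, dummy_input, 'cifar10.onnx')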

 

   2.3 Training procedure:

                 1. Build the model
                 2. Iterate over the training data
                 3. Compute the forward loss
                 4. Backpropagate the error and update the network parameters

   2.4 Parameter settings:

                  See the code below.

import torch                   # core torch package
import torch.nn as nn
import torch.nn.functional as F
import torchvision             # torch-based computer vision package
import torchvision.transforms as transforms
import torch.optim as optim

import cv2 as cv
import numpy as np
import time
import matplotlib.pyplot as plt

from visdom import Visdom

viz = Visdom(env='loss')
x1,y1=0,0
win = viz.line(
    X=np.array([x1]),
    Y=np.array([y1]),
    opts=dict(title='loss'))

# hyperparameter settings
batch_size = 50

start = time.time()
#1. Preprocess the data
transform = transforms.Compose(
    [transforms.ToTensor(),       # convert PIL images to tensors
     transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])  # normalize each channel
# =============================================================================
# transforms.Compose:
#         chains several operations together; here tensor conversion and
#         normalization are combined into the single function transform
# =============================================================================

#2. Load the data
#2.1 Download the training set and preprocess it
trainset = torchvision.datasets.CIFAR10(root='./', train=True,
                                        download=True, transform=transform)
#2.2 Load the training set, shuffling the image order
trainloader = torch.utils.data.DataLoader(trainset, batch_size=batch_size,
                                          shuffle=True, num_workers=2)

#2.3 Download the test set and preprocess it
testset = torchvision.datasets.CIFAR10(root='./', train=False,
                                       download=True, transform=transform)

#2.4 Load the test set; no shuffling is needed for testing
testloader = torch.utils.data.DataLoader(testset, batch_size=batch_size,
                                         shuffle=False, num_workers=2)

#2.5 class labels, stored as an immutable tuple
classes = ('plane', 'car', 'bird', 'cat','deer', 'dog', 'frog', 'horse', 'ship', 'truck')
 
end = time.time()
print("運行時間:%.2f秒"%(end-start))

#3. Build the network architecture
class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.conv1 = nn.Conv2d(3, 64, 3, padding = 1)
        self.conv2 = nn.Conv2d(64, 64, 3, padding =1)
        self.conv3 = nn.Conv2d(64, 128, 3, padding = 1)
        self.conv4 = nn.Conv2d(128, 128, 3, padding = 1)
        self.conv5 = nn.Conv2d(128, 256, 3, padding = 1)
        self.conv6 = nn.Conv2d(256, 256, 3, padding = 1)
        self.maxpool = nn.MaxPool2d(2, 2)
        self.avgpool = nn.AvgPool2d(2, 2)
        self.globalavgpool = nn.AvgPool2d(8, 8)
        self.bn1 = nn.BatchNorm2d(64)
        self.bn2 = nn.BatchNorm2d(128)
        self.bn3 = nn.BatchNorm2d(256)
        self.dropout50 = nn.Dropout(0.5)
        self.dropout10 = nn.Dropout(0.1)
        self.fc = nn.Linear(256, 10)
 
    def forward(self, x):
        x = self.bn1(F.relu(self.conv1(x)))
        x = self.bn1(F.relu(self.conv2(x)))
        x = self.maxpool(x)
        x = self.dropout10(x)
        x = self.bn2(F.relu(self.conv3(x)))
        x = self.bn2(F.relu(self.conv4(x)))
        x = self.avgpool(x)
        x = self.dropout10(x)
        x = self.bn3(F.relu(self.conv5(x)))
        x = self.bn3(F.relu(self.conv6(x)))
        x = self.globalavgpool(x)
        x = self.dropout50(x)
        x = x.view(x.size(0), -1)
        x = self.fc(x)
        return x
    
    
if __name__ == '__main__': 
    net = Net()
    criterion = nn.CrossEntropyLoss()    # cross-entropy loss
    optimizer = optim.Adam(net.parameters(), lr=0.001)  # 0.001 is the usual Adam lr; 0.1 tends to diverge
    device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
    net.to(device)
     
    for epoch in range(1):
        running_loss = 0.

        for i, data in enumerate(trainloader): 
            inputs, labels = data
            inputs, labels = inputs.to(device), labels.to(device)
            optimizer.zero_grad()
            outputs = net(inputs)
            loss = criterion(outputs, labels)
            loss.backward()
            optimizer.step()
            print('[%d, %5d] loss: %.4f' %(epoch + 1, (i+1)*batch_size, loss.item()))
            x1 += 1
            viz.line(
                    X=np.array([x1]),
                    Y=np.array([loss.item()]),
                    win=win,  # reuse the same window so the curve appends
                    update='append')
            
    print('Finished Training') 
    torch.save(net, 'cifar10.pkl')
    # net = torch.load('cifar10.pkl')
  
    correct = 0
    total = 0
    with torch.no_grad():
        for data in testloader:
            images, labels = data
            images, labels = images.to(device), labels.to(device)
            outputs = net(images)
            _, predicted = torch.max(outputs.data, 1)
            total += labels.size(0)
            correct += (predicted == labels).sum().item()
    
    print('Accuracy of the network on the 10000 test images: %d %%' % (
        100 * correct / total))
    
    class_correct = list(0. for i in range(10))
    class_total = list(0. for i in range(10))
    
    with torch.no_grad():
        for data in testloader:
            images, labels = data
            images, labels = images.to(device), labels.to(device)
            outputs = net(images)
            _, predicted = torch.max(outputs, 1)
            c = (predicted == labels).squeeze()
            for i in range(labels.size(0)):   # count every sample in the batch, not just the first 4
                label = labels[i]
                class_correct[label] += c[i].item()
                class_total[label] += 1
 
 
    for i in range(10):
        print('Accuracy of %5s : %2d %%' % (
            classes[i], 100 * class_correct[i] / class_total[i]))
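A quick sanity check of the architecture (a separate snippet, assuming the Net class above has been run): one forward pass on a random CIFAR-10-sized batch should yield one logit per class.

net = Net()
out = net(torch.randn(2, 3, 32, 32))
print(out.shape)  # expected: torch.Size([2, 10])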

2.5 Training visualization

      Open the Anaconda Prompt and enter the command below (installing via conda install visdom failed):

pip install visdom

     Start the server:

python -m visdom.server

     Open in a browser:

http://localhost:8097/

3. Training results

         Training on a GPU really is fast! An i3 CPU takes about three and a half hours to finish, while a GTX 1660 Ti produces a result in roughly three minutes.

4. The effect of batch_size

Batch_size=100;

Test results:

Accuracy of the network on the 10000 test images: 67 %

Accuracy of plane : 65 %

Accuracy of   car : 84 %

Accuracy of  bird : 52 %

Accuracy of   cat : 46 %

Accuracy of  deer : 44 %

Accuracy of   dog : 43 %

Accuracy of  frog : 79 %

Accuracy of horse : 78 %

Accuracy of  ship : 77 %

Accuracy of truck : 75 %

Batch_size=50;

Test results:

Accuracy of the network on the 10000 test images: 66 %

Accuracy of plane : 76 %

Accuracy of   car : 82 %

Accuracy of  bird : 37 %

Accuracy of   cat : 25 %

Accuracy of  deer : 56 %

Accuracy of   dog : 57 %

Accuracy of  frog : 72 %

Accuracy of horse : 67 %

Accuracy of  ship : 76 %

Accuracy of truck : 87 %

Batch_size=10;

Test results:

Accuracy of the network on the 10000 test images: 62 %

Accuracy of plane : 59 %

Accuracy of   car : 77 %

Accuracy of  bird : 49 %

Accuracy of   cat : 37 %

Accuracy of  deer : 50 %

Accuracy of   dog : 52 %

Accuracy of  frog : 69 %

Accuracy of horse : 73 %

Accuracy of  ship : 75 %

Accuracy of truck : 77 %

Conclusions and reflections:

  1. Within limits, a larger batch_size helps the model converge quickly: a larger batch better approximates the overall structure of the training set, so the gradient direction at each iteration is more accurate and the network converges more smoothly (see the sketch after this list).
  2. However, larger is not always better: networks trained with very large batches tend to generalize worse. The training and test distributions are similar but not identical; a large batch helps the training loss converge precisely, but the model then fits the training set's structure too closely, which inevitably weakens how well it describes the test data.
  3. As batch_size decreases, the overall accuracy drops, but accuracy on some individual classes rises. A guess: this is related to how closely each batch's distribution matches the training distribution, which changes the direction of the SGD updates; a smaller batch_size also means more iterations, letting the model converge more finely.
  4. The core of training is building a sufficiently representative training set and using the model to capture its structure, while keeping the model able to generalize over non-salient features.
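To make point 1 concrete, here is a small illustrative sketch (synthetic data and a linear model, purely an assumption for demonstration) showing that gradients from larger batches align more closely with the full-dataset gradient:

import torch

torch.manual_seed(0)
X = torch.randn(10000, 20)
y = torch.randn(10000, 1)
w = torch.zeros(20, 1, requires_grad=True)

def grad_on(idx):
    # gradient of the mean squared error on the subset idx
    loss = ((X[idx] @ w - y[idx]) ** 2).mean()
    g, = torch.autograd.grad(loss, w)
    return g.flatten()

full = grad_on(torch.arange(len(X)))   # "true" gradient over all samples
for bs in (10, 100, 1000):
    sims = [torch.cosine_similarity(grad_on(torch.randint(len(X), (bs,))), full, dim=0)
            for _ in range(100)]
    print(bs, torch.stack(sims).mean().item())  # similarity grows with batch size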

5. References

1. https://blog.csdn.net/Kansas_Jason/article/details/84503367

2. https://blog.csdn.net/shareviews/article/details/83094783 (recommended)

3. https://blog.csdn.net/leviopku/article/details/81980249 (Netron visualization tool)

4. Morvan Zhou's tutorials: https://morvanzhou.github.io/

5. PyTorch Chinese site: https://ptorch.com/

6. PyTorch Chinese docs: https://ptorch.com/docs/1/

7. PyTorch Chinese forum: https://discuss.ptorch.com/

8. Deep learning model visualization tools: https://blog.csdn.net/baidu_40840693/article/details/83006347

 
