Building PeleeNet with PyTorch

1. Introduction

PeleeNet is another important member of the lightweight-network family and has seen wide use in industrial products, so let's take a look at it. Paper: https://arxiv.org/abs/1804.06882

2. PeleeNet

The paper says it draws on DenseNet, but honestly it doesn't quite feel that way to me. My understanding of DenseNet is dense connectivity: inside a DenseBlock, each layer's output is concatenated into the inputs of all the later layers (although, admittedly, I haven't read the DenseNet paper~~).

Now let's start building~~

(1) First, build the usual trio (convolution, normalization, activation) as one block. The PeleeNet paper also notes that it places the activation after batch normalization, i.e. post-activation rather than DenseNet's pre-activation, so the BN layers can be folded into the convolutions at inference time. (Well..... doesn't everyone do that anyway?)

import torch
from torch import nn
import math

class Conv_Norm_Acti(nn.Module):
    """Basic building block: Conv2d -> BatchNorm2d -> ReLU."""
    def __init__(self, in_channels, out_channels, kernel_size, stride=1, padding=0):
        super(Conv_Norm_Acti, self).__init__()
        # bias=False: the conv bias is redundant because BatchNorm follows
        self.conv = nn.Conv2d(in_channels=in_channels, out_channels=out_channels,
                              kernel_size=kernel_size, stride=stride,
                              padding=padding, bias=False)
        self.norm = nn.BatchNorm2d(num_features=out_channels)
        self.acti = nn.ReLU(inplace=True)

    def forward(self, x):
        x = self.conv(x)
        x = self.norm(x)
        x = self.acti(x)
        return x
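
As a quick sanity check (my own addition, not from the original code), a stride-2 3x3 block halves the spatial size:

block = Conv_Norm_Acti(in_channels=3, out_channels=32, kernel_size=3, stride=2, padding=1)
print(block(torch.randn(1, 3, 224, 224)).size())  # torch.Size([1, 32, 112, 112])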

(2) Next, build the first module, the Stem Block. The paper says this structure preserves more of the input information at little extra cost; in any case, it makes a fine vanguard for the network.

class Stem_Block(nn.Module):
    """
    Stem block: a cheap two-branch entry module with a total stride of 4.
    """
    def __init__(self, inp_channel=3, out_channels=32):
        super(Stem_Block, self).__init__()
        half_out_channels = int(out_channels / 2)
        # derive all channel counts from the arguments instead of hardcoding
        # them, so the block also works for out_channels != 32
        self.conv_3x3_1 = Conv_Norm_Acti(in_channels=inp_channel, out_channels=out_channels,
                                         kernel_size=3, stride=2, padding=1)
        self.conv_3x3_2 = Conv_Norm_Acti(in_channels=half_out_channels, out_channels=out_channels,
                                         kernel_size=3, stride=2, padding=1)
        self.conv_1x1_1 = Conv_Norm_Acti(in_channels=out_channels, out_channels=half_out_channels,
                                         kernel_size=1)
        self.conv_1x1_2 = Conv_Norm_Acti(in_channels=out_channels * 2, out_channels=out_channels,
                                         kernel_size=1)
        self.max_pool = nn.MaxPool2d(kernel_size=2, stride=2)

    def forward(self, x):
        x = self.conv_3x3_1(x)              # stride-2 entry conv
        x1 = self.conv_1x1_1(x)             # branch 1: 1x1 bottleneck ...
        x1 = self.conv_3x3_2(x1)            # ... followed by a strided 3x3
        x2 = self.max_pool(x)               # branch 2: plain max pooling
        x_cat = torch.cat((x1, x2), dim=1)  # concatenate both branches
        x_out = self.conv_1x1_2(x_cat)      # fuse back down to out_channels
        return x_out
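
A quick shape check of the stem (again my addition): total stride 4 and 32 output channels on a 224x224 input.

stem = Stem_Block()
print(stem(torch.randn(1, 3, 224, 224)).size())  # torch.Size([1, 32, 56, 56])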

(3) Next up is the key structure, the Two-Way Dense Layer. It borrows the branch idea from GoogLeNet to extract features with different receptive fields and capture richer semantic information. The paper specifically notes that, to reduce computation, the number of output channels of the bottleneck is adjusted dynamically according to the number of input channels. This is really just dimensionality reduction to save compute: in the early stages the feature maps are still large because downsampling has not gone far enough, so if the channel count also climbed, the computational cost would shoot right up. The so-called bottleneck doing this reduction is simply a 1x1 convolution; the adjustment rule is easiest to see in the code. The bottleneck_width hyperparameter has a noticeable impact. The paper also talks about the growth rate, which many people treat as a raw channel count. I can put up with calling a 1x1 conv a "bottleneck", but to me a "rate" should be a multiplier, so in the code below growthrate multiplies a base channel count (even though writing it this way is admittedly a bit more cumbersome...).
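
To make the dynamic adjustment concrete, here is the channel arithmetic for the very first dense layer, assuming the defaults used later in this post (growthrate=1, bottleneck_wid=1, inp_channel=32):

base_channel_num = 32
growth_channel = int(base_channel_num * 1 / 2)  # 16: channels added per branch
bottleneck_out = int(growth_channel * 1 / 4)    # 4: output channels of the 1x1 bottleneck
print(growth_channel, bottleneck_out)           # 16 4
# 4 <= 32 / 2, so no dynamic adjustment is triggered for this layer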

class Two_way_dense_layer(nn.Module):
    """
    The feature-extraction workhorse. Branch 1 (1x1 -> 3x3) keeps a small
    receptive field; branch 2 (1x1 -> 3x3 -> 3x3) stacks two 3x3 convs for a
    larger one. Both outputs are concatenated with the input (dense connection).
    Note: my original code shared the 1x1 and the first 3x3 between the two
    branches; the paper gives each branch its own weights, as done here.
    """
    base_channel_num = 32
    def __init__(self, inp_channel, bottleneck_wid, growthrate):
        super(Two_way_dense_layer, self).__init__()
        growth_channel = self.base_channel_num * growthrate
        growth_channel = int(growth_channel / 2)  # each branch adds half the growth
        bottleneck_out = int(growth_channel * bottleneck_wid / 4)

        # dynamic adjustment: shrink the bottleneck when it would exceed half
        # of the input channels, keeping the 1x1 projection cheap
        if bottleneck_out > inp_channel / 2:
            bottleneck_out = int(bottleneck_out / 8) * 4
            print("bottleneck_out is too big, adjusted to:", bottleneck_out)

        # branch 1: 1x1 bottleneck -> 3x3
        self.branch1_1x1 = Conv_Norm_Acti(in_channels=inp_channel, out_channels=bottleneck_out,
                                          kernel_size=1)
        self.branch1_3x3 = Conv_Norm_Acti(in_channels=bottleneck_out, out_channels=growth_channel,
                                          kernel_size=3, padding=1)
        # branch 2: 1x1 bottleneck -> 3x3 -> 3x3
        self.branch2_1x1 = Conv_Norm_Acti(in_channels=inp_channel, out_channels=bottleneck_out,
                                          kernel_size=1)
        self.branch2_3x3_1 = Conv_Norm_Acti(in_channels=bottleneck_out, out_channels=growth_channel,
                                            kernel_size=3, padding=1)
        self.branch2_3x3_2 = Conv_Norm_Acti(in_channels=growth_channel, out_channels=growth_channel,
                                            kernel_size=3, padding=1)

    def forward(self, x):
        x_branch_1 = self.branch1_3x3(self.branch1_1x1(x))
        x_branch_2 = self.branch2_3x3_2(self.branch2_3x3_1(self.branch2_1x1(x)))
        out = torch.cat((x, x_branch_1, x_branch_2), dim=1)
        return out
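
Shape check (my addition): each layer grows the channel count by base_channel_num * growthrate = 32 through concatenation.

layer = Two_way_dense_layer(inp_channel=32, bottleneck_wid=1, growthrate=1)
print(layer(torch.randn(1, 32, 56, 56)).size())  # torch.Size([1, 64, 56, 56])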

 

(4) The Dense Block simply repeats the Two-Way Dense Layer: pure block stacking.

class Dense_Block(nn.Module):
    """Stacks two-way dense layers; each one adds base_channel_num * growthrate channels."""
    def __init__(self, layer_num, inp_channel, bottleneck_wid, growthrate):
        super(Dense_Block, self).__init__()
        self.layers = nn.Sequential()
        base_channel_num = Two_way_dense_layer.base_channel_num
        for i in range(layer_num):
            # layer i sees the original input plus i rounds of channel growth
            layer = Two_way_dense_layer(inp_channel + i * growthrate * base_channel_num,
                                        bottleneck_wid, growthrate)
            self.layers.add_module("denselayer%d" % (i + 1), layer)

    def forward(self, x):
        x = self.layers(x)
        return x
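
Shape check (my addition): three layers on a 32-channel input give 32 + 3*32 = 128 channels, matching stage 1 of the architecture table.

block = Dense_Block(layer_num=3, inp_channel=32, bottleneck_wid=1, growthrate=1)
print(block(torch.randn(1, 32, 56, 56)).size())  # torch.Size([1, 128, 56, 56])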

(5) The transition layer. Note that here the 1x1 convolution keeps the channel count unchanged (PeleeNet deliberately skips DenseNet's channel compression) and the optional average pooling halves the spatial resolution, so the "dimensionality reduction" is spatial rather than channel-wise.

class Transition_layer(nn.Module):
    """1x1 conv (channel count kept) plus optional 2x2 average pooling."""
    def __init__(self, inp_channel, use_pool=True):
        super(Transition_layer, self).__init__()
        self.conv_1x1 = Conv_Norm_Acti(in_channels=inp_channel, out_channels=inp_channel,
                                       kernel_size=1)
        self.avg_pool = nn.AvgPool2d(kernel_size=2, stride=2)
        self.use_pool = use_pool

    def forward(self, x):
        x = self.conv_1x1(x)
        if self.use_pool:
            x = self.avg_pool(x)  # halve the spatial resolution
        return x
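
Shape check (my addition): channels unchanged, spatial size halved.

trans = Transition_layer(inp_channel=128)
print(trans(torch.randn(1, 128, 56, 56)).size())  # torch.Size([1, 128, 28, 28])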

(6) Now finish the assembly; just follow the architecture table in the paper. I didn't add the classification layers at the end, out of pure laziness. If you need them, tack on a global adaptive average pool and so on ╰( ̄ω ̄o). Many people will want this for detection or segmentation and need multi-stage outputs; len(self.features) is 9, so just slice it and return several values, I know you get it 🤭. A sketch of both extensions comes after the test step below.

class Peleenet(nn.Module):
    def __init__(self, growthrate=1, layer_num_cfg=[3, 4, 8, 6], bottleneck_width=[1, 2, 4, 4],
                 inp_channels=[32, 128, 256, 512]):
        super(Peleenet, self).__init__()
        base_channel_num = Two_way_dense_layer.base_channel_num
        self.features = nn.Sequential()
        self.features.add_module("Stage_0", Stem_Block())  # total stride = 4
        assert len(layer_num_cfg) == 4 and len(bottleneck_width) == 4, \
            "layer_num_cfg and bottleneck_width must both have 4 elements!"
        for i in range(4):
            stage = Dense_Block(layer_num=layer_num_cfg[i], inp_channel=inp_channels[i],
                                bottleneck_wid=bottleneck_width[i], growthrate=growthrate)
            out_channel = inp_channels[i] + base_channel_num * growthrate * layer_num_cfg[i]
            # the last transition layer keeps the spatial resolution (no pooling)
            translayer = Transition_layer(inp_channel=out_channel, use_pool=(i < 3))
            self.features.add_module("Stage_%d" % (i + 1), stage)
            self.features.add_module("Translayer_%d" % (i + 1), translayer)
        self._initialize_weights()
            
    def _initialize_weights(self):
        # He (MSRA) initialization in fan-out mode for the convs;
        # BatchNorm starts with weight 1 and bias 0
        for m in self.modules():
            if isinstance(m, nn.Conv2d):
                n = m.kernel_size[0] * m.kernel_size[1] * m.out_channels
                m.weight.data.normal_(0, math.sqrt(2. / n))
                if m.bias is not None:
                    m.bias.data.zero_()
            elif isinstance(m, nn.BatchNorm2d):
                m.weight.data.fill_(1)
                m.bias.data.zero_()
    def forward(self,x):
        x = self.features(x)
        return x

 

(7) Test

The build is complete; let's run a quick test.

if __name__ == "__main__":
    inp = torch.randn((2,3,224,224))
    model = Peleenet(growthrate=1)
    result = model(inp)
    print(result.size())

# Output
"""
torch.Size([2, 704, 7, 7])
"""

 

3. Summary

Writing the code while following the paper gives a deeper understanding~. If you spot mistakes, please leave a comment to correct them, but go easy on me~. I hope this post helps you.

 

 
