【Interview】 Network architecture diagrams: ResNet / Inception / MobileNet / ShuffleNet families

VGG

2014
Very Deep Convolutional Networks for Large-Scale Image Recognition

ResNet

2015
Deep Residual Learning for Image Recognition

  • Residual Representations / Shortcut Connections
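
A minimal PyTorch sketch of the residual idea (illustrative only; the class name and channel sizes are my own, not the paper's code): the block learns a residual F(x) and the shortcut connection adds the input back, so the unit computes F(x) + x.

```python
# Sketch of a basic residual block with an identity shortcut (illustrative PyTorch).
import torch
import torch.nn as nn

class BasicBlock(nn.Module):
    """Computes ReLU(F(x) + x), where F is two 3x3 convs with BN."""
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.relu(out + x)      # shortcut connection: add the input back

x = torch.randn(1, 64, 32, 32)
print(BasicBlock(64)(x).shape)         # torch.Size([1, 64, 32, 32])
```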

PreAct-ResNet

2016
Identity Mappings in Deep Residual Networks

  • To construct an identity mapping f(y) = y, the authors rearrange the activation functions (BN and ReLU) so that they come before each convolution ("pre-activation"). As a result, the signal can propagate directly from one unit to any other unit in both the forward and backward passes.
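
A sketch of the pre-activation unit under the same illustrative assumptions (PyTorch, hand-picked layer sizes): BN and ReLU are moved in front of each convolution, so nothing is applied after the addition and the identity path stays clean end to end.

```python
# Sketch of a pre-activation residual unit: BN -> ReLU -> conv, identity add at the end.
import torch
import torch.nn as nn

class PreActBlock(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)

    def forward(self, x):
        out = self.conv1(torch.relu(self.bn1(x)))
        out = self.conv2(torch.relu(self.bn2(out)))
        return x + out   # pure identity path: no BN/ReLU after the addition
```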

GoogLeNet

Inception V1

2014
Going deeper with convolutions

  • Uses 1x1 convolutions to reduce the channel dimension and avoid the dimension explosion (see the sketch below)
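
A hedged sketch of one branch of an Inception v1 module (PyTorch; the 192/16/32 channel sizes are just an example): the 1x1 convolution shrinks the channel count before the expensive 5x5 convolution runs.

```python
# Sketch of the 5x5 branch with a 1x1 "bottleneck" reduction (illustrative channel sizes).
import torch
import torch.nn as nn

class InceptionBranch5x5(nn.Module):
    def __init__(self, in_ch, reduce_ch, out_ch):
        super().__init__()
        self.reduce = nn.Conv2d(in_ch, reduce_ch, 1)      # 1x1 conv: 192 -> 16 channels
        self.conv = nn.Conv2d(reduce_ch, out_ch, 5, padding=2)

    def forward(self, x):
        return self.conv(torch.relu(self.reduce(x)))

x = torch.randn(1, 192, 28, 28)
print(InceptionBranch5x5(192, 16, 32)(x).shape)           # torch.Size([1, 32, 28, 28])
```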

Inception V2

2015
v2: Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift

  • Batch Normalization
  • Replaces the 5x5 convolution in the Inception v1 module with two stacked 3x3 convolutions (see the sketch below)
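
An illustrative comparison in PyTorch (the 64-channel example is my own): two stacked 3x3 convolutions cover the same 5x5 receptive field with fewer weights, and each convolution is followed by Batch Normalization.

```python
# Sketch: 5x5 conv vs. two stacked 3x3 convs, each as Conv -> BN -> ReLU.
import torch.nn as nn

def conv_bn_relu(in_ch, out_ch, k):
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, k, padding=k // 2, bias=False),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
    )

five_by_five = conv_bn_relu(64, 64, 5)                       # 64*64*5*5 = 102,400 conv weights
two_three_by_three = nn.Sequential(conv_bn_relu(64, 64, 3),  # 2 * 64*64*3*3 = 73,728 conv weights
                                   conv_bn_relu(64, 64, 3))
```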

Inception V3

2015
v3: Rethinking the Inception Architecture for Computer Vision

  • Asymmetric Convolutions
    Factorizes a 7x7 convolution into two one-dimensional convolutions (1x7 and 7x1); the same is done for 3x3 (1x3 and 3x1). See the sketch after this list.
  • Improves the auxiliary classifiers of v1
  • New pooling layer (efficient grid-size reduction)
  • Label smoothing
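
A small sketch of the asymmetric factorization (illustrative PyTorch; the 192-channel, 17x17 input is an arbitrary example): a padded 1x7 followed by a 7x1 preserves the spatial size and covers the same receptive field as a 7x7.

```python
# Sketch: factorizing a 7x7 convolution into 1x7 followed by 7x1.
import torch
import torch.nn as nn

factorized_7x7 = nn.Sequential(
    nn.Conv2d(192, 192, kernel_size=(1, 7), padding=(0, 3)),
    nn.Conv2d(192, 192, kernel_size=(7, 1), padding=(3, 0)),
)
x = torch.randn(1, 192, 17, 17)
print(factorized_7x7(x).shape)   # torch.Size([1, 192, 17, 17]) -- same spatial size as a padded 7x7
```

For the label-smoothing bullet, recent PyTorch versions expose it directly as `nn.CrossEntropyLoss(label_smoothing=0.1)`.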

Inception V4

2016
v4: Inception-v4, Inception-ResNet and the Impact of Residual Connections on Learning

  • Combines the Inception module with ResNet
    An Inception module replaces the bottleneck residual branch of a ResNet unit, while the identity shortcut is kept.
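
A rough, hand-made sketch of that idea (illustrative PyTorch, not the paper's exact block): a small multi-branch Inception-style module forms the residual branch, and an identity shortcut is added around it.

```python
# Sketch of an Inception-ResNet-style unit: Inception branches + identity shortcut.
import torch
import torch.nn as nn

class InceptionResBlock(nn.Module):
    def __init__(self, ch):
        super().__init__()
        self.branch1 = nn.Conv2d(ch, ch // 2, 1)
        self.branch2 = nn.Sequential(nn.Conv2d(ch, ch // 2, 1),
                                     nn.Conv2d(ch // 2, ch // 2, 3, padding=1))
        self.project = nn.Conv2d(ch, ch, 1)    # 1x1 conv to match channels before the add

    def forward(self, x):
        out = torch.cat([self.branch1(x), self.branch2(x)], dim=1)
        return x + self.project(out)           # residual add over the identity shortcut
```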

Xception

2017
Xception: Deep Learning with Depthwise Separable Convolutions

Xception works on these two transformations: the mapping over the spatial dimensions and the mapping over the channel dimension, and it decouples them completely.

  • depth-wise convolution
    <img src="https://img-blog.csdnimg.cn/20190924094637463.png">

  • Draws on (but does not directly adopt) depth-wise convolution to improve Inception V3: the channel-wise convolution and the spatial convolution are separated.

  • In the original depth-wise separable convolution, the per-channel 3x3 convolution comes first, followed by the 1x1 convolution.

  • Xception reverses the order: the 1x1 convolution comes first, followed by the per-channel convolution (see the sketch below).
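
A sketch of the two orderings in PyTorch (channel counts are arbitrary); setting `groups=channels` turns an ordinary `nn.Conv2d` into a per-channel (depthwise) convolution.

```python
# Sketch contrasting the classic depthwise-separable order with Xception's order.
import torch.nn as nn

in_ch, out_ch = 64, 128

# original order: depthwise 3x3 (one filter per channel), then pointwise 1x1
classic = nn.Sequential(
    nn.Conv2d(in_ch, in_ch, 3, padding=1, groups=in_ch),   # spatial, per channel
    nn.Conv2d(in_ch, out_ch, 1),                            # cross-channel mixing
)

# Xception-style order: pointwise 1x1 first, then depthwise 3x3
xception_style = nn.Sequential(
    nn.Conv2d(in_ch, out_ch, 1),
    nn.Conv2d(out_ch, out_ch, 3, padding=1, groups=out_ch),
)
```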

ResNeXt

2017
Aggregated ResidualTransformations for Deep Neural Networks

MobileNet

MobileNet V1

2017
MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications

Depthwise Separable Convolution
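
A back-of-the-envelope comparison of the multiply-add counts, following the cost formulas in the MobileNet paper (the layer sizes below are an arbitrary example of my own):

```python
# Cost of a standard 3x3 conv vs. a depthwise separable conv (depthwise 3x3 + pointwise 1x1).
k, m, n, hw = 3, 64, 128, 56 * 56        # kernel size, in/out channels, output positions

standard = k * k * m * n * hw             # one fused spatial+channel filter bank
separable = k * k * m * hw + m * n * hw   # depthwise 3x3 + pointwise 1x1
print(separable / standard)               # ~0.12, i.e. roughly 1/n + 1/k^2
```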

MobileNet V2

2018
Inverted Residuals and Linear Bottlenecks: Mobile Networks for Classification, Detection and Segmentation

Inverted residuals
Linear bottlenecks
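
A sketch of an inverted residual with a linear bottleneck (illustrative PyTorch; expansion factor 6, stride 1, equal input/output width so the shortcut applies): expand with a 1x1 conv, filter with a depthwise 3x3, then project back with a 1x1 conv that has no non-linearity.

```python
# Sketch of an inverted residual block: expand -> depthwise -> linear projection + shortcut.
import torch
import torch.nn as nn

class InvertedResidual(nn.Module):
    def __init__(self, ch, expand=6):
        super().__init__()
        hidden = ch * expand
        self.block = nn.Sequential(
            nn.Conv2d(ch, hidden, 1, bias=False), nn.BatchNorm2d(hidden), nn.ReLU6(inplace=True),
            nn.Conv2d(hidden, hidden, 3, padding=1, groups=hidden, bias=False),
            nn.BatchNorm2d(hidden), nn.ReLU6(inplace=True),
            nn.Conv2d(hidden, ch, 1, bias=False), nn.BatchNorm2d(ch),   # linear bottleneck: no ReLU
        )

    def forward(self, x):
        return x + self.block(x)   # shortcut connects the two thin bottleneck ends
```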

MobileNet V3

2019 ICCV
Searching for MobileNetV3

Optimizes the activation function with h-swish (also usable in other network architectures)
Introduces a lightweight attention module based on the squeeze-and-excitation (SE) structure
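
A sketch of both ingredients (illustrative PyTorch; the reduction ratio and names are my own choices): h-swish as a cheap piecewise-linear replacement for swish, and a small SE-style channel-attention block.

```python
# Sketch of h-swish and a lightweight squeeze-and-excitation (SE) block.
import torch
import torch.nn as nn
import torch.nn.functional as F

def h_swish(x):
    # h-swish(x) = x * ReLU6(x + 3) / 6, a cheap approximation of swish
    return x * F.relu6(x + 3.0) / 6.0

class SEBlock(nn.Module):
    def __init__(self, ch, reduction=4):
        super().__init__()
        self.fc1 = nn.Linear(ch, ch // reduction)
        self.fc2 = nn.Linear(ch // reduction, ch)

    def forward(self, x):
        s = x.mean(dim=(2, 3))                          # squeeze: global average pooling
        s = F.hardsigmoid(self.fc2(F.relu(self.fc1(s))))  # excitation with a hard gate
        return x * s[:, :, None, None]                  # re-weight channels
```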

ShuffleNet

ShuffleNet V1

2017
ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices

  • Borrows the ResNet unit design
  • Channel shuffle resolves the side effect of stacking multiple group convolutions, where each output only sees inputs from its own group (see the sketch after this list)
  • Pointwise group convolution and depthwise separable convolution mainly reduce the computation cost
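
A minimal sketch of the channel shuffle operation (PyTorch): reshape the channel axis into (groups, channels_per_group), transpose the two axes, and flatten back, so the next group convolution receives channels coming from every group.

```python
# Sketch of channel shuffle via reshape + transpose.
import torch

def channel_shuffle(x, groups):
    n, c, h, w = x.shape
    x = x.view(n, groups, c // groups, h, w)
    x = x.transpose(1, 2).contiguous()
    return x.view(n, c, h, w)

x = torch.arange(8).float().view(1, 8, 1, 1)
print(channel_shuffle(x, 2).flatten())   # tensor([0., 4., 1., 5., 2., 6., 3., 7.])
```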

ShuffleNet V2

2018
ShuffleNet V2: Practical Guidelines for Efficient CNN Architecture Design

  • Abandons the 1x1 group convolution
  • Channel Split: splits the feature map channels into two groups, A and B
  • Group A serves as the shortcut; group B passes through a bottleneck whose input and output channel counts are equal
  • A and B are then concatenated
  • Channel Shuffle is applied after the concat
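
Putting the bullets above together, a sketch of the stride-1 ShuffleNet V2 unit (illustrative PyTorch; layer widths are my own): split, process only branch B with equal input/output width, concat with the untouched branch A, then shuffle.

```python
# Sketch of a stride-1 ShuffleNet V2 unit: channel split -> branch -> concat -> shuffle.
import torch
import torch.nn as nn

class ShuffleV2Unit(nn.Module):
    def __init__(self, ch):
        super().__init__()
        half = ch // 2
        self.branch = nn.Sequential(   # 1x1 -> depthwise 3x3 -> 1x1, equal in/out width
            nn.Conv2d(half, half, 1, bias=False), nn.BatchNorm2d(half), nn.ReLU(inplace=True),
            nn.Conv2d(half, half, 3, padding=1, groups=half, bias=False), nn.BatchNorm2d(half),
            nn.Conv2d(half, half, 1, bias=False), nn.BatchNorm2d(half), nn.ReLU(inplace=True),
        )

    def forward(self, x):
        a, b = x.chunk(2, dim=1)                       # Channel Split into A and B
        out = torch.cat([a, self.branch(b)], dim=1)    # concat shortcut A with processed B
        n, c, h, w = out.shape                         # Channel Shuffle after the concat
        return out.view(n, 2, c // 2, h, w).transpose(1, 2).reshape(n, c, h, w)
```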

DenseNet

2017
Densely Connected Convolutional Networks

DPN

2017
Dual Path Networks

Analyzes ResNet and DenseNet from the viewpoint of a Higher Order RNN (HORNN) structure

SENet

2017
Squeeze-and-Excitation Networks
