Using DeconvolutionLayer in Caffe

Preface: the terminological difference between "Deconvolution" and "Transposed Convolution" is not discussed here; below, the operation is uniformly called Deconvolution (see http://blog.csdn.net/u013250416/article/details/78247818). In my understanding, Convolution maps a large feature map to a smaller one, while Deconvolution maps a small feature map to a larger one. This post describes how to use DeconvolutionLayer in Caffe.



1. Definition

The latest Caffe version on GitHub already includes DeconvolutionLayer; see src/caffe/layers/deconv_layer.cpp, deconv_layer.cu, and include/caffe/layers/deconv_layer.hpp. The only difference from ConvolutionLayer is how output_shape is computed.

For convolution:

output = (input + 2 * p - k) / s + 1;

For deconvolution:

output = (input - 1) * s + k - 2 * p;


conv_layer.cpp:

template <typename Dtype>
void ConvolutionLayer<Dtype>::compute_output_shape() {
  const int* kernel_shape_data = this->kernel_shape_.cpu_data();
  const int* stride_data = this->stride_.cpu_data();
  const int* pad_data = this->pad_.cpu_data();
  const int* dilation_data = this->dilation_.cpu_data();
  this->output_shape_.clear();
  for (int i = 0; i < this->num_spatial_axes_; ++i) {
    // i + 1 to skip channel axis
    const int input_dim = this->input_shape(i + 1);
    const int kernel_extent = dilation_data[i] * (kernel_shape_data[i] - 1) + 1;
    const int output_dim = (input_dim + 2 * pad_data[i] - kernel_extent)
        / stride_data[i] + 1;
    this->output_shape_.push_back(output_dim);
  }
}

deconv_layer.cpp:

template <typename Dtype>
void DeconvolutionLayer<Dtype>::compute_output_shape() {
  const int* kernel_shape_data = this->kernel_shape_.cpu_data();
  const int* stride_data = this->stride_.cpu_data();
  const int* pad_data = this->pad_.cpu_data();
  const int* dilation_data = this->dilation_.cpu_data();
  this->output_shape_.clear();
  for (int i = 0; i < this->num_spatial_axes_; ++i) {
    // i + 1 to skip channel axis
    const int input_dim = this->input_shape(i + 1);
    const int kernel_extent = dilation_data[i] * (kernel_shape_data[i] - 1) + 1;
    const int output_dim = stride_data[i] * (input_dim - 1)
        + kernel_extent - 2 * pad_data[i];
    this->output_shape_.push_back(output_dim);
  }
}
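The two formulas above (with dilation folded into an effective kernel extent, as in the C++ code) can be sketched in Python. This is an illustrative re-implementation of the shape logic, not Caffe API code:

```python
def conv_output_dim(input_dim, kernel, stride=1, pad=0, dilation=1):
    # Effective kernel size after dilation, as in compute_output_shape().
    kernel_extent = dilation * (kernel - 1) + 1
    return (input_dim + 2 * pad - kernel_extent) // stride + 1

def deconv_output_dim(input_dim, kernel, stride=1, pad=0, dilation=1):
    kernel_extent = dilation * (kernel - 1) + 1
    return stride * (input_dim - 1) + kernel_extent - 2 * pad

# A 3x3, stride-2, pad-1 convolution maps 17 -> 9, and a deconvolution
# with the same parameters maps 9 back to 17: the shapes invert exactly.
print(conv_output_dim(17, 3, stride=2, pad=1))    # -> 9
print(deconv_output_dim(9, 3, stride=2, pad=1))   # -> 17
```

This inversion is why deconvolution is often described as the transpose of a convolution: for matching parameters, it maps a convolution's output size back to its input size.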



2. Usage

When generating a network prototxt with NetSpec in Python, layers.Deconvolution does not accept layer parameters as direct keyword arguments; they must be passed explicitly through convolution_param.

Otherwise, if you pass parameters the way you would for ConvolutionLayer, you may get an error: AttributeError: 'LayerParameter' object has no attribute 'stride'. See: https://github.com/shelhamer/fcn.berkeleyvision.org/blob/master/voc-fcn32s/net.py#L58-L61

ConvolutionLayer:

conv = L.Convolution(relu, kernel_size=ks, stride=stride,
                             num_output=nout, pad=pad, bias_term=False, weight_filler=dict(type='xavier'),
                             bias_filler=dict(type='constant'))


DeconvolutionLayer:

conv = L.Deconvolution(relu, convolution_param=dict(kernel_size=ks, stride=stride,
                             num_output=nout, pad=pad, bias_term=False, weight_filler=dict(type='xavier'),
                             bias_filler=dict(type='constant')))
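As a concrete example of sizing such a layer: FCN-style nets use Deconvolution as a learned upsampler. One common choice (assumed here for illustration) is kernel_size = 2*f, stride = f, pad = f/2, which by the deconvolution formula above enlarges the input by exactly a factor of f:

```python
def upsample_output_dim(input_dim, factor):
    # Deconvolution with kernel = 2*factor, stride = factor, pad = factor // 2
    # (a common FCN-style choice; factor is assumed even so pad is exact).
    # output = f*(n-1) + 2f - 2*(f/2) = f*n
    kernel, stride, pad = 2 * factor, factor, factor // 2
    return stride * (input_dim - 1) + kernel - 2 * pad

print(upsample_output_dim(14, 2))   # -> 28: exact 2x upsampling
print(upsample_output_dim(7, 32))   # -> 224: exact 32x upsampling
```

Note that the referenced fcn32s net itself uses kernel_size=64, stride=32 with no padding and relies on a Crop layer to trim the result, so its arithmetic differs slightly from this padded variant.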



3. Caffe.proto Configuration

By default, Caffe.proto does not define a DeconvolutionParameter.

It can be added as follows:

In LayerParameter:

message LayerParameter {
  ......
  optional ConvolutionParameter convolution_param = 106;
  // Deconvolution
  optional DeconvolutionParameter deconvolution_param = 147;
  ......
}
Add a message DeconvolutionParameter:

message DeconvolutionParameter {
  optional uint32 num_output = 1; // The number of outputs for the layer
  optional bool bias_term = 2 [default = true]; // whether to have bias terms

  // Pad, kernel size, and stride are all given as a single value for equal
  // dimensions in all spatial dimensions, or once per spatial dimension.
  repeated uint32 pad = 3; // The padding size; defaults to 0
  repeated uint32 kernel_size = 4; // The kernel size
  repeated uint32 stride = 6; // The stride; defaults to 1
  // Factor used to dilate the kernel, (implicitly) zero-filling the resulting
  // holes. (Kernel dilation is sometimes referred to by its use in the
  // algorithme à trous from Holschneider et al. 1987.)
  repeated uint32 dilation = 18; // The dilation; defaults to 1

  // For 2D convolution only, the *_h and *_w versions may also be used to
  // specify both spatial dimensions.
  optional uint32 pad_h = 9 [default = 0]; // The padding height (2D only)
  optional uint32 pad_w = 10 [default = 0]; // The padding width (2D only)
  optional uint32 kernel_h = 11; // The kernel height (2D only)
  optional uint32 kernel_w = 12; // The kernel width (2D only)
  optional uint32 stride_h = 13; // The stride height (2D only)
  optional uint32 stride_w = 14; // The stride width (2D only)

  optional uint32 group = 5 [default = 1]; // The group size for group conv

  optional FillerParameter weight_filler = 7; // The filler for the weight
  optional FillerParameter bias_filler = 8; // The filler for the bias
  enum Engine {
    DEFAULT = 0;
    CAFFE = 1;
    CUDNN = 2;
  }
  optional Engine engine = 15 [default = DEFAULT];

  // The axis to interpret as "channels" when performing convolution.
  // Preceding dimensions are treated as independent inputs;
  // succeeding dimensions are treated as "spatial".
  // With (N, C, H, W) inputs, and axis == 1 (the default), we perform
  // N independent 2D convolutions, sliding C-channel (or (C/g)-channels, for
  // groups g>1) filters across the spatial axes (H, W) of the input.
  // With (N, C, D, H, W) inputs, and axis == 1, we perform
  // N independent 3D convolutions, sliding (C/g)-channels
  // filters across the spatial axes (D, H, W) of the input.
  optional int32 axis = 16 [default = 1];

  // Whether to force use of the general ND convolution, even if a specific
  // implementation for blobs of the appropriate number of spatial dimensions
  // is available. (Currently, there is only a 2D-specific convolution
  // implementation; for input blobs with num_axes != 2, this option is
  // ignored and the ND implementation will be used.)
  optional bool force_nd_im2col = 17 [default = false];
}
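With the message above added, a layer could reference it in a prototxt as sketched below. This is only a sketch: the stock DeconvolutionLayer reads convolution_param, so making it consume deconvolution_param also requires changing the layer code to parse the new field.

```protobuf
layer {
  name: "upscore"
  type: "Deconvolution"
  bottom: "score"
  top: "upscore"
  deconvolution_param {
    num_output: 21
    kernel_size: 4
    stride: 2
    pad: 1
    bias_term: false
  }
}
```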



