Pytorch Model Training (2) - Model Initialization

Model Initialization

  This series summarizes the model-related parts of training in Pytorch, including model definition, model parameter initialization, model saving and loading, and so on.
  The previous post briefly covered model definition. Strictly speaking, parameter initialization is also part of model construction, but it has its own peculiarities and enough material of its own, so I cover it in a separate post.
  Parameter initialization in deep networks was once both a hot topic and a hard problem; in the early days of deep learning, researchers put a great deal of work into initialization methods. Today, thanks to better network architectures, better training techniques, and increasingly mature initialization schemes, the topic has largely been settled, to the point that we (or at least I) rarely pay attention to it and simply use what the framework provides. It still matters a great deal for training, though. This post will not dig into the theory behind initialization; it only summarizes the initialization methods available in Pytorch.

0 Blog Index

Pytorch Model Training (0) - CPN Source Code Walkthrough
Pytorch Model Training (1) - Model Definition
Pytorch Model Training (2) - Model Initialization
Pytorch Model Training (3) - Model Saving and Loading
Pytorch Model Training (4) - Loss Function
Pytorch Model Training (5) - Optimizer
Pytorch Model Training (6) - Data Loading

1 Model Initialization - Pytorch

  Source code link (the functions below are taken from torch.nn.init)
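
  All of these functions modify the given tensor in place (hence the trailing underscore) and return it; in practice they are applied to a layer's parameters right after the module is constructed. A minimal usage sketch of my own (the layer and its sizes are arbitrary, not taken from CPN):

import torch.nn as nn

layer = nn.Conv2d(3, 16, kernel_size=3, padding=1)

# nn.init functions run under torch.no_grad() internally and modify the tensor in place
nn.init.kaiming_normal_(layer.weight, mode='fan_out', nonlinearity='relu')
nn.init.zeros_(layer.bias)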

1.1 Uniform distribution

def uniform_(tensor, a=0, b=1):
    r"""Fills the input Tensor with values drawn from the uniform
    distribution :math:`\mathcal{U}(a, b)`.
    Args:
        tensor: an n-dimensional `torch.Tensor`
        a: the lower bound of the uniform distribution
        b: the upper bound of the uniform distribution
        
    Examples:
        >>> w = torch.empty(3, 5)
        >>> nn.init.uniform_(w)
    """
    with torch.no_grad():
        return tensor.uniform_(a, b)
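
  The docstring example fills a raw tensor; a slightly more practical sketch of my own (not from the CPN code) draws a layer's bias from a range derived from the fan-in of its weight, a common heuristic:

import math
import torch.nn as nn

layer = nn.Linear(128, 64)
fan_in = layer.weight.size(1)        # number of input features
bound = 1.0 / math.sqrt(fan_in)      # fan-in based range, a common heuristic for biases
nn.init.uniform_(layer.bias, -bound, bound)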

1.2 Normal distribution

def normal_(tensor, mean=0, std=1):
    r"""Fills the input Tensor with values drawn from the normal
    distribution :math:`\mathcal{N}(\text{mean}, \text{std})`.
    Args:
        tensor: an n-dimensional `torch.Tensor`
        mean: the mean of the normal distribution
        std: the standard deviation of the normal distribution
    Examples:
        >>> w = torch.empty(3, 5)
        >>> nn.init.normal_(w)
    """
    with torch.no_grad():
        return tensor.normal_(mean, std)

1.3 Constant

def constant_(tensor, val):
    r"""Fills the input Tensor with the value :math:`\text{val}`.
    Args:
        tensor: an n-dimensional `torch.Tensor`
        val: the value to fill the tensor with
    Examples:
        >>> w = torch.empty(3, 5)
        >>> nn.init.constant_(w, 0.3)
    """
    with torch.no_grad():
        return tensor.fill_(val)

1.4 Ones

def ones_(tensor):
    r"""Fills the input Tensor with ones`.
    Args:
        tensor: an n-dimensional `torch.Tensor`
    Examples:
        >>> w = torch.empty(3, 5)
        >>> nn.init.ones_(w)
    """
    with torch.no_grad():
        return tensor.fill_(1)

1.5 Zeros

def zeros_(tensor):
    r"""Fills the input Tensor with zeros`.
    Args:
        tensor: an n-dimensional `torch.Tensor`
    Examples:
        >>> w = torch.empty(3, 5)
        >>> nn.init.zeros_(w)
    """
    with torch.no_grad():
        return tensor.zero_()

1.6 Identity matrix

  Fills a 2-dimensional input tensor with the identity matrix, so that a Linear layer preserves as many of its inputs as possible.

def eye_(tensor):
    r"""Fills the 2-dimensional input `Tensor` with the identity
    matrix. Preserves the identity of the inputs in `Linear` layers, where as
    many inputs are preserved as possible.
    Args:
        tensor: a 2-dimensional `torch.Tensor`
    Examples:
        >>> w = torch.empty(3, 5)
        >>> nn.init.eye_(w)
    """
    if tensor.ndimension() != 2:
        raise ValueError("Only tensors with 2 dimensions are supported")

    with torch.no_grad():
        torch.eye(*tensor.shape, out=tensor, requires_grad=tensor.requires_grad)
    return tensor
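
  To make "preserving the inputs" concrete: a square Linear layer whose weight is initialized with eye_ and whose bias is zero is just the identity map. A small sketch of mine to verify this:

import torch
import torch.nn as nn

layer = nn.Linear(5, 5)
nn.init.eye_(layer.weight)      # weight becomes the 5x5 identity matrix
nn.init.zeros_(layer.bias)

x = torch.randn(2, 5)
print(torch.allclose(layer(x), x))   # True: the layer passes its input through unchanged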

1.7 Dirac delta function

  See "DiracNets: Training Very Deep Neural Networks Without Skip-Connections" for details.
  This is a relatively recent initialization method; the paper reports that a ResNet without skip connections, initialized this way, still achieves decent results (I have not looked into the details).

def dirac_(tensor):
    r"""Fills the {3, 4, 5}-dimensional input `Tensor` with the Dirac
    delta function. Preserves the identity of the inputs in `Convolutional`
    layers, where as many input channels are preserved as possible.
    Args:
        tensor: a {3, 4, 5}-dimensional `torch.Tensor`
    Examples:
        >>> w = torch.empty(3, 16, 5, 5)
        >>> nn.init.dirac_(w)
    """
    dimensions = tensor.ndimension()
    if dimensions not in [3, 4, 5]:
        raise ValueError("Only tensors with 3, 4, or 5 dimensions are supported")

    sizes = tensor.size()
    min_dim = min(sizes[0], sizes[1])
    with torch.no_grad():
        tensor.zero_()

        for d in range(min_dim):
            if dimensions == 3:  # Temporal convolution
                tensor[d, d, tensor.size(2) // 2] = 1
            elif dimensions == 4:  # Spatial convolution
                tensor[d, d, tensor.size(2) // 2, tensor.size(3) // 2] = 1
            else:  # Volumetric convolution
                tensor[d, d, tensor.size(2) // 2, tensor.size(3) // 2, tensor.size(4) // 2] = 1
    return tensor
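
  The convolutional analogue: after dirac_, each of the first min(out_channels, in_channels) filters has a single 1 at its centre, on its own channel, so the convolution simply copies its input channels through. A small sketch of mine (channel counts are arbitrary):

import torch
import torch.nn as nn

conv = nn.Conv2d(8, 8, kernel_size=3, padding=1, bias=False)
nn.init.dirac_(conv.weight)

x = torch.randn(1, 8, 32, 32)
print(torch.allclose(conv(x), x))   # True: every output channel reproduces its input channel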

1.8 xavier_uniform_ (uniform distribution)

def _calculate_fan_in_and_fan_out(tensor):
    dimensions = tensor.ndimension()
    if dimensions < 2:
        raise ValueError("Fan in and fan out can not be computed for tensor with fewer than 2 dimensions")

    if dimensions == 2:  # Linear
        fan_in = tensor.size(1)
        fan_out = tensor.size(0)
    else:
        num_input_fmaps = tensor.size(1)
        num_output_fmaps = tensor.size(0)
        receptive_field_size = 1
        if tensor.dim() > 2:
            receptive_field_size = tensor[0][0].numel()
        fan_in = num_input_fmaps * receptive_field_size
        fan_out = num_output_fmaps * receptive_field_size

    return fan_in, fan_out

  Following the method described by Glorot, X. & Bengio, Y. in "Understanding the difficulty of training deep feedforward neural networks", this fills the input tensor with values drawn from a uniform distribution. The values are sampled from U(-a, a), where a = gain * sqrt(6 / (fan_in + fan_out)). Also known as Glorot initialization.

def xavier_uniform_(tensor, gain=1):
    r"""Fills the input `Tensor` with values according to the method
    described in "Understanding the difficulty of training deep feedforward
    neural networks" - Glorot, X. & Bengio, Y. (2010), using a uniform
    distribution. The resulting tensor will have values sampled from
    :math:`\mathcal{U}(-a, a)` where
    .. math::
        a = \text{gain} \times \sqrt{\frac{6}{\text{fan\_in} + \text{fan\_out}}}
    Also known as Glorot initialization.
    Args:
        tensor: an n-dimensional `torch.Tensor`
        gain: an optional scaling factor
    Examples:
        >>> w = torch.empty(3, 5)
        >>> nn.init.xavier_uniform_(w, gain=nn.init.calculate_gain('relu'))
    """
    fan_in, fan_out = _calculate_fan_in_and_fan_out(tensor)
    std = gain * math.sqrt(2.0 / (fan_in + fan_out))
    a = math.sqrt(3.0) * std  # Calculate uniform bounds from standard deviation
    with torch.no_grad():
        return tensor.uniform_(-a, a)
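
  In practice xavier_uniform_ is usually paired with calculate_gain for the nonlinearity that follows the layer and applied to a whole model via Module.apply. A sketch of mine (the model and function name are illustrative, not from CPN):

import torch.nn as nn

def init_xavier(m):
    # Xavier/Glorot uniform for every Linear layer, scaled by the gain
    # recommended for the tanh nonlinearity that follows it
    if isinstance(m, nn.Linear):
        nn.init.xavier_uniform_(m.weight, gain=nn.init.calculate_gain('tanh'))
        if m.bias is not None:
            nn.init.zeros_(m.bias)

model = nn.Sequential(nn.Linear(784, 256), nn.Tanh(), nn.Linear(256, 10))
model.apply(init_xavier)   # apply() calls init_xavier recursively on every submodule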

1.9 xavier_normal_ (normal distribution)

  Following the method described by Glorot, X. & Bengio, Y. (2010) in "Understanding the difficulty of training deep feedforward neural networks", this fills the input tensor with values drawn from a normal distribution with mean 0 and standard deviation gain * sqrt(2 / (fan_in + fan_out)). Also known as Glorot initialization.

def xavier_normal_(tensor, gain=1):
    r"""Fills the input `Tensor` with values according to the method
    described in "Understanding the difficulty of training deep feedforward
    neural networks" - Glorot, X. & Bengio, Y. (2010), using a normal
    distribution. The resulting tensor will have values sampled from
    :math:`\mathcal{N}(0, \text{std})` where
    .. math::
        \text{std} = \text{gain} \times \sqrt{\frac{2}{\text{fan\_in} + \text{fan\_out}}}
    Also known as Glorot initialization.
    Args:
        tensor: an n-dimensional `torch.Tensor`
        gain: an optional scaling factor
    Examples:
        >>> w = torch.empty(3, 5)
        >>> nn.init.xavier_normal_(w)
    """
    fan_in, fan_out = _calculate_fan_in_and_fan_out(tensor)
    std = gain * math.sqrt(2.0 / (fan_in + fan_out))
    with torch.no_grad():
        return tensor.normal_(0, std)

1.10 kaiming_uniform_ (uniform distribution)

def calculate_gain(nonlinearity, param=None):
    r"""Return the recommended gain value for the given nonlinearity function.
    The values are as follows:

    ================= ====================================================
    nonlinearity      gain
    ================= ====================================================
    Linear / Identity :math:`1`
    Conv{1,2,3}D      :math:`1`
    Sigmoid           :math:`1`
    Tanh              :math:`\frac{5}{3}`
    ReLU              :math:`\sqrt{2}`
    Leaky Relu        :math:`\sqrt{\frac{2}{1 + \text{negative\_slope}^2}}`
    ================= ====================================================

    Args:
        nonlinearity: the non-linear function (`nn.functional` name)
        param: optional parameter for the non-linear function

    Examples:
        >>> gain = nn.init.calculate_gain('leaky_relu')
    """
    linear_fns = ['linear', 'conv1d', 'conv2d', 'conv3d', 'conv_transpose1d', 'conv_transpose2d', 'conv_transpose3d']
    if nonlinearity in linear_fns or nonlinearity == 'sigmoid':
        return 1
    elif nonlinearity == 'tanh':
        return 5.0 / 3
    elif nonlinearity == 'relu':
        return math.sqrt(2.0)
    elif nonlinearity == 'leaky_relu':
        if param is None:
            negative_slope = 0.01
        elif not isinstance(param, bool) and isinstance(param, int) or isinstance(param, float):
            # True/False are instances of int, hence check above
            negative_slope = param
        else:
            raise ValueError("negative_slope {} not a valid number".format(param))
        return math.sqrt(2.0 / (1 + negative_slope ** 2))
    else:
        raise ValueError("Unsupported nonlinearity {}".format(nonlinearity))


def _calculate_correct_fan(tensor, mode):
    mode = mode.lower()
    valid_modes = ['fan_in', 'fan_out']
    if mode not in valid_modes:
        raise ValueError("Mode {} not supported, please use one of {}".format(mode, valid_modes))

    fan_in, fan_out = _calculate_fan_in_and_fan_out(tensor)
    return fan_in if mode == 'fan_in' else fan_out

  Following the method described by He, K. et al. (2015) in "Delving deep into rectifiers: Surpassing human-level performance on ImageNet classification", this fills the input tensor with values drawn from a uniform distribution U(-bound, bound), where bound = sqrt(6 / ((1 + a^2) * fan_in)). Also known as He initialization.

def kaiming_uniform_(tensor, a=0, mode='fan_in', nonlinearity='leaky_relu'):
    r"""Fills the input `Tensor` with values according to the method
    described in "Delving deep into rectifiers: Surpassing human-level
    performance on ImageNet classification" - He, K. et al. (2015), using a
    uniform distribution. The resulting tensor will have values sampled from
    :math:`\mathcal{U}(-\text{bound}, \text{bound})` where
    .. math::
        \text{bound} = \sqrt{\frac{6}{(1 + a^2) \times \text{fan\_in}}}
    Also known as He initialization.
    Args:
        tensor: an n-dimensional `torch.Tensor`
        a: the negative slope of the rectifier used after this layer (0 for ReLU
            by default)
        mode: either 'fan_in' (default) or 'fan_out'. Choosing `fan_in`
            preserves the magnitude of the variance of the weights in the
            forward pass. Choosing `fan_out` preserves the magnitudes in the
            backwards pass.
        nonlinearity: the non-linear function (`nn.functional` name),
            recommended to use only with 'relu' or 'leaky_relu' (default).
    Examples:
        >>> w = torch.empty(3, 5)
        >>> nn.init.kaiming_uniform_(w, mode='fan_in', nonlinearity='relu')
    """
    fan = _calculate_correct_fan(tensor, mode)
    gain = calculate_gain(nonlinearity, a)
    std = gain / math.sqrt(fan)
    bound = math.sqrt(3.0) * std  # Calculate uniform bounds from standard deviation
    with torch.no_grad():
        return tensor.uniform_(-bound, bound)
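
  A typical sketch of mine (not the CPN code) for a convolution followed by a plain ReLU:

import torch.nn as nn

conv = nn.Conv2d(64, 128, kernel_size=3, padding=1)
# a=0 because a plain ReLU has no negative slope; fan_in keeps the
# variance of the activations stable in the forward pass
nn.init.kaiming_uniform_(conv.weight, a=0, mode='fan_in', nonlinearity='relu')
nn.init.zeros_(conv.bias)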

1.11 kaiming_normal_ (normal distribution)

  Following the method described by He, K. et al. in "Delving deep into rectifiers: Surpassing human-level performance on ImageNet classification", this fills the input tensor with values drawn from a normal distribution with mean 0 and standard deviation sqrt(2 / ((1 + a^2) * fan_in)). Also known as He initialization.

def kaiming_normal_(tensor, a=0, mode='fan_in', nonlinearity='leaky_relu'):
    r"""Fills the input `Tensor` with values according to the method
    described in "Delving deep into rectifiers: Surpassing human-level
    performance on ImageNet classification" - He, K. et al. (2015), using a
    normal distribution. The resulting tensor will have values sampled from
    :math:`\mathcal{N}(0, \text{std})` where
    .. math::
        \text{std} = \sqrt{\frac{2}{(1 + a^2) \times \text{fan\_in}}}
    Also known as He initialization.
    Args:
        tensor: an n-dimensional `torch.Tensor`
        a: the negative slope of the rectifier used after this layer (0 for ReLU
            by default)
        mode: either 'fan_in' (default) or 'fan_out'. Choosing `fan_in`
            preserves the magnitude of the variance of the weights in the
            forward pass. Choosing `fan_out` preserves the magnitudes in the
            backwards pass.
        nonlinearity: the non-linear function (`nn.functional` name),
            recommended to use only with 'relu' or 'leaky_relu' (default).
    Examples:
        >>> w = torch.empty(3, 5)
        >>> nn.init.kaiming_normal_(w, mode='fan_out', nonlinearity='relu')
    """
    fan = _calculate_correct_fan(tensor, mode)
    gain = calculate_gain(nonlinearity, a)
    std = gain / math.sqrt(fan)
    with torch.no_grad():
        return tensor.normal_(0, std)

1.12 orthogonal_

  Fills the input tensor with a (semi) orthogonal matrix. The input tensor must have at least 2 dimensions; for higher-dimensional tensors the trailing dimensions are flattened, so the tensor is treated as a 2-D matrix whose rows correspond to the first dimension and whose columns correspond to the product of the remaining dimensions. The entries are first drawn from a standard normal distribution and then orthogonalized via a QR decomposition.
Reference: Saxe, A. et al. (2013), "Exact solutions to the nonlinear dynamics of learning in deep linear neural networks".

def orthogonal_(tensor, gain=1):
    r"""Fills the input `Tensor` with a (semi) orthogonal matrix, as
    described in "Exact solutions to the nonlinear dynamics of learning in deep
    linear neural networks" - Saxe, A. et al. (2013). The input tensor must have
    at least 2 dimensions, and for tensors with more than 2 dimensions the
    trailing dimensions are flattened.
    Args:
        tensor: an n-dimensional `torch.Tensor`, where :math:`n \geq 2`
        gain: optional scaling factor
    Examples:
        >>> w = torch.empty(3, 5)
        >>> nn.init.orthogonal_(w)
    """
    if tensor.ndimension() < 2:
        raise ValueError("Only tensors with 2 or more dimensions are supported")

    rows = tensor.size(0)
    cols = tensor[0].numel()
    flattened = tensor.new(rows, cols).normal_(0, 1)

    if rows < cols:
        flattened.t_()

    # Compute the qr factorization
    q, r = torch.qr(flattened)
    # Make Q uniform according to https://arxiv.org/pdf/math-ph/0609050.pdf
    d = torch.diag(r, 0)
    ph = d.sign()
    q *= ph

    if rows < cols:
        q.t_()

    with torch.no_grad():
        tensor.view_as(q).copy_(q)
        tensor.mul_(gain)
    return tensor
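
  Orthogonal initialization is most often used for recurrent weight matrices, where it helps keep repeated matrix products well conditioned. A sketch of mine (layer sizes are arbitrary; this is not part of the CPN code):

import torch.nn as nn

rnn = nn.LSTM(input_size=128, hidden_size=256, num_layers=1)
for name, param in rnn.named_parameters():
    if 'weight_hh' in name:        # recurrent (hidden-to-hidden) weights
        nn.init.orthogonal_(param)
    elif 'weight_ih' in name:      # input-to-hidden weights
        nn.init.xavier_uniform_(param)
    elif 'bias' in name:
        nn.init.zeros_(param)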

1.13 sparse_

  Fills a 2-D input tensor as a sparse matrix, where the non-zero elements are drawn from a normal distribution with mean 0 and standard deviation std. Reference: Martens, J. (2010), "Deep learning via Hessian-free optimization".

def sparse_(tensor, sparsity, std=0.01):
    r"""Fills the 2D input `Tensor` as a sparse matrix, where the
    non-zero elements will be drawn from the normal distribution
    :math:`\mathcal{N}(0, 0.01)`, as described in "Deep learning via
    Hessian-free optimization" - Martens, J. (2010).
    Args:
        tensor: an n-dimensional `torch.Tensor`
        sparsity: The fraction of elements in each column to be set to zero
        std: the standard deviation of the normal distribution used to generate
            the non-zero values
    Examples:
        >>> w = torch.empty(3, 5)
        >>> nn.init.sparse_(w, sparsity=0.1)
    """
    if tensor.ndimension() != 2:
        raise ValueError("Only tensors with 2 dimensions are supported")

    rows, cols = tensor.shape
    num_zeros = int(math.ceil(sparsity * rows))

    with torch.no_grad():
        tensor.normal_(0, std)
        for col_idx in range(cols):
            row_indices = torch.randperm(rows)
            zero_indices = row_indices[:num_zeros]
            tensor[zero_indices, col_idx] = 0
    return tensor
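
  To make the meaning of sparsity concrete: it is the fraction of entries in each column that are forced to zero. A small check of mine, assuming the function behaves as in the source above:

import torch
import torch.nn as nn

w = torch.empty(10, 4)
nn.init.sparse_(w, sparsity=0.3, std=0.01)

# ceil(0.3 * 10) = 3 entries in each column are zeroed out
print((w == 0).sum(dim=0))   # tensor([3, 3, 3, 3])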

2 Model Initialization - CPN

2.1 resnet

  In the __init__ function of the ResNet class, right after the components are registered, there is the following code:

for m in self.modules():                                      # iterate over all submodules of the model
    if isinstance(m, nn.Conv2d):                              # isinstance: type check; the current module is a conv layer
        n = m.kernel_size[0] * m.kernel_size[1] * m.out_channels
        m.weight.data.normal_(0, math.sqrt(2. / n))           # normal-distribution initialization
    elif isinstance(m, nn.BatchNorm2d):                       # the current module is a batchnorm layer
        m.weight.data.fill_(1)                                # weight set to 1
        m.bias.data.zero_()                                   # bias set to 0

  This initializes every layer that has parameters.
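
  Incidentally, normal_(0, math.sqrt(2. / n)) with n = kernel_h * kernel_w * out_channels is exactly He/Kaiming normal initialization with mode='fan_out', so the same loop could also be written with the nn.init API. A sketch of mine (not the actual CPN code):

import torch.nn as nn

def init_weights(model):
    for m in model.modules():
        if isinstance(m, nn.Conv2d):
            # same as m.weight.data.normal_(0, sqrt(2/n)) with n = kh * kw * out_channels
            nn.init.kaiming_normal_(m.weight, mode='fan_out', nonlinearity='relu')
            if m.bias is not None:
                nn.init.zeros_(m.bias)
        elif isinstance(m, nn.BatchNorm2d):
            nn.init.ones_(m.weight)
            nn.init.zeros_(m.bias)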
  Inside the resnet50 function there is the following (if pretrained:) block:

def resnet50(pretrained=False, **kwargs):
    """Constructs a ResNet-50 model.
    Args:
        pretrained (bool): If True, returns a model pre-trained on ImageNet
    """
    model = ResNet(Bottleneck, [3, 4, 6, 3], **kwargs)
    if pretrained:
        print('Initialize with pre-trained ResNet')
        from collections import OrderedDict
        state_dict = model.state_dict()
        pretrained_state_dict = model_zoo.load_url(model_urls['resnet50'])
        for k, v in pretrained_state_dict.items():
            if k not in state_dict:
                continue
            state_dict[k] = v
        print('successfully load '+str(len(state_dict.keys()))+' keys')
        model.load_state_dict(state_dict)
    return model

  If a pre-trained model is available, it is loaded here, overwriting the randomly initialized parameters above.
  This touches on model loading, pre-trained models, fine-tuning, and so on; I plan to cover those topics in a separate post.
  Keep in mind that what we usually call loading a pre-trained model, or fine-tuning, is really about giving the model parameters better initial values, so the model converges better and faster, and sometimes reaches the goal with only a small amount of data. It is like teaching a child a new skill: if the child already has a good foundation in the area, teaching is much easier; if the child is a complete beginner, it costs us far more time and effort.

2.2 globalnet

  In the __init__ function of the globalNet class, right after the components are registered, there is the following code:
  The approach is the same as for resnet above.

for m in self.modules():
    if isinstance(m, nn.Conv2d):
        n = m.kernel_size[0] * m.kernel_size[1] * m.out_channels
        m.weight.data.normal_(0, math.sqrt(2. / n))
        if m.bias is not None:
            m.bias.data.zero_()
    elif isinstance(m, nn.BatchNorm2d):
        m.weight.data.fill_(1)
        m.bias.data.zero_()