Reading the Adam optimizer source code in PyTorch

1. Usage

torch.optim.Adam(params, lr=0.001, betas=(0.9, 0.999), eps=1e-08, weight_decay=0, amsgrad=False)

Parameters:
weight_decay: the weight-decay coefficient (as implemented here it is really an L2 penalty; see section 3)
amsgrad: whether to keep the running maximum of the second-moment (squared-gradient) history when updating
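
A minimal usage sketch (the model and data below are made up purely to show the call pattern):

    import torch
    import torch.nn as nn

    # toy model and data, just to illustrate how the optimizer is driven
    model = nn.Linear(10, 1)
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3, betas=(0.9, 0.999),
                                 eps=1e-8, weight_decay=1e-4, amsgrad=False)

    x, y = torch.randn(32, 10), torch.randn(32, 1)
    loss = nn.functional.mse_loss(model(x), y)

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()  # runs the step() method dissected below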

2. Source code

The implementation in the source code follows the L2-regularized Adam shown in the last figure.

    def step(self, closure=None):
        """Performs a single optimization step.

        Arguments:
            closure (callable, optional): A closure that reevaluates the model
                and returns the loss.
        """
        loss = None
        if closure is not None:
            loss = closure()

        for group in self.param_groups:
            for p in group['params']:
                if p.grad is None:
                    continue
                grad = p.grad.data
                if grad.is_sparse:
                    raise RuntimeError('Adam does not support sparse gradients, please consider SparseAdam instead')
                amsgrad = group['amsgrad']

                state = self.state[p]  # per-parameter state accumulated over previous steps

                # State initialization
                if len(state) == 0:
                    state['step'] = 0
                    # Exponential moving average of gradient values
                    state['exp_avg'] = torch.zeros_like(p.data)  # same shape as p.data
                    # Exponential moving average of squared gradient values
                    state['exp_avg_sq'] = torch.zeros_like(p.data)
                    if amsgrad:
                        # Maintains max of all exp. moving avg. of sq. grad. values
                        state['max_exp_avg_sq'] = torch.zeros_like(p.data)

                exp_avg, exp_avg_sq = state['exp_avg'], state['exp_avg_sq']  # first/second moment estimates carried over from the previous step
                if amsgrad:
                    # AMSGrad is a variant of Adam: it keeps a running maximum of the second-moment
                    # estimate so that the effective per-parameter step size never increases
                    max_exp_avg_sq = state['max_exp_avg_sq']
                beta1, beta2 = group['betas']

                state['step'] += 1
                bias_correction1 = 1 - beta1 ** state['step']
                bias_correction2 = 1 - beta2 ** state['step']
                # the step numbers in the comments below refer to the numbering in the last figure
                if group['weight_decay'] != 0:  # weight decay (actually L2 regularization)
                    # 6. grad = grad + weight_decay * p(t-1)
                    grad.add_(group['weight_decay'], p.data)

                # Decay the first and second moment running average coefficient
                # 7. m(t) = beta1 * m(t-1) + (1 - beta1) * grad
                exp_avg.mul_(beta1).add_(1 - beta1, grad)
                # 8. v(t) = beta2 * v(t-1) + (1 - beta2) * grad^2
                exp_avg_sq.mul_(beta2).addcmul_(1 - beta2, grad, grad)
                if amsgrad:
                    # Maintains the maximum of all 2nd moment running avg. till now
                    # update max_exp_avg_sq in place with the element-wise maximum, carrying the
                    # largest second-moment estimate seen so far into the next step
                    torch.max(max_exp_avg_sq, exp_avg_sq, out=max_exp_avg_sq)
                    # Use the max. for normalizing running avg. of gradient
                    denom = (max_exp_avg_sq.sqrt() / math.sqrt(bias_correction2)).add_(group['eps'])
                else:
                    # denom = sqrt(v(t)) / sqrt(1 - beta2^t) + eps = sqrt(v_hat(t)) + eps
                    denom = (exp_avg_sq.sqrt() / math.sqrt(bias_correction2)).add_(group['eps'])
                # step_size = lr / bias_correction1 = lr / (1 - beta1^t)
                step_size = group['lr'] / bias_correction1
                # p(t) = p(t-1) - step_size * m(t) / denom
                p.data.addcdiv_(-step_size, exp_avg, denom)

        return loss
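
Note that the snippet above is from an older PyTorch release and uses the old positional overloads such as `add_(scalar, tensor)` and `addcmul_(scalar, t1, t2)`. Current PyTorch spells these with keyword arguments; the moment updates and the final parameter update would look like this (same variable names as above):

    # in-place first-moment update: m = beta1 * m + (1 - beta1) * grad
    exp_avg.mul_(beta1).add_(grad, alpha=1 - beta1)
    # in-place second-moment update: v = beta2 * v + (1 - beta2) * grad * grad
    exp_avg_sq.mul_(beta2).addcmul_(grad, grad, value=1 - beta2)
    # parameter update: p = p - step_size * m / denom
    p.data.addcdiv_(exp_avg, denom, value=-step_size)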

For the final update step:

$$denom = \frac{\sqrt{v_t}}{\sqrt{1-\beta_2^t}} + \epsilon = \sqrt{\hat{v}_t} + \epsilon$$

$$p_t = p_{t-1} - step\_size \cdot \frac{m_t}{denom} = p_{t-1} - \frac{lr}{1-\beta_1^t} \cdot \frac{m_t}{\sqrt{\hat{v}_t}+\epsilon} = p_{t-1} - \frac{lr \cdot \hat{m}_t}{\sqrt{\hat{v}_t}+\epsilon}$$

where $\hat{m}_t = m_t/(1-\beta_1^t)$ and $\hat{v}_t = v_t/(1-\beta_2^t)$ are the bias-corrected moments. With the learning rate $lr$ playing the role of the step size $\alpha$, this is exactly step 12 in the last figure (the L2-regularization branch).
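
As a quick numerical check of this equivalence, the sketch below compares the update as computed in the source with the bias-corrected textbook form (the moment tensors are made-up toy values):

    import math
    import torch

    torch.manual_seed(0)
    lr, beta1, beta2, eps, t = 1e-3, 0.9, 0.999, 1e-8, 10
    m = torch.rand(5)        # exp_avg after t steps (toy values)
    v = torch.rand(5) + 0.1  # exp_avg_sq after t steps (toy values)

    bc1, bc2 = 1 - beta1 ** t, 1 - beta2 ** t

    # update as the source computes it: (lr / bc1) * m / (sqrt(v) / sqrt(bc2) + eps)
    denom = v.sqrt() / math.sqrt(bc2) + eps
    update_src = (lr / bc1) * m / denom

    # textbook Adam update: lr * m_hat / (sqrt(v_hat) + eps)
    m_hat, v_hat = m / bc1, v / bc2
    update_paper = lr * m_hat / (v_hat.sqrt() + eps)

    print(torch.allclose(update_src, update_paper))  # True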
Algorithm:
(the version given in the Deep Learning book; PyTorch's Adam is not written in the form below)
[Figure: Adam pseudocode from the Deep Learning book]

3. The relationship between weight decay and L2 regularization in Adam

In SGD, weight decay and L2 regularization are equivalent; in Adam and other adaptive optimizers (AdaGrad, RMSProp, etc.) they are not.
PyTorch's Adam actually applies L2 regularization (the red part in the figure below), whereas AdamW uses true weight decay (the dark-yellow part in the figure below). The only difference between the two is where the decay term is applied; everything else is identical.
[Figure: Adam with L2 regularization (red) vs. AdamW with decoupled weight decay (dark yellow)]
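
For a concrete feel of the difference, the toy sketch below feeds the same parameter and gradient to torch.optim.Adam and torch.optim.AdamW with the same weight_decay; because Adam folds the decay term into the gradient (and thus into the adaptive scaling) while AdamW applies it directly to the parameter, the resulting updates differ:

    import torch

    def make_param():
        # same toy parameter and gradient for both optimizers
        p = torch.nn.Parameter(torch.tensor([1.0, -2.0]))
        p.grad = torch.tensor([0.1, 0.3])
        return p

    p_adam, p_adamw = make_param(), make_param()

    # Adam: weight_decay acts as L2 regularization (decay term added to the gradient)
    torch.optim.Adam([p_adam], lr=1e-2, weight_decay=1e-2).step()

    # AdamW: weight_decay is decoupled (decay applied directly to the parameter)
    torch.optim.AdamW([p_adamw], lr=1e-2, weight_decay=1e-2).step()

    print(p_adam.data)   # update where the decay went through the adaptive scaling
    print(p_adamw.data)  # update where the decay was applied separately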
