L1 regularization

regularization_loss = 0
for param in model.parameters():
    regularization_loss += torch.sum(torch.abs(param))  # sum of |w| over all parameters

classify_loss = criteon(logits, target)  # base classification loss
loss = classify_loss + 0.01 * regularization_loss  # 0.01 is the L1 weight (lambda)

optimizer.zero_grad()
loss.backward()
optimizer.step()
L1 regularization currently has to be written by hand in PyTorch, as above.
L2 regularization can be added the same way; PyTorch optimizers also support it directly via the weight_decay argument.
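A minimal runnable sketch of the manual L1 pattern above. The model, batch size, and the 0.01 penalty weight are placeholders chosen for illustration; in practice model, criteon, logits, and target come from your own training loop.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Hypothetical tiny classifier standing in for a real model
model = nn.Linear(4, 3)
criteon = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

x = torch.randn(8, 4)                 # dummy input batch
target = torch.randint(0, 3, (8,))    # dummy class labels
logits = model(x)

# Manual L1 penalty: sum of |w| over every parameter tensor
regularization_loss = sum(torch.sum(torch.abs(p)) for p in model.parameters())

classify_loss = criteon(logits, target)
loss = classify_loss + 0.01 * regularization_loss  # 0.01 is the L1 weight

optimizer.zero_grad()
loss.backward()
optimizer.step()
```

For L2 the same pattern works with torch.sum(param ** 2), but passing weight_decay=0.01 to the optimizer achieves L2 regularization without any manual code.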