Method 1: reduce the batch size
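A minimal sketch of what this looks like in practice, assuming a standard DataLoader setup (the dataset shapes here are hypothetical): halving batch_size roughly halves the activation memory each forward pass needs.

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Dummy data standing in for a real dataset (hypothetical shapes).
data = torch.randn(100, 1, 28, 28)
labels = torch.randint(0, 10, (100,))
dataset = TensorDataset(data, labels)

# A smaller batch_size lowers peak GPU memory per iteration.
loader = DataLoader(dataset, batch_size=8, shuffle=True)

imgs, targets = next(iter(loader))
print(imgs.shape)  # torch.Size([8, 1, 28, 28])
```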
Method 2: run inference under torch.no_grad():

with torch.no_grad():
    net = Net()
    out = net(imgs)

The autograd graph built during the forward pass (and the intermediate activations it retains for backward) stays in GPU memory; wrapping inference in this context manager stops PyTorch from recording operations for gradient computation, so that memory is never allocated.
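A self-contained sketch of the effect, using a toy nn.Linear as a stand-in for the author's Net(): outside the context the output carries a grad_fn, inside it no graph is built.

```python
import torch
import torch.nn as nn

net = nn.Linear(10, 2)     # stand-in for the author's Net()
imgs = torch.randn(4, 10)  # dummy input batch

# With grad tracking: the output is attached to an autograd graph,
# so intermediate activations are kept alive for a future backward().
out_tracked = net(imgs)
print(out_tracked.requires_grad)  # True

# Inside no_grad(): no graph is recorded, so nothing is retained
# for backward -- this is what frees the memory during inference.
with torch.no_grad():
    out_free = net(imgs)
print(out_free.requires_grad)  # False
```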
Method 3: load the model onto the CPU first, then move it to the GPU:

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
model_path = 'path/to/model.pt'
model = UNet(n_channels=1, n_classes=1)
state_dict = torch.load(model_path, map_location='cpu')  # deserialize weights into host RAM
model.load_state_dict(state_dict)
model.to(device)

With map_location='cpu', the checkpoint is deserialized into CPU memory instead of being restored directly onto the GPU, which avoids a memory spike on the device during loading.
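The same pattern as a runnable end-to-end sketch, with a hypothetical TinyNet standing in for UNet so the example is self-contained (it saves its own checkpoint first):

```python
import torch
import torch.nn as nn

# TinyNet is a stand-in for the author's UNet, purely for illustration.
class TinyNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(1, 1, kernel_size=3, padding=1)

    def forward(self, x):
        return self.conv(x)

model_path = 'tiny_net.pt'
torch.save(TinyNet().state_dict(), model_path)  # create a checkpoint to load

# map_location='cpu' keeps deserialization entirely in host RAM,
# so GPU memory is untouched until the explicit .to(device) below.
model = TinyNet()
state_dict = torch.load(model_path, map_location='cpu')
model.load_state_dict(state_dict)

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
model.to(device)
```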