[arbre.dl--003] Installing PyTorch in Anaconda

1. Official site: https://pytorch.org/get-started/locally/

2. Create a virtual environment named pytorch1.5.1 with Python 3.6:

conda create -n pytorch1.5.1 python=3.6
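
Then activate it so that the following installs go into this environment (standard conda usage):

conda activate pytorch1.5.1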

3. Following the official site's instructions, generate the install command and install the CPU-only build of PyTorch 1.5.1, which does not require CUDA support:

conda install pytorch torchvision cpuonly -c pytorch
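
Note that this unpinned command installs the latest release available at install time. To pin version 1.5.1 explicitly, the previous-versions page on pytorch.org lists a pinned variant of the form:

conda install pytorch==1.5.1 torchvision==0.6.1 cpuonly -c pytorch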

4. Verify the installation:

import torch

print(torch.__version__)

If the output is '1.5.1', the installation succeeded.
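
As an extra check that the CPU-only build was installed, CUDA should be reported as unavailable. A minimal check:

import torch

print(torch.__version__)          # expected: 1.5.1
print(torch.cuda.is_available())  # expected: False for the cpuonly build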

5. A minimal test example, a.py (a two-layer network trained with manually computed gradients):

# -*- coding: utf-8 -*-
 
import torch
 
 
dtype = torch.float
device = torch.device("cpu")
# device = torch.device("cuda:0") # Uncomment this to run on GPU
 
# N is batch size; D_in is input dimension;
# H is hidden dimension; D_out is output dimension.
N, D_in, H, D_out = 64, 1000, 100, 10
 
# Create random input and output data
x = torch.randn(N, D_in, device=device, dtype=dtype)
y = torch.randn(N, D_out, device=device, dtype=dtype)
 
# Randomly initialize weights
w1 = torch.randn(D_in, H, device=device, dtype=dtype)
w2 = torch.randn(H, D_out, device=device, dtype=dtype)
 
learning_rate = 1e-6
for t in range(500):
    # Forward pass: compute predicted y
    h = x.mm(w1)
    h_relu = h.clamp(min=0)
    y_pred = h_relu.mm(w2)
 
    # Compute and print loss
    loss = (y_pred - y).pow(2).sum().item()
    print(t, loss)
 
    # Backprop to compute gradients of w1 and w2 with respect to loss
    grad_y_pred = 2.0 * (y_pred - y)
    grad_w2 = h_relu.t().mm(grad_y_pred)
    grad_h_relu = grad_y_pred.mm(w2.t())
    grad_h = grad_h_relu.clone()
    grad_h[h < 0] = 0
    grad_w1 = x.t().mm(grad_h)
 
    # Update weights using gradient descent
    w1 -= learning_rate * grad_w1
    w2 -= learning_rate * grad_w2

Output:

0 29138108.0
1 23484640.0
2 20555388.0
...

496 3.662774179247208e-05
497 3.607522012316622e-05
498 3.569219552446157e-05
499 3.529985042405315e-05
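
For comparison, the same network can be trained with autograd instead of hand-coded backprop. This is a minimal sketch using the same shapes and learning rate as a.py (the variable names follow the example above; the no_grad/zero_ update pattern is standard PyTorch usage):

# -*- coding: utf-8 -*-

import torch

dtype = torch.float
device = torch.device("cpu")

N, D_in, H, D_out = 64, 1000, 100, 10

x = torch.randn(N, D_in, device=device, dtype=dtype)
y = torch.randn(N, D_out, device=device, dtype=dtype)

# requires_grad=True tells autograd to track operations on these weights
w1 = torch.randn(D_in, H, device=device, dtype=dtype, requires_grad=True)
w2 = torch.randn(H, D_out, device=device, dtype=dtype, requires_grad=True)

learning_rate = 1e-6
for t in range(500):
    # Forward pass: same computation as the manual version
    y_pred = x.mm(w1).clamp(min=0).mm(w2)

    loss = (y_pred - y).pow(2).sum()
    print(t, loss.item())

    # Backward pass: autograd computes w1.grad and w2.grad
    loss.backward()

    # Update weights in place; no_grad keeps the update out of the graph
    with torch.no_grad():
        w1 -= learning_rate * w1.grad
        w2 -= learning_rate * w2.grad

        # Clear gradients before the next iteration
        w1.grad.zero_()
        w2.grad.zero_()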
 

 
