Learning Torch's Common Modules

1. The nn module

nn.Identity()

This creates a module that passes its input through unchanged; it is typically used at the input layer of a neural network. Usage is as follows:

require 'nn'

mlp = nn.Identity()
print(mlp:forward(torch.ones(5, 2)))

This pattern is useful in residual learning, where the input has to be carried unchanged across a block.
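
As a minimal sketch (the Linear(10, 10) transform branch is a made-up placeholder), nn.Identity() can serve as the shortcut path of a residual block:

require 'nn'

-- Residual block sketch: ConcatTable runs both branches on the same
-- input, and CAddTable adds the results back together.
block = nn.Sequential()
   :add(nn.ConcatTable()
      :add(nn.Linear(10, 10))    -- transform branch (placeholder)
      :add(nn.Identity()))       -- shortcut branch
   :add(nn.CAddTable())
print(block:forward(torch.randn(4, 10)):size())   -- 4x10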

If several inputs need to be kept around, several nn.Identity() modules can be used:

mlp = nn.Identity()
nlp = nn.Identity()
print(mlp:forward(torch.ones(5, 2)))
print(nlp:forward(torch.ones(5, 2)))

Multiple inputs are very handy when assembling a network: the identities act as containers that keep every input available.

The LSTM below is a good example. Because an LSTM is a recurrent network, it has to carry over the previous timestep's state, and nn.Identity() nodes preserve that information cleanly.

require 'nngraph'

local inputs = {}
table.insert(inputs, nn.Identity()())   -- network input
table.insert(inputs, nn.Identity()())   -- c at time t-1
table.insert(inputs, nn.Identity()())   -- h at time t-1
local input  = inputs[1]
local prev_c = inputs[2]
local prev_h = inputs[3]
th> LSTM = require 'LSTM.lua'
                                                                 [0.0224s]
th> layer = LSTM.create(3, 2)
                                                                 [0.0019s]
th> layer:forward({torch.randn(1,3), torch.randn(1,2), torch.randn(1,2)})
{
  1 : DoubleTensor - size: 1x2
  2 : DoubleTensor - size: 1x2
}
                                                                 [0.0005s]


nn.Squeeze()

This removes the size-1 dimensions from the input tensor. Here is the example from the official docs:

x=torch.rand(2,1,2,1,2)
> x
(1,1,1,.,.) =
  0.6020  0.8897

(2,1,1,.,.) =
  0.4713  0.2645

(1,1,2,.,.) =
  0.4441  0.9792

(2,1,2,.,.) =
  0.5467  0.8648
[torch.DoubleTensor of dimension 2x1x2x1x2]

Concretely, the tensor's shape looks like this:

+-------------------------------+
| +---------------------------+ |
| | +-----------------------+ | |
| | |   0.6020  0.8897      | | |
| | +-----------------------+ | |
| | +-----------------------+ | |
| | |   0.4441  0.9792      | | |
| | +-----------------------+ | |
| +---------------------------+ |
|                               |
| +---------------------------+ |
| | +-----------------------+ | |
| | |   0.4713  0.2645      | | |
| | +-----------------------+ | |
| | +-----------------------+ | |
| | |   0.5467  0.8648      | | |
| | +-----------------------+ | |
| +---------------------------+ |
+-------------------------------+

Applying the squeeze operation (shown here in its functional form, torch.squeeze):

> torch.squeeze(x)
(1,.,.) =
  0.6020  0.8897
  0.4441  0.9792

(2,.,.) =
  0.4713  0.2645
  0.5467  0.8648
[torch.DoubleTensor of dimension 2x2x2]


+-------------------------------+
|       0.6020  0.8897          |
|       0.4441  0.9792          |
+-------------------------------+
+-------------------------------+
|       0.4713  0.2645          |
|       0.5467  0.8648          |
+-------------------------------+
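
The example above uses the functional torch.squeeze; the nn.Squeeze() module does the same thing inside a network. A minimal sketch:

require 'nn'

x = torch.rand(2, 1, 2, 1, 2)
mod = nn.Squeeze()               -- module form of torch.squeeze
print(mod:forward(x):size())     -- 2x2x2: every size-1 dimension removed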

nn.JoinTable()

This is the counterpart of TensorFlow's concat operation, although personally I find concat more convenient to use. Overall, Torch code is not as concise as TensorFlow's, but it is more efficient.

module = JoinTable(dimension, nInputDims)
+----------+             +-----------+
| {input1, +-------------> output[1] |
|          |           +-----------+-+
|  input2, +-----------> output[2] |
|          |         +-----------+-+
|  input3} +---------> output[3] |
+----------+         +-----------+

An example:

x = torch.randn(5, 1)
y = torch.randn(5, 1)
z = torch.randn(2, 1)

print(nn.JoinTable(1):forward{x, y})
print(nn.JoinTable(2):forward{x, y})
print(nn.JoinTable(1):forward{x, z})

 1.3965
 0.5146
-1.5244
-0.9540
 0.4256
 0.1575
 0.4491
 0.6580
 0.1784
-1.7362
[torch.DoubleTensor of dimension 10x1]

 1.3965  0.1575
 0.5146  0.4491
-1.5244  0.6580
-0.9540  0.1784
 0.4256 -1.7362
[torch.DoubleTensor of dimension 5x2]

 1.3965
 0.5146
-1.5244
-0.9540
 0.4256
-1.2660
 1.0869
[torch.DoubleTensor of dimension 7x1]
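
As a sketch of how JoinTable is typically used inside a network (the layer sizes here are made up), it can merge the outputs of parallel branches:

require 'nn'

-- Two parallel branches each map a 3-vector to a 5-vector;
-- JoinTable(1) concatenates them into a single 10-vector.
net = nn.Sequential()
   :add(nn.ConcatTable()
      :add(nn.Linear(3, 5))
      :add(nn.Linear(3, 5)))
   :add(nn.JoinTable(1))
print(net:forward(torch.randn(3)):size())   -- 10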

nn.gModule()

nngraph extends nn with modules organized as a directed acyclic graph. Once all the nodes have been built, nn.gModule() assembles them into a graph.

module = nn.gModule(input, output)

Here input and output can each be a single node or a table of nodes. The call builds a graph running from input to output, in which every module created earlier, called with its inputs appended in a second pair of parentheses, becomes a node.
A simple example:

require 'nngraph'

x1 = nn.Identity()()
x2 = nn.Identity()()
a = nn.CAddTable()({x1, x2})
m = nn.gModule({x1, x2}, {a})

Diagrammatically:

 ____    ____
|    |  |    |
|____|  |____|
  | x1    | x2
   \      /
    \    /
    _\  /_
   |      |
   |______|
      | a
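
A quick sanity check that the assembled graph adds its two inputs (the size 4 here is arbitrary):

out = m:forward({torch.randn(4), torch.ones(4)})
print(out)   -- elementwise sum of the two inputs, a DoubleTensor of size 4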

nn.SpatialConvolution()

module = nn.SpatialConvolution(nInputPlane, nOutputPlane, kW, kH, [dW], [dH], [padW], [padH])
or: cudnn.SpatialConvolution(nInputPlane, nOutputPlane, width, height, [dW = 1], [dH = 1], [padW = 0], [padH = 0], [groups = 1])
  • nInputPlane: The number of expected input planes in the image given into forward().
  • nOutputPlane: The number of output planes the convolution layer will produce.
  • kW: The kernel width of the convolution
  • kH: The kernel height of the convolution
  • dW: The step of the convolution in the width dimension. Default is 1.
  • dH: The step of the convolution in the height dimension. Default is 1.
  • padW: The additional zeros added per width to the input planes. Default is 0, a good number is (kW-1)/2.
  • padH: The additional zeros added per height to the input planes. Default is padW, a good number is (kH-1)/2.
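
As an illustrative sketch (the channel counts and image size below are arbitrary), choosing padW = (kW-1)/2 and padH = (kH-1)/2 with stride 1 preserves the spatial size:

require 'nn'

-- 3 input planes, 16 output planes, 5x5 kernels, stride 1, padding 2
conv = nn.SpatialConvolution(3, 16, 5, 5, 1, 1, 2, 2)
input = torch.randn(3, 32, 32)
-- output width = floor((32 + 2*2 - 5)/1 + 1) = 32, likewise for height
print(conv:forward(input):size())   -- 16x32x32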