Basic networks in NLP

Text in NLP can generally be represented as a tensor of shape [batch, seq, embed_dim].

  1. CNN
    1D convolution is typically used. Since Conv1d operates along the last dimension, the text must first be reshaped from [batch, seq, embed_dim] to [batch, embed_dim, seq] (see the transpose sketch after the code below).
import torch
import torch.nn as nn

# 1D convolution operates along the last dimension
m = nn.Conv1d(in_channels=16, out_channels=33, kernel_size=3, stride=2)
input = torch.randn(20, 16, 50)  # [batch, seq, hidden_in] must first be transposed to [batch, hidden_in, seq]
output = m(input)  # [20, 33, 24] = [batch, out_channels, L_out]
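A minimal sketch of the transpose step mentioned above (x is an illustrative name), reusing m from the snippet:

x = torch.randn(20, 50, 16)  # [batch, seq, embed_dim], the usual NLP layout
x = x.transpose(1, 2)        # -> [batch, embed_dim, seq], the layout Conv1d expects
out = m(x)                   # [20, 33, 24]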

$L_{out} = \left\lfloor \frac{L_{in} + 2 \times padding - (kernel\_size - 1) - 1}{stride} \right\rfloor + 1$
Input shape: [batch, in_channels, seq]
Output shape: [batch, out_channels, L_out]
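Plugging the Conv1d example above into this formula as a quick check (L_in = 50, padding = 0, kernel_size = 3, stride = 2):

# floor((50 + 2*0 - (3 - 1) - 1) / 2) + 1 = floor(47 / 2) + 1 = 24
L_out = (50 + 2 * 0 - (3 - 1) - 1) // 2 + 1
print(L_out)  # 24, matching output.shape[-1] above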
2D convolution is analogous to 1D convolution, except it convolves over the last two dimensions.

# 2D convolution
m = nn.Conv2d(in_channels=16, out_channels=33, kernel_size=3, stride=2)
# non-square kernels and unequal stride and with padding
# m = nn.Conv2d(16, 33, (3, 5), stride=(2, 1), padding=(4, 2))
# non-square kernels and unequal stride and with padding and dilation
# m = nn.Conv2d(16, 33, (3, 5), stride=(2, 1), padding=(4, 2), dilation=(3, 1))
input = torch.randn(20, 16, 50, 100)
output = m(input) # [20, 33, 24, 49]
# 3D convolution
# With square kernels and equal stride
m = nn.Conv3d(in_channels=16, out_channels=33, kernel_size=3, stride=2)
# non-square kernels and unequal stride and with padding
# m = nn.Conv3d(16, 33, (3, 5, 2), stride=(2, 1, 1), padding=(4, 2, 0))
input = torch.randn(20, 16, 10, 50, 100)
output = m(input)  # [20, 33, 4, 24, 49]
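The same length formula applies independently to each spatial dimension of Conv2d and Conv3d; a quick check against the Conv2d example above:

# height: (50 - (3 - 1) - 1) // 2 + 1 = 24, width: (100 - (3 - 1) - 1) // 2 + 1 = 49
h_out = (50 - (3 - 1) - 1) // 2 + 1   # 24
w_out = (100 - (3 - 1) - 1) // 2 + 1  # 49
print(h_out, w_out)  # matches the output shape [20, 33, 24, 49]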
  2. RNN
    Input shape: [seq, batch, input_size]
    h0: [num_layers*num_directions, batch, hidden]; optional
    Output shapes:
    output: [seq, batch, hidden] (the last dimension doubles when bidirectional)
    hn: [num_layers*num_directions, batch, hidden]
# RNN
rnn = nn.RNN(input_size=10, hidden_size=20, num_layers=2, bidirectional=False)
input = torch.randn(5, 3, 10)  # [time_step, batch, feature] = [seq, batch, input_size]
h0 = torch.randn(2, 3, 20)     # [num_layers*num_directions, batch, hidden]
output, hn = rnn(input, h0)
# output: [seq, batch, hidden]  [5, 3, 20]  (last dim is hidden*2 when bidirectional)
# hn: [num_layers*num_directions, batch, hidden]  [2, 3, 20]
# num_directions: 2 if bidirectional, 1 if unidirectional
# batch_first: when True, the input shape is [batch, seq, input_size]; default False, input shape [seq, batch, input_size]
# when batch_first=False: if bidirectional=False, output[-1] == hn[-1]; if bidirectional=True, they are not equal
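A minimal sketch verifying the last comment above, reusing the unidirectional rnn just defined:

out, hn = rnn(torch.randn(5, 3, 10))  # h0 omitted, defaults to zeros
print(torch.equal(out[-1], hn[-1]))   # True: the top layer's last time step equals hn's last slice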
  3. LSTM
lstm = nn.LSTM(input_size=10, hidden_size=20, num_layers=2)
input = torch.randn(5, 3, 10)  # [time_step, batch, feature] = [seq, batch, input_size]
# h0 and c0 are optional; they default to zeros when omitted
h0 = torch.randn(2, 3, 20)  # [num_layers*num_directions, batch, hidden]
c0 = torch.randn(2, 3, 20)  # [num_layers*num_directions, batch, hidden]
# With 2 LSTM layers here, output holds the top layer's hidden state at every time step;
# its length depends on the sequence length, not on the number of layers
output, (hn, cn) = lstm(input, (h0, c0))
# output: [seq, batch, hidden_size]  [5, 3, 20]
# hn: [num_layers*num_directions, batch, hidden]  [2, 3, 20]
# cn: [num_layers*num_directions, batch, hidden]  [2, 3, 20]
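A minimal sketch of the bidirectional case (bilstm is an illustrative name), showing how the shapes change:

bilstm = nn.LSTM(input_size=10, hidden_size=20, num_layers=2, bidirectional=True)
output, (hn, cn) = bilstm(torch.randn(5, 3, 10))
print(output.shape)  # [5, 3, 40] = [seq, batch, hidden*num_directions]
print(hn.shape)      # [4, 3, 20] = [num_layers*num_directions, batch, hidden]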