A paper from the 2018 ECCV Workshops:
Fast and Efficient Image Quality Enhancement via Desubpixel Convolutional Neural Networks
In the paper, the desubpixel convolution is implemented with a single TensorFlow call, tf.space_to_depth(X, r). My own work uses the PyTorch framework, so I reimplemented the desubpixel convolution operation myself.
Desubpixel convolution (scale factor r = 2):
import torch

def de_subpix(y):  # input: a torch tensor of shape (b, c, h, w); h and w must be even
    (b, c, h, w) = y.shape
    h1 = h // 2
    w1 = w // 2
    # One buffer per position of the 2x2 block; new_zeros keeps y's dtype and device
    d1 = y.new_zeros((b, c, h1, w1))
    d2 = y.new_zeros((b, c, h1, w1))
    d3 = y.new_zeros((b, c, h1, w1))
    d4 = y.new_zeros((b, c, h1, w1))
    for i in range(h1):  # step must be 1, not 2: every output position gets filled
        for j in range(w1):
            d1[:, :, i, j] = y[:, :, 2 * i, 2 * j]          # top-left
            d2[:, :, i, j] = y[:, :, 2 * i + 1, 2 * j]      # bottom-left
            d3[:, :, i, j] = y[:, :, 2 * i, 2 * j + 1]      # top-right
            d4[:, :, i, j] = y[:, :, 2 * i + 1, 2 * j + 1]  # bottom-right
    out = torch.cat([d1, d2, d3, d4], 1)  # stack the four copies along the channel dim
    return out  # output: a rearranged torch tensor of shape (b, 4c, h/2, w/2)
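The Python double loop is slow for large feature maps. A minimal sketch of an equivalent version using strided slicing (the function name de_subpix_vectorized is mine, not from the paper), producing the same channel ordering without any Python loops:

```python
import torch

def de_subpix_vectorized(y):
    """Desubpixel rearrangement (r = 2) via strided slicing; no Python loops."""
    # Each slice picks one position of every 2x2 block; the concatenation
    # order (TL, BL, TR, BR) matches the loop-based de_subpix above.
    return torch.cat([
        y[:, :, 0::2, 0::2],  # top-left
        y[:, :, 1::2, 0::2],  # bottom-left
        y[:, :, 0::2, 1::2],  # top-right
        y[:, :, 1::2, 1::2],  # bottom-right
    ], dim=1)

x = torch.arange(16.).reshape(1, 1, 4, 4)
print(de_subpix_vectorized(x).shape)  # torch.Size([1, 4, 2, 2])
```

Since slicing only creates views, the single torch.cat is the only copy, and the whole operation stays on the GPU if the input does.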
This function halves the spatial size of the image in each dimension without losing any data: every pixel is preserved, just moved into one of the four channel groups.
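As an aside, newer PyTorch versions (1.8 and later, if I recall correctly) ship torch.nn.functional.pixel_unshuffle, which performs the same spatial-to-channel rearrangement as tf.space_to_depth. Note that its channel ordering differs from the hand-written de_subpix above (it orders the 2x2 positions row-major per input channel), so the two are not drop-in replacements for an already-trained model:

```python
import torch
import torch.nn.functional as F

x = torch.randn(1, 3, 8, 8)
# Built-in desubpixel / space_to_depth-style op with downscale factor 2
out = F.pixel_unshuffle(x, downscale_factor=2)
print(out.shape)  # torch.Size([1, 12, 4, 4])
```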