0x00 Outline
- paper: “Beyond Tracking: Selecting Memory and Refining Poses for Deep Visual Odometry” https://arxiv.org/abs/1904.01892
- code: none released
- Three components: Tracking, Memory, Refining
0x01 Recent related work & references to look up
- Joint depth-and-pose learning: [16, 19, 36, 37, 39]
- RNNs for temporal information: [14, 22, 31–33]
- Why input clips cannot exceed 5 frames: the high dimensionality of depth maps
- Before deep learning, VO was handled by minimizing geometric reprojection error [10, 18, 20] and photometric consistency error [7, 8, 30]
- SfmLearner was the first unsupervised-learning paper; supervised methods include DeMoN, DeepTAM, MapNet, DeepVO, ESP-VO, and GFS-VO (interesting: rotation and translation evaluated separately; uses LSTMs)
- Introducing relative pose constraints to reduce local error:
[4] S. Brahmbhatt, J. Gu, K. Kim, J. Hays, and J. Kautz. MapNet: Geometry-aware Learning of Maps for Camera Localization. In CVPR, 2018.
[14] G. Iyer, J. Krishna Murthy, G. Gupta, K. M. Krishna, and L. Paull. Geometric Consistency for Self-supervised End-to-end Visual Odometry. In CVPR Workshops, 2018.
[22] E. Parisotto, D. Singh Chaplot, J. Zhang, and R. Salakhutdinov. Global Pose Estimation with an Attention-based Recurrent Network. In CVPR Workshops, 2018.
- Other:
[32] S. Wang, R. Clark, H. Wen, and N. Trigoni. End-to-end, Sequence-to-sequence Probabilistic Visual Odometry through Deep Neural Networks. IJRR, 2018.
[33] F. Xue, Q. Wang, X. Wang, W. Dong, J. Wang, and H. Zha. Guided Feature Selection for Deep Visual Odometry. In ACCV, 2018.
[5] R. Clark, S. Wang, A. Markham, N. Trigoni, and H. Wen. VidLoc: A Deep Spatio-temporal Model for 6-DoF Videoclip Relocalization. In CVPR, 2017.
The learning-based baselines include supervised approaches such as DeepVO [31], ESP-VO [32], GFS-VO [33], and unsupervised approaches such as SfmLearner [39], Depth-VO-Feat [37], GeoNet [36], Vid2Depth [19] and UndeepVO [16].
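As a reminder of the classical formulation mentioned above (VO as minimization of geometric reprojection error), here is a minimal sketch for a single 3D point. All names and values are illustrative, not from the paper:

```python
import numpy as np

def reprojection_error(X_world, R, t, K, uv_observed):
    """Geometric reprojection error for one 3D point.

    X_world: 3D point in world coordinates, shape (3,)
    R, t: camera rotation (3x3) and translation (3,)
    K: 3x3 camera intrinsics
    uv_observed: measured pixel location, shape (2,)
    """
    X_cam = R @ X_world + t          # world -> camera frame
    uvw = K @ X_cam                  # project with intrinsics
    uv = uvw[:2] / uvw[2]            # perspective divide
    return np.linalg.norm(uv - uv_observed)

# Toy check: identity pose, point straight ahead at depth 2m.
K = np.array([[500.0,   0.0, 320.0],
              [  0.0, 500.0, 240.0],
              [  0.0,   0.0,   1.0]])
X = np.array([0.0, 0.0, 2.0])
err = reprojection_error(X, np.eye(3), np.zeros(3), K,
                         np.array([320.0, 240.0]))
print(err)  # 0.0: the point projects exactly onto the principal point
```

Classical pipelines sum this error over many point-observation pairs and minimize it over poses (and structure); learning-based methods replace the explicit optimization with a network.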
0x02 Network details
Encoder
- Based on FlowNet; predicts optical flow between two frames and outputs 1024 2D feature maps.
Tracking module
- Contains two components: a ConvLSTM and an SE(3) layer. The former is a variant of the LSTM; plain LSTMs are used in DeepVO and ESP-VO, while the ConvLSTM preserves more spatial information. The latter computes the relative pose between two camera positions, producing a 6-DoF estimate; the global pose is computed as in DeepVO and ESP-VO.
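The global-pose computation borrowed from DeepVO / ESP-VO is just chaining per-step relative transforms. A generic sketch with homogeneous 4x4 matrices (my own illustration, not the paper's code):

```python
import numpy as np

def se3_matrix(R, t):
    """Build a 4x4 homogeneous transform from rotation R (3x3) and translation t (3,)."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def accumulate_poses(relative_poses):
    """Chain per-frame relative transforms into global camera poses."""
    global_poses = [np.eye(4)]            # first frame is the world origin
    for T_rel in relative_poses:
        global_poses.append(global_poses[-1] @ T_rel)
    return global_poses

# Two identical forward steps of 1m along the z axis:
step = se3_matrix(np.eye(3), np.array([0.0, 0.0, 1.0]))
poses = accumulate_poses([step, step])
print(poses[-1][:3, 3])  # [0. 0. 2.]
```

Because each global pose multiplies all previous relative estimates, per-step errors accumulate over the trajectory, which is exactly what the Memory and Refining modules are meant to mitigate.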
Memory module
- Borrows from classic VO/SLAM systems such as ORB-SLAM; it compensates for the ConvLSTM's inability to remember information over long time spans.
Refining module
- Estimates the absolute pose of each frame, also using a ConvLSTM. From here on I start to lose the thread; reading the paper feels like guesswork.
- The upper/lower figure here is not very clear to me.
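My rough understanding of the spatial-temporal attention used during refinement, reduced to its simplest ingredient: a query feature attends over stored memory features via softmax-weighted similarity. This is a toy sketch of the general attention mechanism, not the paper's architecture:

```python
import numpy as np

def attend(query, memory):
    """Weight memory slots by dot-product similarity to the query.

    query: (d,) feature for the frame being refined
    memory: (n, d) features stored by the Memory module
    Returns attention weights (n,) and the attended context vector (d,).
    """
    scores = memory @ query                        # (n,) similarity scores
    scores = scores - scores.max()                 # numerical stability
    weights = np.exp(scores) / np.exp(scores).sum()
    context = weights @ memory                     # weighted sum of slots
    return weights, context

# Three toy memory slots; the query is most aligned with slot 0.
memory = np.array([[2.0, 0.0],
                   [0.0, 1.0],
                   [1.0, 1.0]])
query = np.array([1.0, 0.0])
weights, context = attend(query, memory)
print(weights.argmax())  # 0: the slot most aligned with the query
```

The context vector blends the relevant stored observations, which the refining network can then use to correct the current absolute pose estimate.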
0x03 The authors' experiments
- Datasets: KITTI [9] and TUM-RGBD [26]
- The encoder is initialized from a model pre-trained on the FlyingChairs dataset
0x04 Personal summary
- VISO2-M is worth looking into: a monocular VO algorithm that recovers pose.
- This is a supervised, end-to-end monocular visual odometry method; what I am currently studying is unsupervised, so for me it broadens horizons more than it has practical value.
- It introduces two modules, Memory and Refining. The former is easy to understand; the latter left me baffled. The latter also uses a spatial-temporal attention mechanism.
- The experimental results are compared against both classic algorithms and learning-based VO.
- No source code released, which is fatal.