
  • ISSN 1006-3080
  • CN 31-1691/TQ

ORBTSDF-SCNet: An Online 3D Reconstruction Method for Dynamic Scenes

LI Xiangyu, ZHANG Xueqin

Citation: LI Xiangyu, ZHANG Xueqin. ORBTSDF-SCNet: An Online 3D Reconstruction Method for Dynamic Scenes[J]. Journal of East China University of Science and Technology (Natural Science Edition). doi: 10.14135/j.cnki.1006-3080.20211221001


doi: 10.14135/j.cnki.1006-3080.20211221001
Details
    About the author:

    LI Xiangyu (b. 1996), male, from Kaifeng, Henan, is a Master's student whose research focuses on 3D reconstruction and deep learning. E-mail: 1971089759@qq.com

    Corresponding author:

    ZHANG Xueqin, E-mail: zxq@ecust.edu.cn

  • CLC number: TP391


  • Abstract: Traditional 3D reconstruction techniques struggle to reconstruct a scene reliably when moving objects interfere with it. To address this problem, this paper proposes ORBTSDF-SCNet, a 3D reconstruction method that combines SLAM (simultaneous localization and mapping), TSDF (truncated signed distance function), and the SCNet (sample consistency network) instance segmentation network. The method acquires depth maps and RGB images of the target scene with a depth or stereo camera and obtains pose information in real time from ORB_SLAM2; it couples TSDF, a surface reconstruction algorithm for structured point cloud data, with the depth map to build the 3D model online; and, to suppress the interference of moving objects, it detects and segments them with SCNet, combined with an optimization strategy that reduces detection and instance segmentation errors as well as depth-RGB registration errors. Masking out the moving objects preserves the completeness of the reconstructed scene. Experiments on the ICL-NUIM and TUM datasets demonstrate the effectiveness of the proposed method.
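    The per-frame data flow the abstract describes can be sketched compactly: take the pose estimated by ORB_SLAM2, zero out the depth pixels that SCNet labels as belonging to moving objects, and integrate the remaining depth into the TSDF volume. The Python sketch below illustrates this loop under stated assumptions: Open3D's ScalableTSDFVolume stands in for the paper's TSDF module, and pose_w2c and dynamic_mask are assumed inputs (from ORB_SLAM2 and SCNet, respectively), not the authors' code.

    ```python
    # Illustrative sketch only: Open3D's TSDF volume stands in for the
    # paper's TSDF module; poses and masks are assumed to come from
    # ORB_SLAM2 and SCNet, which are not reproduced here.
    import numpy as np
    import open3d as o3d

    # TUM fr3 pinhole intrinsics (published benchmark calibration).
    intrinsic = o3d.camera.PinholeCameraIntrinsic(
        640, 480, 535.4, 539.2, 320.1, 247.6)

    volume = o3d.pipelines.integration.ScalableTSDFVolume(
        voxel_length=0.01,  # 1 cm voxels
        sdf_trunc=0.04,     # truncation distance of the signed distance function
        color_type=o3d.pipelines.integration.TSDFVolumeColorType.RGB8)

    def integrate_frame(color, depth, pose_w2c, dynamic_mask):
        """Fuse one RGB-D frame into the TSDF, skipping moving-object pixels.

        color:        H x W x 3 uint8 RGB image
        depth:        H x W float32 depth in metres
        pose_w2c:     4 x 4 world-to-camera pose from the SLAM tracker
        dynamic_mask: H x W bool, True where instance segmentation marks a mover
        """
        masked = depth.copy()
        masked[dynamic_mask] = 0.0  # zero depth = "no measurement" for the TSDF
        rgbd = o3d.geometry.RGBDImage.create_from_color_and_depth(
            o3d.geometry.Image(np.ascontiguousarray(color)),
            o3d.geometry.Image(masked),
            depth_scale=1.0, depth_trunc=4.0, convert_rgb_to_intensity=False)
        volume.integrate(rgbd, intrinsic, pose_w2c)

    # After the last frame: mesh = volume.extract_triangle_mesh()
    ```

    Dilating dynamic_mask by a few pixels before masking (e.g., with cv2.dilate) is one simple stand-in for the optimization strategy the abstract mentions for absorbing segmentation and depth-RGB alignment errors.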

     

  • Figure 1. The basic composition of the tracking thread

    Figure 2. Mapping the surface point cloud of the current frame to voxels

    Figure 3. The network structure of SCNet

    Figure 4. The structure of ORBTSDF-SCNet

    Figure 5. Flow chart of the overall 3D reconstruction in ORBTSDF

    Figure 6. Flow chart of the ORBTSDF-SCNet algorithm

    Figure 7. Comparison of running speed (fps) of different methods on six TUM sequences

    Figure 8. Reconstruction results of ORBTSDF

    Figure 9. Comparison of dynamic reconstruction results on the walk_xyz sequence

    Figure 10. Comparison of dynamic reconstruction results on the walk_hsp sequence
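    The tables below report root-mean-square absolute trajectory error (ATE) and relative pose error (RPE) in millimetres. For reference, these are the standard TUM RGB-D benchmark metrics (Sturm et al.), not notation introduced by this page; with estimated poses P_1..P_n, ground-truth poses Q_1..Q_n, and a rigid alignment S, they can be stated as:

    ```latex
    % ATE: RMSE of translational error after rigid alignment S
    \mathrm{ATE}_{\mathrm{RMSE}} =
      \sqrt{\frac{1}{n}\sum_{i=1}^{n}
            \left\lVert \operatorname{trans}\!\left(Q_i^{-1} S P_i\right)\right\rVert^{2}}

    % RPE: drift over a fixed interval \Delta, RMSE of the translational part
    E_i = \left(Q_i^{-1} Q_{i+\Delta}\right)^{-1}\left(P_i^{-1} P_{i+\Delta}\right),
    \qquad
    \mathrm{RPE}_{\mathrm{RMSE}} =
      \sqrt{\frac{1}{m}\sum_{i=1}^{m}
            \left\lVert \operatorname{trans}\!\left(E_i\right)\right\rVert^{2}}
    ```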

    Table 1. Comparison of ATE RMSE (mm) on the ICL-NUIM dataset

    Sequence | DS  | RSV2 | MRS  | KS  | FTM | EF  | OS2 | Ours
    kt0      | 104 | 26   | 204  | 72  | 497 | 9   | 8   | 6
    kt1      | 29  | 8    | 228  | 5   | 9   | 9   | 162 | 74
    kt2      | 191 | 18   | 189  | 10  | 20  | 14  | 18  | 19
    kt3      | 152 | 433  | 1090 | 355 | 243 | 106 | 19  | 11

    Table 2. Comparison of RMSE (mm) on the TUM dataset with different methods

    Metric | Sequence    | OF  | SM  | SD
    ATE    | walk_hsp    | 572 | 143 | 55
    ATE    | walk_static | 39  | 8   | 14
    ATE    | walk_xyz    | 667 | 29  | 25
    ATE    | sit_hsp     | 29  | 26  | 23
    ATE    | sit_static  | 8   | 7   | 7
    ATE    | sit_xyz     | 11  | 15  | 18
    ATE    | average     | 221 | 38  | 24
    RPE    | walk_hsp    | 39  | 125 | 61
    RPE    | walk_static | 667 | 14  | 31
    RPE    | walk_xyz    | 29  | 38  | 20
    RPE    | sit_hsp     | 8   | 37  | 32
    RPE    | sit_static  | 11  | 15  | 10
    RPE    | sit_xyz     | 8   | 19  | 25
    RPE    | average     | 127 | 41  | 30

    Table 3. Comparison of RMSE (mm) on the TUM dataset with different instance segmentation networks

    Metric | Sequence    | MR  | CMR | HTC | Ours
    ATE    | walk_hsp    | 391 | 265 | 182 | 143
    ATE    | walk_static | 10  | 12  | 9   | 8
    ATE    | walk_xyz    | 45  | 29  | 33  | 29
    ATE    | sit_hsp     | 31  | 25  | 42  | 26
    ATE    | sit_static  | 7   | 8   | 8   | 7
    ATE    | sit_xyz     | 20  | 21  | 26  | 15
    ATE    | average     | 84  | 60  | 50  | 38
    RPE    | walk_hsp    | 463 | 351 | 126 | 125
    RPE    | walk_static | 13  | 25  | 15  | 14
    RPE    | walk_xyz    | 55  | 58  | 39  | 38
    RPE    | sit_hsp     | 41  | 25  | 40  | 37
    RPE    | sit_static  | 10  | 10  | 10  | 15
    RPE    | sit_xyz     | 26  | 21  | 33  | 19
    RPE    | average     | 101 | 82  | 44  | 41

    Table 4. Comparison of RMSE (mm) on two sequences of the TUM dataset

    Metric | Sequence | SF  | FF  | PF  | Ours
    ATE    | walk_sta | 350 | 37  | 72  | 14
    ATE    | walk_xyz | 510 | 210 | 41  | 25
    ATE    | average  | 430 | 124 | 57  | 19
    RPE    | walk_sta | 180 | 97  | 72  | 31
    RPE    | walk_xyz | 680 | 290 | 130 | 20
    RPE    | average  | 430 | 194 | 101 | 25

    Table 5. Comparison of RMSE (mm) on six sequences of the TUM dataset

    Metric | Sequence    | VS  | EF  | CF  | MF  | Ours
    ATE    | walk_hsp    | 739 | 209 | 803 | 106 | 55
    ATE    | walk_static | 327 | 62  | 551 | 35  | 14
    ATE    | walk_xyz    | 874 | 216 | 696 | 104 | 25
    ATE    | sit_hsp     | 180 | 138 | 36  | 52  | 23
    ATE    | sit_static  | 29  | 9   | 11  | 21  | 7
    ATE    | sit_xyz     | 11  | 126 | 27  | 31  | 18
    ATE    | average     | 155 | 87  | 182 | 55  | 24
    RPE    | walk_hsp    | 335 | 163 | 400 | 93  | 61
    RPE    | walk_static | 101 | 58  | 224 | 39  | 31
    RPE    | walk_xyz    | 335 | 163 | 400 | 93  | 20
    RPE    | sit_hsp     | 75  | 102 | 30  | 41  | 32
    RPE    | sit_static  | 24  | 10  | 11  | 17  | 10
    RPE    | sit_xyz     | 57  | 28  | 27  | 46  | 25
    RPE    | average     | 155 | 87  | 182 | 55  | 30
Publication history
  • Received: 2021-12-21
  • Published online: 2022-04-24
