
    Citation: LI Xiangyu, ZHANG Xueqin. ORBTSDF-SCNet: An Online 3D Reconstruction Method for Dynamic Scene[J]. Journal of East China University of Science and Technology, 2023, 49(2): 284-294. DOI: 10.14135/j.cnki.1006-3080.20211221001

    ORBTSDF-SCNet: An Online 3D Reconstruction Method for Dynamic Scene

    • Abstract: Traditional 3D reconstruction techniques struggle to complete scene reconstruction in the presence of moving objects. To address this problem, this paper proposes ORBTSDF-SCNet, a 3D reconstruction method based on SLAM (Simultaneous Localization and Mapping), TSDF (Truncated Signed Distance Function), and the SCNet (Sample Consistency Networks) instance segmentation network. The method uses a depth camera or stereo camera to acquire depth and RGB images of the objects and scene to be reconstructed, and obtains pose information in real time via ORB_SLAM2. The surface reconstruction algorithm TSDF, which operates on structured point-cloud data, is combined with the depth map to achieve online 3D model reconstruction. To eliminate the interference of moving objects with scene reconstruction, the SCNet instance segmentation network is used to detect and segment moving objects, together with optimization strategies that reduce detection and instance segmentation errors as well as the alignment error between the depth map and the RGB map. Cutting out the moving objects preserves the completeness of the reconstructed scene. Experiments on the ICL-NUIM and TUM datasets demonstrate the effectiveness of the proposed method.
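
    The abstract does not specify the fusion rule ORBTSDF uses when combining TSDF with the depth map. For reference, a standard incremental TSDF update (the Curless-Levoy weighted running average used by KinectFusion-style systems) fuses each new depth frame into the voxel grid as follows; this is a sketch of the conventional formulation, not necessarily the paper's exact choice:

        D_i(\mathbf{x}) = \frac{W_{i-1}(\mathbf{x})\, D_{i-1}(\mathbf{x}) + w_i(\mathbf{x})\, d_i(\mathbf{x})}{W_{i-1}(\mathbf{x}) + w_i(\mathbf{x})}, \qquad W_i(\mathbf{x}) = W_{i-1}(\mathbf{x}) + w_i(\mathbf{x})

    where d_i(\mathbf{x}) is the truncated signed distance from voxel \mathbf{x} to the surface observed in depth frame i, w_i(\mathbf{x}) is its weight, and D_i, W_i are the accumulated distance and weight. Voxels without a valid depth observation are left unchanged, which is why masking out moving-object pixels keeps them out of the reconstructed model.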

       

      Abstract: 3D reconstruction is an important means of producing 3D models, and reconstructing 3D scenes under moving-object interference is a current research hotspot. Traditional 3D reconstruction techniques struggle to complete scene reconstruction when moving objects interfere. To solve this problem, this paper proposes a 3D reconstruction framework named ORBTSDF-SCNet, which combines simultaneous localization and mapping (SLAM), the truncated signed distance function (TSDF), and sample consistency networks (SCNet) to reconstruct 3D scenes in the presence of moving objects. First, since a SLAM system outputs only a point cloud and cannot directly generate a 3D model, this paper proposes a 3D reconstruction method, ORBTSDF, in which a depth camera or stereo camera acquires RGB-D images of the moving objects and the scene, the tracking thread of ORB_SLAM2 provides pose information in real time, and the TSDF surface reconstruction algorithm is combined with the depth images to reconstruct the 3D model. Meanwhile, to eliminate the interference of moving objects in 3D scene reconstruction, such as image smearing, low accuracy, or reconstruction failure, the deep-learning instance segmentation network SCNet is used to detect and segment moving objects. Combined with optimization strategies, this reduces detection and instance segmentation errors as well as the alignment error between the depth map and the RGB map. After the moving-object instances are removed, the RGB-D images are passed back to the ORBTSDF module, which produces a 3D scene reconstruction free of moving objects. Finally, comparative experiments on the ICL-NUIM and TUM datasets verify the effectiveness of the proposed method.
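
      The following is a minimal sketch of the per-frame loop the abstract describes: segment and remove moving objects, estimate the camera pose, and fuse the cleaned depth into a TSDF volume. The tracker, segmenter, and volume interfaces and the process_frame helper are assumptions introduced for illustration, not the paper's actual API; the mask dilation is a simple stand-in for the optimization strategies mentioned above.

          # Minimal sketch of the ORBTSDF-SCNet per-frame loop described in the abstract.
          # All interfaces below (tracker, segmenter, volume) are illustrative assumptions,
          # not the paper's actual API.
          from scipy.ndimage import binary_dilation

          def process_frame(rgb, depth, tracker, segmenter, volume, dilate_iters=5):
              """Fuse one RGB-D frame into the model while ignoring moving objects.

              rgb          : (H, W, 3) uint8 color image
              depth        : (H, W) float32 depth map, assumed aligned to rgb
              tracker      : callable (rgb, depth) -> 4x4 camera pose
                             (stands in for the ORB_SLAM2 tracking thread)
              segmenter    : callable (rgb) -> (H, W) bool mask of moving-object pixels
                             (stands in for the SCNet instance segmentation network)
              volume       : TSDF volume object exposing integrate(rgb, depth, pose)
              dilate_iters : how far to grow the mask; a stand-in for the paper's
                             strategies for absorbing segmentation and alignment error
              """
              # 1. Detect and segment moving objects in the color image.
              moving = segmenter(rgb)

              # 2. Grow the mask so residual segmentation errors and RGB/depth
              #    misalignment do not leak moving-object depth into the model.
              moving = binary_dilation(moving, iterations=dilate_iters)

              # 3. Cut out moving-object pixels; zero depth is conventionally
              #    treated as invalid and skipped by the TSDF integrator.
              clean_depth = depth.copy()
              clean_depth[moving] = 0.0

              # 4. Estimate the camera pose for this frame from the cleaned frame.
              pose = tracker(rgb, clean_depth)

              # 5. Fuse the cleaned frame into the TSDF volume (online reconstruction).
              volume.integrate(rgb, clean_depth, pose)
              return pose

      Feeding the cleaned depth to both the tracker and the volume mirrors the abstract's description of passing the RGB-D image back to ORBTSDF after the moving-object instances are removed.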

       
