Abstract:
3D reconstruction is currently an important means of producing 3D models, and 3D scene reconstruction under moving-object interference is a research hotspot. Traditional 3D reconstruction techniques struggle to complete scene reconstruction effectively when moving objects interfere. To address this problem, this paper proposes a 3D reconstruction framework named ORBTSDF-SCNet, which combines simultaneous localization and mapping (SLAM), the truncated signed distance function (TSDF), and the sample consistency network (SCNet) to reconstruct 3D scenes under moving-object interference. First, since a SLAM system outputs only a point cloud and cannot directly generate a 3D model, this paper proposes a 3D reconstruction method, ORBTSDF: a depth camera or binocular camera captures RGB-D images of the scene and moving objects, the tracking thread of ORB_SLAM2 estimates the camera pose in real time, and the TSDF surface reconstruction algorithm fuses the depth images into a 3D model. Meanwhile, to eliminate the interference of moving objects in 3D scene reconstruction, which causes image smearing, low accuracy, or outright reconstruction failure, the deep-learning instance segmentation network SCNet is used to detect and segment moving objects. Combined with several optimization strategies, this method reduces detection and instance segmentation errors as well as alignment errors between the depth map and the RGB map. After the moving-object instances are removed, the RGB-D images are passed back to the ORBTSDF module, which reconstructs the 3D scene without moving objects. Finally, comparative experiments on the ICL-NUIM and TUM datasets verify the effectiveness of the proposed method.
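The TSDF fusion step described above can be sketched as follows. This is a minimal, illustrative dense-grid implementation, not the paper's code: the grid size, truncation distance, camera intrinsics, and the simple weight-of-one update rule are all assumptions. It shows the core idea of projecting each voxel into the depth image and accumulating a weighted running average of truncated signed distances; masking moving-object pixels (e.g. setting their depth to 0 with the SCNet segmentation mask) excludes them from fusion.

```python
import numpy as np

def fuse_depth_into_tsdf(tsdf, weights, depth, K, cam_pose, voxel_size, origin, trunc):
    """Fuse one depth image into a TSDF voxel grid (simple dense update).

    tsdf, weights : (nx, ny, nz) arrays holding the current TSDF and fusion weights
    depth         : (h, w) depth image in meters (0 = invalid / masked-out pixel)
    K             : 3x3 camera intrinsics; cam_pose : 4x4 world-to-camera transform
    origin        : world position of voxel (0, 0, 0); trunc : truncation distance
    """
    nx, ny, nz = tsdf.shape
    # World coordinates of every voxel center
    ii, jj, kk = np.meshgrid(np.arange(nx), np.arange(ny), np.arange(nz), indexing="ij")
    pts_w = origin + voxel_size * np.stack([ii, jj, kk], axis=-1).reshape(-1, 3)
    # Transform voxel centers into the camera frame
    pts_c = (cam_pose[:3, :3] @ pts_w.T + cam_pose[:3, 3:4]).T
    z = pts_c[:, 2]
    valid = z > 0
    # Project onto the image plane with intrinsics K
    u = np.round(K[0, 0] * pts_c[:, 0] / np.maximum(z, 1e-9) + K[0, 2]).astype(int)
    v = np.round(K[1, 1] * pts_c[:, 1] / np.maximum(z, 1e-9) + K[1, 2]).astype(int)
    h, w = depth.shape
    valid &= (u >= 0) & (u < w) & (v >= 0) & (v < h)
    d = np.zeros_like(z)
    d[valid] = depth[v[valid], u[valid]]
    valid &= d > 0  # skip invalid / masked pixels (e.g. removed moving objects)
    # Signed distance along the viewing ray, truncated and normalized to [-1, 1]
    sdf = d - z
    valid &= sdf > -trunc  # voxels far behind the surface stay untouched
    sdf = np.clip(sdf / trunc, -1.0, 1.0)
    # Weighted running average per voxel (each new frame contributes weight 1)
    tsdf_f = tsdf.reshape(-1)
    w_f = weights.reshape(-1)
    idx = np.where(valid)[0]
    tsdf_f[idx] = (tsdf_f[idx] * w_f[idx] + sdf[idx]) / (w_f[idx] + 1.0)
    w_f[idx] += 1.0
    return tsdf_f.reshape(tsdf.shape), w_f.reshape(tsdf.shape)
```

The surface is then recovered as the zero level set of the fused grid, typically via marching cubes.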