Abstract:
Simultaneous localization and mapping (SLAM) has played an important role in robot navigation, control, and planning, and has become one of the key techniques in intelligent robotics with the development of the robotic industry. RGB-D-camera-based SLAM, in particular, has become one of the most practical visual SLAM approaches owing to the camera's low price, portability, and ability to measure per-pixel depth. However, most RGB-D SLAM algorithms show poor real-time performance and localization accuracy; in particular, feature matching is time-consuming and loop closure detection is inefficient in large-scale scenes. Aiming at these shortcomings, this work proposes an improved RGB-D SLAM algorithm based on the bag-of-words (BoW) model. In the proposed algorithm, every ORB feature is converted to a BoW feature vector representing its parent node at a designated layer of the BoW vocabulary, which has a tree structure composed of visual features. If two features extracted from different frames belong to the same node, they are likely to be a correct match; the matching range of each ORB feature is thereby narrowed, accelerating the feature matching process. Meanwhile, the BoW model can recognize which frames are similar to the current key frame by comparing their BoW vectors. Thus, loop closure candidates are obtained via a similarity test and filtered at multiple levels to obtain the loop closure frame. Finally, a single accurate loop candidate is confirmed by continuity checking and by computing the transform between the loop candidates and the current frame. Comparison experiments and analyses between the improved RGB-D SLAM and the original RGB-D SLAM algorithm show that the improved algorithm attains better real-time performance and localization accuracy in camera trajectory estimation.
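The node-restricted matching idea above can be sketched as follows. This is a minimal illustration, assuming each binary ORB descriptor has already been assigned a node id at the designated vocabulary layer (as a DBoW2-style direct index provides); all function and variable names here are hypothetical, not the paper's implementation.

```python
# Sketch of BoW-accelerated feature matching: descriptors are compared
# only against candidates assigned to the same vocabulary node, instead
# of brute-force matching against every descriptor in the other frame.
from collections import defaultdict


def hamming(d1, d2):
    # Hamming distance between two binary descriptors (bytes objects).
    return sum(bin(a ^ b).count("1") for a, b in zip(d1, d2))


def match_by_node(desc1, nodes1, desc2, nodes2, max_dist=50):
    """Match descriptors only within the same vocabulary node.

    desc1/desc2: lists of binary descriptors (bytes),
    nodes1/nodes2: node id of each descriptor at the designated layer.
    Returns a list of (i, j) index pairs into desc1/desc2.
    """
    # Bucket frame-2 descriptors by node id, so each frame-1 descriptor
    # is compared only against candidates in the same bucket.
    buckets = defaultdict(list)
    for j, n in enumerate(nodes2):
        buckets[n].append(j)

    matches = []
    for i, (d, n) in enumerate(zip(desc1, nodes1)):
        best_j, best_d = -1, max_dist + 1
        for j in buckets.get(n, []):
            dist = hamming(d, desc2[j])
            if dist < best_d:
                best_j, best_d = j, dist
        if best_j >= 0:
            matches.append((i, best_j))
    return matches
```

Because each descriptor is compared only with the (typically few) descriptors sharing its node, the cost drops well below the brute-force all-pairs comparison while still recovering most correct matches.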