Visual Features Assisted Robot Localization in Symmetrical Environment Using Laser SLAM
Abstract
1. Introduction
- This paper proposes an approach to judge whether an environment is symmetrical. Asymmetric cases are first ruled out by exclusion; corners are then extracted and lines fitted to test the symmetry hypothesis. The experimental results show that this approach is effective, especially for the square environment considered in this paper.
- This paper proposes a visual-features-assisted method to help a robot localize in a symmetrical environment. ORB features and the bag-of-words model are used to describe each image, making image matching robust and efficient.
- We designed an algorithm that moves the robot to the geometric center position and captures several discrete images, which are used to determine the orientation. Compared with other popular methods, our results show that less memory and disk storage space are needed.
2. Related Work
2.1. Laser-Based SLAM
2.2. Feature Extraction
2.3. Visual Features
3. System Overview
4. Methods
4.1. Laser Data Preprocessing
4.2. Judgment of Symmetrical Environment
Algorithm 1. Judgment Method for a Square Symmetrical Environment
Input: A set of N laser scan points
Output: Whether the environment is square symmetrical
1: Convert the raw range data to coordinate data (x_i, y_i), ignoring the angle parameter
2: Calculate the geometric centroid G = (x̄, ȳ)
3: Calculate the new distance data d_i from G to each point
4: Sort the distances in descending order as d_(1) ≥ d_(2) ≥ … ≥ d_(N)
5: Count the number of maximum values N_max and the number of minimum values N_min
6: If N_max ≠ 4 or N_min ≠ 4 then
7: Environment is not square symmetrical
8: End if, terminate
9: Calculate the Euclidean distances L_12, L_23, L_34, L_41 between adjacent maximum-distance points in turn
10: If L_12, L_23, L_34, L_41 are not all equal then
11: Environment is not square symmetrical
12: End if, terminate
13: Fit 4 lines by connecting adjacent pairs of the 4 maximum-distance points in turn
14: Compute the perpendicular distances from the remaining points to the 4 lines
15: If all distances are less than a threshold ε infinitely close to zero then
16: Environment is square symmetrical
17: Save the G‒V (centroid-to-vertex) distance to the G‒V index list
18: else
19: Environment is not square symmetrical
20: End if, terminate
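The paper gives Algorithm 1 only as pseudocode; the following is a minimal Python sketch of the same exclusion chain, assuming the scan is already an (N, 2) array of Cartesian points and using a single illustrative tolerance eps in place of the near-zero threshold ε.

```python
import numpy as np

def is_square_symmetrical(points: np.ndarray, eps: float = 0.05) -> bool:
    """Return True if the scan points outline a square centered on their centroid."""
    centroid = points.mean(axis=0)                     # step 2: geometric centroid G
    d = np.linalg.norm(points - centroid, axis=1)      # step 3: distances from G
    order = np.argsort(-d)                             # step 4: descending sort

    # Steps 5-8: seen from its center, a square has exactly 4 distance maxima
    # (the corners) and 4 minima (the wall midpoints), up to the tolerance eps.
    n_max = int(np.sum(d > d.max() - eps))
    n_min = int(np.sum(d < d.min() + eps))
    if n_max != 4 or n_min != 4:
        return False

    # Steps 9-12: the 4 corner candidates, ordered by angle around G,
    # must form four sides of equal length.
    corners = points[order[:4]]
    angles = np.arctan2(corners[:, 1] - centroid[1], corners[:, 0] - centroid[0])
    corners = corners[np.argsort(angles)]
    nxt = np.roll(corners, -1, axis=0)
    sides = np.linalg.norm(corners - nxt, axis=1)
    if sides.max() - sides.min() > eps:
        return False

    # Steps 13-16: every remaining point must lie (within eps) on one of the
    # 4 lines fitted through adjacent corner pairs.
    for p in points:
        dists = []
        for a, b in zip(corners, nxt):
            ab = b - a
            # perpendicular distance from p to the line through a and b
            dists.append(abs(ab[0] * (p[1] - a[1]) - ab[1] * (p[0] - a[0]))
                         / np.linalg.norm(ab))
        if min(dists) > eps:
            return False
    return True
```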
4.3. Visual Features’ Extraction and Representation
Algorithm 2. Robot Moves to the Geometric Center Position
Input: Coordinates of the geometric centroid
Output: Robot moves to the proper position
1: If the environment is square symmetrical then
2: Set the geometric center coordinates equal to the geometric centroid coordinates
3: Robot moves towards the geometric center position
4: Calculate the midpoint of two adjacent corner points
5: Robot adjusts its orientation towards the midpoint of the two adjacent corner points
6: else
7: Terminate
8: End if
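A minimal sketch of Algorithm 2 follows; move_to and rotate_to are hypothetical stand-ins for the robot's real motion interface (e.g., a ROS navigation client), and corners is assumed to hold the corner points found by Algorithm 1 in angular order.

```python
import numpy as np

def go_to_geometric_center(centroid, corners, move_to, rotate_to):
    """Drive to the square's center and face the midpoint of one wall."""
    # Steps 1-3: in a square, the geometric center coincides with the centroid.
    center = np.asarray(centroid, dtype=float)
    move_to(center)

    # Steps 4-5: orient towards the midpoint of two adjacent corner points
    # (a wall midpoint), which gives a repeatable heading for the image captures.
    midpoint = (np.asarray(corners[0]) + np.asarray(corners[1])) / 2.0
    heading = np.arctan2(midpoint[1] - center[1], midpoint[0] - center[0])
    rotate_to(heading)
    return center, heading
```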
Algorithm 3. Visual Features Extraction
Input: M images from M directions at Node i (M equals 4 in a square environment)
Output: Bag-of-words vectors v_ij, where j is the index of the image at Node i
1: Create a visual vocabulary in an offline step based on DBoW3 with ORB features
2: Robot captures images from four directions: I_i1, I_i2, I_i3, I_i4
3: Extract ORB features from each image
4: Convert the features of each image into a bag-of-words vector: v_i1, v_i2, v_i3, v_i4
5: Save the vectors to an image map database
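Algorithm 3 relies on DBoW3, a C++ library; the sketch below approximates the same pipeline in Python using OpenCV's ORB and a flat k-means visual vocabulary (scikit-learn), which is a simplification of DBoW3's hierarchical vocabulary tree and its binary-descriptor clustering.

```python
import cv2
import numpy as np
from sklearn.cluster import KMeans

orb = cv2.ORB_create(nfeatures=500)

def bow_vector(image_gray: np.ndarray, vocab: KMeans) -> np.ndarray:
    """Convert one image into an L2-normalized bag-of-words histogram."""
    _, desc = orb.detectAndCompute(image_gray, None)   # step 3: ORB features
    if desc is None:                                   # no features detected
        return np.zeros(vocab.n_clusters)
    words = vocab.predict(desc.astype(np.float32))     # nearest visual word per feature
    hist = np.bincount(words, minlength=vocab.n_clusters).astype(float)
    return hist / (np.linalg.norm(hist) + 1e-12)       # normalize for scoring

# Step 1 (offline): fit the vocabulary on ORB descriptors stacked from many
# training images, e.g. vocab = KMeans(n_clusters=1000).fit(train_desc).
# Steps 2-5: compute one vector per direction and store them in the database.
```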
4.4. Localization
Algorithm 4. MCL Algorithm
Input: X_{t−1}, u_t, z_t
Output: X_t
1: Initialization: X̄_t = X_t = ∅
2: For m = 1 to M do
3: x_t^[m] = sample_motion_model(u_t, x_{t−1}^[m])
4: w_t^[m] = measurement_model(z_t, x_t^[m], m)
5: X̄_t = X̄_t + ⟨x_t^[m], w_t^[m]⟩
6: End for
7: For m = 1 to M do
8: Draw i with probability ∝ w_t^[i]
9: Add x_t^[i] to X_t
10: End for
11: Return X_t
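Algorithm 4 is the standard MCL loop from the cited Probabilistic Robotics; a compact Python rendering is shown below, with sample_motion_model and measurement_model passed in as the robot-specific motion and sensor models.

```python
import numpy as np

def mcl(particles, u_t, z_t, sample_motion_model, measurement_model,
        rng=np.random.default_rng()):
    """One MCL iteration: predict with u_t, weight by z_t, then resample."""
    M = len(particles)
    # Lines 2-6: propagate each particle through the motion model and
    # weight it against the current measurement.
    predicted = [sample_motion_model(u_t, x) for x in particles]
    weights = np.array([measurement_model(z_t, x) for x in predicted], dtype=float)
    weights /= weights.sum()

    # Lines 7-10: importance resampling - draw index i with probability w[i].
    idx = rng.choice(M, size=M, p=weights)
    return [predicted[i] for i in idx]
```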
Algorithm 5. Localization Method
Input: Occupancy grid map, visual images map
Output: Localization result
1: Load the occupancy grid map
2: The AMCL localization method is used by default
3: Receive the laser scan data and call Algorithm 1
4: If the environment is symmetrical then
5: Call Algorithms 2 and 3
6: Compare the G‒V distance with the G‒V index list
7: Compare the new bag-of-words vectors with the visual images map database
8: If the match succeeds then
9: Go to Step 2
10: else
11: Create a new visual images map
12: End if
13: else
14: Go to Step 2
15: End if
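To show how Algorithms 1–4 fit together, here is an illustrative rendering of Algorithm 5's control flow; every helper on the bundle c (run_amcl_step, match_gv_distance, match_images, and so on) is a hypothetical stand-in for a component described above, not part of the paper's implementation.

```python
def localize(scan, grid_map, gv_index, image_map_db, c):
    """c bundles the components sketched above; all helper names are illustrative."""
    if not c.is_square_symmetrical(scan):            # step 3: Algorithm 1
        return c.run_amcl_step(scan, grid_map)       # steps 2/14: default AMCL path
    c.go_to_geometric_center()                       # step 5: Algorithm 2
    vectors = c.capture_bow_vectors()                # step 5: Algorithm 3
    gv_ok = c.match_gv_distance(scan, gv_index)      # step 6: G-V index lookup
    img_ok = c.match_images(vectors, image_map_db)   # step 7: bag-of-words match
    if gv_ok and img_ok:                             # step 8
        return c.run_amcl_step(scan, grid_map)       # step 9: resume AMCL
    image_map_db.add(vectors)                        # step 11: new visual map entry
    return None
```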
5. Experiments and Results Analysis
5.1. Gazebo Simulation Environment
5.2. Real-World Environment
5.2.1. Robot Platform and Environment
5.2.2. Experimental Results Analysis
6. Discussion and Conclusions
Author Contributions
Funding
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
- Ross, R.; Hoque, R. Augmenting GPS with Geolocated Fiducials to Improve Accuracy for Mobile Robot Applications. Appl. Sci. 2020, 10, 146.
- Alarifi, A.; Al-Salman, A.; Alsaleh, M.; Alnafessah, A.; Al-Hadhrami, S.; Al-Ammar, M.A.; Al-Khalifa, H.S. Ultra Wideband Indoor Positioning Technologies: Analysis and Recent Advances. Sensors 2016, 16, 707.
- Li, N.; Guan, L.; Gao, Y.; Du, S.; Wu, M.; Guang, X.; Cong, X. Indoor and Outdoor Low-Cost Seamless Integrated Navigation System Based on the Integration of INS/GNSS/Rangefinder System. Remote Sens. 2020, 12, 3271.
- Huang, B.; Zhao, J.; Liu, J. A Survey of Simultaneous Localization and Mapping. arXiv 2019, arXiv:1909.05214.
- Moravec, H.; Elfes, A. High resolution maps from wide angle sonar. In Proceedings of the IEEE International Conference on Robotics and Automation, St. Louis, MO, USA, 25–28 March 1985; pp. 116–121.
- Dellaert, F.; Fox, D.; Burgard, W.; Thrun, S. Monte Carlo localization for mobile robots. In Proceedings of the IEEE International Conference on Robotics and Automation, Detroit, MI, USA, 10–15 May 1999; pp. 1322–1328.
- Guan, R.P.; Ristic, B.; Wang, L.; Palmer, J.L. KLD sampling with Gmapping proposal for Monte Carlo localization of mobile robots. Inf. Fusion 2019, 49, 79–88.
- Xu, S.; Chou, W.; Dong, H. A Robust Indoor Localization System Integrating Visual Localization Assisted by CNN-Based Image Retrieval with Monte Carlo Localization. Sensors 2019, 19, 249.
- Smith, R.C.; Cheeseman, P. On the representation and estimation of spatial uncertainty. Int. J. Robot. Res. 1986, 5, 56–68.
- Thrun, S.; Burgard, W.; Fox, D. Probabilistic Robotics; MIT Press: Cambridge, MA, USA, 2005.
- Bresson, G.; Alsayed, Z.; Yu, L.; Glaser, S. Simultaneous localization and mapping: A survey of current trends in autonomous driving. IEEE Trans. Intell. Veh. 2017, 2, 194–220.
- Davison, A.J.; Reid, I.D.; Molton, N.D.; Stasse, O. MonoSLAM: Real-time single camera SLAM. IEEE Trans. Pattern Anal. Mach. Intell. 2007, 29, 1052–1067.
- Grisetti, G.; Stachniss, C.; Burgard, W. Improved techniques for grid mapping with Rao-Blackwellized particle filters. IEEE Trans. Robot. 2007, 23, 34–46.
- Hess, W.; Kohler, D.; Rapp, H.; Andor, D. Real-time loop closure in 2D LIDAR SLAM. In Proceedings of the IEEE International Conference on Robotics and Automation, Stockholm, Sweden, 16–21 May 2016; pp. 1271–1278.
- Ji, D.; Cheng, J.; Xu, Y. An extracting method of corner points from laser sensor readings. In Proceedings of the 37th Chinese Control Conference (CCC), Wuhan, China, 25–27 July 2018; pp. 4962–4967.
- Nguyen, V.; Gächter, S.; Martinelli, A.; Tomatis, N.; Siegwart, R. A comparison of line extraction algorithms using 2D range data for indoor mobile robotics. Auton. Robot. 2007, 23, 97–111.
- Gao, H.; Zhang, X.; Fang, Y.; Yuan, J. A line segment extraction algorithm using laser data based on seeded region growing. Int. J. Adv. Robot. Syst. 2018, 15.
- Williams, B.; Cummins, M.; Neira, J.; Newman, P.; Reid, I.; Tardós, J. A comparison of loop closing techniques in monocular SLAM. Robot. Auton. Syst. 2009, 57, 1188–1197.
- Lowry, S.; Sünderhauf, N.; Newman, P.; Leonard, J.J.; Cox, D.; Corke, P.; Milford, M.J. Visual place recognition: A survey. IEEE Trans. Robot. 2016, 32, 1–19.
- Lowe, D.G. Object recognition from local scale-invariant features. In Proceedings of the Seventh IEEE International Conference on Computer Vision, Kerkyra, Greece, 20–25 September 1999; pp. 1150–1157.
- Bay, H.; Tuytelaars, T.; Van Gool, L. SURF: Speeded up robust features. In European Conference on Computer Vision; Springer: Berlin/Heidelberg, Germany, 2006; pp. 404–417.
- Oliva, A.; Torralba, A. Building the gist of a scene: The role of global image features in recognition. Prog. Brain Res. 2006, 155, 23–36.
- Rosten, E.; Drummond, T. Machine learning for high-speed corner detection. In European Conference on Computer Vision; Springer: Berlin/Heidelberg, Germany, 2006; pp. 430–443.
- Calonder, M.; Lepetit, V.; Strecha, C.; Fua, P. BRIEF: Binary robust independent elementary features. In European Conference on Computer Vision; Springer: Berlin/Heidelberg, Germany, 2010; pp. 778–792.
- Rublee, E.; Rabaud, V.; Konolige, K.; Bradski, G. ORB: An efficient alternative to SIFT or SURF. In Proceedings of the IEEE International Conference on Computer Vision, Barcelona, Spain, 6–13 November 2011; pp. 2564–2571.
- Mur-Artal, R.; Montiel, J.M.M.; Tardós, J.D. ORB-SLAM: A Versatile and Accurate Monocular SLAM System. IEEE Trans. Robot. 2015, 31, 1147–1163.
- Mur-Artal, R.; Tardós, J.D. ORB-SLAM2: An open-source SLAM system for monocular, stereo, and RGB-D cameras. IEEE Trans. Robot. 2017, 33, 1255–1262.
- Campos, C.; Elvira, R.; Rodríguez, J.J.G.; Montiel, J.M.; Tardós, J.D. ORB-SLAM3: An accurate open-source library for visual, visual-inertial and multi-map SLAM. arXiv 2020, arXiv:2007.11898.
- Yang, T.; Aitken, V. Uniform clustered particle filtering for robot localization. In Proceedings of the American Control Conference, Portland, OR, USA, 8–10 June 2005; pp. 4607–4612.
- Malayeri, A.A. Robot Localization in Symmetrical Environment. Master's Thesis, University of Windsor, Windsor, ON, Canada, 2010.
- Brindza, J.; Thomas, A.; Lee, S.; McDermid, W.; He, Y.; Lee, D.D. Active sound localization in a symmetrical environment. Int. J. Adv. Robot. Syst. 2013, 10, 301.
- DBoW3. Available online: https://github.com/rmsalinas/DBow3 (accessed on 12 January 2021).
- Gálvez-López, D.; Tardós, J.D. Bags of binary words for fast place recognition in image sequences. IEEE Trans. Robot. 2012, 28, 1188–1197.
- Pfaff, P.; Burgard, W.; Fox, D. Robust Monte-Carlo localization using adaptive likelihood models. In European Robotics Symposium; Springer: Berlin/Heidelberg, Germany, 2006; pp. 181–194.
- Meng, Z.; Wang, C.; Han, Z.; Ma, Z. Research on SLAM navigation of wheeled mobile robot based on ROS. In Proceedings of the 5th International Conference on Automation, Control and Robotics Engineering (CACRE), Dalian, China, 18–20 September 2020; pp. 110–116.
- Yan, R.; Wu, J.; Wang, W.; Lim, S.; Lee, J.; Han, C. Natural corners extraction algorithm in 2D unknown indoor environment with laser sensor. In Proceedings of the 12th International Conference on Control, Automation and Systems, Jeju, Korea, 17–21 October 2012; pp. 983–987.
- Mur-Artal, R.; Tardós, J.D. Fast relocalisation and loop closing in keyframe-based SLAM. In Proceedings of the IEEE International Conference on Robotics and Automation, Hong Kong, China, 31 May–7 June 2014; pp. 846–853.
- Sivic, J.; Zisserman, A. Video Google: A text retrieval approach to object matching in videos. In Proceedings of the Ninth IEEE International Conference on Computer Vision, Nice, France, 14–17 October 2003; pp. 1470–1477.
- Chan, S.; Wu, P.; Fu, L. Robust 2D Indoor Localization through Laser SLAM and Visual SLAM Fusion. In Proceedings of the IEEE International Conference on Systems, Man, and Cybernetics (SMC), Miyazaki, Japan, 7–10 October 2018; pp. 1263–1268.
| Polygon Environment | G‒V Distance (3 m) | G‒V Distance (5 m) | G‒V Distance (7 m) |
|---|---|---|---|
| Equilateral triangle | 50 | 50 | 48 |
| Square | 50 | 50 | 50 |
| Regular pentagon | 50 | 50 | 50 |
| Regular hexagon | 50 | 50 | 49 |
| Round | 50 | 50 | 50 |
| Radius Length of Circular Area or Error Range (mm) | R = 5 | R = 10 | R = 15 |
|---|---|---|---|
| Proportion (%) | 86 | 94 | 100 |
| | Image 1 | Image 2 | Image 3 | Image 4 |
|---|---|---|---|---|
| Image 1 | 0.96 | 0.12 | 0.08 | 0.10 |
| Image 2 | 0.12 | 0.93 | 0.07 | 0.08 |
| Image 3 | 0.08 | 0.07 | 0.91 | 0.13 |
| Image 4 | 0.10 | 0.08 | 0.13 | 0.95 |
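A score matrix like the one above can be computed directly from bag-of-words vectors. The sketch below uses the L1-based similarity common in DBoW-style libraries and assumes vectors holds the four histograms produced by the earlier bow_vector() sketch; high diagonal and low off-diagonal scores indicate that the four directions are distinguishable.

```python
import numpy as np

def l1_score(v1: np.ndarray, v2: np.ndarray) -> float:
    """DBoW-style similarity: 1.0 for identical vectors, near 0 for disjoint ones."""
    v1 = v1 / (np.abs(v1).sum() + 1e-12)   # L1-normalize both vectors
    v2 = v2 / (np.abs(v2).sum() + 1e-12)
    return 1.0 - 0.5 * np.abs(v1 - v2).sum()

# similarity = [[l1_score(a, b) for b in vectors] for a in vectors]
```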