LiDAR-Only Ground Vehicle Navigation System in Park Environment
Abstract
1. Introduction
- We developed a map representation and maintenance scheme based on a multi-layer map architecture; we propose a ground detection method based on regional ground plane fitting, a topology extraction method based on probabilistic roadmaps, and an integrated dynamic-object removal algorithm;
- Based on the hierarchical navigation architecture, we established a three-level planner for path planning and dynamic obstacle avoidance at different distances;
- The proposed method is integrated into a real unmanned mobile platform, and experimental tests in a complex park environment verify the effectiveness of the navigation system.
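The regional ground plane fitting mentioned above can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes the point cloud is partitioned into XY grid cells of a hypothetical size `cell`, a plane is fitted to the lowest points of each cell by least squares, and points close to that plane are labeled as traversable ground. The function names and thresholds are illustrative.

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane through 3D points.
    Returns a unit normal n and offset d with n . p + d = 0 for points on the plane."""
    centroid = points.mean(axis=0)
    # The smallest singular vector of the centered points is the plane normal.
    _, _, vt = np.linalg.svd(points - centroid)
    normal = vt[-1]
    if normal[2] < 0:  # orient the normal upward
        normal = -normal
    return normal, -normal.dot(centroid)

def regional_ground_detection(cloud, cell=2.0, dist_thresh=0.15):
    """Label each point as ground (True) by fitting one plane per XY cell."""
    labels = np.zeros(len(cloud), dtype=bool)
    cells = np.floor(cloud[:, :2] / cell).astype(int)
    for key in np.unique(cells, axis=0):
        idx = np.where((cells == key).all(axis=1))[0]
        if len(idx) < 3:
            continue  # too few points to define a plane
        region = cloud[idx]
        # Seed the fit with the lowest points in the region, which are most
        # likely to belong to the ground surface.
        seed = region[np.argsort(region[:, 2])[:max(3, len(idx) // 4)]]
        n, d = fit_plane(seed)
        labels[idx] = np.abs(region @ n + d) < dist_thresh
    return labels
```

Fitting per region rather than globally lets the ground estimate follow gentle slopes and curbs that a single global plane would miss.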
2. Related Work
3. Proposed System
3.1. System Framework
3.2. Mapping Representation Module
3.2.1. Global Raw Point Cloud Map
3.2.2. Traverse Area Detection and Representation
3.2.3. Dynamic Point Cloud Detection and Removal
3.2.4. Topological Skeleton Extraction
3.2.5. Middle Map Construction
3.2.6. Local Map Maintenance
3.3. The Planning Module
3.3.1. Global Planner
3.3.2. Middle Planner
3.3.3. Local Planner
- Free Path: The local planner forward-simulates sampled discrete points to represent the possible future motion of the vehicle. As shown in Figure 11, 729 candidate paths are generated through repeated uniform interpolation and curve fitting;
- Voxel Grid: For collision checking, the local planner overlays the free paths on a voxel grid. To account for the vehicle radius, the voxel grid is generated within a range determined by the spline distance;
- Collision Detection: After the voxel grid is generated, the correspondence between each voxel and the candidate paths is computed offline so that online path screening is more efficient: for every waypoint of every candidate path, the neighboring voxels within a specified distance are found. The result is an index table mapping each voxel to the set of paths that pass through it;
- Path Filtering: The planner first discards blocked paths using the index table. For the remaining paths, a cost function is evaluated based on the point-cloud height between the ground threshold and the obstacle threshold, together with the angular difference between each path and the target point and between the path direction and the current vehicle orientation.
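The offline voxel-path table and the online filtering step above can be sketched in 2D as follows. This is a simplified illustration under assumed parameters (voxel size, blocking radius, cost weights are all hypothetical), not the authors' implementation; the height-based cost term is omitted and only the two angular terms are shown.

```python
import numpy as np
from collections import defaultdict

def build_voxel_path_table(paths, voxel=0.2, radius=0.3):
    """Offline: map each voxel to the indices of candidate paths that an
    obstacle point falling in that voxel would block."""
    table = defaultdict(set)
    r = int(np.ceil(radius / voxel))
    for pid, path in enumerate(paths):
        for wp in path:
            base = np.floor(wp / voxel).astype(int)
            # Mark every voxel whose center lies near this waypoint.
            for dx in range(-r, r + 1):
                for dy in range(-r, r + 1):
                    cell = (base[0] + dx, base[1] + dy)
                    center = (np.array(cell) + 0.5) * voxel
                    if np.linalg.norm(center - wp) <= radius + voxel:
                        table[cell].add(pid)
    return table

def filter_paths(paths, table, obstacles, voxel=0.2):
    """Online: drop every path blocked by an obstacle point via table lookup."""
    blocked = set()
    for p in obstacles:
        blocked |= table.get(tuple(np.floor(p / voxel).astype(int)), set())
    return [i for i in range(len(paths)) if i not in blocked]

def select_path(paths, free_ids, goal, heading, w_goal=1.0, w_heading=0.5):
    """Score each unblocked path by its angular deviation from the goal
    direction and from the current vehicle heading; return the best index."""
    def ang_diff(a, b):
        return abs(np.arctan2(np.sin(a - b), np.cos(a - b)))
    best, best_cost = None, np.inf
    goal_dir = np.arctan2(goal[1], goal[0])
    for i in free_ids:
        end = paths[i][-1]
        path_dir = np.arctan2(end[1], end[0])
        cost = w_goal * ang_diff(path_dir, goal_dir) + \
               w_heading * ang_diff(path_dir, heading)
        if cost < best_cost:
            best, best_cost = i, cost
    return best
```

Because the voxel-to-path table is precomputed, the online cost per obstacle point is a single hash lookup, which is what makes screening hundreds of candidate paths feasible at sensor rate.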
4. Experiment
4.1. Traverse Area Detection
4.2. Dynamic Detection and Removal
4.3. Planning Results
4.4. Dynamic Obstacle Avoidance
5. Conclusions
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
© 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Wang, K.; Li, J.; Xu, M.; Chen, Z.; Wang, J. LiDAR-Only Ground Vehicle Navigation System in Park Environment. World Electr. Veh. J. 2022, 13, 201. https://doi.org/10.3390/wevj13110201