Ackerman Unmanned Mobile Vehicle Based on Heterogeneous Sensor in Navigation Control Application
Abstract
1. Introduction
- A fusion technique involving heterogeneous imaging and LiDAR sensors is proposed.
- YOLOv4-tiny and simple online real-time tracking (SORT) detect, classify, and track objects to determine whether the objects encountered by the vehicle are pedestrians or static obstacles (a minimal pipeline sketch follows this list).
- LiDAR provides real-time distance information for the detected objects. Compared with other sensors (e.g., infrared and sonar), LiDAR is less affected by the environment and offers higher precision.
- The vehicle control center (VCC) activates the heterogeneous-sensor navigation control module in real time, enabling the vehicle to avoid obstacles and navigate to its destination.
- The experimental results indicated an average distance error of 0.03 m and an average error over the entire motion path of 0.357 m when using LiDAR on the Ackerman UMV.
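To make the detect–track–range flow described in these highlights concrete, the following is a minimal Python sketch. It is not the paper's implementation: the YOLOv4-tiny config/weight file names, the 640-pixel image width, and the `lidar_range_at()` calibration stub are placeholder assumptions, and SORT's Kalman-filter tracker is stood in for by a simplified IoU + Hungarian association for brevity.

```python
# Minimal sketch of the detect -> track -> range pipeline described above.
# Placeholder assumptions (not from the paper): the YOLOv4-tiny file names,
# the 640 px image width, and lidar_range_at(); SORT is stood in for by a
# simplified IoU + Hungarian association rather than its Kalman-filter tracker.
import cv2
import numpy as np
from scipy.optimize import linear_sum_assignment

# YOLOv4-tiny detector via OpenCV's DNN module (cfg/weights paths are placeholders).
net = cv2.dnn_DetectionModel("yolov4-tiny.cfg", "yolov4-tiny.weights")
net.setInputParams(size=(416, 416), scale=1 / 255.0, swapRB=True)

def iou(a, b):
    """Intersection over union of two boxes given as (x, y, w, h)."""
    ax2, ay2, bx2, by2 = a[0] + a[2], a[1] + a[3], b[0] + b[2], b[1] + b[3]
    iw = max(0.0, min(ax2, bx2) - max(a[0], b[0]))
    ih = max(0.0, min(ay2, by2) - max(a[1], b[1]))
    inter = iw * ih
    union = a[2] * a[3] + b[2] * b[3] - inter
    return inter / union if union > 0 else 0.0

tracks = {}   # track_id -> last matched box
next_id = 0

def associate(boxes):
    """SORT-style data association: match new boxes to existing tracks by IoU."""
    global next_id, tracks
    ids = list(tracks.keys())
    assigned, matched = {}, set()
    if ids and len(boxes):
        cost = np.array([[1.0 - iou(tracks[i], b) for b in boxes] for i in ids])
        rows, cols = linear_sum_assignment(cost)          # Hungarian matching
        for r, c in zip(rows, cols):
            if cost[r, c] < 0.7:                          # keep matches with IoU > 0.3
                assigned[ids[r]] = boxes[c]
                matched.add(c)
    for c, b in enumerate(boxes):                         # unmatched boxes start new tracks
        if c not in matched:
            assigned[next_id] = b
            next_id += 1
    tracks = assigned
    return assigned

def lidar_range_at(box, scan):
    """Placeholder: LiDAR range (m) at the bearing of the box centre.
    A real system needs the camera-LiDAR extrinsic calibration here."""
    u = box[0] + box[2] / 2.0                             # horizontal pixel of box centre
    idx = min(len(scan) - 1, int(u / 640.0 * len(scan)))  # assumes a 640 px wide image
    return scan[idx]

def process(frame, scan):
    """One fusion step: detect, track, and attach a LiDAR range to each track."""
    _class_ids, _scores, boxes = net.detect(frame, confThreshold=0.4, nmsThreshold=0.4)
    tracked = associate([tuple(map(float, b)) for b in boxes])
    return {tid: (box, lidar_range_at(box, scan)) for tid, box in tracked.items()}
```

In the control loop, `process()` would be invoked once per synchronized camera frame and LiDAR scan, and the per-track ranges it returns would feed the obstacle-avoidance decision in the VCC.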
2. Materials and Methods
2.1. Hardware Architecture of the Ackerman UMV
2.2. Ackerman UMV Positioning and Map Construction
2.3. Navigation Control
2.4. Object Detection and Tracking Based on YOLOv4-Tiny and SORT
2.5. Integration of LiDAR and Imaging
3. Experimental Results and Discussion
3.1. Navigation Experiment Results
3.2. Obstacle Avoidance Experiment Results
3.2.1. Static Obstacles
3.2.2. Dynamic Obstacles
3.3. Discussion
4. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
LiDAR ranging error for static obstacles (cartons):

Cartons (Length × Width × Height) (cm) | Actual Distance (m) | Distance Obtained by the LiDAR (m) | Error (m)
---|---|---|---
50 × 29 × 25 | 1.37 | 1.39 | 0.02
58 × 30 × 30 | 3.08 | 3.11 | 0.03
63 × 43 × 43 | 4.97 | 5.01 | 0.04
LiDAR ranging error for pedestrians:

Pedestrian | Actual Distance (m) | Distance Obtained by the LiDAR (m) | Error (m)
---|---|---|---
Pedestrian on the right | 2.34 | 2.36 | 0.02
Pedestrian on the left | 4.12 | 4.16 | 0.04
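Averaged over the five measurements in the two tables above, the LiDAR ranging error reproduces the 0.03 m figure quoted in the highlights (assuming that figure is the simple mean of these errors):

$$\bar{e} = \frac{0.02 + 0.03 + 0.04 + 0.02 + 0.04}{5}\ \text{m} = 0.03\ \text{m}$$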
Detection performance by sensor modality:

Sensor | AP (Tree) | AP (Stone) | mAP | Precision | Recall | F1-Score | Computing Time (ms) | FPS
---|---|---|---|---|---|---|---|---
Camera image | 51.34% | 11.11% | 31.23% | 55% | 64% | 59% | 52.6 | 19
LiDAR image | 30.93% | 61.11% | 46.01% | 62% | 35% | 45% | 55.5 | 18
LiDAR camera image | 84% | 63% | 73.5% | 84% | 76% | 80% | 71.4 | 14
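As a quick consistency check on this table, the reported F1-scores follow from the harmonic mean of precision and recall; for the fused LiDAR camera image, for example:

$$F_1 = \frac{2PR}{P+R} = \frac{2 \times 0.84 \times 0.76}{0.84 + 0.76} \approx 0.80 = 80\%$$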
Share and Cite
Shih, C.-H.; Lin, C.-J.; Jhang, J.-Y. Ackerman Unmanned Mobile Vehicle Based on Heterogeneous Sensor in Navigation Control Application. Sensors 2023, 23, 4558. https://doi.org/10.3390/s23094558