Construction of Edge Computing Platform Using 3D LiDAR and Camera Heterogeneous Sensing Fusion for Front Obstacle Recognition and Distance Measurement System
Abstract
1. Introduction
2. Literature Review
2.1. Robot Operating Systems
2.2. Sensors
2.3. Heterogeneous Sensor Fusion
2.4. Object Detection
3. Materials and Methods
3.1. Heterogeneous Sensor Fusion
3.2. YOLOv4-Tiny Network
3.3. Detecting Distance of Obstacles Ahead
4. Results
4.1. Steps and Goals of the Experiment
4.2. Calibration
4.3. Combining the Instant Point Cloud and Image
4.4. Recognition and Measuring the Distance of Obstacles Present Ahead
4.5. Edge Computing Platform Performance Test
5. Discussion and Perspective
6. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
Abbreviations
Abbreviation | Meaning
---|---
ADAS | Advanced Driver Assistance Systems
AEB | Autonomous Emergency Braking
AI | Artificial Intelligence
CNN | Convolutional Neural Networks
FCW | Forward Collision Warning
FIR | Far Infrared
HPC | High-Performance Computing
IoT | Internet of Things
LiDAR | Light Detection and Ranging
RANSAC | Random Sample Consensus
RMSE | Root Mean Square Error
ROS | Robot Operating System
SAE | Society of Automotive Engineers
SLAM | Simultaneous Localization and Mapping
SSD | Single Shot MultiBox Detector
YOLO | You Only Look Once
Criterion | Minimum Point in Box | Medium Point in Box
---|---|---
Computational complexity | Low | High
Acquires the minimum distance? | Yes | Possibly
Handles environmental interference? | Slightly | Yes
Best use cases | Anti-collision systems | Finding an object’s real distance
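For context, the two strategies in the table differ only in which in-box LiDAR return is reported as the obstacle distance. The sketch below is a minimal illustration rather than the authors' implementation: it assumes the fused data are already available as LiDAR points projected into the image plane (pixel u, pixel v, range in meters), takes a detection box as (x_min, y_min, x_max, y_max), and interprets the "medium point" strategy as the median of the in-box ranges.

```python
import numpy as np

def box_distance(points_uvd, box, mode="min"):
    """Pick a representative distance for one detection box.

    points_uvd : (N, 3) array of LiDAR points projected into the image;
                 columns are pixel u, pixel v, and range in meters.
    box        : (x_min, y_min, x_max, y_max) from the object detector.
    mode       : "min"    -> closest in-box return (anti-collision use case),
                 "median" -> middle of the in-box ranges, more robust to
                             stray returns from the background or ground.
    """
    x_min, y_min, x_max, y_max = box
    u, v, d = points_uvd[:, 0], points_uvd[:, 1], points_uvd[:, 2]
    in_box = (u >= x_min) & (u <= x_max) & (v >= y_min) & (v <= y_max)
    if not np.any(in_box):
        return None  # no LiDAR return falls inside this detection
    ranges = d[in_box]
    return float(ranges.min()) if mode == "min" else float(np.median(ranges))
```

Taking the minimum is cheap and conservative, which suits collision warning; the median discards outlying returns at the cost of slightly heavier computation, matching the trade-off summarized in the table.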
ArUco Markers | ID 51, ID 461, ID 718
---|---
Board size |
L | 30
W | 30
BL | 2.3
BW | 2.3
M | 13.5
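The marker IDs listed above are the ArUco fiducials placed on the calibration board. As a minimal, hedged sketch of how such markers can be detected and localized with OpenCV's aruco module (the classic API of opencv-contrib-python 4.6 and earlier), the snippet below uses a placeholder dictionary, placeholder camera intrinsics, and an illustrative marker side length; it is not the authors' calibration pipeline, which further relates the marker poses to the 3D LiDAR point cloud.

```python
import cv2
import numpy as np

# Placeholder dictionary choice; the board's actual dictionary may differ.
dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_6X6_1000)

# Placeholder intrinsics; replace with the values obtained in the
# camera calibration step (Section 4.2).
camera_matrix = np.array([[900.0, 0.0, 640.0],
                          [0.0, 900.0, 360.0],
                          [0.0, 0.0, 1.0]])
dist_coeffs = np.zeros(5)
marker_length_m = 0.135  # illustrative marker side length in meters

frame = cv2.imread("calibration_board.png")
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
corners, ids, _rejected = cv2.aruco.detectMarkers(gray, dictionary)

if ids is not None:
    # Pose of each marker relative to the camera (rotation and translation).
    rvecs, tvecs, _ = cv2.aruco.estimatePoseSingleMarkers(
        corners, marker_length_m, camera_matrix, dist_coeffs)
    for marker_id, tvec in zip(ids.flatten(), tvecs):
        print(f"marker {marker_id}: camera-frame translation {tvec.ravel()} m")
```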
Power Mode | Average CPU Usage (Clock) | Active CPU Cores | Average GPU Usage (Clock) | YOLOv4-Tiny FPS
---|---|---|---|---|
0 (MAXN) | 53.3% (2.3 GHz) | 8 | >77% (1.4 GHz) | 60 |
1 (10 W) | × | × | × | × |
2 (15 W) | 100% (1.2 GHz) | 4 | >95% (675 MHz) | 22 |
3 (30 W ALL) | 72.5% (1.2 GHz) | 8 | >80% (905 MHz) | 40 |
4 (30 W 6core) | 82% (1.4 GHz) | 6 | >80% (905 MHz) | 35 |
5 (30 W 4core) | 95.8% (1.8 GHz) | 4 | >88% (905 MHz) | 32 |
6 (30 W 2core) | 100% (2.1 GHz) | 2 | >50% (905 MHz) | 20 |
7 (15 W DESKTOP) | 98% (2.2 GHz) | 4 | >95% (675 MHz) | 24 |
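The frame rates in this table depend on the Jetson power mode selected with the nvpmodel utility. Below is a minimal measurement sketch under stated assumptions: `run_inference` is a placeholder for the YOLOv4-Tiny detection call, the frames are preloaded images, and the nvpmodel query only records which mode was active during the run; it is not the benchmarking code used by the authors.

```python
import subprocess
import time

def current_power_mode():
    """Query the active power mode on an NVIDIA Jetson (requires nvpmodel)."""
    out = subprocess.run(["nvpmodel", "-q"], capture_output=True, text=True)
    return out.stdout.strip()

def measure_fps(run_inference, frames, warmup=10):
    """Average frames per second of an inference callable over preloaded frames."""
    for f in frames[:warmup]:              # let clocks and caches settle
        run_inference(f)
    start = time.perf_counter()
    for f in frames[warmup:]:
        run_inference(f)
    elapsed = time.perf_counter() - start
    return (len(frames) - warmup) / elapsed

# Example usage with a placeholder detector object:
# print(current_power_mode())
# print(measure_fps(lambda img: detector.detect(img), frames))
```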
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
© 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).