HAIS: Highways Automated-Inspection System
Abstract
1. Introduction
- Safety concern: road conditions vary in winter, posing a hazard to safe driving. All roads therefore need frequent inspection to support timely maintenance.
- Camera limitation: using a camera alone for inspection limits performance because of ambient-light variation and reduced visibility (e.g., dust or snow). Complementary sensors, such as ultrasound- or laser-based sensors, should therefore work collaboratively with the camera to increase inspection accuracy.
- Vehicle assistance: drivers need external aid in harsh weather or when onboard sensors become unreliable, such as a camera in snow or in a tunnel. Drivers and autonomous vehicles (AVs) need real-time information about recently inspected critical zones, because real-time knowledge of road conditions and potential hazards helps enhance safety measures and save lives.
- Connectivity: integrating secure and reliable vehicle-to-vehicle, vehicle-to-infrastructure, and infrastructure-to-vehicle connections helps enhance safety measures within a connected, collaborative-driving network.
- Manual inspection: regular missions are performed to check the road visually or with measurement tools. Reported damage is forwarded to the maintenance team for further investigation and repair.
- Satellite-based inspection: satellites can scan the road landscape and report damage [8]. This approach is expensive, but it can provide good results depending on the satellite imagery resolution and the weather conditions.
- Inspection node (semi-automated): an inspection vehicle equipped with a sensor platform (cameras, LiDAR) scans the road and sends the data to an online platform that reports the inspection results.
- Autonomous inspection: a self-driving vehicle (robot or drone) performs the inspection [9]. The inspection vehicle automatically defines the mission trajectory from the assigned target area, and the collected data are processed online to provide the inspection results.
2. System Design and Prototyping
3. Data Collection and Transfer Module
- Data compression: the data received from the inspection node are compressed and stored in a local folder once a specified compression size is reached. The tarfile library is used to create a .tar file containing the sensor data at the predefined compression size [11].
- Local storage: the compressed data are kept locally until they have been sent successfully to Firebase [12], where all previously collected data are stored in the online database. In case of a connection interruption or sending error, the transmission thread keeps trying to send the compressed data, preserving a local backup until the program confirms that the data have been delivered to the database.
- Data exchange via API: this is the main component on the receiver side. It enables the web application to request and download specific data from the database through Flask APIs. Given a vehicle number and a data token key, it returns a specific mission trajectory with the corresponding collected data.
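The sender-side behavior described above (compress the sensor files into a .tar archive, then retry transmission while keeping a local backup until delivery is confirmed) can be sketched as follows. This is a minimal illustration, not the authors' code: the archive naming, the size threshold, and the `upload` callable (standing in for a Firebase client call) are assumptions.

```python
import os
import tarfile
import time
from pathlib import Path

ARCHIVE_DIR = Path("local_backup")        # hypothetical local storage folder
MAX_BATCH_BYTES = 5 * 1024 * 1024         # example compression-size threshold

def compress_batch(sensor_files, batch_id):
    """Pack one batch of sensor files into a .tar archive via tarfile [11]."""
    ARCHIVE_DIR.mkdir(exist_ok=True)
    archive_path = ARCHIVE_DIR / f"batch_{batch_id}.tar"
    with tarfile.open(archive_path, "w") as tar:
        for f in sensor_files:
            tar.add(f, arcname=os.path.basename(f))
    return archive_path

def send_with_retries(archive_path, upload, retry_delay=5.0):
    """Retry until `upload` succeeds; only then drop the local backup."""
    while True:
        try:
            upload(archive_path)          # e.g. a Firebase storage client call
        except OSError:
            time.sleep(retry_delay)       # connection interrupted: keep backup
            continue
        archive_path.unlink()             # confirmed sent: remove local copy
        return
```

Keeping the archive on disk until `upload` returns without error is what gives the at-least-once delivery guarantee the text describes.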
4. Inspection Module
- Classification: recognizes the road safety index based on the weather condition and road quality. Examples of the CARLA-based simulated dataset [14] and the public dataset used in this work are presented in Figure 9 and Figure 10. The trained classification model achieved accuracies of and in road snow coverage detection and safety index prediction, respectively.
- Segmentation: detects the road surface to avoid false-positive damage detections caused by surrounding objects in the image. The trained segmentation model achieved a Dice score of up to , as shown in Figure 11.
- Object detection: detects road damage, mainly cracks, and computes the damage area of each crack. Examples of results from an object detection model, trained on a manually annotated dataset collected with a dashcam, the inspection node, or a drone, are presented in Figure 12. The trained YOLOv5 model achieved an IoU of up to .
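For reference, the two evaluation metrics quoted above can be computed on binary masks as follows. This is an illustrative sketch of the standard definitions (Dice for segmentation, IoU for detection), not the authors' evaluation code.

```python
import numpy as np

def dice_score(pred, target):
    """Dice = 2|A∩B| / (|A| + |B|) over binary masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    denom = pred.sum() + target.sum()
    return 2.0 * inter / denom if denom else 1.0

def iou_score(pred, target):
    """IoU = |A∩B| / |A∪B| over binary masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return inter / union if union else 1.0
```

Both metrics return values in [0, 1]; Dice weights the intersection twice, so for the same prediction it is always at least as large as IoU.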
5. User Interface Module
- Explore the collected data at every location by showing the different sensor readings, such as the road image, vehicle speed, and vehicle position.
- Visualize the road conditions report generated by the inspection algorithms.
- Manage the inspection and reparation missions by showing each mission separately.
6. Conclusions and Future Work
- Road snow coverage detection: an average prediction accuracy of .
- Road safety index prediction: an average prediction accuracy of .
- Road damage detection: an IoU of up to .
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
Abbreviations
API | application programming interface
DSP | digital signal processing
IoU | intersection over union
ML | machine learning
RTK | road traversing knowledge
SQL | structured query language
References
1. Salesforce Canada. Why Canadian Manufacturers Need To Understand–And Embrace–The Fourth Industrial Revolution; Salesforce Canada: Toronto, ON, Canada, 2022.
2. Saeed, T.U.; Alabi, B.N.T.; Labi, S. Preparing Road Infrastructure to Accommodate Connected and Automated Vehicles: System-Level Perspective. J. Infrastruct. Syst. 2021, 27, 6020003.
3. Panagiotopoulos, I.; Dimitrakopoulos, G. An empirical investigation on consumers’ intentions towards autonomous driving. Transp. Res. Part C Emerg. Technol. 2018, 95, 773–784.
4. Sun, P.; Kretzschmar, H.; Dotiwalla, X.; Chouard, A.; Patnaik, V.; Tsui, P.; Guo, J.; Zhou, Y.; Chai, Y.; Caine, B.; et al. Scalability in perception for autonomous driving: Waymo open dataset. In Proceedings of the 2020 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020; pp. 2443–2451.
5. Grigorescu, S.; Trasnea, B.; Cocias, T.; Macesanu, G. A survey of deep learning techniques for autonomous driving. J. Field Robot. 2020, 37, 362–386.
6. O’Sullivan, S.; Nevejans, N.; Allen, C.; Blyth, A.; Leonard, S.; Pagallo, U.; Holzinger, K.; Holzinger, A.; Sajid, M.I.; Ashrafian, H. Legal, regulatory, and ethical frameworks for development of standards in artificial intelligence (AI) and autonomous robotic surgery. Int. J. Med. Robot. Comput. Assist. Surg. 2019, 15, e1968.
7. Imai, T. Legal regulation of autonomous driving technology: Current conditions and issues in Japan. IATSS Res. 2019, 43, 263–267.
8. Brewer, E.; Lin, J.; Kemper, P.; Hennin, J.; Runfola, D. Predicting road quality using high resolution satellite imagery: A transfer learning approach. PLoS ONE 2021, 16, e0253370.
9. Menendez, E.; Victores, J.G.; Montero, R.; Martínez, S.; Balaguer, C. Tunnel structural inspection and assessment using an autonomous robotic system. Autom. Constr. 2018, 87, 117–126.
10. Caesar, H.; Bankiti, V.; Lang, A.H.; Vora, S.; Liong, V.E.; Xu, Q.; Krishnan, A.; Pan, Y.; Baldan, G.; Beijbom, O. Nuscenes: A multimodal dataset for autonomous driving. In Proceedings of the 2020 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020; pp. 11618–11628.
11. Tarfile—Read and Write Tar Archive Files—Python 3.11.2 Documentation. Available online: https://docs.python.org/3/library/tarfile.html (accessed on 20 March 2023).
12. Moroney, L. The Firebase Realtime Database. In The Definitive Guide to Firebase; Moroney, L., Ed.; Apress: Berkeley, CA, USA, 2017; pp. 51–71.
13. Rateke, T.; Justen, K.A.; von Wangenheim, A. Road surface classification with images captured from low-cost camera-road traversing knowledge (RTK) dataset. Rev. De Inform. Teor. E Apl. 2019, 26, 50–64.
14. Dosovitskiy, A.; Ros, G.; Codevilla, F.; Lopez, A.; Koltun, V. CARLA: An Open Urban Driving Simulator. In Proceedings of the 1st Annual Conference on Robot Learning, Mountain View, CA, USA, 13–15 November 2017; pp. 1–16.
© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
A. Gabbar, H.; Chahid, A.; U. Isham, M.; Grover, S.; Singh, K.P.; Elgazzar, K.; Mousa, A.; Ouda, H. HAIS: Highways Automated-Inspection System. Technologies 2023, 11, 51. https://doi.org/10.3390/technologies11020051