ROAD: Robotics-Assisted Onsite Data Collection and Deep Learning Enabled Robotic Vision System for Identification of Cracks on Diverse Surfaces
Abstract
1. Introduction
2. Literature Review
- Conventional crack-detection techniques such as visual inspection are prone to error and subjectivity because they rely on human interpretation, and they lack automation. Automated, objective crack-detection methods are therefore needed.
- Acoustic testing uses sound waves to identify cracks that are not visible to the eye, but its efficacy is limited: sound-wave transmission can be disturbed by environmental noise and other variables. More rigorous and dependable acoustic-testing techniques need to be developed.
- Several studies fail to state explicitly which deep learning algorithms and robotics techniques were used, even where deep convolutional neural networks and image-processing methods are central to the crack-detection work. Omitting these details undermines the replicability and clarity of the research.
- The literature gives limited coverage of the challenges inherent in crack detection. Some studies briefly note difficulties and possible directions, but a more comprehensive investigation is needed to understand the complexities involved, including variation in crack patterns, environmental factors, and the diverse range of structures to be examined.
3. Our Proposal: The ROAD System
- Image and video capture: A robotic vision system captures images and videos of the structure from different angles and perspectives, including areas that are difficult to access.
- Data pre-processing: The captured images and videos are pre-processed to remove noise and enhance the contrast of crack features.
- Training deep learning algorithms: To reliably identify and categorize various types of cracks, a deep learning system is trained on a sizable collection of crack images and videos. The CNN, InceptionResNetV2, Xception, DenseNet201, MobileNetV2, VGG16, and VGG19 models have been fine-tuned on the SDNET2018 dataset, which contains images of concrete surfaces with varying degrees of cracking. Transfer learning is employed to reuse the models' pre-trained parameters and speed up learning for the specific task of crack identification.
- Crack detection: The trained deep learning algorithm is applied to the preprocessed images and videos to detect cracks and classify them according to their type and severity.
- Structural assessment: The detected cracks are analyzed to assess the structure’s health and identify any potential safety hazards.
- Reporting and maintenance: Engineers and repair teams are informed of the outcomes of crack identification and evaluation so that they can carry out any repairs or maintenance required to maintain the structure's safety and durability.
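The pre-processing and training steps above can be sketched as a minimal transfer-learning example in Keras. The classification head, hyperparameters, and normalization below are illustrative assumptions, not the settings used in the paper; `weights=None` keeps the sketch offline, whereas `weights="imagenet"` would load the pre-trained parameters that transfer learning relies on.

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

def build_crack_classifier(input_shape=(224, 224, 3), weights=None):
    """Binary crack/no-crack classifier on a pre-trained backbone.

    MobileNetV2 is one of the backbones the paper fine-tunes; the others
    (VGG16, Xception, DenseNet201, ...) can be swapped in via
    tf.keras.applications. The head architecture is an assumption.
    """
    base = tf.keras.applications.MobileNetV2(
        input_shape=input_shape, include_top=False, weights=weights)
    base.trainable = False  # freeze backbone features (transfer learning)

    model = models.Sequential([
        layers.Rescaling(1.0 / 255),            # simple pre-processing: normalize pixels
        base,
        layers.GlobalAveragePooling2D(),
        layers.Dropout(0.2),
        layers.Dense(1, activation="sigmoid"),  # P(crack)
    ])
    model.compile(optimizer="adam",
                  loss="binary_crossentropy",
                  metrics=["accuracy",
                           tf.keras.metrics.Precision(name="precision"),
                           tf.keras.metrics.Recall(name="recall")])
    return model

model = build_crack_classifier()
probs = model.predict(np.random.rand(2, 224, 224, 3).astype("float32"), verbose=0)
```

A `tf.keras.utils.image_dataset_from_directory` loader over an SDNET2018-style cracked/uncracked folder split could then feed `model.fit`; unfreezing the top backbone layers afterwards gives the usual fine-tuning stage.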
4. Experimental Results
5. Discussion
- Real-time processing: Real-time processing is critical in on-site crack detection, particularly when prompt decision-making or action is necessary. To ensure prompt results, it is imperative that the system efficiently processes the collected data and conducts the crack detection on time. Achieving real-time performance necessitates the assurance of efficient computational resources and optimized algorithms [41].
- Environmental factors: The robotic vision system's image quality can be affected by a range of environmental factors, including lighting conditions, shadows, reflections, and weather such as rain and fog. These factors can alter how visible and distinct cracks appear, leading to erroneous detections, either false positives or false negatives. The system must be designed to account for environmental influences to guarantee precise detection [42].
- Surface variations and textures: Various surfaces, including but not limited to concrete, asphalt, and different building materials, may display differences in texture, hue, and design. The presence of diverse surface characteristics and the inherent complexity and variation in crack patterns across various surfaces can present challenges for crack detection methods, which may require adaptation to handle these variations effectively [43].
- Generalization to unseen data: The system's deep learning models learn crack-related patterns and characteristics from the training data and must generalize to unseen data. Their performance on unseen data, or on surfaces that differ substantially from the training data, remains uncertain. The system should therefore be evaluated and validated on a range of datasets and tested on multiple surfaces to establish its generalizability and reliability.
- False positives and negatives: The challenge of crack detection lies in balancing false positives, the identification of non-crack areas as cracks, against false negatives, the failure to detect actual cracks. Deep learning models may produce false positives owing to noise, surface irregularities, or intricate patterns that resemble cracks, while faint or very small cracks may go undetected and yield false negatives. Continuous model refinement, optimization, and training on diverse datasets can help mitigate these challenges.
- Hardware limitations: The robotic system's hardware components, including sensors and cameras, must meet the requirements for capturing high-quality images and data. Crack-detection efficacy can be influenced by the camera's resolution, field of view, and image stabilization features, along with the precision and dependability of the other sensors. Ensuring suitable, dependable hardware components is essential to optimize the system's overall performance.
- Scalability and adaptability: The system under consideration must possess the ability to scale and adapt to diverse scenarios and applications. The system must effectively manage various crack types, from minor fissures to more substantial structural impairments. Additionally, the system should be capable of seamless deployment and compatibility with various robotic platforms to cater to a wide range of inspection environments and structures [43].
- To enhance the resilience and versatility of the deep learning models, augment the dataset used for both training and testing.
- The proposed system emphasizes visual data obtained through cameras; incorporating other sensor data, such as LiDAR or infrared imaging, could supply supplementary information that improves the precision of crack detection and evaluation.
- Incorporating real-time anomaly detection algorithms can be advantageous in conjunction with crack detection.
- To ensure effective deployment and acceptance of the proposed system, partner with infrastructure management organizations, road authorities, and industry stakeholders.
- To establish the efficacy and dependability of the proposed system in practical scenarios, carry out comprehensive field experiments on diverse road networks.
- Since the proposed system collects and processes visual data, potential privacy and security concerns must be addressed.
- Perform a thorough cost-benefit evaluation to determine the financial feasibility of expanding the proposed system to a broader scope.
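The false positive/false negative trade-off discussed above can be made concrete with a small self-contained sketch: sweeping the decision threshold applied to a model's crack-probability scores trades precision (fewer false positives) against recall (fewer false negatives). The scores and labels below are synthetic illustrations, not outputs of the proposed system.

```python
import numpy as np

def precision_recall(y_true, scores, threshold):
    """Precision and recall after thresholding crack-probability scores."""
    pred = scores >= threshold
    tp = np.sum(pred & (y_true == 1))   # true positives: real cracks flagged
    fp = np.sum(pred & (y_true == 0))   # false positives: non-cracks flagged
    fn = np.sum(~pred & (y_true == 1))  # false negatives: cracks missed
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

# Synthetic example: model scores for 8 surface regions; label 1 = real crack.
y = np.array([1, 1, 1, 1, 0, 0, 0, 0])
s = np.array([0.9, 0.8, 0.6, 0.4, 0.7, 0.3, 0.2, 0.1])

for t in (0.25, 0.5, 0.75):
    p, r = precision_recall(y, s, t)
    print(f"threshold={t:.2f}  precision={p:.2f}  recall={r:.2f}")
```

Raising the threshold suppresses false positives at the cost of missing faint cracks; in practice the operating point would be chosen from a validation-set precision-recall curve rather than fixed in advance.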
6. Conclusions
7. Future Directions
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
References
- Zeeshan, M.; Adnan, S.M.; Ahmad, W.; Khan, F.Z. Structural Crack Detection and Classification using Deep Convolutional Neural Network. Pak. J. Eng. Technol. 2021, 4, 50–56.
- Elghaish, F.; Talebi, S.; Abdellatef, E.; Matarneh, S.T.; Hosseini, M.R.; Wu, S.; Mayouf, M.; Hajirasouli, A.; Nguyen, T.-Q. Developing a new deep learning CNN model to detect and classify highway cracks. J. Eng. Des. Technol. 2022, 20, 993–1014.
- Xiao, R.; Ding, Y.; Polaczyk, P.; Ma, Y.; Jiang, X.; Huang, B. Moisture damage mechanism and material selection of HMA with amine antistripping agent. Mater. Des. 2022, 220, 110797.
- Kim, J.J.; Kim, A.-R.; Lee, S.-W. Artificial Neural Network-Based Automated Crack Detection and Analysis for the Inspection of Concrete Structures. Appl. Sci. 2020, 10, 8105.
- Hamishebahar, Y.; Guan, H.; So, S.; Jo, J. A Comprehensive Review of Deep Learning-Based Crack Detection Approaches. Appl. Sci. 2022, 12, 1374.
- Das, A.K.; Leung, C.; Wan, K.T. Application of deep convolutional neural networks for automated and rapid identification and computation of crack statistics of thin cracks in strain hardening cementitious composites (SHCCs). Cem. Concr. Compos. 2021, 122, 104159.
- Flah, M.; Nehdi, M.L. Automated Crack Identification Using Deep Learning Based Image Processing. In Proceedings of the CSCE 2021 Annual Conference, Niagara Falls, ON, Canada, 26–29 May 2021.
- Golding, V.P.; Gharineiat, Z.; Munawar, H.S.; Ullah, F. Crack Detection in Concrete Structures Using Deep Learning. Sustainability 2022, 14, 8117.
- Rao, A.S.; Nguyen, T.; Palaniswami, M.; Ngo, T. Vision-based automated crack detection using convolutional neural networks for condition assessment of infrastructure. Struct. Health Monit. 2021, 20, 2124–2142.
- Dais, D.; Bal, I.E.; Smyrou, E.; Sarhosis, V. Automatic crack classification and segmentation on masonry surfaces using convolutional neural networks and transfer learning. Autom. Constr. 2021, 125, 103606.
- Macaulay, M.O.; Shafiee, M. Machine learning techniques for robotic and autonomous inspection of mechanical systems and civil infrastructure. Auton. Intell. Syst. 2022, 2, 8.
- Kansal, I.; Kasana, S.S. Minimum preserving subsampling-based fast image de-fogging. J. Mod. Opt. 2018, 65, 2103–2123.
- Kansal, I.; Khullar, V.; Verma, J.; Popli, R.; Kumar, R. IoT-Fog-enabled robotics-based robust classification of hazy and normal season agricultural images for weed detection. Paladyn J. Behav. Robot. 2023, 14, 20220105.
- Verma, J.; Bhandari, A.; Singh, G. Review of Existing Data Sets for Network Intrusion Detection System. Adv. Math. Sci. J. 2020, 9, 3849–3854.
- Verma, J.; Bhandari, A.; Singh, G. iNIDS: SWOT Analysis and TOWS Inferences of State-of-the-Art NIDS solutions for the development of Intelligent Network Intrusion Detection System. Comput. Commun. 2022, 195, 227–247.
- Verma, J.; Bhandari, A.; Singh, G. Feature Selection Algorithm Characterization for NIDS using Machine and Deep learning. In Proceedings of the 2022 IEEE International IOT, Electronics and Mechatronics Conference (IEMTRONICS), Toronto, ON, Canada, 1–4 June 2022; pp. 1–7.
- Ni, F.; Zhang, J.; Chen, Z. Pixel-level crack delineation in images with convolutional feature fusion. Struct. Control Health Monit. 2019, 26, e2286.
- Yang, J.; Lin, F.; Xiang, Y.; Katranuschkov, P.; Scherer, R.J. Fast Crack Detection Using Convolutional Neural Network. In Proceedings of the EG-ICE 2021 Workshop on Intelligent Computing in Engineering, Berlin, Germany, 30 June–2 July 2021; pp. 540–549.
- Bhatt, D.; Patel, C.; Talsania, H.; Patel, J.; Vaghela, R.; Pandya, S.; Ghayvat, H. CNN variants for computer vision: History, architecture, application, challenges and future scope. Electronics 2021, 10, 2470.
- Yu, H.; Zhu, L.; Li, D.; Wang, Q.; Liu, X.; Shen, C. Comparative Study on Concrete Crack Detection of Tunnel Based on Different Deep Learning Algorithms. Front. Earth Sci. 2022, 9, 817785.
- Zhang, L.; Yang, F.; Zhang, Y.D.; Zhu, Y.J. Road crack detection using deep convolutional neural network. In Proceedings of the International Conference on Image Processing (ICIP), Phoenix, AZ, USA, 25–28 September 2016; pp. 3708–3712.
- Kaseko, M.S.; Ritchie, S.G. A neural network-based methodology for pavement crack detection and classification. Transp. Res. Part C Emerg. Technol. 1993, 1, 275–291.
- Huang, J.; Wu, D. Pavement crack detection method based on deep learning. In Proceedings of the CIBDA 2022—3rd International Conference on Computer Information and Big Data Applications, Wuhan, China, 25–27 March 2022; Volume 2021, pp. 252–255.
- Rajadurai, R.-S.; Kang, S.-T. Automated Vision-Based Crack Detection on Concrete Surfaces Using Deep Learning. Appl. Sci. 2021, 11, 5229.
- Maguire, M.; Dorafshan, S.; Thomas, R.J. SDNET2018: A Concrete Crack Image Dataset for Machine Learning Applications; Utah State University: Logan, UT, USA, 2018.
- Bhowmick, S.; Nagarajaiah, S. Automatic detection and damage quantification of multiple cracks on concrete surface from video. Int. J. Sustain. Mater. Struct. Syst. 2020, 4, 292.
- Le, T.-T.; Nguyen, V.-H.; Le, M.V. Development of Deep Learning Model for the Recognition of Cracks on Concrete Surfaces. Appl. Comput. Intell. Soft Comput. 2021, 2021, 8858545.
- Bhat, S.; Naik, S.; Gaonkar, M.; Sawant, P.; Aswale, S.; Shetgaonkar, P. A Survey On Road Crack Detection Techniques. In Proceedings of the 2020 International Conference on Emerging Trends in Information Technology and Engineering (ic-ETITE), Vellore, India, 24–25 February 2020; pp. 1–6.
- Khan, A.; Sohail, A.; Zahoora, U.; Qureshi, A.S. A survey of the recent architectures of deep convolutional neural networks. Artif. Intell. Rev. 2020, 53, 5455–5516.
- Deng, J.; Lu, Y.; Lee, V.C.S. Concrete crack detection with handwriting script interferences using faster region-based convolutional neural network. Comput. Aided Civ. Infrastruct. Eng. 2020, 35, 373–388.
- Yusof, N.A.M.; Ibrahim, A.; Noor, M.H.M.; Tahir, N.M.; Abidin, N.Z.; Osman, M.K. Deep convolution neural network for crack detection on asphalt pavement. J. Phys. Conf. Ser. 2019, 1349, 012020.
- Liu, Y.; Yao, J.; Lu, X.; Xie, R.; Li, L. DeepCrack: A deep hierarchical feature learning architecture for crack segmentation. Neurocomputing 2019, 338, 139–153.
- Liu, Z.; Cao, Y.; Wang, Y.; Wang, W. Computer vision-based concrete crack detection using U-net fully convolutional networks. Autom. Constr. 2019, 104, 129–139.
- Wang, L.; Ma, X.H.; Ye, Y. Computer vision-based Road Crack Detection Using an Improved I-UNet Convolutional Networks. In Proceedings of the 2020 Chinese Control and Decision Conference (CCDC), Hefei, China, 22–24 August 2020; pp. 539–543.
- Yang, Q.; Shi, W.; Chen, J.; Lin, W. Deep convolution neural network-based transfer learning method for civil infrastructure crack detection. Autom. Constr. 2020, 116, 103199.
- Mogalapalli, H.; Abburi, M.; Nithya, B.; Bandreddi, S.K.V. Classical–Quantum Transfer Learning for Image Classification. SN Comput. Sci. 2021, 3, 20.
- Saleem, M. Assessing the load carrying capacity of concrete anchor bolts using non-destructive tests and artificial multilayer neural network. J. Build. Eng. 2020, 30, 101260.
- Saleem, M.; Gutierrez, H. Using artificial neural network and non-destructive test for crack detection in concrete surrounding the embedded steel reinforcement. Struct. Concr. 2021, 22, 2849–2867.
- Garg, A.; Lilhore, U.K.; Ghosh, P.; Prasad, D.; Simaiya, S. Machine Learning-based Model for Prediction of Student’s Performance in Higher Education. In Proceedings of the 8th International Conference on Signal Processing and Integrated Networks (SPIN), Noida, India, 26–27 August 2021; pp. 162–168.
- Lilhore, U.K.; Simaiya, S.; Pandey, H.; Gautam, V.; Garg, A.; Ghosh, P. Breast Cancer Detection in the IoT Cloud-based Healthcare Environment Using Fuzzy Cluster Segmentation and SVM Classifier. In Ambient Communications and Computer Systems; Lecture Notes in Networks and Systems; Springer: Singapore, 2022; Volume 356, pp. 165–179.
- Heidari, A.; Navimipour, N.J.; Unal, M.; Zhang, G. Machine Learning Applications in Internet-of-Drones: Systematic Review, Recent Deployments, and Open Issues. ACM Comput. Surv. 2023, 55, 1–45.
- Hua, X.; Li, H.; Zeng, J.; Han, C.; Chen, T.; Tang, L.; Luo, Y. A Review of Target Recognition Technology for Fruit Picking Robots: From Digital Image Processing to Deep Learning. Appl. Sci. 2023, 13, 4160.
- Park, M.; Jeong, J. Design and Implementation of Machine Vision-Based Quality Inspection System in Mask Manufacturing Process. Sustainability 2022, 14, 6009.
- Zhao, H.; Zhang, C. An online-learning-based evolutionary many-objective algorithm. Inf. Sci. 2019, 509, 1–21.
- Dulebenets, M.A. An Adaptive Polyploid Memetic Algorithm for scheduling trucks at a cross-docking terminal. Inf. Sci. 2021, 565, 390–421.
- Kavoosi, M.; Dulebenets, M.A.; Abioye, O.; Pasha, J.; Theophilus, O.; Wang, H.; Kampmann, R.; Mikijeljević, M. Berth scheduling at marine container terminals: A universal island-based metaheuristic approach. Marit. Bus. Rev. 2019, 5, 30–66.
- Pasha, J.; Nwodu, A.L.; Fathollahi-Fard, A.M.; Tian, G.; Li, Z.; Wang, H.; Dulebenets, M.A. Exact and metaheuristic algorithms for the vehicle routing problem with a factory-in-a-box in multi-objective settings. Adv. Eng. Inform. 2022, 52, 101623.
- Gholizadeh, H.; Fazlollahtabar, H.; Fathollahi-Fard, A.M.; Dulebenets, M.A. Preventive maintenance for the flexible flowshop scheduling under uncertainty: A waste-to-energy system. Environ. Sci. Pollut. Res. 2021, 29, 1–20.
- Rabbani, M.; Oladzad-Abbasabady, N.; Akbarian-Saravi, N. Ambulance routing in disaster response considering variable patient condition: NSGA-II and MOPSO algorithms. J. Ind. Manag. Optim. 2022, 18, 1035–1062.
Ref. | Aim of Study | DL Algorithm Used | Robotics Technique Used
---|---|---|---
[16] | Detect concrete cracks using UAV-driven digital image processing | N/A | UAV-powered digital image processing
[17] | Provide a rigorous evaluation and critique of image processing for crack detection | N/A | N/A
[18] | Investigate methods for detecting road cracks | N/A | N/A
[19] | Review current deep CNN architectures | Deep convolutional neural networks | N/A
[20] | Extract properties of cracked concrete via image processing during bridge inspection | N/A | Image processing
[21] | Detect road cracks using a deep CNN | Deep convolutional neural network | N/A
[22] | Test how well several pre-trained CNNs detect construction cracks | Pre-trained convolutional neural networks | N/A
[23] | Apply a trained DL model to UAV photos of civil infrastructure to find crack damage | Pre-trained deep learning model | UAV imaging
[24] | Automatically detect road cracks with a deep CNN | Deep convolutional neural network | N/A
[25] | Provide a collection of annotated images for deep CNN-based non-contact concrete fracture identification | Deep convolutional neural networks | N/A
[26] | Detect concrete cracks in the presence of handwriting script interference | Faster region-based convolutional neural network | N/A
[27] | Detect asphalt pavement cracks | Deep convolutional neural network | N/A
[28] | Provide a hierarchical feature learning architecture for crack segmentation | N/A | N/A
[29] | Detect concrete cracks using computer vision-based methods | U-Net fully convolutional networks | N/A
[30] | Create a more effective I-UNet convolutional network for detecting road cracks | I-UNet convolutional networks | N/A
[31] | Create a deep CNN-based transfer learning technique for crack identification in civil infrastructure | Deep convolutional neural network | N/A
[32] | Image classification using classical-quantum transfer learning | Classical-quantum transfer learning | N/A
[33] | Evaluate the load-carrying capacity of concrete anchor bolts using artificial multilayer neural networks and non-destructive tests | Artificial multilayer neural network | N/A
Accuracy

Epoch | CNN | DenseNet201 | InceptionResNetV2 | MobileNetV2 | VGG16 | VGG19 | Xception
---|---|---|---|---|---|---|---
1 | 66.5775 | 52.6344 | 80.6098 | 38.7358 | 60.1676 | 38.7358 | 74.5921 |
2 | 66.5597 | 61.8615 | 74.5297 | 39.0479 | 72.8626 | 40.6615 | 85.6735 |
3 | 60.0428 | 50.9940 | 86.0301 | 30.8995 | 75.3499 | 52.1797 | 81.1447 |
4 | 58.5183 | 57.8675 | 86.1460 | 41.9809 | 64.8302 | 65.4007 | 82.2323 |
5 | 70.1168 | 64.6519 | 74.9131 | 60.3281 | 81.8044 | 75.9651 | 90.2469 |
6 | 65.4542 | 70.9815 | 78.8624 | 51.5468 | 81.9916 | 74.1999 | 86.9930 |
7 | 73.2549 | 76.2860 | 88.9097 | 45.6004 | 80.5830 | 78.9873 | 89.8814 |
8 | 78.4702 | 56.9582 | 88.0806 | 59.3474 | 73.4867 | 78.2027 | 88.2767 |
9 | 77.5876 | 73.9681 | 84.9782 | 45.2082 | 82.7227 | 78.3008 | 89.7388 |
10 | 77.8996 | 57.1989 | 87.6438 | 40.7774 | 80.6187 | 81.4300 | 86.9751 |
11 | 73.2549 | 70.6606 | 89.3911 | 50.9227 | 74.5832 | 82.2502 | 80.5296 |
12 | 72.9785 | 53.1069 | 84.7820 | 42.6050 | 82.6959 | 80.6098 | 86.7790 |
13 | 77.1775 | 66.6845 | 49.2199 | 47.5974 | 81.7331 | 78.0333 | 86.9127 |
14 | 76.2414 | 44.7178 | 85.9321 | 64.7143 | 82.8029 | 80.1462 | 80.6187 |
15 | 75.6887 | 41.5976 | 78.3097 | 60.9432 | 80.1997 | 82.1610 | 88.3124 |
16 | 75.3410 | 65.2670 | 85.3704 | 70.5982 | 79.0051 | 70.9726 | 90.5322 |
17 | 74.2355 | 42.8813 | 85.0049 | 58.4470 | 81.2160 | 79.2725 | 90.1489 |
18 | 72.6843 | 72.9072 | 86.9038 | 49.1932 | 80.4315 | 77.0884 | 87.9112 |
19 | 73.4510 | 64.1972 | 84.5502 | 80.9575 | 81.3141 | 81.6618 | 88.3748 |
20 | 73.6382 | 63.4929 | 84.7107 | 68.5388 | 72.1227 | 81.0734 | 89.9706 |
Loss

Epoch | CNN | DenseNet201 | InceptionResNetV2 | MobileNetV2 | VGG16 | VGG19 | Xception
---|---|---|---|---|---|---|---
1 | 0.0775 | 0.1115 | 0.0478 | 0.2042 | 0.0845 | 0.1224 | 0.0661 |
2 | 0.0801 | 0.0924 | 0.0731 | 0.1988 | 0.0769 | 0.1155 | 0.0381 |
3 | 0.0906 | 0.1491 | 0.0365 | 0.2168 | 0.0639 | 0.1076 | 0.0490 |
4 | 0.0906 | 0.1094 | 0.0350 | 0.1820 | 0.0781 | 0.0796 | 0.0463 |
5 | 0.0704 | 0.0946 | 0.0669 | 0.1249 | 0.0478 | 0.0607 | 0.0260 |
6 | 0.0859 | 0.0735 | 0.0559 | 0.1516 | 0.0471 | 0.0631 | 0.0363 |
7 | 0.0665 | 0.0654 | 0.0301 | 0.1774 | 0.0498 | 0.0534 | 0.0276 |
8 | 0.0573 | 0.1191 | 0.0322 | 0.1328 | 0.0616 | 0.0563 | 0.0335 |
9 | 0.0587 | 0.0716 | 0.0396 | 0.1781 | 0.0451 | 0.0549 | 0.0284 |
10 | 0.0565 | 0.1223 | 0.0336 | 0.1969 | 0.0494 | 0.0479 | 0.0344 |
11 | 0.0674 | 0.0796 | 0.0290 | 0.1604 | 0.0637 | 0.0471 | 0.0555 |
12 | 0.0684 | 0.1501 | 0.0407 | 0.1907 | 0.0458 | 0.0492 | 0.0394 |
13 | 0.0602 | 0.0997 | 0.1286 | 0.1734 | 0.0472 | 0.0542 | 0.0390 |
14 | 0.0636 | 0.1753 | 0.0376 | 0.1154 | 0.0453 | 0.0513 | 0.0549 |
15 | 0.0662 | 0.1825 | 0.0589 | 0.1288 | 0.0524 | 0.0471 | 0.0349 |
16 | 0.0675 | 0.1042 | 0.0383 | 0.0939 | 0.0533 | 0.0689 | 0.0278 |
17 | 0.0712 | 0.1849 | 0.0397 | 0.1377 | 0.0516 | 0.0526 | 0.0286 |
18 | 0.0755 | 0.0770 | 0.0356 | 0.1659 | 0.0538 | 0.0554 | 0.0350 |
19 | 0.0731 | 0.1039 | 0.0412 | 0.0597 | 0.0517 | 0.0480 | 0.0327 |
20 | 0.0744 | 0.0985 | 0.0420 | 0.0963 | 0.0802 | 0.0486 | 0.0301 |
Precision

Epoch | CNN | DenseNet201 | InceptionResNetV2 | MobileNetV2 | VGG16 | VGG19 | Xception
---|---|---|---|---|---|---|---
1 | 72.7496 | 57.4463 | 83.0615 | 38.7358 | 87.1830 | 0.0000 | 75.9108 |
2 | 78.7134 | 66.9144 | 75.6462 | 39.1033 | 82.4393 | 87.4785 | 86.5855 |
3 | 63.0467 | 51.0730 | 86.7354 | 30.9808 | 81.1182 | 63.8872 | 82.3788 |
4 | 66.6667 | 59.2572 | 86.8192 | 42.0516 | 72.9645 | 86.1176 | 82.7596 |
5 | 73.2454 | 65.9912 | 76.0318 | 60.4254 | 84.6228 | 79.7475 | 90.3765 |
6 | 69.1520 | 72.6900 | 80.3527 | 51.7756 | 83.7563 | 78.8481 | 87.3239 |
7 | 77.7636 | 76.9001 | 89.1256 | 45.6266 | 82.5322 | 82.0426 | 90.0572 |
8 | 82.6985 | 57.0150 | 88.3435 | 59.3633 | 78.0363 | 80.2758 | 88.4615 |
9 | 81.5405 | 74.5844 | 85.6768 | 45.2969 | 84.7576 | 79.8565 | 89.8285 |
10 | 80.9801 | 57.8230 | 88.0600 | 40.7774 | 82.6499 | 82.9683 | 87.1545 |
11 | 76.1155 | 71.3818 | 89.6320 | 50.9272 | 78.2683 | 84.4735 | 80.9190 |
12 | 74.9858 | 53.1459 | 85.1393 | 42.6202 | 84.0336 | 82.0412 | 86.9830 |
13 | 79.0208 | 66.9088 | 50.3478 | 47.6097 | 84.5687 | 80.9248 | 87.0020 |
14 | 77.8453 | 44.8356 | 86.3690 | 64.7195 | 84.4664 | 83.8642 | 81.2692 |
15 | 76.9658 | 41.7316 | 79.4142 | 60.9506 | 83.6102 | 84.1112 | 88.3376 |
16 | 76.2179 | 65.5284 | 85.8950 | 70.7271 | 82.2721 | 74.0648 | 90.6016 |
17 | 75.2725 | 42.8890 | 85.5954 | 58.4470 | 85.1226 | 81.3498 | 90.1767 |
18 | 73.4575 | 73.4121 | 87.2344 | 49.2407 | 83.4361 | 82.3676 | 87.9561 |
19 | 74.5433 | 64.6942 | 85.0979 | 80.9774 | 84.3397 | 83.8065 | 88.6262 |
20 | 74.5493 | 64.1787 | 85.1169 | 68.7058 | 86.3548 | 82.9129 | 90.0268 |
Recall

Epoch | CNN | DenseNet201 | InceptionResNetV2 | MobileNetV2 | VGG16 | VGG19 | Xception
---|---|---|---|---|---|---|---
1 | 60.8095 | 47.4904 | 78.0779 | 38.7358 | 33.7167 | 0.0000 | 73.7452 |
2 | 49.4161 | 60.9967 | 73.0498 | 39.0300 | 42.0612 | 13.5776 | 84.9336 |
3 | 56.9314 | 50.9227 | 85.4596 | 30.7212 | 65.8376 | 37.7730 | 80.2710 |
4 | 45.1814 | 56.6105 | 85.5576 | 41.8829 | 62.0754 | 36.1683 | 81.8668 |
5 | 66.8004 | 64.4557 | 74.2355 | 60.2746 | 78.0066 | 71.5075 | 90.0865 |
6 | 59.0354 | 69.5017 | 77.5876 | 51.3417 | 80.0303 | 68.5923 | 86.7790 |
7 | 65.1600 | 75.6798 | 88.7760 | 45.5737 | 78.2206 | 75.8402 | 89.7923 |
8 | 71.0350 | 56.8066 | 87.7686 | 59.3474 | 69.3679 | 75.7600 | 88.1697 |
9 | 71.1598 | 73.2014 | 84.3630 | 45.1636 | 80.9040 | 77.4004 | 89.6764 |
10 | 74.3960 | 55.9775 | 87.4476 | 40.7774 | 77.8015 | 80.0392 | 86.8592 |
11 | 70.2594 | 69.9563 | 89.2485 | 50.9227 | 69.3858 | 79.5935 | 80.2264 |
12 | 70.6339 | 53.0891 | 84.4789 | 42.6050 | 81.1269 | 79.6202 | 86.6185 |
13 | 75.2518 | 66.5151 | 48.3908 | 47.5885 | 78.3186 | 74.7348 | 86.8236 |
14 | 74.2088 | 44.5039 | 85.5220 | 64.6964 | 80.8594 | 74.8774 | 80.2621 |
15 | 74.1731 | 41.5084 | 77.5876 | 60.9343 | 74.4941 | 79.8520 | 88.2589 |
16 | 74.1999 | 65.1779 | 84.7464 | 70.5001 | 75.2162 | 68.3070 | 90.4966 |
17 | 73.2727 | 42.8546 | 84.3363 | 58.4470 | 74.5743 | 77.6946 | 90.1043 |
18 | 71.7482 | 72.5417 | 86.3867 | 49.1397 | 74.8150 | 73.5045 | 87.8934 |
19 | 72.3901 | 63.6445 | 84.0510 | 80.9486 | 74.4673 | 78.1582 | 88.2232 |
20 | 72.9874 | 62.6282 | 84.3809 | 68.3873 | 35.7136 | 78.8179 | 89.8903 |
Share and Cite
Popli, R.; Kansal, I.; Verma, J.; Khullar, V.; Kumar, R.; Sharma, A. ROAD: Robotics-Assisted Onsite Data Collection and Deep Learning Enabled Robotic Vision System for Identification of Cracks on Diverse Surfaces. Sustainability 2023, 15, 9314. https://doi.org/10.3390/su15129314