
Automated Quantification of Rebar Mesh Inspection in Hidden Engineering Structures via Deep Learning

Yalong Xie, Xianhui Nie, Hongliang Liu, Yifan Shen and Yuming Liu
1 School of Economics and Management, Beijing Jiaotong University, Beijing 100044, China
2 Institute of Computing Technology, China Academy of Railway Sciences Corporation Limited, Beijing 100081, China
3 Beijing Jingwei Information Technology Co., Ltd., Beijing 100081, China
4 School of Civil Engineering, Beijing Jiaotong University, Beijing 100044, China
* Author to whom correspondence should be addressed.
Appl. Sci. 2025, 15(3), 1063; https://doi.org/10.3390/app15031063
Submission received: 23 October 2024 / Revised: 27 November 2024 / Accepted: 3 December 2024 / Published: 22 January 2025

Abstract

This paper presents an in-depth study of the automated recognition and geometric information quantification of rebar meshes, proposing a deep learning-based method for rebar mesh detection and segmentation. By constructing a diverse rebar mesh image dataset, an improved Unet-based model was developed, incorporating residual modules to enhance the network’s feature extraction capabilities and training efficiency. The study found that the improved model maintains high segmentation accuracy and robustness even in the presence of complex backgrounds and noise. To achieve the precise measurement of rebar spacing, a rebar intersection detection algorithm based on convolution operations was designed, and the IQR (Interquartile Range) algorithm was applied to remove outliers, ensuring the accuracy and reliability of spacing calculations. The experimental results demonstrate that the proposed model and methods effectively and efficiently accomplish the automated recognition and geometric information extraction of rebar meshes, providing reliable technical support for the automated detection and geometric data analysis of rebar meshes in practical engineering applications.

1. Introduction

With the acceleration of global urbanization, the number and scale of civil engineering projects have increased continuously, and the complexity of engineering structures has gradually escalated [1]. In these projects, reinforcing mesh serves as a critical component in concrete structures, playing a key role in enhancing structural strength and prolonging service life [2,3]. Consequently, the identification and detection of reinforcing mesh are of great significance for ensuring construction quality and safety [4,5,6,7].
Methods for identifying reinforcing mesh can be categorized into two main types: contact-based and non-contact-based detection [8]. Contact-based detection typically relies on a substantial number of on-site operators who visually inspect or use simple tools to determine the distribution of the reinforcing mesh. This approach is not only time-consuming and labor-intensive but also lacks stability and accuracy due to the varying professional skills and experience levels of the operators. Additionally, the complexity of the construction environment and variations in the density of the reinforcing mesh, surface coatings, or stains on the reinforcement further increase the difficulty of manual detection, thereby reducing accuracy. In contrast, non-contact-based detection methods, primarily based on deep learning algorithms in computer vision, can extract feature information from images [9,10]. Compared with contact-based methods, automated identification techniques using computer vision generally offer higher efficiency and precision [11,12,13,14,15]. As a commonly employed non-destructive testing method, the machine vision-based identification of reinforcing mesh is widely applied in practical engineering due to its simplicity of operation and cost-effectiveness [16,17,18]. However, in environments with dense reinforcing mesh, this approach still faces accuracy challenges. Therefore, improving the ability to identify reinforcing mesh in complex backgrounds has become one of the primary research focuses [19].
Upon the completion of the identification and segmentation of the reinforcing mesh, geometric information statistics constitute a key step in engineering applications [20,21,22]. Geometric information statistics involve the extraction and analysis of data such as the spacing between reinforcements, which holds critical significance for engineering design and construction quality control [23,24,25]. For instance, the accurate calculation of reinforcement spacing directly impacts the strength and stability of concrete structures [26,27,28,29]. In traditional engineering practices, these geometric data often depend on manual measurements and calculations, a process that is cumbersome and prone to errors [30]. The application of modern image processing technologies enables the automation of geometric information statistics. Through the geometric analysis of the identified reinforcing mesh combined with image processing algorithms, automatic calculations of reinforcement spacing can be performed. This process not only enhances statistical efficiency but also significantly improves data accuracy by reducing the influence of human error [31,32,33]. Nevertheless, the accuracy of geometric information statistics still faces challenges, particularly in scenarios where imprecise segmentation results from complex backgrounds lead to a decline in statistical precision. Hence, developing efficient and accurate methods for geometric information statistics, especially for the extraction of geometric data from reinforcing mesh in complex environments, remains an essential task in the current research [34].
In summary, this study aims to develop an algorithm for the identification of reinforcing mesh and geometric information statistics based on deep learning to achieve the efficient and accurate recognition and segmentation of reinforcing mesh, followed by geometric data analysis. A deep learning model suitable for the identification and segmentation of reinforcing mesh will then be developed, with the optimization of the model architecture and parameters to improve recognition accuracy and speed. Based on the identification and segmentation, a geometric information statistics module will be developed to enable the automatic calculation of reinforcement spacing. The proposed method will be tested and validated in real-world engineering environments, providing reliable technical support for future applications in engineering. The outcomes of this study will contribute to improving the quality of rebar spacing detection in hidden engineering projects and provide valuable references for related research fields.

2. Dataset Establishment

Due to the absence of publicly available reinforcing mesh datasets, it is imperative to construct a dataset consisting of a large number of annotated images of reinforcing mesh. During data collection, camera equipment was employed to capture original images of reinforcing mesh in real-world engineering projects. The diversity and representativeness of the dataset are crucial to the model’s performance. Consequently, reinforcing mesh images were collected from various construction environments, including different scenes such as bridges, tunnels, and roads, as well as samples of different forms and densities of reinforcing meshes and cages. The image resolution in the dataset ranges from 1920 × 1080 to 3840 × 2160, ensuring that the model can handle high-resolution data.
To enhance the diversity of the dataset, factors that affect detection and dimension calculation accuracy were considered during the image collection process. As shown in Figure 1, images (a) and (b) reflect differences in the complexity of the background, while images (c) and (d) illustrate the contrast between the background and the color of the reinforcing mesh. Additionally, images (e) and (f) depict situations where shadows and reflective spots caused by the reinforcing mesh are present.
After acquiring the original images of the reinforcing mesh, manual annotation was performed using the open-source labeling software Labelme (version 3.16.7) to obtain the mask labels for the reinforcement. A polygon was used to outline the edges of each instance of reinforcement, and the area enclosed by the polygon was considered the reinforcement mask. Each individual rebar, node position, and its geometric attributes within the mesh were precisely annotated. Moreover, the dataset includes various complexities commonly encountered in construction environments, such as reflections on the reinforcement surface, coatings, and stains, to enhance the model’s robustness and adaptability.
A total of more than 9000 images were initially collected. To improve the generalization ability of the model and reduce the similarity between the data, a combination of data augmentation techniques was applied to expand the dataset. These techniques included geometric transformations such as image translation, flipping, and affine transformations, as well as pixel-level transformations like brightness and contrast adjustment, and the addition of Gaussian noise. As shown in Figure 2(1), the original image from the dataset was transformed through horizontal flipping and contrast adjustment, resulting in Figure 2(2), while cropping, magnification, and Gaussian noise were applied to produce Figure 2(3). After augmentation, images with excessive similarity were further filtered out, and a final count of 21,000 images was retained, meeting the data volume requirements for neural network training. To maximize the use of the data for model performance testing, the dataset was divided into training (70%), validation (15%), and testing (15%) sets [35].
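The paper does not name its augmentation tooling; as a minimal sketch of the combined geometric and pixel-level transforms described above, the following assumes the albumentations library, with illustrative parameters and hypothetical file names:
```python
# Sketch of the augmentation pipeline (albumentations assumed; parameters illustrative).
import albumentations as A
import cv2

augment = A.Compose([
    A.HorizontalFlip(p=0.5),                                                       # geometric: flipping
    A.Affine(translate_percent=0.05, scale=(0.9, 1.1), rotate=(-10, 10), p=0.5),   # translation / affine
    A.RandomBrightnessContrast(brightness_limit=0.2, contrast_limit=0.2, p=0.5),   # pixel-level adjustment
    A.GaussNoise(p=0.3),                                                           # additive Gaussian noise
])

image = cv2.imread("rebar_mesh.jpg")                            # hypothetical input image
mask = cv2.imread("rebar_mesh_mask.png", cv2.IMREAD_GRAYSCALE)  # corresponding annotation mask
out = augment(image=image, mask=mask)                           # the mask receives the same geometric transforms
aug_image, aug_mask = out["image"], out["mask"]
```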

3. Model Training and Testing

The task of segmenting reinforcing mesh typically requires accurately identifying the edges and intersection points of the reinforcement, which demands high precision. The Unet model is well suited for this task due to its skip connection mechanism, which effectively captures edges and intersections in images [36,37]. Additionally, the Unet model’s recognition accuracy can be further improved by adjusting the network’s depth and width or incorporating other models or methods, such as ResNet or attention mechanisms. Therefore, this study proposes to utilize the Unet deep learning model for the segmentation of reinforcing mesh.
The structure of the Unet network is shown in Figure 3 and is mainly divided into the input part, encoder, decoder, and output part. In the encoder section, multiple layers of convolution and pooling operations are used to extract image features, gradually reducing the spatial dimensions of the feature maps while increasing the semantic information. In the middle section, highlighted in red, the skip connection mechanism directly transfers high-resolution semantic features from the encoder to the corresponding layers in the decoder, preserving detailed parts of the reinforcing mesh in the image. In the decoding section, the image resolution is progressively restored through continuous upsampling and convolution operations, resulting in an output semantic feature map of the reinforcing mesh.

3.1. ReUnet Model

In various deep learning feature extraction networks, whether for classification, detection, instance segmentation, or semantic segmentation, the primary objective is to deeply extract features. Therefore, the encoder section’s primary framework is a deep neural network, designed to extract image features. Building upon the UNet architecture, this study incorporates residual blocks to further enhance the network’s representation capabilities and training efficiency [38].
The core idea of the residual block is to introduce shortcut connections, allowing the input to be directly passed to the output, thereby forming a “residual connection”. Specifically, the output of the residual block is further expressed as follows:
y = F(x) + x
In this context, x represents the input, and F(x) denotes the output obtained after passing x through two 3 × 3 convolutional layers followed by the ReLU activation function.
In this study, residual blocks have been primarily incorporated into the encoding and decoding sections of the Unet model. Each downsampling unit, composed of two convolutional layers and one pooling layer, has its convolutional layers replaced by residual blocks, as indicated by the blue arrows in the figure. The same procedure is applied to the upsampling operations (transposed convolutions). The integration of residual blocks not only preserves critical information from the images but also improves the efficiency of network training. Moreover, it enhances the model’s performance when dealing with high-dimensional and complex features.
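A minimal PyTorch sketch of such a residual block, used in place of U-Net’s paired 3 × 3 convolutions, is given below; the channel counts, batch normalization layers, and the 1 × 1 shortcut projection for mismatched shapes are assumptions rather than details taken from the paper:
```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Two 3x3 convolutions with a shortcut connection: y = F(x) + x."""
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.conv1 = nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(out_ch)
        self.conv2 = nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(out_ch)
        self.relu = nn.ReLU(inplace=True)
        # Shortcut: identity when shapes match, otherwise a 1x1 projection (assumption).
        self.shortcut = (nn.Identity() if in_ch == out_ch
                         else nn.Conv2d(in_ch, out_ch, kernel_size=1, bias=False))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        y = self.relu(self.bn1(self.conv1(x)))   # first 3x3 convolution + ReLU
        y = self.bn2(self.conv2(y))              # second 3x3 convolution -> F(x)
        return self.relu(y + self.shortcut(x))   # y = F(x) + x

# One encoder stage: residual block followed by 2x2 max pooling (channel counts illustrative).
down_stage = nn.Sequential(ResidualBlock(64, 128), nn.MaxPool2d(2))
```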

3.2. Evaluation of Model Metrics Before and After Improvement

In the semantic segmentation training and validation phases, the loss function serves as a critical metric for evaluating the performance of deep learning networks. It is primarily used to assess both the model’s actual convergence behavior and its overall training performance. To ensure the reliability of the comparison, the cross-entropy loss function was employed for loss value calculation in both the pre- and post-improvement models. It is worth noting that this function remains the most commonly used classification loss function in convolutional neural networks. Cross-entropy is primarily utilized to measure the actual distance between two probability distributions. The calculation formula is as follows:
H(p, q) = −Σ_x p(x) log q(x)
Here, p represents the true distribution, given by the one-hot encoding of the ground-truth label, and q represents the predicted distribution, obtained after the output of the neural network has been processed by softmax regression.
During the testing phase, this study applied the Mean Intersection over Union (MIoU) to measure the recognition performance of each pixel [39]. Due to space limitations, the detailed calculation process of MIoU is not elaborated here.
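For reference, a minimal sketch of one common way to compute MIoU from integer label maps is shown below; it is not taken from the paper and simply averages the per-class intersection over union across the classes present:
```python
import numpy as np

def mean_iou(pred: np.ndarray, target: np.ndarray, num_classes: int) -> float:
    """pred/target: integer label maps of identical shape."""
    ious = []
    for c in range(num_classes):
        inter = np.logical_and(pred == c, target == c).sum()
        union = np.logical_or(pred == c, target == c).sum()
        if union > 0:                  # skip classes absent from both maps
            ious.append(inter / union)
    return float(np.mean(ious))

# Example for binary rebar/background segmentation:
# miou = mean_iou(pred_labels, gt_labels, num_classes=2)
```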
As shown in Figure 4, the comparison of cross-entropy loss values and MIoU provides an intuitive demonstration of the performance differences between the models before and after the improvement throughout the training, validation, and testing stages. Significant enhancements were observed when the model was applied to real-world reinforcing mesh scenarios. The improved model converged faster, with the cross-entropy loss decreasing more rapidly and training efficiency increasing accordingly. Overfitting on the validation set was also reduced, enhancing the model’s generalization capability. Segmentation accuracy improved as well, with MIoU performing strongly across the training, validation, and testing sets; in complex scenes in particular, the model accurately identified detailed portions of the reinforcing mesh.
The improved model also demonstrated stronger robustness and stability, proving its adaptability to scenarios with complex backgrounds. While maintaining high accuracy, computational efficiency was preserved, with no significant increase in inference time or resource usage. Overall, the enhanced model showed notable improvements in accuracy, efficiency, and robustness, providing reliable technical support for practical engineering applications.

4. Method for Calculating Rebar Mesh Spacing

4.1. Binary Image Thinning

Binary image thinning [40], also known as skeletonization, is a widely used technique in the field of image processing. Its primary objective is to reduce the feature objects in a binary image to a skeleton structure with a single-pixel width while retaining as much of the original image’s geometric shape and topological structure as possible. This allows for the extraction of representative feature information. The process holds significant importance in various fields, including image analysis, pattern recognition, and computer vision.
In this study, a morphology-based thinning algorithm was utilized. The algorithm applies morphological operations iteratively, gradually removing redundant boundary pixels from the image while retaining key points such as connections, endpoints, and isolated points. These key points are crucial for describing the geometric features of the image and effectively preserving the image’s original shape and structural information. The basic principle of the thinning algorithm is illustrated in Figure 5, where a pixel matrix consisting of 0 s and 1 s is processed through a series of iterative operations to generate a sparse skeleton, thus achieving the goal of image thinning.
Due to the complex and prominent geometric features of rebar mesh images, the thinning process imposes higher demands. By applying thinning to the semantically segmented rebar mesh images, the geometric properties of the mesh can be preserved while significantly reducing the computational complexity involved in the subsequent geometric feature extraction and analysis. This method is highly valuable in practical engineering applications, particularly when dealing with large image datasets, as it significantly improves computational efficiency and reduces resource consumption.
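As an illustration, a thinning step of this kind can be sketched with scikit-image’s skeletonize as a stand-in for the paper’s morphology-based iterative algorithm; the file names are hypothetical:
```python
import cv2
import numpy as np
from skimage.morphology import skeletonize

mask = cv2.imread("rebar_mask.png", cv2.IMREAD_GRAYSCALE)  # hypothetical segmentation output
binary = mask > 127                                        # boolean image: rebar pixels = True
skeleton = skeletonize(binary)                             # iterative thinning to single-pixel width
cv2.imwrite("rebar_skeleton.png", skeleton.astype(np.uint8) * 255)
```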

4.2. Detection of Rebar Mesh Intersections

The intersections of the rebar mesh are key structural features, and accurately locating these intersections is crucial for the subsequent measurements of reinforcement spacing. To address this, a filtering-based method for intersection detection is proposed in this paper. This method utilizes a circular-like filter kernel for pixel-level processing, ensuring the effective extraction of intersection locations. Unlike traditional methods, this approach leverages the local response characteristics of the filter kernel, fully utilizing the prominent features of intersection pixels to accurately detect the points where vertical and horizontal rebars intersect.

4.2.1. Filtering Operation

The circular-like filtering is based on the higher response values generated at intersection points during pixel-level filtering operations. Since intersections appear as single pixels after the skeletonization process, the response values at these intersection pixels during filtering are typically greater than those in surrounding non-intersection areas. Therefore, the circular-like filter kernel effectively extracts these prominent intersections by analyzing the pixel intensity within the local region.

4.2.2. Design of the Circular-like Filter Kernel

As illustrated in Figure 6 and Figure 7, the circular-like filter kernel differs from traditional filter kernels. It centers on the current pixel while reducing the weight of boundary elements, focusing primarily on processing the pixels in the central region. Although intersections are represented as single pixels after skeletonization, the circular-like filter kernel covers multiple surrounding pixels. Compared to traditional filter kernels, this design produces higher local response values at intersection points. The filter kernel effectively responds to the central region while reducing noise from the boundaries. By counting the number of non-zero pixels within the area covered by the filter kernel, it accurately distinguishes intersection points from other pixel regions, thereby improving both the accuracy and robustness of intersection detection.
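A minimal sketch of such a circular-like kernel, approximated here with OpenCV’s elliptical structuring element so that the corner weights are zeroed, is shown below; the kernel size of 7 is an assumption, since the paper does not state it:
```python
import cv2
import numpy as np

KERNEL_SIZE = 7  # assumed; the paper does not specify the kernel size
circular_kernel = cv2.getStructuringElement(
    cv2.MORPH_ELLIPSE, (KERNEL_SIZE, KERNEL_SIZE)
).astype(np.float32)
# The zeroed corner weights suppress contributions from pixels far from the kernel
# centre, so intersections yield higher responses than straight skeleton segments.
print(circular_kernel)
```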

4.3. Feature Extraction and Threshold Determination

In the circular-like filtering operation, each pixel is covered by the filter kernel, and the number of non-zero pixels in the local area is calculated. Since intersection points exhibit higher local responses, the filtering operation can determine whether a point is an intersection by setting a reasonable threshold. The threshold is set to 20% of the maximum distance within the filter area; a pixel whose response exceeds this threshold is considered a potential intersection and is retained with a value of 255, while all other pixels are set to 0. This threshold-based statistical method allows the circular-like filter to accurately extract the locations of intersection points. Once intersection detection is complete, the results are saved as a binary image using the OpenCV module. The detected intersection points can then be directly utilized for subsequent reinforcement spacing measurements and analysis.
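A sketch of this counting-and-thresholding step is given below; it interprets “20% of the maximum distance within the filter area” as 20% of the maximum filter response, which is an assumption, and the file names are hypothetical:
```python
import cv2
import numpy as np

skeleton = cv2.imread("rebar_skeleton.png", cv2.IMREAD_GRAYSCALE)  # thinned mesh from Section 4.1
binary = (skeleton > 0).astype(np.float32)

circular_kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (7, 7)).astype(np.float32)

# Response = number of non-zero skeleton pixels covered by the kernel at each position.
response = cv2.filter2D(binary, ddepth=-1, kernel=circular_kernel)

threshold = 0.2 * response.max()               # 20% rule, interpreted as a fraction of the maximum response
candidates = np.where(response > threshold, 255, 0).astype(np.uint8)
candidates[binary == 0] = 0                    # keep only pixels lying on the skeleton itself
cv2.imwrite("intersection_candidates.png", candidates)
```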

4.4. Duplicate Detection Suppression (Figure 8)

To prevent the same intersection point from being marked multiple times during detection, a duplicate detection suppression mechanism is introduced. During the intersection detection process, once a pixel is identified as an intersection point, a suppression mask of the same size as the filter kernel is applied, ensuring that the same intersection is not detected again in subsequent operations. This suppression operation effectively reduces redundant detections, ensuring that each intersection point is only marked once.
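A minimal sketch of one way to implement this suppression is shown below; the kernel size and the greedy highest-response-first ordering are assumptions:
```python
import numpy as np

def suppress_duplicates(response: np.ndarray, threshold: float, k: int = 7):
    """Greedy suppression: accept the strongest response, then zero a k x k window around it."""
    work = response.copy()
    points = []
    while True:
        r, c = np.unravel_index(np.argmax(work), work.shape)
        if work[r, c] <= threshold:
            break
        points.append((r, c))                                        # accepted intersection (row, col)
        h = k // 2
        work[max(0, r - h):r + h + 1, max(0, c - h):c + h + 1] = 0   # suppression mask
    return points
```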

4.5. Calculation of Rebar Mesh Spacing Based on Intersection Detection (Figure 9)

The rebar mesh consists of interwoven longitudinal and transverse rebars, making it necessary to handle the calculation of their spacing separately. However, due to the complexity of the interlacing mesh, traditional methods struggle to clearly differentiate between the spacing of the transverse and longitudinal rebars. Therefore, this paper proposes an automated spacing calculation method based on intersection detection which accurately distinguishes and measures the spacing of both the transverse and longitudinal rebars. The specific steps are as follows:
Step 1—Intersection Classification: After intersection detection is complete, the first task is to classify the intersections. This is performed by calculating the Euclidean distance and the absolute slope between each intersection and its neighboring intersections, and the connections are classified according to the slope of the lines joining them: connections with larger slopes are typically considered longitudinal, while those with smaller slopes are classified as transverse. For each intersection, the distances to its eight nearest neighboring points are computed during the spacing measurement so that all neighboring points are fully considered; finally, the four closest intersections representing the vertical and horizontal directions are retained (see the sketch below).
It should be noted that this method becomes more accurate as the distribution of detected intersections becomes sparser. Therefore, in practical rebar mesh intersection detection, the user can selectively crop the target rebar mesh to reduce the number of intersections detected.
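A sketch of the neighbor classification in Step 1 is given below; the point layout, the slope cut-off of 1.0, and the helper name are illustrative assumptions:
```python
import numpy as np

def classify_neighbors(points: np.ndarray, idx: int, k: int = 8, slope_cut: float = 1.0):
    """points: (N, 2) array of (x, y) intersection coordinates."""
    p = points[idx]
    d = np.linalg.norm(points - p, axis=1)
    d[idx] = np.inf                                   # exclude the point itself
    neighbors = points[np.argsort(d)[:k]]             # eight nearest intersections
    vertical, horizontal = [], []
    for q in neighbors:
        dx, dy = q[0] - p[0], q[1] - p[1]
        slope = abs(dy / dx) if dx != 0 else np.inf   # absolute slope of the connecting line
        (vertical if slope > slope_cut else horizontal).append(q)
    # Keep the two closest candidates in each direction (four points in total).
    vertical = sorted(vertical, key=lambda q: np.linalg.norm(q - p))[:2]
    horizontal = sorted(horizontal, key=lambda q: np.linalg.norm(q - p))[:2]
    return vertical, horizontal
```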
Step 2—Spacing Calculation: In this step, the transverse reinforcement spacing is calculated by measuring the distance between the intersections with smaller slopes, while the longitudinal reinforcement spacing is obtained by calculating the distance between the intersections with larger slopes. The final spacing result is computed using the following formula, where taking the average effectively avoids the deviations caused by single calculation errors, ensuring a more accurate result.
D_rebar = (m / r) · (Σ_{i=0}^{n} d_i) / (n + 1)
where D_rebar represents the final reinforcement spacing, d_i denotes the i-th pixel distance measured between adjacent intersections (with n + 1 such distances averaged), m is the actual diameter of the reinforcement (based on the actual rebar dimensions), and r is the pixel length of the reinforcement diameter (obtained in the feature map by arranging measurement lines).
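As a worked illustration with assumed numbers (a 12 mm rebar whose diameter spans 15 pixels, and five measured pixel distances), the formula converts pixel measurements into a physical spacing as follows:
```python
# Assumed values for illustration only.
m_mm, r_px = 12.0, 15.0
d_px = [186, 190, 188, 185, 191]                      # hypothetical pixel distances d_i
spacing_mm = m_mm * sum(d_px) / (r_px * len(d_px))    # D_rebar = m * sum(d_i) / (r * (n + 1))
print(round(spacing_mm, 1))                           # 150.4 mm
```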
Step 3—Outlier Removal: During the calculation process, certain intersections may produce outlier values due to issues like image segmentation errors or other factors, which could affect the final spacing results. To eliminate these outliers, the Interquartile Range (IQR) method is introduced in this study. By applying the IQR method, abnormal data points are identified and removed, ensuring that the calculated spacing is not skewed by such anomalies.
The Interquartile Range (IQR) method is an outlier detection technique based on the medians and quartiles of the data distribution, particularly suitable for removing outliers in univariate continuous data. The core idea of the IQR method is to identify outliers by analyzing the range of the data distribution, thereby avoiding the influence of extreme values on the calculation results. Quartiles divide the ordered data into four equal parts. The IQR represents the range of the middle 50% of the data, and its calculation formula is as follows:
IQR = Q3 − Q1
where Q1 is the first quartile (the 25th percentile), and Q3 is the third quartile (the 75th percentile). Data points that fall below Q1 − 1.5 × IQR or above Q3 + 1.5 × IQR are considered outliers and are excluded from the final calculations.
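A minimal sketch of the IQR filter using NumPy percentiles is shown below; the sample values are illustrative:
```python
import numpy as np

def remove_outliers_iqr(values):
    values = np.asarray(values, dtype=float)
    q1, q3 = np.percentile(values, [25, 75])
    iqr = q3 - q1
    lower, upper = q1 - 1.5 * iqr, q3 + 1.5 * iqr
    return values[(values >= lower) & (values <= upper)]

# Example: a single mis-segmented distance (610 px) is dropped before averaging.
spacings = [188, 190, 185, 187, 610, 189]
print(remove_outliers_iqr(spacings).mean())   # 187.8
```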

5. Case Study

The method was applied to the civil engineering works of the underground section of the Xiong’an New Area of the Beijing–Xiong’an High-Speed Railway. The scope of the project encompasses multiple complex structural elements, including railway embankments, bridges, tunnels, and stations, with a total length of approximately 24.838 km and a transverse width of 135 m. Given the extensive use of reinforced rebar mesh structures in the project, ensuring that the spacing distribution of the rebar mesh adheres to relevant standards and regulations is a crucial aspect of quality assurance. Through the automated detection of rebar mesh spacing, the efficient and precise verification of reinforcement layout can be achieved, reducing the errors associated with manual inspections and ensuring that the construction quality meets regulatory requirements.
In the detection of rebar mesh spacing, the improved U-Net model was employed for semantic segmentation. The model was trained on an NVIDIA RTX 4060 GPU with an Intel Core i9-14900K CPU, with model weights pre-trained on the ImageNet dataset, thereby leveraging transfer learning to reach satisfactory performance in fewer training iterations. During training, the random seed was set to 11 to ensure the reproducibility of the experimental results, and the batch size was set to 10, chosen to fit the memory constraints of the hardware while maintaining training stability. For the optimization strategy, a cosine annealing learning rate schedule was applied; this schedule gradually decreases the learning rate during training, with a more pronounced reduction as convergence approaches, which helps prevent overfitting and accelerates final convergence. The maximum learning rate was set to 1 × 10−4 and the minimum to 1 × 10−6, ensuring rapid convergence in the early stages of training while allowing more refined learning in the later stages. The Adam optimizer was employed with a momentum parameter of 0.9; its adaptive learning rate capabilities further enhance training efficiency.
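A sketch of this training configuration in PyTorch is given below; the model constructor and dataset objects are hypothetical placeholders, while the seed, batch size, learning rates, scheduler, and optimizer settings follow the values reported above:
```python
import torch
from torch.utils.data import DataLoader

torch.manual_seed(11)                                    # reproducibility (seed 11)

model = build_resunet(pretrained=True)                   # hypothetical constructor with ImageNet-pretrained weights
train_loader = DataLoader(train_dataset, batch_size=10, shuffle=True)  # train_dataset is a placeholder

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4, betas=(0.9, 0.999))  # beta1 = 0.9 (momentum)
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=100, eta_min=1e-6)
criterion = torch.nn.CrossEntropyLoss()

for epoch in range(100):
    for images, labels in train_loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
    scheduler.step()                                     # cosine decay applied once per epoch
```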
The training process consisted of 100 epochs, with the training results shown in Figure 4a. Finally, an intersection detection algorithm was employed. This algorithm enables the automatic identification of intersections within the rebar mesh and the precise calculation of mesh spacing based on a predefined threshold. The threshold is guided by the model’s training outcomes and the actual detection performance, and it can be flexibly adjusted according to different environmental conditions, operating states, and precision requirements; typically, it is set to 20% of the maximum distance within the filter area, enabling the accurate detection of rebar mesh spacing as depicted in Figure 10. The application results indicate that, for regularly distributed rebar mesh structures, the accuracy of spacing detection exceeds 90% when compared with the true values. It should be noted that this accuracy is based on images captured under proper shooting conditions, with the camera perpendicular to the target at a distance of 0.5–2 m; this shooting strategy reduces the impact of camera angle and rebar mesh depth on the calculations.

6. Conclusions

This paper focuses on quantifying geometric information from rebar mesh, discussing methods for obtaining geometric data, detecting intersection features, and calculating mesh spacing.
To extract geometric information, the study enhances the traditional Unet model by adding multiple residual blocks. The improved Unet model performs better in training and capturing rebar mesh features, especially in image segmentation tasks affected by noise. The inclusion of residual blocks also increased the model stability, optimization efficiency, training speed, and overall performance.
For detecting rebar mesh intersections, the paper proposes an algorithm based on morphological operators and various filter sizes. The results show that intersection detection is reliable when compared with manual annotations and demonstrates robustness in complex reinforcement patterns. The paper also presents a method for calculating rebar spacing based on the detected intersections, offering an automated approach for mesh spacing calculation.
This paper primarily focuses on the extraction and quantification of geometric features and the distribution of rebar mesh using deep learning image algorithms, and proposes a method for calculating the spacing of rebar mesh. This provides significant assistance in the on-site detection and verification of rebar mesh spacing. However, the current method for calculating rebar mesh spacing based on image feature patterns is still limited by external factors such as the shooting angle and mesh depth. Future research could develop a database of rebar mesh images captured from multiple angles and under varying conditions, and create more flexible methods for calculating rebar mesh spacing. This would enable automated measurement across different shooting angles, thereby improving the accuracy and adaptability of rebar mesh detection.

Author Contributions

Methodology, Y.X., H.L. and Y.S.; Validation, Y.S.; Formal analysis, Y.L.; Investigation, Y.L.; Writing—original draft, Y.S.; Writing—review & editing, Y.S.; Visualization, Y.L.; Supervision, Y.L.; Project administration, X.N.; Funding acquisition, Y.X., X.N. and H.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Science and Technology Research and Development Program of CHINA RAILWAY (N2023G005). The APC was funded by the Institute of Computing Technology, China Academy of Railway Sciences Corporation Limited.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The original contributions presented in this study are included in the article; further inquiries can be directed to the corresponding author.

Conflicts of Interest

Authors Xianhui Nie and Hongliang Liu were employed by the company Beijing Jingwei Information Technology Co., Ltd. The remaining authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

References

  1. Welford, M.R.; Yarbrough, R.A. Urbanization. In Human-Environment Interactions: An Introduction; Palgrave Macmillan: Cham, Switzerland, 2021; pp. 193–214. [Google Scholar]
  2. Sakthivel, P.; Ravichandran, A.; Alagamurthi, N. Impact Strength of Hybrid Rebar mesh-and-Fiber Reinforced Cementitious Composites. KSCE J. Civ. Eng. 2015, 19, 1385–1395. [Google Scholar] [CrossRef]
  3. Li, J.; Wu, C.; Hao, H.; Su, Y. Experimental and Numerical Study on Steel Wire Mesh Reinforced Concrete Slab under Contact Explosion. Mater. Des. 2017, 116, 77–91. [Google Scholar] [CrossRef]
  4. Hu, J.; Zhang, S.; Chen, E.; Li, W. A Review on Corrosion Detection and Protection of Existing Reinforced Concrete (RC) Structures. Constr. Build. Mater. 2022, 325, 126718. [Google Scholar] [CrossRef]
  5. Zhang, J.; Hu, Z. BIM-and 4D-Based Integrated Solution of Analysis and Management for Conflicts and Structural Safety Problems during Construction: 1. Principles and Methodologies. Autom. Constr. 2011, 20, 155–166. [Google Scholar] [CrossRef]
  6. Hollaway, L.C.; Leeming, M. Strengthening of Reinforced Concrete Structures: Using Externally-Bonded FRP Composites in Structural and Civil Engineering; Elsevier: Amsterdam, The Netherlands, 1999; ISBN 1-85573-761-2. [Google Scholar]
  7. Zhang, S.; Sulankivi, K.; Kiviniemi, M.; Romo, I.; Eastman, C.M.; Teizer, J. BIM-Based Fall Hazard Identification and Prevention in Construction Safety Planning. Saf. Sci. 2015, 72, 31–45. [Google Scholar] [CrossRef]
  8. Dąbek, P.; Krot, P.; Wodecki, J.; Zimroz, P.; Szrek, J.; Zimroz, R. Measurement of Idlers Rotation Speed in Belt Conveyors Based on Image Data Analysis for Diagnostic Purposes. Measurement 2022, 202, 111869. [Google Scholar] [CrossRef]
  9. Blehm, C.; Vishnu, S.; Khattak, A.; Mitra, S.; Yee, R.W. Computer Vision Syndrome: A Review. Surv. Ophthalmol. 2005, 50, 253–262. [Google Scholar] [CrossRef] [PubMed]
  10. Asadi, P.; Gindy, M.; Alvarez, M.; Asadi, A. A Computer Vision Based Rebar Detection Chain for Automatic Processing of Concrete Bridge Deck GPR Data. Autom. Constr. 2020, 112, 103106. [Google Scholar] [CrossRef]
  11. Minaee, S.; Boykov, Y.; Porikli, F.; Plaza, A.; Kehtarnavaz, N.; Terzopoulos, D. Image Segmentation Using Deep Learning: A Survey. IEEE Trans. Pattern Anal. Mach. Intell. 2021, 44, 3523–3542. [Google Scholar] [CrossRef]
  12. Ghosh, S.; Das, N.; Das, I.; Maulik, U. Understanding Deep Learning Techniques for Image Segmentation. ACM Comput. Surv. 2019, 52, 1–35. [Google Scholar] [CrossRef]
  13. Garcia-Garcia, A.; Orts-Escolano, S.; Oprea, S.; Villena-Martinez, V.; Garcia-Rodriguez, J. A Review on Deep Learning Techniques Applied to Semantic Segmentation. arXiv 2017, arXiv:1704.06857. [Google Scholar]
  14. Hao, S.; Zhou, Y.; Guo, Y. A Brief Survey on Semantic Segmentation with Deep Learning. Neurocomputing 2020, 406, 302–321. [Google Scholar] [CrossRef]
  15. Roth, H.R.; Shen, C.; Oda, H.; Oda, M.; Hayashi, Y.; Misawa, K.; Mori, K. Deep Learning and Its Application to Medical Image Segmentation. Med. Imaging Technol. 2018, 36, 63–71. [Google Scholar]
  16. Qureshi, A.H.; Alaloul, W.S.; Murtiyoso, A.; Saad, S.; Manzoor, B. Comparison of Photogrammetry Tools Considering Rebar Progress Recognition. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2022, 43, 141–146. [Google Scholar] [CrossRef]
  17. Asadi, P. Computer Vision Based Method for Automatic Rebar Detection in Ground Penetrating Radar Data; University of Rhode Island: Kingston, RI, USA, 2019; ISBN 1-392-72537-2. [Google Scholar]
  18. Frangez, V.; Lloret-Fritschi, E.; Taha, N.; Gramazio, F.; Kohler, M.; Wieser, A. Depth-Camera-Based Rebar Detection and Digital Reconstruction for Robotic Concrete Spraying. Constr. Robot. 2021, 5, 191–202. [Google Scholar] [CrossRef]
  19. Ahmed, H.; La, H.M.; Gucunski, N. Rebar Detection Using Ground Penetrating Radar with State-of-the-Art Convolutional Neural Networks. In Proceedings of the 9th International Conference on Structural Health Monitoring of Intelligent Infrastructure, St. Louis, MO, USA, 4–7 August 2019; pp. 4–7. [Google Scholar]
  20. Li, Y.; Lu, Y.; Chen, J. A Deep Learning Approach for Real-Time Rebar Counting on the Construction Site Based on YOLOv3 Detector. Autom. Constr. 2021, 124, 103602. [Google Scholar] [CrossRef]
  21. Santos, R.; Ribeiro, D.; Lopes, P.; Cabral, R.; Calçada, R. Detection of Exposed Steel Rebars Based on Deep-Learning Techniques and Unmanned Aerial Vehicles. Autom. Constr. 2022, 139, 104324. [Google Scholar] [CrossRef]
  22. Han, K.; Gwak, J.; Golparvar-Fard, M.; Saidi, K.; Cheok, G.; Franaszek, M.; Lipman, R. Vision-Based Field Inspection of Concrete Reinforcing Bars. In Proceedings of the 13th International Conference on Construction Applications of Virtual Reality, London, UK, 30–31 October 2013; pp. 30–31. [Google Scholar]
  23. Li, F.; Kim, M.-K.; Lee, D.-E. Geometrical Model Based Scan Planning Approach for the Classification of Rebar Diameters. Autom. Constr. 2021, 130, 103848. [Google Scholar] [CrossRef]
  24. Wang, Q.; Cheng, J.C.; Sohn, H. Automated Estimation of Reinforced Precast Concrete Rebar Positions Using Colored Laser Scan Data. Comput.-Aided Civ. Infrastruct. Eng. 2017, 32, 787–802. [Google Scholar] [CrossRef]
  25. Aram, S.; Eastman, C.; Venugopal, M.; Sacks, R.; Belsky, M. Concrete Reinforcement Modeling for Efficient Information Sharing. In Proceedings of the 30th International Symposium on Automation and Robotics in Construction and Mining, Montreal, QC, Canada, 11–15 August 2012; Volume 30, pp. 1056–1064. [Google Scholar]
  26. Pantazopoulou, S.J. Detailing for Reinforcement Stability in RC Members. J. Struct. Eng. 1998, 124, 623–632. [Google Scholar] [CrossRef]
  27. Hassoun, M.N.; Al-Manaseer, A. Structural Concrete: Theory and Design; John Wiley & Sons: Hoboken, NJ, USA, 2020; ISBN 1-119-60511-3. [Google Scholar]
  28. Fragiadakis, M.; Papadrakakis, M. Performance-based Optimum Seismic Design of Reinforced Concrete Structures. Earthq. Eng. Struct. Dyn. 2008, 37, 825–844. [Google Scholar] [CrossRef]
  29. Zhao, J.; Sritharan, S. Modeling of Strain Penetration Effects in Fiber-Based Analysis of Reinforced Concrete Structures. ACI Struct. J. 2007, 104, 133. [Google Scholar]
  30. Huang, J.; Menq, C.-H. Automatic Data Segmentation for Geometric Feature Extraction from Unorganized 3-D Coordinate Points. IEEE Trans. Robot. Autom. 2001, 17, 268–279. [Google Scholar] [CrossRef]
  31. Luo, C.; Cheng, C.; Zheng, Q.; Yao, C. Geolayoutlm: Geometric Pre-Training for Visual Information Extraction. In Proceedings of the 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Vancouver, BC, Canada, 18–22 June 2023; pp. 7092–7101. [Google Scholar]
  32. Srinivasan, R.; Liu, C.; Fu, K. Extraction of Manufacturing Details from Geometric Models. Comput. Ind. Eng. 1985, 9, 125–133. [Google Scholar] [CrossRef]
  33. Poli, D.; Remondino, F.; Angiuli, E.; Agugiaro, G. Radiometric and Geometric Evaluation of GeoEye-1, WorldView-2 and Pléiades-1A Stereo Images for 3D Information Extraction. ISPRS J. Photogramm. Remote Sens. 2015, 100, 35–47. [Google Scholar] [CrossRef]
  34. Kadioglu, F.; Pidaparti, R.M. Composite Rebars Shape Effect in Reinforced Structures. Compos. Struct. 2005, 67, 19–26. [Google Scholar] [CrossRef]
  35. Xu, M.; Yoon, S.; Fuentes, A.; Park, D.S. A Comprehensive Survey of Image Augmentation Techniques for Deep Learning. Pattern Recognit. 2023, 137, 109347. [Google Scholar] [CrossRef]
  36. Li, X.; Chen, H.; Qi, X.; Dou, Q.; Fu, C.-W.; Heng, P.-A. H-DenseUNet: Hybrid Densely Connected UNet for Liver and Tumor Segmentation from CT Volumes. IEEE Trans. Med. Imaging 2018, 37, 2663–2674. [Google Scholar] [CrossRef] [PubMed]
  37. Cao, H.; Wang, Y.; Chen, J.; Jiang, D.; Zhang, X.; Tian, Q.; Wang, M. Swin-Unet: Unet-like Pure Transformer for Medical Image Segmentation. In Proceedings of the Computer Vision Workshops (ECCV 2022), Tel Aviv, Israel, 23–27 October 2022; Springer: Berlin/Heidelberg, Germany, 2022; pp. 205–218. [Google Scholar]
  38. Chen, T.; Son, Y.; Park, A.; Baek, S.-J. Baseline Correction Using a Deep-Learning Model Combining ResNet and UNet. Analyst 2022, 147, 4285–4292. [Google Scholar] [CrossRef]
  39. Zhao, H.; Shi, J.; Qi, X.; Wang, X.; Jia, J. Pyramid Scene Parsing Network. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; pp. 2881–2890. [Google Scholar]
  40. Pavlidis, T. A Thinning Algorithm for Discrete Binary Images. Comput. Graph. Image Process. 1980, 13, 142–157. [Google Scholar] [CrossRef]
Figure 1. Rebar mesh images with complex backgrounds: (a,b) overlapping rebars; (c,d) color differences among different types of rebars; (e,f) obvious shadows behind the rebars.
Figure 2. Data augmentation modes: (1)–(5) data augmentation results of the original image under different methods.
Figure 3. Schematic diagram of the Unet network framework.
Figure 4. Comparison of model performance during training, validation, and testing: (a) comparison of loss values of different models during training; (b) comparison of loss values of different models during validation; (c) comparison of MIoU of different models during testing.
Figure 5. Image thinning process.
Figure 6. Feature detection performance of different filters under regular rebar mesh distribution: (a) filter traversal process diagram; (b) square filter; (c) circular filter; (d) image features after filtering with a square filter; (e) image features after filtering with a circular filter.
Figure 7. Feature detection performance of different filters under irregular rebar mesh distribution: (a) filter traversal process diagram; (b) square filter; (c) circular filter; (d) image features after filtering with a square filter; (e) image features after filtering with a circular filter.
Figure 8. Schematic diagram of the duplicate suppression mask.
Figure 9. Schematic diagram of intersection detection: (a) process of detecting neighboring intersections (nearest intersection); (b) extracted intersection image (red boxes indicate identified intersections).
Figure 10. Actual detection results of rebar mesh spacing: (a–f) comparison of detection results and true values for different rebar specimen spacings.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
