Article

The Efficiency of YOLOv5 Models in the Detection of Similar Construction Details

by Tautvydas Kvietkauskas 1, Ernest Pavlov 2, Pavel Stefanovič 3,* and Birutė Pliuskuvienė 3
1 Department of Information Technology, Vilnius Gediminas Technical University, Saulėtekio al. 11, LT-10223 Vilnius, Lithuania
2 Department of Electronic Systems, Vilnius Gediminas Technical University, Saulėtekio al. 11, LT-10223 Vilnius, Lithuania
3 Department of Information Systems, Vilnius Gediminas Technical University, Saulėtekio al. 11, LT-10223 Vilnius, Lithuania
* Author to whom correspondence should be addressed.
Appl. Sci. 2024, 14(9), 3946; https://doi.org/10.3390/app14093946
Submission received: 22 February 2024 / Revised: 15 April 2024 / Accepted: 4 May 2024 / Published: 6 May 2024
(This article belongs to the Special Issue Computer Vision in Automatic Detection and Identification)

Abstract

Computer vision solutions have become widely used in various industries and in daily life. One of the main computer vision tasks is object detection. With the development of object detection algorithms and the growing amount of diverse image data, different problems arise in building models suitable for various solutions. This paper investigates the influence of the parameters used in the training process on the detection of similar kinds of objects, i.e., the hyperparameters of the algorithm and the training parameters. The experimental investigation focuses on the widely used YOLOv5 algorithm and analyses the performance of its five models (n, s, m, l, x). A newly collected construction details dataset (22 categories) is used in the research. Experiments are performed using pre-trained YOLOv5 models. A total of 185 YOLOv5 models are trained and evaluated. All models are tested on 3300 images photographed on three different backgrounds: mixed, neutral, and white. Additionally, the best-obtained models are evaluated using 150 new images, each of which contains several dozen construction details and is photographed against the different backgrounds. The deep analysis of the different YOLOv5 models and hyperparameters shows the influence of the various parameters on the detection of similar objects. The best model was obtained using YOLOv5l with the following parameters: coloured images; image size—320; batch size—32; epoch number—300; layer freeze option—10; data augmentation—on; learning rate—0.001; momentum—0.95; and weight decay—0.0007. These results may be useful for various tasks in which small and similar objects are analysed.

1. Introduction

Over the past decade, the application of artificial intelligence has grown in various areas. Many different methods of artificial intelligence can be used for different types of data analysis, such as those involving numbers, texts, sounds, and images. Deep learning methods play a significant role in various fields of scientific research, partly because model building can use not only the CPU but also the GPU. One field of artificial intelligence based on deep learning methods is computer vision. The most popular computer vision tasks are image classification, segmentation, and object detection. For example, in medicine, image data can be used to predict different diseases, such as cancer [1,2], glaucoma [3,4], and pneumonia [5,6]. Object detection models can be used in systems for travel direction recommendation [7], in industry for robotization tasks [8,9], in face detection for different applications [10,11], or in other fields [12,13,14,15,16]. Usually, various computer vision methods or their combinations are used to solve a specific problem, because there is no single method that is appropriate for all of them, and the results may depend on various factors.
One of the factors in building successful artificial intelligence models is properly prepared data for model training. A large amount of research has analysed data whose features are distinctly different and whose objects are naturally large in the real world [17,18]. In such cases, the obtained object detection results are high. A more complex task is to detect similar objects within an image, especially when the objects are small in the real world. Similar objects can be described by the following characteristics: shape, colour, size, etc. For example, in medical pill detection, some pills can look identical in different images, depending on the angle, distance, lighting, shadows, and other external factors. At self-service checkouts [19], object detection methods have been implemented to detect fruits, and it is difficult for models to determine what type of apple the customer is trying to buy due to the similarity between fruits. The same problem occurs in construction detail analysis because some details can look identical. Detecting similar objects requires a much deeper analysis.
In this paper, the efficiency of YOLOv5 is investigated using the newly collected construction details dataset [20]. In construction detail analysis, datasets have a large number of categories, and items have similar features. It is therefore important to investigate the parameters of the dataset and find which of them have the highest influence on object detection. Additionally, the size of the chosen YOLOv5 model and the selected training hyperparameters must be taken into account. The training dataset used in the experimental investigation consisted of 440 images (22 construction details on a white background, with 20 images in each category). Additionally, a test dataset was prepared, consisting of 3300 images (22 construction details on 3 different backgrounds, with 50 images of each category on each background). Because training each model costs a large amount of time, the experimental investigation was performed in two stages. In the first stage, primary research was performed to determine the influence of epoch number, image size, batch size, the layer freeze option, and data augmentation on object detection results. A total of 50 experiments were performed using the models most popular with and widely used by other researchers—YOLOv5s and YOLOv5m. During the primary experiments, the best parameters were found and used in the second stage. In the second stage, a total of 135 experiments were performed using all five models of YOLOv5 (n, s, m, l, x). The main aim was to find the best training hyperparameters: learning rate, weight decay, and momentum. The main contributions of the paper are as follows:
(1)
The newly collected dataset has been prepared, is publicly available, and can be used in various computer vision tasks.
(2)
The five YOLOv5 models of different sizes have been experimentally investigated using the newly collected construction details dataset. A total of 185 experiments have been performed, in which various combinations of the training and algorithm parameters have been analysed.
(3)
The results of the experimental investigation have shown the efficiency of different models, which allows us to see which nondefault parameters help to achieve higher object detection results. This could be useful for other researchers when analysing similar featured data.
(4)
The models could be used in recommendation systems that suggest a possible construction by detecting several dozen construction details in one image.
The structure of this paper is as follows. In Section 2, related works are reviewed. No research has yet addressed the problem we are solving, i.e., the detection of similar construction details; nevertheless, the most closely related research is overviewed. In addition, a brief overview of the most popular object detection algorithms is presented. In Section 3, the experimental investigation scheme is presented and described in detail. All steps, from data collection and data preprocessing to model training and evaluation, are presented. In Section 4, the discussion and limitations of the research are presented. Section 5 concludes the paper.

2. Related Works

The literature analysis has shown that, due to the complexity of the task, there is a lack of research that focuses on the detection of similar objects. Therefore, it is difficult to perform a comparative analysis of such research results. Usually, in such types of object detection tasks, the accuracy of the obtained model is smaller compared with other types of data. Therefore, in various investigations, different object detection algorithms and their parameters are changed in order to increase the model’s accuracy. Several research studies have been published that deal with the problem of similar object detection, though they have used different kinds of data.
In the investigation by Kwon et al. [21], the detection of medical pills was analysed using a deep learning algorithm. The authors proposed a two-step model based on a mask region-based convolutional neural network (Mask R-CNN) [22] that improved the detection performance of medical pills. In the first step, the object localization problem was solved in order to detect the medical pill in the image, and, in the second step, multiclass classification was performed in order to determine the possible type of the medical pill. According to the testing results of the proposed model and YOLOv3 [23], the accuracy of the proposed Mask R-CNN model (91%) is 18% higher than the result obtained using YOLOv3 (73%). The results obtained have shown that the proposed model can be applied in cases when a small amount of data is used to train the object detection models. Another study, which also focused on the real-time detection of medical pills, was performed by Tan et al. [24]. In this research, the efficiencies of the following three object detection algorithms were investigated: RetinaNet, Single Shot Multi-Box Detector (SSD), and YOLOv3. The results of the experimental investigation show that RetinaNet is not suitable for real-time medical pill detection due to its slow performance (17 FPS), although its accuracy was the highest among the analysed algorithms (82.89%). The highest speed was obtained by YOLOv3 (51 FPS), but its accuracy is lower (80.69%) compared with RetinaNet and SSD. Intermediate performance was obtained by the SSD algorithm, whose accuracy was equal to 82.71% (slightly lower compared with RetinaNet) and whose speed was equal to 32 FPS. Summarizing the results, the authors state that YOLOv3 is more suitable for similar object detection tasks when medical pills are analysed. In the research by Ou et al., models based on convolutional neural networks were used to detect and classify medical pills in images. In 2018 [25], an improved model of Inceptionv3 [26] was used, and the models were trained using a newly collected dataset. The prepared dataset consisted of more than 470,000 images, where each category (different types of medical pills, for a total of 131 categories) had approximately 3600 images taken from various angles. During the experimental research, the resolution of the images was transformed to 299 × 299. The accuracy of the model was evaluated using additional images of medical pills, comprising 400 images with 2825 annotations. The proposed model achieved 79.4% accuracy. Later, in 2020, Ou et al. [27] used Inception-ResNetv2 for the medical pill classification task due to its experimental performance. The same type of dataset was used, but with a larger amount of medical pill images (612 categories) prepared for the model training process. Furthermore, the authors analysed the efficiency of various classifiers (VGG-16, VGG-19, ResNet-50, ResNet-101, Inceptionv3, Inceptionv4, Xception, Inception-ResNetv1, Inception-ResNetv2). The highest accuracy (82.1%) was achieved using Inception-ResNetv2, and the lowest accuracy was obtained using VGG-16 (40.5%).
Saeed et al. [28] proposed an approach for the detection of small industrial objects using an improved faster regional convolutional neural network (Improved Faster RCNN). The main aim of their research was to detect and recognize screws in images. This problem is also related to the problem of similar object detection because, in some images taken from different angles, the various screw types may look the same. To train the models, the authors collected a new dataset from many images of industrial products in which screws could be found. A total of 917 original images of four different types (325, 163, 251, 178) of screws were taken. Augmentation of the dataset was applied, and a total of 63,013 images were used in the experimental investigation. The efficiency of the proposed improved model of Faster RCNN was compared with RCNN, Fast RCNN, and Faster RCNN. The experimental results show that the highest accuracy was achieved using the improved Faster RCNN (~91%), followed by the Faster RCNN (~89%), Fast RCNN (~84%), and RCNN (~83%). In the research by Yildiz et al. [29], the authors proposed a combination of the Xception and Inceptionv3 models in order to detect screws in automated disassembly processes. The main objective of the research was to detect screws during hard disk disassembly. All images analysed in the training process were transformed to greyscale. In the research, the efficiencies of Xception, Inceptionv3, ResneXt101, InceptionResnetv2, Densenet201, and Resnet101v2 were evaluated. All analysed models achieved an accuracy greater than 96%, but the highest accuracy was obtained by Inceptionv3 (98.8%), followed by InceptionResnetv2 (98.6%), and ResneXt101 with Xception (98.5%). The lowest accuracy was obtained using Resnet101v2 (96.9%). The authors decided to combine the two models with the highest accuracy to increase the accuracy of the combined classifier. For this reason, the results of the models were combined using chosen weights, and the final prediction results were calculated. The combination of the proposed models achieved 99% accuracy when analysing the selected dataset. In the research by Mangold et al. [30], YOLOv5 models were used to detect screw heads for automated disassembly and remanufacturing. The authors investigated two models of the YOLOv5 group—YOLOv5s and YOLOv5m. The dataset used in the investigation was pre-processed, and the size of the images was reduced to 640 × 640 (the original size of the images was 1200 × 1200). During model training, the batch size was equal to 32. The results of the experimental investigation show that the highest accuracy was obtained using the YOLOv5s model (mAP@0.5—98.4% and mAP@0.5:0.95—83.4%). A slightly lower accuracy was obtained using YOLOv5m (mAP@0.5—98% and mAP@0.5:0.95—82.6%), but the difference between the models' accuracy is not significant. The trained YOLOv5s model was evaluated in a real environment, where 20 small and 7 large motor images were passed to the model in order for it to detect screws. The testing results show that 39 out of 45 screws were correctly detected in the images of the small motors and 15 out of 17 screws were correctly detected in the images of the large motors.
This literature review has shown that many object detection algorithms exist and are used in various fields, for example, RCNN, Faster RCNN, SSD, YOLO, etc. [31,32,33]. Nowadays, one of the most popular object detection algorithm groups is YOLO, which can be used in real-time object detection tasks and whose algorithms allow one to obtain promising results in different areas. Of course, there exist many versions of the YOLO algorithm, from the first original version of YOLO to YOLOv8, YOLO-NAS, and YOLO with transformers [34]. The newest versions of YOLO, starting from YOLOv6, are still in the development process, so there are various issues with their practical use. One of the most stable recent versions is YOLOv5, which is widely used in scientific research, such as small and similar object detection [35]. YOLOv5 differs from previous versions of the YOLO algorithm because it uses the PyTorch framework, rather than Darknet, and because it uses CSPDarknet53 as the backbone. The YOLOv5 architecture uses the path aggregation network (PANet) as a neck to increase the flow of information. The head of YOLOv5 is the same as that of YOLOv3 and YOLOv4; it generates three different feature map outputs to achieve multiscale prediction. This helps to effectively improve the prediction of small and large objects by the model. The output layer generates the results. In the manuscript by Dlužnevskij et al. [36], experimental research was performed to investigate the efficiency of YOLOv5 on a mobile device in real-time object detection tasks. Four different models of YOLOv5 were analysed (YOLOv5s, YOLOv5m, YOLOv5l, YOLOv5x). The experiments were conducted using the original COCO dataset, reduced to fit the requirements of the mobile environment. The results of the experimental investigation show that the performance of the model is highly influenced by the hardware architecture and the system in which the model is used.
In our previous research [37], the influence of training parameters on the real-time detection of construction details using YOLOv5s was analysed. Parameters such as image resolution, batch size, iteration number, and the colour of images were investigated. The focus was only on the YOLOv5s model, which is usually suitable for real-time object detection in a limited technical environment, such as mobile phones. The results of the experimental investigation have shown that, in many cases, the optimal resolution of the construction detail images should be 320 × 320 or 640 × 640 and that colour images allow slightly better results compared with greyscale images. Choosing a higher resolution leads to lower accuracy of construction detail detection. Furthermore, during model training, the batch size should be chosen as 16 or 32 to achieve higher model accuracy. The limitation of that research was that the other versions of YOLOv5 (n, m, l, x) were not analysed. Additionally, the hyperparameters were not changed during the model training process; instead, only the best hyperparameters based on the analysis of related works were used. The results of related works [38,39] have shown that, generally, other similar research has focused only on a small number of YOLOv5 hyperparameters, such as learning rate, momentum, augmentation parameters, and weight selection, and that other hyperparameters are usually not changed. Moreover, in that research, the dataset used in the training process was not balanced, which is important to consider when evaluating the models. Therefore, it is necessary to investigate the influence of the different versions of YOLOv5 and their hyperparameters on the detection of construction details.
Related works have shown that there is no single best model for object detection and that the results depend on various factors. One of the most important factors is the dataset being analysed. When similar objects, such as medical pills, screws, or construction details, are analysed, correct detection depends on the angle of the camera, the lighting, and the position. In some cases, one object can look similar to another. Image pre-processing, such as the colour of the image or the size of the resolution, also influences the detection results. During the training of the models, it is important to select suitable hyperparameters. However, in computer vision, each new combination of hyperparameters costs a lot of training time because of the image analysis tasks and the model complexities. Object detection model selection is also one of the hardest parts, because related works have shown that older models, such as RCNN, Fast RCNN, or Faster RCNN, can achieve an accuracy that is not inferior to the latest models. In addition, there are many versions or modifications of object detection models in the scientific literature. All these facts show that it is important to investigate the efficiency of object detection models under various factors and to find the best combination for each specific domain.

3. Experimental Investigation

To investigate the influence of various training parameters on the different models of YOLOv5, an experimental investigation was performed. YOLOv5 is a large step forward in object detection algorithms, departing from its predecessors by leveraging the PyTorch framework and incorporating the CSPDarknet53 backbone with a new pooling architecture. This architecture addresses feature fusion and computational efficiency concerns, improving object localisation accuracy while reducing model size. The focus layer enhances memory use and propagation efficiency [40]. Different combinations of training parameters have been used to find the highest accuracy of construction detail detection and, for this reason, a total of 185 models have been created and evaluated. The research workflow is presented in Figure 1.
The research was performed in two stages. The first stage focuses on the training parameters and the second stage focuses on the hyperparameters of the pre-trained YOLOv5 models [40]. All of the YOLOv5 models presented in Table 1 have been trained using the well-known COCO2017 dataset, which was collected and prepared for object detection and segmentation tasks. The COCO2017 dataset is a subset of the MS COCO dataset (containing 164,000 images of 80 different objects with bounding boxes and segmentation masks for each data item). The models were trained using 118,000 images, and the remainder of the dataset was used for validation (5000 images) and testing (41,000 images) of the models.
All of the steps of the experimental investigation that were performed, from data preparation to model training and evaluation, are described in this section in more detail.
During the experimental investigation, all models were trained in an environment with the following specifications: Intel(R) Xeon(R) CPU E5-2630 v4 @ 2.20GHz (20 Threads, 10 Cores). The environment used a Linux operating system with 32 GB DDR4 RAM and a Tesla P100 PCIe 12GB GPU.
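As an illustration (not taken from the paper's materials), a single training run with the best-performing configuration reported below could be launched through the Ultralytics YOLOv5 repository's train.py script roughly as in the following sketch; the dataset YAML, the custom hyperparameter file, and the run name are hypothetical placeholders.

```python
import subprocess

# Minimal sketch, assuming the Ultralytics YOLOv5 repository is cloned and a
# dataset YAML named construction_details.yaml (hypothetical) describes the
# annotated training images. The values correspond to the best configuration
# found in this study: image size 320, batch size 32, 300 epochs, the first
# 10 (backbone) layers frozen, and COCO pre-trained YOLOv5l weights.
subprocess.run([
    "python", "train.py",
    "--img", "320",                          # input image size
    "--batch", "32",                         # batch size
    "--epochs", "300",                       # number of training epochs
    "--freeze", "10",                        # freeze the first 10 backbone layers
    "--weights", "yolov5l.pt",               # COCO pre-trained checkpoint
    "--data", "construction_details.yaml",   # hypothetical dataset config
    "--hyp", "hyp.custom.yaml",              # custom hyperparameters (see Section 3.2)
    "--name", "Yolov5l_320_32_300_Frz_CusAugm",
], check=True)
```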

3.1. Results of the Primary Research

The newly collected construction details dataset was used in the experimental investigation [20]. The dataset was constructed in such a way that it is divided into three parts in order to be used in three stages. In the first stage, the dataset of 440 images was collected to train the models. The dataset consists of 22 different categories of images of construction details that were photographed on a white background. Each construction detail has been rotated 20 times in order for each picture to show a new angle. Each item of the dataset has been manually annotated. The number of images in the dataset is not high because the pre-trained models of YOLOv5 have been used. A sample of the analysed dataset is presented in Figure 2.
To evaluate the efficiency of the models, a larger number of construction details have been prepared. Each construction detail used in the training process has been photographed 50 times from different angles using 3 different backgrounds: white (W), neutral (N), and mixed (M) (Figure 3). The main reason for using three different backgrounds is to simulate the efficiency of the models in a real environment. On a neutral background, all analysed construction details can be clearly observed. In contrast, on a white background, all details usually stand out and are highlighted from the background, except the construction details of the white colour. On the third background, which is mixed, the pattern can be considered as noise. In this case, it is more difficult to correctly detect the object compared with the white and neutral backgrounds. A total of 3300 images were prepared (1100 images on each background).
The last part of the dataset which was used in the experimental investigation is formed by 150 images. In these, several dozen construction details are placed on the three backgrounds (Figure 4). The main idea of these images is to evaluate the efficiency of the YOLOv5 models that obtained the highest accuracy.
As mentioned above, selecting different parameters during the training process can lead to different results. It is important to investigate not only the hyperparameters of the YOLOv5 models, but also the other important parameters that could influence the final detection results. Training each model can be time consuming, so the experimental study was divided into two stages. The first was called primary research, and the second was called main research. In the primary research, the influence of the following five parameters was investigated: epoch number (an epoch is one complete forward and backward pass of all training examples), batch size (the number of images processed simultaneously in a forward pass), image size, the layer freezing option, and different data augmentation options. Related works have shown the efficiency of YOLOv5s and YOLOv5m when used for the detection of small objects with similar features. Therefore, these two models were chosen for the primary research. Various combinations of the training parameters were used in the training process (Table 2) and a total of 50 models were created and evaluated.
To find the best parameter combination, the 50 models were evaluated using the 3300 test images. The experiments were named according to the parameters used in the training process. For example, the model name Yolov5s_320_16_300_DefAugm means that the YOLOv5s model was used with the following parameters: image size—320; batch size—16; epoch number—300; default parameters of data augmentation. The results of the experimental investigation show that, without data augmentation, the detection results are much lower compared with the results obtained using augmentation, regardless of whether the default or custom options of data augmentation were used. Additionally, the first experiments showed that object detection accuracy increases significantly when the first 10 backbone layers are frozen.
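The parameter grid and the naming convention of the primary research can be illustrated with a short sketch; the values follow Table 2, the augmentation tags are labels only, and, as stated above, only 50 of the generated combinations were actually trained.

```python
from itertools import product

# Illustrative sketch of the primary-research grid and of the naming
# convention used for the trained models (e.g. Yolov5s_320_16_300_DefAugm).
models = ["Yolov5s", "Yolov5m"]
image_sizes = [320, 640]
batch_sizes = [16, 32]
epoch_numbers = [300, 600]
augmentations = ["NoAugm", "DefAugm", "Frz_CusAugm"]

for model, size, batch, epochs, augm in product(
        models, image_sizes, batch_sizes, epoch_numbers, augmentations):
    name = f"{model}_{size}_{batch}_{epochs}_{augm}"
    print(name)  # each name identifies one training configuration
```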
The influence of different combinations of the data augmentation options has been analysed. The results of the experiment show that the best detection ratio achieved is equal to 0.4 (40% correct detection). In this case, the highest number of construction details was detected no matter which background was used (322 construction details on the mixed (M) background; 509 construction details on the neutral (N) background; 485 construction details on the white (W) background). Overall, the results show that almost every model detects the construction details better on the neutral background. However, it is important to mention that the models were trained with construction details placed on a white background. During the primary research, the best parameters, which allow the highest ratio to be achieved, were found; they are presented in Table 3. These parameters were used in the main research. The results of all of the experiments are presented in Table 4.
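For clarity, the overall correct detection ratio reported in Tables 4 and 6 is simply the number of correctly detected construction details summed over the three backgrounds and divided by the 3300 test images, as the following minimal example for the best primary-research model shows.

```python
# Overall correct detection ratio: correct detections on the mixed (M),
# neutral (N), and white (W) backgrounds divided by the 3300 test images.
correct = {"M": 322, "N": 509, "W": 485}
ratio = sum(correct.values()) / 3300
print(round(ratio, 2))  # 0.4, i.e. 40% correct detection
```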

3.2. Results of the Main Research

During the training process of YOLOv5, it is possible to choose various hyperparameters that could influence the results of object detection. The analysis of related works has shown that many researchers have focused on the following three main parameters: learning rate, momentum, and weight decay. Various values of these parameters are used in scientific papers. Based on other research, our main experiments investigate several combinations of these hyperparameters. In the main research, the five versions of YOLOv5 were trained using the parameters obtained from the primary research results. The hyperparameters used in the main research are presented in Table 5. A total of 135 models were trained and evaluated.
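A sketch of what the custom hyperparameter file could look like for the best configuration is given below; the key names follow the Ultralytics YOLOv5 hyp.*.yaml format, the file name is a hypothetical choice, the augmentation values are taken from Table 3, and the remaining values from Table 5.

```python
# Sketch of a custom hyperparameter file for the best configuration found in
# this study (lr0 = 0.001, momentum = 0.95, weight_decay = 0.0007); the file
# name hyp.custom.yaml is hypothetical.
hyp_yaml = """\
lr0: 0.001           # initial learning rate
lrf: 0.01            # final learning rate factor (default)
momentum: 0.95
weight_decay: 0.0007
warmup_epochs: 3.0
warmup_momentum: 0.8
warmup_bias_lr: 0.05
box: 0.05
cls: 0.5
cls_pw: 1.0
obj: 1.0
obj_pw: 1.0
iou_t: 0.2
anchor_t: 4.0
fl_gamma: 0.0
hsv_h: 0.09
hsv_s: 0.7
hsv_v: 0.4
degrees: 0.125
translate: 0.0
scale: 0.5
shear: 0.9
perspective: 0.0
flipud: 0.5
fliplr: 0.5
mosaic: 0.0
mixup: 0.0
copy_paste: 0.0
"""

with open("hyp.custom.yaml", "w") as f:
    f.write(hyp_yaml)
```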
The results of the main research show that, when using various combinations of hyperparameters, the highest obtained correct detection ratio of construction details is equal to 0.5012 (50%). In this case, the YOLOv5l model was used. The model was trained with a learning rate equal to 0.001, a momentum of 0.95, and a weight decay of 0.0007. In some cases, the correct detection ratio is equal to 0. The lowest correct detection ratios were obtained using YOLOv5n. The highest correct detection ratio obtained for each YOLOv5 model (n, s, m, l, x) is presented in Figure 5.
As one can see in Figure 5, the smallest correct detection ratio was obtained using YOLOv5n (22%). The difference between the results of YOLOv5s (34%) and YOLOv5m is equal to 6%, with better results obtained using YOLOv5m (40%). The results obtained using YOLOv5x (48%) are slightly lower compared with the results of YOLOv5l (50%). In addition, in Figure 6, the curves of precision, recall, mAP@0.5, and mAP@0.5:0.95 are presented.
One can see (Figure 6) that, until approximately 200 epochs, the model is still improving, and after 200 epochs there is no further progress. The recall and precision metrics of the model are close to 1. In the case of the mAP@0.5 metric, the model is close to a value of 1 after 100 epochs. The mAP@0.5:0.95 metric shows that the accuracy continues to increase throughout all 300 training epochs.
The results of all of the main research experiments are presented in Table 6. In Figure 7, the confusion matrices of the best model on the three different backgrounds are presented. As one can see, the smallest number of correct detections was on the mixed background (497). Using this background, two details were not detected at all and were recognized as different construction details. Furthermore, in this case, many details were not assigned to any class at all, which shows that details merge with the mixed background. On the neutral background, the number of correct detections is larger (562), though the same construction detail as in the case of the mixed background was nevertheless recognized incorrectly. All of the details were correctly recognized on the white background at least once. On the white background, the number of correct detections is the largest (595); in this case, 54% of the construction details were recognized correctly.

4. Discussion

This experimental investigation has shown the importance of training parameter and hyperparameter selection in the model training process. In this investigation, a total of 185 models were trained. The main problem with object detection tasks is that there are many different options for how to train the models, so it is hard to consider all of them, especially when training each model takes a long time. In this research, many different parameter combinations were evaluated. The results may be useful for tasks related to the detection of objects with similar features. The analysed dataset has 22 categories. Some of the construction details from different categories can look identical due to the angle at which they are photographed. This means that the results are not as high as in simpler detection tasks, but they are still promising and valuable for future research. Due to the complexity of the task, the detection of construction details may be useful when evaluating the efficiency and performance of a model.
The model obtained in the main research could be used to develop a recommendation system for building constructions: it would detect the details in an image and suggest a possible construction. The system or application could be implemented in a mobile environment. An additional experiment was performed in which 150 new images were fed to the best obtained models. As mentioned, several dozen construction details were placed and photographed on the three different backgrounds that were used in the primary and main research. A sample of the construction detail detection results is presented in Figure 8.
The five best YOLOv5 models obtained in the main research (Figure 5) were evaluated using the 150 images. The full results of the correct detection ratio of each construction detail are presented in Table 7. As one can see, the worst detection results are obtained when the mixed background is used. Only in the case of YOLOv5m was the correct detection ratio larger than 0 and almost all details recognized at least once. The highest correct detection ratios are obtained when the white background is used. Overall, the results show that some construction details, like 1x1_h2_round, 2x3_h1, and 2x3_h2, were not detected at all or were detected by only a few models. The details 2x3_h1 and 2x3_h2 differ in terms of their height but can look identical from other angles. Some of the construction details were recognized correctly all the time by the YOLOv5m model, for example, 2x8_h1 and 4x4_h1 when the neutral background is used. The summarized results of the additional research show that the YOLOv5m model recognizes the highest number of construction details compared with the other four models.
This research has some limitations: the results have not been compared with other object detection models, as the experimental investigation is based only on the YOLOv5 algorithm. Additionally, it is not possible to ensure that the same results could be obtained using another dataset that has similar features. The results can still depend on many different aspects, for example, the angle at which the image is taken, the noise appearing around the construction details, etc. However, the results may still be useful to other researchers. The experimental investigation has shown the importance of the freeze option in the training process and of the use of nondefault parameters to obtain higher object detection results.

5. Conclusions

In this paper, the influence of the training parameters and hyperparameters of YOLOv5 on the detection of construction details was analysed. Construction details were chosen due to the complexity of the task when data with similar features are analysed. In some cases, the construction details appear to be identical, depending on the angle of the shot, which in turn depends on the point of view of the camera. During the research, five models of YOLOv5 were analysed. A total of 185 models were trained and evaluated. Model efficiencies were tested using a total of 3300 images of details placed on 3 backgrounds of different complexity. The influence of five training parameters (image size, batch size, epoch number, layer freeze option, and data augmentation) and three hyperparameters (learning rate, momentum, and weight decay) was analysed. All of the parameters mentioned were used in various combinations.
The results of the experimental investigation show that the best parameters for the detection of construction details are as follows: coloured images; image size—320; batch size—32; epoch number—300; layer freeze option—10; data augmentation—on; learning rate—0.001; momentum—0.95; and weight decay—0.0007. In this case, the percentage of correct detection is equal to 50% across all backgrounds. The correct detection result of the model on the white background alone is equal to 54%. The experimental investigation has shown that the lowest detection results are obtained when the mixed background is used. The main reason for this is that some details merge with the background and, therefore, the models cannot detect the construction details. Additional research using several dozen construction details in the same image (on the three different backgrounds) has shown that the YOLOv5m model correctly recognizes the highest number of construction details.
The number of correct detections could be increased if the YOLOv5 model were used only to localize the construction details in the image. A second step would then use an additional binary classification to determine the correct construction detail. This could be implemented in the future to find the best way to detect similar construction details at different angles.
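A rough outline of this two-step idea (purely illustrative, not an implementation from this study) could look as follows; loading a custom checkpoint through torch.hub is part of the public Ultralytics YOLOv5 API, while "best.pt" and classify_crop() are hypothetical placeholders.

```python
import torch
from PIL import Image

# Illustrative outline of the proposed two-step pipeline: a YOLOv5 model is
# used purely to localize construction details, and each cropped detection is
# then passed to a separate classifier that resolves visually similar
# categories. "best.pt" and the classify_crop() callable are hypothetical.
detector = torch.hub.load("ultralytics/yolov5", "custom", path="best.pt")

def detect_and_classify(image_path, classify_crop):
    image = Image.open(image_path)
    results = detector(image_path)
    predictions = []
    for x1, y1, x2, y2, conf, cls in results.xyxy[0].tolist():
        crop = image.crop((x1, y1, x2, y2))      # step 1: localization by YOLOv5
        predictions.append(classify_crop(crop))  # step 2: fine-grained classification
    return predictions
```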

Author Contributions

Conceptualization, T.K. and P.S.; methodology, P.S.; validation, T.K. and E.P.; formal analysis, T.K. and P.S.; data curation, T.K. and P.S.; writing—original draft preparation, T.K., E.P., P.S. and B.P.; writing—review and editing, P.S. and B.P.; visualization, T.K., E.P. and P.S.; supervision, P.S. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data presented in this study are available on request from the corresponding author due to the large capacity of the data (7.21 GB).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Jucevičius, J.; Treigys, P.; Bernatavičienė, J.; Briedienė, R.; Naruševičiūtė, I.; Trakymas, M. Investigation of MRI prostate localization using different MRI modality scans. In Proceedings of the 2020 IEEE 8th Workshop on Advances in Information, Electronic and Electrical Engineering (AIEEE), Vilnius, Lithuania, 22–24 April 2021; IEEE: Piscataway, NJ, USA, 2021; pp. 1–5. [Google Scholar]
  2. Wang, X.; Chen, H.; Gan, C.; Lin, H.; Dou, Q.; Tsougenis, E.; Heng, P.A. Weakly supervised deep learning for whole slide lung cancer image analysis. IEEE Trans. Cybern. 2019, 50, 3950–3962. [Google Scholar] [CrossRef]
  3. Shabbir, A.; Rasheed, A.; Shehraz, H.; Saleem, A.; Zafar, B.; Sajid, M.; Shehryar, T. Detection of glaucoma using retinal fundus images: A comprehensive review. Math. Biosci. Eng. 2021, 18, 2033–2076. [Google Scholar] [CrossRef] [PubMed]
  4. Elangovan, P.; Nath, M.K. Glaucoma assessment from color fundus images using convolutional neural network. Int. J. Imaging Syst. Technol. 2021, 31, 955–971. [Google Scholar] [CrossRef]
  5. Amyar, A.; Modzelewski, R.; Li, H.; Ruan, S. Multi-task deep learning based CT imaging analysis for COVID-19 pneumonia: Classification and segmentation. Comput. Biol. Med. 2020, 126, 104037. [Google Scholar] [CrossRef]
  6. Toğaçar, M.; Ergen, B.; Cömert, Z.; Özyurt, F. A deep feature learning model for pneumonia detection applying a combination of mRMR feature selection and machine learning models. IRBM 2020, 41, 212–222. [Google Scholar] [CrossRef]
  7. Stefanovič, P.; Ramanauskaitė, S. Travel Direction Recommendation Model Based on Photos of User Social Network Profile. IEEE Access 2023, 11, 28252–28262. [Google Scholar] [CrossRef]
  8. Li, C.; Li, L.; Jiang, H.; Weng, K.; Geng, Y.; Li, L.; Wei, X. YOLOv6: A single-stage object detection framework for industrial applications. arXiv 2022, arXiv:2209.02976. [Google Scholar]
  9. Zhou, X.; Xu, X.; Liang, W.; Zeng, Z.; Shimizu, S.; Yang, L.T.; Jin, Q. Intelligent small object detection for digital twin in smart manufacturing with industrial cyber-physical systems. IEEE Trans. Ind. Inform. 2022, 18, 1377–1386. [Google Scholar] [CrossRef]
  10. Li, C.; Wang, R.; Li, J.; Fei, L. Face detection based on YOLOv3. In Recent Trends in Intelligent Computing, Communication and Devices: Proceedings of ICCD 2018; Springer: Singapore, 2020; pp. 277–284. [Google Scholar]
  11. Chen, W.; Huang, H.; Peng, S.; Zhou, C.; Zhang, C. YOLO-face: A real-time face detector. Vis. Comput. 2021, 37, 805–813. [Google Scholar] [CrossRef]
  12. Ye, X.; Liu, Y.; Zhang, D.; Hu, X.; He, Z.; Chen, Y. Rapid and Accurate Crayfish Sorting by Size and Maturity Based on Improved YOLOv5. Appl. Sci. 2023, 13, 8619. [Google Scholar] [CrossRef]
  13. Shi, H.; Xiao, W.; Zhu, S.; Li, L.; Zhang, J. CA-YOLOv5: Detection model for healthy and diseased silkworms in mixed conditions based on improved YOLOv5. Int. J. Agric. Biol. Eng. 2024, 16, 236–245. [Google Scholar] [CrossRef]
  14. Hui, Y.; You, S.; Hu, X.; Yang, P.; Zhao, J. SEB-YOLO: An Improved YOLOv5 Model for Remote Sensing Small Target Detection. Sensors 2024, 24, 2193. [Google Scholar] [CrossRef] [PubMed]
  15. Zhang, J.; Xie, J.; Zhang, F.; Gao, J.; Yang, C.; Song, C.; Rao, W.; Zhang, Y. Greenhouse tomato detection and pose classification algorithm based on improved YOLOv5. Comput. Electron. Agric. 2024, 216, 108519. [Google Scholar] [CrossRef]
  16. Feng, S.; Qian, H.; Wang, H.; Wang, W. Real-time object detection method based on YOLOv5 and efficient mobile network. J. Real-Time Image Process. 2024, 21, 56. [Google Scholar] [CrossRef]
  17. Reddy, B.K.; Bano, S.; Reddy, G.G.; Kommineni, R.; Reddy, P.Y. Convolutional network based animal recognition using YOLO and Darknet. In Proceedings of the 2021 6th International Conference on Inventive Computation Technologies (ICICT), Coimbatore, India, 20–22 January 2021; IEEE: Piscataway, NJ, USA, 2021; pp. 1198–1203. [Google Scholar]
  18. Dewi, C.; Chen, R.C.; Jiang, X.; Yu, H. Deep convolutional neural network for enhancing traffic sign recognition developed on Yolo V4. Multimed. Tools Appl. 2022, 81, 37821–37845. [Google Scholar] [CrossRef]
  19. Hameed, K.; Chai, D.; Rassau, A. A sample weight and adaboost cnn-based coarse to fine classification of fruit and vegetables at a supermarket self-checkout. Appl. Sci. 2020, 10, 8667. [Google Scholar] [CrossRef]
  20. Construction Details Dataset. Available online: https://app.box.com/s/j420ld0wo89hvh6np1rc3z9t1e65yg2k (accessed on 13 January 2024).
  21. Kwon, H.J.; Kim, H.G.; Lee, S.H. Pill detection model for medicine inspection based on deep learning. Chemosensors 2021, 10, 4. [Google Scholar] [CrossRef]
  22. He, K.; Gkioxari, G.; Dollár, P.; Girshick, R. Mask R-CNN. IEEE Trans. Pattern Anal. Mach. Intell. 2020, 42, 386–397. [Google Scholar] [CrossRef]
  23. Redmon, J.; Farhadi, A. Yolov3: An incremental improvement. arXiv 2018, arXiv:1804.02767. [Google Scholar]
  24. Tan, L.; Huangfu, T.; Wu, L.; Chen, W. Comparison of RetinaNet, SSD, and YOLO v3 for real-time pill identification. BMC Med. Inform. Decis. Mak. 2021, 21, 324. [Google Scholar] [CrossRef]
  25. Ou, Y.Y.; Tsai, A.C.; Wang, J.F.; Lin, J. Automatic drug pills detection based on convolution neural network. In Proceedings of the 2018 International Conference on Orange Technologies (ICOT), Nusa Dua, Indonesia, 23–26 October 2018; IEEE: Piscataway, NJ, USA, 2018; pp. 1–4. [Google Scholar]
  26. Chollet, F. Xception: Deep learning with depthwise separable convolutions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; IEEE: Piscataway, NJ, USA, 2017; pp. 1251–1258. [Google Scholar]
  27. Ou, Y.Y.; Tsai, A.C.; Zhou, X.P.; Wang, J.F. Automatic drug pills detection based on enhanced feature pyramid network and convolution neural networks. IET Comput. Vis. 2020, 14, 9–17. [Google Scholar] [CrossRef]
  28. Saeed, F.; Ahmed, M.J.; Gul, M.J.; Hong, K.J.; Paul, A.; Kavitha, M.S. A robust approach for industrial small-object detection using an improved faster regional convolutional neural network. Sci. Rep. 2021, 11, 23390. [Google Scholar] [CrossRef] [PubMed]
  29. Yildiz, E.; Wörgötter, F. DCNN-based screw detection for automated disassembly processes. In Proceedings of the 2019 15th International Conference on Signal-Image Technology & Internet-Based Systems (SITIS), Sorrento, Italy, 26–29 November 2019; IEEE: Piscataway, NJ, USA, 2019; pp. 187–192. [Google Scholar]
  30. Mangold, S.; Steiner, C.; Friedmann, M.; Fleischer, J. Vision-based screw head detection for automated disassembly for remanufacturing. Procedia CIRP 2022, 105, 1–6. [Google Scholar] [CrossRef]
  31. Xiao, Y.; Tian, Z.; Yu, J.; Zhang, Y.; Liu, S.; Du, S.; Lan, X. A review of object detection based on deep learning. Multimed. Tools Appl. 2020, 79, 23729–23791. [Google Scholar] [CrossRef]
  32. Zou, X. A review of object detection techniques. In Proceedings of the 2019 International Conference on Smart Grid and Electrical Automation (ICSGEA), Xiangtan, China, 10–11 August 2019; IEEE: Piscataway, NJ, USA, 2019; pp. 251–254. [Google Scholar]
  33. Li, K.; Cao, L. A review of object detection techniques. In Proceedings of the 2020 5th International Conference on Electromechanical Control Technology and Transportation (ICECTT), Nanchang, China, 15–17 May 2020; IEEE: Piscataway, NJ, USA, 2020; pp. 385–390. [Google Scholar]
  34. Terven, J.; Cordova-Esparza, D. A comprehensive review of YOLO: From YOLOv1 to YOLOv8 and beyond. arXiv 2023, arXiv:2304.00501. [Google Scholar]
  35. Zhao, Y.; Shi, Y.; Wang, Z. The improved YOLOV5 algorithm and its application in small target detection. In Proceedings of the International Conference on Intelligent Robotics and Applications, Kuala Lumpur, Malaysia, 5–7 November 2020; Springer: Berlin/Heidelberg, Germany, 2022; pp. 679–688. [Google Scholar]
  36. Dlužnevskij, D.; Stefanovič, P.; Ramanauskaite, S. Investigation of YOLOv5 Efficiency in iPhone Supported Systems. Balt. J. Mod. Comput. 2021, 9, 333–344. [Google Scholar] [CrossRef]
  37. Kvietkauskas, T.; Stefanovič, P. Influence of Training Parameters on Real-Time Similar Object Detection Using YOLOv5s. Appl. Sci. 2023, 13, 3761. [Google Scholar] [CrossRef]
  38. Isa, I.S.; Rosli, M.S.A.; Yusof, U.K.; Maruzuki, M.I.F.; Sulaiman, S.N. Optimizing the hyperparameter tuning of YOLOv5 for underwater detection. IEEE Access 2022, 10, 52818–52831. [Google Scholar] [CrossRef]
  39. Mantau, A.J.; Widayat, I.W.; Adhitya, Y.; Prakosa, S.W.; Leu, J.S.; Köppen, M. A GA-Based Learning Strategy Applied to YOLOv5 for Human Object Detection in UAV Surveillance System. In Proceedings of the 2022 IEEE 17th International Conference on Control & Automation (ICCA), Naples, Italy, 27–30 June 2022; IEEE: Piscataway, NJ, USA, 2022; pp. 9–14. [Google Scholar]
  40. Jocher, G.; Chaurasia, A.; Stoken, A.; Borovec, J.; NanoCode012; Kwon, Y.; Michael, K.; Xie, T.; Fang, J.; imyhxy; et al. ultralytics/yolov5: V7.0—YOLOv5 SOTA Realtime Instance Segmentation; Zenodo: Geneva, Switzerland, 2021. [Google Scholar] [CrossRef]
  41. Huang, Q.; Zhou, Y.; Yang, T.; Yang, K.; Cao, L.; Xia, Y. A Lightweight Transfer Learning Model with Pruned and Distilled YOLOv5s to Identify Arc Magnet Surface Defects. Appl. Sci. 2023, 13, 2078. [Google Scholar] [CrossRef]
  42. Ultralytics. Hyperparameter Tuning. Ultralytics YOLOv8 Docs. 3 March 2024. Available online: https://docs.ultralytics.com/guides/hyperparameter-tuning (accessed on 13 January 2024).
  43. Ultralytics. “Train”. Ultralytics YOLOv8 Docs. 30 March 2024. Available online: https://docs.ultralytics.com/modes/train/#train-settings (accessed on 24 January 2024).
  44. Ruman. YOLO Data Augmentation Explained–Ruman–Medium. Medium. 4 June 2023. Available online: https://rumn.medium.com/yolo-data-augmentation-explained-turbocharge-your-object-detection-model-94c33278303a (accessed on 24 January 2024).
Figure 1. The workflow of the experimental investigation.
Figure 2. A sample of the dataset used to train the YOLOv5 models.
Figure 3. A sample of the dataset used to evaluate the YOLOv5 models.
Figure 4. A sample of the dataset used to evaluate the best YOLOv5 model, with several dozen details in one image.
Figure 5. The highest correct detection ratio of each YOLOv5 model.
Figure 6. The evaluation of the YOLOv5l model.
Figure 7. The confusion matrices of the best obtained model.
Figure 8. A sample of construction detail detection in real-world simulation.
Table 1. The specification of the pre-trained YOLOv5 models [40].
Model | Image Size (pixels) | mAPval 50–95 | mAPval 50 | Speed CPU b1 (ms) | Speed V100 b1 (ms) | Speed V100 b32 (ms) | Params (M) | FLOPs @640 (B)
YOLOv5n | 640 | 28.0 | 45.7 | 45 | 6.3 | 0.6 | 1.9 | 4.5
YOLOv5s | 640 | 37.4 | 56.8 | 98 | 6.4 | 0.9 | 7.2 | 16.5
YOLOv5m | 640 | 45.4 | 64.1 | 224 | 8.2 | 1.7 | 21.2 | 49.0
YOLOv5l | 640 | 49.0 | 67.3 | 430 | 10.1 | 2.7 | 46.5 | 109.1
YOLOv5x | 640 | 50.7 | 68.9 | 766 | 12.1 | 4.8 | 86.7 | 205.7
Table 2. Parameters which were investigated in primary research.
Name of the Parameter | Value of the Parameters | Comment
Epoch number | 300, 600 | The results of our previous research [37] have shown that these parameter options (epoch number, image size, and batch size) allow for the highest object detection results.
Image size (pixels) | 320, 640 |
Batch size | 16, 32 |
Layers freeze option | 10 | The layer freeze option [41] is a feature by which backbone and head layers can be excluded from training. Primary research has shown that, after the first 10 backbone layers were frozen, training times were reduced by approximately 2 times and construction detail recognition accuracy improved by approximately 1.5 times.
Augmentation | 13 options | The different options for data augmentation have been experimentally chosen and analysed [42,43,44]; the options are listed below:
hsv_h—HSV-Hue augmentation of the image.
hsv_s—HSV-Saturation augmentation of the image.
hsv_v—HSV-Value augmentation of the image.
degrees—rotation (+/− degrees) of the image.
translate—shifting or moving the objects within the image.
scale—resizing the input images to different scales.
shear—geometric deformations by tilting or skewing the images along the x or y axes.
perspective—simulates perspective changes.
flipud—flips the image vertically, the top becomes the bottom, and vice versa.
fliplr—flips the image horizontally, the left side becomes the right side, and vice versa.
mosaic—combines several images to create a single training sample with a mosaic-like appearance.
mixup—combines pairs of images and their corresponding object labels to create new training examples.
copy_paste—involves randomly selecting a portion of one image and pasting it onto another image while maintaining the corresponding object labels.
Table 3. The best parameters obtained in the primary research.
Parameter | Value of the Parameter
Image size | 320
Batch size | 32
Epoch number | 300
Layers freeze option | 10
Augmentation | hsv_h—0.09; hsv_s—0.7; hsv_v—0.4; degrees—0.125; translate—0; scale—0.5; shear—0.9; perspective—0; flipud—0.5; fliplr—0.5; mosaic—0; mixup—0; copy_paste—0.
Table 4. The results of primary research (background: white (W), neutral (N), and mixed (M)).
The Name of the Model | Augmentation Options | M | N | W | Overall Ratio of the Correct Detection
(Rows with an empty augmentation cell use the same augmentation options as the nearest filled row above.)
Yolov5s_320_16_300_NoAugm | Data augmentations have not been used. | 122 | 133 | 203 | 0.14
Yolov5s_320_32_300_NoAugm | | 118 | 141 | 196 | 0.14
Yolov5s_320_16_600_NoAugm | | 100 | 108 | 199 | 0.12
Yolov5s_640_16_300_NoAugm | | 39 | 12 | 158 | 0.06
Yolov5m_320_16_300_NoAugm | | 126 | 186 | 255 | 0.17
Yolov5m_320_32_300_NoAugm | | 156 | 187 | 226 | 0.17
Yolov5m_320_16_600_NoAugm | | 153 | 139 | 241 | 0.16
Yolov5m_640_16_300_NoAugm | | 28 | 149 | 207 | 0.12
Yolov5s_320_16_300_DefAugm | hsv_h—0.015; hsv_s—0.7; hsv_v—0.4; degrees—0; translate—0.1; scale—0.5; shear—0; perspective—0; flipud—0; fliplr—0.5; mosaic—1; mixup—0; copy_paste—0. | 136 | 140 | 307 | 0.18
Yolov5s_320_32_300_DefAugm | | 173 | 215 | 287 | 0.20
Yolov5s_320_16_600_DefAugm | | 115 | 143 | 359 | 0.19
Yolov5s_640_16_300_DefAugm | | 21 | 70 | 305 | 0.12
Yolov5m_320_16_300_DefAugm | | 82 | 257 | 327 | 0.20
Yolov5m_320_32_300_DefAugm | | 111 | 253 | 305 | 0.20
Yolov5m_320_16_600_DefAugm | | 128 | 166 | 326 | 0.19
Yolov5m_640_16_300_DefAugm | | 51 | 171 | 321 | 0.16
Yolov5m_320_32_600_DefAugm | | 116 | 162 | 353 | 0.19
Yolov5m_640_32_300_DefAugm | | 81 | 145 | 379 | 0.18
Yolov5m_640_32_600_DefAugm | | 94 | 119 | 342 | 0.17
Yolov5s_320_16_300_Frz_CusAugm | hsv_h—0.5; hsv_s—0.7; hsv_v—0.4; degrees—0.125; translate—0; scale—0.5; shear—0; perspective—0; flipud—0.5; fliplr—0.5; mosaic—1; mixup—0; copy_paste—0. | 158 | 244 | 309 | 0.22
Yolov5s_320_32_300_Frz_CusAugm | | 211 | 250 | 329 | 0.24
Yolov5s_320_32_600_Frz_CusAugm | | 163 | 215 | 325 | 0.21
Yolov5s_320_16_600_Frz_CusAugm | | 148 | 231 | 313 | 0.21
Yolov5s_640_32_600_Frz_CusAugm | | 77 | 142 | 278 | 0.15
Yolov5s_640_32_300_Frz_CusAugm | | 80 | 168 | 306 | 0.17
Yolov5s_640_16_600_Frz_CusAugm | | 76 | 161 | 280 | 0.16
Yolov5m_320_16_300_Frz_CusAugm | | 243 | 355 | 347 | 0.29
Yolov5m_320_32_300_Frz_CusAugm | | 274 | 341 | 361 | 0.30
Yolov5m_320_16_600_Frz_CusAugm | | 259 | 372 | 362 | 0.30
Yolov5m_640_32_600_Frz_CusAugm | | 93 | 242 | 321 | 0.20
Yolov5m_320_32_600_Frz_CusAugm | | 257 | 338 | 374 | 0.29
Yolov5m_640_32_300_Frz_CusAugm | | 69 | 260 | 347 | 0.20
Yolov5s_320_16_300_Frz_CusAugm | hsv_h—0.09; hsv_s—0.7; hsv_v—0.4; degrees—0.125; translate—0; scale—0.5; shear—0; perspective—0; flipud—0.5; fliplr—0.5; mosaic—1; mixup—0; copy_paste—0. | 160 | 215 | 316 | 0.21
Yolov5s_320_32_300_Frz_CusAugm | | 152 | 255 | 312 | 0.22
Yolov5m_320_16_300_Frz_CusAugm | | 271 | 355 | 352 | 0.30
Yolov5m_320_32_300_Frz_CusAugm | | 225 | 332 | 330 | 0.27
Yolov5m_320_16_600_Frz_CusAugm | hsv_h—0.015; hsv_s—0.7; hsv_v—0.4; degrees—0.125; translate—0; scale—0.5; shear—0; perspective—0; flipud—0.5; fliplr—0.5; mosaic—1; mixup—0; copy_paste—0. | 267 | 362 | 362 | 0.30
Yolov5m_320_32_600_Frz_CusAugm | | 216 | 324 | 323 | 0.26
Yolov5m_320_16_300_Frz_CusAugm | | 269 | 347 | 370 | 0.30
Yolov5m_320_32_300_Frz_CusAugm | | 243 | 377 | 331 | 0.29
Yolov5m_320_32_300_Frz_CusAugm | hsv_h—0.09; hsv_s—0.7; hsv_v—0.4; degrees—0.125; translate—0; scale—0.5; shear—0; perspective—0; flipud—0.5; fliplr—0.5; mosaic—0.5; mixup—0; copy_paste—0. | 264 | 411 | 441 | 0.34
Yolov5m_320_32_300_Frz_CusAugm | hsv_h—0.09; hsv_s—0.7; hsv_v—0.4; degrees—0.125; translate—0; scale—0.5; shear—0; perspective—0; flipud—0.5; fliplr—0.5; mosaic—1; mixup—0.5; copy_paste—0. | 107 | 355 | 378 | 0.25
Yolov5m_320_32_300_Frz_CusAugm | hsv_h—0.09; hsv_s—0.7; hsv_v—0.4; degrees—0.125; translate—0; scale—0.5; shear—0; perspective—0; flipud—0.5; fliplr—0.5; mosaic—1; mixup—0; copy_paste—0. | 138 | 253 | 329 | 0.22
Yolov5m_320_32_300_Frz_CusAugm | hsv_h—0.09; hsv_s—0.7; hsv_v—0.4; degrees—0.125; translate—0; scale—0.5; shear—0; perspective—0; flipud—0.5; fliplr—0.5; mosaic—0; mixup—0; copy_paste—0. | 281 | 497 | 439 | 0.37
Yolov5m_320_32_300_Frz_CusAugm | hsv_h—0.09; hsv_s—0.7; hsv_v—0.4; degrees—0.125; translate—0; scale—0.5; shear—0; perspective—0; flipud—0.5; fliplr—0.5; mosaic—0.2; mixup—0; copy_paste—0. | 285 | 453 | 411 | 0.35
Yolov5m_320_32_300_Frz_CusAugm | hsv_h—0.09; hsv_s—0.7; hsv_v—0.4; degrees—0.125; translate—0; scale—0.5; shear—0; perspective—0; flipud—0.5; fliplr—0.5; mosaic—0.4; mixup—0; copy_paste—0. | 268 | 400 | 360 | 0.31
Yolov5m_320_32_300_Frz_CusAugm | hsv_h—0.09; hsv_s—0.7; hsv_v—0.4; degrees—0.125; translate—0; scale—0.5; shear—0.5; perspective—0; flipud—0.5; fliplr—0.5; mosaic—0; mixup—0; copy_paste—0. | 279 | 503 | 471 | 0.38
Yolov5m_320_32_300_Frz_CusAugm | hsv_h—0.09; hsv_s—0.7; hsv_v—0.4; degrees—0.125; translate—0; scale—0.5; shear—0.7; perspective—0; flipud—0.5; fliplr—0.5; mosaic—0; mixup—0; copy_paste—0. | 230 | 472 | 456 | 0.35
Yolov5m_320_32_300_Frz_CusAugm | hsv_h—0.09; hsv_s—0.7; hsv_v—0.4; degrees—0.125; translate—0; scale—0.5; shear—0.9; perspective—0; flipud—0.5; fliplr—0.5; mosaic—0; mixup—0; copy_paste—0. | 322 | 509 | 485 | 0.40
Yolov5m_320_32_300_Frz_CusAugm | hsv_h—0.09; hsv_s—0.7; hsv_v—0.4; degrees—0.125; translate—0; scale—0.5; shear—1; perspective—0; flipud—0.5; fliplr—0.5; mosaic—0; mixup—0; copy_paste—0. | 233 | 492 | 459 | 0.36
Table 5. Hyperparameters used in the main research.
Name of the Parameter | Value of the Parameters
Learning rate (lr0) | 0.01, 0.001, 0.0001
Momentum (m) | 0.9, 0.937, 0.95
Weight decay (wd) | 0.0001, 0.0005, 0.0007
Other options | The other values of the parameters have been left as default: lrf—0.01; warmup_epochs—3; warmup_momentum—0.8; warmup_bias_lr—0.05; box—0.05; cls—0.5; cls_pw—1; obj—1; obj_pw—1; iou_t—0.2; anchor_t—4; anchors—3; fl_gamma—0.
Table 6. The results of the main research (parameters: learning rate (lr0), momentum (m), weight decay (wd). Background: white (W), neutral (N), and mixed (M)).
(Each model cell gives the number of correct detections on the M, N, and W backgrounds, followed by the overall ratio of the correct detection in parentheses.)
lr0 | m | wd | YOLOv5n M/N/W (ratio) | YOLOv5s M/N/W (ratio) | YOLOv5m M/N/W (ratio) | YOLOv5l M/N/W (ratio) | YOLOv5x M/N/W (ratio)
0.01 | 0.937 | 0.0007 | 111/136/393 (0.1939) | 252/357/497 (0.3352) | 215/494/458 (0.3536) | 410/547/525 (0.4491) | 432/457/458 (0.4082)
0.01 | 0.937 | 0.0005 | 142/107/414 (0.2009) | 216/318/469 (0.3039) | 325/509/486 (0.4000) | 405/529/513 (0.4385) | 433/510/451 (0.4224)
0.01 | 0.937 | 0.0001 | 139/165/416 (0.2182) | 215/364/508 (0.3294) | 280/491/481 (0.3794) | 392/515/492 (0.4239) | 420/497/474 (0.4215)
0.01 | 0.95 | 0.0007 | 158/163/378 (0.2118) | 233/366/503 (0.3339) | 314/504/449 (0.3839) | 339/497/507 (0.4070) | 436/488/468 (0.4218)
0.01 | 0.95 | 0.0005 | 149/137/384 (0.2030) | 194/329/455 (0.2964) | 239/513/438 (0.3606) | 392/483/525 (0.4242) | 437/507/522 (0.4442)
0.01 | 0.95 | 0.0001 | 124/123/386 (0.1918) | 225/357/482 (0.3224) | 268/475/462 (0.3652) | 367/510/500 (0.4173) | 438/480/466 (0.4194)
0.01 | 0.9 | 0.0007 | 129/109/396 (0.1921) | 197/336/496 (0.3118) | 296/470/463 (0.3724) | 356/527/492 (0.4167) | 469/487/466 (0.4309)
0.01 | 0.9 | 0.0005 | 142/127/376 (0.1955) | 197/352/502 (0.3185) | 299/502/509 (0.3970) | 395/535/541 (0.4458) | 450/486/476 (0.4279)
0.01 | 0.9 | 0.0001 | 108/127/364 (0.1815) | 181/344/492 (0.3082) | 245/457/482 (0.3588) | 420/533/535 (0.4509) | 458/516/460 (0.4345)
0.001 | 0.937 | 0.0007 | 64/28/218 (0.0939) | 117/205/336 (0.1994) | 303/449/509 (0.3821) | 490/542/576 (0.4873) | 453/569/544 (0.4745)
0.001 | 0.937 | 0.0005 | 65/35/213 (0.0948) | 115/197/337 (0.1967) | 290/443/506 (0.3755) | 494/543/571 (0.4873) | 466/565/536 (0.4748)
0.001 | 0.937 | 0.0001 | 61/36/217 (0.0952) | 110/194/333 (0.1930) | 294/435/494 (0.3706) | 494/541/571 (0.4867) | 462/554/538 (0.4709)
0.001 | 0.95 | 0.0007 | 76/35/261 (0.1127) | 132/217/360 (0.2148) | 307/460/523 (0.3909) | 497/562/595 (0.5012) | 467/571/537 (0.4773)
0.001 | 0.95 | 0.0005 | 85/38/264 (0.1173) | 135/214/350 (0.2118) | 312/452/527 (0.3912) | 498/557/584 (0.4967) | 475/572/532 (0.4785)
0.001 | 0.95 | 0.0001 | 75/36/268 (0.1148) | 140/216/365 (0.2185) | 323/451/533 (0.3961) | 492/560/587 (0.4967) | 464/565/531 (0.4727)
0.001 | 0.9 | 0.0007 | 23/12/151 (0.0564) | 73/121/248 (0.1339) | 245/392/417 (0.3194) | 422/503/526 (0.4397) | 442/535/525 (0.4552)
0.001 | 0.9 | 0.0005 | 23/12/154 (0.0573) | 69/120/244 (0.1312) | 248/390/410 (0.3176) | 421/499/516 (0.4352) | 467/533/547 (0.4688)
0.001 | 0.9 | 0.0001 | 20/13/152 (0.0561) | 68/117/248 (0.1312) | 236/390/418 (0.3164) | 420/495/523 (0.4358) | 452/545/534 (0.4639)
0.0001 | 0.937 | 0.0007 | 0/0/0 (0.0000) | 0/4/16 (0.0061) | 8/32/51 (0.0276) | 47/33/35 (0.0348) | 39/100/82 (0.0670)
0.0001 | 0.937 | 0.0005 | 0/0/0 (0.0000) | 0/4/16 (0.0061) | 8/33/49 (0.0273) | 46/33/35 (0.0345) | 34/101/82 (0.0658)
0.0001 | 0.937 | 0.0001 | 0/0/0 (0.0000) | 1/5/19 (0.0076) | 8/32/50 (0.0273) | 47/34/39 (0.0364) | 37/103/83 (0.0676)
0.0001 | 0.95 | 0.0007 | 0/0/0 (0.0000) | 1/5/27 (0.0100) | 14/53/73 (0.0424) | 66/104/92 (0.0794) | 82/193/126 (0.1215)
0.0001 | 0.95 | 0.0005 | 0/0/0 (0.0000) | 1/5/28 (0.0103) | 12/55/73 (0.0424) | 67/105/93 (0.0803) | 82/188/127 (0.1203)
0.0001 | 0.95 | 0.0001 | 0/0/0 (0.0000) | 0/4/21 (0.0076) | 14/54/72 (0.0424) | 66/105/93 (0.0800) | 82/193/125 (0.1212)
0.0001 | 0.9 | 0.0007 | 0/0/0 (0.0000) | 0/2/4 (0.0018) | 1/6/20 (0.0082) | 14/3/1 (0.0055) | 10/34/26 (0.0212)
0.0001 | 0.9 | 0.0005 | 0/0/0 (0.0000) | 0/2/4 (0.0018) | 1/5/22 (0.0085) | 13/3/1 (0.0052) | 11/32/24 (0.0203)
0.0001 | 0.9 | 0.0001 | 0/0/0 (0.0000) | 1/3/4 (0.0024) | 1/6/22 (0.0088) | 13/3/1 (0.0052) | 11/32/25 (0.0206)
Table 7. Correct detection ratio of each model type on white (W), neutral (N), and mixed (M) backgrounds using 150 images.
Name of the Construction Detail | YOLOv5n M/N/W | YOLOv5s M/N/W | YOLOv5m M/N/W | YOLOv5l M/N/W | YOLOv5x M/N/W
2x2_h2 | 0.00/0.00/0.13 | 0.00/0.04/0.00 | 0.07/0.14/0.19 | 0.00/0.00/0.00 | 0.01/0.00/0.00
1x2_h2 | 0.00/0.00/0.01 | 0.00/0.01/0.05 | 0.01/0.01/0.04 | 0.00/0.02/0.00 | 0.00/0.02/0.00
2x3_h1 | 0.00/0.00/0.00 | 0.00/0.15/0.00 | 0.00/0.00/0.00 | 0.00/0.00/0.00 | 0.00/0.00/0.00
2x4_h1 | 0.02/0.02/0.14 | 0.00/0.02/0.09 | 0.05/0.16/0.23 | 0.00/0.02/0.18 | 0.00/0.09/0.07
2x4_h2 | 0.03/0.04/0.70 | 0.03/0.11/0.59 | 0.11/0.21/0.62 | 0.22/0.38/0.53 | 0.01/0.07/0.41
2x2_h2_trap | 0.00/0.00/0.18 | 0.04/0.05/0.44 | 0.08/0.33/0.38 | 0.01/0.08/0.37 | 0.03/0.11/0.55
2x3_h2 | 0.00/0.00/0.00 | 0.00/0.00/0.00 | 0.00/0.00/0.16 | 0.00/0.00/0.11 | 0.00/0.00/0.08
1x2_h2_trap | 0.00/0.00/0.21 | 0.00/0.00/0.11 | 0.00/0.16/0.21 | 0.00/0.16/0.00 | 0.11/0.05/0.05
2x2_h1 | 0.00/0.00/0.10 | 0.00/0.30/0.40 | 0.00/0.20/0.30 | 0.10/0.00/0.10 | 0.00/0.00/0.20
1x2_h3 | 0.00/0.00/0.05 | 0.00/0.00/0.05 | 0.00/0.00/0.00 | 0.00/0.00/0.05 | 0.00/0.00/0.09
1x4_h2 | 0.00/0.00/0.06 | 0.00/0.22/0.17 | 0.11/0.22/0.17 | 0.06/0.17/0.06 | 0.06/0.06/0.17
4x6_h1 | 0.00/0.00/0.00 | 0.00/0.00/0.00 | 0.50/0.50/0.50 | 0.00/0.00/0.00 | 0.00/0.00/0.00
1x1_h2 | 0.00/0.00/0.03 | 0.00/0.08/0.11 | 0.14/0.19/0.16 | 0.00/0.14/0.05 | 0.05/0.00/0.16
2x6_h2 | 0.00/0.00/0.15 | 0.00/0.00/0.12 | 0.15/0.09/0.18 | 0.00/0.09/0.09 | 0.03/0.12/0.09
2x8_h1 | 0.00/0.13/0.50 | 0.13/0.25/0.25 | 0.38/1.00/0.50 | 0.38/0.63/0.00 | 0.25/0.75/0.38
1x6_h2 | 0.00/0.00/0.00 | 0.00/0.00/0.00 | 0.00/0.00/0.00 | 0.00/0.00/0.33 | 0.00/0.00/0.67
2x6_h1 | 0.00/0.00/0.17 | 0.00/0.17/0.17 | 0.00/0.17/0.33 | 0.00/0.00/0.00 | 0.00/0.17/0.17
1x1_h2_round | 0.00/0.00/0.00 | 0.00/0.00/0.00 | 0.00/0.00/0.00 | 0.00/0.00/0.00 | 0.00/0.00/0.00
1x2_h4 | 0.00/0.00/0.50 | 0.00/0.67/0.67 | 0.33/0.83/0.83 | 0.00/0.33/0.17 | 0.00/0.00/0.17
4x8_h1 | 0.00/0.00/0.00 | 0.00/0.00/0.50 | 0.00/0.50/0.50 | 0.50/0.00/0.50 | 0.00/0.00/0.00
1x1_h2_trap | 0.00/0.00/0.00 | 0.00/0.00/0.00 | 0.17/0.00/0.67 | 0.17/0.00/0.00 | 0.00/0.00/0.17
4x4_h1 | 0.00/0.00/0.00 | 0.00/0.00/0.00 | 0.00/1.00/0.00 | 0.50/0.00/0.50 | 0.00/0.00/0.50
