Article

An Improved YOLOv7 Model for Surface Damage Detection on Wind Turbine Blades Based on Low-Quality UAV Images

by Yongkang Liao 1, Mingyang Lv 1,*, Mingyong Huang 1, Mingwei Qu 1, Kehan Zou 1, Lei Chen 1 and Liang Feng 2

1 School of Information and Electrical Engineering, Hunan University of Science and Technology, Xiangtan 411201, China
2 Hunan Shiyou Electric Co., Ltd., Xiangtan 411201, China
* Author to whom correspondence should be addressed.
Drones 2024, 8(9), 436; https://doi.org/10.3390/drones8090436
Submission received: 9 July 2024 / Revised: 14 August 2024 / Accepted: 21 August 2024 / Published: 27 August 2024
(This article belongs to the Special Issue Intelligent Image Processing and Sensing for Drones, 2nd Edition)

Abstract: The efficient damage detection of the wind turbine blade (WTB), the core component of a wind turbine, is very important to wind power generation. In this paper, an improved YOLOv7 model is designed to enhance the performance of surface damage detection on WTBs based on low-quality unmanned aerial vehicle (UAV) images. (1) An efficient channel attention (ECA) module is embedded, which makes the network more sensitive to damage and reduces the false and missed detections caused by low-quality images. (2) A DownSample module is introduced to retain key feature information and to improve the detection speed and accuracy, which are otherwise restricted by low-quality images containing large amounts of redundant information. (3) The Multiple attributes Intersection over Union (MIoU) loss is applied to correct inaccurate detection locations and sizes of damage regions. (4) The dynamic group convolution shuffle transformer (DGST) is developed to improve the ability to comprehensively capture contours, textures and potential damage information. Compared with YOLOv7, YOLOv8l, YOLOv9e and YOLOv10x, the experimental results show that the improved YOLOv7 has the best overall detection performance when detection accuracy, detection speed and robustness are considered together.

1. Introduction

With the development of the economy and society, the demand for energy is increasing, and wind power is a renewable energy source widely used around the world. According to the Global Wind Report [1], 77.6 GW of wind power capacity was installed worldwide in 2022, bringing the cumulative installed capacity to 906 GW, an increase of 9% over the previous year. However, as the core component of a wind turbine for converting wind energy into mechanical energy, the WTB is vulnerable to alternating wind speeds and harsh environments such as oceans, deserts, and mountains; damage to wind turbine blades not only reduces power output and economic benefits, but can also cause blade breakage and casualties [2]. Thus, the accurate detection of damaged blades is essential for both blade protection and safe turbine operation. To address this problem, numerous techniques for identifying surface damage on WTBs have been proposed and well applied, such as acoustic emission [3,4,5], fiber Bragg gratings [6], vibration analysis [7,8,9], infrared thermography [10,11], and ultrasonic testing [12,13,14].
With the advancement of UAV technology [15] and image processing technology, UAV-based image acquisition and damage detection have been studied for their low cost, easy operation and nondestructive nature [16,17,18,19,20]. Initially, several feature extraction techniques coupled with machine learning algorithms were proposed for identifying damage types on WTBs. Deng et al. [21] propose a filter based on the log-Gabor filter and the LPSO algorithm to adaptively extract optimal features from damaged images, and use a classifier based on the HOG and the SVM to identify the damage type. Movsessian et al. [22] propose a robust ANN-based damage detection method with a new index for damage detection. Peng et al. [23] propose an image enhancement technique that employs cartoon texture decomposition, multi-directional Gabor filtering, and gradient threshold-based segmentation to improve the precision of damage identification. Yang et al. [24] propose a deep learning framework in which the Otsu thresholding technique is applied for automatic feature extraction from images, and transfer learning along with ensemble learning are employed to boost the precision and efficiency of damage detection. Guo et al. [25] propose a hierarchical identification framework in which the Haar-AdaBoost and CNN classifiers are used for detecting damaged blades. Sun et al. [26] propose a condition monitoring method for blades that needs only healthy measurements for training. Liu et al. [27] propose a lightweight model called YOLOv3-Mobilenet-PK for detecting surface-damaged blades. Foster et al. [28] publish a public database of damaged blades for detection and compare the performance of several detection models.
Recently, many attention mechanism strategies have been used to detect damaged WTBs. Zou et al. [29] propose an improved CNN in which the SC attention module, modified into the DPCI_SC block, is used to obtain image spatial position information more efficiently. Zhang et al. [30] propose a detection method combining MobileNetv1-YOLOv4 and transfer learning, in which three different attention-based feature optimization modules are used to adaptively optimize features. Zhang et al. [31] propose an improved lightweight YOLOv5s network incorporating the Convolutional Block Attention Module (CBAM). Liu et al. [32] develop an innovative lightweight model for detecting damaged WTBs using an attention mechanism, thereby reducing computational time and enhancing detection precision. Chen et al. [33] propose an improved YOLOv5 framework in which the CBAM is used to highlight damaged features of blades. Ma et al. [34] propose the MES-YOLOv8n model with ECA, which achieves higher accuracy than the primitive YOLOv8n for damaged blade detection.
The size, shape and position of different types of surface damage vary across UAV images of blades. In particular, depending on the drone's position and shooting angle, a UAV image may contain a large amount of complex background information, and the damaged region may occupy only a small area of the image; that is, the damage becomes small relative to the whole UAV image. These factors lower the quality of the UAV image and seriously interfere with detection of the damaged blade. In order to enhance detection on low-quality UAV images, this paper proposes an improved YOLOv7 model fusing the ECA module, the DownSample module, the DGST module and the MIoU loss function. The key contributions are summarized as follows:
  • As low-quality UAV images increase the difficulty of damage feature extraction, resulting in the large computation and long processing time of the primitive YOLOv7 framework, the DGST is proposed as a new feature extraction module to replace the ELAN module in the Backbone of the YOLOv7 model, and the DownSample module is introduced into the Backbone to enhance feature extraction ability and detection speed;
  • Because the characteristic information of damage in low-quality images is fuzzy, subtle damage such as cracks is not easy to detect. Thus, the ECA attention module is introduced into the Neck of YOLOv7 to enhance the model's attention to damaged features and reduce the interference of irrelevant information in damaged images;
  • In order to solve the problem of mismatch between the generated bounding box and the actual bounding box, the MIoU loss function is used to represent the bounding box information more comprehensively and carefully.
The following sections of this paper are structured as follows. In Section 2, an overview of both the primitive YOLOv7 and the improved YOLOv7 architecture is presented. In Section 3, the assessment indexes are introduced. In Section 4, the experiment and analysis based on the public blade image database are presented. In Section 5, the experiment and analysis based on the blade image database from a wind power company are presented. Finally, this paper is summarized in the Conclusions.

2. Methods

2.1. YOLOv7 Model

As shown in Figure 1, the original YOLOv7 model consists of four parts: the Input layer, the Backbone network (feature extraction network), the Neck (feature fusion part) and the Head (prediction part). The Backbone network consists of the CBS module, the MP-1 module and the E-ELAN module. The Neck adopts the PA-Net structure and is mainly composed of the SPPCSPC module, the ELAN-C module, the MP-2 module, the CBS module and the UPSample module. The Head is mainly composed of the RepConv module and 1 × 1 convolutions.

2.2. Improved YOLOv7 Model

The improved YOLOv7 network architecture is shown in Figure 2. In the Backbone network, the DGST module and the DownSample module are used to build a new feature extraction network that obtains more accurate feature information and improves the accuracy and speed of damage detection, so as to deal with the insufficient extraction of damage feature information by the original YOLOv7 Backbone on low-quality UAV images. The addition of the ECA module in the Neck improves the sensitivity of the network to damage and captures the more subtle features of damage. In addition, the MIoU loss function is introduced so that the network describes the bounding box information more accurately in the prediction part, improving the recognition accuracy for different types of damage on wind turbine blades.

2.2.1. Improved Backbone Based on the DGST Module and the DownSample Module

Due to the diverse application scenarios of wind turbines, the backgrounds of damaged blade images captured by drones are complex and diverse. For example, mountains, farmland, the Gobi desert, houses, and woods can appear in the damaged blade images. A large amount of unrelated background information restricts the ability of the backbone network in the original YOLOv7 to extract key feature information. In this paper, the DGST module is introduced as a new feature extraction module to replace the ELAN module of the original YOLOv7, and the DownSample module is used as the downsampling part to replace the MP-1 module of the original YOLOv7, in order to improve the network's accurate and efficient extraction of key features of wind turbine blade damage under complex image backgrounds.
The architecture diagram of the DGST module is shown in Figure 3. Initially, the channel number is changed by a 1 × 1 convolution to accommodate the varied processing requirements of the subsequent branches. Next, looking from top to bottom, the upper branch employs a standard 3 × 3 convolution for extracting characteristics, the SE module is employed in the second branch to focus the model's attention on areas that are crucial for damage detection, and the CBR with a 7 × 7 depthwise convolution in the third branch has a larger receptive field, which allows it to capture abundant local feature information. Then, all the characteristic information is integrated through the Concat connection. Finally, the comprehensive damage feature information of the WTB is either preserved or excluded within the ConvFFN module according to the DropPath value, thereby enhancing the model's generalization capability and optimizing its feature extraction from the WTB damage images.
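For illustration, the following PyTorch sketch shows one way a block with this topology could be assembled. The equal channel split across branches, the SiLU/ReLU activations and the use of Dropout in place of DropPath are assumptions for readability, not the authors' exact implementation.

```python
import torch
import torch.nn as nn

class SE(nn.Module):
    """Squeeze-and-Excitation channel attention (assumed reduction ratio r = 16)."""
    def __init__(self, c, r=16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(c, c // r, 1), nn.ReLU(inplace=True),
            nn.Conv2d(c // r, c, 1), nn.Sigmoid())
    def forward(self, x):
        return x * self.fc(x)

class DGSTSketch(nn.Module):
    """Structural sketch of the DGST block in Figure 3: a 1x1 conv adjusts channels,
    three parallel branches (3x3 conv, SE, 7x7 depthwise CBR) are concatenated, and a
    ConvFFN refines the fused features (Dropout stands in for DropPath here)."""
    def __init__(self, c_in, c_out, drop=0.1):
        super().__init__()
        c = c_out // 3  # assumed equal split across the three branches
        self.stem = nn.Sequential(nn.Conv2d(c_in, 3 * c, 1), nn.BatchNorm2d(3 * c), nn.SiLU())
        self.branch1 = nn.Sequential(nn.Conv2d(c, c, 3, padding=1), nn.BatchNorm2d(c), nn.SiLU())
        self.branch2 = SE(c)
        self.branch3 = nn.Sequential(
            nn.Conv2d(c, c, 7, padding=3, groups=c),   # 7x7 depthwise convolution
            nn.BatchNorm2d(c), nn.ReLU(inplace=True))  # CBR branch
        self.ffn = nn.Sequential(nn.Conv2d(3 * c, c_out, 1), nn.SiLU(),
                                 nn.Dropout(drop),
                                 nn.Conv2d(c_out, c_out, 1))
    def forward(self, x):
        x = self.stem(x)
        a, b, d = torch.chunk(x, 3, dim=1)             # one chunk per branch
        y = torch.cat([self.branch1(a), self.branch2(b), self.branch3(d)], dim=1)
        return self.ffn(y)
```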
As shown in Figure 4, the DownSample module includes a standard convolution, a dilated (expansive) convolution, a maximum pooling layer and an average pooling layer. There are two branches from top to bottom in the DownSample module. In the first branch, features are initially extracted by a standard convolution with k = 1 and s = 1. Then, after processing by the maximum pooling layer (MaxPool module), the resolution of the feature map and the computation and parameter count of the subsequent network are reduced while the feature map information is retained. The average pooling layer (AvgPool module) retains more detailed features and background information to compensate for the loss of key feature information caused by maximum pooling. In the second branch, the dilated convolution with d = 3 (dilation rate of 3) adds holes to the standard convolution and enlarges the receptive field without increasing the network parameters, so as to capture more comprehensive feature information. Finally, the feature information from diverse scales and hierarchical levels is fused by the Concat module, and the feature map of the damaged WTB image is obtained.
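As a sketch only, the two-branch layout described above can be expressed as follows; the channel widths, the parallel arrangement of the max- and average-pooling paths, and the padding of the dilated convolution are assumptions chosen so that all branch outputs share the same spatial size.

```python
import torch
import torch.nn as nn

class DownSampleSketch(nn.Module):
    """Sketch of the DownSample block in Figure 4: a 1x1 conv feeds stride-2 max- and
    average-pooling, while a dilated 3x3 conv (dilation 3, stride 2) enlarges the
    receptive field; the branch outputs are concatenated and fused."""
    def __init__(self, c_in, c_out):
        super().__init__()
        c = c_out // 4  # assumed branch width
        self.pre = nn.Sequential(nn.Conv2d(c_in, 2 * c, 1), nn.BatchNorm2d(2 * c), nn.SiLU())
        self.maxpool = nn.MaxPool2d(kernel_size=2, stride=2)
        self.avgpool = nn.AvgPool2d(kernel_size=2, stride=2)
        self.dilated = nn.Sequential(
            nn.Conv2d(c_in, 2 * c, 3, stride=2, padding=3, dilation=3),
            nn.BatchNorm2d(2 * c), nn.SiLU())
        self.fuse = nn.Conv2d(6 * c, c_out, 1)  # 2c + 2c + 2c channels after Concat
    def forward(self, x):
        y = self.pre(x)
        out = torch.cat([self.maxpool(y), self.avgpool(y), self.dilated(x)], dim=1)
        return self.fuse(out)
```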

2.2.2. The Improved Neck Network with the ECA Module

The shooting of damaged WTB images is influenced by various factors, including downlight, backlight, light intensity, and so on. These factors result in low clarity in some damage images, and especially in false and missed detections of subtle damage types such as cracks. Thus, in this paper, the channel attention ECA module [35] is introduced after the upsampling part of the Neck to deal with this problem. After being processed by the up-sampling module, the feature map is recovered from low resolution to high resolution, and the key information contained in the feature map increases correspondingly. At this point, the introduced ECA module can further improve the attention to damage characteristics, filter out irrelevant feature information, and better capture subtle damage features so as to identify the target damage in blurred wind turbine blade damage images. Moreover, the model complexity and computational cost can be reduced to a certain extent, and the detection performance of the model can be improved.
The ECA module is an efficient channel attention mechanism designed for deep convolutional neural networks. Compared with the traditional SE (Squeeze-and-Excitation) module, it avoids dimensionality reduction during channel attention learning, which helps to learn effective channel attention. Without reducing the dimension, a fast one-dimensional convolution with a kernel of size k captures local cross-channel interaction, so that layers with a large number of channels realize cross-channel information exchange, richer feature information is obtained, and the efficiency and effectiveness of the model are ensured. The coverage range of this interaction can be determined adaptively from the corresponding mapping relationship and the channel dimension C. The relationship between these parameters is given by Equation (1):
k = \psi(C) = \left| \frac{\log_2 C}{\gamma} + \frac{b}{\gamma} \right|_{odd}    (1)

where |t|_{odd} denotes the odd number closest to t.
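The ECA mechanism is compact enough to sketch directly. The implementation below follows the published ECA-Net design [35]; γ = 2 and b = 1 are the values commonly used in that paper and are assumed here, since this article does not state them.

```python
import math
import torch
import torch.nn as nn

class ECA(nn.Module):
    """Efficient Channel Attention: global average pooling followed by a 1D convolution
    across the channel axis, with kernel size k chosen adaptively from Equation (1)."""
    def __init__(self, channels, gamma=2, b=1):
        super().__init__()
        t = int(abs(math.log2(channels) / gamma + b / gamma))
        k = t if t % 2 else t + 1                       # nearest odd number
        self.conv = nn.Conv1d(1, 1, kernel_size=k, padding=k // 2, bias=False)
        self.sigmoid = nn.Sigmoid()
    def forward(self, x):                               # x: (N, C, H, W)
        y = x.mean(dim=(2, 3))                          # global average pooling -> (N, C)
        w = self.sigmoid(self.conv(y.unsqueeze(1)))     # 1D conv over channels -> (N, 1, C)
        w = w.transpose(1, 2).unsqueeze(-1)             # channel weights -> (N, C, 1, 1)
        return x * w                                    # rescale the feature map
```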

2.2.3. The Improved Loss Function with the MIoU

The lack of match between the generated damage bounding box and the real bounding box is one of the key factors limiting the accuracy of wind turbine blade damage detection. Moreover, the CIoU loss function widely used in target detection degenerates into the IoU loss function when the center point of the prediction box coincides with that of the target box and their aspect ratios are the same, which significantly reduces the convergence speed of the algorithm. In this paper, considering that the MIoU loss function can describe bounding box information more comprehensively and carefully by capturing more bounding box attributes, the Multiple attributes IoU (MIoU) loss function is used to replace the original loss function of the improved YOLOv7 network, so as to significantly improve the precision and speed of turbine blade damage detection.
The MIoU loss function accurately measures the difference between the predicted bounding box and the real bounding box by fusing four key attributes of the bounding box, namely the overlap area, the center point distance, the aspect ratio and the absolute side length difference. As shown in Equation (2), the MIoU loss function mainly consists of four parts: the IoU loss, the center coordinate distance loss, the aspect ratio loss, and the boundary difference loss. Compared with the CIoU loss function [36], the MIoU loss function replaces the direct calculation of the side lengths by calculating the differences between the corresponding side lengths of the two bounding boxes. This method significantly decreases computational effort and expresses the differences between bounding boxes in a more concise and intuitive way.
L_{MIoU} = 1 - IoU + \frac{\rho^2(b, b^{gt})}{c^2} + \frac{4}{\pi^2}\left(\arctan\frac{w^{gt}}{h^{gt}} - \arctan\frac{w}{h}\right)^2 + \left(\frac{\Delta h}{\Delta h_c}\right)^2 + \left(\frac{\Delta w}{\Delta w_c}\right)^2    (2)

where b denotes the center point of the real bounding box and b^{gt} denotes the center point of the predicted bounding box, ρ denotes the Euclidean distance, and c denotes the length of the diagonal of the smallest box C that can cover the real bounding box and the predicted bounding box simultaneously. w and h denote the width and height of the real bounding box, w^{gt} and h^{gt} denote the width and height of the predicted bounding box, and Δw and Δh denote the absolute differences between the corresponding side lengths of the two boxes.
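A hedged sketch of Equation (2) is given below. Since the normalization of the side-length terms is not spelled out in the text, Δh and Δw are taken as the absolute height and width differences between the two boxes and Δh_c, Δw_c as the height and width of the smallest enclosing box C; these choices are assumptions, not the authors' confirmed definition.

```python
import math
import torch

def miou_loss(pred, target, eps=1e-7):
    """Sketch of the MIoU loss in Equation (2); boxes are (x1, y1, x2, y2) tensors of shape (N, 4)."""
    # widths, heights and centers of both boxes
    w1, h1 = pred[:, 2] - pred[:, 0], pred[:, 3] - pred[:, 1]
    w2, h2 = target[:, 2] - target[:, 0], target[:, 3] - target[:, 1]
    cx1, cy1 = (pred[:, 0] + pred[:, 2]) / 2, (pred[:, 1] + pred[:, 3]) / 2
    cx2, cy2 = (target[:, 0] + target[:, 2]) / 2, (target[:, 1] + target[:, 3]) / 2
    # intersection over union
    iw = (torch.min(pred[:, 2], target[:, 2]) - torch.max(pred[:, 0], target[:, 0])).clamp(0)
    ih = (torch.min(pred[:, 3], target[:, 3]) - torch.max(pred[:, 1], target[:, 1])).clamp(0)
    inter = iw * ih
    union = w1 * h1 + w2 * h2 - inter + eps
    iou = inter / union
    # smallest enclosing box C: diagonal and center-distance terms
    cw = torch.max(pred[:, 2], target[:, 2]) - torch.min(pred[:, 0], target[:, 0])
    ch = torch.max(pred[:, 3], target[:, 3]) - torch.min(pred[:, 1], target[:, 1])
    c2 = cw ** 2 + ch ** 2 + eps
    rho2 = (cx1 - cx2) ** 2 + (cy1 - cy2) ** 2
    # aspect-ratio term (as in CIoU) and side-length difference terms
    v = (4 / math.pi ** 2) * (torch.atan(w2 / (h2 + eps)) - torch.atan(w1 / (h1 + eps))) ** 2
    side = ((h1 - h2).abs() / (ch + eps)) ** 2 + ((w1 - w2).abs() / (cw + eps)) ** 2
    return 1 - iou + rho2 / c2 + v + side
```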

3. Assessment Index

In the experiments, three evaluation indicators are used for quantitative measurement, namely mAP@0.5, precision (recorded as P), and recall (recorded as R), which reflect the performance of the improved YOLOv7 algorithm.
The index mAP@0.5 represents the mean average precision calculated at an IoU threshold of 0.5, and mAP is calculated by Equation (3):

mAP = \frac{1}{n} \sum_{k=1}^{n} AP_k    (3)

where n signifies the number of categories, while AP_k refers to the area bounded by the precision-recall curve of category k and the coordinate axes, indicating the recognition accuracy for a single category.
The P is short for precision and is calculated by Equation (4):
P = \frac{TP}{TP + FP}    (4)
The R is short for recall and is calculated by Equation (5):
R = \frac{TP}{TP + FN}    (5)
where T P stands for True Positive, indicating correct positive predictions, F P stands for False Positive, indicating incorrect predictions of negative samples as positive, and F N stands for False Negative, indicating incorrect predictions of positive samples as negative.
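For concreteness, Equations (3)-(5) amount to the following small helper functions; the detection counts in the usage line are hypothetical and only illustrate the arithmetic.

```python
def precision_recall(tp, fp, fn):
    """Equations (4) and (5): precision and recall from raw detection counts."""
    p = tp / (tp + fp) if tp + fp else 0.0
    r = tp / (tp + fn) if tp + fn else 0.0
    return p, r

def mean_average_precision(ap_per_class):
    """Equation (3): mAP is the arithmetic mean of the per-class AP values,
    each AP being the area under that class's precision-recall curve."""
    return sum(ap_per_class) / len(ap_per_class)

# Hypothetical counts: 76 correct detections, 12 false alarms, 19 missed damages
p, r = precision_recall(tp=76, fp=12, fn=19)   # p ~= 0.864, r = 0.8
```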
Considering that detection efficiency is also key to surface damage detection on WTBs, the number of parameters (recorded as Params) and the frame rate (recorded as F) are also used in this paper. They are calculated by Equations (6) and (7), respectively.
Params = \sum_{l=1}^{D} K_l^2 \cdot C_{l-1} \cdot C_l + \sum_{l=1}^{D} K_l^2 \cdot M^2 \cdot C_l    (6)
where K_l denotes the convolution kernel size, M denotes the size of the output feature map, C_{l-1} denotes the number of channels of the input layer, and C_l denotes the number of channels of the output layer.
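In practice the parameter count of Equation (6) can simply be read off the model; a minimal sketch:

```python
import torch.nn as nn

def count_params(model: nn.Module) -> float:
    """Total learnable parameters, reported in millions as in Table 4."""
    return sum(p.numel() for p in model.parameters() if p.requires_grad) / 1e6

# e.g. a single 3x3 convolution from 64 to 128 channels:
# 3*3*64*128 weights + 128 biases = 73,856 parameters
print(count_params(nn.Conv2d(64, 128, kernel_size=3)))  # ~0.074 M
```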
F = \frac{N}{T}    (7)

where N denotes the number of test samples, while T denotes the total time taken to test these samples.
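Equation (7) corresponds to a simple throughput measurement, sketched below; warm-up iterations and GPU synchronization are omitted for brevity.

```python
import time
import torch

@torch.no_grad()
def frame_rate(model, images):
    """Equation (7): F = N / T, the number of test images divided by total inference time."""
    model.eval()
    start = time.perf_counter()
    for img in images:
        model(img)
    return len(images) / (time.perf_counter() - start)
```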

4. Experiments Based on Public Wind Turbine Blade Image Database

4.1. Experiment Environment and Settings

To speed up the computation, AutoDL cloud computing is rented for the experiment. The hardware environment consists of an Intel(R) Xeon(R) Platinum 8481C CPU (Intel Corporation, Santa Clara, CA, USA) and an NVIDIA GeForce RTX 4090D (24 GB) GPU (NVIDIA Corporation, Santa Clara, CA, USA). The software environment adopts the Ubuntu 22.04 operating system, the deep learning framework PyTorch 2.1.0, the programming language Python 3.10.8, and CUDA 12.1.
The public WTB damage image dataset [28,37] has 2995 annotated images with a pixel size of 586 × 371. The training set, validation set and test set are split in the proportion 7:2:1. Subsequently, to make better use of the dataset and enhance the robustness of the experiment, the three sets are expanded by data augmentation, including rotating the images, adjusting the brightness of the images and changing the contrast of the images; Figure 5 and Figure 6 show examples for the two instance types. After augmentation, the numbers of images in the training set, validation set and test set are 6288, 1797 and 900, respectively. During training, the image size is fixed at 640 × 640, the momentum is set to 0.937, the learning rate to 0.01, and the optimizer is SGD with a weight decay of 0.0005. For warming up the model, the number of warm-up epochs is set to 3.0 and the warm-up momentum to 0.8. The batch size is 8, and the total number of training epochs is 100. Under the premise that the configuration environment and initial training parameters are kept consistent, our model is compared with the YOLOv7, YOLOv8l, YOLOv9e and YOLOv10x models.
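A minimal sketch of the augmentation step is shown below, assuming torchvision transforms; the rotation angle and the brightness/contrast jitter ranges are assumptions, and for detection data the bounding-box annotations would of course have to be transformed together with the rotated images.

```python
from PIL import Image
from torchvision import transforms as T

# Each source image additionally yields a rotated, a brightness-adjusted and a
# contrast-adjusted copy, as illustrated in Figures 5 and 6.
rotate   = T.RandomRotation(degrees=15)      # rotation angle is an assumption
brighten = T.ColorJitter(brightness=0.4)     # jitter ranges are assumptions
contrast = T.ColorJitter(contrast=0.4)

def augment(path):
    img = Image.open(path).convert("RGB")
    return [img, rotate(img), brighten(img), contrast(img)]
```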

4.2. Training Process of the Models

Figure 7 shows the mAP@0.5 curves of each algorithm during training. After 10 epochs, the mAP@0.5 curve of our model surpasses those of the other object detection models, which shows that its convergence speed is faster during training. In addition, as training progresses, the curves gradually stabilize, and the values of our model, the primitive YOLOv7 and YOLOv9e are close to each other and higher than those of the YOLOv8l and YOLOv10x models.

4.3. Experiment Results and Analysis

The experimental results are shown in Table 1. The mAP@0.5 of our model (81.6%) is higher than that of the primitive YOLOv7 (81.0%), YOLOv8l (80.2%), YOLOv9e (81.5%) and YOLOv10x (80.3%) models. In terms of precision (P), our model is 2.6%, 2.8%, 1.8% and 2.1% higher than the primitive YOLOv7, YOLOv8l, YOLOv9e and YOLOv10x models, respectively; that is, its precision is the highest among them. In terms of recall (R), our model is 0.2% and 0.9% higher than the primitive YOLOv7 and YOLOv9e models, respectively. Compared with the YOLOv8l and YOLOv10x models, it is slightly lower in recall by 1.3% and 1.0%, but it is ahead of them in the other three indexes. In terms of the frame rate, our model has the fastest image processing speed, which is 1.55 times that of the primitive YOLOv7 model, 1.53 times that of the YOLOv8l model, 1.49 times that of the YOLOv9e model, and 1.11 times that of the YOLOv10x model. To sum up, apart from being slightly lower than YOLOv8l and YOLOv10x in recall, the proposed algorithm has the best values for the other indexes. Moreover, the image processing speed of our model is much faster than that of the other algorithms while maintaining higher accuracy, which satisfies the speed and accuracy requirements for detecting damaged WTBs from UAV images.
The P-R curve of our model is displayed in Figure 8; the key quantity is the area enclosed by the P-R curve and the coordinate axes: the larger it is, the better the model's detection capacity. Figure 9 illustrates our model's confusion matrix, summarizing the prediction results for the different instances. The figure indicates that the detection accuracy for dirt and damage is 88% and 76%, respectively, satisfying the precision requirements of surface damage detection on WTBs.

5. Experiments Based on the Database from a Wind Power Company

5.1. Experiment Environment and Database

This experiment is run on a personal computer, a Lenovo Legion Y7000P 2024. The hardware environment consists of an Intel Core i7-14650HX CPU and an NVIDIA GeForce RTX 4060 (8 GB) GPU. The software environment adopts the Windows 11 operating system, the deep learning framework PyTorch 2.1.0, the programming language Python 3.10.8, and CUDA 12.1.
There are 1900 images collected from a wind power field, and these images are divided into groups according to crack damage, trachoma damage and delamination damage; Figure 10, Figure 11 and Figure 12, respectively, display three examples of each category. The UAV images feature intricate background elements, including cars, cultivated fields, mountain ranges, water bodies, woodland, and skies of varying hues, as depicted in Figure 11 and Figure 12, causing differences in the area proportion and position of the blade within the UAV image. Moreover, as shown in Figure 10, Figure 11 and Figure 12, the size, shape, and location of the damage can differ, whether between different types of damage or within the same type. These factors lower the quality of the damage images and increase the difficulty of damage detection.
All the images are resized to a uniform size of 480 × 480 pixels. Table 2 displays the specific quantity distribution of each type of damage in the three experimental sets; the proportion of the training, verification and test sets is 8:1:1.

5.2. Training Process of the Models

The parameters used in this experiment are the same as those in the previous experiment. The mAP@0.5 curves of the five models during training are displayed in Figure 13. It is observed that after 10 epochs, our model has the highest mAP@0.5 among the five models, which shows that its convergence speed is faster during training. In addition, as training progresses, the five mAP@0.5 curves gradually stabilize. In particular, in terms of the mAP@0.5 index, our model outperforms the primitive YOLOv7, YOLOv8l, YOLOv9e and YOLOv10x models, achieving the highest performance among them.

5.3. Experiment Results

As shown in Figure 14, four P-R curves are displayed; the closer a curve is to 1, the more effective the model is. The mAP@0.5 for all damage types is above 71%. For crack damage, the mAP@0.5 reaches 88.5%, its highest value. In addition, the confusion matrix is shown in Figure 15, and our model's detection accuracy for all defects is above 72%, which satisfies the accuracy requirement for damage detection.

5.4. Comparative Experiments

5.4.1. Ablation Experiment

The ablation experiment with different combinations of the four improvement points is designed in Table 3 to demonstrate their effect, while ensuring the consistency of the relevant environment and database. In Table 3, if an improvement point is used, it is marked with "✔"; otherwise, it is marked with "-".
As can be seen from Table 3, after the DownSample module is introduced alone in Experiment 2 and the DGST module alone in Experiment 4, the mAP@0.5 index rises by only 0.8% and 0.4%, respectively, in contrast to the primitive YOLOv7 model. However, the combination of the DownSample and DGST modules in Experiment 6 raises the mAP@0.5 by 3.9% compared with the primitive YOLOv7 model, the P index by 3.5%, and the R index by 4.7%, which indicates that the two improvements greatly promote the accuracy of the model when used together. After the ECA module is introduced separately in Experiment 3, the mAP@0.5 rises by 1.5%, the precision by 7.7%, and the recall by 6.7% compared with the primitive YOLOv7 model, which demonstrates that the ECA attention module can advance the network's capacity to perceive important features. In Experiment 5, after the CIoU is replaced by the MIoU, the mAP@0.5 improves by 2.5% compared to the primitive YOLOv7 model; that is, the MIoU can perceive more bounding box attributes, describe bounding box information more comprehensively, and complete the convergence process accurately and quickly. Furthermore, comparing Experiment 7 with Experiment 1, the mAP@0.5 of our model rises by 6.2%, the precision by 10%, and the recall by 9.5%, which fully demonstrates the benefits of our model.

5.4.2. Comparison of Different Models

In order to further verify the superiority of the proposed algorithm, the improved YOLOv7 model is compared with the original YOLOv7, YOLOv8l, YOLOv9e and YOLOv10x models, and the comparison results of the five models are displayed in Table 4. For the mAP@0.5 and recall (R) indexes, our model is the highest. For the precision (P) index, our model has accuracy similar to that of the YOLOv10x model and higher than the other three models. Moreover, the number of parameters of our model is about 1.2 times that of the primitive YOLOv7 model, but its frame rate is nearly the same as that of the primitive YOLOv7 model and faster than the other three models; the image processing speed of our model is roughly 1.3 times that of the YOLOv8l and YOLOv10x models and 2.7 times that of the YOLOv9e model. Thus, based on Table 4, our model shows evident superiority in accuracy and speed.
Figure 16, Figure 17, Figure 18, Figure 19 and Figure 20, respectively, display the recognition results of a crack damage image, a delamination damage image and a trachoma damage image for each of the five models, to compare their detection performance. The value in each figure is the confidence score, which represents the strength of the model's prediction for a particular class of object in the detection box; the larger the value, the stronger the model's recognition capability. For the crack damage image in Figure 16, Figure 17, Figure 18, Figure 19 and Figure 20, the confidences of the primitive YOLOv7, YOLOv8l, YOLOv9e, YOLOv10x and our framework are 44%, 63%, 67%, 55% and 88%, respectively; that is, our framework has the best capability and is twice as good as the primitive YOLOv7 framework. For the delamination damage image, the confidences of the five models are 71%, 76%, 79%, 74% and 87%, respectively. Notably, a large amount of background information, such as farmland and woods, is included in the delamination damage image and takes up two-thirds of the image, which limits the efficacy of object detection, but the confidence of the improved YOLOv7 model is still 87%. For the trachoma damage image, the confidence of our model is still the highest of the five frameworks. Thus, the detection capability of our framework for the three types of damaged blades is sufficiently high.

5.4.3. Robustness Comparison Experiment

To verify the robustness of the models, Gaussian noise with different signal-to-noise ratios (SNRs) is added to the tested images [38,39,40]. In this paper, four kinds of Gaussian noise with 10 dB, 20 dB, 30 dB and 40 dB SNR are added to the tested damage images, and the experimental results are displayed in Table 5. For each model, the recognition accuracy increases slightly with the SNR. Moreover, compared with the other four models, our model has the highest detection accuracy under each index, except for the precision (73.7%) of the YOLOv9e model at 10 dB SNR, which is higher than that of our model. Specifically, when no noise is added, the YOLOv10x model is only 0.7% higher than our model in the P index, but when noise is added, the YOLOv10x model is lower than our model in all three indexes. Thus, our model has stronger robustness and can better identify damaged blade images with poor image quality.
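The noise-injection step can be sketched as follows, assuming the SNR is defined with respect to the mean power of the clean image; the exact definition used by the authors is not stated, so this is an illustrative assumption.

```python
import numpy as np

def add_gaussian_noise(image, snr_db):
    """Adds zero-mean Gaussian noise at the requested signal-to-noise ratio.
    The noise variance is derived from SNR(dB) = 10 * log10(P_signal / P_noise)."""
    img = image.astype(np.float64)
    signal_power = np.mean(img ** 2)
    noise_power = signal_power / (10 ** (snr_db / 10))
    noise = np.random.normal(0.0, np.sqrt(noise_power), img.shape)
    return np.clip(img + noise, 0, 255).astype(np.uint8)

# e.g. building the 10/20/30/40 dB test variants of Table 5 from one clean image:
# noisy = add_gaussian_noise(test_image, snr_db=20)   # test_image: H x W x 3 uint8 array
```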

6. Conclusions

In this paper, an improved YOLOv7 model with the DGST module, the DownSample module, the ECA module, and the MIoU loss is proposed; it advances not only the detection accuracy but also the robustness of WTB damage detection based on low-quality images taken by UAVs.
  • As low-quality UAV images increase the difficulty of damage feature extraction, resulting in the large computation and long processing time of the primitive YOLOv7 framework, the DGST and DownSample modules are adopted to build a new feature extraction network, which captures more abundant feature information and avoids the insufficient extraction of damage feature information in the YOLOv7 Backbone. Moreover, the results of the ablation experiment indicate that our model outperforms the primitive YOLOv7 model with more than a 3.5% improvement in the mAP@0.5, P and R indexes.
  • Because the characteristic information of damage in low-quality images is fuzzy, cracks and other damage are not easy to detect. Therefore, the ECA attention module is applied to solve this problem by advancing the model's focus on damage features and minimizing the impact of intricate and unrelated background details in damage imagery. Moreover, the results of the ablation experiments reveal that our model boosts the P and R indexes by 7.7% and 6.7%, respectively, thereby diminishing issues related to false positives and false negatives.
  • The MIoU loss function is used to represent bounding box information more accurately and thoroughly, so as to solve the mismatch between the generated and the actual bounding boxes. Additionally, the results of the ablation experiment indicate that our model outperforms the primitive YOLOv7 model by 2.5% in mAP@0.5, 2.8% in P, and 1.6% in R.
Further, based on two blade image databases, the improved YOLOv7 model is compared with the original YOLOv7, YOLOv8l, YOLOv9e and YOLOv10x models to verify the detection accuracy, the processing speed and the robustness. For the first database, the mAP@0.5, P and frame rate indexes of our model are the highest among the five models. For the second database, the values of the mAP@0.5 and recall (R) indexes of our model are the highest, the precision (P) of our model is only 0.7% lower than that of the YOLOv10x model, and our model's frame rate lags behind that of the primitive YOLOv7 model by a mere 0.54 fps; that is, the image processing speed of our model is about 1.3 times that of the YOLOv8l model, 2.7 times that of the YOLOv9e model and 1.3 times that of the YOLOv10x model. Specifically, for the comparative robustness results, each model has 12 values compared under three criteria and four SNR levels, and our model achieves the maximum for eleven of them. Thus, considering the above contrastive analyses together, our model has the optimal detection performance.

Author Contributions

Conceptualization, M.L.; methodology, Y.L. and M.L.; software, Y.L., M.H., M.Q. and K.Z.; validation, Y.L., M.H., M.Q. and K.Z.; formal analysis, L.C.; investigation, L.C.; resources, M.L.; data curation, L.F.; writing—original draft preparation, M.L. and Y.L.; writing—review and editing, M.L. and L.C.; visualization, M.L.; supervision, L.C.; project administration, M.L.; funding acquisition, M.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded in part by the Hunan Provincial Natural Science Foundation of China (Grant Number: 2024JJ7174), and in part by the Outstanding Youth Project of the Education Department of Hunan Province of China (Grant Number: 23B0453).

Data Availability Statement

The data used in this study are from the Hunan Shiyou Electric Co. Ltd., which has not authorized us to publish the data online. However, the data can be requested via email ([email protected]).

Conflicts of Interest

Mingyang Lv from the Hunan University of Science and Technology and Liang Feng from Hunan Shiyou Electric Co., Ltd. jointly applied for the Joint Funds of the Hunan Provincial Natural Science Foundation of China (Grant Number: 2024JJ7174), which is funded by the Hunan provincial government. This kind of foundation requires a researcher from a university to apply together with a researcher from a company, and its aim is to solve practical technical problems and promote the development of the company. Thus, the authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
WTB      Wind turbine blade
YOLOv7   You Only Look Once version 7
UAV      Unmanned aerial vehicle
IoU      Intersection over Union
CIoU     Complete Intersection over Union
MIoU     Multiple attributes Intersection over Union
CBS      Channel-by-channel batch normalization and scaling
ECA      Efficient Channel Attention
DGST     Dynamic Group Convolution Shuffle Transformer

References

  1. Global Wind Energy Council. GWEC’s Global Wind Report 2023; GWEC: Lisbon, Portugal, 2024; Available online: https://gwec.net/globalwindreport2023/ (accessed on 1 August 2024).
  2. Candela Garolera, A.; Madsen, S.F.; Nissim, M.; Myers, J.D.; Holboell, J. Lightning Damage to Wind Turbine Blades From Wind Farms in the U.S. IEEE Trans. Power Deliv. 2016, 31, 1043–1049. [Google Scholar] [CrossRef]
  3. Liu, Z.; Wang, X.; Zhang, L. Fault Diagnosis of Industrial Wind Turbine Blade Bearing Using Acoustic Emission Analysis. IEEE Trans. Instrum. Meas. 2020, 69, 6630–6639. [Google Scholar] [CrossRef]
  4. Mielke, A.; Benzon, H.H.; McGugan, M.; Chen, X.; Madsen, H.; Branner, K.; Ritschel, T.K. Analysis of damage localization based on acoustic emission data from test of wind turbine blades. Measurement 2024, 231, 114661. [Google Scholar] [CrossRef]
  5. Song, D.; Ma, T.; Shen, J.; Xu, F. Multiobjective-Based Acoustic Sensor Configuration for Structural Health Monitoring of Compressor Blade. IEEE Sens. J. 2023, 23, 14737–14745. [Google Scholar] [CrossRef]
  6. Tian, S.; Yang, Z.; Chen, X.; Xie, Y. Damage Detection Based on Static Strain Responses Using FBG in a Wind Turbine Blade. Sensors 2015, 15, 19992–20005. [Google Scholar] [CrossRef]
  7. Moradi, M.; Sivoththaman, S. MEMS Multisensor Intelligent Damage Detection for Wind Turbines. IEEE Sens. J. 2015, 15, 1437–1444. [Google Scholar] [CrossRef]
  8. Ou, Y.; Chatzi, E.N.; Dertimanis, V.K.; Spiridonakos, M.D. Vibration-based experimental damage detection of a small-scale wind turbine blade. Struct. Health Monit. 2017, 16, 79–96. [Google Scholar] [CrossRef]
  9. Li, H.; Wu, S.; Yang, Z.; Yan, R.; Chen, X. Measurement Methodology: Blade Tip Timing: A Non-Contact Blade Vibration Measurement Method. IEEE Instrum. Meas. Mag. 2023, 26, 12–20. [Google Scholar] [CrossRef]
  10. Traphan, D.; Herráez, I.; Meinlschmidt, P.; Schlüter, F.; Peinke, J.; Gülker, G. Remote surface damage detection on rotor blades of operating wind turbines by means of infrared thermography. Wind Energy Sci. 2018, 3, 639–650. [Google Scholar] [CrossRef]
  11. Collier, B.; Memari, M.; Shekaramiz, M.; Masoum, M.A.; Seibi, A. Wind Turbine Blade Fault Detection via Thermal Imaging Using Deep Learning. In Proceedings of the 2024 Intermountain Engineering, Technology and Computing (IETC), Logan, UT, USA, 13–14 May 2024; pp. 23–28. [Google Scholar] [CrossRef]
  12. Tsukuda, K.; Egawa, T.; Taniguchi, K.; Hata, Y. Average difference imaging and its application to ultrasonic nondestructive evaluation of wind turbine blade. In Proceedings of the 2012 IEEE International Conference on Systems, Man, and Cybernetics (SMC), Seoul, Republic of Korea, 14–17 October 2012; pp. 2601–2604. [Google Scholar] [CrossRef]
  13. Anaya, M.; Tibaduiza, D.; Forero, E.; Castro, R.; Pozo, F. An acousto-ultrasonics pattern recognition approach for damage detection in wind turbine structures. In Proceedings of the 2015 20th Symposium on Signal Processing, Images and Computer Vision (STSIVA), Bogota, Colombia, 2–4 September 2015; pp. 1–5. [Google Scholar] [CrossRef]
  14. Yang, K.; Rongong, J.A.; Worden, K. Damage detection in a laboratory wind turbine blade using techniques of ultrasonic NDT and SHM. Strain 2018, 54, e12290. [Google Scholar] [CrossRef]
  15. Fang, X.; Xie, L.; Li, X. Integrated Relative-Measurement-Based Network Localization and Formation Maneuver Control. IEEE Trans. Autom. Control 2024, 69, 1906–1913. [Google Scholar] [CrossRef]
  16. Du, Y.; Zhou, S.; Jing, X.; Peng, Y.; Wu, H.; Kwok, N. Damage detection techniques for wind turbine blades: A review. Mech. Syst. Signal Process. 2020, 141, 106445. [Google Scholar] [CrossRef]
  17. Yue, M.; Zhang, L.; Huang, J.; Zhang, H. Lightweight and Efficient Tiny-Object Detection Based on Improved YOLOv8n for UAV Aerial Images. Drones 2024, 8, 276. [Google Scholar] [CrossRef]
  18. Lian, X.; Li, Y.; Wang, X.; Shi, L.; Xue, C. Research on Identification and Location of Mining Landslide in Mining Area Based on Improved YOLO Algorithm. Drones 2024, 8, 150. [Google Scholar] [CrossRef]
  19. Han, Y.; Guo, J.; Yang, H.; Guan, R.; Zhang, T. SSMA-YOLO: A Lightweight YOLO Model with Enhanced Feature Extraction and Fusion Capabilities for Drone-Aerial Ship Image Detection. Drones 2024, 8, 145. [Google Scholar] [CrossRef]
  20. Niu, S.; Nie, Z.; Li, G.; Zhu, W. Early Drought Detection in Maize Using UAV Images and YOLOv8+. Drones 2024, 8, 170. [Google Scholar] [CrossRef]
  21. Deng, L.; Guo, Y.; Chai, B. Defect Detection on a Wind Turbine Blade Based on Digital Image Processing. Processes 2021, 9, 1452. [Google Scholar] [CrossRef]
  22. Movsessian, A.; García Cava, D.; Tcherniak, D. An artificial neural network methodology for damage detection: Demonstration on an operating wind turbine blade. Mech. Syst. Signal Process. 2021, 159, 107766. [Google Scholar] [CrossRef]
  23. Peng, Y.; Wang, W.; Tang, Z.; Cao, G.; Zhou, S. Non-uniform illumination image enhancement for surface damage detection of wind turbine blades. Mech. Syst. Signal Process. 2022, 170, 108797. [Google Scholar] [CrossRef]
  24. Yang, X.; Zhang, Y.; Lv, W.; Wang, D. Image recognition of wind turbine blade damage based on a deep learning model with transfer learning and an ensemble learning classifier. Renew. Energy 2021, 163, 386–397. [Google Scholar] [CrossRef]
  25. Guo, J.; Liu, C.; Cao, J.; Jiang, D. Damage identification of wind turbine blades with deep convolutional neural networks. Renew. Energy 2021, 174, 122–133. [Google Scholar] [CrossRef]
  26. Sun, S.; Wang, T.; Yang, H.; Chu, F. Condition monitoring of wind turbine blades based on self-supervised health representation learning: A conducive technique to effective and reliable utilization of wind energy. Appl. Energy 2022, 313, 118882. [Google Scholar] [CrossRef]
  27. Liu, Y.; Wang, Z.; Wu, X.; Fang, F.; Saqlain, A.S. Cloud-Edge-End Cooperative Detection of Wind Turbine Blade Surface Damage Based on Lightweight Deep Learning Network. IEEE Internet Comput. 2023, 27, 43–51. [Google Scholar] [CrossRef]
  28. Foster, A.; Best, O.; Gianni, M.; Khan, A.; Collins, K.; Sharma, S. Drone Footage Wind Turbine Surface Damage Detection. In Proceedings of the 2022 IEEE 14th Image, Video, and Multidimensional Signal Processing Workshop (IVMSP), Nafplio, Greece, 26–29 June 2022; pp. 1–5. [Google Scholar] [CrossRef]
  29. Zou, L.; Cheng, H. Research on Wind Turbine Blade Surface Damage Identification Based on Improved Convolution Neural Network. Appl. Sci. 2022, 12, 9338. [Google Scholar] [CrossRef]
  30. Zhang, C.; Yang, T.; Yang, J. Image Recognition of Wind Turbine Blade Defects Using Attention-Based MobileNetv1-YOLOv4 and Transfer Learning. Sensors 2022, 22, 6009. [Google Scholar] [CrossRef] [PubMed]
  31. Zhang, Y.; Yang, Y.; Sun, J.; Ji, R.; Zhang, P.; Shan, H. Surface defect detection of wind turbine based on lightweight YOLOv5s model. Measurement 2023, 220, 113222. [Google Scholar] [CrossRef]
  32. Liu, Y.H.; Zheng, Y.Q.; Shao, Z.F.; Wei, T.; Cui, T.C.; Xu, R. Defect detection of the surface of wind turbine blades combining attention mechanism. Adv. Eng. Inform. 2024, 59, 102292. [Google Scholar] [CrossRef]
  33. Liu, Z.H.; Chen, Q.; Wei, H.L.; Lv, M.Y.; Chen, L. Channel-Spatial attention convolutional neural networks trained with adaptive learning rates for surface damage detection of wind turbine blades. Measurement 2023, 217, 113097. [Google Scholar] [CrossRef]
  34. Ma, L.M.; Jiang, X.; Tang, Z.; Zhi, S.; Wang, T. Wind Turbine Blade Defect Detection Algorithm Based on Lightweight MES-YOLOv8n. IEEE Sens. J. 2024, 1. [Google Scholar] [CrossRef]
  35. Wang, Q.; Wu, B.; Zhu, P.; Li, P.; Zuo, W.; Hu, Q. ECA-Net: Efficient Channel Attention for Deep Convolutional Neural Networks. In Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 13–19 June 2020; pp. 11531–11539. [Google Scholar] [CrossRef]
  36. Zheng, Z.; Wang, P.; Ren, D.; Liu, W.; Ye, R.; Hu, Q.; Zuo, W. Enhancing Geometric Factors in Model Learning and Inference for Object Detection and Instance Segmentation. IEEE Trans. Cybern. 2022, 52, 8574–8586. [Google Scholar] [CrossRef]
  37. Shihavuddin, A.; Chen, X. DTU—Drone Inspection Images of Wind Turbine. 2018. Available online: https://orbit.dtu.dk/en/publications/dtu-drone-inspection-images-of-wind-turbine (accessed on 20 August 2024).
  38. Song, K.; Yan, Y. A noise robust method based on completed local binary patterns for hot-rolled steel strip surface defects. Appl. Surf. Sci. 2013, 285, 858–864. [Google Scholar] [CrossRef]
  39. He, Y.; Song, K.; Meng, Q.; Yan, Y. An End-to-End Steel Surface Defect Detection Approach via Fusing Multiple Hierarchical Features. IEEE Trans. Instrum. Meas. 2020, 69, 1493–1504. [Google Scholar] [CrossRef]
  40. Bao, Y.; Song, K.; Liu, J.; Wang, Y.; Yan, Y.; Yu, H.; Li, X. Triplet-Graph Reasoning Network for Few-Shot Metal Generic Surface Defect Segmentation. IEEE Trans. Instrum. Meas. 2021, 70, 5011111. [Google Scholar] [CrossRef]
Figure 1. The structure of original YOLOv7 model (the upper part of the figure shows the internal structure of the E-ELAN, ELAN-H, SPPCSPC, and MP modules, respectively).
Figure 2. The structure of improved YOLOv7 model. (The upper part of the figure shows the internal structure of submodules, such as the DGST module, the Down-Sample module, the SPPCSPC module, and so on).
Figure 3. The architecture of the DGST module.
Figure 4. The architecture of the DownSample module.
Figure 5. Data augmentation results for the damage instance. (a) The primordial image. (b) The rotated image. (c) The brightness-adjusted image. (d) The contrast-adjusted image.
Figure 6. Data augmentation results for the dirt instance. (a) The primordial image. (b) The rotated image. (c) The brightness-adjusted image. (d) The contrast-adjusted image.
Figure 7. The mAP@0.5 curves of the five models.
Figure 8. The P-R curve of improved YOLOv7.
Figure 9. The confusion matrix of improved YOLOv7.
Figure 10. Three different examples of the Crack damage.
Figure 11. Three different examples of the Trachoma damage.
Figure 12. Three different examples of the Delamination damage.
Figure 13. The mAP@0.5 curves of the five models.
Figure 14. The P-R curve of improved YOLOv7.
Figure 15. The confusion matrix of improved YOLOv7.
Figure 16. Recognition results of the primitive YOLOv7 model for the three types of damage. (a) Crack damage. (b) Delamination damage. (c) Trachoma damage.
Figure 17. Recognition results of the YOLOv8l model for the three types of damage. (a) Crack damage. (b) Delamination damage. (c) Trachoma damage.
Figure 18. Recognition results of the YOLOv9e model for the three types of damage. (a) Crack damage. (b) Delamination damage. (c) Trachoma damage.
Figure 19. Recognition results of the YOLOv10x model for the three types of damage. (a) Crack damage. (b) Delamination damage. (c) Trachoma damage.
Figure 20. Recognition results of the improved YOLOv7 model for the three types of damage. (a) Crack damage. (b) Delamination damage. (c) Trachoma damage.
Table 1. Results of the comparison experiment of the five models.
Model     | mAP@0.5 (%) | P (%) | R (%) | Frame Rate (fps)
YOLOv7    | 81          | 86    | 74.7  | 61
YOLOv8l   | 80.2        | 85.8  | 76.2  | 61.7
YOLOv9e   | 81.5        | 86.8  | 74    | 63.3
YOLOv10x  | 80.3        | 86.5  | 75.9  | 84.7
Ours      | 81.6        | 88.6  | 74.9  | 94.3
Table 2. The quantitative distribution of three kinds of damage in three sets.
Set              | Crack Damage | Trachoma Damage | Delamination Damage | Total
Training set     | 370          | 610             | 525                 | 1505
Verification set | 25           | 85              | 70                  | 180
Test set         | 45           | 77              | 93                  | 215
Table 3. Comparison of ablation experiments with different modules.
Experiment | Down-Sample | ECA | DGST | MIoU | mAP@0.5 (%) | P (%) | R (%)
1          | -           | -   | -    | -    | 72.1        | 70.1  | 63.8
2          | ✔           | -   | -    | -    | 72.9        | 69.3  | 67.3
3          | -           | ✔   | -    | -    | 73.7        | 77.8  | 70.5
4          | -           | -   | ✔    | -    | 72.5        | 68.3  | 68.8
5          | -           | -   | -    | ✔    | 74.6        | 72.9  | 65.4
6          | ✔           | -   | ✔    | -    | 76.0        | 73.6  | 68.5
7          | ✔           | ✔   | ✔    | ✔    | 78.3        | 80.1  | 73.3
Table 4. Comparison results of the five models.
Model     | mAP@0.5 (%) | P (%) | R (%) | Params (M) | Frame Rate (fps)
YOLOv7    | 72.1        | 70.1  | 63.8  | 37.2       | 73.53
YOLOv8l   | 73.2        | 73.8  | 65.7  | 43.6       | 58.48
YOLOv9e   | 75.3        | 78.7  | 72.3  | 57.3       | 26.60
YOLOv10x  | 72.8        | 80.8  | 71.6  | 31.6       | 55.87
Ours      | 78.3        | 80.1  | 73.3  | 44.3       | 72.99
Table 5. Robustness comparison results.
Model            | Criterion   | 10 dB | 20 dB | 30 dB | 40 dB | Without Noise
Primitive YOLOv7 | mAP@0.5 (%) | 64.7  | 67.5  | 67.6  | 67.9  | 72.1
                 | P (%)       | 64.5  | 70.2  | 70    | 70.2  | 70.1
                 | R (%)       | 56.8  | 60.9  | 61.8  | 62    | 63.8
YOLOv8l          | mAP@0.5 (%) | 65.2  | 68.7  | 69.1  | 68.8  | 73.2
                 | P (%)       | 63.1  | 71.9  | 71.5  | 71.2  | 73.8
                 | R (%)       | 60.8  | 62.4  | 62.8  | 62.3  | 65.7
YOLOv9e          | mAP@0.5 (%) | 71.1  | 72.3  | 71.6  | 72.1  | 75.3
                 | P (%)       | 73.7  | 71.6  | 72.6  | 71.8  | 78.7
                 | R (%)       | 63.4  | 66.9  | 65.7  | 65.6  | 72.3
YOLOv10x         | mAP@0.5 (%) | 66.5  | 67.6  | 68.3  | 67.9  | 72.8
                 | P (%)       | 70.2  | 71.5  | 71.1  | 71.7  | 80.8
                 | R (%)       | 64.7  | 64.5  | 66.5  | 65    | 71.6
Ours             | mAP@0.5 (%) | 72.9  | 74.6  | 74.6  | 74.6  | 78.3
                 | P (%)       | 72.5  | 74.5  | 73.8  | 73.3  | 80.1
                 | R (%)       | 69.4  | 70.9  | 71.7  | 71.2  | 73.3
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
