Article

LiDAR-Based Unmanned Aerial Vehicle Offshore Wind Blade Inspection and Modeling

by Alexandre Oliveira 1,2,*, André Dias 1,2, Tiago Santos 1,2, Paulo Rodrigues 1,2, Alfredo Martins 1,2 and José Almeida 1,2

1 INESC TEC—Institute for Systems and Computer Engineering, Technology and Science, Rua Dr. Roberto Frias, 4200-465 Porto, Portugal
2 ISEP—School of Engineering, Polytechnic Institute of Porto, Rua Dr. António Bernardino de Almeida 431, 4200-072 Porto, Portugal
* Author to whom correspondence should be addressed.
Drones 2024, 8(11), 617; https://doi.org/10.3390/drones8110617
Submission received: 24 September 2024 / Revised: 21 October 2024 / Accepted: 23 October 2024 / Published: 28 October 2024

Abstract:
The deployment of offshore wind turbines (WTs) has emerged as a pivotal strategy in the transition to renewable energy, offering significant potential for clean electricity generation. However, the operation and maintenance (O&M) of these structures present unique challenges due to their remote locations and harsh marine environments. For these reasons, it is fundamental to promote the development of autonomous solutions to monitor the health condition of the construction parts, preventing structural damage and accidents. This paper explores the application of Unmanned Aerial Vehicles (UAVs) in the inspection and maintenance of offshore wind turbines, introducing a new strategy for autonomous wind turbine inspection and a simulation environment for testing and training autonomous inspection techniques under a more realistic offshore scenario. Instead of relying on visual information to detect the WT parts during the inspection, this work proposes a three-dimensional (3D) light detection and ranging (LiDAR)-based method that estimates the wind turbine pose (position, orientation, and blade configuration) and autonomously controls the UAV for a close inspection maneuver. The first tests were carried out mainly in a simulation framework, combining different WT poses (orientations, blade positions, and wind turbine movements), followed by a mixed-reality test in which a real vehicle performed a full inspection of a virtual wind turbine.

1. Introduction

Wind and other renewable energy sources are widely used worldwide, becoming the main alternative to fossil fuels and a key to mitigating climate change by achieving energy sustainability and minimizing the need for greenhouse gas-emitting power generation. With the technological growth of wind power generation platforms, wind turbines (WTs) are becoming more robust and efficient, with the versatility of being designed to operate on land or offshore [1]. However, their reliability is fundamental to maintaining their high potential. It is important to periodically monitor the health condition of the construction parts, mainly the blades, to prevent structural damage and accidents [2]. Traditionally, these preventive maintenance strategies are fulfilled by professional rope climbing teams or even ground equipment, where the main goal is to manually monitor the damage to the wind turbine blade (WTB) and collect data for future interventions/repairs. This methodology represents an extreme risk for the crews and, simultaneously, a time-consuming and high-cost operation [3].
Unmanned aerial vehicles (UAVs) are gaining more interest in the scientific and industrial research community due to their autonomy, maneuverability, and payload capability, making these robots an outstanding platform to perform real-world tasks such as search and rescue [4], surveillance, mapping, and even structural inspections [5,6,7]. Given their great technological evolution, combined with increasingly efficient performance, greater robustness, lower costs, and the ability to carry various sensors, their use is increasingly sought after, especially in manually operated tasks, where the operator has the ability to survey large areas, collect crucial data, or even closely inspect structural anomalies in real time. However, executing these missions autonomously, especially on offshore wind farms (OWFs), is a different and more complex challenge due to their unpredictable weather, wind conditions, and limited visibility.
To overcome these challenges, this paper outlines the development of a novel LiDAR-based wind turbine inspection and modeling algorithm and the evolution of the offshore wind farm simulation framework, which is used for testing and training classical and machine learning-based algorithms. The most popular wind turbine inspection approaches using UAVs mainly depend on visual cameras to perform tower modeling, path planning, and autonomous crack detection. The idea is to develop an autonomous maneuver that detects, models, and follows the wind turbine shape without using visual perception. Combining this technique with high-end visual and thermographic cameras, it is possible to collect high-resolution data to perform damage detection techniques on wind blade profiles. This will serve as a baseline for deep learning and reinforcement learning algorithms for autonomous wind turbine inspections using UAVs.
This paper is an extended version of work published in [8] and is outlined as follows: Section 1.1 and Section 1.2 present the preliminary study of wind turbine operations and maintenance, especially on offshore sites, followed by an analysis of the current robotic technologies developed for wind turbine damage detection and autonomous inspection maneuvers, used mainly on UAVs. Section 2 presents the conceptual software architecture, describing the processes behind the LiDAR-based WT inspection and modeling. Section 3 describes the two stages used for testing and tuning the algorithm, showing and discussing the results obtained, and finally, Section 4 provides the conclusions and future research directions of this work.

1.1. Background

Onshore wind energy has been the primary renewable energy resource, being an excellent alternative in environmental terms to conventional electricity production based on fuels [9]. However, land facilities have difficulties in expansion and growth due to the visual and sound pollution caused by large wind turbines [10].
Offshore wind farms are now the principal focus of wind power research and development. With favorable indicators in almost every environmental index, combined with large areas and better wind conditions, for instance, higher wind speeds with lower turbulence effects and lower wind shear, offshore wind plants have become the future of wind energy [11]. However, these systems require more complex marine foundations, underwater power cables, and crew and transportation logistics, leading to a more expensive investment [10,12].
Ideally, offshore wind farms are installed on the continental shelf at distances of 10 km from the coast and water depths of around 10 m. These plants need to coexist with the existing shipping routes and oil and gas extraction and not interfere with the natural protected areas, leading to only 4% of that area being available for OWF exploration. These scenarios change for greater distances from the shore, and the potential area increases to 10% for distances between 30 and 50 km and 25% for even longer distances [11].
Independent of the type of wind farm facility, land or offshore, wind turbines operate under varying environmental conditions with dynamic and complex winds, compromising the life cycle reliability. Frequent and sudden failures resulting in reactive maintenance can be expensive, with associated downtime and loss of production. Despite onshore wind farm (WF) operations and maintenance (O&M) representing only 5% of their total investment, since these require fewer crew and transportation logistics, effective and reliable maintenance strategies are fundamental for offshore wind turbines (OWTs), where these operations constitute 23% of the investment cost [13,14]. For these reasons, maintenance activities are considered to be the most critical task for WTs, given their associated challenges, risks, and costs.
From a classical point of view, maintenance strategies are divided into two main classes: reactive and proactive maintenance [15].
Reactive maintenance, also known as corrective maintenance, is a failure-based strategy where maintenance occurs when failures have already happened. According to the literature, this strategy is effective for systems with negligible downtime loss, reducing unnecessary maintenance visits and inspections. Nonetheless, this technique is not suitable for every case where future wind farms, mainly offshore facilities, are likely to be constructed in more remote areas [3]. In addition, with restricted access to offshore sites due to unpredictable marine conditions, combined with systems with lower reliability and high failure rates [16], corrective techniques may cost more than expected downtime.
Proactive maintenance is a more complex approach, using different techniques to reduce downtime and prevent future failures by performing inspection and replacement operations before those events occur. There are three different categories: preventive maintenance (PM), condition-based maintenance (CBM), and predictive maintenance (PdM).
Preventive maintenance is a time-based strategy, meaning that activities are scheduled over a specific period based on pre-determined power generation levels, age, and the life cycle of the components. PM aims to optimize the production plan by reducing unplanned maintenance and excessive spare stock, combined with adequately using service vessels [17,18]. On the other hand, planning too many unnecessary inspection and maintenance operations causes an increase in the levelized cost of energy (LCOE). To improve WT reliability and maintenance costs, Nandipati et al. [19] proposed a strategy that uses field data and reformulates it into a single cost-effective maintenance plan for various components of the wind turbine.
Condition-based and predictive maintenance are similar strategies that can be defined as sensor-based techniques. While CBM combines sensor measurements, such as temperature, vibration, noise, and corrosion, with any online or offline health diagnosis system to determine whether any indicator reaches a specific limit and triggers the maintenance [20,21], PdM involves more advanced techniques in which, based on sensor data and event history, parametric analyses are conducted to predict failure events and intervene before they happen [22,23,24]. The main disadvantages of sensor-based approaches are the extensive data generated during the turbine lifespan, which raise problems of filtering, analysis, and storage, and the fact that sensors increase the complexity and cost of the systems, introducing new issues such as sensor failures.
Another aspect that needs to be understood is what to expect and search for during an inspection operation. Wind turbine blades are one of the most crucial components to maintain [25]. As these are the principal part used to transform wind speed into rotational motion, they are very susceptible to structural damage. These defects reduce the lifespan and power generation efficiency of WTs and increase safety risks and maintenance costs. However, maintaining the integrity of the blades alone does not guarantee the proper functioning of the WT, since other components, like the ones illustrated in Figure 1, can present defects caused by fatigue or environmental exposure, as presented in the following list [26]:
  • Blades: cracks, fiber rupture, edge erosion, delamination, aeroelastic deflection, and others [28,29];
  • Gearbox: corrosion, dirt, wear, gear tooth damage [30];
  • Tower: cracks, vibrations, corrosion, deformation, foundation weakness;
  • Nacelle: corrosion, cracks;
  • Generator: corrosion, vibration, rotor asymmetries, overspeed and overheating defects;
  • Shaft: shaft imbalance, misalignment, or severe cracks;
  • Yaw and pitch bearings: corrosion, dirt, wear, and spalling.
Regarding offshore wind turbine structures, monitoring the health of their foundations and mooring systems is extremely important. Since these are installed in the ocean, they present high levels of corrosion and scour, leading to foundation cracks, deformation, power cable weaknesses, or even rupture of the mooring ropes. An example was presented in 2023 [31], where the authors proposed a novel proactive strategy to diagnose the health of the cable systems. By analyzing the transient signals presented on the power cables, it is possible to detect and localize the partial discharge sources.

Damage Detection Techniques

Maintenance and inspection operations become significant factors, especially on OWTs, where the degradation process is more accentuated due to seawater. Various damage detection technologies have been developed and improved, ensuring accurate detection of minor damage before it causes catastrophic incidents.
The most popular non-destructive testing (NDT) techniques [32] include vibration-based methods [33], ultrasound [34], strain measurements [35], acoustic emission [36], thermography [37], optical imaging [38], X-ray [39], or even hyperspectral imaging [40]. Performing inspection operations with these techniques, in most cases, requires skilled workers with access to the WT to install and operate a device with a set of sensors. However, with the evolution of vehicle capabilities, combined with better and more compact sensors, some of these techniques can be addressed using autonomous platforms, and these are further discussed in more detail below.
Optical imaging is the most common and affordable technique for surface anomaly inspection on structures like buildings, bridges, power line components, and wind turbines (Figure 2). A set of cameras mounted on a UAV can inspect the structures closely and collect data to detect small cracks and defects in the different materials [41].
The first vision-based damage detection approaches used different heuristic algorithms that were refined and hand-crafted to detect cracks on concrete, steel, asphalt, and fiber composites [41,43]. The most common approaches combine image binarization [44], morphological segmentation [45,46], image filters [47], or even feature extractors [48,49] to successfully identify cracks in the image. An example is presented in [50], where the authors developed an optical wind turbine surface inspection method that automatically detects cracks using high-resolution images and a UAV. The authors used the Viola–Jones framework to derive Haar-like features with cascading classifiers to detect surface anomalies. Recently, in [51], the authors proposed a new two-stage method that combines the previous work with Jaya K-means to segment the contour of the crack with high accuracy.
Classical techniques have shown outstanding performance in detecting cracks on different types of surfaces. They can also be used to detect other defects, like steel corrosion [52,53,54,55], deformations [56], and ice accumulation [57,58,59]. However, their application in an automated structural inspection environment is limited. Since those methods need to be tuned to a specific target, depending on the target’s appearance, their performance becomes unreliable in a real-world scenario where the environmental and surrounding conditions vary extensively.
Given the recent success of deep learning (DL) techniques for computer vision applications, such as image classification [60], autonomous driving [61], and medical imaging [62], some authors have begun to focus their research work on developing machine learning and DL-based solutions for damage assessment [63]. Different approaches have been developed, and it is possible to divide them into image classification, object detection, and semantic segmentation.
Classification algorithms rely on trained convolutional neural networks (CNNs) to identify whether there is damage in the image. By evaluating the entire frame, these methods can mark the image using binary classification (e.g., fault or no fault detected) or with a multi-label category to specify the type of damage (e.g., corrosion, delamination, cracks). An example is Reddy et al. [38], who proposed a CNN-based classification approach to conduct binary and multi-label damage classification of WT blades, including the labels tip open, tip erosion, side erosion, and cracks. This method achieved 95% accuracy on the binary classification and 91% on the multi-label methodology. With a similar approach, Zhao et al. [64] used a CNN model, AlexNet, to identify surface damage on wind turbine blades using multi-label classification, averaging 99% accuracy. CNN-based classification approaches have been widely used for damage assessment in various materials and applications, combining different and adapted CNN architectures [65,66,67,68].
While some authors focus on fine-tuning CNN architectures to improve detection accuracy, others worry about finding the exact location of the cracks on the image and distinguishing multiple areas of damage on a single picture. Since classification methods are limited to identifying only one object in the entire image, a new technique was developed.
Instead of classifying the entire image, object detection algorithms use different techniques, such as sliding windows [69], region search [70,71], and even region learning [72,73], to classify relevant features in smaller parts and obtain the bounding box that represents the different objects. These strategies not only allow for object localization but also allow for multiple classifications in the same image frame. Several areas have used the Faster R-CNN and YOLO algorithms for real-time object detection applications, such as human detection [74], traffic control [75], medicine [76], and even concrete damage detection [77,78].
Object detection-based methods only aim to fit a bounding box around the regions of interest, leaving the quantification problem to the user. To obtain more information about the object, intelligent algorithms have been developed for quantification, known as semantic segmentation algorithms. In general, these techniques classify each pixel in an image into one of a fixed number of classes, providing enough information to obtain the precise location and shape of the object.
Both fully convolutional networks (FCNs) and U-net methods are widely used in semantic segmentation for diverse applications such as scene parsing [79], biomedical image segmentation [80], and crack quantization [81], among others. Many approaches aim for semantic segmentation for automatic crack detection on concrete [82]. Zhenqing Liu et al. [83] developed a concrete crack semantic segmentation algorithm using a modified version of U-Net. For WT applications, C. Zhang et al. [84] proposed a deep neural network for detecting and segmenting WT blade faults based on images taken by a UAV. The Mask-MRNet architecture uses an optimized Mask R-CNN, the Mask R-CNN-512, stacked with MRNet, to detect, segment, and classify the blade fault simultaneously.
Semantic segmentation provides detailed scene understanding, making it valuable for real-time applications such as autonomous driving and robotics. However, it faces challenges like high computational demands, lighting conditions, and large annotated datasets, which can limit its effectiveness in resource-constrained environments.
Structural health monitoring (SHM) is a very important procedure not only for wind turbines but also for other structures. For this reason, various DL-based semantic segmentation techniques have been designed to achieve better accuracy and processing speed. However, those algorithms are not limited to crack detection; they are also highly efficient in detecting defects like rust [85,86] and ice accumulation on wind turbines [87].
Thermography is another well-known NDT used to detect flaws in inspection procedures on different structures, including wind turbines (Figure 3). As a portable and rapid way to inspect large surfaces with complex geometry, this technique relies on devices to measure the surface temperature distribution of the structure. Given the thermal characteristics of the object, a uniform material is expected to produce smooth radiations along its surface. By searching for temperature peaks along the surface, these methods can easily detect geometric anomalies and material discontinuities in a real-time scenario.
There are two main variants of infrared (IR) thermographic NDT techniques: passive and active [88]. In passive thermography, thermal variations are detected using an IR camera without an external thermal source, measuring the object’s radiation emission caused by solar heating. This is the most common method used to quickly inspect structures that are not in thermal equilibrium with the environment, with flaws appearing at higher or lower temperatures than the background. However, it is not applicable for fault detection at an early stage due to the slow temperature development [89].
On the other hand, active methodologies use external excitation sources, like optical radiation and mechanical ultrasonic waves, to generate heat in the structure. Analyzing its thermal distribution using an IR camera makes it possible to determine and quantify any discontinuities in the object. Active techniques are usually better for detecting deeper imperfections in the materials. Nonetheless, they require more computational time and are less commonly used in outdoor scenarios [90].
Some authors use stationary thermographic systems to monitor WT blade conditions, allowing for defect detection on both the surface and internal structures to be performed. Doroshtnasir et al. [91] employed a method combining WTB photographic images with thermogram difference images to detect subsurface faults from the ground, aircraft, or ships. Other authors use continuous-wave line laser thermography techniques to monitor rotating WTBs [92] or to detect internal delamination using the thermal response of the structure [93]. Sanati et al. [90] present two thermography analyses for wind turbine inspection, including passive and active pulsed and step heating and cooling thermography.
Closer inspections can also be performed using aerial systems, whether manually operated, teleoperated, or autonomously controlled. In 2015, Galleguillos et al. [94] presented a thermographic analysis for inspecting WTBs using a UAV with a passive IR camera. In their field tests, the authors could detect induced defects like delamination, impacts, and artificial cracks. Since then, several approaches have been developed combining small autonomous aerial vehicles with thermographic cameras to inspect different structures like buildings [95], bridges [96], and photovoltaic panels.
More recent studies have developed autonomous strategies to detect and quantify defects by combining DL techniques with thermography [97].
Yang et al. [98] presented an improved Fast R-CNN to learn temperature gradients and detect cracks of different depths on a steel plate, while F. Liu et al. proposed a robust crack detection method that combines gray-level visual information with infrared thermography to train and evaluate multiple CNN segmentation models [99,100].
Thermography enables damage detection when there is no visible light, but real application measurements are impacted by ambient temperature, surface reflection and roughness, light sources, and other factors, which are crucial for determining the right time to operate and acquire the thermography data.

1.2. Related Work

The most common wind turbine inspection approaches used to rely on telescopic lenses, lifts or rope climbing, or even manual operation with the support of a helicopter. Given the technological evolution of UAVs, these methods have started to be replaced by smaller and more agile vehicles capable of carrying perception sensors for data acquisition.
In the first approaches, these platforms started to carry out missions remotely controlled by a ground pilot team equipped with a visual feedback system and a ground station control unit. Generally, the pilot is responsible for the manual control of the flight, while the assistant controls the data acquisition, mission duration, and battery status. This strategy presented many advantages, like low-cost technologies, fast real-time data, and a precise navigation system. However, teleoperated missions depend highly on the perception and control of the pilot, who can be affected by visual conditions and fatigue [101]. For these reasons, new methods rely on semi-autonomous solutions to maintain the vehicles’ relative distance/velocity or even fully autonomous maneuvers to perform wind turbine inspections without human interaction.
In 2015, Stokkeland et al. [102] presented an autonomous vision-based solution to support the inspection maneuver on wind turbine blades. One of the main concerns when inspecting high structures like wind turbines is the vehicle’s positioning, alignment, and safety distance. With these limitations in mind, the authors developed a system that estimates the relative position and orientation of the wind turbine and automatically centers the UAV with the wind turbine hub to perform a predefined motion pattern. For the detection of the wind turbine, this method combines the Canny edge detector with the Hough line transform, detecting the edge lines of the structure and recognizing the wind turbine tower, hub, and blades. With the objects found in the image, the next step consists of estimating the relative distance of the wind turbine, tower yaw angle, and blade orientation and continuously updating the Kalman filter features to perform the maneuver operation.
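A minimal OpenCV sketch of this edge-and-line detection idea is shown below; the function calls follow the standard OpenCV API, but the thresholds and filtering parameters are illustrative assumptions rather than the values used in the cited work.

```python
import cv2
import numpy as np

def detect_turbine_lines(image_bgr):
    """Illustrative sketch: extract straight edges that may belong to the
    tower, hub, and blades using Canny edges and a probabilistic Hough
    transform. Thresholds are placeholders, not the cited work's values."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)
    edges = cv2.Canny(blurred, 50, 150)
    # Each detected segment is returned as (x1, y1, x2, y2).
    lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=80,
                            minLineLength=60, maxLineGap=10)
    return [] if lines is None else [tuple(l[0]) for l in lines]
```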
In a different and more straightforward method, Schäfer et al. [103] proposed a conceptual path-planning technique for wind turbine 3D mapping based on its position and orientation. By predefining the geometry of the wind turbine, which includes its tower position, nacelle, rotor angle, number of blades, and blade diameter, the flight path is generated automatically, covering the front and back sides of the wind turbine blades. In a parallel task, the algorithm uses two-dimensional (2D) point cloud data to simultaneously map the wind turbine and avoid collisions with the surroundings. This method was developed and tested in a simulated scenario and highly depends on the correct setting of the geometric parameters, which is not always reliable in an offshore environment.
At present, the most common wind turbine inspection approaches rely on visual cameras to undertake tasks such as tower modeling, path planning, and autonomous crack detection. The majority of these new techniques employ neural networks for the segmentation of wind turbine components and extraction of the model. However, a significant challenge persists in ensuring the consistency of visual-based algorithms across different climatic conditions. Visual data are particularly susceptible to external factors such as sunlight, position, and dust, which can impair the reliability of the inspection results.
In [104], vision-based blade tip detection and positioning is proposed by combining the Mask R-CNN with shape constraints techniques. First, the Mask R-CNN is used to identify and localize the tower, hub, and blades in the image. Since vertical WT blades have visual similarities to the wind turbine tower, a shape constraint technique is applied to correct the output classifications. Before extracting the tip coordinates, the last consideration is to remove the background wind turbine objects that could be in the same frame. After this process, the coordinates are extracted, and the 3D model is obtained by solving a perspective-n-point (PnP) problem. Although it performs well, this method relies only on images that capture the full view of the wind turbine.
Moolan-Feroze et al. [105] proposed a similar approach for simultaneous drone localization and WT model fitting. In their work, the WT was modeled with a skeletal approximation composed of five line segments (tower, three blades, and nacelle), integrating the model fitting process into a non-linear least squares optimizer. Given the image, the algorithm predicts the skeletal model based on its point and line models. With the estimated models, the final step refines them using a CNN that finds correspondences between the skeletal model and 2D image points, while a pose graph coupling Global Positioning System (GPS) and Inertial Measurement Unit (IMU) data is used for navigation optimization.
Regarding the inspection process and path planning, Gu et al. [106] presented an autonomous WT inspection method using a quad-rotor. By assuming stationary wind turbines with previous knowledge of its model (tower and blade length and fixed blade pitch), the authors performed a five-step maneuver to inspect the front and back sides of the WT. Similar to the previous ones, this method uses the visual information with YOLOv3 to detect and estimate the WT blade’s rotation and adjust its trajectory.
One of the biggest problems of wind turbines is WT blade faults. For that reason, and without concern for the wind turbine model, Car et al. [107] proposed a LiDAR-based method to perform semi-autonomous inspection, focusing on movement consistency and the quality of the image data. With the vehicle manually positioned relative to one of the blades, the UAV collects LiDAR and visual data, maintaining a relative position to the inspected blade. The algorithm first performs a plane approximation to obtain its centroid. Using the relative distance to the inferred plane and its normal, the UAV can maintain a constant distance and remain headed towards the plane while moving parallel to it.
In a previous work [5], a LiDAR-based inspection procedure for UAV wind blade inspection and field tests was performed on an inland wind turbine. Based on a fixed-Y WT blade configuration and 2D LiDAR, the developed algorithm detects the vertical wind turbine blade. It performs a frontal and sideways inspection without any human intervention. The main drawbacks were misalignment with the tower and the limitation of inspecting only one blade at a time.
Performing inspection tasks on large structures using only one aerial vehicle can be time-consuming, since the average UAV endurance is around 20 min in a stationary environment. Although not directly related to the theme of this paper, it is worth highlighting works such as [108,109,110], which have been developing autonomous methods for structure inspections, ensuring multi-vehicle coordination and total coverage. These techniques may be interesting to address in future path planning problems.

Discussion

Developing autonomous WT inspection using UAVs involves several different challenges, and the current literature addresses this problem in two scenarios.
First, starting any inspection at close range requires knowledge of the wind turbine locations and configuration. At this stage, a wind turbine modeling problem is formulated, and the system must be able to detect and model the structure before inspection.
In cases where the WT model was already extracted or given by the user, the inspection operation transforms into a path planning problem, and the vehicle needs to plan the trajectory around the wind turbine, trying to extract as much data as possible to help to detect fault indicators.
Analyzing the modeling approaches, the authors presented different techniques, all relying on visual information to extract the wind turbine configuration. Stokkeland et al. [102] used a classical technique to extract edges and estimate the state of the turbine while keeping track of the drone’s position at the approximation stage, while Guo et al. [104], Gu et al. [106], and Moolan-Feroze et al. [105] preferred to use neural networks to detect the different WT components or to directly predict the model during the inspection process. These methods only address the modeling process, considering a stationary wind turbine for the inspection path planning process. In addition, for these methods to work, the components of the wind turbine and their boundaries must be properly framed in the image. For a scenario where the inspection is carried out as closely as possible, for maximum image detail, these algorithms are no longer reliable for identifying the different WT components, since the images are all similar at close range.
Regarding trajectory planning, the most common approaches were designed for on-land wind turbine operations, which means considering static structures. Based on the pose and configuration of the WT, either predefined by the user [103] or estimated through the visual information [102,104,105], these methods implemented different techniques to automatically extract the UAV path for covering the front and back sides of the wind turbine blades, concerning, at the same time, the optimal orientation for the image frame.
Relying only on visual perception to extract a fixed WT model and autonomously navigate for inspection in a dynamic environment like in the open sea can affect the performance of the operation in some situations, leading to the possible risk of crashing the vehicle.
Since LiDAR is a sensor particularly well suited to performing high-accuracy 3D structural assessments, measuring deformations, and conducting inspections in circumstances where visibility is restricted, some authors have started to include LiDAR sensors in their implementations. In [103,106], the proposed systems run a parallel workflow that uses 2D LiDAR data for collision avoidance and WT distance control. Moreover, in [105], the authors present some of the limitations of their method, mentioning the future need to include a LiDAR sensor in their proposed solution.
Knowing the advantages of using LiDAR when extracting three-dimensional features of the vehicle’s surroundings, Car et al. [107] and Dias et al. [5] proposed two different LiDAR-based solutions for autonomous WT inspection maneuvers. The first presented a semi-autonomous solution to inspect the WT blades. Since the system does not estimate the wind turbine model, their algorithm relies on a manual operator to guide the vehicle to each blade. In the second work, the inspection task is fully autonomous, but the algorithm relies on a specific Y-blade configuration. This reduces versatility, since the WT must be stopped in that particular position.
Given this study, this paper’s main contribution consists of a novel and more realistic UAV framework, presenting a fully autonomous wind turbine inspection strategy with an adaptive behavior for offshore applications that is capable of both modeling and planning close-range inspection using only LiDAR data.

2. Materials and Methods

As mentioned above, the first contact with WT inspections started in 2015, when an autonomous maneuver for three-sided close inspection of the wind turbine blades was proposed [5]. At that stage, a vertical operation based on a 2D LiDAR was proposed to map and guide the UAV around the WTB. This approach showed some excellent field results; however, it only worked under specific conditions (precise tower alignment and a Y blade configuration) and had clear limitations: the lack of volume information for obstacle avoidance and the fact that only the vertical blade was inspected.
Since these structures are in low-visibility conditions for the pilot, it is crucial to develop an autonomous inspection procedure that allows close inspection at a safe distance without affecting the pilot’s perspective. Several improvements were made to the previous single-line LiDAR algorithm, contributing to a more reliable movement with a smooth trajectory and better data collection.
To perform an autonomous inspection operation on wind turbines, the problem was split into two different pipelines. Based on the last method, the UAV must be able to detect and estimate the wind turbine pose and perform autonomous flight near the blades in a smooth and safe maneuver without pilot feedback. The first pipeline must control the UAV movement, detect the wind turbine model and its blades, and follow each one using only the 3D LiDAR information (Figure 4 in light orange).
The second part of the inspection detects damage to the WTBs. For damage detection, the vehicle is equipped with a visual stereo system with one optical and one thermographic camera, allowing for the identification of surface and near-surface anomalies on the WT blades. Therefore, the second pipeline is responsible for the image acquisition of both cameras (visual and thermographic), real-time detection of damage caused by fatigue or environmental exposure (cracks, fiber rupture, corrosion, edge erosion), and post-processing for more accurate results (Figure 4 in light blue). This block is an ongoing research work that applies morphological segmentation techniques with different deep learning approaches supported by edge computing.
Additionally, both pipelines rely on the vehicle’s navigation. This is handled by an Extended Kalman Filter (EKF) that fuses the inertial measurement unit (IMU) with RTK GNSS, providing high-accuracy pose estimation. Combining the UAV position and attitude with the LiDAR and camera extrinsics, the TF module converts the sensor data to the local (position and attitude relative to the takeoff state) and global (world) coordinate frames. To complete this framework, two additional blocks were included: the mission control block, responsible for the state machine, and the data register, which georeferences all the data for post-processing.
This paper addresses the development of a LiDAR-based autonomous maneuver for wind turbine blade inspection. Following the same principles of the previous implementation [5], the new strategy, depicted in Figure 5, depends mainly on three blocks: perception, modeling, and inspection blocks. It was developed under the following criteria:
  • LiDAR data as the primary sensor to detect and follow the wind turbine blade;
  • Unknown wind turbine configuration;
  • The wind turbine’s position is partially known, assuming a maximum error of up to 30 m in distance;
  • The initial nacelle orientation guess is given within a limit of ±45° between the UAV and the wind turbine. This prevents back-side WT inspections;
  • Predefined for wind turbines with three blades (most common WTs).

2.1. Perception Block

Before extracting the model from the LiDAR data, the algorithm must identify the candidate points that may represent the main WT components, such as the tower, rotor, blades, and nacelle. The first step involves processing and organizing the 3D data points into small groups for posterior object identification.
To reduce the amount of unnecessary information and increase the speed and reliability of the modeling algorithm, the point cloud data pass through a filtering process to rearrange the points by their line and column position, remove ground reflections, and ignore points with a range greater than 50 m. By combining these preprocessed points (${}^{L}p_{l}$), the sensor extrinsics ($p_{l}^{b}$), and the estimated position of the vehicle ($p_{b}^{w}$), the surrounding map is extracted by transforming each data sample into the world frame (${}^{L}p_{w}$), serving as a baseline for identifying the different objects.
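The following is a minimal numpy sketch of this preprocessing step, assuming homogeneous 4x4 transforms for the LiDAR-to-body extrinsics and the body-to-world pose; variable names and structure are illustrative, not the actual implementation.

```python
import numpy as np

def lidar_to_world(points_lidar, T_lidar_to_body, T_body_to_world,
                   max_range=50.0):
    """Drop points beyond the 50 m range limit and transform the remaining
    samples into the world frame: p_w = T_body_to_world @ T_lidar_to_body @ p_l.
    points_lidar is an (N, 3) array; T_* are 4x4 homogeneous transforms."""
    ranges = np.linalg.norm(points_lidar, axis=1)
    kept = points_lidar[ranges <= max_range]
    homogeneous = np.hstack([kept, np.ones((kept.shape[0], 1))])
    world = (T_body_to_world @ T_lidar_to_body @ homogeneous.T).T
    return world[:, :3]
```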
Finally, for each point cloud frame, in the world references, multi-clustering techniques are applied, such as conditional Euclidean clustering, region-growing, DBSCAN, and normal segmentation, to extract the most relevant clusters, as shown in Figure 6. These techniques must be adjusted according to the LiDAR sensor and the distance between the UAV and the WT during the modeling stage.
In this work, the UAV is equipped with a Velodyne VLP-16 to calculate the model, maintaining a flight distance of 25 m from the structure until the model is fully estimated. For the initial estimations of tower position and height, the perception block uses only the Euclidean filter with a cluster tolerance of 0.95 and a minimum cluster size of 10 points. With these settings, the algorithm can obtain the tower parameters, guiding the UAV up to the WT rotor. Once the vehicle reaches the rotor, the perception block starts mixing the outputs of the Euclidean filter and DBSCAN to detect the rotor nose and then the WT blades. For this step, an epsilon value of 0.75 and a minimum sample of 50 points were set.
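A rough sketch of this two-stage clustering is given below using scikit-learn's DBSCAN; using DBSCAN with a small minimum sample count as a stand-in for Euclidean cluster extraction is an approximation, and the parameter values simply echo the ones reported above.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def extract_clusters(points_world, stage="tower"):
    """Two-stage clustering sketch. 'tower' approximates the Euclidean
    filter (tolerance 0.95, min cluster size 10) with DBSCAN; any other
    stage uses the reported DBSCAN settings (eps 0.75, 50 samples)."""
    if stage == "tower":
        labels = DBSCAN(eps=0.95, min_samples=10).fit_predict(points_world)
    else:
        labels = DBSCAN(eps=0.75, min_samples=50).fit_predict(points_world)
    clusters = [points_world[labels == k] for k in set(labels) if k != -1]
    # Largest clusters first; the modeling block matches them to WT parts.
    return sorted(clusters, key=len, reverse=True)
```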
Additionally, the information of the previously estimated model (${}^{w}\widehat{WT}_{n-1}$), collected in sample $n-1$, is merged with a region bounding box filter to improve the clustering search. Knowing the previous pose of the clusters and the UAV movement, and since WT movements present low-frequency changes (in other words, the moment of inertia is huge), cluster detection and identification are improved by looking near the known region.
Note that this pipeline works sequentially with the information from the modeling process and can be easily adapted to integrate different clustering techniques.

2.2. Modeling Block

As the name suggests, the modeling block matches the point cloud clusters with the corresponding components and finds the WT configuration to perform an efficient and safe autonomous inspection operation. The wind turbine model is defined by seven parameters: tower position, radius, height, rotor, and the three blades’ projected positions. As shown in Figure 7, instead of representing the exact location of the blades, these positions are projected to a parallel plane, ensuring safe distance and serving as initial starting points for close inspection.
With an initial guess of the position and orientation of the wind turbine, the first process is to find the bottom part of the tower to begin its pose and motion estimation. Once the UAV is inside the threshold radius, the modeling algorithm starts to fit each cluster into a cylindrical shape, evaluating its verticality (Figure 8a). The cluster with the smallest vertical error and best radius consistency is selected to extract the first model parameters (${}^{w}\widehat{WT}$): the tower pose (x, y, z, roll, and pitch) and the radius. At this stage, the algorithm initializes the filter, aligns with the cylinder center, and starts moving vertically to identify new WT components, maintaining, at the same time, a safe distance of 25 m from the tower. After initialization, the algorithm uses the previously known model to improve the cluster selection of the new data from the perception block.
While moving upwards, the algorithm searches for the tower height parameter by historically analyzing the size and shape of the corresponding cluster (Figure 8b). When the tower limit is reached, a visual representation of the structure is displayed, as shown in Figure 7, and the vehicle initiates the alignment process. The WT orientation can be estimated by detecting the nacelle or blade points. Since both are straight structures, the method projects the 3D point cloud onto the horizontal plane and begins to fit 2D lines that could represent these WT parts, as Figure 8c illustrates. Depending on the pose of the wind turbine, this process can lead to two perpendicular lines, one for the nacelle and the other for the WT blades. To select the final orientation, the algorithm uses the known tower position, calculates the yaw angle of the wind turbine, and finally aligns its position in front of the WT rotor.
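As an illustration of this alignment step, the sketch below projects a candidate nacelle/blade cluster onto the horizontal plane and extracts its dominant in-plane direction with an SVD-based line fit; the routine and its disambiguation rule are simplified assumptions, not the exact implementation.

```python
import numpy as np

def estimate_yaw(points_world):
    """Project the candidate nacelle/blade cluster onto the horizontal
    plane and fit its dominant 2D direction; the yaw follows from that
    direction's orientation."""
    xy = points_world[:, :2]
    centered = xy - xy.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    direction = vt[0]                       # dominant in-plane direction
    yaw = np.arctan2(direction[1], direction[0])
    # Nacelle and blade lines are roughly perpendicular; the final choice
    # between yaw and yaw + pi/2 uses the known tower position as reference.
    return yaw
```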
After the alignment process, the last step is to extract the safety positions to start the close inspection. In this position, the goal is to extract the rotor (cluster centered between the tower and UAV) and one of the blades’ center positions. The algorithm can estimate the rest of the WT parameters based on these points, as shown in Figure 8d. Note that after these procedures, the wind turbine model is fully defined and can be exported to reduce the operational time if the UAV requires battery replacement.

2.3. Inspection Block

After the modeling processes, the next stage consists of the UAV trajectory planning, considering a dynamic structure. Since offshore wind turbines have floating foundations, these structures can present slight angular and linear movements, each of which could lead to inspection failures. Given these challenges, the inspection block is responsible for analyzing the model data and controlling the vehicle through the entire inspection mission, maintaining it in a close but safe trajectory.
With knowledge of the wind turbine model, the final step is to closely inspect each blade and collect high-quality data with different sensors to identify possible failures. For this, and since the OWT is not static, the proposed algorithm combines the information of the estimated model with a dynamic motion strategy, similar to the one proposed in [107], to follow the WT blade edge.
From the extracted safe positions, and restricting the field of view of the LiDAR, the UAV starts searching for the closest blade surface, extracts the centroid of the point cloud data, approximates it with a tangential plane, and calculates its covariance eigenvectors, obtaining the heading adjustment and the blade direction vector for controlling the UAV ($v_b$) until the blade tip, as illustrated in Figure 9. The directional vector is added to the safety distance vector, and after normalization, the resulting vector is used to control the UAV during the close inspection. Note that the velocity is trimmed according to the UAV limitations.
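The sketch below illustrates this blade-following idea: a covariance (PCA) fit of the visible blade points yields the direction along the blade, a plane-normal correction keeps the safety distance, and the combined vector is normalized and scaled. Distances, speeds, and the correction rule are illustrative assumptions rather than the tuned controller.

```python
import numpy as np

def blade_velocity_command(blade_points, uav_position,
                           safe_distance=5.0, max_speed=1.0):
    """Fit a tangential plane to the visible blade surface, follow its
    dominant eigenvector, and correct towards/away from the plane so the
    UAV keeps the safety distance. Returns a trimmed velocity vector."""
    centroid = blade_points.mean(axis=0)
    cov = np.cov((blade_points - centroid).T)
    eigvals, eigvecs = np.linalg.eigh(cov)
    along_blade = eigvecs[:, np.argmax(eigvals)]   # direction along the blade
    normal = eigvecs[:, np.argmin(eigvals)]        # approximate plane normal
    signed_dist = float(np.dot(uav_position - centroid, normal))
    # Positive error means the UAV is farther than the safety distance.
    error = abs(signed_dist) - safe_distance
    correction = -error * np.sign(signed_dist) * normal
    velocity = along_blade + correction
    velocity /= np.linalg.norm(velocity) + 1e-9
    return max_speed * velocity                    # trimmed to UAV limits
```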

3. Results

Offshore wind farms are typically installed in remote areas with restricted access due to unpredictable marine conditions, safety protocols, and transportation logistics. These constraints make it fundamental to plan and prepare the real inspection operation and to use the available time efficiently to obtain reliable data. For these reasons, it is crucial to find a strategy to test these algorithms in a controlled and safe environment, minimizing the cost of logistics and possible vehicle damage.
The tests were divided into two stages. In the first one, a simulation environment was used to adopt and improve new approaches, perform preliminary evaluations, and enhance the knowledge of real-scenario operations. After several simulation tests on different configurations, the next step was gaining confidence in the method by testing it with a real UAV. For this, an augmented reality (AR) framework was implemented, utilizing the UAV in a real-world environment and a simulated offshore wind turbine on the scene.

3.1. Simulation Environment

In light of the inherent challenges associated with inspection operations for wind turbines, whether situated on land or at sea, the development of a simulation environment represents an essential piece of this research. In alignment with the prevailing state-of-the-art and the vehicles’ system architecture, outlined in reference [5], this wind farm framework must fulfill the following criteria.
In offshore scenarios, wind turbines are exposed to degradation from a multitude of factors, resulting in the deterioration of both above-sea and underwater components. Consequently, the new simulation environment must be capable of integrating both aerial and underwater scenarios. In addition to incorporating physics, customizable robots, and good computational efficiency, this framework must be compatible with real vehicles, facilitating the transition between simulation and real-world operations.
The simulation framework was developed based on the Gazebo simulator, integrating ROS middleware with multiple sensors, such as cameras, LiDAR, radar, and sonar, and various autonomous vehicles. Combining Gazebo simulation capabilities with the work performed in [111], where the authors included more realistic underwater scenarios, the new simulation framework was specially designed for aerial, surface, and underwater robots, providing a more realistic offshore wind farm with floating and wind effects, including hydrodynamic and hydrostatic forces. This structure allows for the development of new strategies and the implementation of learning techniques to perform intelligent inspections on offshore wind farms under similar environmental conditions. The current simulation design, shown in Figure 10, comprises three wind turbines distributed in different directions and orientations (for the nacelle and blades). However, this configuration can be adapted to the real wind farm, making the approach invariant to the state of the wind turbine.
This framework includes a full-size REpower 5M offshore structure on a monopile foundation for the wind turbines. The model presents five degrees of freedom to adjust the blade pitch angle, nacelle orientation, and rotor position/speed, and it can be easily controlled through a ROS publisher. To increase the realism of an offshore scenario, each WT can be defined as fixed, floating, or even mimicking the movement of an actual wind turbine, using the motion data from a real structure. Additionally, since the model used is composed of independent elements (three blades, rotor, nacelle, and tower), it is also possible to modify each 3D part to exhibit fatigue signs, making it more suitable and reliable for performing the first experimental evaluations (Figure 11).
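As an example of how such a turbine model could be commanded over ROS, the snippet below publishes a nacelle orientation, rotor speed, and blade pitch; the topic names, message types, and node structure are assumptions for illustration and do not reflect the framework's actual interface.

```python
#!/usr/bin/env python
import rospy
from std_msgs.msg import Float64

def set_turbine_state(nacelle_yaw_rad, rotor_speed_rad_s, blade_pitch_rad):
    """Publish a desired wind turbine state; topic names are hypothetical."""
    yaw_pub = rospy.Publisher("/wind_turbine_1/nacelle_yaw", Float64,
                              queue_size=1, latch=True)
    speed_pub = rospy.Publisher("/wind_turbine_1/rotor_speed", Float64,
                                queue_size=1, latch=True)
    pitch_pub = rospy.Publisher("/wind_turbine_1/blade_pitch", Float64,
                                queue_size=1, latch=True)
    rospy.sleep(0.5)  # give the latched publishers time to connect
    yaw_pub.publish(Float64(nacelle_yaw_rad))
    speed_pub.publish(Float64(rotor_speed_rad_s))
    pitch_pub.publish(Float64(blade_pitch_rad))

if __name__ == "__main__":
    rospy.init_node("wt_state_setter")
    set_turbine_state(nacelle_yaw_rad=0.3, rotor_speed_rad_s=0.0,
                      blade_pitch_rad=0.1)
```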
For the UAV, instead of modeling the real vehicle, and since this method only depends on the payload sensors, the PX4 Iris quad-rotor was redesigned to integrate a 3D LiDAR and a visual camera, as illustrated in Figure 12. These additional sensors were configured and placed precisely like in the real UAV [5].

Discussion

The first experimental tests were carried out in the simulation environment presented before to validate the implementation and identify the limitations. This framework was used to test and tune the algorithm’s final parameters and gain confidence to perform inspection operations in a real environment.
In the first stage, the simulation scenario consisted of an offshore wind farm with three static wind turbines, one boat, and the UAV, as shown in Figure 10. Each wind turbine presents a slightly different challenge.
For the first one, a simpler configuration was used, where the WT was aligned with the UAV and the blades were positioned in a horizontal Y shape, promoting optimal drone navigation without occlusions or yaw adjustments. This scenario served as a baseline for validating and tuning the algorithm. For testing the yaw alignment step, the second WT was remodeled with the same blade configuration but with a different angle for the nacelle, promoting the alignment maneuver.
Lastly, to evaluate the occlusion limitations, the third wind turbine was configured in a Y shape. With one of its blades in a vertical position, the tower point cloud cluster gradually disappears as the drone follows the tower, leading to errors in height estimation.
Figure 13 illustrates the 3D point cloud mapping obtained after the autonomous inspection procedure and the respective wind turbine model estimation. This allows for validating the concept of this methodology and identifying the main flaws of LiDAR-based approaches. After the validation of the algorithm, the second stage was to identify the metrics’ accuracy, find the limits, and obtain the success rate, as shown in Table 1. For these tests, multiple nacelle orientations (−40° to 40°) were combined with various blade positions (0° to 90°) to calculate the error of the estimated parameters, as illustrated in Figure 14.
Based on the results in Table 1, the modeling algorithm showed good versatility for different configurations, and the UAV was able to complete the autonomous inspection independently of the nacelle and blade configuration, even when the model presented higher angular errors. However, it is important to analyze and describe the minor differences highlighted in the results table.
Starting with the tower position estimation, since this is the most extended structure with an almost perfect cylinder shape (dependent on the wind turbine model), it is evident that this parameter is independent of the yaw orientation and blade position. It is expected that errors in this parameter are almost the same around the different configurations, as reflected in the results. For the tower height, despite the results showing invariance to the blade and nacelle configuration, this parameter is highly dependent on the occlusion of the blades. For this reason, two-stage height and yaw estimation was implemented, improving both occlusions and estimation.
Regarding yaw estimation, the typical yaw error was approximately 5 degrees. However, it is important to emphasize the results highlighted in orange. Since this method relies on line approximation from the blade and nacelle points, in most situations, the yaw estimation is calculated based on the most visible WT component, the blade points, which are not perpendicular to the nacelle orientation (Figure 14b). However, in specific configurations, where the UAV’s relative pose allows for no nacelle occlusion, highlighted in orange, the alignment process is performed mainly by the nacelle points, obtaining better yaw prediction.
For the blade position estimation, this error generally increases with the yaw angle, so even for the Y and inverted Y configuration highlighted in yellow, the error is higher than expected. This occurs because the blade clusters are shorter and smaller, leading to higher angular errors.
One of the main limitations of the algorithm is the occlusion of the WT tower, since the method is highly dependent on detecting first the WT tower to find the rotor and blades. This limitation depends on many factors, like the relative pose between the UAV and the WT, and even the blade’s positions. However, the most critical scenario is the Y blade configuration with a 0-degree nacelle orientation, where the vertical blade occludes a large area of the WT tower; the scenario highlighted in gray.
This limitation was overcome with the two-stage height and yaw estimation. After the first height and yaw estimation, the UAV double-checks the height parameter, and in case more WT tower points are detected, the algorithm repeats the height and yaw estimation one more time. This only happens in cases with perfect alignment. However, this limitation could represent a problem for WTs with conical towers, where the base of the blade is larger than the tower radius at the top.
The last tests introduced angular (roll and pitch) and vertical motion to the wind turbine, simulating the oscillation of a floating structure. In the real world, marine conditions are classified based on the Beaufort scale, which correlates the wind speed and the wave height to determine the safety of ocean operations. For towing and deploying other vehicles into the water, Beaufort number 4 is the recommended condition, which represents winds between 5.5 and 7.9 m/s, waves of 1 to 2 m in height, and periods between 4 and 5 s [112].
Since there are no data correlating the Beaufort scale and OWT oscillations, the worst scenario was assumed, where the WT was represented by a light, floating structure that follows the ocean movements. Therefore, based on these, the tests were divided into three difficulties. The first level, “slow”, tries to mimic optimal offshore conditions with low-frequency oscillations and amplitudes, as shown in Equation (1). At the “medium” level, the amplitudes were increased by 1.5 times, maintaining the same frequencies, while in the final stage, both amplitude and frequencies were multiplied by 1.5 relative to the original “slow” level. The obtained results are detailed in Table 2.
$$z = \sin(2\pi \times 0.15\, t)\ \mathrm{m}, \qquad \mathit{roll} = 2\sin(2\pi \times 0.2\, t), \qquad \mathit{pitch} = \sin(2\pi \times 0.25\, t) \qquad (1)$$
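A small helper reproducing these motion profiles, with the amplitude/frequency scaling described below for the "medium" and "high" levels, could look like the following sketch; the difficulty names mirror the text, and everything else is illustrative.

```python
import numpy as np

def turbine_oscillation(t, level="slow"):
    """Floating-WT motion of Equation (1): 'medium' scales amplitudes by
    1.5, 'high' scales both amplitudes and frequencies by 1.5."""
    amp = 1.5 if level in ("medium", "high") else 1.0
    freq = 1.5 if level == "high" else 1.0
    z = amp * np.sin(2 * np.pi * 0.15 * freq * t)            # heave [m]
    roll = amp * 2.0 * np.sin(2 * np.pi * 0.20 * freq * t)   # roll oscillation
    pitch = amp * np.sin(2 * np.pi * 0.25 * freq * t)        # pitch oscillation
    return z, roll, pitch
```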
Based on these results, the “slow” level shows that the algorithm can follow the WT blades even with motion, maintaining a safe distance at the same time. At the next level, “medium”, increasing the amplitude increases the motion variation, which in some situations leads to wrong estimations (third run) or even makes the UAV chase the WT back and forth, reaching the maximum allowed time to estimate the WT model, shown in orange. Additionally, in the third run, the method showed again that the model does not need to be point accurate to perform the autonomous inspection.
In the “medium” and “high” stages, the algorithm shows some limitations. Increasing the amplitude corresponds to a wind turbine with greater oscillations, so the 3D motions are larger and the variation in the measured clusters increases. In some situations (fifth run), while maintaining a safe distance, the algorithm starts losing track of the WT components and the model estimation fails. In the other experiments (first to fourth runs), even with a high yaw estimation error, the method completed the inspection successfully, showing once more that the model does not need to be point accurate to perform the autonomous inspection.
Increasing both the amplitude and the frequency proved that, at a high motion rate (the “high” stage), the UAV cannot reliably perform the inspection. Since the WT moves further and faster, the vehicle was able to finish the inspection task in only 20% of the attempts. To provide safe and smooth maneuvering, the vehicle’s maximum velocities and accelerations are restricted; combined with a faster-moving WT, the drone cannot make progress, staying in a loop of keeping its distance while following the blade edge as it moves back and forth. In some situations the inspection mission fails due to a bad estimation, but most of the time the drone becomes blocked while moving behind the WT. Even with a correct model prediction, the UAV could not progress past the first WT blade, spending 90% of the time maintaining the safe distance from the blade tip.
Note that these two experiments correspond to motion variations of 10 to 15 m at the WT rotor, which represents extreme maritime conditions.

3.2. Mixed Environment

After several simulation tests on different configurations, the next step was gaining confidence in the method and testing it on a real UAV.
To avoid risking valuable sensors and the logistic complexity of a land or offshore inspection operation, we developed an augmented reality framework, presented in Figure 15, that flies the STORK II UAV in a safe environment inside a simulated OWF scenario.
For the virtual part, we used an empty Gazebo world with a single wind turbine and a PX4 Iris model equipped only with a LiDAR. The UAV model is used only for visual and collision simulation, mimicking the real UAV motion, while the LiDAR is responsible for detecting the virtual WT and relaying these data to the real vehicle.
In STORK II, the autopilot performs the low-level control and navigation estimation, fusing the IMU data with RTK GPS, while the onboard CPU performs the high-level control, including the wind turbine inspection.
This configuration enables the direct conversion of the real-world vehicle’s movement into the simulation environment, which results in new point cloud data of the virtual wind turbine. These data are then collected and processed by the STORK II CPU, which runs the inspection pipeline and outputs velocity commands to autonomously control the vehicle in a virtual inspection operation.
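The bridge between the real vehicle and the virtual scene can be illustrated with a short ROS 1 style sketch. The topic names, the Gazebo model name, and the use of MAVROS are assumptions made for illustration; they are not confirmed by the text.

```python
#!/usr/bin/env python
# Mixed-reality bridge sketch: mirror the real UAV pose into Gazebo and feed
# the simulated LiDAR back to the onboard inspection pipeline (assumed topics).
import rospy
from geometry_msgs.msg import PoseStamped
from sensor_msgs.msg import PointCloud2
from gazebo_msgs.msg import ModelState

def on_real_pose(msg):
    # Copy the RTK/IMU-fused pose of the real vehicle onto the virtual UAV model.
    state = ModelState()
    state.model_name = "iris_lidar"              # assumed Gazebo model name
    state.pose = msg.pose
    state.reference_frame = "world"
    model_pub.publish(state)

def on_virtual_cloud(msg):
    # Forward the virtual LiDAR scan of the simulated WT to the inspection pipeline.
    cloud_pub.publish(msg)

rospy.init_node("mixed_reality_bridge")
model_pub = rospy.Publisher("/gazebo/set_model_state", ModelState, queue_size=1)
cloud_pub = rospy.Publisher("/inspection/points", PointCloud2, queue_size=1)
rospy.Subscriber("/mavros/local_position/pose", PoseStamped, on_real_pose)
rospy.Subscriber("/iris_lidar/velodyne_points", PointCloud2, on_virtual_cloud)
rospy.spin()
```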

Discussion

The validation test occurred in an open area near a football court, with average wind speeds of 18.5 km/h. As Figure 16 illustrates, the autonomous mission was executed successfully: the blue line represents the UAV trajectory in autonomous mode, which took 8 min and 10 s, while the red line represents the manually controlled trajectory. STORK II closely followed the simulated wind turbine, indicating that the method is ready to be tested on an actual wind turbine. Regarding power consumption, the system used an average of 369 Wh during the inspection, which represents 75% of the total battery capacity.

4. Conclusions

This paper focused on developing a LiDAR-based algorithm capable of modeling and autonomously inspecting wind turbines, especially in offshore environments. Its performance was evaluated with two validation steps, one with a simulation framework and the other with an augmented reality methodology. The presented simulation environment poses a series of scientific challenges concerning offshore WT inspection operations. These challenges take the form of floating platforms, wind gusts, and wind turbine blade faults. Both the simulation and mixed environment results demonstrated the robustness and versatility of the proposed method, which could be further improved by testing it under onshore and offshore wind farm scenarios.
In the static simulation tests, the algorithm showed high independence from the WT configuration, where the UAV could extract the WT model and closely inspect the blades while maintaining a safe distance without human interaction. Its major limitation was the tower occlusion, partially solved by the two-stage height and yaw estimation. After the initial prediction, the algorithm searches for new tower points, updating those parameters if necessary. However, this procedure must be validated in real scenarios with different WT shapes.
Regarding the floating-test results, the proposed pipeline proved to be flexible and adaptive to oscillating structures. Even with some poor modeling predictions, the UAV completed the full inspection safely. Nonetheless, increasing the oscillation amplitude and frequency was crucial to identifying the algorithm’s limitations in highly dynamic scenarios. In the modeling part, the algorithm depends on the WT tower height estimation to proceed with the rotor and blade calculation, so with large oscillations the UAV became stuck chasing the tower. The same problem occurred in the inspection task, where the UAV could not reach the WT blade tip. A possible way to deal with this problem is to implement a motion estimation strategy that predicts the movement of the WT and adapts based on this prediction rather than reacting to the current point cloud data, as illustrated below. This may prove unnecessary, since inspection procedures must occur under suitable environmental conditions, which imply lower oscillations.
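As one possible form of such a motion estimation strategy (not part of the current implementation), a simple constant-velocity alpha-beta filter could track, for instance, the rotor centre height extracted from the point cloud and extrapolate it over a short horizon:

```python
class AlphaBetaPredictor:
    """Constant-velocity alpha-beta filter for one tracked coordinate,
    e.g., the rotor centre height from the point cloud (illustrative sketch)."""

    def __init__(self, x0=0.0, alpha=0.5, beta=0.1):
        self.x, self.v = x0, 0.0
        self.alpha, self.beta = alpha, beta

    def update(self, z, dt):
        # Predict forward, then correct with the newest LiDAR-derived measurement z.
        x_pred = self.x + self.v * dt
        residual = z - x_pred
        self.x = x_pred + self.alpha * residual
        self.v = self.v + (self.beta / dt) * residual
        return self.x

    def predict(self, horizon):
        # Extrapolate the state a short horizon ahead for proactive control.
        return self.x + self.v * horizon
```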
Although the algorithm’s concept was validated with the available framework, it still needs to be validated under different conditions, combining onshore and offshore WT structures. Furthermore, there is margin to improve both the efficiency and the mission time of the autonomous operation. Given that the main goal is to autonomously guide a UAV to closely observe the WT blade surface and collect data for analysis, instead of waiting for the full model prediction to start the mission, the algorithm could be redefined to start the close inspection as soon as a blade cluster is observed. Since following the WT blade leads to the center of the WT rotor, this strategy would perform the prediction and the close inspection at the same time.
Another improvement is related to the algorithm pipelines. In the current implementation, we developed a sequential workflow in which the perception pipeline first organizes the point cloud into clusters, the modeling block then analyzes and classifies the output, and finally, based on the model’s progression, the inspection pipeline decides how to control the UAV. To improve the efficiency of this workflow, perception and modeling could be redesigned to work in parallel, as sketched below.
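A minimal producer-consumer sketch of such a parallel redesign, with placeholder stage functions standing in for the actual perception, modeling, and inspection blocks:

```python
import queue
import threading

clusters_q = queue.Queue(maxsize=2)   # perception -> modeling
model_q = queue.Queue(maxsize=2)      # modeling  -> inspection

def perception_worker(get_cloud, cluster):
    while True:
        clusters_q.put(cluster(get_cloud()))           # keep clustering the newest cloud

def modeling_worker(update_model):
    model = None
    while True:
        model = update_model(model, clusters_q.get())  # refine the WT model incrementally
        model_q.put(model)                             # expose the latest model estimate

def start_pipeline(get_cloud, cluster, update_model):
    threading.Thread(target=perception_worker, args=(get_cloud, cluster), daemon=True).start()
    threading.Thread(target=modeling_worker, args=(update_model,), daemon=True).start()
    # The inspection loop then consumes model_q.get() while perception keeps running.
```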
Regarding the battery life of the vehicle, most of the tests were carried out in a simulation environment using a drone completely different from the real vehicle. Although the simulation integrated weather factors similar to real ones, such as wind and sea oscillations, flight time and energy consumption will always be highly system dependent, i.e., on vehicle weight, aerodynamics, battery capacity, wind and gusts, and air density, among other factors.
Since offshore wind blade inspection is an ongoing project, WT access is still very restricted, especially for turbines installed offshore, whose construction is still behind schedule. Given this limitation, our next goal is to find onshore wind turbines available for testing the proposed implementation, to validate the energy consumption, efficiency, and robustness of the maneuver, and to identify new limitations that may arise in a real scenario. Finally, our main research goal is to develop a deep reinforcement learning (DRL) framework for offshore wind turbine inspection with adaptive behavior. Analyzing the different UAV DRL implementations, our target is to evaluate and adapt novel continuous-action-space algorithms for wind turbine inspection applications.

Author Contributions

Conceptualization, A.O. and A.D.; methodology, A.O.; software, A.O.; validation, A.O. and A.D.; formal analysis, A.O.; investigation, A.O.; resources, A.O., A.D., A.M. and J.A.; data curation, A.O., A.D., T.S. and P.R.; writing—original draft preparation, A.O.; writing—review and editing, A.O., A.D., A.M. and J.A.; visualization, A.O.; supervision, A.D., A.M. and J.A.; project administration, A.M. and J.A.; funding acquisition, J.A. All authors have read and agreed to the published version of the manuscript.

Funding

This work was funded by the European Project H2020 EU-SCORES—European Scalable Complementary Offshore Renewable Energy Sources (DOI: 10.3030/101036457), by National Funds through the Portuguese funding agency, FCT—Fundação para a Ciência e a Tecnologia, within project UIDB/50014/2020 (DOI 10.54499/UIDB/50014/2020) and by FCT for the Ph.D. Grant UI/BD/152316/2021.

Data Availability Statement

Data are contained within the article.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
2D: Two-Dimensional
3D: Three-Dimensional
AR: Augmented Reality
CBM: Condition-Based Maintenance
CNN: Convolutional Neural Network
CPU: Central Processing Unit
DL: Deep Learning
DRL: Deep Reinforcement Learning
EKF: Extended Kalman Filter
FCN: Fully Convolutional Network
GPS: Global Positioning System
IMU: Inertial Measurement Unit
INESC TEC: Institute for Systems and Computer Engineering, Technology and Science
IR: Infrared
LiDAR: Light Detection And Ranging
ML: Machine Learning
NDT: Non-Destructive Testing
O&M: Operations and Maintenance
OWF: Offshore Wind Farm
OWT: Offshore Wind Turbine
PdM: Predictive Maintenance
PM: Preventive Maintenance
PnP: Perspective-n-Point
R-CNN: Region-based Convolutional Neural Network
RTK: Real-Time Kinematic
SHM: Structural Health Monitoring
UAV: Unmanned Aerial Vehicle
WF: Wind Farm
WTB: Wind Turbine Blade
WT: Wind Turbine
YOLO: You Only Look Once

References

  1. Papaelias, M.; Márquez, F.P.G. Wind turbine inspection and condition monitoring. In Non-Destructive Testing and Condition Monitoring Techniques for Renewable Energy Industrial Assets; Elsevier: Amsterdam, The Netherlands, 2020; pp. 19–29. [Google Scholar]
  2. Chan, D.; Mo, J. Life cycle reliability and maintenance analyses of wind turbines. Energy Procedia 2017, 110, 328–333. [Google Scholar] [CrossRef]
  3. Ren, Z.; Verma, A.S.; Li, Y.; Teuwen, J.J.; Jiang, Z. Offshore wind turbine operations and maintenance: A state-of-the-art review. Renew. Sustain. Energy Rev. 2021, 144, 110886. [Google Scholar] [CrossRef]
  4. Sousa, P.; Ferreira, A.; Moreira, M.; Santos, T.; Martins, A.; Dias, A.; Almeida, J.; Silva, E. ISEP/INESC TEC Aerial Robotics Team for Search and Rescue Operations at the EuRathlon Challenge 2015. In Proceedings of the 2016 International Conference on Autonomous Robot Systems and Competitions (ICARSC), Bragança, Portugal, 4–6 May 2016; pp. 156–161. [Google Scholar] [CrossRef]
  5. Dias, A.; Almeida, J.; Oliveira, A.; Santos, T.; Martins, A.; Silva, E. Unmanned Aerial Vehicle for Wind-Turbine Inspection. Next Step: Offshore. In Proceedings of the OCEANS 2022, Hampton Roads, VA, USA, 17–20 October 2022; pp. 1–6. [Google Scholar] [CrossRef]
  6. Azevedo, F.; Dias, A.; Almeida, J.; Oliveira, A.; Ferreira, A.; Santos, T.; Martins, A.; Silva, E. LiDAR-Based Real-Time Detection and Modeling of Power Lines for Unmanned Aerial Vehicles. Sensors 2019, 19, 1812. [Google Scholar] [CrossRef] [PubMed]
  7. Shakhatreh, H.; Sawalmeh, A.H.; Al-Fuqaha, A.; Dou, Z.; Almaita, E.; Khalil, I.; Othman, N.S.; Khreishah, A.; Guizani, M. Unmanned aerial vehicles (UAVs): A survey on civil applications and key research challenges. IEEE Access 2019, 7, 48572–48634. [Google Scholar] [CrossRef]
  8. Oliveira, A.; Dias, A.; Santos, T.; Rodrigues, P.; Martins, A.; Silva, E.; Almeida, J. Simulation Environment for UAV Offshore Wind-Turbine Inspection. In Proceedings of the OCEANS 2023-Limerick, Limerick, Ireland, 5–8 June 2023; IEEE: Piscataway, NJ, USA, 2023; pp. 1–6. [Google Scholar]
  9. Leite, G.d.N.P.; Araújo, A.M.; Rosas, P.A.C. Prognostic techniques applied to maintenance of wind turbines: A concise and specific review. Renew. Sustain. Energy Rev. 2018, 81, 1917–1925. [Google Scholar] [CrossRef]
  10. Bilgili, M.; Yasar, A.; Simsek, E. Offshore wind power development in Europe and its comparison with onshore counterpart. Renew. Sustain. Energy Rev. 2011, 15, 905–915. [Google Scholar] [CrossRef]
  11. Soares-Ramos, E.P.; de Oliveira-Assis, L.; Sarrias-Mena, R.; Fernández-Ramírez, L.M. Current status and future trends of offshore wind power in Europe. Energy 2020, 202, 117787. [Google Scholar] [CrossRef]
  12. Esteban, M.D.; Diez, J.J.; López, J.S.; Negro, V. Why offshore wind energy? Renew. Energy 2011, 36, 444–450. [Google Scholar] [CrossRef]
  13. Dinwoodie, I.; Mcmillan, D. Operation and maintenance of offshore wind farms. Eng. Technol. Ref. 2014, 1. [Google Scholar] [CrossRef]
  14. Zhao, X.g.; Ren, L.z. Focus on the development of offshore wind power in China: Has the golden period come? Renew. Energy 2015, 81, 644–657. [Google Scholar] [CrossRef]
  15. Shafiee, M. Maintenance logistics organization for offshore wind energy: Current progress and future perspectives. Renew. Energy 2015, 77, 182–193. [Google Scholar] [CrossRef]
  16. Karyotakis, A.; Bucknall, R. Planned intervention as a maintenance and repair strategy for offshore wind turbines. J. Mar. Eng. Technol. 2010, 9, 27–35. [Google Scholar] [CrossRef]
  17. Dui, H.; Si, S.; Yam, R.C. A cost-based integrated importance measure of system components for preventive maintenance. Reliab. Eng. Syst. Saf. 2017, 168, 98–104. [Google Scholar] [CrossRef]
  18. García, I.E.M.; Sánchez, A.S.; Barbati, S. Reliability and preventive maintenance. In MARE-WINT; Springer: Berlin/Heidelberg, Germany, 2016; p. 235. [Google Scholar]
  19. Nandipati, S.; Nichenametla, A.N.; Waghmare, A.L. Cost-Effective Maintenance Plan for Multiple Defect Types in Wind Turbine Blades. In Proceedings of the 2018 Annual Reliability and Maintainability Symposium (RAMS), Reno, NV, USA, 22–25 January 2018; pp. 1–5. [Google Scholar] [CrossRef]
  20. Walgern, J.; Peters, L.; Madlener, R. Economic Evaluation of Maintenance Strategies for Offshore Wind Turbines Based on Condition Monitoring Systems; FCN Working Paper No. 08; SSRN: Rochester, NY, USA, 2017. [Google Scholar] [CrossRef]
  21. May, A.; McMillan, D.; Thöns, S. Economic analysis of condition monitoring systems for offshore wind turbine sub-systems. IET Renew. Power Gener. 2015, 9, 900–907. [Google Scholar] [CrossRef]
  22. Jahanshahi Zeitouni, M.; Parvaresh, A.; Abrazeh, S.; Mohseni, S.R.; Gheisarnejad, M.; Khooban, M.H. Digital twins-assisted design of next-generation advanced controllers for power systems and electronics: Wind turbine as a case study. Inventions 2020, 5, 19. [Google Scholar] [CrossRef]
  23. Canizo, M.; Onieva, E.; Conde, A.; Charramendieta, S.; Trujillo, S. Real-time predictive maintenance for wind turbines using Big Data frameworks. In Proceedings of the 2017 IEEE International Conference on Prognostics and Health Management (ICPHM), Dallas, TX, USA, 19–21 June 2017; IEEE: Piscataway, NJ, USA, 2017; pp. 70–77. [Google Scholar]
  24. Geselschap, C.; Meskers, G.; Dijk, R.V.; Winsen, I.V. Digital twin-engineering with the human factor in the loop. In Proceedings of the Offshore Technology Conference, Houston, TX, USA, 6–9 May 2019; OnePetro: Richardson, TX, USA, 2019. [Google Scholar]
  25. Poozesh, P.; Aizawa, K.; Niezrecki, C.; Baqersad, J.; Inalpolat, M.; Heilmann, G. Structural health monitoring of wind turbine blades using acoustic microphone array. Struct. Health Monit. 2017, 16, 471–485. [Google Scholar] [CrossRef]
  26. Liu, Y.; Hajj, M.; Bao, Y. Review of robot-based damage assessment for offshore wind turbines. Renew. Sustain. Energy Rev. 2022, 158, 112187. [Google Scholar] [CrossRef]
  27. Li, J.; Li, Z.; Jiang, Y.; Tang, Y. Typhoon Resistance Analysis of Offshore Wind Turbines: A Review. Atmosphere 2022, 13, 451. [Google Scholar] [CrossRef]
  28. Du, Y.; Zhou, S.; Jing, X.; Peng, Y.; Wu, H.; Kwok, N. Damage detection techniques for wind turbine blades: A review. Mech. Syst. Signal Process. 2020, 141, 106445. [Google Scholar] [CrossRef]
  29. Li, D.; Ho, S.C.M.; Song, G.; Ren, L.; Li, H. A review of damage detection methods for wind turbine blades. Smart Mater. Struct. 2015, 24, 033001. [Google Scholar] [CrossRef]
  30. Bhardwaj, U.; Teixeira, A.; Soares, C.G. Reliability prediction of an offshore wind turbine gearbox. Renew. Energy 2019, 141, 693–706. [Google Scholar] [CrossRef]
  31. Stanescu, D.; Digulescu, A.; Ioana, C.; Candel, I. Early-Warning Indicators of Power Cable Weaknesses for Offshore Wind Farms. In Proceedings of the OCEANS 2023—MTS/IEEE U.S. Gulf Coast, Biloxi, MS, USA, 25–28 September 2023; pp. 1–6. [Google Scholar] [CrossRef]
  32. Márquez, F.P.G.; Chacón, A.M.P. A review of non-destructive testing on wind turbines blades. Renew. Energy 2020, 161, 998–1010. [Google Scholar] [CrossRef]
  33. Oliveira, G.; Magalhães, F.; Cunha, Á.; Caetano, E. Vibration-based damage detection in a wind turbine using 1 year of data. Struct. Control Health Monit. 2018, 25, e2238. [Google Scholar] [CrossRef]
  34. Jasiūnienė, E.; Raišutis, R.; Šliteris, R.; Voleišis, A.; Jakas, M. Ultrasonic NDT of wind turbine blades using contact pulse-echo immersion testing with moving water container. Ultragarsas 2008, 63, 28–32. [Google Scholar]
  35. Schubel, P.; Crossley, R.; Boateng, E.; Hutchinson, J. Review of structural health and cure monitoring techniques for large wind turbine blades. Renew. Energy 2013, 51, 113–123. [Google Scholar] [CrossRef]
  36. Tziavos, N.I.; Hemida, H.; Dirar, S.; Papaelias, M.; Metje, N.; Baniotopoulos, C. Structural health monitoring of grouted connections for offshore wind turbines by means of acoustic emission: An experimental study. Renew. Energy 2020, 147, 130–140. [Google Scholar] [CrossRef]
  37. Yang, B.; Zhang, L.; Zhang, W.; Ai, Y. Non-destructive testing of wind turbine blades using an infrared thermography: A review. In Proceedings of the 2013 International Conference on Materials for Renewable Energy and Environment, Chengdu, China, 19–21 August 2013; IEEE: Piscataway, NJ, USA, 2013; Volume 1, pp. 407–410. [Google Scholar]
  38. Reddy, A.; Indragandhi, V.; Ravi, L.; Subramaniyaswamy, V. Detection of Cracks and damage in wind turbine blades using artificial intelligence-based image analytics. Measurement 2019, 147, 106823. [Google Scholar] [CrossRef]
  39. Mikkelsen, L.P. Observations of microscale tensile fatigue damage mechanisms of composite materials for wind turbine blades. IOP Conf. Ser. Mater. Sci. Eng. 2018, 388, 012006. [Google Scholar] [CrossRef]
  40. Yan, Y.; Ren, J.; Zhao, H.; Windmill, J.F.; Ijomah, W.; De Wit, J.; Von Freeden, J. Non-Destructive Testing of Composite Fiber Materials With Hyperspectral Imaging—Evaluative Studies in the EU H2020 FibreEUse Project. IEEE Trans. Instrum. Meas. 2022, 71, 1–13. [Google Scholar] [CrossRef]
  41. Spencer, B.F., Jr.; Hoskere, V.; Narazaki, Y. Advances in computer vision-based civil infrastructure inspection and monitoring. Engineering 2019, 5, 199–222. [Google Scholar] [CrossRef]
  42. Wang, W.; Xue, Y.; He, C.; Zhao, Y. Review of the typical damage and damage-detection methods of large wind turbine blades. Energies 2022, 15, 5672. [Google Scholar] [CrossRef]
  43. Jahanshahi, M.R.; Kelly, J.S.; Masri, S.F.; Sukhatme, G.S. A survey and evaluation of promising approaches for automatic image-based defect detection of bridge structures. Struct. Infrastruct. Eng. 2009, 5, 455–486. [Google Scholar] [CrossRef]
  44. Kim, H.; Ahn, E.; Cho, S.; Shin, M.; Sim, S.H. Comparative analysis of image binarization methods for crack identification in concrete structures. Cem. Concr. Res. 2017, 99, 53–61. [Google Scholar] [CrossRef]
  45. Jahanshahi, M.R.; Masri, S.F. A new methodology for non-contact accurate crack width measurement through photogrammetry for automated structural safety evaluation. Smart Mater. Struct. 2013, 22, 035019. [Google Scholar] [CrossRef]
  46. Jahanshahi, M.R.; Masri, S.F.; Padgett, C.W.; Sukhatme, G.S. An innovative methodology for detection and quantification of cracks through incorporation of depth perception. Mach. Vis. Appl. 2013, 24, 227–241. [Google Scholar] [CrossRef]
  47. Salman, M.; Mathavan, S.; Kamal, K.; Rahman, M. Pavement crack detection using the Gabor filter. In Proceedings of the 16th International IEEE Conference on Intelligent Transportation Systems (ITSC 2013), The Hague, The Netherlands, 6–9 October 2013; IEEE: Piscataway, NJ, USA, 2013; pp. 2039–2044. [Google Scholar]
  48. Jahanshahi, M.R.; Chen, F.C.; Joffe, C.; Masri, S.F. Vision-based quantitative assessment of microcracks on reactor internal components of nuclear power plants. Struct. Infrastruct. Eng. 2017, 13, 1013–1026. [Google Scholar] [CrossRef]
  49. Liu, Y.F.; Cho, S.; Spencer, B., Jr.; Fan, J.S. Concrete crack assessment using digital image processing and 3D scene reconstruction. J. Comput. Civ. Eng. 2016, 30, 04014124. [Google Scholar] [CrossRef]
  50. Wang, L.; Zhang, Z. Automatic detection of wind turbine blade surface cracks based on UAV-taken images. IEEE Trans. Ind. Electron. 2017, 64, 7293–7303. [Google Scholar] [CrossRef]
  51. Wang, L.; Zhang, Z.; Luo, X. A two-stage data-driven approach for image-based wind turbine blade crack inspections. IEEE/ASME Trans. Mechatron. 2019, 24, 1271–1281. [Google Scholar] [CrossRef]
  52. Ahuja, S.K.; Shukla, M.K. A survey of computer vision based corrosion detection approaches. In Proceedings of the International Conference on Information and Communication Technology for Intelligent Systems, Ahmedabad, India, 25–26 March 2017; Springer: Cham, Switzerland, 2017; pp. 55–63. [Google Scholar]
  53. Ghanta, S.; Karp, T.; Lee, S. Wavelet domain detection of rust in steel bridge images. In Proceedings of the 2011 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Prague, Czech Republic, 22–27 May 2011; IEEE: Piscataway, NJ, USA, 2011; pp. 1033–1036. [Google Scholar]
  54. Shen, H.K.; Chen, P.H.; Chang, L.M. Automated steel bridge coating rust defect recognition method based on color and texture feature. Autom. Constr. 2013, 31, 338–356. [Google Scholar] [CrossRef]
  55. Liao, K.W.; Lee, Y.T. Detection of rust defects on steel bridge coatings via digital image recognition. Autom. Constr. 2016, 71, 294–306. [Google Scholar] [CrossRef]
  56. Baqersad, J.; Poozesh, P.; Niezrecki, C.; Harvey, E.; Yarala, R. Full field inspection of a utility scale wind turbine blade using digital image correlation. In Proceedings of the CamX, The Composites and Advanced Materials, Orlando, FL, USA, 13–16 October 2014; Volume 10, pp. 2891–2960. [Google Scholar]
  57. Akhloufi, M.; Benmesbah, N. Outdoor ice accretion estimation of wind turbine blades using computer vision. In Proceedings of the 2014 Canadian Conference on Computer and Robot Vision, Montreal, QC, Canada, 6–9 May 2014; IEEE: Piscataway, NJ, USA, 2014; pp. 246–253. [Google Scholar]
  58. Madi, E.; Pope, K.; Huang, W.; Iqbal, T. A review of integrating ice detection and mitigation for wind turbine blades. Renew. Sustain. Energy Rev. 2019, 103, 269–281. [Google Scholar] [CrossRef]
  59. Wei, K.; Yang, Y.; Zuo, H.; Zhong, D. A review on ice detection technology and ice elimination technology for wind turbine. Wind Energy 2020, 23, 433–457. [Google Scholar] [CrossRef]
  60. Brownlee, J. Deep Learning for Computer Vision: Image Classification, Object Detection, and Face Recognition in Python; Machine Learning Mastery: Vermont, Australia, 2019. [Google Scholar]
  61. Muhammad, K.; Ullah, A.; Lloret, J.; Del Ser, J.; de Albuquerque, V.H.C. Deep learning for safe autonomous driving: Current challenges and future directions. IEEE Trans. Intell. Transp. Syst. 2020, 22, 4316–4336. [Google Scholar] [CrossRef]
  62. Sahiner, B.; Pezeshk, A.; Hadjiiski, L.M.; Wang, X.; Drukker, K.; Cha, K.H.; Summers, R.M.; Giger, M.L. Deep learning in medical imaging and radiation therapy. Med. Phys. 2019, 46, e1–e36. [Google Scholar] [CrossRef]
  63. Zou, D.; Zhang, M.; Bai, Z.; Liu, T.; Zhou, A.; Wang, X.; Cui, W.; Zhang, S. Multicategory damage detection and safety assessment of post-earthquake reinforced concrete structures using deep learning. Comput.-Aided Civ. Infrastruct. Eng. 2022, 37, 1188–1204. [Google Scholar] [CrossRef]
  64. Zhao, X.Y.; Dong, C.Y.; Zhou, P.; Zhu, M.J.; Ren, J.W.; Chen, X.Y. Detecting surface defects of wind tubine blades using an Alexnet deep learning algorithm. IEICE Trans. Fundam. Electron. Commun. Comput. Sci. 2019, 102, 1817–1824. [Google Scholar] [CrossRef]
  65. Xu, Y.; Bao, Y.; Chen, J.; Zuo, W.; Li, H. Surface fatigue crack identification in steel box girder of bridges by a deep fusion convolutional neural network based on consumer-grade camera images. Struct. Health Monit. 2019, 18, 653–674. [Google Scholar] [CrossRef]
  66. Fan, R.; Bocus, M.J.; Zhu, Y.; Jiao, J.; Wang, L.; Ma, F.; Cheng, S.; Liu, M. Road crack detection using deep convolutional neural network and adaptive thresholding. In Proceedings of the 2019 IEEE Intelligent Vehicles Symposium (IV), Paris, France, 9–12 June 2019; IEEE: Piscataway, NJ, USA, 2019; pp. 474–479. [Google Scholar]
  67. Kim, H.; Ahn, E.; Shin, M.; Sim, S.H. Crack and noncrack classification from concrete surface images using machine learning. Struct. Health Monit. 2019, 18, 725–738. [Google Scholar] [CrossRef]
  68. Dong, C.Z.; Catbas, F.N. A review of computer vision–based structural health monitoring at local and global levels. Struct. Health Monit. 2021, 20, 692–743. [Google Scholar] [CrossRef]
  69. Cha, Y.J.; Choi, W.; Büyüköztürk, O. Deep learning-based crack damage detection using convolutional neural networks. Comput.-Aided Civ. Infrastruct. Eng. 2017, 32, 361–378. [Google Scholar] [CrossRef]
  70. Girshick, R.; Donahue, J.; Darrell, T.; Malik, J. Rich feature hierarchies for accurate object detection and semantic segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Columbus, OH, USA, 23–28 June 2014; pp. 580–587. [Google Scholar]
  71. Girshick, R. Fast r-cnn. In Proceedings of the IEEE International Conference on Computer Vision, Santiago, Chile, 7–13 December 2015; pp. 1440–1448. [Google Scholar]
  72. Ren, S.; He, K.; Girshick, R.; Sun, J. Faster r-cnn: Towards real-time object detection with region proposal networks. Adv. Neural Inf. Process. Syst. 2015, 28. [Google Scholar] [CrossRef] [PubMed]
  73. Redmon, J.; Divvala, S.; Girshick, R.; Farhadi, A. You only look once: Unified, real-time object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 779–788. [Google Scholar]
  74. Boudjit, K.; Ramzan, N. Human detection based on deep learning YOLO-v2 for real-time UAV applications. J. Exp. Theor. Artif. Intell. 2022, 34, 527–544. [Google Scholar] [CrossRef]
  75. Lin, J.P.; Sun, M.T. A YOLO-based traffic counting system. In Proceedings of the 2018 Conference on Technologies and Applications of Artificial Intelligence (TAAI), Taichung, Taiwan, 30 November–2 December 2018; IEEE: Piscataway, NJ, USA, 2018; pp. 82–85. [Google Scholar]
  76. Nie, Y.; Sommella, P.; O’Nils, M.; Liguori, C.; Lundgren, J. Automatic detection of melanoma with yolo deep convolutional neural networks. In Proceedings of the 2019 E-Health and Bioengineering Conference (EHB), Iasi, Romania, 21–23 November 2019; IEEE: Piscataway, NJ, USA, 2019; pp. 1–4. [Google Scholar]
  77. Zhang, C.; Chang, C.C.; Jamshidi, M. Concrete bridge surface damage detection using a single-stage detector. Comput.-Aided Civ. Infrastruct. Eng. 2020, 35, 389–409. [Google Scholar] [CrossRef]
  78. Cha, Y.J.; Choi, W.; Suh, G.; Mahmoudkhani, S.; Büyüköztürk, O. Autonomous structural visual inspection using region-based deep learning for detecting multiple damage types. Comput.-Aided Civ. Infrastruct. Eng. 2018, 33, 731–747. [Google Scholar] [CrossRef]
  79. Hao, S.; Zhou, Y.; Guo, Y. A brief survey on semantic segmentation with deep learning. Neurocomputing 2020, 406, 302–321. [Google Scholar] [CrossRef]
  80. Asgari Taghanaki, S.; Abhishek, K.; Cohen, J.P.; Cohen-Adad, J.; Hamarneh, G. Deep semantic segmentation of natural and medical images: A review. Artif. Intell. Rev. 2021, 54, 137–178. [Google Scholar] [CrossRef]
  81. Kim, B.; Cho, S. Image-based concrete crack assessment using mask and region-based convolutional neural network. Struct. Control Health Monit. 2019, 26, e2381. [Google Scholar] [CrossRef]
  82. Kheradmandi, N.; Mehranfar, V. A critical review and comparative study on image segmentation-based techniques for pavement crack detection. Constr. Build. Mater. 2022, 321, 126162. [Google Scholar] [CrossRef]
  83. Liu, Z.; Cao, Y.; Wang, Y.; Wang, W. Computer vision-based concrete crack detection using U-net fully convolutional networks. Autom. Constr. 2019, 104, 129–139. [Google Scholar] [CrossRef]
  84. Zhang, C.; Wen, C.; Liu, J. Mask-MRNet: A deep neural network for wind turbine blade fault detection. J. Renew. Sustain. Energy 2020, 12, 053302. [Google Scholar] [CrossRef]
  85. Munawar, H.S.; Ullah, F.; Shahzad, D.; Heravi, A.; Qayyum, S.; Akram, J. Civil infrastructure damage and corrosion detection: An application of machine learning. Buildings 2022, 12, 156. [Google Scholar] [CrossRef]
  86. Katsamenis, I.; Doulamis, N.; Doulamis, A.; Protopapadakis, E.; Voulodimos, A. Simultaneous Precise Localization and Classification of metal rust defects for robotic-driven maintenance and prefabrication using residual attention U-Net. Autom. Constr. 2022, 137, 104182. [Google Scholar] [CrossRef]
  87. Hacıefendioğlu, K.; Başağa, H.B.; Yavuz, Z.; Karimi, M.T. Intelligent ice detection on wind turbine blades using semantic segmentation and class activation map approaches based on deep learning method. Renew. Energy 2022, 182, 1–16. [Google Scholar] [CrossRef]
  88. Nsengiyumva, W.; Zhong, S.; Lin, J.; Zhang, Q.; Zhong, J.; Huang, Y. Advances, limitations and prospects of nondestructive testing and evaluation of thick composites and sandwich structures: A state-of-the-art review. Compos. Struct. 2021, 256, 112951. [Google Scholar] [CrossRef]
  89. Tchakoua, P.; Wamkeue, R.; Ouhrouche, M.; Slaoui-Hasnaoui, F.; Tameghe, T.A.; Ekemb, G. Wind turbine condition monitoring: State-of-the-art review, new trends, and future challenges. Energies 2014, 7, 2595–2630. [Google Scholar] [CrossRef]
  90. Sanati, H.; Wood, D.; Sun, Q. Condition monitoring of wind turbine blades using active and passive thermography. Appl. Sci. 2018, 8, 2004. [Google Scholar] [CrossRef]
  91. Doroshtnasir, M.; Worzewski, T.; Krankenhagen, R.; Röllig, M. On-site inspection of potential defects in wind turbine rotor blades with thermography. Wind Energy 2016, 19, 1407–1422. [Google Scholar] [CrossRef]
  92. Hwang, S.; An, Y.K.; Sohn, H. Continuous-wave line laser thermography for monitoring of rotating wind turbine blades. Struct. Health Monit. 2019, 18, 1010–1021. [Google Scholar] [CrossRef]
  93. Hwang, S.; An, Y.K.; Yang, J.; Sohn, H. Remote inspection of internal delamination in wind turbine blades using continuous line laser scanning thermography. Int. J. Precis. Eng. Manuf.-Green Technol. 2020, 7, 699–712. [Google Scholar] [CrossRef]
  94. Galleguillos, C.; Zorrilla, A.; Jimenez, A.; Diaz, L.; Montiano, Á.; Barroso, M.; Viguria, A.; Lasagni, F. Thermographic non-destructive inspection of wind turbine blades using unmanned aerial systems. Plast. Rubber Compos. 2015, 44, 98–103. [Google Scholar] [CrossRef]
  95. Daffara, C.; Muradore, R.; Piccinelli, N.; Gaburro, N.; de Rubeis, T.; Ambrosini, D. A cost-effective system for aerial 3D thermography of buildings. J. Imaging 2020, 6, 76. [Google Scholar] [CrossRef] [PubMed]
  96. Cheng, C.; Shang, Z.; Shen, Z. Automatic delamination segmentation for bridge deck based on encoder-decoder deep learning through UAV-based thermography. NDT E Int. 2020, 116, 102341. [Google Scholar] [CrossRef]
  97. He, Y.; Deng, B.; Wang, H.; Cheng, L.; Zhou, K.; Cai, S.; Ciampa, F. Infrared machine vision and infrared thermography with deep learning: A review. Infrared Phys. Technol. 2021, 116, 103754. [Google Scholar] [CrossRef]
  98. Yang, J.; Wang, W.; Lin, G.; Li, Q.; Sun, Y.; Sun, Y. Infrared thermal imaging-based crack detection using deep learning. IEEE Access 2019, 7, 182060–182077. [Google Scholar] [CrossRef]
  99. Liu, F.; Liu, J.; Wang, L. Asphalt Pavement Crack Detection Based on Convolutional Neural Network and Infrared Thermography. IEEE Trans. Intell. Transp. Syst. 2022, 23, 22145–22155. [Google Scholar] [CrossRef]
  100. Liu, F.; Liu, J.; Wang, L. Deep learning and infrared thermography for asphalt pavement crack severity classification. Autom. Constr. 2022, 140, 104383. [Google Scholar] [CrossRef]
  101. Morgenthal, G.; Hallermann, N. Quality assessment of unmanned aerial vehicle (UAV) based visual inspection of structures. Adv. Struct. Eng. 2014, 17, 289–302. [Google Scholar] [CrossRef]
  102. Stokkeland, M.; Klausen, K.; Johansen, T.A. Autonomous visual navigation of unmanned aerial vehicle for wind turbine inspection. In Proceedings of the 2015 International Conference on Unmanned Aircraft Systems (ICUAS), Denver, CO, USA, 9–12 June 2015; IEEE: Piscataway, NJ, USA, 2015; pp. 998–1007. [Google Scholar]
  103. Schäfer, B.E.; Picchi, D.; Engelhardt, T.; Abel, D. Multicopter unmanned aerial vehicle for automated inspection of wind turbines. In Proceedings of the 2016 24th Mediterranean Conference on Control and Automation (MED), Athens, Greece, 21–24 June 2016; IEEE: Piscataway, NJ, USA, 2016; pp. 244–249. [Google Scholar]
  104. Guo, H.; Cui, Q.; Wang, J.; Fang, X.; Yang, W.; Li, Z. Detecting and positioning of wind turbine blade tips for uav-based automatic inspection. In Proceedings of the IGARSS 2019-2019 IEEE International Geoscience and Remote Sensing Symposium, Yokohama, Japan, 28 July–2 August 2019; IEEE: Piscataway, NJ, USA, 2019; pp. 1374–1377. [Google Scholar]
  105. Moolan-Feroze, O.; Karachalios, K.; Nikolaidis, D.N.; Calway, A. Simultaneous drone localisation and wind turbine model fitting during autonomous surface inspection. In Proceedings of the 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Macau, China, 3–8 November 2019; IEEE: Piscataway, NJ, USA, 2019; pp. 2014–2021. [Google Scholar]
  106. Gu, W.; Hu, D.; Cheng, L.; Cao, Y.; Rizzo, A.; Valavanis, K.P. Autonomous wind turbine inspection using a quadrotor. In Proceedings of the 2020 International Conference on Unmanned Aircraft Systems (ICUAS), Athens, Greece, 1–4 September 2020; IEEE: Piscataway, NJ, USA, 2020; pp. 709–715. [Google Scholar]
  107. Car, M.; Markovic, L.; Ivanovic, A.; Orsag, M.; Bogdan, S. Autonomous wind-turbine blade inspection using LiDAR-equipped unmanned aerial vehicle. IEEE Access 2020, 8, 131380–131387. [Google Scholar] [CrossRef]
  108. Kanellakis, C.; Fresk, E.; Mansouri, S.S.; Kominiak, D.; Nikolakopoulos, G. Autonomous visual inspection of large-scale infrastructures using aerial robots. arXiv 2019, arXiv:1901.05510. [Google Scholar]
  109. Pérez, D.; Alcántara, A.; Capitán, J. Distributed Trajectory Planning for a Formation of Aerial Vehicles Inspecting Wind Turbines. In Proceedings of the 2022 International Conference on Unmanned Aircraft Systems (ICUAS), Dubrovnik, Croatia, 21–24 June 2022; IEEE: Piscataway, NJ, USA, 2022; pp. 646–654. [Google Scholar]
  110. Ivić, S.; Crnković, B.; Grbčić, L.; Matleković, L. Multi-UAV trajectory planning for 3D visual inspection of complex structures. Autom. Constr. 2023, 147, 104709. [Google Scholar] [CrossRef]
  111. Manhães, M.M.M.; Scherer, S.A.; Voss, M.; Douat, L.R.; Rauschenbach, T. UUV simulator: A gazebo-based package for underwater intervention and multi-robot simulation. In Proceedings of the OCEANS 2016 MTS/IEEE Monterey, Monterey, CA, USA, 19–23 September 2016; IEEE: Piscataway, NJ, USA, 2016; pp. 1–8. [Google Scholar]
  112. Kim, M.; Hizir, O.; Turan, O.; Day, S.; Incecik, A. Estimation of added resistance and ship speed loss in a seaway. Ocean Eng. 2017, 141, 465–476. [Google Scholar] [CrossRef]
Figure 1. Main components of an offshore wind turbine (From [27]).
Figure 2. Visual damage examples found on wind turbines (From [42]).
Figure 3. Internal defect of WT blade detected using infrared thermal imaging (From [42]).
Figure 4. Proposed architecture for autonomous wind turbine inspection.
Figure 5. Proposed framework for the LiDAR-based pipeline. $p_l^b$ and $p_b^w$ are the transformations from the LiDAR-to-body and body-to-world frames, respectively.
Figure 6. Wind turbine point cloud clusters (adapted from [8]). The points used to estimate the rotor centre position are shown in red. The blue points indicate the wind blade cluster used to estimate the first blade configuration. The tower points are shown in white and the smaller WT blade clusters in green.
Figure 7. Model representation of the wind turbine. The green cylinder represents the tower position, height, and radius; the green dot represents the rotor center; and the blue dots represent each blade.
Figure 8. Modeling process. (a) Tower position and radius estimation; (b) tower height measurement, green cylinder; (c) wind turbine orientation estimation. In red, the blade’s point cloud is projected on the horizontal plane; (d) blades and rotor estimation.
Figure 9. Blade close inspection procedure after obtaining wind turbine model estimation.
Figure 10. First-stage simulation environment for testing UAV offshore inspection algorithms.
Figure 11. (Left) REpower 5M offshore structure in a monopile foundation. (Right top) Joint axis representation. (Right bottom) Simulated rust and delamination on the WTB.
Figure 12. PX4 Iris model equipped with a simulated Velodyne VLP-16 LiDAR, on the bottom, and a 2MP visual camera, on the front.
Figure 13. LiDAR 3D reconstruction from an autonomous inspection operation performed on a simulation framework in three different blade configurations.
Figure 14. Blade and nacelle configurations used to validate the algorithm’s accuracy. (a) Blade configurations with the WT blade at the starting position of 0°. (b) Nacelle configurations with the WT nacelle oriented at the starting position of 0°.
Figure 15. Augmented reality framework.
Figure 16. Trajectory output of the simulated wind turbine in a real environment. The blue line shows the UAV trajectory during the autonomous inspection, while the red line represents the operator’s control.
Table 1. Modeling accuracy results. Orange highlights the cases where the yaw estimation is most accurate, which occur when the nacelle cluster drives the estimation. Grey highlights the worst tower occlusion case, and yellow marks the blade angle limitation, where the blade clusters are smaller.
| Blade Angle (°) | WT Orientation (°) | Tower Position μ (m) | Tower Position σ (m) | Tower Height μ (m) | Tower Height σ (m) | Yaw μ (°) | Yaw σ (°) | Blade Angle μ (°) | Blade Angle σ (°) |
|---|---|---|---|---|---|---|---|---|---|
| −40 | 0 | 0.091 | 0.049 | 0.332 | 0.036 | 4.987 | 0.189 | 8.585 | 0.440 |
| −20 | 0 | 0.152 | 0.111 | 0.174 | 0.085 | 4.545 | 0.443 | 7.524 | 0.420 |
| 0 | 0 | 0.304 | 0.024 | 0.334 | 0.065 | 4.378 | 0.329 | 2.599 | 0.051 |
| 20 | 0 | 0.493 | 0.088 | 0.343 | 0.140 | 5.643 | 0.257 | 3.992 | 0.412 |
| 40 | 0 | 0.574 | 0.051 | 0.035 | 0.017 | 4.809 | 0.292 | 7.347 | 0.100 |
| −40 | 30 | 0.295 | 0.174 | 0.094 | 0.149 | 1.863 | 1.418 | 9.154 | 0.113 |
| −20 | 30 | 0.351 | 0.186 | 0.111 | 0.184 | 4.976 | 1.933 | 9.850 | 0.095 |
| 0 | 30 | 0.539 | 0.141 | 0.119 | 0.016 | 9.204 | 0.650 | 5.217 | 0.367 |
| 20 | 30 | 0.305 | 0.153 | 0.062 | 0.136 | −8.443 | 2.679 | 0.433 | 0.057 |
| 40 | 30 | 0.338 | 0.145 | 0.436 | 0.192 | 0.811 | 1.051 | 3.555 | 0.081 |
| −40 | 60 | 0.459 | 0.145 | 0.568 | 0.180 | 10.146 | 0.330 | 12.017 | 0.049 |
| −20 | 60 | 0.184 | 0.259 | 0.088 | 0.062 | 8.258 | 0.412 | 10.427 | 0.627 |
| 0 | 60 | 0.161 | 0.086 | 0.049 | 0.137 | 8.238 | 0.339 | 4.094 | 0.072 |
| 20 | 60 | 0.228 | 0.151 | 2.155 | 0.147 | 8.312 | 0.219 | 7.595 | 0.094 |
| 40 | 60 | 0.176 | 0.093 | 0.274 | 0.085 | 7.267 | 0.521 | 12.825 | 0.612 |
| −40 | 90 | 0.681 | 0.137 | 0.349 | 0.137 | 2.108 | 0.378 | 12.983 | 0.387 |
| −20 | 90 | 0.479 | 0.074 | 0.160 | 0.113 | 6.459 | 1.702 | 11.477 | 0.220 |
| 0 | 90 | 0.437 | 0.253 | 0.168 | 0.093 | 4.561 | 0.924 | 8.502 | 0.070 |
| 20 | 90 | 0.440 | 0.174 | 3.250 | 1.070 | 6.658 | 2.259 | 4.042 | 0.261 |
| 40 | 90 | 0.374 | 0.148 | 0.249 | 0.096 | 0.685 | 0.226 | 12.071 | 0.135 |
Table 2. Modeling and inspection results on a floating WT. The highlighted cells emphasise the failed attempts, identifying the problems that occurred.
| Level | 1st Run (Model / Inspection) | 2nd Run (Model / Inspection) | 3rd Run (Model / Inspection) | 4th Run (Model / Inspection) | 5th Run (Model / Inspection) | Total Success Rate (Model / Inspection) |
|---|---|---|---|---|---|---|
| Slow | Success / Success | Success / Success | Success / Success | Success / Success | Success / Success | 100.00% / 100.00% |
| Medium (Amplitude × 1.5) | Success / Success | Success / Success | Bad yaw estimation / Success | Success / Success | Fail / Fail | 60.00% / 80.00% |
| High (Amplitude × 1.5, Frequency × 1.5) | Success / Unfinished inspection | Success / Success | Bad blade estimation / Fail | Unfinished height / Fail | Success / Unfinished inspection | 60.00% / 20.00% |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
