1. Introduction
As photovoltaic (PV) systems gain popularity as a future-oriented green energy source, there is a strong tendency to install them widely. However, as operation time increases, the efficiency of a PV system decreases for several reasons, such as contamination of the PV module's surface, elevated back-surface temperature, and shadows [1,2]. Thus, it is necessary to inspect the current status of the module surface regularly. A regular diagnosis of the module surface helps maintain the best efficiency by sending a real-time alarm for maintenance before a problem occurs.
Various previous studies have tested maintenance methods to keep PV systems efficient. Conventional studies aimed to find faulty modules by measuring the voltage and current of the PV module, monitoring the power generation status of the inverter [3,4], and detecting contamination and cracks on the surface with a camera [5,6]. Other studies manually inspected PV modules using thermal images or electroluminescence images [7,8,9,10]. Recently, studies have been conducted to find hotspots using a thermal camera [11,12]. Several studies also used a drone to inspect a wider area and reduce inspection time. A drone-based thermal camera has an additional benefit: it eliminates the inspection hurdles caused by problematic site locations, e.g., the roof of a high building or the sea surface.
Since it is difficult to distinguish objects with a thermal imaging camera alone, most studies use a dual camera composed of a thermal camera and a visible camera. The region of interest (ROI) is derived through image processing of the visible image, and temperature information of the object is extracted by referring to the temperature data of the thermal image. However, a thermal camera generally has a lower resolution than a visible camera because of cost limitations. Due to the different resolutions of the visible and thermal cameras, frames from the thermal camera are often invalid. With a reasonably priced low-resolution thermal camera, it is impossible to capture a useful image from high altitude, so the drone has to be placed closer to the PV module. In that case, since the whole module cannot be seen, it is a prerequisite to place the target object at the center of the picture.
A gimbal system is commonly used for obtaining clean images from drone-mounted cameras, as drones in flight exhibit a variety of vibration frequencies. The system consists of a structure that supports a camera module and a stabilizer that blocks outside vibrations and keeps the correct angle [13]. A gimbal controls the target angle of a camera and thus helps acquire better, properly framed images. The importance of using a gimbal on a drone has been discussed by various authors [14,15]. Furthermore, a gimbal dampens vibrations, which is significantly beneficial for real-time image stabilization applications [16]. Apart from smoothing angular movements and dampening vibrations, a gimbal can hold a camera in a predefined position [17]. This is usually a position in which the camera's axis is horizontal or vertical, but any other position within the gimbal's technical capabilities is possible.
In this paper, we propose an image processing and control system that automatically adjusts the gimbal for PV module inspection using autonomous drones. The proposed system consists of a dual camera, a gimbal frame, a gimbal control board (GCB), and a gimbal control system (GCS). The dual camera consists of a thermal imaging camera and a visible camera, and the gimbal frame is driven by brushless DC (BLDC) motors. The GCB controls the motors through pulse width modulation (PWM) input signals, and the GCS, as the main controller, performs image processing to control the angle of the thermal imaging camera using the visible camera. The GCS calculates the center of the panel area and the value needed for angle adjustment from the images captured by the visible camera, and then sends the resulting data to the GCB. Consequently, we could reduce the time for pre-processing and also increase the accuracy of inspection.
3. Implementation of Gimbal Control through PV Image Analysis
3.1. Gimbal Control with PWM Signal
Manual and automatic signals for gimbal control are transmitted via the GCS. For automatic control, the control values are determined by image processing; for manual control, they are determined by an external radio controller via an X8R receiver. The GCS determines the final gimbal control value from the combination of both inputs, with priority given to the manual input signal. The result is sent from the GCS to the GCB as a PWM signal, and the GCB in turn sends control signals to each motor. PWM is a cycle control method that controls the width of a pulse: the output alternates between high and low voltage levels to produce a square wave between 0 V and 3.3 V.
Figure 8 shows an example of the PWM signal, where T refers to the cycle period and ton is the duration of the high-voltage state. The duty cycle is calculated using Equation (1):
Duty cycle (%) = ton / T × 100. (1)
The cycle period is 18 ms in our system, and ton ranges between 0.7 ms and 2.3 ms. When ton is 1.5 ms, the motor stops; when it is 0.7 to 1.3 ms, the motor runs clockwise; and when it is 1.7 to 2.3 ms, the motor runs counterclockwise. The duty cycle therefore ranges between about 3.9% and 12.8%. For the safety of the gimbal, the duty cycle is limited to between 6.3% and 10.5%. Accordingly, the PWM value is converted to a count between 64 and 108 by quantizing the cycle period T (18 ms) into 1024 counts.
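As a concrete illustration of the numbers above, the following sketch (our own, not code from the actual firmware) converts a pulse width in milliseconds into the duty cycle and the 1024-count PWM value, clamping to the 64–108 safety window:

```cpp
#include <cassert>
#include <cmath>

// 18 ms cycle period, quantized into 1024 counts, as in Section 3.1.
constexpr double PERIOD_MS = 18.0;
constexpr int COUNTS = 1024;

// Equation (1): duty cycle as a percentage of the cycle period.
double dutyCyclePercent(double tonMs) {
    return tonMs / PERIOD_MS * 100.0;
}

// Pulse width -> PWM count, clamped to the gimbal safety window.
int pwmCount(double tonMs) {
    int count = (int)(tonMs / PERIOD_MS * COUNTS + 0.5);  // round to nearest count
    if (count < 64) count = 64;    // safety lower bound (~6.3% duty)
    if (count > 108) count = 108;  // safety upper bound (~10.5% duty)
    return count;
}
```

For the motor-stop pulse of 1.5 ms this yields a duty cycle of about 8.3% and a count of 85, comfortably inside the safety window.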
3.2. Implementation of GCS
Figure 9 shows the GCS implemented in our study. The system consists of the gimbal object, the embedded GCS with the dual camera, and the gimbal control/driving module. The GCS is implemented on Raspberry Pi 3 B+ hardware.
The thermal camera and visible camera are attached to the drone's GCS. The image taken by the visible camera is processed to find the physical areas of the modules. Through image processing, the GCS calculates control values from the module's position and angle and converts them into PWM signals, which are transmitted to the GCB. Motor control for the gimbal tilt and roll is performed by sending independent PWM signals (PWM1 to control roll, PWM0 to control tilt).
3.3. Implementation of Object Recognition Image Processing Program
The program is implemented with OpenCV and C++ in the Linux Code::Blocks environment. The image processing algorithm proceeds as follows. First, we convert RGB images to hue-saturation-value (HSV) images. The HSV color space is more suitable for object detection than the RGB color space: RGB expresses a color as a combination of channels, which makes it unintuitive to isolate a specific color, whereas the HSV color space allows intuitive analysis of native colors [18]. The PV module can then be detected as the blue hue area in the HSV color space. We extract the PV module area using the OpenCV inRange function, which extracts the pixels falling within the blue region of the HSV color space.
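A minimal per-pixel sketch of this blue-range extraction is shown below; the hand-rolled RGB-to-HSV conversion stands in for OpenCV's cvtColor/inRange, following the 8-bit convention (H in [0, 180), S and V in [0, 255]) and the 80–140/65–255/65–255 window used in our experiments, while the integer arithmetic is our simplification:

```cpp
#include <algorithm>

// Simplified RGB -> HSV conversion in OpenCV's 8-bit convention
// (H in [0,180), S and V in [0,255]); integer math for brevity.
struct Hsv { int h, s, v; };

Hsv rgbToHsv(int r, int g, int b) {
    int mx = std::max({r, g, b});
    int mn = std::min({r, g, b});
    int v = mx;
    int s = (mx == 0) ? 0 : 255 * (mx - mn) / mx;
    int h = 0;
    if (mx != mn) {
        if (mx == r)      h = 30 * (g - b) / (mx - mn);
        else if (mx == g) h = 60 + 30 * (b - r) / (mx - mn);
        else              h = 120 + 30 * (r - g) / (mx - mn);
        if (h < 0) h += 180;  // wrap negative hues into [0,180)
    }
    return {h, s, v};
}

// Per-pixel analogue of inRange for the blue window used in Section 4.
bool inBlueRange(const Hsv& p) {
    return p.h >= 80 && p.h <= 140 && p.s >= 65 && p.v >= 65;
}
```

A saturated blue module pixel such as RGB (0, 0, 200) maps to hue 120 and passes the test, while a green background pixel falls outside the hue window and is rejected.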
When extracting the PV module area, blue pixels in the background may also be extracted as noise. To eliminate this noise, the dilate/erode and medianBlur functions were applied. We then binarize the PV module area and calculate the horizontal angle using the Hough transform, a popular feature extraction technique widely used in automated image analysis to detect lines. All PV modules contain white lines in a distinctive grid-like pattern, and these lines are the feature we detect. The Hough transform detects a line represented by Equation (2):
r = x cos θ + y sin θ, (2)
where r represents the distance from the origin to the closest point on the straight line, and θ represents the angle between the x axis and the line connecting the origin with that closest point.
Each line in the image is thus associated with a pair (r, θ). For each edge pixel at (x, y), the Hough transform finds the set of all straight lines passing through that point and casts votes into an accumulator matrix over the quantized values of r and θ. The accumulator element with the highest vote count corresponds to the most strongly represented straight line in the image, and its (r, θ) pair is taken as the detected line.
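The voting scheme just described can be sketched as follows; the 1-degree by 1-pixel quantization and the brute-force accumulator scan are illustrative choices of ours, not the paper's actual implementation:

```cpp
#include <cmath>
#include <utility>
#include <vector>

// Sketch of the Hough line transform: every edge pixel (x, y) votes for
// each quantized (r, theta) pair satisfying r = x*cos(theta) + y*sin(theta);
// the accumulator cell with the most votes identifies the dominant line.
struct Peak { int r, thetaDeg, votes; };

Peak houghPeak(const std::vector<std::pair<int,int>>& edges, int rMax) {
    const double PI = std::acos(-1.0);
    std::vector<std::vector<int>> acc(180, std::vector<int>(2 * rMax + 1, 0));
    for (const auto& p : edges)
        for (int t = 0; t < 180; ++t) {
            double rad = t * PI / 180.0;
            int r = (int)std::lround(p.first * std::cos(rad)
                                     + p.second * std::sin(rad));
            if (r >= -rMax && r <= rMax) ++acc[t][r + rMax];  // cast a vote
        }
    Peak best{0, 0, -1};
    for (int t = 0; t < 180; ++t)          // scan accumulator for the peak
        for (int r = -rMax; r <= rMax; ++r)
            if (acc[t][r + rMax] > best.votes)
                best = {r, t, acc[t][r + rMax]};
    return best;
}
```

For a horizontal row of edge pixels along y = 5, every pixel votes into the same cell (r = 5, θ = 90°), which is exactly the grid-line response the module detection relies on.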
Moreover, we calculate the vertical position as the center of gravity of the extracted blob. The results of an example run of the image processing are shown in Figure 10: original visible image (a), inRange function (b), threshold function (c), Hough line function (d), bold function (e), and derived result (f).
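The center-of-gravity computation can be sketched as below, operating on a binary 0/1 mask such as the thresholded image of step (c); the tiny mask in the test is illustrative only:

```cpp
#include <utility>
#include <vector>

// Center-of-gravity step: the module's position is taken as the centroid
// (mean x, mean y) of the pixels set in the binarized mask.
std::pair<double, double> centroid(const std::vector<std::vector<int>>& mask) {
    long long sumX = 0, sumY = 0, count = 0;
    for (std::size_t y = 0; y < mask.size(); ++y)
        for (std::size_t x = 0; x < mask[y].size(); ++x)
            if (mask[y][x]) {
                sumX += (long long)x;
                sumY += (long long)y;
                ++count;
            }
    if (count == 0) return {0.0, 0.0};  // empty mask: no module found
    return {double(sumX) / count, double(sumY) / count};
}
```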
3.4. Implementation of BLDC Control Algorithm
The control method for the gimbal system's stabilized driving device is a proportional-integral-derivative (PID) controller. The PID controller has few parameters and a simple structure. In particular, its response characteristics change intuitively with the proportional gain, integration time, and derivative time [19]. Some studies have examined intelligent PIDs that determine these parameters automatically [20,21]. However, in this work, the classical PID algorithm with its simple structure is applied.
From the calculated horizontal angle and the vertical position, we obtain the PWM values through Equations (3)–(5):
e(t) = x(t) − u(t − 1), (3)
u(t) = u(t − 1) + Kp e(t) + Kd [e(t) − e(t − 1)] + Ki Σ e(τ), (4)
PWM(t) = w u(t) + b, (5)
where t is the unit time, x(t) is the target value obtained by image processing at time t, which can be a horizontal angle or a vertical position, u(t − 1) and u(t) are the tilt or roll motor control values at times t − 1 and t, respectively, e(t) is the position error value, Kp, Kd, and Ki are the proportional, differential, and integral coefficients, respectively, PWM(t) is the PWM value for tilt motor control of the main controller, w is the slope relating the PWM value to the position value, and b is the minimum PWM value.
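One PID update step followed by the linear PWM mapping can be sketched as follows; the clamp to the 64–108 count window matches Section 3.1, while the structure of the update reflects our reading of Equations (3)–(5) and is not code from the actual controller:

```cpp
#include <cmath>

// Incremental PID step: e(t) is the error between the image-processing
// target x(t) and the previous control value u(t-1); Kp, Kd, Ki weight
// the proportional, differential and integral terms.
struct Pid {
    double kp, kd, ki;
    double prevErr = 0.0, errSum = 0.0;

    // Returns u(t) given the target x(t) and the previous control u(t-1).
    double step(double target, double prevControl) {
        double e = target - prevControl;   // e(t), Equation (3)
        errSum += e;                       // running sum of e for the I term
        double u = prevControl
                 + kp * e
                 + kd * (e - prevErr)
                 + ki * errSum;            // Equation (4)
        prevErr = e;
        return u;
    }
};

// Equation (5)-style linear map: w is the slope, b the minimum PWM count;
// the result is clamped to the 64..108 safety window of Section 3.1.
int toPwm(double u, double w, double b) {
    double pwm = w * u + b;
    if (pwm < 64.0) pwm = 64.0;
    if (pwm > 108.0) pwm = 108.0;
    return (int)pwm;
}
```

With the experimental gains Kp = 0.8, Kd = 0.4, Ki = 0.01 from Section 4, a single step toward a target of 100 starting from u = 0 gives u(1) = 80 + 40 + 1 = 121, which the mapping then scales and clamps into the safe PWM window.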
The calculated horizontal and vertical positions are encoded along with the signal received from the remote control device. The relationship between the camera focus and the PV module is always fixed regardless of the tilt. Only the part cut by the image plane changes as the tilt changes. In this development, the camera tilt is adjusted around the horizontal axis of the module seen from above the PV module.
4. Gimbal Control Experiment
In order to verify the validity of the developed system, we conducted an experimental test as shown in Figure 11. Our setup consists of a drone system with a gimbal, the GCS with the dual camera, an experiment panel with a photo of a PV module, and a remote display.
The experiment process is as follows:
1. Produce the PWM signal in the GCS and confirm its transmission to the GCB.
2. Calculate the center value and horizontal angle of the PV module by image processing.
3. Analyze the gimbal control and tracking convergence according to changes in the target vertical position and target horizontal angle.
Firstly, we confirmed that the PWM signal was correctly generated by the main controller and transmitted to the GCB. As shown in Figure 12, the PWM waveform period was stably generated at 18 ms, and the pulse width changed according to the position. We also confirmed that the gimbal's BLDC motor rotated properly in response to the generated PWM signal.
Secondly, we extracted the central and horizontal angle values of the PV module from the image.
Figure 13 shows one sample case of the image processing chain: the acquired image, the PV module region extraction image, the binarization image, the Hough transform image, and the overall result image are displayed sequentially. We acquired a visible image and extracted the PV area using the inRange function, with an HSV range of 80 to 140 for hue, 65 to 255 for saturation, and 65 to 255 for value. To find the module's center position, we binarized the image with a gray-value threshold of 50 and found the boundary using the contours function. We used the Hough function with a threshold of 250 to find the PV module's horizontal line.
Figure 14 shows various angle and center point extraction experiments with the PV module.
Table 1 and
Table 2 show the extracted values and error values for the extracted horizontal angle and center point. Compared with the ground truth, the horizontal angle showed an error of about ±8°, and the center position value showed ±28 pixels.
Thirdly, we checked that the tilt and roll control was performed correctly so that the ROI was positioned at the center and captured horizontally. In this experiment, the PID control parameters (Kp, Kd, Ki) for the PWM output were set to 0.8, 0.4, and 0.01, respectively. The experimental results are shown in Figure 15 and Figure 16. The PWM is derived using the extracted center and horizontal-angle values of the PV module, and the gimbal's tilt and roll are adjusted according to the PWM signal so that the camera looks vertically at the PV module.
In the experiment, we tested whether the tilt and roll were corrected to the target position and angle when the center position and horizontal angle of the target PV module were changed randomly. As shown in Figure 15, the tilt correction result shows that the vertical angle of the gimbal is adjusted to the target position when the center position of the target PV module changes to 200, 400, 100, and 350. The tilt correction error is about 6 pixels, and the PV module seen from the drone camera is positioned at the center. As shown in Figure 16, when the horizontal angle of the target PV module was changed to 43, −75, 10, and 70 degrees, the roll rotated to approximately each horizontal angle. The roll correction error was around 3 degrees, and the PV module seen by the drone camera was aligned horizontally. This confirms that the ROI of the PV module is located at the center and that a proper image is captured in real time.
5. Conclusions
Herein, we proposed a gimbal control system that enables drones to capture effective frames from a thermal camera for detecting deteriorated areas of PV modules by locating the target object at the center of the image. The system involves real-time image processing and PID signal generation. To calculate the proper angles (pitch, roll, yaw), blob analysis and the Hough transform are used for center detection and angle calculation, respectively. The resulting position and angles are converted into PWM signals through PID control. These processes feed back into each other and keep a target PV module stable in the frame. Finally, experimental results showed that target modules can be kept in the center area under limited conditions. In the near future, we will develop a fully automatic drone system to detect deteriorated areas using this gimbal control system, after verifying its performance in real environments such as strong wind, tall trees, and cloudy or rainy weather.