Article

Research and Design of an Active Light Source System for UAVs Based on Light Intensity Matching Model

Rui Ming, Tao Wu, Zhiyan Zhou, Haibo Luo and Shahbaz Gul Hassan

1. School of Computer and Big Data, Minjiang University/Fujian Provincial Key Laboratory of Information Processing and Intelligent Control, Fuzhou 350108, China
2. Guangdong Laboratory for Lingnan Modern Agriculture, College of Engineering, South China Agricultural University, Guangzhou 510642, China
3. Guangdong Provincial Key Laboratory of Agricultural Artificial Intelligence (GDKL-AAI), Guangzhou 510642, China
4. Centre for Pesticide Application and Safety, The University of Queensland, Gatton, QLD 4343, Australia
5. College of Information Science and Technology, Zhongkai University of Agriculture and Engineering, Guangzhou 510225, China
* Authors to whom correspondence should be addressed.
Drones 2024, 8(11), 683; https://doi.org/10.3390/drones8110683
Submission received: 12 October 2024 / Revised: 14 November 2024 / Accepted: 16 November 2024 / Published: 19 November 2024

Abstract

The saliency feature is a key factor in achieving vision-based tracking for multi-UAV control. However, due to the complex and variable environments encountered during multi-UAV operations, such as changes in lighting conditions and scale variations, the UAV's visual features may degrade, especially under high-speed movement, ultimately resulting in failure of the vision tracking task and reducing the stability and robustness of swarm flight. Therefore, this paper proposes an adaptive active light source system based on light intensity matching to address the issue of visual feature loss caused by environmental light intensity and scale variations in multi-UAV collaborative navigation. The system consists of three components: an environment sensing and control module, a variable active light source module, and a light source power module. This paper first designs the overall framework of the active light source system, detailing the functions of each module and their collaborative working principles. Furthermore, optimization experiments are conducted on the variable active light source module: by comparing its recognition effects under different parameters, the best configuration is selected. In addition, to improve the robustness of the active light source system under different lighting conditions, this paper also constructs a light source color matching model based on light intensity matching. By collecting and comparing visible light images of different color light sources under various intensities and constructing the light intensity matching model using the comprehensive peak signal-to-noise ratio parameter, the model is optimized to ensure the best vision tracking performance under different lighting conditions. Finally, to validate the effectiveness of the proposed active light source system, quantitative and qualitative recognition comparison experiments were conducted in eight different scenarios with UAVs equipped with active light sources. The experimental results show that the UAV equipped with an active light source improves the recall of the YOLOv7 and RT-DETR recognition algorithms by 30% and 29.6%, the mAP50 by 21% and 19.5%, and the precision by 13.1% and 13.6%, respectively. Qualitative experiments also demonstrate that the active light source effectively improves the recognition success rate under low lighting conditions. Extensive qualitative and quantitative experiments confirm that the UAV active light source system based on light intensity matching proposed in this paper effectively enhances the robustness of vision-based tracking for multi-UAVs, particularly in complex and variable environments. This research provides an efficient and computationally inexpensive solution for vision-based multi-UAV systems, further enhancing the visual tracking capabilities of multi-UAVs under complex conditions.

1. Introduction

In recent years, due to rapid technological advancements, Unmanned Aerial Vehicles (UAVs) have gradually transitioned from military to civilian applications, finding extensive use in agriculture, logistics, security, and environmental monitoring [1,2,3]. Collaborative navigation operations with multi-UAVs can significantly enhance operational speed and expand the operational area, effectively alleviating issues related to the limited payload and endurance of individual UAVs during large-scale tasks [4,5]. As a result, multi-UAV collaborative navigation has become a key and hot topic for improving operational efficiency in the current stage of UAV development [6,7]. Among these, vision-based multi-UAV collaboration methods have become the mainstream research direction in the field of UAV navigation, both domestically and internationally, due to the continuous updates of vision algorithms and the compact size of vision sensors [8].
Vision-based collaborative navigation methods for UAV swarms primarily involve using vision acquisition devices to capture external image information and employing computer vision technology to analyze the features of other UAVs within the images [9,10]. This enables the identification and localization of other UAVs’ positions [11,12]. Subsequently, the UAVs adjust their flight states according to task requirements, achieving relative positioning, obstacle avoidance, and path planning within the swarm [13]. Vision-based collaborative navigation methods provide efficient and accurate position perception and decision-making, offering robust technical support for coordinated operations of UAV swarms. However, during operations, UAV swarms may encounter complex and variable environments, such as changes in light intensity and scale, which can lead to the loss of visual features. Particularly during high-speed movement, these factors severely impact the accuracy and robustness of vision-based navigation [14,15].
Currently, there are two main approaches to improving the robustness of vision-based collaborative navigation for UAV swarms. One approach involves enhancing image processing algorithms to improve the accuracy of feature extraction and recognition. For example, Yang Zhang et al. [16] proposed a method that stabilizes target feature values through preprocessing steps and incorporates contextual information from historical data, integrating continuous weighted dynamic response maps from both temporal and spatial perspectives to enhance the recognition capability of tracking algorithms for UAV targets. Yiting Li et al. [17] proposed an improved YOLOv8 image recognition algorithm tailored to UAV-acquired images. This approach improved the feature fusion module, feature extraction module, and loss function within the network structure, enhancing the model’s ability to recognize small UAV-like targets in images. While these methods can improve the robustness of vision-based navigation to some extent, they are often limited by the lighting conditions and scale variations of image acquisition equipment, failing to ensure sufficient accuracy in complex and variable environments.
The other approach involves introducing active light source systems to enhance the visual features of UAVs in complex environments. For instance, Viktor Walter et al. [18] proposed installing multiple ultraviolet light sources on the UAV's frame to determine the relative positions of UAVs through different combinations of ultraviolet light sources. This method effectively enhances UAV recognition features but uses fixed light sources, which cannot adjust brightness and color in real time, so the external features of the light sources cannot adapt to different scenarios. Hyeon-woo Park et al. [19] addressed the impact of lighting changes between day and night on visual collaborative tracking by attaching infrared reflective tags to unmanned vehicles. Although this method effectively handles the large lighting change from day to night, it still cannot cope with complex and variable environments, such as fog or scenes where the foreground and background are similar, in which the visibility and stability of the light source are compromised. Therefore, researching an active light source system capable of adapting to different lighting and scale variations is crucial for enhancing the practicality and reliability of vision-based collaborative navigation for UAVs.
In our previous research [20,21], we attempted to use laser beams emitted by a laser transmitter as an active light source. By equipping the main UAV with an onboard laser emitter that projects linear laser beams and installing a receiving visual acquisition device on the follower UAVs, we captured and identified the relative position of the laser beam. This enabled us to determine the relative positions between the main UAV and the follower UAVs. Subsequently, the follower UAVs adjusted their flight status based on the detected relative position, achieving coordinated multi-UAV flight. Although this method enhanced the robustness of visual tracking against external factor variations to some extent, we identified significant drawbacks during practical applications. Firstly, due to the highly directional visibility of the laser beam, if obstacles were present between the transmitter and receiver, the laser beam would be blocked by the obstacles, preventing the receiver from detecting the beam’s actual position, ultimately causing the failure of the UAV swarm mission. Secondly, when the distance between the main UAV and follower UAVs became large, even minor vibrations of the main UAV could result in significant deviations of the laser beam at long distances. This often led to the loss of the laser target by the receiving device, causing mission failure of the swarm. In contrast, the active light source system proposed in this paper introduces salient features directly onto the UAV itself. Therefore, even when the main UAV undergoes significant attitude changes or encounters obstacles, the active light source system provides a broader set of identifiable features, ensuring that the follower UAVs can maintain consistent recognition and tracking of the target, thereby achieving robust tracking.
In summary, addressing the challenges and limitations of the current methods in vision-based collaborative navigation of UAV swarms, this paper proposes an active light source system for UAVs based on light intensity matching, as shown in Figure 1. The system consists primarily of an environmental perception and control module, a variable active light source module, and a light source power supply module. It can automatically adjust the brightness and color of the light source according to the environmental light intensity and target scale to obtain high-quality image feature information, thereby improving the accuracy and robustness of vision algorithms for UAV target recognition and tracking.
Considering that the sunlight spectrum consists of absorption spectra of different wavelengths and that light energy is mainly distributed in the visible light band and the red and ultraviolet bands, constructing an intelligent matching model of active light sources in specific wavelength bands is a key factor affecting visual recognition features in the light intensity matching active light source system [22,23]. Therefore, this paper collects visible light images of different color light sources under full-time light intensity, compares them with their respective original images, calculates the relationship between peak signal-to-noise ratios, and constructs a comprehensive peak signal-to-noise ratio recognition effect (CREC-PSNR) control model for selecting the appropriate color. Ultimately, an intelligent matching model for active light sources in specific wavelength bands is constructed.
Additionally, the size of the light shield is another crucial factor influencing the visual recognition features of active light sources. A small-sized light shield has high brightness after intercepting light but a lower recognition rate for distant small targets. In contrast, a large-sized light shield can recognize larger distant targets but with lower brightness and recognition rate. Therefore, we conducted comparative experiments on light shield sizes, comparing the recognition effects of different colors under different sizes of light shields to select the optimal size.
Our contributions are summarized as follows:
(1)
We innovatively propose an active light source system for UAVs based on light intensity matching. This system can intelligently adjust the brightness and color of the light source according to changes in environmental light intensity and target scale, maintaining high performance in various complex and variable environments. This technology significantly enhances the identifiable feature information of UAVs.
(2)
To achieve precise light source matching, we constructed an intelligent matching model for active light sources in specific wavelength bands. By comprehensively collecting visible light images of different color light sources under full-time light intensity, we successfully established a CREC-PSNR control model for accurately selecting the optimal color. Additionally, we conducted in-depth comparative experiments and optimization on the size of the light shield to ensure high recognition rate and brightness even at increased distances.
(3)
We installed the variable active light source feature on a custom quadrotor UAV and conducted comparative experiments with UAVs not equipped with this feature, validating the practicality and versatility of the active light source feature.
The remainder of this paper is organized as follows. Section 2 describes the structure and logical control of the system. Section 3 introduces the impact of the light shield size on the active light source and the design and implementation of the size selection experiments. Section 4 describes the influence of light source color on the active light source and how to select the light source color under different light intensities. Section 5 explains how to verify the feasibility and versatility of the active light source system after its completion. Section 6 provides concluding remarks.

2. System Framework

The system structure of the proposed active light source device primarily consists of three parts: the environmental perception and control module, the light emission module, and the light source power supply module. Figure 2 illustrates the workflow of the active light source system.
As shown in Figure 2, the environmental perception and control module is mainly composed of the light intensity sensing module and the light source intelligent matching model. The light intensity sensing module is used to collect and perceive the intensity of ambient light and output this data to the light source intelligent matching model. The light source intelligent matching model is a control model constructed by analyzing the relationship between light intensity and the optimal light source emitting color. Its function is to adjust the emitting color of the active light source in real time according to different light intensities.
The power supply module for the active light source consists of a 5 V/4000 mAh lithium battery and two constant voltage control modules. The lithium battery is independent from the UAV’s power system and primarily provides stable power to the light source, ensuring consistent lighting performance in various environments. The constant voltage control modules are responsible for regulating the voltage, ensuring the stability and safety of the power supply, and preventing voltage fluctuations from damaging the light source.
The light emission module consists of the laser emission module and a light-shielding cover. The laser emission module is composed of three main parts: a constant voltage control module, a laser driver board, and a laser diode. During operation, the constant voltage control module supplies a stable current to the laser driver board, which drives the diode to continuously emit high-intensity laser light. The emitted laser beam then illuminates the light-shielding cover. The scattered light is precisely controlled by the cover, forming a uniform light field and ultimately creating a prominent active light source. In this study, the laser emission module includes three colors: red, green, and blue. By combining any two of these colors, additional colors such as yellow, pink, and cyan can be produced, resulting in a total of six different color combinations. This design meets the lighting requirements of various scenarios. The multi-color laser emission module not only enhances the flexibility of the light source but also improves its adaptability for specific applications.
During system operation, the light intensity sensing and color adjustment module first collects the ambient light intensity and outputs this data to the light intensity-color control model, which selects the optimal active light source color for the current light intensity. This ensures that the optimal color for each scene is selected, converting this color into control signals that are output to the light source power supply module. The different colors of the light source exhibit varying recognition performances under different light intensities. Therefore, selecting the optimal color based on light intensity to ensure good recognition performance of the active light source device is a key research objective of this paper.
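To make this workflow concrete, the following is a minimal Python sketch of one perception-decision-drive cycle. The sensor and driver interfaces are our own placeholders (the paper does not specify firmware details), and the 700 nit switching threshold is illustrative only; the actual rule is the CREC-PSNR matching model developed in Section 4.

```python
import random
import time

def read_light_intensity() -> float:
    """Environmental perception: in hardware this would poll the light
    intensity sensor; simulated here with a random value in nit."""
    return random.uniform(0, 8000)

def match_color(intensity: float) -> str:
    """Light source intelligent matching model: low ambient intensity maps
    to red and high intensity to blue, following the pattern in Table 7.
    The 700 nit threshold is an illustrative stand-in for the real model."""
    return "red" if intensity < 700 else "blue"

def drive_light_source(color: str) -> None:
    """Stand-in for the control signal sent to the constant voltage modules
    that power the corresponding laser emission module."""
    print(f"emitting {color}")

if __name__ == "__main__":
    for _ in range(3):  # three perception-decision-drive cycles
        drive_light_source(match_color(read_light_intensity()))
        time.sleep(1.0)  # polling interval (assumed)
```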

3. Selection of Light Shield Size for Active Light Source

The primary function of the light shield is to diffuse the point laser beam to form a recognizable light source. As the distance between the target object and the visual sensor changes, the pixel distribution and details of the target object in the image will alter [24]. Different sizes of light shields emit different brightness levels after intercepting the same light source. Smaller light shields emit brighter light after intercepting the light, but as the distance increases, the recognized target appears smaller, leading to a decrease in recognition rate. Conversely, larger light shields provide better recognition of targets at greater distances but emit less bright light compared to smaller shields, which also results in a lower recognition rate. Therefore, selecting the appropriate size for the light shield is crucial in the design of the active light source feature. To determine the optimal light shield size, we conducted experiments under consistent light intensity and background conditions.
To identify the light shield size that offers the best recognition effect within a certain distance, we selected five different light shield sizes with diameters of 35 mm, 50 mm, 60 mm, 80 mm, and 95 mm. Figure 3 shows the actual images of the five light shield sizes.
All light shields are identical in every property other than size, including transparency and brightness. Under a light intensity of 700 nit and against the same background, each light shield size was tested at distances of 30 cm, 60 cm, 90 cm, 120 cm, and 150 cm while emitting red, blue, green, pink, cyan, and yellow light. We collected 25 images per color at each distance, yielding 150 images per distance for each light shield. With five distances per shield, this produced 750 images per light shield and 3750 images in total for all five shields. Figure 4 illustrates the experimental setup for selecting the light shield size for the active light source.
To evaluate the recognition effect of different light shield sizes at varying distances, we applied the YOLOv7 [25] recognition algorithm to the collected images. We selected precision (P) and confidence (C) as the evaluation metrics for recognition performance. To comprehensively evaluate the relationship between size and distance under different colors, we aggregated the recognition data of the six colors at the same distance and proposed a metric called the recognition effect of the lampshade at a specific distance (RD), calculated as shown in Equation (1):

$$RD_{distance} = \sum_{i=1}^{6} \left( P_{(distance,\,i)} \times C_{(distance,\,i)} \right), \qquad (1)$$

where $distance$ denotes the distance between the vision acquisition device and the light source; $i$ ranges from 1 to 6 and represents the six colors red, blue, green, pink, cyan, and yellow, respectively; and $P_{(distance,\,i)}$ and $C_{(distance,\,i)}$ are the precision and confidence of color $i$ at that distance. This metric provides a comprehensive assessment of the recognition performance of a light shield size at a specific distance, considering the combined effect of precision and confidence across the different colors.
To select the appropriate size for the light shield, this study considered the impact of both distance and color. Using the RD values at the various distances, we further proposed an evaluation metric, the integrated recognition effect of the lampshade at a given size ($IREL_{size}$), to better assess the comprehensive recognition performance of the light shield across distances, calculated as shown in Equation (2):

$$IREL_{size} = RD_{30} \times W_{30} + RD_{60} \times W_{60} + RD_{90} \times W_{90} + RD_{120} \times W_{120} + RD_{150} \times W_{150}, \qquad (2)$$

where $size$ represents the size of the light shield and $W_{d}$ represents the weight coefficient for distance $d$. Given that the difficulty of target recognition increases with distance, the weight coefficients in this study are set to 5%, 15%, 20%, 25%, and 35% for the distances of 30 cm, 60 cm, 90 cm, 120 cm, and 150 cm, respectively. This metric ensures a comprehensive assessment of the light shield size, taking into account both distance and recognition performance, with higher weights assigned to longer distances due to the increased recognition difficulty.
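Both metrics are straightforward to compute from the experimental tables. Below is a minimal Python sketch of Equations (1) and (2); the {distance: {color: (P, C)}} data layout is our assumption, not something specified in the paper.

```python
# Weight coefficients W_d from Section 3; they sum to 1.
WEIGHTS = {30: 0.05, 60: 0.15, 90: 0.20, 120: 0.25, 150: 0.35}
COLORS = ["red", "blue", "green", "pink", "cyan", "yellow"]

def rd(results, distance):
    """Equation (1): recognition effect of the lampshade at one distance."""
    return sum(p * c for p, c in (results[distance][color] for color in COLORS))

def irel(results):
    """Equation (2): distance-weighted integrated recognition effect,
    requiring results for all five test distances."""
    return sum(rd(results, d) * w for d, w in WEIGHTS.items())

# Example with the 60 mm shield's 150 cm row from Table 3:
table3_150cm = {"red": (1.0, 0.902), "blue": (1.0, 0.910), "green": (1.0, 0.812),
                "yellow": (1.0, 0.720), "pink": (1.0, 0.813), "cyan": (1.0, 0.890)}
print(rd({150: table3_150cm}, 150))  # contribution of the 150 cm distance
```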
Based on the methodology described above, we conducted experiments to evaluate the performance of different light shield sizes. The experimental data for the five light shield sizes are shown in Table 1, Table 2, Table 3, Table 4 and Table 5.
From the recognition data in Table 1, Table 2, Table 3, Table 4 and Table 5, we can see that although the precision of each color remains high across different sizes, the differences are not very pronounced. However, based on the confidence indicator, it is evident that when the light shield size is 60 mm, the confidence values corresponding to the six colors are the highest among all sizes. To better illustrate the differences between the various sizes, we used Equations (1) and (2) for a comprehensive analysis of the data in Table 1, Table 2, Table 3, Table 4 and Table 5. The results are shown in Figure 5.
According to Figure 5, the 35 mm light shield performs well at short distances but poorly at long distances. The 50 mm and 60 mm light shields show good recognition performance across all distances. In contrast, the 80 mm and 95 mm light shields perform poorly at short distances and better than the 35 mm shield at long distances, but worse than the 50 mm and 60 mm shields.
These results can be explained by the fact that smaller light shields emit brighter light, which enhances recognition rates. However, as a recognition target, small light shields appear too small at long distances, reducing their recognition rate. Larger light shields, on the other hand, display more details and features as recognition targets at long distances, but their intercepted light brightness is lower compared to smaller shields, decreasing their recognition rates across all distances.
Therefore, the 50 mm and 60 mm light shields exhibit the best recognition performance. Although the 60 mm light shield performs slightly worse than the 50 mm shield at 30 cm, it outperforms the 50 mm shield at all other distances. Given that higher weights are assigned to longer distances in our evaluation metric, the IREL value for the 60 mm light shield is greater than that for the 50 mm light shield, making it the highest among the five sizes. In summary, the 60 mm light shield size is selected for the active light source system due to its superior integrated recognition effect, balancing brightness and detail across different distances.

4. Optimal Light Source Color Selection

When training recognition models, the original images captured by visual acquisition devices are used as training images. However, in actual recognition scenarios, the active light source is influenced by external lighting conditions, resulting in variations in perceived colors under different light intensities [26]. To ensure that the active light source color maintains recognition performance close to the original images, the color deviation under different lighting conditions should be minimal. Therefore, the optimal active light source color is selected by comparing images taken under various light intensities with the corresponding original images captured under a standard light intensity of 700 nit.
To determine the best active light source color under varying light intensities, images were collected of red, green, blue, yellow, pink, and cyan light sources under different lighting conditions. Specifically, from 8 AM to 6 PM, covering a light intensity range from 200 nit to 8000 nit, photos were taken every 10 min, capturing the six different colored light sources. Additionally, ambient light intensity data were recorded at 10 min intervals. The photos taken at 700 nit were used as the baseline original images for each color. The collected data were then analyzed to select the optimal light source color for different light intensities. Figure 6 and Figure 7 illustrate the experimental setup for selecting the optimal light source color.
After collecting the original and comparison images, the peak signal-to-noise ratio (PSNR) [27] was selected to evaluate the images. A higher PSNR value indicates that the comparison image is closer to the original image. PSNR is based on the mean squared error (MSE), which is calculated as shown in Equation (3) for a given original image $I$ of size $m \times n$ and a comparison image $K$:

$$MSE = \frac{1}{mn}\sum_{i=0}^{m-1}\sum_{j=0}^{n-1}\left[ I(i,j) - K(i,j) \right]^2, \qquad (3)$$

where $(i, j)$ represents the pixel coordinates in the image.
For color images with three channels (BGR), the average $\overline{MSE}$ is calculated as shown in Equation (4):

$$\overline{MSE} = \frac{MSE_B + MSE_G + MSE_R}{3}. \qquad (4)$$

Based on the above formulas, the PSNR is calculated as shown in Equation (5):

$$PSNR = 10 \cdot \log_{10}\!\left(\frac{MAX_I^2}{\overline{MSE}}\right), \qquad (5)$$

where $MAX_I$ is the maximum pixel value of the image, which is 255.
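As a concrete reference, Equations (3)-(5) reduce to a few lines of NumPy. This sketch assumes 8-bit BGR images of identical shape, as loaded by OpenCV:

```python
import numpy as np

def psnr(original: np.ndarray, comparison: np.ndarray) -> float:
    """Peak signal-to-noise ratio over the three BGR channels, Eqs. (3)-(5)."""
    diff = original.astype(np.float64) - comparison.astype(np.float64)
    mse_per_channel = np.mean(diff ** 2, axis=(0, 1))  # MSE_B, MSE_G, MSE_R, Eq. (3)
    mse_bar = float(mse_per_channel.mean())            # averaged MSE, Eq. (4)
    if mse_bar == 0:
        return float("inf")                            # identical images
    return 10 * np.log10(255 ** 2 / mse_bar)           # Eq. (5), MAX_I = 255
```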
Using the above experimental methods and evaluation criteria, we compared the collected comparison images with the original images and calculated the PSNR values for the comparison images. Figure 8 shows the relationship between the PSNR values of different colors and varying light intensities.
The PSNR value helps determine the similarity between images of different colored active light sources and the original images under varying light intensities. To further ascertain which color performs best for recognition under different lighting conditions, we introduce a comprehensive evaluation metric combining PSNR and overall recognition effectiveness, termed the comprehensive recognition effect of a given color (CREC). The CREC for a given color with a 60 mm light shield is calculated using Equation (6):

$$CREC_i = P_{(30,i)} \times C_{(30,i)} \times 5\% + P_{(60,i)} \times C_{(60,i)} \times 15\% + P_{(90,i)} \times C_{(90,i)} \times 20\% + P_{(120,i)} \times C_{(120,i)} \times 25\% + P_{(150,i)} \times C_{(150,i)} \times 35\%, \qquad (6)$$

where $P$ and $C$ are the precision and confidence values, respectively, and the subscript $(distance,\,i)$ indicates the distance and color ($i$ = 1 to 6 representing red, blue, green, pink, cyan, and yellow, respectively).
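Equation (6) reuses the distance weights from Section 3; a minimal sketch, assuming the same {distance: {color: (P, C)}} layout as in the earlier example:

```python
WEIGHTS = {30: 0.05, 60: 0.15, 90: 0.20, 120: 0.25, 150: 0.35}

def crec(results, color):
    """Equation (6): distance-weighted sum of precision x confidence
    for a single color with the 60 mm light shield."""
    return sum(results[d][color][0] * results[d][color][1] * w
               for d, w in WEIGHTS.items())
```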
Using the data from Table 3 and the PSNR values computed above, the CREC-PSNR values can be obtained for the different colors of active light sources across light intensities. Figure 9 shows the variation in CREC-PSNR with luminance.
From Figure 9, it is evident that red and blue CREC-PSNR values significantly exceed those of other colors, indicating that red and blue perform better in terms of recognition under varying light intensities.
To decide between red and blue under different light intensities, we fitted their data using luminance as the independent variable and CREC-PSNR as the dependent variable through nonlinear regression. The fitting curves are shown in Figure 10.
Equations (7) and (8) show the results of polynomial fitting of the red laser and the blue laser CREC-PSNR values against the light intensity, respectively:

$$f_{red}(x) = -4.261 \times 10^{-26} x^7 + 2.521 \times 10^{-23} x^6 - 2.256 \times 10^{-17} x^5 + 3.415 \times 10^{-13} x^4 - 2.191 \times 10^{-9} x^3 + 6.907 \times 10^{-6} x^2 + 0.01005\,x + 30.87, \qquad (7)$$

$$f_{blue}(x) = -1.664 \times 10^{-25} x^7 + 5.136 \times 10^{-21} x^6 - 6.469 \times 10^{-17} x^5 + 4.283 \times 10^{-13} x^4 - 1.589 \times 10^{-9} x^3 + 3.181 \times 10^{-6} x^2 - 0.002774\,x + 26.59, \qquad (8)$$

where $x$ is the ambient light intensity.
Therefore, the active light source can select the corresponding color by comparing the two fitted values: when $f_{red}(x)$ is greater than $f_{blue}(x)$, the recognition of the red light source is better than that of the blue under the current light intensity, and vice versa, as expressed in Equation (9):

$$Color = \begin{cases} \text{red}, & f_{red}(x) > f_{blue}(x) \\ \text{blue}, & f_{blue}(x) > f_{red}(x) \end{cases} \qquad (9)$$
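The fitting and selection steps map directly onto NumPy's polynomial utilities. The sketch below substitutes synthetic placeholder series for the measured full-day CREC-PSNR data (not reproduced here), so the fitted coefficients are illustrative rather than those of Equations (7) and (8); `Polynomial.fit` is used because it rescales the domain internally, which keeps a degree-7 fit numerically stable over the 200-8000 nit range.

```python
import numpy as np
from numpy.polynomial import Polynomial

# Placeholder measurements; the real series span 200-8000 nit (Section 4).
lux = np.linspace(200, 8000, 60)
crec_psnr_red = 30.9 + 0.01 * lux - 1.0e-6 * lux ** 2   # illustrative shape only
crec_psnr_blue = 26.6 + 0.02 * lux - 1.5e-6 * lux ** 2  # illustrative shape only

# Degree-7 polynomial fits, mirroring Equations (7) and (8).
f_red = Polynomial.fit(lux, crec_psnr_red, 7)
f_blue = Polynomial.fit(lux, crec_psnr_blue, 7)

def choose_color(x: float) -> str:
    """Equation (9): pick the color whose fitted CREC-PSNR is higher at x."""
    return "red" if f_red(x) > f_blue(x) else "blue"

print(choose_color(400.0), choose_color(4000.0))
```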
Based on the above analysis, we developed a control model to select the best recognition color under different light intensities. This model has been integrated into our active light source device, enhancing its effectiveness in various lighting conditions. The proposed active light source system significantly improves the accuracy and robustness of visual algorithms for UAV identification and tracking, showcasing the potential for practical applications in complex environments.

5. Practical Recognition Experiments with Active Light Sources

5.1. Implementation Details

To verify whether the active light source device can enhance the recognition rate of UAVs in real-world scenarios, we mounted the active light source device on the test UAV, whose relevant parameters are shown in Table 6. We conducted recognition tests and comparisons between UAVs equipped with the device and those without it across various application scenarios. Figure 11 shows the comparison image of the UAV equipped with the active light source and the conventional UAV.
A total of 320 images of UAVs with and without the active light source device in the same indoor scene were collected to form the training set. Eight scenarios were selected as the test environments: outdoor without shadow, outdoor with shadow, complex background with grass, complex background with trees, transparent corridor, indoor without light, indoor with light, and light-free environment. Each of these eight scenarios varies in terms of lighting intensity, background complexity, and active light source color. The specific parameters of the test scenarios are detailed in Table 7. A total of 1600 images of UAVs with and without the active light source device were collected across these scenarios to form the test set. To better validate the improvement in UAV recognition robustness provided by the proposed device, we conducted experiments on the aforementioned dataset using two algorithms: YOLOv7 and RT-DETR [28]. YOLOv7 is an end-to-end object detection algorithm, while RT-DETR is a real-time object detection model based on the Transformer architecture. Additionally, precision, recall, and mAP50 were selected as evaluation metrics. These metrics were used to compare the two setups, verifying the feasibility and generality of the device in practical recognition scenarios.
Precision measures the proportion of correctly identified positive samples out of all predicted positive samples. Recall measures the proportion of correctly identified positive samples out of all actual positive samples. Precision and recall are calculated as shown in Equations (10) and (11):

$$Precision = \frac{TP}{TP + FP}, \qquad (10)$$

$$Recall = \frac{TP}{TP + FN}, \qquad (11)$$

where $TP$ represents true positives, $FP$ represents false positives, and $FN$ represents false negatives. The average precision ($AP$) represents the mean precision for a single class, while the mean average precision (mAP) is the average of $AP$ values across all classes, serving as a comprehensive metric for evaluating model performance in multi-class object detection tasks. $AP$ and $mAP$ are calculated as shown in Equations (12) and (13):

$$AP = \int_0^1 P(r)\,dr, \qquad (12)$$

$$mAP = \frac{\sum_{j=1}^{S} AP(j)}{S}, \qquad (13)$$

where $S$ denotes the overall number of categories.
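For completeness, Equations (10)-(13) in code form. The AP integral is approximated here by trapezoidal integration over a sampled precision-recall curve, which is our simplification; detection frameworks typically use interpolated variants.

```python
import numpy as np

def precision(tp: int, fp: int) -> float:
    return tp / (tp + fp)                      # Equation (10)

def recall(tp: int, fn: int) -> float:
    return tp / (tp + fn)                      # Equation (11)

def average_precision(p: np.ndarray, r: np.ndarray) -> float:
    """Equation (12): area under the precision-recall curve."""
    order = np.argsort(r)                      # integrate over increasing recall
    return float(np.trapz(p[order], r[order]))

def mean_average_precision(aps: list) -> float:
    return sum(aps) / len(aps)                 # Equation (13), S = len(aps)
```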

5.2. Quantitative Comparison and Analysis

The recognition results of UAVs with and without active light source devices in two different algorithms for different scenarios are shown in Table 8. From the recognition results, it can be observed that in scenarios with strong or concentrated lighting and complex backgrounds, such as outdoor without shadows, complex background with grass, complex background with trees, and indoor with lights, the metrics for UAVs without the active light source device are generally lower than those for UAVs with the active light source device.
In scenarios with strong or concentrated lighting, like outdoor without shadows and indoor with lights, the high external light intensity and concentrated lighting cause the UAV body to reflect more natural light, leading to missed detections and lower confidence scores. This results in a significant drop in R values and mAP50. For UAVs with the active light source device, the device emits light independently, reducing the reflection of natural light from the UAV body and serving as a distinctive feature to aid recognition. Hence, R values and mAP50 for UAVs with the active light source device are significantly higher in these scenarios. In the outdoor without shadow scene, the P value for UAVs with the active light source is lower because the active light source can become washed out in strong light, and its light can affect the surrounding environment, causing false positives. However, it still maintains a relatively good level.
In complex scenarios like complex background with grass and complex background with trees, the R values and mAP50 for UAVs without the active light source device are significantly lower. In complex backgrounds, UAVs without the active light source blend in, making it difficult for the recognition algorithm to detect them, leading to higher missed detection rates and lower confidence, thus significantly reducing R values and mAP50. UAVs with the active light source, however, stand out due to the distinct light source, making them easier to recognize, resulting in much higher R values and mAP50 in complex scenarios.
In low-light scenarios like indoor without lights and light-free environment, the metrics for UAVs without the active light source device are lower than those for UAVs with the device. In these scenarios, especially in no light, the poor lighting conditions cause loss of details, leading to lower recognition rates and confidence, thus lower R values and mAP50. UAVs with the active light source, however, provide additional lighting in dark environments, revealing more details and serving as a distinctive feature, aiding recognition. Therefore, in low-light scenarios, UAVs with the active light source maintain higher metrics.
In simpler scenarios like outdoor with shadows, both types of UAVs retain good details as there is no direct lighting, resulting in high metrics for both setups. However, in the transparent corridor scenario, the P values and mAP50 for UAVs with the active light source are lower than those without. The reflective ground in this scenario causes the light from the active light source to create reflections, slightly affecting recognition. Thus, in this scenario, the P values and mAP50 for UAVs with the active light source are slightly lower, but the overall metric values remain relatively high.
Finally, from the average metrics across all scenarios, it can be observed that UAVs equipped with the active light source device outperformed those without it in terms of both recognition accuracy and recall rate. Specifically, for the YOLOv7 algorithm, the UAVs with the active light source achieved a 13.1% increase in precision, a 30% increase in recall rate, and a 21% increase in mAP50. For the RT-DETR algorithm, the precision improved by 13.6%, the recall rate increased by 29.6%, and the mAP50 rose by 19.5%. These results indicate that the active light source effectively enhances the saliency features of the UAVs, thereby improving recognition performance across various scenarios.
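These reported gains correspond to percentage-point differences between the mean values in Table 8; a quick arithmetic check:

```python
# Mean values from Table 8 as (P, R, mAP50), with vs. without active light source.
yolo_with, yolo_without = (0.969, 0.955, 0.962), (0.838, 0.655, 0.752)
detr_with, detr_without = (0.971, 0.960, 0.944), (0.835, 0.664, 0.749)

for name, w, wo in [("YOLOv7", yolo_with, yolo_without),
                    ("RT-DETR", detr_with, detr_without)]:
    deltas = [round((a - b) * 100, 1) for a, b in zip(w, wo)]
    print(name, "P/R/mAP50 gains (points):", deltas)
# YOLOv7  -> [13.1, 30.0, 21.0]
# RT-DETR -> [13.6, 29.6, 19.5]
```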

5.3. Qualitative Comparison and Analysis

Further, a qualitative comparison of the recognition performance of UAVs with and without the active light source across different scenarios is presented. Figure 12 shows the comparison of UAVs with and without the active light source in various scenarios.
As illustrated in Figure 12, in scenarios with strong or concentrated lighting and complex backgrounds (a), (c), (d), and (f), UAVs without the active light source can blend with the background, resulting in difficulties in recognition. In contrast, UAVs with the active light source maintain high-confidence recognition. In simple scenarios (b), both types of UAVs exhibit good recognition performance. In environments with low lighting (g), UAVs without the active light source experience significant detail loss, making recognition challenging. In completely dark environments (h), UAVs without the active light source are almost invisible, whereas UAVs with the active light source can be identified through the active light source itself, allowing for effective recognition of the UAV.
In summary, in the vast majority of scenarios, the active light source device serves as a prominent feature that improves the UAV recognition rate. However, in a few specific scenarios, such as the transparent corridor, the device exhibits some limitations, resulting in a slight decrease in the recognition performance of UAVs equipped with it.

6. Discussion and Conclusions

6.1. Discussion

Through detailed experiments and analysis, the significant role of the active light source device in UAV recognition was systematically discussed. The experimental results demonstrated that UAVs equipped with the active light source device exhibited substantial advantages in recognition performance under various complex and extreme lighting conditions compared to those without the device. Specifically, whether in strong light, weak light, complex backgrounds, or simple scenes, the active light source effectively improved the recognition rate, recall rate, and mAP50 values, thereby enhancing the applicability and reliability of UAVs in various environments.
Firstly, in strong light environments, the active light source device effectively reduced the reflection of natural light on the UAV body, mitigating the risk of missed or incorrect identifications. It also served as a unique recognition feature, enhancing the UAV’s visibility under complex lighting conditions. Secondly, in weak or no-light environments, the active light source provided the necessary lighting compensation, making the UAV more prominent in images and significantly improving the accuracy of the recognition algorithm. Furthermore, in complex backgrounds, the sharp contrast created by the active light source against the background allowed the UAV to be accurately recognized even in highly variable scenes.
Although the active light source device may have a slight impact on recognition performance due to ground reflection in specific scenarios (such as transparent corridors), its advantages far outweigh the disadvantages overall. The comprehensive evaluation data in Table 8 further corroborate this, showing significant improvements in recognition rate and recall rate for UAVs equipped with the active light source, fully demonstrating its importance and effectiveness in UAV recognition.

6.2. Conclusions

This study addresses the issue of UAV recognition and localization based on vision in complex environments by proposing an active light source system based on light intensity matching. The components of the device were studied, tested, and designed according to the functional requirements of the system. In order to select the optimal size of the light source shield, comparison experiments were carried out on several sizes of shield, and the optimal size of the shield was selected. To determine the most effective color for the active light source under different light intensities, recognition performance experiments were performed, and a control model for color and light intensity was established to select the optimal active light source color for different conditions. Finally, to verify the feasibility of our active light source device, it was mounted on a UAV and compared with a UAV without the active light source.
In summary, this research provides new insights and methods for the development of UAV recognition and localization technology. However, the improvement in recognition performance provided by the active light source device is limited in some specific scenarios. Our future goal is to enhance the device's effectiveness in these scenarios for practical applications. For example, by adding a power adjustment module, the active light source device could offer more brightness and color options, thereby improving UAV recognition performance in a wider range of scenarios. Alternatively, by collecting images in different modalities and fusing their complementary information, robust recognition and localization of UAVs can ultimately be achieved.

Author Contributions

Conceptualization: R.M., H.L. and S.G.H.; data curation: R.M. and T.W.; writing—original draft preparation: R.M. and T.W.; writing—review and editing: R.M., S.G.H. and Z.Z.; supervision: H.L., S.G.H. and Z.Z.; formal analysis: R.M. and T.W.; validation: R.M. and T.W.; methodology: R.M., T.W. and Z.Z.; resources: R.M. and H.L. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported in part by the National Natural Science Foundation of China (Grant No. 32201679); and in part by the Science Foundation of Fujian Province of China (Grant No. 2022J05230); and in part by the Open Project Program of Guangdong Provincial Key Laboratory of Agricultural Artificial Intelligence (Grant No. GDKL-AAI-2023008); and in part by the Minjiang University Talent Introduction Technology Project (Grant No. MJY22012); and in part by the Science and Technology Plan of Guangdong Province of China (2023B10564002).

Data Availability Statement

Data are contained within the article.

Acknowledgments

The authors wish to thank sincerely the editors and anonymous reviewers for their critical comments and suggestions to improve the manuscript.

Conflicts of Interest

The authors declare no conflicts of interest.

References

1. Mohsan, S.A.H.; Khan, M.A.; Noor, F.; Ullah, I.; Alsharif, M.H. Towards the unmanned aerial vehicles (UAVs): A comprehensive review. Drones 2022, 6, 147.
2. Su, J.; Zhu, X.; Li, S.; Chen, W.-H. AI meets UAVs: A survey on AI empowered UAV perception systems for precision agriculture. Neurocomputing 2023, 518, 242–270.
3. Ming, R.; Jiang, R.; Luo, H.; Lai, T.; Guo, E.; Zhou, Z. Comparative analysis of different UAV swarm control methods on unmanned farms. Agronomy 2023, 13, 2499.
4. Ahmed, N.; Pawase, C.J.; Chang, K.H. Distributed 3-D path planning for multi-UAVs with full area surveillance based on particle swarm optimization. Appl. Sci. 2021, 11, 3417.
5. Yu, Y.; Lee, S. Efficient multi-UAV path planning for collaborative area search operations. Appl. Sci. 2023, 13, 8728.
6. Adoni, W.Y.H.; Lorenz, S.; Fareedh, J.S.; Gloaguen, R.; Bussmann, M. Investigation of autonomous multi-UAV systems for target detection in distributed environment: Current developments and open challenges. Drones 2023, 7, 263.
7. Shen, H.; Lin, D.; Yang, X.; He, S. Vision-based multi-object tracking through UAV swarm. IEEE Geosci. Remote Sens. Lett. 2023, 20, 6008905.
8. Tong, P.; Yang, X.; Yang, Y.; Liu, W.; Wu, P. Multi-UAV collaborative absolute vision positioning and navigation: A survey and discussion. Drones 2023, 7, 261.
9. Lissandrini, N.; Michieletto, G.; Antonello, R.; Galvan, M.; Franco, A.; Cenedese, A. Cooperative optimization of UAVs formation visual tracking. Robotics 2019, 8, 52.
10. Xu, X.; Zhuge, S.; Li, C.; Ning, C.; Zhong, L.; Lin, B.; Yang, X.; Zhang, X. A vision-only relative distance calculation method for multi-UAV systems. Aerosp. Sci. Technol. 2023, 142, 108665.
11. Sumetheeprasit, B.; Rosales Martinez, R.; Paul, H.; Ladig, R.; Shimonomura, K. Variable baseline and flexible configuration stereo vision using two aerial robots. Sensors 2023, 23, 1134.
12. Cheng, G.; Yang, G.; Zhang, D.; Hu, J.; Zhang, J.; Xu, Z. The multi-UAV collaborative localization based on visual and inertial sensors. In Proceedings of the International Conference on Autonomous Unmanned Systems, Nanjing, China, 8–11 September 2023; pp. 392–401.
13. Tang, Y.; Hu, Y.; Cui, J.; Liao, F.; Lao, M.; Lin, F.; Teo, R.S.H. Vision-aided multi-UAV autonomous flocking in GPS-denied environment. IEEE Trans. Ind. Electron. 2018, 66, 616–626.
14. Bai, C.; Yan, P.; Piao, H.; Pan, W.; Guo, J. Learning-based multi-UAV flocking control with limited visual field and instinctive repulsion. IEEE Trans. Cybern. 2023, 54, 462–475.
15. Fu, Y.; Xiong, H.; Dai, X.; Nian, X.; Wang, H. Multi-UAV target localization based on 3D object detection and visual fusion. In Proceedings of the International Conference on Autonomous Unmanned Systems, Nanjing, China, 8–11 September 2023; pp. 226–235.
16. Zhang, Y.; Yu, Y.-F.; Chen, L.; Ding, W. Robust correlation filter learning with continuously weighted dynamic response for UAV visual tracking. IEEE Trans. Geosci. Remote Sens. 2023, 61, 4705814.
17. Li, Y.; Fan, Q.; Huang, H.; Han, Z.; Gu, Q. A modified YOLOv8 detection network for UAV aerial image recognition. Drones 2023, 7, 304.
18. Walter, V.; Staub, N.; Franchi, A.; Saska, M. UVDAR system for visual relative localization with application to leader–follower formations of multirotor UAVs. IEEE Robot. Autom. Lett. 2019, 4, 2637–2644.
19. Park, H.W.; Choi, I.S.; Park, S.K.; Choi, J.S. Leader-follower formation control using infrared camera with reflective tag. In Proceedings of the International Conference on Ubiquitous Robots and Ambient Intelligence (URAI), Jeju, Republic of Korea, 30 October–2 November 2013; pp. 321–324.
20. Ming, R.; Zhou, Z.; Lyu, Z.; Luo, X.; Zi, L.; Song, C.; Zang, Y.; Liu, W.; Jiang, R. Laser tracking leader-follower automatic cooperative navigation system for UAVs. Int. J. Agric. Biol. Eng. 2022, 15, 165–176.
21. Ming, R.; Zhou, Z.; Luo, X.; Liu, W.; Le, Z.; Song, C.; Jiang, R.; Zang, Y. Optical tracking system for multi-UAV clustering. IEEE Sens. J. 2021, 21, 19382–19394.
22. Arya, S.; Chung, Y.H. A comprehensive survey on optical scattering communications: Current research, new trends, and future vision. IEEE Commun. Surv. Tutor. 2023, 25, 1–15.
23. Curtiss, J.M.; Languirand, E.R. Active illumination source for hyperspectral spectrometer in UAV/UGV mounted applications. In Proceedings of the Chemical, Biological, Radiological, Nuclear, and Explosives (CBRNE) Sensing XXIII, Orlando, FL, USA, 6–12 June 2022; Volume 12116, pp. 146–153.
24. Xu, C.; Dongliang, P.; Yu, G. Real-time object detection for UAV images based on improved YOLOv5s. Opto-Electron. Eng. 2022, 49, 210372-1–210372-13.
25. Wang, C.Y.; Bochkovskiy, A.; Liao, H.Y.M. YOLOv7: Trainable bag-of-freebies sets new state-of-the-art for real-time object detectors. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Vancouver, BC, Canada, 17–24 June 2023; pp. 7464–7475.
26. Ye, J.; Fu, C.; Zheng, G.; Cao, Z.; Li, B. DarkLighter: Light up the darkness for UAV tracking. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Prague, Czech Republic, 27 September–1 October 2021; pp. 3079–3085.
27. Hore, A.; Ziou, D. Image quality metrics: PSNR vs. SSIM. In Proceedings of the International Conference on Pattern Recognition, Istanbul, Turkey, 23–26 August 2010; pp. 2366–2369.
28. Zhao, Y.; Lv, W.; Xu, S.; Wei, J.; Wang, G.; Dang, Q.; Liu, Y.; Chen, J. DETRs beat YOLOs on real-time object detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 17–21 June 2024; pp. 16965–16974.
Figure 1. Light intensity matched active light source system for UAVs. Note: 1. 5 V DC power supply interface; 2. MCU; 3. Light intensity sensor module; 4. Red laser constant voltage control module; 5. Blue laser constant voltage control module; 6. Light cover; 7. Red laser emission module; 8. Blue laser emission module.
Figure 2. Active light source device system workflow.
Figure 3. Different sizes of light shield.
Figure 4. Experimental principle of active light source shield size selection.
Figure 5. RD and IREL for different sizes of light shield.
Figure 6. Principle of optimal light source color selection experiment.
Figure 7. Experimental setup for optimal light source color selection: blue light source (a), red light source (b).
Figure 8. Relationship between PSNR values of different colors and light intensity changes.
Figure 9. CREC-PSNR variation with luminance.
Figure 10. Nonlinear fitting curves for red and blue.
Figure 11. Comparison image of the UAV equipped with the active light source and the conventional UAV: UAV with active light source (a), UAV without active light source (b). Note: 1. Active light source; 2. Support module; 3. UAV.
Figure 12. Data comparison of UAVs with and without active light source in various scenarios.
Table 1. Precision (P) and confidence (C) of the 35 mm light shield at different distances.

Distance | Red (P/C) | Blue (P/C) | Green (P/C) | Yellow (P/C) | Pink (P/C) | Cyan (P/C)
30 cm | 1.000/0.935 | 1.000/0.927 | 1.000/0.905 | 1.000/0.836 | 1.000/0.892 | 1.000/0.869
60 cm | 1.000/0.789 | 1.000/0.810 | 1.000/0.774 | 1.000/0.663 | 1.000/0.797 | 1.000/0.779
90 cm | 0.400/0.442 | 1.000/0.708 | 1.000/0.727 | 1.000/0.358 | 1.000/0.536 | 1.000/0.700
120 cm | 0.400/0.621 | 1.000/0.760 | 1.000/0.555 | 1.000/0.268 | 1.000/0.499 | 1.000/0.702
150 cm | 0.960/0.495 | 1.000/0.755 | 1.000/0.610 | 1.000/0.249 | 1.000/0.393 | 1.000/0.692
Table 2. Precision (P) and confidence (C) of the 50 mm light shield at different distances.

Distance | Red (P/C) | Blue (P/C) | Green (P/C) | Yellow (P/C) | Pink (P/C) | Cyan (P/C)
30 cm | 1.000/0.851 | 1.000/0.893 | 1.000/0.825 | 1.000/0.896 | 1.000/0.868 | 1.000/0.845
60 cm | 1.000/0.932 | 1.000/0.918 | 1.000/0.884 | 1.000/0.920 | 1.000/0.902 | 1.000/0.893
90 cm | 1.000/0.860 | 1.000/0.889 | 1.000/0.850 | 1.000/0.820 | 1.000/0.832 | 1.000/0.874
120 cm | 1.000/0.897 | 1.000/0.913 | 1.000/0.851 | 1.000/0.745 | 1.000/0.836 | 1.000/0.846
150 cm | 1.000/0.873 | 1.000/0.882 | 1.000/0.786 | 1.000/0.647 | 1.000/0.759 | 1.000/0.835
Table 3. Precision (P) and confidence (C) of the 60 mm light shield at different distances.

Distance | Red (P/C) | Blue (P/C) | Green (P/C) | Yellow (P/C) | Pink (P/C) | Cyan (P/C)
30 cm | 1.000/0.790 | 1.000/0.812 | 1.000/0.804 | 1.000/0.872 | 1.000/0.839 | 1.000/0.828
60 cm | 1.000/0.939 | 1.000/0.932 | 1.000/0.912 | 1.000/0.933 | 1.000/0.908 | 1.000/0.889
90 cm | 1.000/0.914 | 1.000/0.910 | 1.000/0.883 | 1.000/0.882 | 1.000/0.863 | 1.000/0.897
120 cm | 1.000/0.916 | 1.000/0.921 | 1.000/0.866 | 1.000/0.786 | 1.000/0.833 | 1.000/0.885
150 cm | 1.000/0.902 | 1.000/0.910 | 1.000/0.812 | 1.000/0.720 | 1.000/0.813 | 1.000/0.890
Table 4. Precision (P) and confidence (C) of the 80 mm light shield at different distances.

Distance | Red (P/C) | Blue (P/C) | Green (P/C) | Yellow (P/C) | Pink (P/C) | Cyan (P/C)
30 cm | 1.000/0.399 | 1.000/0.400 | 1.000/0.416 | 1.000/0.415 | 0.000/0.000 | 1.000/0.590
60 cm | 1.000/0.944 | 1.000/0.948 | 1.000/0.930 | 1.000/0.914 | 1.000/0.876 | 1.000/0.874
90 cm | 1.000/0.942 | 1.000/0.917 | 1.000/0.913 | 1.000/0.887 | 0.000/0.000 | 1.000/0.884
120 cm | 1.000/0.907 | 1.000/0.900 | 1.000/0.889 | 1.000/0.794 | 0.000/0.000 | 1.000/0.886
150 cm | 1.000/0.872 | 1.000/0.881 | 1.000/0.838 | 1.000/0.645 | 0.000/0.000 | 1.000/0.841
Table 5. Precision (P) and confidence (C) of the 95 mm light shield at different distances.

Distance | Red (P/C) | Blue (P/C) | Green (P/C) | Yellow (P/C) | Pink (P/C) | Cyan (P/C)
30 cm | 0.640/0.133 | 0.120/0.400 | 0.360/0.132 | 0.360/0.174 | 0.000/0.000 | 1.000/0.259
60 cm | 1.000/0.863 | 1.000/0.908 | 1.000/0.893 | 1.000/0.879 | 0.000/0.000 | 1.000/0.797
90 cm | 1.000/0.916 | 1.000/0.915 | 1.000/0.909 | 1.000/0.885 | 0.000/0.000 | 1.000/0.811
120 cm | 1.000/0.877 | 1.000/0.893 | 1.000/0.844 | 1.000/0.842 | 0.000/0.000 | 1.000/0.803
150 cm | 1.000/0.860 | 1.000/0.840 | 1.000/0.510 | 1.000/0.680 | 0.000/0.000 | 1.000/0.743
Table 6. The relevant parameters of the test UAV.

Parameter | Unit | Value
Supply mode | mAh | 4S lithium battery, 4000 mAh
Dimensions (whole machine) | mm | 205 × 205 × 204
Dimensions (without a battery and light source) | mm | 205 × 205 × 83
Weight (whole machine) | kg | 1.15
Weight (without a battery and light source) | kg | 0.575
Endurance | min | 15
Number of propellers | – | 4
Table 7. The specific parameters of the test scenarios.

Experimental Scene | Light Intensity (nit) | Background Complexity | Light Source Color
Outdoor without shadow | 4097 | Low | Blue
Outdoor with shadow | 2135 | Low | Blue
Complex background with grass | 1671 | High | Blue
Complex background with trees | 1376 | High | Blue
Transparent corridor | 142 | Low | Red
Indoor without light | 14 | Medium | Red
Indoor with light | 401 | Medium | Red
Light-free environment | 0 | Medium | Red
Table 8. Comprehensive evaluation data for UAVs with and without active light source.

With active light source:
Experimental Scene | YOLOv7 P | YOLOv7 R | YOLOv7 mAP50 | RT-DETR P | RT-DETR R | RT-DETR mAP50
Outdoor without shadow | 0.880 | 0.960 | 0.958 | 0.913 | 0.931 | 0.947
Outdoor with shadow | 0.990 | 0.991 | 0.995 | 0.971 | 0.995 | 0.975
Complex background with grass | 0.974 | 0.940 | 0.968 | 0.973 | 0.961 | 0.952
Complex background with trees | 0.975 | 0.772 | 0.868 | 0.990 | 0.807 | 0.843
Transparent corridor | 0.963 | 1.000 | 0.945 | 0.968 | 1.000 | 0.934
Indoor without light | 0.999 | 1.000 | 0.995 | 0.999 | 0.986 | 0.964
Indoor with light | 0.978 | 0.980 | 0.971 | 0.974 | 1.000 | 0.976
Light-free environment | 0.998 | 1.000 | 0.995 | 0.986 | 0.999 | 0.964
Mean value | 0.969 | 0.955 | 0.962 | 0.971 | 0.960 | 0.944

Without active light source:
Experimental Scene | YOLOv7 P | YOLOv7 R | YOLOv7 mAP50 | RT-DETR P | RT-DETR R | RT-DETR mAP50
Outdoor without shadow | 0.934 | 0.568 | 0.798 | 0.943 | 0.617 | 0.814
Outdoor with shadow | 0.986 | 0.920 | 0.959 | 0.959 | 0.968 | 0.960
Complex background with grass | 0.954 | 0.827 | 0.932 | 0.928 | 0.839 | 0.942
Complex background with trees | 0.968 | 0.302 | 0.619 | 0.961 | 0.282 | 0.630
Transparent corridor | 0.999 | 1.000 | 0.995 | 0.972 | 1.000 | 0.988
Indoor without light | 0.999 | 0.880 | 0.885 | 0.976 | 0.833 | 0.874
Indoor with light | 0.753 | 0.650 | 0.765 | 0.835 | 0.677 | 0.741
Light-free environment | 0.110 | 0.100 | 0.065 | 0.101 | 0.099 | 0.041
Mean value | 0.838 | 0.655 | 0.752 | 0.835 | 0.664 | 0.749

