Article

Rapid Aircraft Wake Vortex Identification Model Based on Optimized Image Object Recognition Networks

1 School of Air Traffic Management, Civil Aviation Flight University of China, Guanghan 618307, China
2 School of Electronic and Information Engineering, Beihang University, Beijing 100191, China
* Author to whom correspondence should be addressed.
Aerospace 2024, 11(10), 840; https://doi.org/10.3390/aerospace11100840
Submission received: 13 July 2024 / Revised: 31 August 2024 / Accepted: 13 September 2024 / Published: 11 October 2024

Abstract

Wake vortices generated by aircraft during near-ground operations have a significant impact on airport safety during takeoffs and landings. Identifying wake vortices in complex airspaces assists air traffic controllers in making informed decisions, ensuring the safety of aircraft operations at airports, and enhancing the intelligence level of air traffic control. Unlike traditional image recognition, identifying wake vortices using airborne LiDAR data demands a higher level of accuracy. This study proposes the IRSN-WAKE network by optimizing the Inception-ResNet-v2 network. To improve the model’s feature representation capability, we introduce the SE module into the Inception-ResNet-v2 network, which adaptively weights feature channels to enhance the network’s focus on key features. Additionally, we design and incorporate a noise suppression module to mitigate noise and enhance the robustness of feature extraction. Ablation experiments demonstrate that the introduction of the noise suppression module and the SE module significantly improves the performance of the IRSN-WAKE network in wake vortex identification tasks, achieving an accuracy rate of 98.60%. Comparative experimental results indicate that the IRSN-WAKE network has higher recognition accuracy and robustness compared to common recognition networks, achieving high-accuracy aircraft wake vortex identification and providing technical support for the safe operation of flights.

1. Introduction

Aircraft wake turbulence is a strong airflow formed by the pressure difference between the upper and lower surfaces of the wings during high-speed flight; it rotates rapidly around the wingtips and extends backward and downward [1]. Wake turbulence, a byproduct of aircraft lift, is a critical factor affecting flight safety. When an aircraft encounters the wake of a preceding aircraft, the wake imparts a rolling moment to its wings, potentially leading to loss of control [2,3,4]. Wake encounter incidents can therefore have severe consequences.
Given the significance of aircraft wake for civil aviation safety, wake detection has become a key research area in civil aviation [5,6,7,8]. Extensive research on detection methods has been conducted domestically and internationally, including Doppler radar, microwave radar, and acoustic radar detection. Li et al. [9] detected the wake of a transport aircraft 1.4 km away using a newly developed X-band Doppler weather radar (peak power 50 kW) at a base in Nanjing. Doppler radar suits long-range detection and covers a wide area, but its relatively low resolution and high equipment cost leave room for improvement in observation accuracy. As research has progressed, LiDAR has gradually become a tool for observing aircraft wake, using laser beams to detect the motion and intensity of wake vortices. LiDAR receivers capture scattered laser signals, and by analyzing their time delay and frequency shift, three-dimensional images and velocity distributions of the wake are obtained, providing high-resolution imaging of wake morphology and its evolution. Michel et al. [10] proposed an airborne LiDAR sensor for tracking wake vortices produced by aircraft in formation flight, demonstrating that airborne LiDAR can detect wake vortices generated by a preceding aircraft. Liu et al. [11] applied LiDAR during the aircraft departure stage, analyzing LiDAR detection methods and studying the effects of crosswinds on wake vortex evolution.
Although LiDAR can detect aircraft wake, extracting information in real time is key to reducing wake separation intervals and enhancing operational safety. Köpp et al. [12] successfully detected distant aircraft wake using a 2 µm pulsed Doppler LiDAR. In 2011, Fibertek [13] simulated the wake of a Boeing 747 and conducted on-site detection and validation using a 1.5 µm coherent Doppler wind LiDAR it had developed. In 2014, Mitsubishi Corporation [14] detected clear-air turbulence using an airborne coherent Doppler wind LiDAR.
In recent years, with the rapid development of deep learning, convolutional neural networks have demonstrated capabilities surpassing human performance in various computer vision tasks [15]. Convolutional neural networks have also become a new method for LiDAR target recognition, offering robustness and strong non-linear representation abilities [16].
Pan et al. [17] proposed a new method to recognize aircraft wake vortices by modifying the VGG16 network, providing a binary classification of uncertain wake vortex behavior patterns. Chu et al. [18] proposed a two-stage probabilistic deep learning framework for wake vortex recognition and duration assessment: the first stage localizes the vortex core using a convolutional neural network (CNN), and the second stage predicts the vortex intensity within the region of interest (ROI) based on the initial core localization results. Zhang et al. [19] proposed an improved CNN method for wake vortex (WV) localization and grading based on pulsed coherent Doppler lidar (PCDL) data, avoiding the effects of unstable environmental wind fields on WV localization and classification results.
With the development of convolutional neural networks, the Inception-ResNet-v2 network [20], with its depth and breadth, has achieved significant success in large-scale image classification tasks. However, the standard Inception-ResNet-v2 network still lacks in handling fine-grained features in complex environments. To address this, we proposed the IRSN-WAKE network, enhancing feature representation capability and robustness by integrating the SE (Squeeze-and-Excitation) module [21] and a noise suppression module. The SE module adaptively reweights feature channels, improving the network’s focus on key features and addressing insufficient channel interaction. Concurrently, we designed and introduced a noise suppression module, using residual connections and element-wise operations to effectively suppress noise, enhancing feature extraction robustness and accuracy. With these modules, we built a new deep-learning framework based on the Inception-ResNet-v2 network and combined it with LiDAR data to achieve precise aircraft wake recognition. This technology is anticipated to be applied to rapid and efficient wake recognition to ensure flight safety and increase airport capacity.
This paper’s primary contributions include:
  • Proposing an improved Inception-ResNet-v2 network integrated with SE and noise suppression modules, effectively enhancing the network’s feature representation capability and robustness;
  • Designing a wake recognition framework based on LiDAR data, introducing deep learning models to achieve precise extraction and recognition of wake features;
  • Validating the superior performance of the proposed method in wake recognition tasks through experiments, demonstrating its potential for aircraft wake monitoring applications, and providing references for ensuring flight safety.
The structure of this paper is as follows: Section 2 discusses the principles of aircraft wake formation, the principles of Doppler effect measurement, and the methods for collecting Doppler LiDAR data; Section 3 details the IRSN-WAKE network architecture; Section 4 presents the experimental setup, results, and analysis; Section 5 concludes the research and proposes future research directions.

2. Materials and Methods

2.1. Principle of Aircraft Wake Vortex Formation

Wake turbulence is a byproduct of aircraft lift, manifesting as a pair of counter-rotating vortices. As air flows over the wing, the higher flow speed over the upper surface compared to the lower surface leads to a higher pressure on the lower surface than on the upper surface. This pressure difference generates lift for the aircraft. Concurrently, air flows from the high-pressure area to the low-pressure area, moving from the lower surface of the wing around the wingtips to the upper surface. This results in the formation of two closed vortices, centered at the wingtips and rotating in opposite directions, commonly referred to as wingtip vortices [22,23]. In the first 0–10 s after wingtip vortex formation, the wake vortex structure is tight and concentrated, with the circulation densely distributed near the vortex core. During the rapid decay phase at 10–20 s, air viscosity causes the circulation within the vortex to diffuse and dissipate radially and axially through friction and mixing among the microscale vortices inside [2]. Figure 1 illustrates a schematic of aircraft wake vortex generation [24].

2.2. The Principle of Doppler Effect Measurement

The principle of Doppler effect measurement is based on the change in the wavelength of light reflected by a target due to the relative motion between the light source and the target. When Doppler LiDAR scans a target airspace, the receiver captures laser signals reflected by aerosol particles moving with the wake. The Doppler frequency shift of the backscattered signal relates to the laser wavelength and radial wind speed as in Equation (1):
$$\Delta f_D = \frac{2}{\lambda_0} V_R \qquad (1)$$
where Δf_D denotes the Doppler shift, which reflects the relative velocity of the target with respect to the receiver; λ_0 is the laser wavelength; and V_R is the radial wind speed, describing the speed at which the target approaches or recedes from the receiver.
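As a worked illustration of Equation (1), a measured Doppler shift converts directly to radial wind speed. The short Python sketch below uses a hypothetical shift value; the 1.5 µm wavelength matches the lidar described in Section 2.3.

# Radial wind speed from the Doppler shift of Equation (1):
# Δf_D = (2/λ_0) · V_R  =>  V_R = Δf_D · λ_0 / 2
wavelength_m = 1.5e-6        # λ_0: laser wavelength (m), as in Table 2
doppler_shift_hz = 10.0e6    # Δf_D: measured shift (Hz), hypothetical value
radial_speed = doppler_shift_hz * wavelength_m / 2.0
print(f"V_R = {radial_speed:.2f} m/s")  # 7.50 m/s, within the ±37.5 m/s range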
Due to the significant reduction in visibility during foggy conditions, the laser signal experiences attenuation, and in rainy weather, raindrops cause scattering and absorption of the laser signal. Therefore, LiDAR data collection should be conducted under clear weather conditions with high visibility. Additionally, since the operation of LiDAR does not rely on visible light, it can function effectively both during the day and at night.
As illustrated in Figure 2, the radar emits a laser beam that scans the wake generated by an aircraft, modeled here with the Hallock-Burnham model [25] under an assumed zero background wind field. The study focuses on the radial velocity field of an A320 aircraft, with parameters shown in Table 1 and initial circulation Γ_0 given by Equation (2):
$$\Gamma_0 = \frac{M g}{\rho V b_0} \qquad (2)$$
where M is the mass of the aircraft, g is the gravitational acceleration, ρ is the air density, b_0 is the initial separation distance of the rolled-up wake vortices, and V is the inflow velocity of the air, approximately equal to the aircraft's flight speed.
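Equation (2) can be evaluated directly with the A320 parameters of Table 1. The computation below is a minimal sketch assuming sea-level air density and elliptical wing loading, so that b_0 = (π/4) × wingspan, as is standard for the Hallock-Burnham model:

import math

M = 78_000.0              # aircraft mass (kg), MTOW from Table 1
g = 9.81                  # gravitational acceleration (m/s^2)
rho = 1.225               # air density (kg/m^3), assumed sea-level value
V = 69.96                 # inflow (flight) speed (m/s), Table 1
b0 = math.pi / 4 * 35.80  # initial vortex separation (m), elliptical loading

gamma0 = M * g / (rho * V * b0)
print(f"b0 = {b0:.2f} m, Gamma_0 = {gamma0:.1f} m^2/s")  # ~28.12 m, ~318 m^2/s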
As the A320 passes, its wake vortex sets the aerosol particles in the surrounding air in motion. The flow field and velocity intensity of these particles are depicted in Figure 3a. When the LiDAR scans the vortex at the origin in Figure 3a, consider an aerosol particle located at point P with coordinates (r, θ), where r denotes the horizontal distance from the radar to point P and θ is the elevation angle of point P. The radial velocity is given by Equations (3) and (4):
$$\vec{v}_t = \vec{v}_{t1} + \vec{v}_{t2} \qquad (3)$$
$$|v_r| = \frac{\overrightarrow{OP} \cdot \vec{v}_t}{|\overrightarrow{OP}|} \qquad (4)$$
Figure 3b shows the radial velocity cloud of aerosol particles received by the radar, where the horizontal axis represents the distance from the LiDAR measurement point and the vertical axis represents the LiDAR scanning angle. Color corresponds to radial velocity according to the color bar: red indicates positive velocities (motion away from the LiDAR) and blue indicates negative velocities (motion toward the LiDAR).
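The mapping from a vortex pair to the radial velocity cloud of Figure 3b can be sketched numerically. The snippet below is an illustration rather than the authors' simulation code: it evaluates the Hallock-Burnham tangential velocity profile of a counter-rotating vortex pair and projects the induced velocity onto the line of sight, following Equations (3) and (4). The vortex position, core radius, and scan grid are assumed values; Γ_0 and b_0 follow the A320 example above.

import numpy as np

def vortex_pair_radial_velocity(ranges_m, elevs_deg, x0=500.0, z0=60.0,
                                gamma0=318.0, b0=28.12, r_core=3.0):
    # Hallock-Burnham tangential speed v(r) = Γ r / (2π (r² + r_c²)),
    # summed for a counter-rotating pair and projected onto the beam (Eq. 4).
    R, E = np.meshgrid(ranges_m, np.deg2rad(elevs_deg))
    x, z = R * np.cos(E), R * np.sin(E)            # Cartesian scan points
    v_r = np.zeros_like(x)
    for x_v, sign in ((x0 - b0 / 2, +1.0), (x0 + b0 / 2, -1.0)):
        dx, dz = x - x_v, z - z0
        k = sign * gamma0 / (2 * np.pi * (dx**2 + dz**2 + r_core**2))
        u, w = -k * dz, k * dx                     # induced velocity v_t
        v_r += u * np.cos(E) + w * np.sin(E)       # projection onto OP
    return v_r

v_r = vortex_pair_radial_velocity(np.linspace(300, 700, 200),
                                  np.linspace(1, 15, 100))
print(v_r.shape, float(v_r.min()), float(v_r.max()))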

2.3. Field Detection

In this experiment, a Wind3D 6000 LiDAR (Qingdao Leice Transient Technology Co., Ltd., Qingdao, China) was used to collect aircraft wake vortex data, enabling detection at longer ranges: its maximum detection radius exceeds 6 km, and the unit is compact, lightweight, and low in power consumption. Figure 4a depicts the operation of the LiDAR equipment, and Figure 4 shows a satellite image of the experimental site near Shuangliu International Airport with the detection locations marked. Considering factors such as airport terrain, weather conditions, and runway operations, the LiDAR parameters were set as listed in Table 2.
Based on the on-site measurements, the visualization of the wake generated by the A380, shown in Figure 4, reveals an evolving pattern in which the mutual induction of the two vortices and the influence of environmental winds lead to noticeable changes. Over time, the left and right vortices gradually expand in size while the circulation intensity diminishes. Although the pair maintains an overall reverse symmetry, this symmetric structure progressively destabilizes and eventually blends into the ambient wind field.

3. Convolutional Neural Networks and Their Implementation in Detection

In recent years, deep Convolutional Neural Networks (CNNs) have achieved breakthroughs in various computer vision tasks such as image classification, object detection, sentiment recognition, and scene segmentation. Doppler LiDAR is widely used in wind field measurements [11], and deep learning provides an effective method for detecting aircraft wake vortices. However, due to limitations in storage space and computational power, traditional neural network models still face significant challenges in storage and computation on embedded devices. Research on lightweight CNNs has therefore gained attention, with the Inception and ResNet architectures being two prominent models: the former is noted for its computational efficiency and strong performance, while the latter effectively addresses the vanishing gradient problem in deep networks through residual connections. Applying lightweight networks to LiDAR data recognition has emerged as a novel approach. However, LiDAR data is single-channel, has very low resolution, and differs significantly from conventional images.
To improve the accuracy and real-time performance of aircraft wake vortex detection, we propose the IRSN-WAKE (Inception-ResNet-Wake) model. This model introduces a noise suppression module into the convolutional pipeline to suppress noise and preserve critical features, and combines the SE module with the Inception-ResNet-v2 network. The SE module adaptively readjusts the weights of feature channels, enhancing the model's representational power to capture richer feature information across different scales and thereby improving detection accuracy. The network architecture is illustrated in Figure 5, and the pseudo-code is shown in Algorithm 1.
Algorithm 1 IRSN-WAKE Network
1: procedure IRSN-WAKE_NETWORK(I)
2:    F_stem ← NoiseSuppression(I)
3:    F_stem ← Initial_Stem(F_stem)
4:    for i = 1 to n_a do
5:        F_stem ← INCEPTION_RESNET_A_SE(F_stem)
6:    end for
7:    for i = 1 to n_b do
8:        F_stem ← INCEPTION_RESNET_B_SE(F_stem)
9:    end for
10:   for i = 1 to n_c do
11:       F_stem ← INCEPTION_RESNET_C_SE(F_stem)
12:   end for
13:   F_final ← Final_Classification_Block(F_stem)
14:   return F_final
15: end procedure

3.1. Noise Suppression Module

In Algorithm 1, IRSN-WAKE introduces a noise suppression module that suppresses noise and preserves key features during the convolution process. The input to the module first undergoes initial convolution through a BasicConv2 block, producing the feature map F_1. The BasicConv2 block comprises a convolutional layer, a batch normalization layer, and an activation function: the convolutional layer extracts local spatial features, batch normalization normalizes feature values to accelerate training and improve stability, and the activation function applies a non-linear transformation that enhances the model's expressive power. Subsequently, F_1 is combined with the input through a residual connection to form a new feature map F_2, which passes through a BasicConv2 block again to obtain F_3. Finally, F_3 is multiplied element-wise with F_1, and the result is connected back to the input through a residual connection to form the final noise-suppressed feature map F_input. This effectively enhances feature representation while suppressing the influence of noise.
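This dataflow can be summarized in a short PyTorch sketch. It is a reconstruction from the description above, not the released implementation; the channel count and the 3 × 3 kernel size are assumptions.

import torch
import torch.nn as nn

class BasicConv2(nn.Module):
    # Convolution -> batch normalization -> activation, as in Section 3.1.
    def __init__(self, channels, kernel_size=3):
        super().__init__()
        self.conv = nn.Conv2d(channels, channels, kernel_size,
                              padding=kernel_size // 2, bias=False)
        self.bn = nn.BatchNorm2d(channels)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(self.bn(self.conv(x)))

class NoiseSuppression(nn.Module):
    # Sketch of the noise suppression module; channel count is an assumption.
    def __init__(self, channels=3):
        super().__init__()
        self.conv1 = BasicConv2(channels)
        self.conv2 = BasicConv2(channels)

    def forward(self, x):
        f1 = self.conv1(x)   # F_1: initial convolution of the input
        f2 = f1 + x          # F_2: residual connection with the input
        f3 = self.conv2(f2)  # F_3: second convolution pass
        return f3 * f1 + x   # element-wise product, then residual to input

out = NoiseSuppression()(torch.randn(1, 3, 48, 56))  # dummy lidar image
print(out.shape)  # torch.Size([1, 3, 48, 56])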

3.2. Inception-ResNet-v2 Network

The input images are 48 × 56 × 3 pixels. As shown in Figure 6, the Inception-ResNet-v2 network combines the Inception architecture with residual connections, offering efficient computation and demonstrating high performance in image recognition tasks. The network consists of several types of modules, each designed to handle features at different scales. The Stem module serves as the entry point of the network and is responsible for initial feature extraction through three 3 × 3 convolutions and pooling layers. The Inception-ResNet-A module combines various convolution operations and residual connections to capture fine-grained features, including 1 × 1, 3 × 3, and 7 × 1 and 1 × 7 convolutions, which are merged through the Inception feature concatenation layer. The Reduction-A module reduces spatial dimensions while increasing the depth of the feature maps, facilitating the processing of more abstract features in deeper layers; its core operation uses 3 × 3 convolutions and max pooling to decrease spatial dimensions. The Inception-ResNet-B module is similar to the Inception-ResNet-A module but is configured to capture mid-level features, including more 7 × 1 and 1 × 7 convolutions, focusing on more complex patterns. The Reduction-B module further reduces spatial dimensions, preparing the network for the final stage, shrinking the feature maps to 5 × 6 while increasing the depth to 1152 through 3 × 3 convolutions and max pooling. The Inception-ResNet-C module acts as the network's final convolutional stage, capturing high-level features and refining the feature maps through a series of 1 × 1 and 3 × 3 convolutions to produce the final output.
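For concreteness, the sketch below shows the general shape of an Inception-ResNet-A block: parallel convolution branches concatenated and folded back through a scaled residual connection. The branch widths and the 0.17 residual scale follow the original Inception-ResNet-v2 design [20]; they are not values reported in this paper.

import torch
import torch.nn as nn

def conv_bn_relu(c_in, c_out, k):
    return nn.Sequential(nn.Conv2d(c_in, c_out, k, padding=k // 2, bias=False),
                         nn.BatchNorm2d(c_out), nn.ReLU(inplace=True))

class InceptionResNetA(nn.Module):
    def __init__(self, channels=320):  # width of 320 follows [20], assumed here
        super().__init__()
        self.branch0 = conv_bn_relu(channels, 32, 1)
        self.branch1 = nn.Sequential(conv_bn_relu(channels, 32, 1),
                                     conv_bn_relu(32, 32, 3))
        self.branch2 = nn.Sequential(conv_bn_relu(channels, 32, 1),
                                     conv_bn_relu(32, 48, 3),
                                     conv_bn_relu(48, 64, 3))
        self.project = nn.Conv2d(32 + 32 + 64, channels, 1)  # restore width

    def forward(self, x):
        mixed = torch.cat([self.branch0(x), self.branch1(x), self.branch2(x)], 1)
        return torch.relu(x + 0.17 * self.project(mixed))    # scaled residual

print(InceptionResNetA()(torch.randn(1, 320, 12, 14)).shape)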

3.3. The IRSN-WAKE Network

The IRSN-WAKE network integrates a denoising module on the basis of the Inception-ResNet-v2 architecture and incorporates the SE (Squeeze-and-Excitation) module.
The SE module consists of two main steps: Squeeze and Excitation. The Squeeze step addresses the issue of channel dependencies by compressing global spatial information into a channel descriptor. It achieves this by globally averaging the spatial dimensions of feature maps, compressing a W × H × C feature map into a 1 × 1 × C feature vector Z that encapsulates global information. Each channel’s feature maps are compressed into a single numerical value, ensuring that the generated channel-wise statistics Z contain contextual information, thereby alleviating channel dependencies. This involves applying global average pooling to each channel’s feature map, generating channel-level statistical data, as shown in Equation (5).
$$z_c = F_{sq}(U) = \frac{1}{H \times W} \sum_{i=1}^{H} \sum_{j=1}^{W} u_{c,i,j} \qquad (5)$$
where F_sq(·) denotes global average pooling over each channel, u_{c,i,j} is the value of channel c at position (i, j), and H and W are the height and width of the feature map, respectively.
To leverage the information aggregated by the compression operation, SE uses the Excitation operation to fully capture channel dependencies. A gating mechanism composed of two fully connected layers is employed: the first layer reduces the number of channels from c to c/r to decrease the computational load, followed by a ReLU non-linear activation; the second fully connected layer restores the number of channels to c, and a Sigmoid activation then yields the weights s. The final dimension of s is 1 × 1 × c, characterizing the weights of the c feature maps in the feature map U; r is the reduction ratio. The gating mechanism captures dependencies between channels and recalibrates the feature map, as shown in Equation (6):
$$s = F_{ex}(z, W) = \sigma(W_2\,\delta(W_1 z)) \qquad (6)$$
Here, δ denotes the ReLU activation function, σ the Sigmoid activation function, W_1 and W_2 are the weight matrices of the two fully connected layers, and z is the channel statistic generated by the Squeeze operation.
A reweighting operation is then performed: the attention weights obtained above are applied to the features of each channel. Each feature map in U is multiplied by its corresponding weight to produce the final output Ỹ of the SE module:
$$\tilde{Y}_c = F_{scale}(U_c, s_c) = s_c \cdot U_c \qquad (7)$$
where s_c is the weight for channel c, U_c is channel c of the input feature map, and Ỹ is the reweighted feature map.
To enhance the feature representation ability of the Inception-ResNet-v2 network, allowing it to adaptively weight different feature channels, the SE module can be inserted after each Inception module (Inception-ResNet-A, B, C modules). Take the Inception-ResNet-A module as an example, as shown in Figure 7.
First, the Inception-ResNet-A module generates the feature map U, which is then passed to the SE module for recalibration. The output U_out is given by Equation (8):
$$U_{out} = \mathrm{SE}(U) \qquad (8)$$
The integration of the SE module ensures that the network can utilize both fine-grained and abstract features, focusing on the most informative features, thereby enhancing the model’s robustness and accuracy.
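Equations (5)–(8) translate into a compact PyTorch module. The sketch below is illustrative; the reduction ratio r = 16 is the common default from [21], since the paper does not report the value used.

import torch
import torch.nn as nn

class SEBlock(nn.Module):
    def __init__(self, channels, r=16):  # r: reduction ratio, assumed default
        super().__init__()
        self.squeeze = nn.AdaptiveAvgPool2d(1)  # Eq. (5): global average pool
        self.excite = nn.Sequential(            # Eq. (6): two-layer gating
            nn.Linear(channels, channels // r),
            nn.ReLU(inplace=True),
            nn.Linear(channels // r, channels),
            nn.Sigmoid(),
        )

    def forward(self, u):
        b, c, _, _ = u.shape
        z = self.squeeze(u).view(b, c)       # channel statistics z
        s = self.excite(z).view(b, c, 1, 1)  # channel weights s
        return u * s                         # Eq. (7): reweight each channel

u = torch.randn(2, 320, 12, 14)  # assumed width of an Inception-ResNet-A output
print(SEBlock(320)(u).shape)     # Eq. (8): U_out = SE(U), same shape as U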

4. Experimental Setup and Results

4.1. Experimental Platform

The experiments were conducted on the Windows 11 operating system using the Python programming language with the PyTorch 2.4.0 training framework. The GPU was an NVIDIA GeForce RTX 4090 (NVIDIA, Santa Clara, CA, USA), the processor was a 13th Gen Intel® Core™ i7-13700K (24 threads) (Intel, Santa Clara, CA, USA), and the system had 32 GB of memory.

4.2. Data Processing and Network Parameter Settings

In the initial stages of the experiment, we collected wake vortex data from different aircraft models, including the A380 and A320, using Doppler LiDAR at Chengdu Shuangliu International Airport. Because wind speeds in the wake vortex background vary, the collected radar data was normalized to remove scale effects before being input into the network. Normalizing the data accelerates gradient descent in convolutional neural networks, improving the model's convergence speed. Figure 8 depicts the average wind speed variation over time in the background wind field.
In total, 3500 data samples were collected for this experiment. After randomizing the data, 60% was allocated for the training set, 20% for the validation set, and the remaining 20% for the test set.
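A minimal sketch of this preprocessing, assuming per-sample min-max normalization (the paper does not specify the exact scheme) and dummy tensors in place of the lidar samples:

import torch
from torch.utils.data import TensorDataset, random_split

data = torch.randn(3500, 3, 48, 56)    # stand-ins for the lidar samples
labels = torch.randint(0, 2, (3500,))  # wake / no-wake labels

# Scale each sample to [0, 1] to remove the background wind speed scale effect.
lo = data.amin(dim=(1, 2, 3), keepdim=True)
hi = data.amax(dim=(1, 2, 3), keepdim=True)
data = (data - lo) / (hi - lo + 1e-8)

# Random 60/20/20 split of the 3500 samples.
train_set, val_set, test_set = random_split(
    TensorDataset(data, labels), [2100, 700, 700],
    generator=torch.Generator().manual_seed(0))
print(len(train_set), len(val_set), len(test_set))  # 2100 700 700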
This experiment used the Adam adaptive gradient descent algorithm, which is simple to implement, computationally efficient, and light on memory [19], and is widely applied in computer vision and natural language processing. The initial learning rate of the optimizer was set to 0.001, momentum to 0.9, and weight decay to 1 × 10−4. The network was trained with the cross-entropy loss function. Cross-entropy measures the discrepancy between the true distribution p and the predicted output distribution q; a smaller cross-entropy value indicates that the model fits the training data better, as shown in Equation (9):
$$H(p, q) = -\sum_{x} p(x) \log q(x) \qquad (9)$$
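These optimizer and loss settings map directly onto PyTorch. In the sketch below, the reported momentum of 0.9 is interpreted as Adam's first-moment coefficient β1, since PyTorch's Adam takes no separate momentum argument; the model is a stand-in placeholder.

import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(48 * 56 * 3, 2))  # placeholder
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3,
                             betas=(0.9, 0.999), weight_decay=1e-4)
criterion = nn.CrossEntropyLoss()  # cross-entropy loss of Equation (9)

x = torch.randn(8, 3, 48, 56)      # dummy batch of lidar images
y = torch.randint(0, 2, (8,))      # wake / no-wake labels

optimizer.zero_grad()
loss = criterion(model(x), y)
loss.backward()
optimizer.step()
print(f"batch loss: {loss.item():.4f}")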

4.3. Ablation Experiments

To verify the effectiveness of each module, we conducted ablation studies, as shown in Table 3. Inception-ResNet refers to the original Inception-ResNet-v2 model; Inception-ResNet-SE adds the SE module to the base network; Inception-ResNet-ND adds the noise suppression module; and IRSN-WAKE is our full model, which incorporates both the SE and noise suppression modules. The results demonstrate that adding the two modules significantly improves the IRSN-WAKE model across all evaluation metrics, including accuracy, precision, recall, and F1-score, validating our architectural enhancements.

4.4. Results and Analysis

In this experiment, preprocessed training data was fed into IRSN-WAKE for training, which stopped after 200 iterations. Figure 9 shows the iteration-wise change in loss values during the training process.
To further demonstrate the effectiveness of the constructed network, we compared it against common recognition models: VGG16, SVM, KNN, and RF. We used accuracy, precision, recall, and F1 score as evaluation metrics. Accuracy is the proportion of correctly classified samples among all samples, as shown in Equation (10); precision is the proportion of actual positive samples among all samples predicted to be positive, as shown in Equation (11); recall is the proportion of correctly predicted positive samples among all actual positive samples, as shown in Equation (12); and the F1 score is the harmonic mean of precision and recall, used to comprehensively evaluate model performance, as shown in Equation (13).
$$\mathrm{Accuracy} = \frac{TP + TN}{TP + TN + FP + FN} \qquad (10)$$
$$\mathrm{Precision} = \frac{TP}{TP + FP} \qquad (11)$$
$$\mathrm{Recall} = \frac{TP}{TP + FN} \qquad (12)$$
$$F1 = \frac{2 \times \mathrm{Precision} \times \mathrm{Recall}}{\mathrm{Precision} + \mathrm{Recall}} \qquad (13)$$
where TP is the number of true positive predictions, TN the number of true negatives, FP the number of false positives (Type I errors), and FN the number of false negatives (Type II errors).
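These four metrics follow directly from the confusion-matrix counts. As a check, the sketch below evaluates Equations (10)–(13) on the IRSN-WAKE counts reported in Section 4.4; precision, recall, and F1 reproduce the Table 4 values.

def classification_metrics(tp, tn, fp, fn):
    # Equations (10)-(13) computed from confusion-matrix counts.
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return accuracy, precision, recall, f1

# IRSN-WAKE counts from Section 4.4: precision 0.967, recall 0.954, F1 0.960.
print(classification_metrics(tp=1665, tn=424, fp=57, fn=80))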
The results are shown in Table 4, where the IRSN-WAKE model exhibits outstanding performance across all evaluation metrics, achieving the highest accuracy of 0.986, precision of 0.967, and F1 score of 0.960. This indicates the effectiveness of the constructed IRSN-WAKE network in recognizing aircraft wake vortices while minimizing false alarms.
To further evaluate the performance of the models in binary classification tasks, we calculated the confusion matrix for each model and conducted an in-depth analysis of their classification capabilities across categories. As shown in Figure 10, the deep learning models, VGG16 and IRSN-WAKE, performed exceptionally well in these tasks, especially when the ratio of positive to negative samples was imbalanced. Although the traditional machine learning models SVM, KNN, and RF also performed reasonably, they showed limitations in balancing precision and recall.
The IRSN-WAKE model, with a TP of 1665, TN of 424, FP of 57, and FN of 80, demonstrated extremely high precision and recall in detecting positive samples; compared to the other models, it had the lowest miss rate and false alarm rate. Moreover, the IRSN-WAKE model achieved a detection speed of 180 fps while occupying only 0.4 M of resources, demonstrating its real-time capability and accuracy in complex tasks such as wake vortex detection. In contrast, although VGG16 is also a deep learning model, its detection speed was only 50 fps, with resource usage as high as 120 M.

5. Conclusions

This study proposes the IRSN-WAKE network, which achieves efficient identification of aircraft wake vortices by integrating SE (Squeeze-and-Excitation) and denoising modules. The SE module enhances the sensitivity of the network to critical features by adaptively weighting feature channels, while the denoising module effectively suppresses noise, thereby improving feature extraction robustness and accuracy. Experimental results demonstrate that the framework accurately identifies aircraft wake features in complex airflow environments, showing high recognition accuracy and robustness. Through experimental validation, we found:
  • IRSN-WAKE network has enhanced feature extraction capabilities: The SE module significantly improves the network’s sensitivity to key features, while the denoising module effectively reduces noise interference.
  • Importance of combining Doppler LiDAR data with deep learning in capturing aircraft wake vortices: Integrating deep learning techniques with sensor data enables efficient wake recognition in complex environments, offering new solutions for aircraft wake monitoring.
However, this study used Doppler LiDAR data from only a single airport. To address this limitation, future research will consider integrating heterogeneous data sources, such as weather radar and ADS-B data, into aircraft wake recognition tasks to improve recognition accuracy. Additionally, collecting aircraft wake data from various types of civil airports and training the deep learning models accordingly will enhance model generalizability, thereby providing technical support for safe and efficient flight operations.

Author Contributions

Conceptualization, L.D. and W.P.; methodology, L.D. and Y.L.; software, L.D. and Y.L.; validation, L.D., C.Z., and T.L.; formal analysis, L.D.; investigation, L.D.; resources, L.D.; data curation, T.L. and L.D.; writing—original draft preparation, L.D.; writing—review and editing, L.D.; visualization, W.P. and L.D.; supervision, L.D. and C.Z.; project administration, W.P.; funding acquisition, W.P. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China (U2333209), the National Key R&D Program of China (No. 2021YFF0603904), the National Natural Science Foundation of China (U1733203), the Civil Aviation Administration of China (AQ20200019), Civil Aircraft Fire Science and Safety Engineering Key Laboratory of Sichuan Province, Civil Aviation Flight University of China (MZ2024JB01) and the Student Innovation Fund Program 24CAFUC10182.

Data Availability Statement

The authors confirm that the data supporting the findings of this study are available within the article.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Gharbia, Y.; Derakhshandeh, J.F.; Alam, M.M.; Amer, A.M. Developments in Wingtip Vorticity Mitigation Techniques: A Comprehensive Review. Aerospace 2024, 11, 36. [Google Scholar] [CrossRef]
  2. Deng, L.; Pan, W.; Wang, Y.; Luan, T.; Leng, Y. Aircraft Wake Evolution Prediction Based on Parallel Hybrid Neural Network Model. Aerospace 2024, 11, 489. [Google Scholar] [CrossRef]
  3. Rojas, J.I.; Melgosa, M.; Prats, X. Sensitivity Analysis of Maximum Circulation of Wake Vortex Encountered by En-Route Aircraft. Aerospace 2021, 8, 194. [Google Scholar] [CrossRef]
  4. Luo, H.; Pan, W.; Wang, Y.; Luo, Y. A330-300 Wake Encounter by ARJ21 Aircraft. Aerospace 2024, 11, 144. [Google Scholar] [CrossRef]
  5. Wei, Z.; Lu, T.; Gu, R.; Liu, F. DBN-GABP Model for Estimation of Aircraft Wake Vortex Parameters Using Lidar Data. Chin. J. Aeronaut. 2024; in press. [Google Scholar] [CrossRef]
  6. Shen, C.; Tang, W.; Gao, H.; Wang, X.; Chan, P.-W.; Hon, K.-K.; Li, J. Aircraft Wake Recognition and Strength Classification Based on Deep Learning. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2023, 16, 2237–2249. [Google Scholar] [CrossRef]
  7. Schwithal, J.; Fezans, N. Deriving Lidar Sensor Requirements for Use in Wake Impact Alleviation Functions. J. Aircr. 2023, 60, 1290–1301. [Google Scholar] [CrossRef]
  8. Holzäpfel, F.; Strauss, L.; Schwarz, C. Assessment of Dynamic Pairwise Wake Vortex Separations for Approach and Landing at Vienna Airport. Aerosp. Sci. Technol. 2021, 112, 106618. [Google Scholar] [CrossRef]
  9. Li, J.; Wang, T.; Li, W. Experimental study on the scattering characteristics of X-band radar in aircraft wake. Radar Sci. Technol. 2009, 12, 406–410. [Google Scholar]
  10. Michel, D.T.; Dolfi-Bouteyre, A.; Goular, D.; Augère, B.; Planchat, C.; Fleury, D.; Lombard, L.; Valla, M.; Besson, C. Onboard Wake Vortex Localization with a Coherent 1.5 μm Doppler LIDAR for Aircraft in Formation Flight Configuration. Opt. Express 2020, 28, 14374. [Google Scholar] [CrossRef]
  11. Liu, X.; Zhang, X.; Zhai, X.; Zhang, H.; Liu, B.; Wu, S. Observation of Aircraft Wake Vortex Evolution under Crosswind Conditions by Pulsed Coherent Doppler Lidar. Atmosphere 2021, 12, 49. [Google Scholar] [CrossRef]
  12. Köpp, F.; Rahm, S.; Smalikho, I. Characterization of Aircraft Wake Vortices by 2-μm Pulsed Doppler Lidar. J. Atmos. Ocean. Technol. 2004, 21, 194–206. [Google Scholar] [CrossRef]
  13. Akbulut, M.; Hwang, J.; Kimpel, F.; Gupta, S.; Verdun, H. Pulsed Coherent Fiber Lidar Transceiver for Aircraft In-Flight Turbulence and Wake-Vortex Hazard Detection. In Proceedings of the Laser Radar Technology and Applications XVI, SPIE, Orlando, FL, USA, 27–29 April 2011. [Google Scholar]
  14. Inokuchi, H.; Furuta, M.; Inagaki, T. High altitude turbulence detection using an airborne Doppler lidar. In Proceedings of the 29th Congress of the International Council of the Aeronautical Sciences (ICAS), St. Petersburg, Russia, 7–12 September 2014; Volume 3, p. 255. [Google Scholar]
  15. Chen, J.; Du, L.; Guo, G.; Yin, L.; Wei, D. Target-Attentional CNN for Radar Automatic Target Recognition with HRRP. Signal Process. 2022, 196, 108497. [Google Scholar] [CrossRef]
  16. Lohani, B.; Ghosh, S. Airborne LiDAR Technology: A Review of Data Collection and Processing Systems. Proc. Natl. Acad. Sci. India Sect. A Phys. Sci. 2017, 87, 567–579. [Google Scholar] [CrossRef]
  17. Pan, W.; Leng, Y.; Yin, H.; Zhang, X. Identification of Aircraft Wake Vortex Based on VGGNet. Wirel. Commun. Mob. Comput. 2022, 2022, 1487854. [Google Scholar] [CrossRef]
  18. Chu, N.; Ng, K.K.H.; Liu, Y.; Hon, K.K.; Chan, P.W.; Li, J.; Zhang, X. Assessment of Approach Separation with Probabilistic Aircraft Wake Vortex Recognition via Deep Learning. Transp. Res. Part E Logist. Transp. Rev. 2024, 181, 103387. [Google Scholar] [CrossRef]
  19. Zhang, X.; Zhang, H.; Wang, Q.; Liu, X.; Liu, S.; Zhang, R.; Li, R.; Wu, S. Locating and Grading of Lidar-Observed Aircraft Wake Vortex Based on Convolutional Neural Networks. Remote Sens. 2024, 16, 1463. [Google Scholar] [CrossRef]
  20. Szegedy, C.; Ioffe, S.; Vanhoucke, V.; Alemi, A. Inception-v4, Inception-ResNet and the Impact of Residual Connections on Learning. In Proceedings of the AAAI Conference on Artificial Intelligence, San Francisco, CA, USA, 4–9 February 2017; Volume 31. [Google Scholar] [CrossRef]
  21. Hu, J.; Shen, L.; Sun, G. Squeeze-and-Excitation Networks. In Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, IEEE, Salt Lake City, UT, USA, 18–23 June 2018. [Google Scholar]
  22. Ma, Y.; Tang, X.; Shi, Y.; Chan, P.-W. YOLOv8n–CBAM–EfficientNetV2 Model for Aircraft Wake Recognition. Appl. Sci. 2024, 14, 7754. [Google Scholar] [CrossRef]
  23. Panagiotou, P.; Ioannidis, G.; Tzivinikos, I.; Yakinthos, K. Experimental Investigation of the Wake and the Wingtip Vortices of a UAV Model. Aerospace 2017, 4, 53. [Google Scholar] [CrossRef]
  24. Breitsamter, C. Wake vortex characteristics of transport aircraft. Prog. Aerosp. Sci. 2011, 47, 89–134. [Google Scholar] [CrossRef]
  25. Holzäpfel, F.; Gerz, T.; Köpp, F.; Stumpf, E.; Harris, M.; Young, R.I.; Dolfi-Bouteyre, A. Strategies for Circulation Evaluation of Aircraft Wake Vortices Measured by Lidar. J. Atmos. Ocean. Technol. 2003, 20, 1183–1195. [Google Scholar] [CrossRef]
Figure 1. Schematic diagram of aircraft wake vortex formation.
Figure 2. LiDAR scanning of the wake vortex, RHI mode.
Figure 3. (a) The flow field and velocity intensity; (b) the jet cloud image.
Figure 4. Aircraft wake vortex observation site selection and observation schematic: (a) location of the LiDAR; (b) schematic diagram of LiDAR detection.
Figure 5. Inception-ResNet-Wake network architecture.
Figure 6. Inception-ResNet-v2 network architecture.
Figure 7. Squeeze-and-Excitation module.
Figure 8. Average wind speed variation of the background wind field.
Figure 9. Loss value versus number of iterations.
Figure 10. Confusion matrices: (a) IRSN-WAKE; (b) KNN; (c) RF; (d) SVM; (e) VGG16.
Table 1. A320 specifications.

Parameter      Value
MTOW (t)       78.00
Wingspan (m)   35.80
Speed (m/s)    69.96
Table 2. LiDAR parameters.

Metric Item                           Parameter Value
Laser wavelength                      1.5 μm (invisible, eye-safe)
Radial detection range                45 m–6000 m
Radial distance resolution            15 m / 30 m / user-defined
Data refresh rate                     1 Hz–10 Hz
Radial wind speed measurement range   −37.5 m/s to +37.5 m/s
Wind speed measurement accuracy       ≤0.1 m/s
Scan servo accuracy                   0.1°
Scan modes                            Fixed point / DBS / VAD / RHI / PPI / CAPPI, script programming
Weight                                <90 kg
Table 3. Results of ablation experiments.

Model                 Accuracy   Precision   Recall   F1-Score
Inception-ResNet      0.957      0.920       0.906    0.873
Inception-ResNet-SE   0.967      0.943       0.946    0.944
Inception-ResNet-ND   0.980      0.950       0.953    0.951
IRSN-WAKE             0.986      0.967       0.954    0.960
Table 4. Comparison of object recognition networks.

Model       Accuracy   Precision   Recall   F1-Score
SVM         0.917      0.763       0.724    0.743
KNN         0.909      0.640       0.714    0.930
RF          0.923      0.778       0.752    0.765
VGG16       0.984      0.951       0.959    0.955
IRSN-WAKE   0.986      0.967       0.954    0.960
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

