Communication

Radar Echo Recognition of Gust Front Based on Deep Learning

1 College of Electronic Engineering, Chengdu University of Information Technology, Chengdu 610225, China
2 Key Laboratory of South China Sea Meteorological Disaster Prevention and Mitigation of Hainan Province, Haikou 570100, China
3 State Key Laboratory of Severe Weather, Chinese Academy of Meteorological Sciences, Beijing 100081, China
4 College of Atmospheric Sciences, Chengdu University of Information Technology, Chengdu 610225, China
5 Nanjing Joint Institute for Atmospheric Sciences, Nanjing 210041, China
6 CMA Basin Heavy Rainfall Key Laboratory, Institute of Heavy Rain, China Meteorological Administration, Wuhan 430205, China
* Author to whom correspondence should be addressed.
Remote Sens. 2024, 16(3), 439; https://doi.org/10.3390/rs16030439
Submission received: 24 November 2023 / Revised: 13 January 2024 / Accepted: 18 January 2024 / Published: 23 January 2024
(This article belongs to the Special Issue Advance of Radar Meteorology and Hydrology II)

Abstract: Gust fronts (GFs) are a type of boundary layer convergence system. A strong GF can cause serious wind disasters, so automatic GF monitoring and identification are very helpful, but difficult, in daily meteorological operations. From convective weather processes collected in Hubei, Jiangsu, and other regions of China, 1422 GFs are labeled as positive samples from 106 volume scans of S-band new-generation weather radars (CINRAD/SA) by means of human–computer interaction, and the same number of negative samples are randomly tagged from radar data containing no GFs. A deep learning dataset of 2844 labels with a positive-to-negative sample ratio of 1:1 is constructed, and 80%, 10%, and 10% of the dataset are separated as training, validation, and test sets, respectively. The training dataset is then expanded to 273,120 samples by data augmentation. Since the height of a GF is generally less than 1.5 km, three deep-learning-based models are trained for automatic GF recognition according to the distance from the radars: the three models (M1, M2, M3) are trained with data at a 0.5° elevation angle from 65 to 180 km from the radars, at 0.5° and 1.5° angles from 40 to 65 km, and at 0.5°, 1.5°, and 2.4° angles within 40 km, respectively. The precision, the confusion matrix, and its derived indicators, including the receiver operating characteristic (ROC) curve and the area under the ROC (AUC), are used to evaluate the three models on the test set. The results show that the identification precisions of the models are 97.66% (M1), 90% (M2), and 90.43% (M3). All hit rates are over 89%, the false positive rates are less than 11%, and the critical success indexes (CSIs) surpass 82%. In addition, all the optimal critical points on the ROC curves are close to (0, 1), and the AUC values are above 0.93. These results suggest that the three models can effectively achieve automatic discrimination of GFs. Finally, the models are demonstrated on three GF events detected with the Qingpu, Nantong, and Cangzhou radars.

1. Introduction

In strong convective weather systems such as a severe storm or squall line, the cold air in the mature phase sinks to a low altitude and advances against the warm ambient airflow at the front of the thunderstorm to form a convergence line. This line is called a gust front (GF) if it reaches a certain strength [1]. In weather radar images, a GF appears as one or more arc-shaped narrowband echoes at the front of the thunderstorm, outflow boundary, bow echo, or squall line. A GF is often accompanied by an increase in atmospheric pressure, a decrease in temperature, sudden changes in wind direction and wind speed, and obvious ground divergence behind it [2]. The low-level wind shear generated by GFs may pose a threat to aircraft operations, especially during takeoff and landing. Therefore, effective automatic monitoring of GFs is necessary [3]. When a GF encounters a strong echo cell, the reflectivity of the cell increases rapidly [4]. In addition, if a GF intersects with an existing convective system, the GF will develop more strongly and cause more destructive disasters [1]. In recent years, the GFs of convective cold pools have also been increasingly recognized as triggers of subsequent convective cells [5]. Therefore, improved techniques for monitoring outflow boundaries may help to better understand the location, timing, and intensity of GF events [6].
At present, research on GFs mainly focuses on algorithm improvement and case analysis. In 1986, Uyeda proposed an automatic GF recognition algorithm (AGFA) based on velocity convergence [7]. In 1993, the GF detection algorithm in the Terminal Doppler Weather Radar (TDWR) system of the United States Federal Aviation Administration was the main method for detecting wind shear, and local airport controllers and supervisors found it very helpful for detecting GFs [8]. Then, in 1994, combining meteorological mechanisms with spatiotemporal distribution characteristics, Troxel proposed a machine-intelligent GF recognition algorithm (MIGFA), which included functional statistical templates to extract the narrowband weak echoes caused by GFs [9]. MIGFA was a milestone for GF recognition, as it first introduced the function template correlation (FTC) method, and many MIGFA-based algorithms were proposed subsequently. MIGFA was later improved with pixel-level fusion technology; although the improved version performed better on the same batch of data, its average accuracy fell from 81.5% to 68%, reflecting the limited generalization ability caused by its data dependence [10]. Based on the statistical features of radar echoes, a GF can be recognized by a predefined template with set threshold values [11], and MIGFA has also been made applicable to microscale or weak GFs [12]. A dynamic weight was suggested to adjust the threshold values of MIGFA [13]. With the development of digital image processing, techniques such as mathematical morphology have been introduced to improve GF recognition algorithms, reaching an accuracy of 73.6% in individual cases [14]. Also based on MIGFA, a neuro-fuzzy GF detection algorithm (NFGDA) was proposed, whose accuracy reaches 93% for S-band radar [15]. Later, a local binary dual-template (LBDT) algorithm based on radar image features was used to identify potential areas of narrowband echoes for automatic GF detection, achieving a high detection probability and a low false alarm rate [16].
However, it is difficult for these traditional methods based on feature templates to match, with a limited set of templates, all narrowband echoes of differing sizes and shapes. With the widespread application of deep learning, two deep convolutional neural networks, the Faster Region-based Convolutional Neural Network (Faster RCNN) and Inception V2, were introduced to train a GF identification model whose accuracy reaches 87%; however, the model still needs to be verified with more data, as only 28 GF labels were used during modeling [17].
Although the samples used differ between studies, the algorithms and accuracy of GF recognition have been continuously enhanced over the past 20 years (Table 1). Nevertheless, these improvements are mainly based on MIGFA, with few based on deep learning methods.
By designing an appropriate neural network architecture and collecting sufficient data, deep learning can learn a mapping from one vector space to another, capturing complex nonlinear relationships.
The remainder of this study is organized as follows. Section 2 describes the data, including the data sources, construction of the labeled dataset, data augmentation, and normalization. Section 3 provides a brief introduction to the Unet neural network, its parameter settings in this study, and the model training processes. In Section 4, the models are evaluated on the test set using the given evaluation indicators. Section 5 demonstrates the application of the models to radar base data from three GF events. Finally, the prospects of deep learning for GF identification are discussed in Section 6, and conclusions are given in Section 7.

2. Data and Methods

2.1. Label Collection

The dataset for deep learning comprises 106 volume scans from S-band new-generation weather radars (CINRAD/SA) during eight convective weather events that occurred in Hubei, Jiangsu, and other regions between 2002 and 2014. The CINRAD/SA radar runs a volume scan once every six minutes. Each scan consists of a series of plan position indicator (PPI) sweeps at a sequence of increasing elevations. The reflectivity used in this study is sampled every 1000 m out to 430 km. The radar data were quality controlled to remove ground clutter and other nonmeteorological echoes using the algorithm developed by the radar detection team of the State Key Laboratory of Severe Weather, Chinese Academy of Meteorological Sciences. A total of 1422 GFs were labeled by means of human–computer interaction, which cost a significant amount of manpower and time. Since clutter and weak echoes were the main sources of errors in GF recognition, more than 30,000 negative samples of clutter and weak echoes were specifically labeled in the dataset. The tags were fixed at a size of 60 km × 60 km and include the radar parameters at nine elevation angles. To display the features of the GF tags, the 0.5° elevation reflectivity of some positive samples is illustrated as grayscale images in Figure 1. The same number of negative samples were randomly tagged from radar data containing no GFs.
The 1280 positive samples from GF events one and two in Table 2 were used as training and validation data, and the remaining 142 positive samples from Wuhan (2014-07-31 0818-0913 UTC) were used as an independent test set. In addition, 142 samples from the training set were held out as the validation set. To ensure a good recognition effect, most of the data are used to train the neural network, which leaves the test set relatively small. To make the evaluation results closer to actual identification performance, 114 GF labels collected by the Nantong radar station on 10 June 2023 were added to the test set.

2.2. Data Augmentation and Normalization

A GF appears as a narrowband echo in a radar image, and the intensity, size, and orientation of the echo are irregular due to the influence of the storm matrix, terrain, and boundary layer. Because the orientation is arbitrary, the narrowband echo feature in the radar echo images of a GF remains unchanged under rotation of the data.
As examples, six GF labels detected with the Wuhan radar at 0455, 0507, 0513, and 0537 UTC on 24 August 2002, at 1200 UTC on 14 June 2005, and at 0855 UTC on 31 July 2014 (Figure 2) are randomly selected to demonstrate the data augmentation steps. It should be emphasized again that, although shown below as grayscale images, the labels are actually saved as radar base data for all elevation angles and parameters.
(1) According to the rotational invariance of GF features, the 1138 GF positive samples in the training set are augmented to 136,560 by rotating the label data counterclockwise every 3° around the matrix center. After adding negative samples of the same size and quantity, a total of 273,120 samples are obtained to build the training dataset. As an example, the left half of Figure 3 shows the images from Figure 2 rotated by 30°.
(2) Since there are no data at the four corners of the rotated images, the 60 km × 60 km images are clipped to 40 km × 40 km to eliminate the blank data while the GF features are preserved (Figure 3). The size of the label matrix in this study is therefore unified as 40 km × 40 km.
(3) The labels are saved in matrix form. The matrixes are normalized by the maximum-minimum method of Equation (1), in which values greater than 70 dBZ (decibel reflectivity factor) are set to 70 and those less than 0 dBZ are set to 0 (a code sketch of all three steps is given after this list):

$$X_i^* = \frac{X_i - X_{\min}}{X_{\max} - X_{\min}} \qquad (1)$$

where $X_i^*$ is the value of the $i$th point after normalization, $X_i$ is the original value, and $X_{\max}$ and $X_{\min}$ are 70 and 0 dBZ, respectively.
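To make the augmentation and normalization steps concrete, the following is a minimal Python sketch, assuming each label is stored as a NumPy reflectivity matrix on a 1 km grid; the function names and the use of scipy.ndimage.rotate are illustrative choices, not the authors' actual implementation.

```python
# Illustrative sketch of steps (1)-(3): rotate every 3 degrees,
# clip 60 km x 60 km to the central 40 km x 40 km, min-max normalize.
import numpy as np
from scipy.ndimage import rotate

def augment_label(label_60km: np.ndarray, step_deg: float = 3.0):
    """Rotate a 60 x 60 label matrix every `step_deg` degrees (120
    rotations) and clip each result to the central 40 x 40 window."""
    samples = []
    n = label_60km.shape[0]                      # 60 gates at 1 km sampling
    lo, hi = (n - 40) // 2, (n - 40) // 2 + 40   # central 40 km x 40 km
    for angle in np.arange(0.0, 360.0, step_deg):
        rotated = rotate(label_60km, angle, reshape=False, order=1, cval=0.0)
        samples.append(rotated[lo:hi, lo:hi])    # blank corners removed
    return samples

def normalize(x: np.ndarray) -> np.ndarray:
    """Min-max normalization of Equation (1): clip to [0, 70] dBZ first."""
    x = np.clip(x, 0.0, 70.0)
    return (x - 0.0) / (70.0 - 0.0)
```

With 1138 positive samples and 120 rotations each, this reproduces the 136,560 augmented positives stated above.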
It is worth emphasizing that Figure 2 and Figure 3 provide a visual representation of the data augmentation process, but the training of the neural network is based on the raw radar intensity data.
Unlike approaches that first convert radar data into grayscale images for model training, the models here are trained directly on radar base data, which is one of the highlights of this study.

3. Model Construction

3.1. Algorithm Introduction

In recent years, deep learning has played an important role in image recognition. A deep learning-based model has shown good performance in squall line recognition [18].
The Unet network, one of the most commonly used deep learning algorithms, performs well in image segmentation and classification. However, Unet has a large number of parameters to train due to its depth, which leads to slow convergence and makes a large network prone to overfitting.
In this study, a Unet-based GF recognition network was designed (Figure 4). The vertical lines represent the network layers, and the numbers above each vertical line indicate the number of nodes in that layer. The two numbers beside the vertical lines denote the matrix size. An output value of one indicates the presence of a GF, while zero denotes its absence. Short black right-facing arrows signify convolution with a step size of 1, a padding size of 1, and a kernel size of 3 × 3, preserving the matrix size after convolution. Long gray right-facing arrows represent feature fusion after downsampling. Black downward arrows denote maximum pooling with a step size of 2 and a filter size of 2 × 2. Gray upward arrows indicate upsampling with a filter size of 2 × 2, restoring and decoding abstracted features to the original matrix size while preserving important features in their corresponding positions.
Compared to the original Unet neural network, this study removes one level of convolution, pooling, and feature concatenation, yielding a shallower network that pays more attention to local features such as textures.
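For readers who want to reproduce the topology, a speculative PyTorch sketch of such a reduced Unet classifier is given below. The channel counts, the use of transposed convolutions for the 2 × 2 upsampling, and the global-pooling classification head are assumptions; the paper specifies only the overall structure (3 × 3 convolutions with stride 1 and padding 1, 2 × 2 max pooling, skip concatenation, one level fewer than the original Unet, and a single 0/1 output).

```python
# Speculative sketch of the reduced Unet in Figure 4 (assumed details noted).
import torch
import torch.nn as nn

def conv_block(c_in, c_out):
    # Two 3x3 convolutions (stride 1, padding 1) keep the matrix size.
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, 3, stride=1, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(c_out, c_out, 3, stride=1, padding=1), nn.ReLU(inplace=True),
    )

class GFUnet(nn.Module):
    def __init__(self, in_ch: int):              # in_ch = 1, 2, or 3 elevations
        super().__init__()
        self.enc1 = conv_block(in_ch, 32)
        self.enc2 = conv_block(32, 64)
        self.bottom = conv_block(64, 128)
        self.pool = nn.MaxPool2d(2)               # 2x2 max pooling, stride 2
        self.up2 = nn.ConvTranspose2d(128, 64, 2, stride=2)  # 2x2 upsampling
        self.dec2 = conv_block(128, 64)            # after skip concatenation
        self.up1 = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec1 = conv_block(64, 32)
        self.head = nn.Sequential(                 # assumed classification head
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Dropout(0.5), nn.Linear(32, 1), nn.Sigmoid(),
        )

    def forward(self, x):                          # x: (batch, in_ch, 40, 40)
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        b = self.bottom(self.pool(e2))
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return self.head(d1)                       # probability a GF is present
```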

3.2. Model Training

Since the height of a GF is generally less than 1.5 km [19], and the closer a GF is to the radar station the more elevation angles can observe it, three Unet-based GF identification models are trained with different elevation angles according to the detection height of the radar beam in a standard atmosphere, so as to use more radar information. Model one (M1) is trained with 206,880 samples containing data at a single elevation angle (0.5°) from 65 to 180 km from the radars; model two (M2) with 49,200 samples at two elevation angles (0.5°, 1.5°) from 40 to 65 km; and model three (M3) with 17,040 samples at three elevation angles (0.5°, 1.5°, and 2.4°) within 40 km. In addition, an early stopping mechanism is adopted to prevent overfitting: iteration stops and the model is saved if the loss on the validation set does not decrease for eight consecutive epochs. Stochastic gradient descent (SGD) is selected as the optimizer, the learning rate is set to 0.01, ReLU is adopted as the activation function, and dropout layers are added to further prevent overfitting during training. A sketch of this training setup is given below.
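The following is a minimal sketch of the training procedure, assuming binary cross-entropy as the loss function and PyTorch-style data loaders; apart from the stated hyperparameters (SGD, learning rate 0.01, patience of eight epochs), the details are illustrative.

```python
# Minimal training loop with SGD (lr = 0.01) and early stopping after
# eight epochs without validation improvement; loss choice is assumed.
import torch

def train(model, train_loader, val_loader, max_epochs=200, patience=8):
    opt = torch.optim.SGD(model.parameters(), lr=0.01)
    loss_fn = torch.nn.BCELoss()
    best_val, stale = float("inf"), 0
    for epoch in range(max_epochs):
        model.train()
        for x, y in train_loader:
            opt.zero_grad()
            loss = loss_fn(model(x).squeeze(1), y.float())
            loss.backward()
            opt.step()
        model.eval()
        with torch.no_grad():                      # mean validation loss
            val = sum(loss_fn(model(x).squeeze(1), y.float()).item()
                      for x, y in val_loader) / len(val_loader)
        if val < best_val:                          # validation improved
            best_val, stale = val, 0
            torch.save(model.state_dict(), "gf_model_best.pt")
        else:
            stale += 1
            if stale >= patience:                   # early stop mechanism
                break
```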

4. Model Evaluation

4.1. Evaluation Indicator

The precision, confusion matrix, and its derived indicators such as receiver operating characteristic curve (ROC) and the area under the ROC (AUC) are used to evaluate the three models by the test set. The confusion matrix can intuitively display the accuracy and categories of classification models by counting the numbers of wrong and right categories and derives five scoring indicators, namely, probability of detection ( P O D ), false positive rate ( F P R ), missed alarm rate ( M A R ), critical success index ( C S I ), and precision, whose formulas are listed in Equations (2)–(6). T P means that a GF actually occurs and is recognized by the models, and F N means that a GF actually occurs but is not recognized. F P means that a GF does not occur but is mistakenly recognized, and T N means that a GF does not occur and is not recognized by the models.
$$\mathrm{POD} = \frac{TP}{TP + FN} \qquad (2)$$
$$\mathrm{FPR} = \frac{FP}{FP + TN} \qquad (3)$$
$$\mathrm{MAR} = \frac{FN}{TP + FN} \qquad (4)$$
$$\mathrm{CSI} = \frac{TP}{TP + FP + FN} \qquad (5)$$
$$\mathrm{Precision} = \frac{TP}{TP + FP} \qquad (6)$$
According to the confusion matrix calculated from the test set, the ROC curve is drawn with POD (sensitivity) as the vertical axis and FPR (1 − specificity) as the horizontal axis, and the AUC is obtained. When AUC = 1, i.e., POD = 1 and FPR = 0, the classifier is perfect, corresponding to the point (0, 1). In practice this ideal point is rarely attained, so the AUC measures how closely the curve approaches it: the larger the AUC, the closer the curve is to (0, 1), and the better the classifier. A sketch of these computations is given below.
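The following sketch shows how the five indicators of Equations (2)-(6) and the ROC/AUC could be computed from test-set predictions; the variable names and the use of scikit-learn are illustrative assumptions.

```python
# Compute POD, FPR, MAR, CSI, Precision, and AUC from test-set results.
# `y_true` are 0/1 labels; `y_score` are predicted GF probabilities.
import numpy as np
from sklearn.metrics import roc_curve, auc

def evaluate(y_true, y_score, threshold=0.5):
    y_true = np.asarray(y_true)
    y_pred = (np.asarray(y_score) >= threshold).astype(int)
    tp = int(np.sum((y_pred == 1) & (y_true == 1)))
    fp = int(np.sum((y_pred == 1) & (y_true == 0)))
    fn = int(np.sum((y_pred == 0) & (y_true == 1)))
    tn = int(np.sum((y_pred == 0) & (y_true == 0)))
    pod = tp / (tp + fn)               # Eq. (2): probability of detection
    fpr = fp / (fp + tn)               # Eq. (3): false positive rate
    mar = fn / (tp + fn)               # Eq. (4): missed alarm rate
    csi = tp / (tp + fp + fn)          # Eq. (5): critical success index
    precision = tp / (tp + fp)         # Eq. (6)
    fpr_curve, pod_curve, _ = roc_curve(y_true, y_score)
    return pod, fpr, mar, csi, precision, auc(fpr_curve, pod_curve)
```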

4.2. Evaluation Results

The models are evaluated on the test set, which includes 214 (M1), 90 (M2), and 94 (M3) samples.
The confusion matrixes calculated from the test set are shown in Figure 5, in which the vertical axis represents the real label and the horizontal axis the model prediction. The confusion matrixes show that the models discriminate well between positive and negative examples.
The ROC and AUC with POD (vertical axis) and FPR (horizontal axis) are shown in Figure 6. The three ROC curves are close to the upper-left corner (0, 1), which indicates high sensitivity for positive samples and low FPR and MAR. Therefore, the features of GF have been successfully learned by the network, and the models perform well in the test set.

5. Model Application

The following sections demonstrate how the three models read the radar base data within their respective range segments and locate GF echoes, realizing automatic GF recognition.

5.1. Qingpu Radar

On 30 April 2021, unstable atmospheric stratification was caused by the intersection of warm, wet air at low altitude with the cold air accompanying a northeast cold vortex moving eastward and southward. A large-scale severe convective weather event, including thunderstorms and hail, occurred from 1000 to 1400 UTC in east China, and GF echoes appeared many times in the Qingpu, Nantong, and other weather radars.
From 1204 to 1402 UTC, GF echoes appear in a total of 23 volume scans of the Qingpu radar, and essentially all of the GFs are identified by the models. As examples, the automatic recognition results are illustrated with eight consecutive volume scans detected by the Qingpu radar on 30 April from 1215 to 1252 UTC. For each volume scan, the recognition window slides from north to south and from east to west to realize automatic GF identification. The size of the recognition window is the same as that of the label data, i.e., 40 km × 40 km, and the step length is 8 km. M1, M2, or M3 is called according to the distance from the window center to the radar: M1 is responsible for GF recognition from 65 to 180 km from the radar, M2 from 40 to 65 km, and M3 within 40 km. A sketch of this dispatch logic is given below.
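The sliding-window dispatch can be sketched as follows, assuming the reflectivity has been gridded onto a radar-centered Cartesian grid at 1 km resolution and that each model is a callable returning a GF probability for a window; both assumptions are illustrative, not the authors' implementation.

```python
# Slide a 40 km x 40 km window in 8 km steps and dispatch to M1/M2/M3
# according to the distance from the window center to the radar.
import numpy as np

def detect_gf(grid, models, km_per_px=1.0, win=40, step=8):
    """`grid` is (elev, ny, nx) reflectivity centered on the radar;
    `models` maps "M1"/"M2"/"M3" to callables taking a window."""
    ny, nx = grid.shape[1:]
    cy, cx = ny // 2, nx // 2                    # radar at grid center
    hits = []
    for i in range(0, ny - win + 1, step):       # north to south
        for j in range(0, nx - win + 1, step):   # east to west
            r = np.hypot(i + win / 2 - cy, j + win / 2 - cx) * km_per_px
            if r < 40:
                model, n_elev = models["M3"], 3  # 0.5, 1.5, 2.4 degrees
            elif r < 65:
                model, n_elev = models["M2"], 2  # 0.5 and 1.5 degrees
            elif r <= 180:
                model, n_elev = models["M1"], 1  # 0.5 degrees only
            else:
                continue                         # beyond recognition range
            window = grid[:n_elev, i:i + win, j:j + win]
            if model(window) >= 0.5:             # GF recognized in window
                hits.append((i, j))
    return hits
```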
In order to better demonstrate the models' recognition effect, Figure 7 shows the recognition results on the 0.5° elevation PPI of the Qingpu radar, verified through human–computer interaction: a window is drawn in red if it is manually determined that a GF is correctly recognized, in black if a GF is missed, and in yellow if a GF is incorrectly recognized. As can be seen from Figure 7, the models accurately identify the areas where a GF occurs, and only one window is missed, marked by the black window in Figure 7g. As an example, Figure 8 shows a cross-sectional view along the main echo of the GF at 1231 UTC, which shows that the GF intensity is less than 30 dBZ and its height is less than 2.5 km.

5.2. Nantong Radar

With the cold air continuing to move eastward and southward, the GFs were also detected by the Nantong radar about half an hour after the GFs occurred in Qingpu.
From 1316 to 1351 UTC, GF echoes appear in a total of seven volume scans of the Nantong radar, and essentially all of the GFs are identified by the models. Figure 9 shows the automatic recognition results for six consecutive scans from 1322 to 1351 UTC. Unlike the Qingpu case, a large area of clutter echo appears around the GF in the Nantong radar. However, because the input data of M2 and M3 come from two and three radar elevations, respectively, the models can accurately capture the narrowband and other characteristics of GFs and identify the size and range of the GF main echo. There are no false positives and only one missed identification, marked with a black window in Figure 9c.
Similarly, Figure 10 shows a cross-sectional view along the main echo of the GF at 1334 UTC, from which it can be seen that the echo fully conforms to the definition of a GF with an intensity less than 30 dBZ and a height less than 2.0 km.

5.3. Cangzhou Radar

Influenced by a high trough, a shear line, and a ground inverted trough, a severe convective weather event including a rainstorm, thunderstorm gale, and hail occurred in the east of Cangzhou from the afternoon to the night of 10 July 2023. GF echoes appear in a total of 12 volume scans of the Cangzhou radar from 0548 to 0748 UTC, and the models achieve accurate recognition of almost all the GFs. Figure 11 illustrates the automatic recognition results, taking eight consecutive scans from 0600 to 0642 UTC as examples. There are no false positives and only one missed identification, marked with a black window in Figure 11c. It should be noted that the Cangzhou radar is a CINRAD/SAD, which has undergone a polarimetric upgrade; because its gate width is 250 m, the average of every four consecutive reflectivity gates is used as model input so that the radial resolution matches the 1000 m sampling of the training data (see the sketch below).
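A one-function sketch of this gate averaging, under the assumption that a radial is stored as a NumPy array:

```python
# Average 250 m gates in groups of four to match the 1000 m sampling
# used in training (illustrative; not the authors' exact code).
import numpy as np

def average_gates(ray_250m: np.ndarray) -> np.ndarray:
    # (n_gates,) at 250 m -> (n_gates // 4,) at 1000 m
    n = ray_250m.size - ray_250m.size % 4
    return ray_250m[:n].reshape(-1, 4).mean(axis=1)
```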
The three GF events occur with different weather backgrounds, and the clutter is also diverse. All three models can effectively identify the GFs, except for a few missed identifications.

6. Discussion

With the upgrade of radar polarization, polarimetric parameters such as differential reflectivity and specific differential phase will be included in the label data for deep learning, and more and more GF data will be collected to abstract more GF characteristics and realize more accurate automatic GF recognition. In addition, based on the recognition results, we are further studying which weather backgrounds and terrain conditions trigger strong convective weather, in order to achieve its forecasting and warning.

7. Conclusions

Three Unet-based GF automatic recognition models were trained with a dataset of 341,280 labels marked by means of human–computer interaction and data augmentation. These labels are not grayscale images but consist of radar base data at multiple elevations. M1 is trained with data at a single elevation angle (0.5°) from 65 to 180 km from the radars, M2 with data at two elevations (0.5°, 1.5°) from 40 to 65 km, and M3 with data at three elevations (0.5°, 1.5°, and 2.4°) within 40 km. According to the evaluation on the test set, the accuracies of M1, M2, and M3 are 97.66%, 90%, and 90.43%, respectively. The POD of all three models is above 89%, and the CSIs are also above 82%; M1 performs exceptionally well. All the ROC curves are close to the point (0, 1), indicating a very high identification probability for GFs.
The practical application of the models is demonstrated with three GF events detected by radars in Qingpu, Nantong, and Cangzhou. Regardless of how the intensity and length of the GFs change, the models can accurately recognize the GFs throughout the entire process from formation to weakening, and they are not susceptible to interference from clutter in the base data; they are therefore helpful in practical forecasting operations. The study demonstrates that deep learning has promising application prospects in GF recognition. However, more GF events need to be collected in the future to continuously improve the accuracy and generalization ability of the models.

Author Contributions

Conceptualization, H.T. and Z.H.; methodology, Z.H.; software, Z.H. and F.W.; validation, H.T., F.W. and P.X.; formal analysis, F.W.; investigation, H.T.; resources, Z.H. and L.L.; data curation, H.T., P.X., L.L. and F.X.; writing—original draft preparation, H.T.; writing—review and editing, Z.H. and H.T.; visualization, H.T.; supervision, Z.H.; project administration, Z.H.; funding acquisition, Z.H. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by the Key-Area Research and Development Program of Guangdong Province (2020B1111200001), the Key Laboratory of South China Sea Meteorological Disaster Prevention and Mitigation of Hainan Province (Grant No. SCSF202301), the Joint Fund of the Key Laboratory of Atmosphere Sounding, CMA, and the Research Centre on Meteorological Observation Engineering Technology, CMA (U2021Z05), the Key Project of Monitoring, Early Warning and Prevention of Major Natural Disasters of China (2019YFC1510304), the Science and Technology Development Fund of CAMS (2021KJ019), the Basic Research Fund of CAMS (2021Z003), the Science and Technology Research Project of the Guangdong Province Meteorological Bureau (GRMC2022Z05, GRMC2021XQ03), and the Open Grants of the State Key Laboratory of Severe Weather (2023LASW-B02).

Data Availability Statement

The data presented in this study are available on request from the corresponding author. The data are not publicly available due to privacy restrictions.

Acknowledgments

The authors would like to sincerely thank Leng Liang of the Key Laboratory of Rainstorm Monitoring and Early Warning in Hubei Province for providing technical guidance and collection of part of the dataset.

Conflicts of Interest

The authors declare no conflicts of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

References

  1. Zhang, P.C.; Du, B.Y.; Dai, T.P. Radar Meteorology; China Meteorological Press: Beijing, China, 2001; pp. 392–402. (In Chinese)
  2. Bedard, A.J.; Hooke, W.H.; Beran, D.W. The Dulles Airport Pressure Jump Detector Array for Gust Front Detection. Bull. Am. Meteorol. Soc. 1977, 58, 920–927.
  3. Klingle, D.L.; Smith, D.R.; Wolfson, M.M. Gust Front Characteristics as Detected by Doppler Radar. Mon. Weather Rev. 1987, 115, 905–918.
  4. Kingsmill, D.E. Convection Initiation Associated with a Sea-Breeze Front, a Gust Front, and Their Collision. Mon. Weather Rev. 1995, 123, 2913–2933.
  5. Henneberg, O.; Meyer, B.; Haerter, J.O. Particle-Based Tracking of Cold Pool Gust Fronts. J. Adv. Model. Earth Syst. 2020, 12, e2019MS001910.
  6. Weaver, J.F.; Nelson, S.P. Multiscale Aspects of Thunderstorm Gust Fronts and Their Effects on Subsequent Storm Development. Mon. Weather Rev. 1982, 110, 707–718.
  7. Uyeda, H.; Zrnic, D.S. Automatic Detection of Gust Fronts. J. Atmos. Ocean. Technol. 1986, 3, 36–50.
  8. Hermes, L.G.; Witt, A.; Smith, S.D.; Klingle-Wilson, D.; Morris, D.; Stumpf, G.J.; Eilts, M.D. The Gust-Front Detection and Wind-Shift Algorithms for the Terminal Doppler Weather Radar System. J. Atmos. Ocean. Technol. 1993, 10, 693–709.
  9. Troxel, S.W.; Delanoy, R.L. Machine-Intelligent Approach to Automated Gust-Front Detection for Doppler Weather Radars. In Sensing, Imaging, and Vision for Control and Guidance of Aerospace Vehicles; Int. Soc. Opt. Photonics: 1994; Volume 2220, pp. 182–193.
  10. Kwon, S.M. Pixel-Level Data Fusion Techniques Applied to the Detection of Gust Fronts. Ph.D. Thesis, Massachusetts Institute of Technology, Cambridge, MA, USA, 1994.
  11. Zheng, J.F.; Zhang, J.; Zhu, K.Y.; Liu, Y.X.; Zhang, T. Automatic Identification and Alert of Gust Fronts. J. Appl. Meteor. Sci. 2013, 24, 117–125.
  12. Zheng, J.; Zhang, J.; Zhu, K.; Liu, L.; Liu, Y. Gust Front Statistical Characteristics and Automatic Identification Algorithm for CINRAD. J. Meteorol. Res. 2014, 28, 607–623.
  13. Xu, F.; Yang, J.; Zheng, Y.Y.; Zhou, H.G. Improvement of the MIGFA Technique for Identifying Gust Front and Its Verification. Meteorol. Mon. 2016, 42, 44–53.
  14. Leng, L.; Xiao, Y.J.; Wu, T. Automatic Recognition of Gust Fronts Based on Mathematical Morphology. Meteorol. Sci. Technol. 2016, 44, 1–6+46. (In Chinese)
  15. Hwang, Y.; Yu, T.Y.; Lakshmanan, V.; Kingfield, D.M.; Lee, D.I.; You, C.H. Neuro-Fuzzy Gust Front Detection Algorithm with S-Band Polarimetric Radar. IEEE Trans. Geosci. Remote Sens. 2017, 55, 1618–1628.
  16. Yuan, Y.; Wang, P.; Wang, D.; Jia, H. An Algorithm for Automated Identification of Gust Fronts from Doppler Radar Data. J. Meteorol. Res. 2018, 32, 444–455.
  17. Xu, Y.F.; Zhao, F.; Mao, C.Y. Gust Front Detection Algorithm Based on Deep Convolutional Neural Network. Torrential Rain Disasters 2020, 39, 81–88. (In Chinese)
  18. Xie, P.; Hu, Z.; Yuan, S.; Zheng, J.; Tian, H.; Xu, F. Radar Echo Recognition of Squall Line Based on Deep Learning. Remote Sens. 2023, 15, 4726.
  19. Wang, Y.D.; Jing, X.Y.; Wang, W.D. Analysis on the Birth and Disappearance History and Weather Characteristics of a Rare Gust Front in Heilongjiang Province. Heilongjiang Meteorol. 2021, 38, 9–13. (In Chinese)
Figure 1. Grayscale images of the radar reflectivity at a 0.5° elevation angle from some of the positive samples. The saved data include radar base data for all elevation angles and parameters.
Figure 2. The 60 km × 60 km GF label grayscale images with 0.5° elevation reflectivity detected with Wuhan radar at (a) 0455, (b) 0507, (c) 0513, and (d) 0537 UTC on 24 August 2002, (e) 1200 UTC on 14 June 2005, and (f) 0855 UTC on 31 July 2014. The narrowband echoes in the central region of the grayscale images are the primary feature of the labeled data collected in this study.
Figure 3. (a1–f1) Images in Figure 2 rotated by 30°, and (a2–f2) 40 km × 40 km label images clipped from the 60 km × 60 km images of (a1–f1) to eliminate the blank data while the GF features are preserved.
Figure 4. The Unet-based GF recognition network.
Figure 5. The confusion matrixes calculated from the test set for (a) M1, (b) M2, and (c) M3. The values of the evaluation indicators, computed according to Equations (2)–(6), are listed in Table 3.
Figure 6. The ROC and AUC values of the models.
Figure 7. The GF recognition results of 0.5° elevation PPI of Qingpu radar at (a) 1215, (b) 1220, (c) 1225, (d) 1231, (e) 1236, (f) 1242, (g) 1247, and (h) 1252 UTC, in which the red windows indicate that GFs are correctly recognized and the black window represents missed recognition. The distance circle is 50 km, and the yellow line in (d) is the position of the section in Figure 8.
Figure 8. The cross-section along the yellow line in Figure 7d. At that moment, the development of the GF was robust, with the main body reaching a height of approximately 1.7 km. The intensity ranged between 20 and 30 dBZ, and the total length exceeded 120 km.
Figure 9. Similar to Figure 7 but the GF recognition results of 0.5° elevation PPI of Nantong radar at (a) 1322, (b) 1328, (c) 1334, (d) 1339, (e) 1345, (f) 1351 UTC, and the yellow line in (c) is the position of the section in Figure 10. The red windows indicate that GFs are correctly recognized and the black window represents missed recognition.
Figure 10. The cross-section along the yellow line in Figure 9c. At that moment, the GF had a main body height of approximately 1 km, intensity ranging between 15 and 25 dBZ, and a length exceeding 120 km.
Figure 11. The GF recognition results of the 0.5° elevation PPI of the Cangzhou radar at (a) 0600, (b) 0606, (c) 0612, (d) 0618, (e) 0624, (f) 0630, (g) 0636, and (h) 0642 UTC, in which the red windows indicate the correct recognition of a GF and the black one means missed recognition.
Table 1. GF recognition algorithms by year and accuracy.

Year | Author | Algorithm | Accuracy
1994 | Troxel et al., 1994 [9] | MIGFA | 81.5%
1994 | Kwon, 1994 [10] | Pixel-level data fusion MIGFA | 68%
2013 | Zheng et al., 2013 [11] | Bidirectional gradient method | 68.4%
2016 | Xu et al., 2016 [13] | Improved MIGFA | 68%
2016 | Leng et al., 2016 [14] | Mathematical morphology | 73.6%
2017 | Hwang et al., 2017 [15] | NFGDA | 93%
2020 | Xu et al., 2020 [17] | Faster RCNN and Inception V2 | 91.7%
Table 2. Radar station and data time.

Number | Radar Station | Time (UTC)
1 | Wuhan | 2002-08-23 2247-2305; 2002-08-24 0000-0537; 2005-06-14 1200-1225; 2014-07-31 0818-0913
2 | Nanjing | 2009-06-03 1600-1824; 2009-06-14 0936-1142; 2011-07-25 0912-1200; 2012-05-16 1040-1129
3 | Nantong | 2023-06-10 0731-0926
Table 3. The values of the five indicators calculated with the test set by the GF recognition models.

Models | POD | FPR | MAR | CSI | Precision
M1 | 98.13% | 2.80% | 1.87% | 95.45% | 97.22%
M2 | 91.11% | 11.11% | 8.89% | 82% | 89.13%
M3 | 89.36% | 8.51% | 10.64% | 82.35% | 91.30%
