Technical Note

Enhancing Rainfall Nowcasting Using Generative Deep Learning Model with Multi-Temporal Optical Flow

National Institute of Meteorological Sciences, Jeju 63568, Republic of Korea
* Author to whom correspondence should be addressed.
Remote Sens. 2023, 15(21), 5169; https://doi.org/10.3390/rs15215169
Submission received: 18 September 2023 / Revised: 27 October 2023 / Accepted: 28 October 2023 / Published: 29 October 2023
(This article belongs to the Section Atmospheric Remote Sensing)

Abstract

Precipitation nowcasting is critical for preventing damage to human life and the economy. Radar echo tracking methods such as optical flow algorithms have been widely employed for precipitation nowcasting because they track precipitation motions well. Accordingly, such methods, including the McGill algorithm for precipitation nowcasting by Lagrangian extrapolation (MAPLE), have been implemented for operational precipitation nowcasting. However, advection-based methods struggle to predict the nonlinear motions of precipitation fields and dynamic processes, such as the growth and decay of precipitation. This study proposes an enhanced optical flow model using a multi-temporal optical flow field and a conditional generative adversarial network (cGAN). We trained the proposed model using a 3-year radar dataset provided by the Korea Meteorological Administration and performed forecast skill evaluations using both qualitative and quantitative methods. In particular, the multi-temporal optical flow enhances prediction accuracy for the nonlinear motion of precipitation fields, and the model's accuracy can be further improved through the cGAN structure. We verified that these improvements hold for 0–3 h lead times. Based on this performance enhancement, we conclude that the multi-temporal optical flow model with cGAN has a potential role in operational precipitation nowcasting.

1. Introduction

Precipitation nowcasting is crucial for preventing damage to human life and financial losses; thus, nowcasting models have been extensively proposed in recent decades. Using weather radar datasets, radar echo tracking based on the optical flow method has been adopted for precipitation nowcasting [1,2,3,4,5] and has also been implemented in operational precipitation nowcasting such as the McGill algorithm for precipitation nowcasting by Lagrangian extrapolation (MAPLE) [6]. Precipitation nowcasting based on such models has the benefit of maintaining the spatial resolution of predicted images. However, the available prediction timescale is limited, typically around a 1 h lead time [1,2], due to their poor performance when predicting newly developed and/or dissipated precipitation. Additionally, optical flow methods often struggle to capture the temporal evolution of motion, which makes tracking the nonlinear motion commonly observed during precipitation events challenging.
To train and predict the nonlinear evolution of precipitation fields, precipitation nowcasting models based on a deep learning approach have been extensively proposed using a weather radar dataset. For instance, convolutional long short-term memory models (Conv-LSTM) have been employed to predict sequential output from sequential inputs [7,8,9]. In addition, U-Net convolutional neural networks have also been used (e.g., [10,11,12,13,14]). Models based on convolutional neural networks outperform strong baselines such as optical flow methods and numerical weather prediction [9]. However, Fourier analysis reveals that these models produce blurred prediction maps; thus, they are unlikely to be adopted for operational precipitation nowcasting [11].
Compared with the convolutional models listed above, generative models have attracted attention as a technology for precipitation nowcasting [15,16,17]. A generative model produces images or videos based on the distribution of the training image/video dataset. Generative adversarial networks (GANs) [18] have been extensively implemented in various models for efficient training. A GAN simultaneously trains a generator and a discriminator. The generator produces images from random input noise, and the images produced by a well-trained generator show distributions similar to real images. The discriminator learns to classify fake images (i.e., images produced by the generator) and real images. Through competitive training of the generator and discriminator, the generator learns to make its output images indistinguishable from real images. Generative models such as the conditional GAN (cGAN) use additional conditions (e.g., the input data of the generator) for training and can generate targeted outputs that suit specific conditions [19]. Generative models based on the cGAN, such as the deep generative model of radar (DGMR), have been proposed, and these models have the benefit of reducing the blurry effect compared with convolutional neural networks [15,17]. However, their prediction accuracy based on the critical success index (CSI) is poor, particularly when predicting strong rainfall events ($\geq 5\ \mathrm{mm/h}$), and the prediction window has so far been limited to a 1.5 h lead time, which is much shorter than the timescale required for operational precipitation nowcasting. Indeed, according to surveys across Europe, the preferred lead time for initiating preparatory measures is 3–6 h [20,21]. Hence, the currently proposed models are unlikely to be adopted for operational precipitation nowcasting.
Recently, deep learning-based methods using the optical flow technique have been reported to capture nonlinear motions, both in computer vision [22,23,24,25,26] and in atmospheric science using satellite imagery [27]. These methods extract time-sequence input features using deep learning networks to track nonlinear evolution, which cannot be captured by the optical flow algorithm alone. In this study, we adopted these approaches to advance the precipitation nowcasting model based on advection through the optical flow method. Specifically, the proposed model is divided into two parts: (1) input image generation by linear extrapolation using multi-temporal optical flow fields, and (2) a conditional GAN for capturing the features of nonlinear evolution that are not included in the linearly extrapolated input image. The forecasting skills of the model were evaluated using both quantitative and qualitative methods.
This paper is organized as follows. In Section 2, we describe the radar dataset and propose the model architecture, including the optical flow method and conditional GAN structure. The baselines for the model comparison are also summarized. Section 3 reports the results of qualitative and quantitative analyses. Finally, Section 4 provides a brief summary.

2. Method

2.1. Radar Dataset

Weather radars measure instantaneous rain rates, and radar data have been used to develop precipitation nowcasting models. The Korea Meteorological Administration (KMA) provides radar data from ten S-band Doppler radars covering latitudes of 31.0°–40.5°N and longitudes of 121.5°–132.5°E. The Lambert conformal conic projection with a central point at 38.0°N and 126.0°E was employed. The radar reflectivity dataset provided by the KMA was generated using the hybrid surface rainfall (HSR) method, which synthesizes reflectivity from the hybrid surface that is unaffected by non-meteorological echoes, ground clutter, beam blockage, and the bright band [28]. The HSR data have spatial and temporal resolutions of 0.5 km and 10 min, respectively. The precipitation rate $R$ (in mm/h) can be obtained from the radar reflectivity factor $Z$ (in $\mathrm{mm^6/m^3}$) through the Z-R relation ($Z = 148R^{1.59}$), which is currently employed at the Weather Radar Center of the KMA. Radar data from 2018, 2019, and 2021 were used for training, whereas data from 2020 and 2022 were used for validation and testing, respectively.
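For illustration, the Z-R conversion can be inverted in a few lines. The sketch below is not the authors' code; it assumes the reflectivity grid is stored in dBZ, so the reflectivity factor is first recovered as $Z = 10^{\mathrm{dBZ}/10}$ before inverting the Z-R relation.

```python
import numpy as np

# Minimal sketch: convert HSR reflectivity (assumed stored in dBZ) to rain
# rate with the Z-R relation used by the KMA, Z = 148 * R**1.59.

def dbz_to_rain_rate(dbz: np.ndarray, a: float = 148.0, b: float = 1.59) -> np.ndarray:
    """Convert reflectivity in dBZ to rain rate R in mm/h via Z = a * R**b."""
    z = 10.0 ** (dbz / 10.0)          # reflectivity factor Z in mm^6/m^3
    return (z / a) ** (1.0 / b)       # invert the Z-R relation for R

# Example: 40 dBZ corresponds to roughly 14 mm/h under this relation.
print(dbz_to_rain_rate(np.array([40.0])))
```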

2.2. Motion of Precipitation Field

The optical flow method was employed to obtain the motion fields of the precipitation events. It extracts the velocity field $V_f$ between two consecutive images. Here, we used the TV-L1 algorithm (see [29] for details), which minimizes the following regularized functional $J$:
$$J = \int \left[\, |\nabla V_f| + \lambda \left| R_0(\mathbf{x}) - R_1\!\left(\mathbf{x} + V_f(\mathbf{x})\right) \right| \,\right] d\mathbf{x}.$$
Here, $R_0$ and $R_1$ correspond to precipitation images measured at two different time steps. The first and second terms represent the smoothness of the motion fields and the optical flow constraint (assuming brightness constancy during motion), respectively. The relative importance of these terms is determined by the free parameter $\lambda$. The TV-L1 algorithm is included in the OpenCV library (https://opencv.org, accessed on 17 September 2023).
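As a concrete illustration, the following sketch estimates the TV-L1 flow between two consecutive rain-rate images via OpenCV's contrib module. It assumes opencv-contrib-python is installed; the 8-bit normalization and the $\lambda$ value are illustrative choices, not the operational settings.

```python
import cv2
import numpy as np

# Minimal sketch: TV-L1 optical flow between two rain-rate images R0 and R1.
# Images are normalized to 8-bit grayscale before flow estimation.

def tvl1_flow(r0: np.ndarray, r1: np.ndarray, lam: float = 0.15) -> np.ndarray:
    """Return the flow field V_f with shape (H, W, 2) advecting r0 toward r1."""
    to_u8 = lambda x: cv2.normalize(x, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    tvl1 = cv2.optflow.DualTVL1OpticalFlow_create()
    tvl1.setLambda(lam)  # weight of the data (brightness-constancy) term
    return tvl1.calc(to_u8(r0), to_u8(r1), None)
```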
To analyze the flow motion in detail, we decomposed the optical flow field $V_f$ into its divergence component ($V_{div}$) and curl component ($V_{curl}$) based on Helmholtz's theorem [30], defined as follows:
$$V_f = V_{div} + V_{curl},$$
$$V_{div} = \nabla \chi, \qquad V_{curl} = \mathbf{k} \times \nabla \psi,$$
$$\nabla^2 \chi = \nabla \cdot V_f, \qquad \nabla^2 \psi = \mathbf{k} \cdot \left(\nabla \times V_f\right).$$
Note that all flow vectors comprise two velocity components (i.e., $V_f = u_f\,\mathbf{i} + v_f\,\mathbf{j}$; $V_{curl} = u_{curl}\,\mathbf{i} + v_{curl}\,\mathbf{j}$; $V_{div} = u_{div}\,\mathbf{i} + v_{div}\,\mathbf{j}$), where $\mathbf{i}$ and $\mathbf{j}$ are unit vectors in the in-plane directions and $\mathbf{k}$ is directed along the out-of-plane direction. $\psi$ and $\chi$ denote the streamfunction and velocity potential, respectively. To calculate $V_{curl}$ and $V_{div}$, we first computed $\chi$ for $V_{div}$ using iterative successive overrelaxation, as described in [2]. We then subtracted $V_{div}$ from the original optical flow field $V_f$. The component $V_{div}$ is responsible for the growth and decay of the precipitation field, whereas the component $V_{curl}$ describes the circulating motion of an incompressible field.
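For illustration, the decomposition can be sketched as follows. Note that this sketch solves the Poisson equation for $\chi$ with an FFT on a doubly periodic grid for brevity, whereas the study itself uses iterative successive overrelaxation as in [2].

```python
import numpy as np

# Illustrative Helmholtz decomposition of a flow field (u, v) on a doubly
# periodic grid using FFT Poisson solves (a stand-in for the SOR solver).

def helmholtz_decompose(u: np.ndarray, v: np.ndarray):
    """Split (u, v) into divergent and rotational (curl) components."""
    ny, nx = u.shape
    kx = 2j * np.pi * np.fft.fftfreq(nx)
    ky = 2j * np.pi * np.fft.fftfreq(ny)
    KX, KY = np.meshgrid(kx, ky)
    k2 = KX**2 + KY**2
    k2[0, 0] = 1.0                             # avoid division by zero (mean mode)

    uh, vh = np.fft.fft2(u), np.fft.fft2(v)
    div_h = KX * uh + KY * vh                  # Fourier transform of div(V_f)
    chi_h = div_h / k2                         # solve laplacian(chi) = div(V_f)
    u_div = np.real(np.fft.ifft2(KX * chi_h))  # V_div = grad(chi)
    v_div = np.real(np.fft.ifft2(KY * chi_h))
    return (u_div, v_div), (u - u_div, v - v_div)  # V_curl = V_f - V_div
```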
To examine the motion characteristics of the precipitation events, we conducted Fourier analysis on the decomposed field components $V_{curl}$ and $V_{div}$. Using the power spectra of the curl component, $F_{curl}(k_x, k_y)$, and the divergence component, $F_{div}(k_x, k_y)$, obtained through the Fourier transform, we computed the radially averaged power spectra $P_{curl}(k)$ and $P_{div}(k)$ (see also [31] and the references therein) as follows:
$$P_{curl}(k) = \frac{1}{N_r(k)} \sum_{i=1}^{N_r(k)} F_{curl}(k_i),$$
$$P_{div}(k) = \frac{1}{N_r(k)} \sum_{i=1}^{N_r(k)} F_{div}(k_i).$$
Here, $k \equiv \sqrt{k_x^2 + k_y^2}$ represents the radial wavenumber, and $N_r(k)$ is the number of wavenumber samples that satisfy $|k - k_i| \leq dk/2$ within the radial wavenumber bin $dk$. The ratio $\phi(k) \equiv P_{curl}(k)/P_{div}(k)$ was also estimated to examine the relative importance of the turbulent flow motions as a function of the spatial scale.
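A minimal sketch of this radial averaging (with illustrative binning choices) is given below; the annular-bin average plays the role of the $N_r(k)$-sample sum in the equations above.

```python
import numpy as np

# Radially averaged power spectrum P(k): the 2-D Fourier power is averaged
# over annular bins of radial wavenumber k = sqrt(kx^2 + ky^2).

def radial_power_spectrum(field: np.ndarray, n_bins: int = 64):
    ny, nx = field.shape
    power = np.abs(np.fft.fftshift(np.fft.fft2(field))) ** 2
    ky, kx = np.indices((ny, nx))
    k = np.hypot(kx - nx // 2, ky - ny // 2)       # radial wavenumber (pixel units)
    edges = np.linspace(0.0, k.max(), n_bins + 1)
    idx = np.clip(np.digitize(k.ravel(), edges) - 1, 0, n_bins - 1)
    p_sum = np.bincount(idx, weights=power.ravel(), minlength=n_bins)
    n_r = np.bincount(idx, minlength=n_bins)       # N_r(k): samples per bin
    centers = 0.5 * (edges[1:] + edges[:-1])
    return centers, p_sum / np.maximum(n_r, 1)
```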
Panels (a) and (b) of Figure 1 display the power spectra $P_{curl}(k)$ and $P_{div}(k)$ for 40 arbitrarily selected precipitation events with rain rates of 30 mm/h or more from June to September 2020. The peaks of $P_{curl}(k)$ and $P_{div}(k)$ occur at approximately $O(10^{-1})\ \mathrm{km^{-1}}$ and $O(10^{-2})\ \mathrm{km^{-1}}$, respectively, indicating that the nature of precipitation motions depends strongly on the spatial scale. This scale dependence is prominently characterized by the ratio $\phi(k) \equiv P_{curl}(k)/P_{div}(k)$ shown in panel (c) of Figure 1. On large scales corresponding to the synoptic scale, $k/2\pi < 10^{-2}\ \mathrm{km^{-1}}$, the translational motion represented by the $V_{div}$ component is dominant, as expected ($\phi(k) < 1$). In contrast, on smaller scales, $k/2\pi > 10^{-2}\ \mathrm{km^{-1}}$, the turbulent motion represented by the $V_{curl}$ component becomes dominant ($\phi(k) > 1$). We interpret the spatial scale $L = 2\pi/k \approx 10^2\ \mathrm{km}$ as crucial for operational nowcasting. For instance, for an averaged flow speed $|V_f| \approx 5\text{–}10\ \mathrm{m/s}$, the dynamical timescale is roughly $L/|V_f| \approx$ 3–6 h, which is comparable to the preferred timescale for nowcasting [20,21]. Hence, for nowcasting at 0–6 h lead times, the nonlinear motion of the precipitation fields significantly affects the accuracy of the nowcasting model.
The nonlinear evolution of the velocity field can be observed through the multi-temporal features of the velocity field. Figure 2 displays the optical flow fields $V_f(t, t-\Delta t)$ obtained with three different time intervals, $\Delta t = 10$, 20, and 30 min, for two example events. As both examples show, the precipitation motions contain multi-temporal features. In the upper panels, for instance, the direction of movement of the heavy rainfall region depends on the time interval. With the interval $\Delta t = 10$ min, the heavy rainfall region with rain rates $> 10\ \mathrm{mm/h}$ mainly moves eastward. With the interval $\Delta t = 30$ min, on the other hand, the vector field associated with the heavy rainfall region is mainly aligned toward the southern area. Such nonlinear evolution of the vector field is also observed in the example displayed in the bottom panels. Hence, considering multi-temporal motions could enhance the accuracy of precipitation nowcasting by capturing nonlinear motions at multiple spatial scales.

2.3. Model Architecture

In this subsection, we propose a model architecture that includes an optical flow method and conditional GAN. As shown in Figure 3, the model consists of two parts: (1) linear extrapolation using multi-temporal optical flow fields, and (2) a conditional GAN (cGAN) to capture the nonlinear evolution of precipitation events.
To resolve the multi-temporal features of precipitation motions, we calculated the temporally averaged flow field by summing the optical flow fields obtained at different time steps, as shown in Figure 3a:
$$\tilde{V}_f(t, t-\Delta t) \equiv \frac{1}{N} \sum_{i=1}^{N} \frac{1}{i}\, V_f\!\left(t,\ t - i\Delta t\right).$$
With the parameters $N = 3$ and $\Delta t = 10$ min, the field $\tilde{V}_f$ is the mean field averaged over 30 min. The magnitude of the vector field obtained with a time interval longer than 30 min was smaller than that obtained with a shorter interval; therefore, we considered the optical flow fields only up to $N = 3$. The future frame extrapolated by $\tilde{V}_f(t, t-\Delta t)$ can be estimated as follows:
$$\tilde{I}_f = g\!\left(I_t,\ \tilde{V}_f(t, t-\Delta t)\right),$$
where $g(\cdot)$ represents a backward warping function.
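A sketch of this two-step extrapolation is given below, reusing the hypothetical `tvl1_flow` helper from Section 2.2. The backward warp uses `cv2.remap`, one common realization of $g(\cdot)$; the operational implementation may differ.

```python
import cv2
import numpy as np

# Sketch of the multi-temporal mean flow: flows estimated over i*dt intervals
# (i = 1..N) are each scaled by 1/i so that all terms share the same per-dt
# displacement, then averaged. `frames` holds the most recent N+1 radar images
# ordered oldest to newest.

def multi_temporal_flow(frames: list, n: int = 3) -> np.ndarray:
    latest = frames[-1]
    flows = [tvl1_flow(frames[-1 - i], latest) / i for i in range(1, n + 1)]
    return np.mean(flows, axis=0)                  # V_f tilde, shape (H, W, 2)

def backward_warp(frame: np.ndarray, flow: np.ndarray) -> np.ndarray:
    """g(I_t, V): sample each target pixel from its upstream source location."""
    h, w = frame.shape[:2]
    gx, gy = np.meshgrid(np.arange(w), np.arange(h))
    map_x = (gx - flow[..., 0]).astype(np.float32)
    map_y = (gy - flow[..., 1]).astype(np.float32)
    return cv2.remap(frame.astype(np.float32), map_x, map_y, cv2.INTER_LINEAR)
```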
We then employed the conditional GAN (cGAN) method to capture the temporal variation in rainfall intensity and the nonlinear motion of the precipitation fields. The cGAN consists of a generator $G$ and a discriminator $D$. While the generator of the original GAN produces the data distribution from random noise, the image $y' = G(\tilde{I}_f)$ produced by the generator of the cGAN is conditioned on the input image $x = \tilde{I}_f$ [19]. Here, the future image obtained by linear advection through the multi-temporal optical flow, $\tilde{I}_f$, is used as the input image. The discriminator $D$ learns to classify whether an image comes from the generator ($y' = G(\tilde{I}_f)$) or from the ground truth ($y = I_{t+10}$).
We employed Pix2Pix [32] for the cGAN implementation, which contains a generator based on the U-Net [33] architecture to create $y'$ and a discriminator based on the patch-GAN to compare $y$ and $y'$. Figure 4 depicts the structures of the generator ($G$) and discriminator ($D$). In the Pix2Pix model, $G$ minimizes the loss function $L_{cGAN}(G, D)$ as it is trained to produce an image $y' \approx y$ from the input $\tilde{I}_f$, whereas $D$ maximizes $L_{cGAN}(G, D)$. $L_{cGAN}(G, D)$ can be expressed as follows:
$$L_{cGAN}(G, D) = \mathbb{E}\left[\log D(x, y)\right] + \mathbb{E}\left[\log\left(1 - D(x, y')\right)\right].$$
To improve the quality of the output images, the L1 loss function $L_{L1}$ is additionally considered:
$$L_{L1}(G) = \mathbb{E}\left[\lVert y - y' \rVert_1\right].$$
Combining the adversarial loss function $L_{cGAN}(G, D)$ and the L1 loss function $L_{L1}$, the total objective with a free parameter $\lambda$ is described as follows:
$$G^{*} = \arg \min_{G} \max_{D}\ L_{cGAN}(G, D) + \lambda L_{L1}(G).$$
Here, we summarize the details of model training. To reduce the computational cost, we resized the images from 2048 × 2048 pixels to 512 × 512 pixels. Each resized image covers a 1024 km × 1024 km area around the Korean Peninsula with a 2 km spatial interval. The hyperparameters were tuned by optimizing the model and are summarized as follows: binary cross-entropy was employed as the adversarial loss, and Adam was used as the optimizer with a learning rate of 0.0001 and momentum parameters $\beta_1 = 0.5$ and $\beta_2 = 0.999$. In addition, after conducting performance tests with values of $\lambda$ in the range of 10–200, we determined the optimal free parameter for the L1 loss term to be $\lambda = 100$. The model was trained for 100 epochs with a batch size of eight, and the parameters of the generator and discriminator were updated simultaneously within each epoch.
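A minimal PyTorch sketch of this training step, under the stated hyperparameters, is shown below; `G` and `D` stand in for the U-Net generator and patch-GAN discriminator of Figure 4 and are assumed to be defined elsewhere, with `D` taking the conditioning input and a candidate image concatenated along the channel axis.

```python
import torch
import torch.nn as nn

# Sketch (not the authors' released code) of the Pix2Pix-style objective:
# BCE adversarial loss, L1 loss with lambda = 100, and Adam with lr = 1e-4
# and betas = (0.5, 0.999), as described in the text.

bce, l1, lam = nn.BCEWithLogitsLoss(), nn.L1Loss(), 100.0
opt_g = torch.optim.Adam(G.parameters(), lr=1e-4, betas=(0.5, 0.999))
opt_d = torch.optim.Adam(D.parameters(), lr=1e-4, betas=(0.5, 0.999))

def train_step(x, y):
    """One update: x is the warped input I_f~, y is the ground truth I_(t+10)."""
    y_fake = G(x)

    # Discriminator step: real pairs are labeled 1, generated pairs 0.
    d_real = D(torch.cat([x, y], dim=1))
    d_fake = D(torch.cat([x, y_fake.detach()], dim=1))
    d_loss = bce(d_real, torch.ones_like(d_real)) + \
             bce(d_fake, torch.zeros_like(d_fake))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator step: adversarial term plus the lambda-weighted L1 term.
    d_out = D(torch.cat([x, y_fake], dim=1))
    g_loss = bce(d_out, torch.ones_like(d_out)) + lam * l1(y_fake, y)
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```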

3. Results

3.1. Models for Performance Comparison

In this section, we report the forecast skill evaluation using qualitative and quantitative methods. Specifically, we examined four types of models to evaluate the performance improvements resulting from the use of the multi-temporal optical flow and cGAN. The models are summarized as follows:
Single-temporal model without cGAN: a model that includes a single-temporal optical flow field, $V_f(t, t-\Delta t)$, but excludes the cGAN structure;
Multi-temporal model without cGAN: a model that includes the multi-temporal optical flow field, $\tilde{V}_f(t, t-\Delta t)$, but excludes the cGAN structure;
Single-temporal model with cGAN: a model that includes a single-temporal optical flow field, $V_f(t, t-\Delta t)$, and the cGAN structure;
Multi-temporal model with cGAN: a model that includes the multi-temporal optical flow field, $\tilde{V}_f(t, t-\Delta t)$, and the cGAN structure.
As a reference, we also employed Eulerian persistence (hereafter, Persistence), which assumes that the precipitation at any lead time $t + N$ is the same as at the forecast time $t$ (see the minimal sketch below). Because this simple model is a strong baseline, particularly for short lead times, it has been widely employed to quantify and compare the performance of developed AI models [2,13]. In addition, Persistence enables an indirect comparison between the model proposed in this study and the deep learning models proposed in previous studies.
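In code, the Persistence baseline is essentially a one-liner, sketched here for completeness:

```python
import numpy as np

# Eulerian persistence: every lead time repeats the most recent observed frame.

def persistence_forecast(last_frame: np.ndarray, n_leads: int) -> list:
    return [last_frame.copy() for _ in range(n_leads)]
```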

3.2. Qualitative Evaluation

We compared the nowcasting outputs obtained by the different models. An example of precipitation nowcasting at 1–3 h lead times is displayed in Figure 5. In the case of the models without the multi-temporal optical flow field, the predicted precipitation motions occur mainly from west to east, which is inconsistent with the ground truth. When the multi-temporal optical flow field is considered, the motion toward the southern area is captured, as shown in the middle panels, up to a 3 h lead time. We point out that, although the inclusion of the cGAN can capture the temporal changes in rainfall intensity, tracking the multi-temporal optical flow field is of greater importance in predicting the nonlinear evolution of the precipitation field.
To quantify the properties of the motion vectors in the sample shown in Figure 5, we measured their statistics. The frequency distribution of the motion fields at 1–3 h lead times is shown in Figure 6. Here, the frequency distribution of motion fields, $dN/d\log|V_f|$, represents the number of pixels within the logarithmic bin $d\log|V_f|$. The velocity distributions of the models with the multi-temporal optical flow field show good agreement with that of the ground truth. The models without the multi-temporal optical flow field (blue and magenta lines), on the other hand, slightly overestimate the motion speeds of precipitation compared with both the multi-temporal models and the ground truth. Owing to such overestimation, forecasts from the models without the multi-temporal optical flow field may contain more misses and false alarms.
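A sketch of this diagnostic (with illustrative binning choices) follows:

```python
import numpy as np

# Number of pixels per logarithmic speed bin, dN/dlog|V_f|, computed from a
# flow field of shape (H, W, 2), as plotted in Figure 6.

def speed_distribution(flow: np.ndarray, n_bins: int = 30):
    speed = np.hypot(flow[..., 0], flow[..., 1]).ravel()
    speed = speed[speed > 0]                           # log bins require |V| > 0
    edges = np.logspace(np.log10(speed.min()), np.log10(speed.max()), n_bins + 1)
    counts, _ = np.histogram(speed, bins=edges)
    dlog = np.diff(np.log10(edges))                    # constant width in log10
    centers = np.sqrt(edges[1:] * edges[:-1])          # geometric bin centers
    return centers, counts / dlog
```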
To test the operational feasibility of the model with the cGAN, we examined the blurry effects of the nowcasting model by calculating the radially averaged power spectra of the precipitation fields. Figure 7 shows the power spectra of the same sample at 1–3 h lead times. The models with the cGAN show slightly blurry forecasts at scales of $k/2\pi \gtrsim 1/20\ \mathrm{km^{-1}}$ owing to the adversarial loss function $L_{cGAN}(G, D)$. Despite these blurry effects, we interpret that the models with the cGAN could be applicable to precipitation nowcasting, because the blurriness is independent of the lead time, whereas the blurry effects of models based on the U-Net convolutional neural network become more prominent as the lead time increases.

3.3. Quantitative Evaluation

We evaluated forecast skills by estimating pixel-wise scores such as the mean absolute error (MAE), critical success index (CSI), probability of detection (POD), and false alarm rate (FAR).
The MAE captures errors in the predicted rainfall rate:
$$\mathrm{MAE} = \frac{1}{N} \sum_{j=1}^{N} \left| y_j - y'_j \right|,$$
where $y_j$ and $y'_j$ denote the rainfall intensities in the $j$th pixel of the corresponding radar image obtained from the ground truth and the prediction, respectively, and $N$ is the total number of pixels.
The CSI is the fraction of forecast events that were correctly forecasted, and it is typically recognized as a score for model accuracy. The POD denotes the fraction of correctly forecasted rainfall events among observed events, whereas the FAR is the ratio of incorrectly forecasted rainfall events to all forecasted rainfall events. These scores are given as follows:
$$\mathrm{CSI} = \frac{\mathrm{hits}}{\mathrm{hits} + \mathrm{false\ alarms} + \mathrm{misses}},$$
$$\mathrm{POD} = \frac{\mathrm{hits}}{\mathrm{hits} + \mathrm{misses}},$$
$$\mathrm{FAR} = \frac{\mathrm{false\ alarms}}{\mathrm{hits} + \mathrm{false\ alarms}}.$$
The CSI and POD range from 0 to 1, with a perfect score of 1; the FAR also ranges from 0 to 1, with a perfect score of 0. Although the accuracy of nowcasting can be tested using the CSI, the CSI can be inflated when nowcasting overestimates the area and intensity of the precipitation fields. We therefore evaluated the CSI, POD, and FAR separately at thresholds of 1 and 10 mm/h [14].
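A minimal sketch of these pixel-wise scores is given below; the thresholding and counting follow the standard contingency-table definitions above.

```python
import numpy as np

# Contingency counts at a rain-rate threshold give the CSI, POD, and FAR;
# the MAE is averaged over all pixels.

def skill_scores(pred: np.ndarray, truth: np.ndarray, thr: float = 1.0) -> dict:
    hits = np.sum((pred >= thr) & (truth >= thr))
    misses = np.sum((pred < thr) & (truth >= thr))
    false_alarms = np.sum((pred >= thr) & (truth < thr))
    return {
        "CSI": hits / max(hits + misses + false_alarms, 1),
        "POD": hits / max(hits + misses, 1),
        "FAR": false_alarms / max(hits + false_alarms, 1),
        "MAE": float(np.mean(np.abs(pred - truth))),
    }
```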
The MAE scores at 0–3 h lead times are shown in Figure 8. Although the results at very short lead times (within 20 min) could be affected by the optical flow model's overestimation of the motion fields for heavy precipitation over the Korean Peninsula (as shown in Figure 6), we confirmed that the impact of the motion fields on accuracy and on the evolution of the precipitation field becomes more significant at longer lead times of 0.5 to 3 h. Hence, the models including the optical flow method generally outperformed the Persistence model. As illustrated by the qualitative analysis in Figure 5, the inclusion of the multi-temporal optical flow field significantly contributes to the accuracy of predicting precipitation field locations: the MAE values of the models incorporating the multi-temporal optical flow fields are notably lower than those of the models lacking such fields. Furthermore, the multi-temporal model with cGAN exhibits lower MAE values than the multi-temporal model without cGAN, demonstrating that the cGAN provides more accurate predictions of rainfall intensities.
Figure 9 presents the CSI, POD, and FAR scores at thresholds of 1 and 10 mm/h. Based on the CSI and POD evaluations, the accuracy is influenced by both the multi-temporal optical flow field and the cGAN. Compared with the single-temporal model without cGAN, the CSI values of the single-temporal model with cGAN and the multi-temporal model without cGAN show improvement within 0–3 h lead times. Furthermore, when the multi-temporal optical flow and the cGAN are considered together, the performance enhancement becomes more significant, as indicated by the red solid lines in Figure 9. For example, compared with the single-temporal optical flow model, the multi-temporal model with cGAN enhances accuracy by approximately 13–16% in predicting heavy rainfall events ($\geq 10\ \mathrm{mm/h}$) at 1–3 h lead times, whereas the multi-temporal model without cGAN shows an improvement of approximately 4–6%. In this context, it may be possible to further enhance the MAPLE-based nowcasting models currently operational at the KMA by incorporating the cGAN. Note that the prediction accuracy in terms of the CSI can improve when the predicted rainfall intensity is overestimated. However, based on the FAR scores, we interpret the enhancement of the CSI scores as independent of such effects: because the FAR of the multi-temporal model with cGAN is generally lower than that of the other models, we conclude that this model does not overestimate the area or intensity of rainfall.
To assess the generalizability of our findings, we conducted a long-term forecast skill evaluation. Figure 10 displays the CSI scores at thresholds of 1 and 10 mm/h for the models over the summer season (June to September) of 2020. Our analysis confirms that both the multi-temporal optical flow method and the refinement using the cGAN contribute to improved prediction performance.

4. Summary and Discussion

The optical flow technique has been widely used in operational precipitation nowcasting models, and its performance is comparable to that of other state-of-the-art models. However, because optical flow models cannot predict the nonlinear evolution, growth, and decay of precipitation fields, their prediction timescale has been limited to roughly a 1 h lead time. This study reported a possible method for improving current optical flow-based precipitation nowcasting by tracking the nonlinear evolution of precipitation fields. To follow the nonlinear motion more precisely, we first analyzed the characteristics of the motions typically exhibited in precipitation events and calculated the multi-temporal optical flow field. We then implemented a cGAN structure to capture dynamic processes such as the growth and decay of precipitation fields. Based on forecast skill evaluations employing both qualitative and quantitative methods, we verified that the inclusion of the multi-temporal optical flow field significantly improves the accuracy of predicting the nonlinear motion of precipitation fields, and that the refinement using the cGAN further enhances the accuracy of nowcasting. Therefore, we interpret that the proposed model can be applied to enhance currently available advection-based precipitation nowcasting models such as MAPLE.
While this study mainly focuses on predicting heavy rainfall during the summer season, it is also essential to predict ice-phase precipitation, including snow, using the proposed model. Both microphysical processes within the cloud and the conditions related to the ambient dynamics and thermodynamics of the system affect the intensity and amount of snow within the cloud and at the surface, making accurate prediction of snow precipitation challenging (see [34] for more details). In the context of data-driven precipitation nowcasting models based on deep learning architectures, it is possible to consider a multimodal dataset, including radar, satellite observations, and surface observations, to improve the accuracy of predicting different types of precipitation. This can be achieved by training the model with information about cloud types and phases from satellite observations and incorporating thermodynamic effects from surface observations. We leave this as a topic for future work.

Author Contributions

All the authors contributed significantly to this study. J.-H.H. and H.L. designed the study. J.-H.H. performed the experiments and analyzed the results. J.-H.H. wrote the original draft of the manuscript, and H.L. provided suggestions to revise the manuscript. All authors have read and agreed to the published version of the manuscript.

Funding

This work was funded by the KMA Research and Development program “Developing AI technology for weather forecasting” under Grant (KMA 2021-00121).

Data Availability Statement

The radar data are freely available on the Korea Meteorological Administration (KMA) data released website (https://data.kma.go.kr/cmmn/main.do, accessed on 17 September 2023), and the code used in this work is available upon request from the corresponding author.

Acknowledgments

The authors thank anonymous referees for their constructive comments on the manuscript.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Ayzel, G.; Heistermann, M.; Winterrath, T. Optical flow models as an open benchmark for radar-based precipitation nowcasting (rainymotion v0.1). Geosci. Model Dev. 2019, 12, 1387–1402. [Google Scholar] [CrossRef]
  2. Bechini, R.; Chandrasekar, V. An Enhanced Optical Flow Technique for Radar Nowcasting of Precipitation and Winds. J. Atmos. Ocean. Technol. 2017, 34, 2637–2658. [Google Scholar] [CrossRef]
  3. Bowler, N.E.H.; Pierce, C.E.; Seed, A. Development of a precipitation nowcasting algorithm based upon optical flow techniques. J. Hydrol. 2004, 288, 74–91. [Google Scholar] [CrossRef]
  4. Pulkkinen, S.; Nerini, D.; Perez Hortal, A.A.; Velasco-Forero, C.; Seed, A.; Germann, U.; Foresti, L. Pysteps: An open-source Python library for probabilistic precipitation nowcasting (v1.0). Geosci. Model Dev. 2019, 12, 4185–4219. [Google Scholar] [CrossRef]
  5. Seed, A.W.; Pierce, C.E.; Norman, K. Formulation and evaluation of a scale decomposition-based stochastic precipitation nowcast scheme. Water Resour. Res. 2013, 49, 6624–6641. [Google Scholar] [CrossRef]
  6. Shi, X.; Lee, Y.H.; Ha, J.-C.; Chang, D.-E.; Bellon, A.; Zawadzki, I.; Lee, G. McGill Algorithm for Precipitation Nowcasting by Lagrangian Extrapolation (MAPLE) Applied to the South Korean Radar Network. Part II: Real-Time Verification for the Summer Season. Asia-Pac. J. Atmos. Sci. 2010, 46, 383–391. [Google Scholar]
  7. Shi, X.; Chen, Z.; Wang, H.; Yeung, D.-Y.; Wong, W.-K.; Woo, W.-C. Convolutional LSTM Network: A Machine Learning Approach for Precipitation Nowcasting. In Proceedings of the Conference on Neural Information Processing Systems, Montreal, QC, Canada, 7–12 December 2015; Volume 28. [Google Scholar]
  8. Shi, X.; Gao, Z.; Lausen, L.; Wang, H.; Yeung, D.-Y. Deep Learning for Precipitation Nowcasting: A Benchmark and A New Model. In Proceedings of the Conference on Neural Information Processing Systems, Long Beach, CA, USA, 4–9 December 2017; Volume 30. [Google Scholar]
  9. Sønderby, C.K.; Espeholt, L.; Heek, J.; Dehghani, M.; Oliver, A.; Salimans, T.; Kalchbrenner, N. MetNet: A neural weather model for precipitation forecasting. arXiv 2020, arXiv:2003.12140. [Google Scholar]
  10. Agrawal, S.; Barrington, L.; Bromberg, C.; Burge, J.; Gazen, C.; Hickey, J. Machine learning for precipitation nowcasting from radar images. arXiv 2019, arXiv:1912.12132. [Google Scholar]
  11. Ayzel, G.; Scheffer, T.; Heistermann, M. RainNet v1.0: A convolutional neural network for radar-based precipitation nowcasting. Geosci. Model Dev. 2020, 13, 2631–2644. [Google Scholar] [CrossRef]
  12. Ko, J.; Lee, K.; Hwang, H.; Oh, S.G.; Son, S.W.; Shin, K. Effective training strategies for deep-learning-based precipitation nowcasting and estimation. Comput. Geosci. 2022, 161, 105072. [Google Scholar] [CrossRef]
  13. Lebedev, V.; Ivashkin, V.; Rudenko, I.; Ganshin, A.; Molchanov, A.; Ovcharenko, S.; Grokhovetskiy, R.; Bushmarinov, I.; Solomentsev, D. Precipitation nowcasting with satellite imagery. In Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, Anchorage, AK, USA, 4–8 August 2019; pp. 2680–2688. [Google Scholar]
  14. Oh, S.G.; Park, C.; Son, S.W.; Ko, J.; Shin, K.; Kim, S.; Park, J. Evaluation of Deep-Learning-Based Very Short-Term Rainfall Forecasts in South Korea. Asia-Pac. J. Atmos. Sci. 2023, 59, 239–255. [Google Scholar] [CrossRef]
  15. Choi, S.; Kim, Y. Rad-cGAN v1.0: Radar-based precipitation nowcasting model with conditional generative adversarial networks for multiple dam domains. Geosci. Model Dev. 2022, 15, 5967–5985. [Google Scholar] [CrossRef]
  16. Kim, Y.; Hong, S. Very Short-Term Rainfall Prediction Using Ground Radar Observations and Conditional Generative Adversarial Networks. IEEE Trans. Geosci. Remote Sens. 2022, 60, 4104308. [Google Scholar] [CrossRef]
  17. Ravuri, S.; Lenc, K.; Willson, M.; Kangin, D.; Lam, R.; Mirowski, P.; Fitzsimons, M.; Athanassiadou, M.; Kashem, S.; Madge, S.; et al. Skilful precipitation nowcasting using deep generative models of radar. Nature 2021, 597, 672–677. [Google Scholar] [CrossRef]
  18. Goodfellow, I.J.; Pouget-Abadie, J.; Mirza, M.; Xu, B.; Warde-Farley, D.; Ozair, S.; Courville, A.; Bengio, Y. Generative Adversarial Nets. In Proceedings of the Conference on Neural Information Processing Systems, Montreal, QC, Canada, 8–13 December 2014; Volume 27, pp. 2672–2680. [Google Scholar]
  19. Mirza, M.; Osindero, S. Conditional generative adversarial nets. arXiv 2014, arXiv:1411.1784. [Google Scholar]
  20. Kox, T.; Gerhold, L.; Ulbrich, U. Perception and use of uncertainty in severe weather warnings by emergency services in Germany. Atmos. Res. 2015, 158–159, 292–301. [Google Scholar] [CrossRef]
  21. Sivle, A.D.; Agersten, S.; Schmid, R.; Simon, A. Use and perception of weather forecast information across Europe. Meteorol. Appl. 2022, 29, e2053. [Google Scholar] [CrossRef]
  22. Jiang, H.; Sun, D.; Jampani, V.; Yang, M.-H.; Learned-Miller, E.; Kautz, J. Super slomo: High quality estimation of multiple intermediate frames for video interpolation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 9000–9008. [Google Scholar]
  23. Liu, M.; Xu, C.; Yao, C.; Lin, C.; Zhao, Y. JNMR: Joint Non-linear Motion Regression for Video Frame Interpolation. arXiv 2022, arXiv:2206.04231. [Google Scholar]
  24. Liu, Y.; Xie, L.; Siyao, L.; Sun, W.; Qiao, Y.; Dong, C. Enhanced quadratic video interpolation. In Computer Vision—ECCV 2020 Workshops, Part IV; Springer International Publishing: Glasgow, UK, 2020; pp. 41–56. [Google Scholar]
  25. Xu, X.; Siyao, L.; Sun, W.; Yin, A.; Yang, M.-H. Quadratic video interpolation. In Proceedings of the Conference on Neural Information Processing Systems, Vancouver, BC, Canada, 8–14 December 2019; Volume 32. [Google Scholar]
  26. Zhang, Y.; Wang, C.; Tao, D. Video frame interpolation without temporal priors. In Proceedings of the Conference on Neural Information Processing Systems, Online, 6–12 December 2020; Volume 33. [Google Scholar]
  27. Seo, M.; Choi, Y.; Ryu, H.; Park, H.; Bae, H.; Lee, H.; Seo, W. Intermediate and Future Frame Prediction of Geostationary Satellite Imagery with Warp and Refine Network. In Proceedings of the AAAI 2022 Fall Symposium: The Role of AI in Responding to Climate Challenges, Arlington, VA, USA, 18 November 2022. [Google Scholar]
  28. Kwon, S.; Jung, S.H.; Lee, G. Inter-comparison of radar rainfall rate using Constant Altitude Plan Position Indicator and hybrid surface rainfall maps. J. Hydrol. 2015, 531, 234. [Google Scholar] [CrossRef]
  29. Wedel, A.; Pock, T.; Zach, C.; Bischof, H.; Cremers, D. An improved algorithm for TV-L1 optical flow. In Statistical and Geometrical Approaches to Visual Motion Analysis; Springer: Berlin/Heidelberg, Germany, 2009; pp. 23–45. [Google Scholar]
  30. Arfken, G.B.; Weber, H.J. Mathematical Methods for Physicists, International Edition, 6th ed.; Academic Press: San Diego, CA, USA, 2005; pp. 95–101. [Google Scholar]
  31. Ruzanski, E.; Chandrasekar, V. Scale Filtering for Improved Nowcasting Performance in a High-Resolution X-Band Radar Network. IEEE Trans. Geosci. Remote Sens. 2011, 49, 2296–2307. [Google Scholar] [CrossRef]
  32. Isola, P.; Zhu, J.; Zhou, T.; Efros, A.A. Image-to-Image Translation with conditional Adversarial Networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 1125–1134. [Google Scholar]
  33. Ronneberger, O.; Fischer, P.; Brox, T. U-net: Convolutional networks for biomedical image segmentation. In Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention—MICCAI 2015, Munich, Germany, 5–9 October 2015; pp. 234–241. [Google Scholar]
  34. Gultepe, I.; Heymsfield, A.J.; Field, P.R.; Axisa, D. Ice-Phase Precipitation. Meteorol. Monogr. 2017, 58, 6.1–6.36. [Google Scholar] [CrossRef]
Figure 1. Power spectra of 40 events with rain rates of 30 mm/h or more, obtained from $V_{curl}$ (panel (a), $P_{curl}(k)$) and $V_{div}$ (panel (b), $P_{div}(k)$); the ratio between $P_{curl}(k)$ and $P_{div}(k)$ is shown in panel (c). The red solid line marks the characteristic spatial scale, $2\pi/k = 100\ \mathrm{km}$ (see the main text for more details).
Figure 2. The optical flow fields $V_f(t, t-\Delta t)$ estimated for three different time intervals ($\Delta t = 10$, 20, and 30 min) for two examples (top panels: 01 UTC 30 July 2020; bottom panels: 17 UTC 11 June 2020).
Figure 3. An overview of the proposed model, consisting of (a) an input generation stage using the optical flow method and (b) a refining stage using the conditional GAN.
Figure 4. Structure of the generator and discriminator. (a) Generator (G). (b) Discriminator (D).
Figure 5. Nowcasting outputs of an example case (01 UTC 30 July 2020) at 1–3 h lead times.
Figure 6. The frequency distribution of motion fields, $dN/d\log|V_f|$, for the sample (0100 UTC 30 July 2020). Black, red, blue, green, and magenta solid lines indicate the ground truth, the multi-temporal model with cGAN, the single-temporal model with cGAN, the multi-temporal model without cGAN, and the single-temporal model without cGAN, respectively.
Figure 7. Power spectra of the precipitation fields for the sample (0100 UTC 30 July 2020).
Figure 8. MAE measured using 40 events with rain rates of 30 mm/h or more.
Figure 9. CSI, POD, and FAR scores at thresholds of 1 and 10 mm/h, measured using 40 events with rain rates of 30 mm/h or more.
Figure 10. CSI scores at thresholds of 1 and 10 mm/h, measured at 0–3 h lead times, using the dataset from the summer season (June to September) of 2020.