Article

Deep Learning-Based Radar Composite Reflectivity Factor Estimations from Fengyun-4A Geostationary Satellite Observations

1
Key Laboratory of Radiometric Calibration and Validation for Environmental Satellites (LRCVES/CMA), National Satellite Meteorological Center, China Meteorological Administration (NSMC/CMA), Beijing 100081, China
2
Guangdong Province Key Laboratory for Climate Change and Natural Disaster Studies, School of Atmospheric Sciences, Sun Yat-sen University, Zhuhai 519082, China
*
Author to whom correspondence should be addressed.
Remote Sens. 2021, 13(11), 2229; https://doi.org/10.3390/rs13112229
Submission received: 2 May 2021 / Revised: 28 May 2021 / Accepted: 2 June 2021 / Published: 7 June 2021
(This article belongs to the Special Issue Optical and Laser Remote Sensing of Atmospheric Composition)

Abstract
Ground-based weather radar data play an essential role in monitoring severe convective weather, and the timely detection of such weather systems is critical for protecting people’s lives and property. However, the limited spatial coverage of radars over the ocean and mountainous regions greatly limits their effective application. In this study, we propose a novel deep learning-based framework to retrieve radar composite reflectivity factor (RCRF) maps from Fengyun-4A new-generation geostationary satellite data. The framework consists of three main processes: satellite and radar data preprocessing, a deep learning-based regression model for retrieving the RCRF maps, and the testing and validation of the model. In addition, three typical cases are analyzed and studied, including a cluster of rapidly developing convective cells, a Northeast China cold vortex, and Super Typhoon Haishen. Compared with the high-quality precipitation rate products from the Integrated Multi-satellite Retrievals for Global Precipitation Measurement, the retrieved RCRF maps are in good agreement with the precipitation pattern. The statistical results show that the retrieved RCRF maps have an R-square of 0.88–0.96, a mean absolute error of 0.3–0.6 dBZ, and a root-mean-square error of 1.2–2.4 dBZ.

1. Introduction

Severe weather is typically intense, abrupt, widely distributed, and rapidly developing and evolving, with large destructive power, and thus causes widespread concern [1]. A rapid and effective response to severe weather events, such as strong winds, heavy precipitation and flash floods, is essential to reducing the threats to people’s lives and property. The key to forecasting these sudden disastrous weather systems is timely observation with a high spatial resolution, which serves as the foundation of both situational awareness and forecasting. Doppler weather radar, with its high spatio-temporal resolution, has become one of the most effective tools for analyzing and nowcasting meso- and micro-scale weather systems. To date, many well-established and robust methods for monitoring, tracking and extrapolating severe weather based on radar echo data have been developed and extensively applied [2,3,4,5]. In particular, the reflectivity factor, radial velocity and velocity spectrum width derived from Doppler weather radar echo data can be used to analyze the cloud and hydrometeor distribution of severe weather phenomena more accurately and promptly [6,7]. Furthermore, convection initiation, determined by a radar reflectivity factor threshold of >35 dBZ, can be used to forecast the occurrence of severe weather events in advance [8,9,10]. In addition to real-time weather analysis, radar reflectivity data are also employed as initial conditions for high-resolution numerical weather prediction (NWP) models to perform cloud analysis [11,12]. In variational data assimilation systems, radar reflectivity can effectively initialize both model hydrometeors and the microphysics parameters associated with a microphysical scheme [13,14].
The assimilation of radar reflectivity data into NWP models effectively enriches the moisture and rainwater information in the cloud, further improving forecasts of the location and intensity of convective weather systems [15,16].
However, most of the world still lacks the observational infrastructure of weather radars, especially over ocean and mountainous regions. This is perhaps the most serious disadvantage of relying on radars to monitor and forecast severe weather events in real time. In contrast, geostationary (GEO) satellite imaging systems can continuously capture images of the Earth from space, and these images are already widely applied for cloud cluster tracking. Satellite observations, especially the high-frequency observations of convective clouds from current GEO meteorological satellites, can fill the observation gaps of radar and monitor the rapid generation and development of severe weather. Recently, the new-generation GEO meteorological satellites, such as Fengyun-4A, Himawari-8/9, and the Geostationary Operational Environmental Satellite-R (GOES-R), have significantly enhanced their spatio-temporal resolution and increased their spectral detection channels [17,18,19]. Therefore, GEO satellites will play an ever more essential role in monitoring and nowcasting severe convective weather [20,21,22,23]. Indeed, the new-generation GEO meteorological satellites have already been used for convective initiation nowcasting [24,25,26], dynamic structure analysis of super typhoons [27], assimilating infrared radiances to improve convective predictability, precipitation estimation [28,29], and more [30].
In recent years, significant progress has been made in deep neural network (DNN) techniques, mainly attributed to the increased amount of available data, better training and predicting model architectures, and the ease of implementation on powerful, specialized hardware such as Graphics Processing Units (GPUs) [31,32]. In weather forecasting, deep neural network techniques have already been used to forecast precipitation, severe weather and other phenomena based on radar reflectivity observations [33,34]. New Gated Recurrent Unit (GRU) models beyond the Long Short-Term Memory, such as the Trajectory GRU and the Generative Adversarial-Convolutional GRU, have been proposed for precipitation nowcasting [35,36]. Recently, a deep learning model named MetNet was developed to forecast precipitation up to 8 h ahead at a high spatial resolution of 1 km² and a temporal resolution of 2 min; it has been shown to outperform optical flow-based models [37].
In this investigation, a deep learning-based algorithm is developed to retrieve radar composite reflectivity factor (RCRF) maps from observations of the new-generation GEO satellite Fengyun-4A. Based on the microphysical properties of the cloud top, the RCRF maps retrieved from satellite observations can provide basic detection and diagnosis data for severe weather in areas not covered by radar. Moreover, the RCRF maps provide an excellent supplement to existing real-time radar data: fusing satellite-retrieved RCRF maps with radar data observed at different times shortens the effective observation interval, making it possible to monitor rapidly developing convective systems. In addition, the RCRF maps retrieved from Fengyun-4A observations can provide a relatively intuitive picture of the geographical distribution and intensity of strong convective systems and typhoon cyclones where the coverage of radar data is incomplete.
The remainder of this paper proceeds as follows. Section 2 introduces the GEO satellite, the weather radar data and the precipitation rate data from the Global Precipitation Measurement (GPM) mission. Section 3 describes the detailed algorithm for retrieving the RCRF maps. Section 4 provides the validations and discussions about this new algorithm. Finally, Section 5 presents the main conclusions of this study.

2. Data

2.1. Interest Fields from Fengyun-4A AGRI

Fengyun-4A is China’s first new-generation GEO meteorological satellite, successfully launched on December 11, 2016. It carries three new, advanced optical instruments, namely the Advanced Geosynchronous Radiation Imager (AGRI), the Geosynchronous Interferometric Infrared Sounder and the Lightning Mapping Imager. The sub-satellite point of Fengyun-4A/AGRI is located at 104.7°E, with an observation field covering East Asia, Australia, the Indian Ocean, etc. The spatial resolution of the AGRI ranges from 0.5 km (visible channels) to 4 km (infrared channels) at the nadir point. In addition, regular observations are acquired through two scanning operations: a 15-min full-disk scan and a 5-min scan of the Chinese continental domain. Compared with the previous Fengyun-2 GEO satellites, Fengyun-4A has significantly enhanced capabilities in weather warning and forecasting, owing to its additional radiation channels and improved spatio-temporal resolution [17]. Moreover, the detection channels of Fengyun-4A are sensitive to cloud-top properties (such as phase and particle size) that cannot be obtained with the limited channel selection of the previous Fengyun-2 [38].
Table 1 shows the sensitivity of the interest fields from the Fengyun-4A/AGRI channels to cloud-top properties; these fields are used in the models for retrieving the radar reflectivity factor. They are chosen because the radar reflectivity factor is closely related to cloud microphysical properties and hydrometeor distribution [39,40,41]. The visible channels have a higher spatial resolution (0.5 km) but are only applicable during the daytime, while the infrared channels provide observations around the clock. For Fengyun-4A/AGRI, the strongly absorbing 1.61 μm channel in the near-infrared band shows high sensitivity to the ice cloud phase and cloud effective radius. The visible channel at 0.65 μm is a weakly absorbing channel sensitive to cloud optical thickness and cloud phase, especially for strong convective clouds [42]. The absorption characteristics of cloud particles at 8.6, 10.8, and 12.3 μm differ significantly between cloud phases: for liquid water cloud particles, the brightness temperature difference between 10.8 and 12.3 μm is smaller than that between 8.6 and 10.8 μm, while the opposite holds for ice cloud particles [43]. To further evaluate the potential of the composite radar data, a comparison test between visible and infrared channels for model sensitivity analysis is also conducted in this study.
For the visible channel, the observed albedo is normalized by the solar zenith angle as follows:
$$\mathrm{albedo}_{M} = \mathrm{albedo} \times \sec\theta \quad (1)$$
where $\theta$ is the solar zenith angle ($\theta$ < 70°).
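As an illustration, the solar-zenith normalization in Equation (1) can be sketched as follows; the function name and the NaN masking of large zenith angles are illustrative assumptions, not the authors’ operational code.

```python
import numpy as np

def normalize_albedo(albedo, solar_zenith_deg, max_sza=70.0):
    """Normalize visible-channel albedo by the solar zenith angle,
    as in Equation (1): albedo_M = albedo * sec(theta).
    Pixels with theta >= max_sza degrees are masked with NaN."""
    theta = np.radians(solar_zenith_deg)
    corrected = albedo / np.cos(theta)  # sec(theta) = 1 / cos(theta)
    return np.where(solar_zenith_deg < max_sza, corrected, np.nan)
```

For example, an albedo of 0.5 observed at a 60° solar zenith angle is normalized to 1.0, since sec 60° = 2.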

2.2. Weather Radar Data

Ground-based weather radars detect convective systems, rainfall intensity and motion by transmitting electromagnetic waves and interpreting the backscattered echoes. The RCRF maps used in this study are derived from the China New Generation Weather Radar System (data available at http://10.1.64.154/cimissapiweb/apidataclassdefine_list.action, accessed on 1 May 2021) [44]. Composite reflectivity is the maximum base reflectivity value occurring in a given vertical column within the radar umbrella. The China New Generation Weather Radars scan in several pre-defined volume coverage patterns: the radar tilts to a fixed elevation angle above the horizontal plane and performs a 360° horizontal sweep, then changes the elevation angle and completes another 360° sweep. Composite reflectivity yields a plane view of the most intense portions of thunderstorms and can be compared with the base reflectivity to help users accurately determine the 3-D structure of a thunderstorm. Currently, the temporal resolution of the operational weather radar data is 6 min.
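Conceptually, compositing reduces a volume scan to a maximum over the vertical dimension. A minimal sketch, assuming the volume has already been gridded onto a regular 3-D array (operational mosaicking additionally involves interpolation and quality control):

```python
import numpy as np

def composite_reflectivity(volume_dbz):
    """Composite reflectivity: the maximum base reflectivity in each
    vertical column of a gridded radar volume.
    volume_dbz: array of shape (n_levels, n_y, n_x) in dBZ."""
    return np.nanmax(volume_dbz, axis=0)
```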

2.3. GPM IMERG Data

The Integrated Multi-satellite Retrievals for GPM (IMERG) dataset, based on a unified U.S. algorithm, provides a high-quality precipitation product from multi-satellite observations. In this study, the IMERG dataset is primarily used to evaluate the performance of the retrieved RCRF in areas not covered by radar. This gridded fusion dataset has a temporal resolution of 30 min and a spatial resolution of 0.1° × 0.1°, with a maximum precipitation rate of 50 mm∙h−1, covering the area between 60°S and 60°N. Technically, through the inter-calibration, fusion and interpolation of several satellite microwave precipitation products (such as the NOAA Joint Polar Satellite System-Advanced Technology Microwave Sounder, JPSS-ATMS), together with microwave-calibrated infrared satellite estimates (NOAA GOES-E/W), rain gauge analyses, and other potential precipitation estimators, this uniform gridded precipitation product is estimated at fine spatio-temporal scales for the Tropical Rainfall Measuring Mission and GPM eras over the globe [45]. Note that the freely released IMERG V04A version data are delayed by about 3–4 months; this time lag makes them unsuitable for near-real-time storm monitoring and nowcasting applications. However, this high-quality precipitation dataset can still be used to validate the RCRF retrieved in this investigation. For more details about the IMERG data, please refer to https://disc.gsfc.nasa.gov/datasets/GPM_3IMERGHHL_06/summary?keywords="IMERG%20late" accessed on 1 May 2021 [46,47].

3. Methodology

To develop the retrieval method for RCRF maps, we explore a deep learning approach consisting of three modules: data preprocessing, model training and RCRF validation, based on three independent datasets, i.e., a training dataset, a validation dataset and a test dataset (Figure 1). The training dataset is used to train the model by optimizing its learnable parameters with the back-propagation algorithm. To evaluate the performance of the model, the independent validation dataset is used to assess its ability to generalize to unseen data, i.e., to make sure that the model is not overfitting.

3.1. Training and Validation Data

To train a model that estimates RCRF maps from multi-band infrared/visible Fengyun-4A/AGRI observations using a deep learning algorithm, we first collected satellite observations and actual RCRF data for the same period to construct the training and validation datasets. Because of the difference in spatial resolution between Fengyun-4A/AGRI (about 4.0 km) and the real radar reflectivity factor (about 1.0 km), we averaged the radar reflectivity factor horizontally to match each observed FY-4A/AGRI pixel. We collocated simultaneous Fengyun-4A/AGRI and radar reflectivity factor data from May to October 2020 over a broad spatial domain in China (70°E–135°E, 16°N–55°N). Since the ground-based weather radars complete a scanning cycle every 6 min, only collocated pairs with an observation time difference of less than 3 min are retained. On the one hand, this ensures a one-to-one match; on the other hand, it ensures that cloud-system movement is negligible within 3 min. Quality control of both satellite and radar data is performed after data matching: radar echo samples in clear-sky pixels, as determined by the Fengyun-4A/AGRI operational cloud mask products [48], are deleted, and incomplete satellite data with lost lines are discarded. From the fully matched dataset, 70% of samples are randomly selected for model training, 15% for model validation, and the remaining 15% for model testing (Table 2). Since radar echoes cover only a small part of the ocean, those data are used as the ocean validation samples; the remaining data are treated as land samples. The complete statistical validation results are shown in Section 4.2.
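The spatial matching and the random 70/15/15 split described above can be sketched as follows; the block-averaging factor of 4 (1-km radar pixels onto the ~4-km AGRI grid) and the function names are simplifying assumptions, not the authors’ code.

```python
import numpy as np

def block_average(radar_dbz, factor=4):
    """Average high-resolution radar reflectivity onto a coarser grid
    by taking the mean over factor x factor pixel blocks."""
    h, w = radar_dbz.shape
    h2, w2 = h // factor * factor, w // factor * factor  # crop to a multiple
    blocks = radar_dbz[:h2, :w2].reshape(h2 // factor, factor, w2 // factor, factor)
    return blocks.mean(axis=(1, 3))

def split_samples(n_samples, seed=0):
    """Randomly split sample indices 70/15/15 into train/validation/test."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_samples)
    n_tr, n_va = int(0.70 * n_samples), int(0.15 * n_samples)
    return idx[:n_tr], idx[n_tr:n_tr + n_va], idx[n_tr + n_va:]
```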

3.2. Network Architecture

As illustrated in Figure 2, the network architecture evolves from the U-net [49], which has already been widely used in biomedical image segmentation [50,51]. This study introduces a U-net-based regression model that consists of a contracting pyramid structure (left side) and an expansive inverted-pyramid structure (right side). The contracting structure follows the typical architecture of a convolutional network. It consists of the repeated application of two 3 × 3 convolutions (unpadded convolutions), each followed by a batch-normalization (BN) module and a rectified linear unit (ReLU) activation function, and a 2 × 2 max pooling operation with a stride of 2 for down-sampling. At each down-sampling step, the number of feature channels is doubled. Every step in the expansive architecture consists of an up-sampling of the feature map followed by a 2 × 2 convolution (an "up-convolution") that halves the number of feature channels, a concatenation with the correspondingly cropped feature map from the contracting structure, and two 3 × 3 convolutions, each followed by a BN and a ReLU. At the final layer, a 1 × 1 convolution followed by a ReLU maps each 16-component feature vector to the desired regression map. In total, the network has 23 independent convolutional layers and 1,765,440 free parameters. Note that the input sizes are 800 × 1280 × 3 for the infrared model (model I) and 800 × 1280 × 2 for the visible model (model II) in this investigation.
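The building blocks described above can be sketched in PyTorch as follows. This is a reduced, two-level illustration, not the paper’s 23-layer network: padded convolutions are used here instead of the unpadded convolutions with cropped skip connections, and the channel widths are chosen only to echo the 16-component final feature vector.

```python
import torch
import torch.nn as nn

class DoubleConv(nn.Module):
    """Two 3x3 convolutions, each followed by BatchNorm and ReLU
    (padded here for simplicity; the paper uses unpadded convolutions
    with cropped skip connections)."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 3, padding=1),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, 3, padding=1),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
        )
    def forward(self, x):
        return self.block(x)

class MiniUNetRegressor(nn.Module):
    """A two-level U-net regression sketch: contract, expand with a
    skip connection, then a 1x1 convolution + ReLU to the RCRF map."""
    def __init__(self, in_ch=3):
        super().__init__()
        self.enc1 = DoubleConv(in_ch, 16)
        self.pool = nn.MaxPool2d(2)
        self.enc2 = DoubleConv(16, 32)      # channels double on the way down
        self.up = nn.ConvTranspose2d(32, 16, 2, stride=2)  # "up-convolution"
        self.dec1 = DoubleConv(32, 16)      # 16 (skip) + 16 (up) channels in
        self.head = nn.Sequential(nn.Conv2d(16, 1, 1), nn.ReLU())
    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        d1 = self.dec1(torch.cat([self.up(e2), e1], dim=1))
        return self.head(d1)
```

A 3-channel input (as in the infrared model) of size H × W yields a single-channel, non-negative H × W regression map.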

3.3. Model Training

The network is trained on the matched Fengyun-4A/AGRI data and RCRF maps using the Adaptive Moment Estimation (Adam) optimizer in PyTorch. To minimize overhead and make the best use of GPU memory, we favor large input tiles over a large batch size and reduce the batch to a single image. Accordingly, we use a high momentum (0.99), so that a large number of previously seen training samples determine the update in the current optimization step. The learning rate is set to 0.00016. In this training model, the loss function is a pixel-wise root-mean-square error (RMSE, Equation (2)); note that only pixels covered by radar echoes are used to compute the loss, as follows.
$$\mathrm{RMSE} = \sqrt{\frac{1}{N}\sum_{i=1}^{N}\left(y_{i} - y_{\mathrm{pred},i}\right)^{2}} \quad (2)$$
We also introduce the pixel-wise mean absolute error (MAE) and the R-square ($R^{2}$), Equations (3) and (4), as reference indicators for evaluating the trained model.
$$\mathrm{MAE} = \frac{1}{N}\sum_{i=1}^{N}\left|y_{i} - y_{\mathrm{pred},i}\right| \quad (3)$$
$$R^{2} = 1 - \frac{\sum_{i=1}^{N}\left(y_{i} - y_{\mathrm{pred},i}\right)^{2}}{\sum_{i=1}^{N}\left(y_{i} - \bar{y}\right)^{2}} \quad (4)$$
where $y_{i}$ and $y_{\mathrm{pred},i}$ are the pixel values of the true data and the retrieved RCRF maps, respectively, and $\bar{y}$ is the mean of the true data. To prevent overfitting, training is stopped as soon as the validation loss no longer decreases or begins to rise.
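The masked evaluation in Equations (2)-(4) can be written compactly; this NumPy sketch (the function name is illustrative) computes all three scores only over the radar-covered pixels selected by a boolean mask.

```python
import numpy as np

def masked_scores(y_true, y_pred, mask):
    """RMSE, MAE and R-squared (Equations (2)-(4)) over radar-covered
    pixels only, where mask is True for pixels with radar echoes."""
    yt, yp = y_true[mask], y_pred[mask]
    err = yt - yp
    rmse = np.sqrt(np.mean(err ** 2))
    mae = np.mean(np.abs(err))
    r2 = 1.0 - np.sum(err ** 2) / np.sum((yt - yt.mean()) ** 2)
    return rmse, mae, r2
```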

4. Validation and Discussions

4.1. Case Studies

Figure 3 shows developing severe convective events covering a large area of China (70°E–135°E, 16°N–55°N) at 08:00 UTC on June 5, 2020. The light blue areas are the regions covered by radar echoes. Obviously, the blind areas of the radar network significantly limit the real-time monitoring of fast-developing severe weather systems, which usually propagate from their source regions downstream. In particular, severe weather systems frequently occur in areas without radar coverage, such as parts of the Tibetan Plateau, Xinjiang, Inner Mongolia, Northeast China, and coastal areas. As shown in Figure 3, a typical severe convective weather system with heavy precipitation occurred in the region south of the Yangtze River and in South China. In the overlapping region, the retrieved radar reflectivity factors match the real radar maps well. Outside the radar coverage, the retrieved radar reflectivity factors also agree well with the precipitation distribution from the GPM IMERG data. Although no matched samples from blind areas (such as eastern Japan, the East China Sea, the Tibetan Plateau and areas north of 50°N) were available for training, the deep learning model can still retrieve radar reflectivity factors there with a quality comparable to that in areas covered by radar echoes. These results further demonstrate that the deep learning method can retrieve radar reflectivity factors even in regions without matched radar samples. The radar reflectivity factors retrieved by the infrared and visible models are largely consistent, but they differ in the details, especially in the severe convective precipitation area (centered at 30°N, 130°E), where the visible model reproduces the details of the convective precipitation more distinctly than the infrared model.
In addition to convective systems, tropical cyclones are among the most destructive natural disasters [52]. Tropical cyclones originate and develop over ocean areas, which cannot be observed by ground-based radars. Figure 4 shows the case of Super Typhoon Haishen at 02:30 UTC on September 6, 2020, when the spiral cloud systems around the typhoon were affecting Japan and South Korea with heavy precipitation. The eye region of the typhoon can be clearly seen in the RCRF maps retrieved by both the infrared and visible models. In addition, the radar reflectivity on the right side of the typhoon is greater than that on its left side. Note that the RCRF maps retrieved by the infrared model agree better with the precipitation distribution than those retrieved by the visible model, whereas the visible model reveals more details of the cloud systems of Super Typhoon Haishen. This is primarily because the finer texture of the cloud systems is clearly resolved by the visible band with its higher spatial resolution. By capturing the dynamic and intensity characteristics of a typhoon over the ocean, the RCRF maps, with their high spatial (4 km) and temporal (5–15 min) resolution, would be helpful for diagnosing and forecasting typhoon intensity and track.
The Northeast China cold vortex is a common disastrous weather system in the middle and high latitudes of East Asia [53]. Owing to the disasters induced by rainstorms, thunderstorms, and tornadoes in its mesoscale vortex rain belts, Northeast China often suffers great damage and casualties. However, neither Inner Mongolia nor the northeastern provinces of China are fully covered by radar observations. Due to the sparse distribution of radar data in Northeast China, the radar mosaics in this area are somewhat problematic; however, these account for only a small fraction of the training samples and thus do not affect the overall training. Figure 5 shows a typical Northeast China cold vortex that occurred on September 16, 2020. Under the influence of the cold vortex, moderate rain, locally heavy rain, and even rainstorms occurred in the eastern and northeastern parts of Inner Mongolia. Figure 5 clearly shows the rain belt and intensity distribution in the RCRF maps retrieved by the visible model. In contrast, the value range of the RCRF maps retrieved by the infrared model is narrower: the infrared results are closer to the actual observations, while the visible results are higher than the observations. Although the radar mosaic is sparse in this area, the retrieved RCRF maps are consistent with the precipitation distribution, which further indicates the robustness of the deep learning algorithm.
Figure 6 presents a time series of developing convective events monitored by the retrieved RCRF maps. This rapidly developing convective system was generated at 04:30 UTC on July 8, 2020, persisted for two and a half hours, and then entered its mature and stable stage. The retrieved RCRF (especially by the visible model) and precipitation maps at 04:30 UTC show that convective target-1 (marked by the red circles) had already developed to a mature stage, although it corresponded to only 2–3 pixels on the radar maps at that moment. Similar phenomena are observed for convective target-2, target-3, and target-4. Therefore, the retrieved RCRF maps in Figure 6 agree very well with the structure of the precipitation fields throughout the lifespan of this convective system. In particular, the RCRF maps retrieved by the visible model are more sensitive to initial convection, providing a longer lead time.

4.2. Statistical Results

In this section, we present the validation results of the RCRF maps retrieved by the visible and infrared models over land and ocean, respectively. As mentioned in Section 3.1, 15% of the labeled datasets are utilized for model validation. As shown in Figure 7, the RCRF maps retrieved by both the visible and infrared models over land and ocean are validated against the ground-based radar data. It is not surprising that most samples concentrate around 0 dBZ. Generally, the retrieved RCRF maps show good consistency with the radar observations over both ocean and land, particularly for the visible model.
Figure 8 presents the statistical indicators for the retrieved RCRF maps over two different regions (land and ocean) from May to October 2020. The RCRF maps retrieved by the visible model exhibit the best performance over the ocean, with an MAE of 0.3–0.62 dBZ and an RMSE of 1.6–2.3 dBZ. In contrast, the results from the infrared model over the ocean have an MAE of 0.38–0.7 dBZ and a higher RMSE of 1.8–2.5 dBZ, while the results from the infrared model over land perform the worst, with an MAE of 0.4–0.8 dBZ and an RMSE of 1.7–2.8 dBZ; the worst case occurred in July, with an MAE of 0.8 dBZ and an RMSE of 2.8 dBZ. Overall, the accuracy of the retrieved data is higher over the ocean than over the land in May–July and September, whereas the situation is the opposite in August, and the accuracies over ocean and land are almost equal in October. One reason is that summer is the peak season for the occurrence and development of severe convection in China. Because convective systems develop rapidly, the uncertainty of the spatio-temporal matching increases, leading to larger relative errors in the summer samples. The mechanisms, duration, and spatio-temporal scales of summer precipitation are extremely complicated. In addition, the topography is more complex over land, where summer convective systems are mainly triggered by the unstable environment caused by strong surface heating, while the situation is the opposite over the ocean. The complex topography of the land surface, in contrast to the more homogeneous ocean surface, contributes to the differences between land and ocean.

4.3. Validation Based on Precipitation Observations

To validate the retrieved RCRF results against rain/snow measurements, this study also draws on previous research on precipitation retrieval and surface observations [54,55]. The precipitation rate maps are obtained from the retrieved RCRF maps by using the standard Z–R relationship (Equation (5)).
$$Z = aR^{b} \quad (5)$$
where a = 700 and b = 1.38. The GPM IMERG datasets are the validating source.
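Inverting Equation (5) gives R = (Z/a)^(1/b), with the reflectivity first converted from dBZ to linear units via Z = 10^(dBZ/10). A minimal sketch (the function name is an assumption):

```python
import numpy as np

def dbz_to_rain_rate(dbz, a=700.0, b=1.38):
    """Invert the Z-R relationship Z = a * R**b (Equation (5)).
    dbz is converted to linear reflectivity Z (mm^6 m^-3) first;
    returns the rain rate R in mm/h."""
    z_linear = 10.0 ** (np.asarray(dbz) / 10.0)
    return (z_linear / a) ** (1.0 / b)
```

With a = 700, a reflectivity of 10·log10(700) ≈ 28.5 dBZ corresponds to a rain rate of exactly 1 mm/h.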
Figure 9 shows a heavy precipitation process in Hubei, Jiangxi, Anhui and Jiangsu provinces at 02:00 UTC on July 27, 2020, and the corresponding precipitation rate maps derived from the retrieved RCRF maps by using the standard Z–R relationship.
The original validating datasets are converted into precipitation rates by using the standard Z–R relationship, and the matched GPM IMERG datasets are selected as the validating source for the precipitation rate maps. Table 3 presents the statistical indicators for the precipitation rates retrieved from the RCRF maps by using the standard Z–R relationship from May to October 2020. As Table 3 shows, the precipitation rate maps retrieved by the visible model exhibit the better performance, with an MAE of 0.247–0.339 mm/h and an RMSE of 0.506–0.701 mm/h. In contrast, the results from the infrared model have an MAE of 0.259–0.452 mm/h and an RMSE of 0.543–0.991 mm/h. Both the visible and infrared models exhibit their worst performance in August.

5. Conclusions

For severe weather monitoring and nowcasting, this study develops a unified retrieval algorithm for quantitatively estimating RCRF maps from China’s new-generation GEO meteorological satellite Fengyun-4A/AGRI. A novel deep learning model, a U-net regression network, is used to estimate near-real-time RCRF maps. This algorithm differs markedly from statistical regression methods; its key advantage is the powerful ability to capture non-linear associations between predictors and predictands with multi-scale structural characteristics. This study also proposes a comparison test between the visible and infrared models for sensitivity analysis. The results show that the visible model exhibits the best performance over the ocean, with an MAE of 0.3–0.62 dBZ and an RMSE of 1.6–2.3 dBZ. In contrast, the results from the infrared model over the ocean have an MAE of 0.38–0.7 dBZ and a higher RMSE of 1.8–2.5 dBZ, and the worst results come from the infrared model over land, with an MAE of 0.4–0.8 dBZ and an RMSE of 1.7–2.8 dBZ. In addition to its advantage in spatial resolution, another reason for the visible model’s edge over the infrared model may be that the cloud optical thickness in the visible channel directly reflects the development of clouds and convection.
Moreover, three typical cases, including a cluster of rapidly developing convective systems, a Northeast China cold vortex and Super Typhoon Haishen, are analyzed and studied. The comparisons reveal that the proxy radar reflectivity obtained by the conversion of Fengyun-4A/AGRI data can capture strong precipitation signals in areas with insufficient radar coverage. The RCRF maps retrieved by the visible model are more sensitive to initial convection, providing a longer lead time.
As of this writing, the RCRF map-retrieval algorithm is being tested on the Fengyun GEO Algorithm Test-bed [56] and will be integrated into the operational Fengyun-4 products system. Ongoing improvements to the RCRF retrieval algorithm will include the combination of both weather radar reflectivity data and precipitation data. In the future, Fengyun-4B will add a rapid-scan operation with a 1-min scanning mode and a 1-km spatial resolution. It is thus believed that many potential errors will be greatly reduced by using the higher spatio-temporal resolution data from the newer GEO satellite instruments.

Author Contributions

Conceptualization, F.S. and D.Q.; methodology, F.S. and D.Q.; software, F.S.; validation, F.S. and B.L.; formal analysis, F.S., M.M. and D.Q.; investigation, F.S., M.M. and D.Q.; resources, F.S. and D.Q.; data curation, F.S.; writing—original draft preparation, F.S.; writing—review and editing, D.Q. and M.M.; visualization, F.S.; supervision, D.Q.; project administration, F.S. and D.Q.; funding acquisition, F.S. and D.Q. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Key R&D Program of China under Grant 2017YFB0502803, the Natural Science Foundation of China under Grant 41975031, and the Guangdong Province Key Laboratory for Climate Change and Natural Disaster Studies (Grant 2020B1212060025).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data presented in this study are available on request from the corresponding author.

Acknowledgments

We appreciate the ground-based radar reflectivity data generously shared by the Chinese National Meteorological Information Center. We would also like to thank the anonymous reviewers for their thoughtful and constructive suggestions and comments.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Ivers, L.C.; Ryan, E.T. Infectious diseases of severe weather-related and flood-related natural disasters. Curr. Opin. Infect. Dis. 2006, 19, 408–414. [Google Scholar] [CrossRef]
  2. Roy, S.S.; Lakshmanan, V.; Bhowmik, S.K.R.; Thampi, S.B. Doppler weather radar based nowcasting of cyclone Ogni. J. Earth Syst. Sci. 2010, 119, 183–199. [Google Scholar] [CrossRef] [Green Version]
  3. Bech, J.; Berenguer, M. Predictability of heavy sub-hourly precipitation amounts for a weather radar based nowcasting system. Brain Res. 2007, 1153, 84–91. [Google Scholar]
  4. Liechti, K.; Panziera, L.; Germann, U.; Zappa, M. Flash-flood early warning using weather radar data: From nowcasting to forecasting. Hydrol. Earth Syst. Sci. Discuss. 2013, 10, 8490. [Google Scholar]
  5. Norin, L. A quantitative analysis of the impact of wind turbines on operational Doppler weather radar data. Atmos. Meas. Tech. 2015, 8, 593–609. [Google Scholar] [CrossRef] [Green Version]
  6. Sheng, C.; Gao, S.; Xue, M. Short-range prediction of a heavy precipitation event by assimilating Chinese CINRAD-SA radar reflectivity data using complex cloud analysis. Meteorol. Atmos. Phys. 2006, 94, 167–183. [Google Scholar] [CrossRef]
  7. Chu, Z.; Yin, Y.; Gu, S. Characteristics of velocity ambiguity for CINRAD-SA doppler weather radars. Asia Pac. J. Atmos. Sci. 2014, 50, 221–227. [Google Scholar] [CrossRef]
  8. Wilson, J.W.; Schreiber, W.E. Initiation of convective storms at radar-observed boundary-layer convergence lines. Mon. Weather Rev. 1986, 114, 2516–2536. [Google Scholar] [CrossRef] [Green Version]
  9. Mecikalski, J.R.; Bedka, K.M. Forecasting convective initiation by monitoring the evolution of moving cumulus in daytime GOES imagery. Mon. Weather Rev. 2006, 134, 49–78. [Google Scholar] [CrossRef] [Green Version]
  10. Walker, J.R.; Mackenzie, W.M., Jr.; Mecikalski, J.R.; Jewett, C.P. An enhanced geostationary satellite-based convective initiation algorithm for 0-2-h nowcasting with object tracking. J. Appl. Meteorol. Climatol. 2012, 51, 1931–1949. [Google Scholar] [CrossRef]
  11. Sokol, Z. Assimilation of extrapolated radar reflectivity into a NWP model and its impact on a precipitation forecast at high resolution. Atmos. Res. 2011, 100, 201–212. [Google Scholar] [CrossRef]
  12. Sokol, Z.; Zacharov, P. Nowcasting of precipitation by an NWP model using assimilation of extrapolated radar reflectivity. Q. J. Roy. Meteor. Soc. 2012, 138, 1072–1082. [Google Scholar] [CrossRef]
  13. Wang, H.; Sun, J.; Fan, S.; Huang, X.Y. Indirect assimilation of radar reflectivity with WRF 3D-var and its impact on prediction of four summertime convective events. J. Appl. Meteorol. Climatol. 2013, 52, 889–902. [Google Scholar] [CrossRef]
  14. Wang, H.; Sun, J.; Guo, Y.R. Radar reflectivity assimilation with the four-dimensional variational system of the Weather Research and Forecast model. J. Environ. Hydrol. 2011, 46, 289–298. [Google Scholar]
  15. Aumont, O.C.; Ducrocq, V.; Wattrelot, E.; Jaubert, G.; Pradier-Vabre, S. 1D+3DVar assimilation of radar reflectivity data: A proof of concept. Tellus Ser. A 2010, 62, 173–187. [Google Scholar] [CrossRef]
  16. Sokol, Z.; Rezacova, D. Assimilation of radar reflectivity into the LM COSMO model with a high horizontal resolution. Meteorol. Appl. 2010, 13, 317–330. [Google Scholar] [CrossRef]
  17. Yang, J.; Zhang, Z.; Wei, C.; Lu, F.; Guo, Q. Introducing the new generation of Chinese geostationary weather satellites—FengYun 4 (FY-4). Bull. Am. Meteorol. Soc. 2016. [Google Scholar] [CrossRef]
  18. Bessho, K.; Date, K.; Hayashi, M.; Ikeda, A.; Yoshida, R. An introduction to Himawari-8/9—Japan’s new-generation geostationary meteorological satellites. J. Meteorol. Soc. Jpn. 2016, 94, 151–183. [Google Scholar] [CrossRef] [Green Version]
  19. Schmit, T.J.; Gunshor, M.M.; Menzel, W.P.; Gurka, J.J.; Bachmeier, A.S. Introducing the next-generation advanced baseline imager on GOES-R. Bull. Am. Meteorol. Soc. 2005, 86, 1079–1096. [Google Scholar] [CrossRef]
  20. Mecikalski, J.R.; Rosenfeld, D.; Manzato, A. Evaluation of geostationary satellite observations and the development of a 1–2 h prediction model for future storm intensity. J. Geophys. Res. Atmos. 2016, 121, 6374–6392. [Google Scholar] [CrossRef]
  21. Horinouchi, T.; Shimada, U.; Wada, A. Convective bursts with gravity waves in tropical cyclones: Case study with the Himawari-8 satellite and idealized numerical study. Geophys. Res. Lett. 2020, 47, 47. [Google Scholar] [CrossRef] [Green Version]
  22. Chen, D.; Guo, J.; Yao, D.; Feng, Z.; Lin, Y. Elucidating the life cycle of warm-season mesoscale convective systems in eastern China from the Himawari-8 geostationary satellite. Remote Sens. 2020, 12, 2307. [Google Scholar] [CrossRef]
  23. Sawada, Y.; Okamoto, K.; Kunii, M.; Miyoshi, T. Assimilating every-10-minute Himawari-8 infrared radiances to improve convective predictability. J. Geophys. Res. Atmos. 2019. [Google Scholar] [CrossRef]
  24. Sun, F.; Qin, D.; Min, M.; Li, B.; Wang, F. Convective initiation nowcasting over China from Fengyun-4A measurements based on TV-L1 optical flow and BP_adaboost neural network algorithms. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2020, 12, 4284–4296. [Google Scholar] [CrossRef]
  25. Zhuge, X.; Zou, X. Summertime convective initiation nowcasting over southeastern China based on advanced Himawari imager observations. J. Meteorol. Soc. Jpn. 2018, 96, 337–353. [Google Scholar] [CrossRef] [Green Version]
  26. Han, D.; Lee, J.; Im, J.; Sim, S.; Han, H. A novel framework of detecting convective initiation combining automated sampling, machine learning, and repeated model tuning from geostationary satellite data. Remote Sens. 2019, 11, 1454. [Google Scholar] [CrossRef] [Green Version]
  27. Sun, F.; Min, M.; Qin, D.; Wang, F.; Hu, J. Refined typhoon geometric center derived from a high spatiotemporal resolution geostationary satellite imaging system. IEEE Geosci. Remote Sens. Lett. 2019, 16, 499–503. [Google Scholar] [CrossRef]
  28. Min, M.; Bai, C.; Guo, J.P.; Sun, F.L.; Liu, C.; Wang, F.; Xu, H.; Tang, S.H.; Li, B.; Di, D.; et al. Estimating summertime precipitation from Himawari-8 and global forecast system based on machine learning. IEEE Trans. Geosci. Remote Sens. 2019, 57, 2557–2570. [Google Scholar] [CrossRef]
  29. Kuligowski, R.J.; Li, Y.P.; Hao, Y.; Zhang, Y. Improvements to the GOES-R rainfall rate algorithm. J. Hydrometeorol. 2016, 17, 1693–1704. [Google Scholar] [CrossRef]
  30. Min, M.; Li, J.; Wang, F.; Liu, Z.J.; Menzel, W.P. Retrieval of cloud top properties from advanced geostationary satellite imager measurements based on machine learning algorithms. Remote Sens. Environ. 2020, 239, 111616. [Google Scholar] [CrossRef]
  31. Schmidhuber, J. Deep learning in neural networks: An overview. Neural Netw. 2015, 61, 85–117. [Google Scholar] [CrossRef] [Green Version]
  32. Goodfellow, I.; Bengio, Y.; Courville, A. Deep Learning; The MIT Press: Cambridge, MA, USA, 2016. [Google Scholar]
  33. Ayzel, G.; Scheffer, T.; Heistermann, M. RainNet v1.0: A convolutional neural network for radar-based precipitation nowcasting. Geosci. Model Dev. 2020, 13, 2631–2644. [Google Scholar] [CrossRef]
  34. Zhang, P.; Lei, Z.; Leung, H.; Wang, J. A deep-learning based precipitation forecasting approach using multiple environmental factors. In Proceedings of the IEEE International Congress on Big Data (BigData Congress), Honolulu, HI, USA, 25–30 June 2017. [Google Scholar]
  35. Han, P.; Wang, W.; Shi, Q.; Yang, J. Real-time short-term trajectory prediction based on GRU neural network. In Proceedings of the 38th Digital Avionics Systems Conference (DASC), IEEE/AIAA, San Diego, CA, USA, 8–12 September 2019. [Google Scholar]
  36. Tian, L.; Li, X.T.; Ye, Y.M.; Xie, P.F.; Li, Y. A generative adversarial gated recurrent unit model for precipitation nowcasting. IEEE Trans. Geosci. Remote Sens. 2020, 17, 601–605. [Google Scholar] [CrossRef]
  37. Sønderby, C.K.; Espeholt, L.; Heek, J.; Dehghani, M.; Oliver, A.; Salimans, T.; Agrawal, S.; Hickey, J.; Kalchbrenner, N. MetNet: A neural weather model for precipitation forecasting. arXiv 2020, arXiv:2003.12140. [Google Scholar]
  38. Shou, Y.X.; Lu, F.; Shou, S. High-resolution Fengyun-4 satellite measurements of dynamical tropopause structure and variability. Remote Sens. 2020, 12, 1600. [Google Scholar] [CrossRef]
  39. Steiner, M.; Smith, J.A.; Uijlenhoet, R. A microphysical interpretation of radar reflectivity-rain rate relationships. J. Atmos. Sci. 2004, 61, 1114–1131. [Google Scholar] [CrossRef]
  40. Matrosov, S.Y.; Heymsfield, A.J.; Wang, Z. Dual-frequency radar ratio of nonspherical atmospheric hydrometeors. Geophys. Res. Lett. 2005, 32, 32. [Google Scholar] [CrossRef] [Green Version]
  41. Letu, H.S.; Nagao, T.M.; Nakajima, T.Y.; Riedi, J.; Ishimoto, H.; Baran, A.J.; Shang, H.Z.; Sekiguchi, M.; Kikuchi, M. Ice cloud properties from Himawari-8/AHI next-generation geostationary satellite: Capability of the AHI to monitor the DC cloud generation process. IEEE Trans. Geosci. Remote Sens. 2019, 57, 3229–3239. [Google Scholar] [CrossRef]
  42. Letu, H.; Yang, K.; Nakajima, T.Y.; Ishimoto, H.; Shi, J. High-resolution retrieval of cloud microphysical properties and surface solar radiation using Himawari-8/AHI next-generation geostationary satellite. Remote Sens. Environ. 2020, 239, 1–16. [Google Scholar] [CrossRef]
  43. Zhao, L.; Zhao, C.; Wang, Y.; Wang, Y.; Yang, Y. Evaluation of cloud microphysical properties derived from MODIS and Himawari-8 using in situ aircraft measurements over the Southern Ocean. Earth Space Sci. 2020, 7. [Google Scholar] [CrossRef] [Green Version]
  44. Luo, C. Research on three-dimensional wind field structure characteristic of supercell hailstorm by dual- and triple-doppler radar retrieval. Acta Meteorol. Sin. 2017, 75, 757–770. [Google Scholar]
  45. Huffman, G.; Bolvin, D.; Braithwaite, D.; Hsu, K.; Joyce, R.; Kidd, C.; Nelkin, E.; Sorooshian, S.; Wang, J.; Xie, P. First results from the integrated multi-satellite retrievals for GPM (IMERG). In Proceedings of the EGU General Assembly Conference, Vienna, Austria, 12–17 April 2015. [Google Scholar]
  46. Sharifi, E.; Steinacker, R.; Saghafian, B. Assessment of GPM-IMERG and other precipitation products against gauge data under different topographic and climatic conditions in Iran: Preliminary results. Remote Sens. 2016, 8, 135. [Google Scholar] [CrossRef] [Green Version]
  47. Huffman, G.; Bolvin, D.; Nelkin, E.; Kidd, C. Improving user access to the integrated multi-satellite retrievals for GPM (IMERG) products. In Proceedings of the EGU General Assembly Conference, Vienna, Austria, 12–17 April 2016. [Google Scholar]
  48. Wang, X.; Min, M.; Wang, F.; Guo, J.; Li, B.; Tang, S. Intercomparisons of cloud mask products among Fengyun-4A, Himawari-8, and MODIS. IEEE Trans. Geosci. Remote Sens. 2019, 57, 8827–8839. [Google Scholar] [CrossRef]
  49. Ronneberger, O.; Fischer, P.; Brox, T. U-Net: Convolutional Networks for Biomedical Image Segmentation; Springer: Cham, Switzerland, 2015. [Google Scholar]
  50. Çiçek, Ö.; Abdulkadir, A.; Lienkamp, S.S.; Brox, T.; Ronneberger, O. 3D U-Net: Learning Dense Volumetric Segmentation from Sparse Annotation; Springer: Cham, Switzerland, 2016. [Google Scholar]
  51. Norman, B.; Pedoia, V.; Majumdar, S. Use of 2D U-net convolutional neural networks for automated cartilage and meniscus segmentation of knee MR imaging data to determine relaxometry and morphometry. Radiology 2018, 288, 177–185. [Google Scholar] [CrossRef] [Green Version]
  52. Simpson, R.; Saffir, H. Tropical cyclone destructive potential by integrated kinetic energy. Bull. Am. Meteorol. Soc. 2008, 89, 219. [Google Scholar]
  53. Zhen, Z.; Lei, H. Observed microphysical structure of nimbostratus in northeast cold vortex over China. Atmos. Res. 2014, 142, 91–99. [Google Scholar]
  54. Gultepe, I.; Heymsfield, A.J.; Field, P.R.; Axisa, D. Chapter 6 ice phase precipitation. Meteorol. Monogr. 2017, 58. [Google Scholar] [CrossRef]
  55. Gultepe, I.; Sharman, R.; Williams, P.D.; Zhou, B.; Thobois, L. A review of high impact weather for aviation meteorology. Pure Appl. Geophys. 2019, 176, 1869–1921. [Google Scholar] [CrossRef]
  56. Min, M.; Wu, C.Q.; Li, C.; Liu, H.; Xu, N.; Wu, X.; Chen, L.; Wang, F.; Sun, F.L.; Qin, D.Y.; et al. Developing the science product algorithm testbed for Chinese next-generation geostationary meteorological satellites: Fengyun-4 series. J. Meteorol. Res. 2017, 31, 708–719. [Google Scholar] [CrossRef]
Figure 1. Flowchart of the data preprocessing and the training module (left) and the RCRF map validating module after training (right) of the deep learning model.
Figure 2. A U-net regression architecture (example of 50 × 80 pixels in the lowest resolution). Each colored box corresponds to a multi-channel feature map and the numbers of channels for the feature maps are denoted at the bottom of the box.
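The "50 × 80 pixels in the lowest resolution" of Figure 2 follows from repeated 2 × 2 downsampling of the input grid. The sketch below walks the encoder spatial sizes; the 400 × 640 input and the depth of three poolings are our assumptions chosen to match the stated bottleneck, not figures from the paper:

```python
def unet_level_sizes(height, width, depth):
    """Spatial size of the feature maps at each encoder level of a U-net
    with `depth` 2x2 max-poolings (the decoder mirrors these sizes)."""
    sizes = [(height, width)]
    for _ in range(depth):
        height, width = height // 2, width // 2   # each pooling halves both dimensions
        sizes.append((height, width))
    return sizes

# With a hypothetical 400 x 640 input and three poolings, the bottleneck
# is 50 x 80, matching the lowest-resolution feature map of Figure 2.
```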
Figure 3. (a) RCRF maps (dBZ) retrieved from infrared channels, (b) visible channels and (c) ground-based radars, and (d) the corresponding precipitation rate from the GPM IMERG data (mm∙h−1) at 08:00 UTC on June 5, 2020. The light blue areas are the regions covered by radar echoes.
Figure 4. A typical case of Super Typhoon Haishen at 03:00 UTC on September 6, 2020. (a) RCRF maps (dBZ) retrieved from infrared channels, (b) visible channels and (c) ground-based radars, and (d) the corresponding precipitation rate from the GPM IMERG data (mm∙h−1). The light blue areas are the regions covered by radar echoes.
Figure 5. A typical case of a Northeast China cold vortex that occurred on September 16, 2020. (a) RCRF maps (dBZ) retrieved from infrared channels, (b) visible channels and (c) ground-based radars, and (d) the corresponding precipitation rate from the GPM IMERG data (mm∙h−1). The light blue areas are the regions covered by radar echoes.
Figure 6. The occurrence and development for the case of a convective system occurring over North China from 04:30 UTC to 08:00 UTC on July 8, 2020. The subplots at each row of the panel are RCRF maps (dBZ) retrieved from (a) infrared channels, (b) visible channels, and (c) ground-based radar, and (d) the corresponding precipitation rate from the GPM IMERG data (mm∙h−1). The red circles stand for target-1, the pink circles for target-2, the blue circles for target-3 and the black circles for target-4. The light blue areas are the regions covered by radar echoes.
Figure 7. Comparisons of the RCRF maps between ground-based radars and the deep learning-based retrieval models. The color bar represents the occurrence frequency (in logarithmic scale) for the retrieved RCRF maps. (a) The infrared model over the land, (b) the visible model over the land, (c) the infrared model over the ocean, (d) the visible model over the ocean.
Figure 8. Validations of the RCRF obtained by the U-net regression-based retrieval algorithm, for land and ocean regions during May–October, respectively. (a) Pixel-wise RS, (b) pixel-wise MAE, (c) pixel-wise RMSE.
Figure 9. Precipitation rate (mm/h) maps at 02:00 UTC on July 27, 2020, including (a) PR retrieved from RCRF maps using infrared channels, (b) PR retrieved from RCRF maps using visible/near-infrared channels, (c) PR from radar RCRF maps, and (d) PR from GPM IMERG data.
Table 1. Fengyun-4A/AGRI interest fields for models retrieving the radar reflectivity factor.
No. | Interest Field | Physical Basis | Model Index
1 | BT 10.8 μm | Cloud-top temperature assessment | Model I
2 | BTD 10.8–6.2 μm | Cloud-top height relative to tropopause | Model I
3 | BTD 12.3+8.6−2×10.8 μm | Cloud-top glaciation/phase | Model I
4 | Modified albedo 0.65 μm | Cloud optical thickness | Model II
5 | Albedo ratio 0.65/1.61 μm | Cloud-top glaciation/phase | Model II
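The infrared interest fields of Table 1 are simple channel arithmetic on AGRI brightness temperatures. A minimal sketch for the Model I fields (the function name, dictionary keys, and example inputs are ours, not from the paper's code):

```python
def infrared_interest_fields(bt062, bt086, bt108, bt123):
    """Compute the Model I interest fields of Table 1 from AGRI
    brightness temperatures (K) at 6.2, 8.6, 10.8 and 12.3 um."""
    return {
        "bt_10.8": bt108,                                     # cloud-top temperature
        "btd_10.8-6.2": bt108 - bt062,                        # height relative to tropopause
        "tri_12.3+8.6-2x10.8": bt123 + bt086 - 2.0 * bt108,   # glaciation/phase (tri-spectral)
    }
```

The Model II fields are analogous ratios and scalings of the 0.65 μm and 1.61 μm albedos.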
Table 2. Temporal distribution of the samples for model training and validation.
Month | Model I Training | Model I Validation | Model II Training | Model II Validation
May | 4284 | 918 | 1927 | 412
Jun | 4183 | 896 | 1882 | 403
Jul | 4257 | 912 | 1916 | 410
Aug | 4273 | 915 | 1923 | 412
Sept | 4194 | 898 | 1887 | 404
Oct | 4217 | 903 | 1898 | 406
Table 3. Validations on precipitation rate from retrieved RCRF maps by using standard Z-R relationship.
Month | Model I MAE (mm/h) | Model I RMSE (mm/h) | Model II MAE (mm/h) | Model II RMSE (mm/h)
May | 0.259 | 0.543 | 0.251 | 0.552
Jun | 0.329 | 0.759 | 0.247 | 0.506
Jul | 0.385 | 0.776 | 0.274 | 0.524
Aug | 0.452 | 0.991 | 0.339 | 0.701
Sept | 0.394 | 0.813 | 0.265 | 0.517
Oct | 0.279 | 0.594 | 0.297 | 0.637
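The rain rates validated in Table 3 come from a standard Z-R relationship applied to the retrieved RCRF maps. The caption does not give the coefficients, so the sketch below uses the classic Marshall-Palmer form Z = 200 R^1.6 purely as an illustrative assumption:

```python
def dbz_to_rain_rate(dbz, a=200.0, b=1.6):
    """Invert a Z-R power law Z = a * R**b to get rain rate R (mm/h)
    from reflectivity in dBZ. The defaults are the Marshall-Palmer
    coefficients, an assumption; the paper's exact Z-R coefficients
    are not stated in the table caption."""
    z_linear = 10.0 ** (dbz / 10.0)   # dBZ -> linear Z (mm^6 m^-3)
    return (z_linear / a) ** (1.0 / b)
```

Under these coefficients, for example, 40 dBZ corresponds to roughly 11.5 mm/h.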
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
