Article

Estimation of Hourly Rainfall during Typhoons Using Radar Mosaic-Based Convolutional Neural Networks

Department of Marine Environmental Informatics & Center of Excellence for Ocean Engineering, National Taiwan Ocean University, Keelung 20224, Taiwan
*
Author to whom correspondence should be addressed.
Remote Sens. 2020, 12(5), 896; https://doi.org/10.3390/rs12050896
Submission received: 7 February 2020 / Revised: 3 March 2020 / Accepted: 9 March 2020 / Published: 10 March 2020
(This article belongs to the Special Issue Deep Neural Networks for Remote Sensing Applications)

Abstract

Taiwan is located at the junction of the tropical and subtropical climate zones, adjacent to the Eurasian continent and the Pacific Ocean. The island frequently experiences typhoons that engender severe natural disasters and damage. Therefore, efficiently estimating typhoon rainfall in Taiwan is essential. This study examined the efficacy of typhoon rainfall estimation. Radar images released by the Central Weather Bureau were used to estimate instantaneous rainfall. Additionally, two proposed neural network-based architectures, namely a radar mosaic-based convolutional neural network (RMCNN) and a radar mosaic-based multilayer perceptron (RMMLP), were used to estimate typhoon rainfall, and the commonly applied Marshall–Palmer Z-R relationship (Z-R_MP) and a Z-R relationship reformulated at each site (Z-R_station) were adopted to construct benchmark models. Monitoring stations in Hualien, Sun Moon Lake, and Taichung were selected as the experimental stations in Eastern, Central, and Western Taiwan, respectively. This study compared the performance of the models in estimating rainfall at the three stations, and the results are outlined as follows: at the Hualien station, the estimations of the RMCNN, RMMLP, Z-R_MP, and Z-R_station models closely matched the observed rainfall, and all models captured the increase toward peak rainfall on the hyetographs, although the peak values were underestimated. At the Sun Moon Lake and Taichung stations, however, the estimations of the four models were considerably inconsistent with the observations in terms of overall rainfall rates, peak rainfall, and peak rainfall arrival times on the hyetographs. The relative root mean squared error for the overall rainfall rates of all stations was smallest when computed using RMCNN (0.713), followed by those computed using RMMLP (0.848), Z-R_MP (1.030), and Z-R_station (1.392). Moreover, RMCNN yielded the smallest relative error for peak rainfall (0.316), followed by RMMLP (0.379), Z-R_MP (0.402), and Z-R_station (0.688). RMCNN also yielded the smallest absolute time error for the peak rainfall arrival time (1.507 h), followed by RMMLP (2.673 h), Z-R_MP (2.917 h), and Z-R_station (3.250 h). The results revealed that the RMCNN model in combination with radar images could efficiently estimate typhoon rainfall.


1. Introduction

Taiwan is situated at the junction of the tropical and subtropical climate zones, bordering the Eurasian continent and the Pacific Ocean. Natural disasters occur frequently each year because of typhoons, whose strong winds and rainfall often lead to severe disasters. In addition, the northwest Pacific Ocean is commonly affected by typhoons; approximately four typhoons strike Taiwan every year and cause severe damage [1,2]. For example, typhoon Megi, a typical moderate-strength typhoon, made landfall on the east coast of Taiwan in 2016. The strong winds and heavy rain caused severe damage; buildings were buried by a mudslide along a river, and 650,000 households experienced power outages, which resulted in US$100 million in economic losses [3]. Improving quantitative precipitation estimation (QPE) for tropical storms is crucial for disaster mitigation [4,5].
Taiwan’s Central Mountain Range (CMR) runs from the north to the south of the island, dividing it into eastern and western parts. The CMR is situated in the east and is 340 km long and 80 km wide, with an average elevation of approximately 2500 m [6]. When a typhoon strikes Taiwan from the east, it exerts its most severe effects on the eastern regions, which sustain considerable damage. By contrast, the western regions sustain less damage because the typhoon structure is often disrupted and weakened by the CMR. Notably, the distribution of major rainfall during typhoons statistically resembles the contours of terrain height, a pattern often referred to as the “phase-locking” mechanism [7]. The phase-locking mechanism can provide a basic explanation of the topographic rainfall in the CMR associated with typhoons impinging from different directions [8]. Several researchers have discussed the effects of this mechanism [9,10,11,12,13] and have revealed that when typhoons approach Taiwan from the east, before passing through the CMR, their routes are more easily estimated, and early warnings of the associated wind and rain can be issued. By contrast, after typhoons pass through the CMR, their routes, the arrival times of wind and rain, and delays in rainfall become unpredictable [14,15]. Therefore, efficiently estimating typhoon rainfall is crucial for Taiwan.
Radar is commonly used for both weather surveillance and research to help meteorologists understand the dynamics and microphysical processes of atmospheric phenomena [16]. Radar reflectivity is a measure of the fraction of electromagnetic waves reflected by precipitation particles (e.g., rain, snow, or hail). Different colors are used to create radar images according to the intensity of the signal reflected by precipitation particles [3]. The intensity of reflection is associated with the size, shape, state of matter, and number of particles within a unit of precipitation. Stronger reflected signals generally indicate heavier precipitation. Therefore, radar images can be used to examine the intensity of precipitation and the distribution of weather systems [17]. Currently, the Central Weather Bureau (CWB) of Taiwan operates four S-band (10-cm wavelength) weather surveillance radar (WSR) systems; this type of radar is also referred to as WSR-88D (Weather Surveillance Radar—1988 Doppler). Of the four WSR systems, three are located in Hualien in Eastern Taiwan, Kenting in southern Taiwan, and Cigu in Western Taiwan (locations displayed in Figure 1); these three systems use Doppler radars. The remaining system is located in Wufenshan in northern Taiwan and uses a dual-polarized radar. The scan range of the four systems covers Taiwan and its adjacent bodies of water (i.e., the Pacific Ocean, East China Sea, Taiwan Strait, and Luzon Strait) and forms a WSR network.
Numerous operational nowcasting systems use the relationship between static radar reflectivity (Z) and the rain rate (R), such as the commonly used Marshall–Palmer [18] formula of Z = 200R^1.6, to convert radar echo into rainfall estimates. Radar- and gauge-based rainfall estimates are typically consistent, particularly when the rainfall intensity is high [19,20,21,22,23,24,25]. Researchers have studied the QPE of tropical cyclones. For example, Ku and Yoo [26] analyzed the rainfall engendered by typhoon Nakri by using radar and rain gauge data. Libertino et al. [27] developed a quasi-real-time procedure to optimize the radar estimation of rainfall rates. Moreover, Tang and Matyas [28] developed a nowcasting model for tropical cyclone precipitation based on rain retrieval for a Doppler radar. Chen et al. [29] reported the vertical structures of the raindrop size distribution and QPE parameters of two main synoptic systems: typhoons and Meiyu (or Baiu) fronts. According to these researchers, radar reflectivity factors constitute a vital variable that can be used as a QPE model input.
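For illustration, the following is a minimal sketch of how such a Z–R relationship converts a reflectivity value in dBZ into a rain rate; the function name and NumPy usage are illustrative rather than taken from the paper.

```python
import numpy as np

def dbz_to_rain_rate(dbz, a=200.0, b=1.6):
    """Convert reflectivity in dBZ to a rain rate (mm/h) via Z = a * R**b.

    Defaults are the Marshall-Palmer coefficients (a = 200, b = 1.6) cited above.
    """
    z = 10.0 ** (np.asarray(dbz, dtype=float) / 10.0)  # dBZ -> linear reflectivity Z
    return (z / a) ** (1.0 / b)                        # invert Z = a * R**b for R

# Example: a 40-dBZ echo corresponds to roughly 11.5 mm/h under Marshall-Palmer.
print(dbz_to_rain_rate(40.0))
```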
This study developed a model for estimating hourly rainfall during typhoons and analyzed the accuracy of typhoon rainfall estimation. As previously discussed, to address the problem of QPE, reflectivity measurements were employed. Radar imagery released by the CWB was used to estimate instantaneous rainfall, and a neural network-based model was employed to develop a precipitation estimation model. Specifically, multilayer perceptron (MLP) neural networks and convolutional neural networks (CNNs) were used to develop a typhoon precipitation estimation model.
Numerous studies have discussed typhoon rainfall estimation using machine learning models. Such models typically include MLP networks, radial basis function networks, support vector machines, random forests, and Bayesian networks [30,31,32,33,34,35]. Studies have combined such machine learning models with radar reflectivity and satellite remotely sensed data [36,37]. For example, Wei [38] developed a fuzzy inference-based neural network and estimated overland precipitation by using ground climatological data and radar reflectivity. Tao et al. [39] extracted useful features from bispectral satellite information, infrared images, and water vapor channels to detect rain by using deep neural networks.
Furthermore, studies have employed new deep learning models such as CNNs. CNNs have been revolutionary in image analysis and are considered state-of-the-art for numerous tasks, including image classification, face recognition, and object detection [40,41,42,43,44]. CNNs are typically used in classification tasks in which the output for an image is a single class label [45]. A standard CNN consists of alternating convolutional and pooling layers with fully connected layers above them [46]. A pretrained CNN such as VGG-16 [47], together with its intermediate convolutional layers, can be used as an initial feature-map extractor. The convolutional layer is the core component of a CNN and outputs feature maps by computing the dot product between a local region of the input feature maps and a filter [48]. The pooling layer performs downsampling on feature maps by computing the maximum or average value of a subregion. Such newly developed models can be used to process radar images in atmospheric studies (e.g., rainfall inversion or typhoon intensity prediction). For example, Chen et al. [49] proposed a satellite imagery-based CNN for estimating tropical cyclone intensity. Tran and Song [50] predicted multichannel radar image sequences by using deep neural network-based image processing techniques. Lee et al. [51] adopted CNNs using image patterns to estimate tropical cyclone intensity by mimicking human cloud pattern recognition. Few studies have applied CNNs and remote sensing images to tropical cyclone rainfall estimation. Accordingly, the present study proposes a relevant methodology to fill this research gap.
This study proposes two structures for radar mosaic-based neural networks, namely a radar mosaic-based convolutional neural network (RMCNN) and a radar mosaic-based multilayer perceptron (RMMLP), to estimate typhoon rainfall. The neural network-based models were compared with empirical formulas by using the formula Z = 200R^1.6 proposed by [18], which is referred to as the Marshall–Palmer model (Z-R_MP). In addition, the formula Z = aR^b was reformulated at each station for the investigated events (Z-R_station). Regarding rainfall rate estimations of the studied stations, the smallest relative root mean squared error was computed by the proposed RMCNN (0.713), followed by RMMLP (0.848), Z-R_MP (1.030), and Z-R_station (1.392).
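The paper reports only the resulting station-specific Z-R coefficients (Step 7 of Section 3) and does not describe the fitting procedure; one common approach, shown here purely as an assumed illustration, is least squares in log–log space.

```python
import numpy as np

def fit_z_r(reflectivity_z, rain_rate):
    """Fit Z = a * R**b by least squares in log-log space.

    Assumed fitting procedure for illustration only; the paper reports just the
    resulting station-specific coefficients (e.g., Z = 288 R^1.195 at Hualien).
    """
    z = np.asarray(reflectivity_z, dtype=float)
    r = np.asarray(rain_rate, dtype=float)
    mask = (z > 0) & (r > 0)                       # logarithm requires positive values
    b, log_a = np.polyfit(np.log(r[mask]), np.log(z[mask]), 1)
    return float(np.exp(log_a)), float(b)          # returns (a, b)
```

With paired reflectivity and gauge observations from a station, the returned (a, b) would play the role of the station-specific coefficients used for the Z-R_station benchmark.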

2. Materials

The geography of the research area is illustrated in Figure 1. Typhoons generally move from east to west through Taiwan. Accordingly, this study used three experimental stations, namely Hualien station in the east (121.6051° E, 23.9729° N), Sun Moon Lake station in central Taiwan (120.9081° E, 23.8813° N), and Taichung station in the west (120.6841° E, 24.1457° N), to analyze the accuracy of rainfall forecasts in typhoon precipitation.
This study considered 17 typhoons that affected the area between 2013 and 2017 (Figure 2). As the typhoons approached or made landfall on Taiwan, the CWB released radar images every hour and collected observatory data. The resolution and color appearance of earlier radar images (released by the CWB before 2012) differ from those of recent images (released by the CWB after 2013). Therefore, we collected radar images starting from 2013. According to the CWB, the maximum wind speeds of mild, moderate, and severe typhoons are 17.2–32.6, 32.7–50.9, and >51 m/s, respectively. The statistics in Table 1 demonstrate that from 2013 to 2017, five severe (29.4%), seven moderate (41.2%), and five mild (29.4%) typhoons passed through the research area.
The radar reflectivity images collected in this study were radar mosaics produced by the CWB. The CWB typically creates a mosaic by using the reflectivity fields of the four WSR systems. Reflectivity is measured in decibels relative to Z (dBZ), and higher dBZ values suggest stronger echo signals. Radar mosaics are created as follows: The WSR systems scan for a period of approximately 7 min. Approximately 2.5 min later, the data collected by the four WSR systems are delivered to the host computer for data processing, and a composite mosaic is produced. After another 1–5 min, the mosaic and relevant data are sent to each servo host. Accordingly, a radar mosaic can be created approximately 15 min after the WSRs begin scanning.
The radar mosaics collected during typhoons in the present study were composed of hourly radar images. The radar images of the typhoons that made landfall or approached Taiwan are depicted in Figure 3. Most of the radar mosaics were complete because they comprised radar images delivered from several radar systems. A few mosaics had missing areas over parts of the sea that were not covered by the radar systems. This study filled in the missing areas by using the most recent image captured before the time of the missing data.
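A minimal sketch of this gap-filling step, assuming the hourly mosaics are stacked in a NumPy array with a boolean mask marking pixels without radar coverage; the array layout and function name are assumptions, not from the paper.

```python
import numpy as np

def fill_missing_with_previous(mosaics, missing_masks):
    """Fill uncovered pixels of each hourly mosaic with the most recent prior value.

    mosaics: float array of shape (T, H, W) of reflectivity mosaics.
    missing_masks: bool array of shape (T, H, W), True where coverage is missing.
    """
    filled = mosaics.copy()
    for t in range(1, filled.shape[0]):
        gap = missing_masks[t]
        # The previous frame has already been filled, so this propagates the
        # most recent valid value forward in time.
        filled[t][gap] = filled[t - 1][gap]
    return filled
```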

3. Methodology

This study designed a set of methods for estimating rainfall during typhoons by using the received instantaneous radar images. The procedures can be explained as follows:
Step 1: Collect relevant data from the research area and select typhoons for analysis. Typhoon weather information was scanned to confirm whether the typhoons affected the research area, and the event data of all historical typhoons were then examined;
Step 2: Refine the data collected from radar reflectivity images and gauge rainfall during the typhoons. After the relevant typhoon data were collected, data preprocessing was conducted. The preprocessed data were divided into radar images and observatory rainfall data. The observatory rainfall data obtained in this study were raw data. Parts of the observatory data were incomplete because of instrument malfunction or human error. Before data analysis, the missing values were filled in using interpolation. The method of radar image processing is explained in Section 2;
Step 3: Preprocess the data sets and categorize typhoon event data into training–validation and testing subsets. Of the analyzed typhoons, four approached the island from the east, made landfall on the east coast, passed through the CMR, and left the island from the west coast. The typhoon routes were near the three experimental stations in Hualien, Sun Moon Lake, and Taichung, and passed through the center of the island. This type of typhoon route results in the most severe damage. The four typhoons, namely Matmo in 2014 (Figure 2e), Soudelor in 2015 (Figure 2h), Dujuan in 2015 (Figure 2j), and Megi in 2016 (Figure 2m), constituted the testing set. The remaining 13 typhoons constituted the training–validation set. The training–validation set was used to train model parameters and to verify the applicability of models by using 10-fold cross-validation. In total, 4701 hourly data were obtained. Of these, 4146 were used for the training–validation set and 555 were used for testing;
Step 4: Crop the reflectivity images. The latitudinal and longitudinal range of the original radar images was 117.32°–124.79° E, 21.70°–27.17° N. Because the original images had a wide geographical range, cropping was required to obtain the most suitable image size that included the rainfall data collected by ground-based observatories;
Step 5: Conduct neural network-based estimation and identify optimal reflectivity image sizes and model parameters. This study developed an RMCNN model in accordance with the different image sizes after cropping and initially applied default model parameter values to identify the most suitable image size. After selection, relevant parameters for the RMCNN and RMMLP models were analyzed. The number of neurons and the learning rate were calibrated in the RMCNN and RMMLP models. The model inputs were the reflectivity images cropped in Step 4, and the outputs were the rainfall rates at the stations. The RMCNN and RMMLP models were implemented using the open-source scikit-learn and Keras libraries in Python 3.7 (Python Software Foundation, Wilmington, DE, USA) [52,53];
Step 6: Evaluate the accuracy of the neural network-based models using the testing subset. Each selected typhoon was analyzed using the models to estimate typhoon rainfall and compare model errors and efficiency. This study used the root mean squared error (RMSE), mean absolute error (MAE), relative RMSE (rRMSE), relative MAE (rMAE), and efficiency coefficient (CE) as evaluation measures. Their equations can be defined as follows:
RMSE = \sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(R_i^{est} - R_i^{obs}\right)^2},
MAE = \frac{1}{n}\sum_{i=1}^{n}\left|R_i^{est} - R_i^{obs}\right|,
rRMSE = \frac{RMSE}{\bar{R}^{obs}},
rMAE = \frac{MAE}{\bar{R}^{obs}},
CE = 1 - \frac{\sum_{i=1}^{n}\left(R_i^{obs} - R_i^{est}\right)^2}{\sum_{i=1}^{n}\left(R_i^{obs} - \bar{R}^{obs}\right)^2},
where n represents the number of data points; R_i^{est} and R_i^{obs} represent the ith estimated and observed values, respectively; and \bar{R}^{obs} represents the average observed value (a code sketch of these measures is given after this step list);
Step 7: Compare the optimal neural network-based models with empirical formulas by using the Z-R_MP and Z-R_station formulas. We computed the Z-R relationship at Hualien station (Z-R_Hualien) as Z = 288R^1.195, that at Sun Moon Lake station as Z = 378R^1.0957, and that at Taichung station (Z-R_Taichung) as Z = 352R^0.8376.
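Translating the Step 6 evaluation measures into code is direct; the following minimal NumPy sketch (the function name is illustrative) assumes paired arrays of hourly estimated and observed rainfall.

```python
import numpy as np

def evaluation_measures(r_est, r_obs):
    """Compute RMSE, MAE, rRMSE, rMAE, and CE for estimated vs. observed rainfall."""
    r_est = np.asarray(r_est, dtype=float)
    r_obs = np.asarray(r_obs, dtype=float)
    rmse = np.sqrt(np.mean((r_est - r_obs) ** 2))
    mae = np.mean(np.abs(r_est - r_obs))
    obs_mean = r_obs.mean()
    ce = 1.0 - np.sum((r_obs - r_est) ** 2) / np.sum((r_obs - obs_mean) ** 2)
    return {"RMSE": rmse, "MAE": mae,
            "rRMSE": rmse / obs_mean, "rMAE": mae / obs_mean, "CE": ce}
```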

Network Architecture

The developed RMMLP and RMCNN models were based on two types of neural networks, namely an MLP and a CNN. The framework of the proposed RMMLP (Figure 4) is a conventional artificial neural network that includes input, hidden, and output layers. The fully connected layers directly receive the cropped images after flattening (conversion from a two-dimensional array to a one-dimensional array). Therefore, the RMMLP model can be used when images serve as the network inputs.
RMMLP model training involves learning and recall phases. In the learning phase, data training is conducted using known input and output values to obtain a set of connection weights in the hidden layer. The weights are used to calculate objective output results. In the recall phase, another input value (i.e., validation and test sets) is entered, and the estimated value is output using the weights obtained in the learning phase.
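As an illustration of this structure, a minimal Keras sketch is given below; the single hidden layer, the linear output, the use of tensorflow.keras, and the default hyperparameter values (drawn from the ranges calibrated in Section 4) are assumptions rather than the authors' exact configuration.

```python
from tensorflow import keras
from tensorflow.keras import layers

def build_rmmlp(image_size=12, n_hidden=20, learning_rate=0.0009):
    """Flatten a cropped radar mosaic patch and map it to an hourly rainfall rate."""
    model = keras.Sequential([
        keras.Input(shape=(image_size, image_size, 1)),  # cropped reflectivity patch
        layers.Flatten(),                                # 2-D image -> 1-D vector
        layers.Dense(n_hidden, activation="sigmoid"),    # hidden layer
        layers.Dense(1),                                 # rainfall rate (mm/h)
    ])
    model.compile(optimizer=keras.optimizers.Adam(learning_rate=learning_rate),
                  loss="mse")
    return model
```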
Unlike the RMMLP model, the RMCNN model has two additional layers: convolutional and pooling layers (Figure 5). The two additional layers enable the RMCNN to efficiently convert the collected information into applicable data for extracting features. Subsequently, the features are combined by the fully connected layer, and model training is conducted.
The convolutional layer of the RMCNN model contains several convolution kernels. Different weight combinations assigned to the convolution kernels can be used to sharpen images, detect edges, and extract features for recognition purposes. The neurons in the convolutional layer act as feature detectors. At the input end, a convolution kernel of a defined size is first applied. In brief, the convolutional layer extracts features from an image. The convolution equation is expressed as follows:
(f \ast g)(x) = \int f(\tau)\, g(x - \tau)\, d\tau,
where f(x) represents the original pixels (the original image is obtained by superimposing all pixels) and g(x) denotes the point of application.
Through the convolution operation, the modeling system outputs, at each position, a result obtained by superimposing multiple inputs. In image analysis, a convolution kernel is the combination of all application points. The application points of the convolution kernel act on the original pixels sequentially; accordingly, the output of the linear superposition is the final output of the convolution kernel. The equation is as follows:
O(i, j) = (I \ast K)(i, j) = \sum_{m=0}^{1}\sum_{n=0}^{1} I(i + m, j + n)\, K(m, n),
where O, I, and K are the output, input, and convolution kernel, respectively, and m and n are indices defined by the convolution kernel.
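The discrete form above corresponds to the following direct (deliberately unoptimized) NumPy sketch, written for an arbitrary kernel size rather than the 2 × 2 limits shown in the equation; the function name is illustrative.

```python
import numpy as np

def conv2d_valid(image, kernel):
    """Direct implementation of O(i, j) = sum_m sum_n I(i+m, j+n) * K(m, n)
    with 'valid' output size (no padding)."""
    kh, kw = kernel.shape
    oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# Example: a 3 x 3 edge-detection kernel applied to a small reflectivity patch.
patch = np.arange(36, dtype=float).reshape(6, 6)
kernel = np.array([[0, -1, 0], [-1, 4, -1], [0, -1, 0]], dtype=float)
print(conv2d_valid(patch, kernel).shape)  # (4, 4)
```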
In an RMCNN, the fully connected layer performs a regression on the extracted features flattened after the convolution and pooling layers. During training, a dropout function randomly deactivates some of the neurons to prevent overfitting.

4. Modeling

This study employed the training–validation set to train model parameters and verify model applicability. Two types of neural network models (i.e., RMCNN and RMMLP models) were established to examine the most suitable size and hyperparameters (e.g., number of neurons and learning rate) concerning radar images. A trial-and-error approach was used for verification, and the RMCNN architecture used in this study is displayed in Figure 5. The settings of each RMCNN model parameter are outlined as follows: filter weighting size = 3 × 3; padding method = same; max pooling size = 2 × 2. The activation function of the convolutional layer was a rectified linear unit (ReLU) function, the activation function of the hidden layer was a sigmoid function, and the dropout rate was 0.2.
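Under these settings, an RMCNN-style network can be sketched in Keras roughly as follows. The number of filters and the number of convolution–pooling blocks appear only in Figure 5 and are not stated in the text, so those values, together with the builder's name and default arguments, are illustrative assumptions rather than the authors' exact configuration.

```python
from tensorflow import keras
from tensorflow.keras import layers

def build_rmcnn(image_size=12, n_filters=16, n_hidden=18, learning_rate=0.0008):
    """RMCNN-style sketch: 3x3 filters, 'same' padding, 2x2 max pooling, ReLU
    convolutions, sigmoid hidden layer, dropout rate 0.2 (settings from the text);
    n_filters and the single convolution block are assumptions."""
    model = keras.Sequential([
        keras.Input(shape=(image_size, image_size, 1)),        # cropped radar mosaic patch
        layers.Conv2D(n_filters, (3, 3), padding="same", activation="relu"),
        layers.MaxPooling2D(pool_size=(2, 2)),
        layers.Flatten(),
        layers.Dropout(0.2),                                    # guard against overfitting
        layers.Dense(n_hidden, activation="sigmoid"),           # fully connected hidden layer
        layers.Dense(1),                                        # hourly rainfall rate (mm/h)
    ])
    model.compile(optimizer=keras.optimizers.Adam(learning_rate=learning_rate),
                  loss="mse")
    return model
```

A call such as build_rmcnn(image_size=12, n_hidden=18, learning_rate=0.0008) would mirror the Hualien settings calibrated in Section 4.2, under the same caveats.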

4.1. Image Resizing and Selection

The resolution of the original radar images was 1024 × 1024 pixels. This study first located and centered the experimental stations in the images and then cropped the images to sizes of 4 × 4, 6 × 6, …, and 30 × 30 pixels. One pixel typically corresponds to an actual distance of 0.703 km. The cropped images and observatory rainfall data were then used as inputs into the RMCNN model for rainfall estimation. In this phase, the number of neurons in the hidden layer and the learning rate were set to 10 and 0.0001, respectively.
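A minimal sketch of this station-centered cropping; the station's pixel coordinates in the example and the handling of even patch sizes are assumptions.

```python
import numpy as np

def crop_around_station(mosaic, row, col, size):
    """Crop a size x size patch from a radar mosaic, centered on the station pixel (row, col)."""
    half = size // 2
    return mosaic[row - half:row - half + size, col - half:col - half + size]

# Example: a 12 x 12 patch around a hypothetical station pixel (512, 640)
# of a 1024 x 1024 mosaic (12 pixels correspond to roughly 8.44 km).
mosaic = np.zeros((1024, 1024))
patch = crop_around_station(mosaic, 512, 640, 12)
print(patch.shape)  # (12, 12)
```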
Table 2, Table 3 and Table 4 present the RMSE results for different image sizes for the Hualien, Sun Moon Lake, and Taichung stations, respectively. The lowest RMSE values for the estimations at the three experimental stations (3.449, 3.061, and 2.478 mm/h) were observed when an image size of 12 × 12 pixels was used for Hualien and Taichung and an image size of 14 × 14 pixels was used for Sun Moon Lake. Therefore, these were selected as the most suitable image sizes. Here, 12 pixels correspond to an actual distance of 8.44 km.

4.2. Parameter Calibration

After the most suitable image size was selected, the RMCNN model hyperparameters were examined. First, the number of neurons in the hidden layer was calibrated. The number of neurons was set between 5 and 35. Figure 6a,c,e present the RMSE values of the RMCNN and RMMLP models when applied at the Hualien, Sun Moon Lake, and Taichung stations, respectively. The lowest RMSE values (2.764 and 3.624 mm/h) were obtained when the RMCNN and RMMLP models applied at the Hualien station had 18 and 20 neurons, respectively; the lowest RMSE values (2.141 and 2.695 mm/h) for the estimations at the Sun Moon Lake station were obtained when the RMCNN and RMMLP models had 13 and 16 neurons, respectively; and the lowest RMSE values (1.542 and 1.756 mm/h) for the estimations at the Taichung station were obtained when the RMCNN and RMMLP models had 20 and 22 neurons, respectively.
This study used adaptive moment estimation (Adam) proposed by [54] to analyze the learning rate. The Adam algorithm was selected because it has an adaptive learning rate and can adjust the learning rate and update neural network weights on the basis of first- and second-order moment estimation, which increases the calculation efficiency. To apply the Adam algorithm, this study set the initial learning rate to between 0.0001 and 0.002. Figure 6b,d,f illustrate the RMSE values of the RMCNN and RMMLP models applied at the three stations. For the estimations at the Hualien station, the RMCNN and RMMLP models had minimum RMSE values (2.690 and 3.222 mm/h) when the learning rate was 0.0008 and 0.0009, respectively. For Sun Moon Lake station, the RMCNN and RMMLP models had minimum RMSE values (2.031 and 2.606 mm/h) when the learning rate was 0.0006 and 0.0008, respectively. For Taichung station, the RMCNN and RMMLP models had minimum RMSE values (1.437 and 1.563 mm/h) when the learning rate was 0.0006 and 0.0007, respectively.
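The learning-rate sweep can be sketched as a simple trial-and-error loop; build_model is assumed to be a callable such as the build_rmcnn sketch above, and the epoch and batch-size settings are arbitrary because they are not reported in the text.

```python
import numpy as np

def calibrate_learning_rate(build_model, x_train, y_train, x_val, y_val,
                            rates=(1e-4, 2e-4, 4e-4, 6e-4, 8e-4, 1e-3, 2e-3)):
    """Fit one model per candidate Adam learning rate and keep the rate with the
    lowest validation RMSE; the candidate rates span the 0.0001-0.002 range used
    in the text."""
    results = {}
    for lr in rates:
        model = build_model(learning_rate=lr)
        model.fit(x_train, y_train, epochs=50, batch_size=32, verbose=0)
        pred = model.predict(x_val, verbose=0).ravel()
        results[lr] = float(np.sqrt(np.mean((pred - y_val) ** 2)))  # validation RMSE (mm/h)
    best = min(results, key=results.get)
    return best, results
```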

5. Results and Discussion

To examine the applicability of the three models and estimation errors for typhoons passing through the CMR from east to west, this study selected data on four typhoons for evaluation and compared the analysis results of the RMCNN, RMMLP, Z-R_MP, and Z-R_station models.

5.1. Simulation Results

Four typhoons (Matmo, Soudelor, Dujuan, and Megi) were employed to identify the most suitable rainfall estimation model. Figure 7 presents hyetographs of observed rainfall and RMCNN, RMMLP, Z-R_MP, and Z-R_station model estimations at the Hualien, Sun Moon Lake, and Taichung stations.
The rain duration of typhoon Matmo was 43 h (time series: t = 1–43 h; Figure 7). The Hualien, Sun Moon Lake, and Taichung stations had the maximum observed rainfall at t = 20 h (41 mm/h), t = 24 h (26 mm/h), and t = 24 h (20 mm/h), respectively. After the typhoon circulation structure was disturbed by the CMR, the observed rainfalls at the Taichung and Sun Moon Lake stations were less than that at the Hualien station. The results for the remaining three typhoons were similar. Typhoon Soudelor had a rain duration of 50 h (t = 44–93 h), and the Hualien station recorded the maximum rainfall at t = 66 h (54 mm/h), whereas the Sun Moon Lake station recorded the maximum rainfall with a 3-h delay (t = 69 h; 19 mm/h) and the Taichung station recorded the maximum rainfall with a 7-h delay (t = 73 h; 9 mm/h). The rain duration of typhoon Dujuan was 28 h (t = 94–121 h). The maximum rainfall at the Hualien station was reached at t = 109 h (35 mm/h); that at the Sun Moon Lake station was reached at t = 114 h (34 mm/h), with a 5-h delay; and that at the Taichung station was reached at t = 114 h (15 mm/h), with a 5-h delay. Typhoon Megi had a rain duration of 63 h (t = 122–184 h). The maximum rainfall at the Hualien, Sun Moon Lake, and Taichung stations was reached at t = 140 h (61 mm/h), t = 143 h (8 mm/h), and t = 151 h (8 mm/h; 11-h delay), respectively.
Regarding the estimated rain pattern, the estimations of all models at the Hualien station (Figure 7a) closely matched the observed pattern. Concerning the peak rainfall on the hyetographs, all models reflected an increase in rainfall. However, the models showed larger differences in the estimated rainfall pattern, peak rainfall on the hyetograph, and peak rainfall arrival time for data recorded at the Sun Moon Lake and Taichung stations. This may be because the rapidly changing typhoon circulation structure, as captured in the radar images received for the Sun Moon Lake and Taichung stations, increases estimation uncertainty. By contrast, before a typhoon reaches the island, its structure and circulation observed at the Hualien station tend to be more stable, which facilitates the estimation of intense rainfall. This study compared these models according to their performance on several evaluation measures.

5.2. Evaluations

Figure 8 shows scatter plots comparing the observations with the estimations of the four models. For all three stations, the correlation coefficient (r) indicated that the RMCNN model was more accurate than the RMMLP, Z-R_MP, and Z-R_station models. The RMCNN model estimations at the Hualien station (r = 0.946) and Sun Moon Lake station (r = 0.961) were more satisfactory and consistent with the observed data than those at the Taichung station (r = 0.878).
Regarding absolute errors, this study assessed the MAE and RMSE for the four examined typhoons. For the RMCNN model, the MAE and RMSE values for the four typhoons observed at the Hualien station were higher than those observed at the Sun Moon Lake and Taichung stations (Figure 9a,e). Similar results were observed for the RMMLP (Figure 9b,f), Z-R_MP (Figure 9c,g), and Z-R_station (Figure 9d,h) models. According to these results, the estimations at the Hualien station had larger absolute errors because rainfall amounts at this station were greater.
This study also used rMAE and rRMSE to evaluate the estimations for the four typhoons. For the RMCNN model, the rMAE and rRMSE values observed for the Taichung station were higher than those observed for the Hualien and Sun Moon Lake stations for typhoons Matmo, Soudelor, and Dujuan (Figure 10a,e). For the RMMLP and Z-R_MP models, the rMAE and rRMSE values observed for the Taichung station were also higher than those observed for the Hualien and Sun Moon Lake stations for typhoons Matmo and Soudelor (Figure 10b,c,f,g).
Table 5 presents the overall performance levels obtained for the models for the four examined typhoons at Hualien, Sun Moon Lake, and Taichung stations. The RMCNN model exhibited a superior performance for each measure compared with the RMMLP, Z-R_MP, and Z-R_station models. Moreover, the absolute errors observed for the estimations at the Hualien station were greater than those observed for the estimations at the Sun Moon Lake and Taichung stations, but the relative errors observed for the estimations at the Hualien station were lower than those observed for the estimations at the Sun Moon Lake and Taichung stations. Finally, the RMCNN model had the highest CE (0.867) for the estimations at the Hualien station, whereas the Z-R_station model had the lowest CE among all models at the three stations.

5.3. Estimation Performance for Peak Rainfall

This study examined the estimation performance of the four models for peak rainfall. The relative error of peak rainfall and the absolute time error of peak rainfall were derived as measures for comparison. First, the relative error of peak rainfall (REpeak), defined as the sum of the absolute peak-rainfall errors normalized by the sum of the observed peaks, can be expressed as follows:
RE_{peak} = \frac{\sum_{k=1}^{m}\left|O_{k}^{p,pre} - O_{k}^{p,obs}\right|}{\sum_{k=1}^{m} O_{k}^{p,obs}},
where O_{k}^{p,pre} is the estimated peak value in typhoon event k, O_{k}^{p,obs} is the observed peak value in typhoon event k, and m is the total number of typhoon events.
Second, the absolute time error of peak rainfall (ATPeak), defined as the average of absolute time errors, can be expressed as follows:
AT_{peak} = \frac{1}{m}\sum_{k=1}^{m}\left|T_{k}^{p,pre} - T_{k}^{p,obs}\right|,
where T_{k}^{p,pre} is the arrival time of the estimated peak rainfall in typhoon event k and T_{k}^{p,obs} is the arrival time of the observed peak rainfall in typhoon event k.
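Both measures translate directly into code; the following is a minimal NumPy sketch, assuming per-event arrays of estimated and observed peak values (mm/h) and peak arrival times (h), with an illustrative function name.

```python
import numpy as np

def peak_measures(est_peaks, obs_peaks, est_peak_times, obs_peak_times):
    """Compute RE_peak and AT_peak over m typhoon events as defined above."""
    est_peaks = np.asarray(est_peaks, dtype=float)
    obs_peaks = np.asarray(obs_peaks, dtype=float)
    re_peak = np.sum(np.abs(est_peaks - obs_peaks)) / np.sum(obs_peaks)
    at_peak = np.mean(np.abs(np.asarray(est_peak_times, dtype=float)
                             - np.asarray(obs_peak_times, dtype=float)))
    return re_peak, at_peak
```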
Lower REpeak and ATpeak values indicate higher performance. Figure 11 displays the REpeak and ATpeak values for the four typhoons. As illustrated in Figure 11a, the following results were observed: (1) For the estimations at the Hualien and Sun Moon Lake stations, the REpeak value for the RMCNN model was superior to those for the RMMLP, Z-R_MP, and Z-R_station models, but for the estimations at the Taichung station, the REpeak value for the Z-R_MP model was superior to those for the RMCNN, RMMLP, and Z-R_station models; (2) for the RMCNN model, the Hualien station had the smallest REpeak (0.262), followed by the Sun Moon Lake station (0.303) and the Taichung station (0.385). The peak rainfall estimation error at the Hualien station was smaller than those at the Sun Moon Lake and Taichung stations because the typhoon routes were disturbed by the CMR; hence, peak rainfall was more difficult to estimate at the Sun Moon Lake and Taichung stations.
Additionally, as shown in Figure 11b, the following results were observed: (1) The ATpeak values observed for the RMCNN and RMMLP models for the estimations at the Hualien station were 0, indicating that these models accurately estimated the peak rainfall arrival time. Although none of the models accurately estimated the peak rainfall arrival times at the Sun Moon Lake and Taichung stations, the ATpeak value obtained for the RMCNN model was smaller than those obtained for the RMMLP, Z-R_MP, and Z-R_station models; (2) the ATpeak values obtained for all models for the estimations at the Hualien station were smaller than those obtained for the estimations at the Sun Moon Lake and Taichung stations because the typhoon eye and structure were more stable before the typhoons reached the landmass. Therefore, the peak rainfall arrival time was more easily estimated in Hualien.

6. Conclusions

This study examined the efficiency of typhoon precipitation estimation and evaluated the influence of the CMR on rainfall during typhoons. In general, when a typhoon approaches Taiwan from east to west, its route or the wind and rain it carries are more easily predicted before it passes through the CMR. By contrast, after the typhoon passes the CMR, its route and delay in rainfall become unpredictable.
This study considered 17 typhoons that affected the research area from 2013 to 2017 and collected hourly radar images released by the CWB during the typhoons, as well as observatory data. Hualien station in the east, Sun Moon Lake station in Central Taiwan, and Taichung station in the west were selected as the three experimental sites, and RMCNN, RMMLP, Z-R_MP, and Z-R_station models were employed. The RMCNN model was incorporated with the radar images to estimate typhoon rainfall. For the overall rainfall rates of all stations, RMCNN yielded the smallest relative root mean squared error (0.713), followed by RMMLP (0.848), Z-R_MP (1.030), and Z-R_station (1.392). Moreover, the smallest relative error of peak rainfall was yielded by RMCNN (0.316), followed by RMMLP (0.379), Z-R_MP (0.402), and Z-R_station (0.688). The smallest absolute time error of peak rainfall arrival was obtained with RMCNN (1.507 h), followed by RMMLP (2.673 h), Z-R_MP (2.917 h), and Z-R_station (3.250 h). On the basis of the relevant evaluation measures, the RMCNN model outperformed the RMMLP, Z-R_MP, and Z-R_station models.
The estimations of the four models at the Hualien station closely matched the observed rainfall data. Although all models estimated an increase in rainfall at the peak rainfall time on the hyetographs, the peak values were underestimated. The estimations of the four models concerning the hyetograph and peak rainfall arrival time at the Sun Moon Lake and Taichung stations were considerably inconsistent with the observed data. This may be because the rapidly changing typhoon circulation structure increases the uncertainty of radar-based rainfall estimation. By contrast, peak rainfall and the peak rainfall arrival time were easier to estimate at the Hualien station because the typhoon structure and circulation are more stable before the typhoon reaches the island. In conclusion, this study employed the RMCNN model integrated with radar images to estimate typhoon rainfall, and the results reveal that the model incorporating radar images can effectively estimate precipitation during typhoons.

Author Contributions

C.-C.W. conceived and designed the experiments and wrote the manuscript; C.-C.W. and P.-Y.H. carried out the experiments, analyzed the data, and discussed the results. All authors have read and agreed to the published version of the manuscript.

Funding

This study was supported by the Ministry of Science and Technology, Taiwan, under Grant No. MOST108-2111-M-019-001.

Acknowledgments

The authors acknowledge data provided by the Central Weather Bureau of Taiwan, which are available at https://rdc28.cwb.gov.tw/. This manuscript was edited by Wallace Academic Editing.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Hsiao, L.H.; Peng, M.S.; Chen, D.S.; Huang, K.N.; Yeh, T.C. Sensitivity of typhoon track predictions in a regional prediction system to initial and lateral boundary conditions. J. Appl. Meteorol. Climatol. 2009, 48, 1913–1928. [Google Scholar] [CrossRef]
  2. Wei, C.C. Examining El Niño–Southern Oscillation effects in the subtropical zone to forecast long-distance total rainfall from typhoons: A case study in Taiwan. J. Atmos. Ocean. Technol. 2017, 34, 2141–2161. [Google Scholar] [CrossRef]
  3. Central Weather Bureau (CWB). Available online: http://www.cwb.gov.tw/V7/index.htm (accessed on 1 June 2019).
  4. Wang, M.; Zhao, K.; Xue, M.; Zhang, G.; Liu, S.; Wen, L.; Chen, G. Precipitation microphysics characteristics of a Typhoon Matmo (2014) rainband after landfall over eastern China based on polarimetric radar observations. J. Geophys. Res. Atmos. 2016, 121, 12415–12433. [Google Scholar] [CrossRef]
  5. Wei, C.C.; You, G.J.Y.; Chen, L.; Chou, C.C.; Roan, J. Diagnosing rain occurrences using passive microwave imagery: A comparative study on probabilistic graphical models and “black box” models. J. Atmos. Ocean. Technol. 2015, 32, 1729–1744. [Google Scholar] [CrossRef]
  6. Wei, C.C. Surface wind nowcasting in the Penghu Islands based on classified typhoon tracks and the effects of the Central Mountain Range of Taiwan. Weather Forecast. 2014, 29, 1425–1450. [Google Scholar] [CrossRef]
  7. Huang, C.Y.; Chou, C.W.; Chen, S.H.; Xie, J.H. Topographic rainfall of tropical cyclones past a mountain range as categorized by idealized simulations. Weather Forecast. 2020, 35, 25–49. [Google Scholar] [CrossRef]
  8. Wu, C.C.; Kuo, Y.H. Typhoons affecting Taiwan: Current understanding and future challenges. Bull. Am. Meteorol. Soc. 1999, 80, 67–80. [Google Scholar] [CrossRef]
  9. Fang, X.; Kuo, Y.H.; Wang, A. The impacts of Taiwan topography on the predictability of Typhoon Morakot’s record-breaking rainfall: A high-resolution ensemble simulation. Weather Forecast. 2011, 26, 613–633. [Google Scholar] [CrossRef] [Green Version]
  10. Hsiao, L.F.; Yang, M.J.; Lee, C.S.; Kuo, H.-C.; Shih, D.-S.; Tsai, C.-C.; Wang, C.-J.; Chang, L.-Y.; Feng, L.; Chen, D.-S.; et al. Ensemble forecasting of typhoon rainfall and floods over a mountainous watershed in Taiwan. J. Hydrol. 2013, 506, 55–68. [Google Scholar] [CrossRef] [Green Version]
  11. Huang, C.Y.; Chen, C.A.; Chen, S.H.; Nolan, D.S. On the upstream track deflection of tropical cyclones past a mountain range: Idealized experiments. J. Atmos. Sci. 2016, 73, 3157–3180. [Google Scholar] [CrossRef] [Green Version]
  12. Wu, C.C.; Yen, T.H.; Kuo, Y.H.; Wang, W. Rainfall simulation associated with Typhoon Herb (1996) near Taiwan. Part I: The topographic effect. Weather Forecast. 2002, 17, 1001–1015. [Google Scholar] [CrossRef]
  13. Wu, C.C.; Chou, K.H.; Lin, P.H.; Aberson, S.; Peng, M.; Nakazawa, T. Impact of dropwindsonde data on typhoon track forecasts in DOTSTAR. Weather Forecast. 2007, 22, 1157–1176. [Google Scholar] [CrossRef] [Green Version]
  14. Tsai, H.C.; Lee, T.H. Maximum covariance analysis of typhoon surface wind and rainfall relationships in Taiwan. J. Appl. Meteorol. Climatol. 2009, 48, 997–1016. [Google Scholar] [CrossRef]
  15. Wei, C.C. Improvement of typhoon precipitation forecast efficiency by coupling SSM/I microwave data with climatologic characteristics and precipitation. Weather Forecast. 2013, 28, 614–630. [Google Scholar] [CrossRef]
  16. Fabry, F.; Meunier, V.; Treserras, B.P.; Cournoyer, A.; Nelson, B. On the climatological use of radar data mosaics: Possibilities and challenges. Bull. Am. Meteorol. Soc. 2017, 98, 2135–2148. [Google Scholar] [CrossRef] [Green Version]
  17. Fabry, F. Radar Meteorology—Principles and Practice; Cambridge University Press: Cambridge, UK, 2015; p. 272. [Google Scholar]
  18. Marshall, J.M.; Palmer, W.M.K. The distribution of raindrops with size. J. Appl. Meteorol. 1948, 5, 165–166. [Google Scholar] [CrossRef]
  19. Biswas, S.K.; Chandrasekar, V. Cross-validation of observations between the GPM dual-frequency precipitation radar and ground based dual-polarization radars. Remote Sens. 2018, 10, 1773. [Google Scholar] [CrossRef] [Green Version]
  20. Georgakakos, K.P. Covariance propagation and updating in the context of real-time radar data assimilation by quantitative precipitation forecast models. J. Hydrol. 2000, 239, 115–129. [Google Scholar] [CrossRef]
  21. Hazenberg, P.; Torfs, P.J.J.F.; Leijnse, H.; Delrieu, G.; Uijlenhoet, R. Identification and uncertainty estimation of vertical reflectivity profiles using a Lagrangian approach to support quantitative precipitation measurements by weather radar. J. Geophys. Res. 2013, 118, 10243–10261. [Google Scholar] [CrossRef]
  22. Ivanov, S.; Michaelides, S.; Ruban, I. Mesoscale resolution radar data assimilation experiments with the harmonie model. Remote Sens. 2018, 10, 1453. [Google Scholar] [CrossRef] [Green Version]
  23. Krajewski, W.F.; Smith, J.A. Radar hydrology: Rainfall estimation. Adv. Water Resour. 2002, 25, 1387–1394. [Google Scholar] [CrossRef]
  24. Mimikou, M.A.; Baltas, E.A. Flood forecasting based on radar rainfall measurements. J. Water Resour. Plan. Manag. 1996, 122, 151–156. [Google Scholar] [CrossRef]
  25. Woo, W.C.; Wong, W.K. Operational application of optical flow techniques to radar-based rainfall. Nowcasting. Atmosphere 2017, 8, 48. [Google Scholar] [CrossRef] [Green Version]
  26. Ku, J.M.; Yoo, C. Calibrating radar data in an orographic setting: A case study for the typhoon Nakri in the Hallasan Mountain, Korea. Atmosphere 2017, 8, 250. [Google Scholar] [CrossRef] [Green Version]
  27. Libertino, A.; Allamano, P.; Claps, P.; Cremonini, R.; Laio, F. Radar estimation of intense rainfall rates through adaptive calibration of the Z-R relation. Atmosphere 2015, 6, 1559–1577. [Google Scholar] [CrossRef] [Green Version]
  28. Tang, J.; Matyas, C. A nowcasting model for tropical cyclone precipitation regions based on the TREC motion vector retrieval with a semi-Lagrangian scheme for doppler weather radar. Atmosphere 2018, 9, 200. [Google Scholar] [CrossRef] [Green Version]
  29. Chen, B.; Chen, B.; Lin, H.; Elsberry, R.L. Estimating tropical cyclone intensity by satellite imagery utilizing convolutional neural networks. Weather Forecast. 2019, 34, 447–465. [Google Scholar] [CrossRef]
  30. Kashiwao, T.; Nakayama, K.; Ando, S.; Ikeda, K.; Lee, M.; Bahadori, A. A neural network-based local rainfall prediction system using meteorological data on the Internet: A case study using data from the Japan Meteorological Agency. Appl. Soft Comput. 2017, 56, 317–330. [Google Scholar] [CrossRef]
  31. Lin, F.R.; Wu, N.J.; Tsay, T.K. Applications of cluster analysis and pattern recognition for typhoon hourly rainfall forecast. Adv. Meteorol. 2017, 2017, 17. [Google Scholar] [CrossRef]
  32. Lo, D.C.; Wei, C.C.; Tsai, N.P. Parameter automatic calibration approach for neural-network-based cyclonic precipitation forecast models. Water 2015, 7, 3963–3977. [Google Scholar] [CrossRef] [Green Version]
  33. Song, Y.; Han, D.; Rico-Ramirez, M.A. High temporal resolution rainfall rate estimation from rain gauge measurements. J. Hydroinform. 2017, 19, 930–941. [Google Scholar] [CrossRef] [Green Version]
  34. Unnikrishnan, P.; Jothiprakash, V. Data-driven multi-time-step ahead daily rainfall forecasting using singular spectrum analysis-based data pre-processing. J. Hydroinform. 2018, 20, 645–667. [Google Scholar] [CrossRef]
  35. Wei, C.C. Soft computing techniques in ensemble precipitation nowcast. Appl. Soft Comput. 2013, 13, 793–805. [Google Scholar] [CrossRef]
  36. Tripathi, S.; Srinivas, V.V.; Nanjundiah, R.S. Downscaling of precipitation for climate change scenarios: A support vector machine approach. J. Hydrol. 2006, 330, 621–640. [Google Scholar] [CrossRef]
  37. Wei, C.C. Meta-heuristic Bayesian networks retrieval combined polarization corrected temperature and scattering index for precipitations. Neurocomputing 2014, 136, 71–81. [Google Scholar] [CrossRef]
  38. Wei, C.C. Simulation of operational typhoon rainfall nowcasting using radar reflectivity combined with meteorological data. J. Geophys. Res. Atmos. 2014, 119, 6578–6595. [Google Scholar] [CrossRef]
  39. Tao, Y.; Gao, X.; Ihler, A.; Sorooshian, S.; Hsu, K. Precipitation identification with bispectral satellite information using deep learning approaches. J. Hydrometeorol. 2017, 18, 1271–1283. [Google Scholar] [CrossRef]
  40. Castelluccio, M.; Poggi, G.; Sansone, C.; Verdoliva, L. Land use classification in remote sensing images by convolutional neural networks. arXiv 2015, arXiv:1508.00092. [Google Scholar]
  41. Liu, Y.; Zhong, Y.; Fei, F.; Zhu, Q.; Qin, Q. Scene classification based on a deep random-scale stretched convolutional neural network. Remote Sens. 2018, 10, 444. [Google Scholar] [CrossRef] [Green Version]
  42. Marmanis, D.; Datcu, M.; Esch, T.; Stilla, U. Deep learning earth observation classification using ImageNet pretrained networks. IEEE Geosci. Remote Sens. Lett. 2016, 13, 105–109. [Google Scholar] [CrossRef] [Green Version]
  43. Nogueira, K.; Penatti, O.A.B.; dos Santos, J.A. Towards better exploiting convolutional neural networks for remote sensing scene classification. Pattern Recognit. 2017, 61, 539–556. [Google Scholar] [CrossRef] [Green Version]
  44. Tomè, D.; Monti, F.; Baroffio, L.; Bondi, L.; Tagliasacchi, M.; Tubaro, S. Deep convolutional neural networks for pedestrian detection. Signal Process. Image Commun. 2016, 47, 482–489. [Google Scholar] [CrossRef] [Green Version]
  45. Ronneberger, O.; Fischer, P.; Brox, T. U-Net: Convolutional networks for biomedical image segmentation. arXiv 2015, arXiv:1505.04597. [Google Scholar]
  46. Wu, H.; Gu, X. Towards dropout training for convolutional neural networks. Neural Netw. 2015, 71, 1–10. [Google Scholar] [CrossRef] [Green Version]
  47. Simonyan, K.; Zisserman, A. Very deep convolutional networks for large-scale image recognition. In Proceedings of the 2015 International Conference on Learning Representations (ICLR), San Diego, CA, USA, 7–9 May 2015; pp. 1–14. [Google Scholar]
  48. Zhang, W.; Tang, P.; Zhao, L. Remote sensing image scene classification using CNN-CapsNet. Remote Sens. 2019, 11, 494. [Google Scholar] [CrossRef] [Green Version]
  49. Chen, Y.; Duan, J.; An, J.; Liu, H. Raindrop size distribution characteristics for tropical cyclones and meiyu-baiu fronts impacting Tokyo, Japan. Atmosphere 2019, 10, 391. [Google Scholar] [CrossRef] [Green Version]
  50. Tran, Q.K.; Song, S. Multi-channel weather radar echo extrapolation with convolutional recurrent neural networks. Remote Sens. 2019, 11, 2303. [Google Scholar] [CrossRef] [Green Version]
  51. Lee, J.; Im, J.; Cha, D.H.; Park, H.; Sim, S. Tropical cyclone intensity estimation using multi-dimensional convolutional neural networks from geostationary satellite data. Remote Sens. 2020, 12, 108. [Google Scholar] [CrossRef] [Green Version]
  52. Pedregosa, F.; Varoquaux, G.; Gramfort, A.; Michel, V.; Thirion, B.; Grisel, O.; Blondel, M.; Prettenhofer, P.; Weiss, R.; Dubourg, V.; et al. Scikit-learn: Machine learning in Python. J. Mach. Learn. Res. 2011, 12, 2825–2830. [Google Scholar]
  53. Chollet, F. Keras: Deep Learning Library for Theano and Tensorflow. 2015. Available online: https://keras.io/ (accessed on 1 October 2019).
  54. Kingma, D.P.; Ba, J.L. Adam: A method for stochastic optimization. arXiv 2014, arXiv:1412.6980. [Google Scholar]
Figure 1. Geography of Taiwan and the locations of study sites and radar stations.
Figure 2. Historical typhoon paths.
Figure 3. Reflectivity images of analyzed typhoons.
Figure 4. Architecture of the proposed radar mosaic-based multilayer perceptron (RMMLP).
Figure 5. Architecture of radar mosaic-based convolutional neural network (RMCNN).
Figure 6. Parameter calibration of the number of neuron nodes and learning rate at Hualien (a,b), Sun Moon Lake (c,d), and Taichung (e,f).
Figure 7. Rainfall hyetographs and model estimations for (a) Hualien, (b) Sun Moon Lake, and (c) Taichung stations.
Figure 8. Scatterplots of observations versus estimations using RMCNN, RMMLP, Marshall–Palmer Z-R relationship (Z-R_MP), and reformulated Z-R relationship at each site (Z-R_station) models for Hualien (a–d), Sun Moon Lake (e–h), and Taichung (i–l).
Figure 9. Absolute errors for RMCNN, RMMLP, Z-R_MP, and Z-R_station: (a–d) MAE and (e–h) RMSE.
Figure 10. Relative errors for RMCNN, RMMLP, Z-R_MP, and Z-R_station: (a–d) rMAE and (e–h) rRMSE.
Figure 11. Performance levels for RMCNN, RMMLP, Z-R_MP, and Z-R_station in terms of (a) the relative error of peak rainfall (REpeak) and (b) the absolute time error of peak rainfall (ATpeak).
Table 1. Typhoons affecting the research area, 2013–2017 (17 incidents).
TyphoonPeriod (UTC)IntensityTotal rain (mm)
HualienSun Moon LakeTaichung
Cimaron17–18 July 2013Mild typhoon29113
Usagi21–22 September 2013Severe typhoon303255
Fitow5–6 October 2013Moderate typhoon5952
Hagibis15–16 June 2014Mild typhoon10165
Matmo22–23 July 2014Moderate typhoon33424095
Fung-Wong19–22 September 2014Mild typhoon1696431
Linfa7–9 July 2015Mild typhoon9212
Soudelor7–8 August 2015Moderate typhoon21913366
Goni22–23 August 2015Severe typhoon269512
Dujuan28–29 September 2015Severe typhoon16812987
Nepartak7–9 July 2016Severe typhoon3092312
Meranti13–14 September 2016Severe typhoon3235219
Megi27–28 September 2016Moderate typhoon3999876
Nesat28–29 July 2017Moderate typhoon5520942
Haitang30–31 July 2017Mild typhoon9079153
Hato22–23 August 2017Moderate typhoon11568
Talim13–13 September 2017Moderate typhoon3300
Table 2. Results for various image sizes (pixel × pixel) at Hualien station.
| Image size | 4 × 4 | 6 × 6 | 8 × 8 | 10 × 10 | 12 × 12 | 14 × 14 | 16 × 16 |
| RMSE (mm/h) | 3.776 | 3.773 | 3.659 | 4.285 | 3.449 | 4.263 | 4.813 |
| Image size | 18 × 18 | 20 × 20 | 22 × 22 | 24 × 24 | 26 × 26 | 28 × 28 | 30 × 30 |
| RMSE (mm/h) | 3.886 | 3.639 | 4.344 | 4.014 | 3.854 | 4.461 | 4.114 |
Table 3. Results for various image sizes (pixel × pixel) at Sun Moon Lake station.
| Image size | 4 × 4 | 6 × 6 | 8 × 8 | 10 × 10 | 12 × 12 | 14 × 14 | 16 × 16 |
| RMSE (mm/h) | 3.473 | 3.787 | 3.273 | 3.447 | 3.343 | 3.061 | 3.330 |
| Image size | 18 × 18 | 20 × 20 | 22 × 22 | 24 × 24 | 26 × 26 | 28 × 28 | 30 × 30 |
| RMSE (mm/h) | 3.182 | 3.280 | 3.172 | 3.158 | 3.446 | 3.706 | 4.002 |
Table 4. Results for various image sizes (pixel × pixel) at Taichung station.
| Image size | 4 × 4 | 6 × 6 | 8 × 8 | 10 × 10 | 12 × 12 | 14 × 14 | 16 × 16 |
| RMSE (mm/h) | 2.797 | 2.997 | 3.061 | 2.662 | 2.478 | 2.640 | 2.534 |
| Image size | 18 × 18 | 20 × 20 | 22 × 22 | 24 × 24 | 26 × 26 | 28 × 28 | 30 × 30 |
| RMSE (mm/h) | 2.623 | 3.119 | 3.091 | 3.127 | 2.936 | 3.017 | 3.099 |
Table 5. Overall model performance levels for four typhoons.
| Station | Measure | RMCNN | RMMLP | Z-R_MP | Z-R_station |
| Hualien | MAE (mm/h) | 1.870 | 2.218 | 2.670 | 4.206 |
| | RMSE (mm/h) | 3.502 | 4.351 | 4.955 | 7.569 |
| | rMAE | 0.309 | 0.366 | 0.441 | 0.695 |
| | rRMSE | 0.579 | 0.719 | 0.818 | 1.250 |
| | r | 0.946 | 0.930 | 0.868 | 0.854 |
| | CE | 0.867 | 0.794 | 0.733 | 0.378 |
| Sun Moon Lake | MAE (mm/h) | 1.070 | 1.340 | 1.522 | 2.268 |
| | RMSE (mm/h) | 2.124 | 2.672 | 3.155 | 4.458 |
| | rMAE | 0.314 | 0.393 | 0.446 | 0.665 |
| | rRMSE | 0.623 | 0.783 | 0.925 | 1.307 |
| | r | 0.961 | 0.939 | 0.835 | 0.852 |
| | CE | 0.860 | 0.778 | 0.691 | 0.383 |
| Taichung | MAE (mm/h) | 0.771 | 0.905 | 1.141 | 1.474 |
| | RMSE (mm/h) | 1.641 | 1.828 | 2.361 | 2.836 |
| | rMAE | 0.440 | 0.517 | 0.651 | 0.842 |
| | rRMSE | 0.937 | 1.043 | 1.348 | 1.619 |
| | r | 0.878 | 0.843 | 0.655 | 0.741 |
| | CE | 0.727 | 0.661 | 0.416 | 0.185 |
