Article

Inundation Assessment of the 2019 Typhoon Hagibis in Japan Using Multi-Temporal Sentinel-1 Intensity Images

1 Graduate School of Engineering, Chiba University, Chiba, Chiba 263-8522, Japan
2 National Research Institute for Earth Science and Disaster Resilience, Tsukuba, Ibaraki 305-0006, Japan
* Author to whom correspondence should be addressed.
Remote Sens. 2021, 13(4), 639; https://doi.org/10.3390/rs13040639
Submission received: 12 January 2021 / Revised: 5 February 2021 / Accepted: 8 February 2021 / Published: 10 February 2021
(This article belongs to the Collection Feature Papers for Section Environmental Remote Sensing)

Abstract:
Typhoon Hagibis passed through Japan on October 12, 2019, bringing heavy rainfall over half of Japan. Twelve banks of seven state-managed rivers collapsed, flooding a wide area. Quick and accurate damage proxy maps are helpful for emergency response and relief activities after such disasters. In this study, we propose a quick analysis procedure to estimate inundation due to Typhoon Hagibis using multi-temporal Sentinel-1 SAR intensity images. The study area was Ibaraki Prefecture, Japan, including two flooded state-managed rivers, the Naka and the Kuji. First, the completely flooded areas were detected by two traditional methods: change detection and thresholding. By comparing the results in a part of the affected area with our field survey, change detection was adopted owing to its higher recall accuracy. Then, a new index combining the average value and the standard deviation of the differences was proposed for extracting partially flooded built-up areas. Finally, inundation maps were created by merging the completely and partially flooded areas. The final inundation map was evaluated by comparison with the flooding boundary produced by the Geospatial Information Authority of Japan (GSI) and the Ministry of Land, Infrastructure, Transport and Tourism (MLIT). As a result, 74% of the inundated areas were identified successfully using the proposed quick procedure.

1. Introduction

In recent years, meteorological disasters have occurred frequently all over the world. According to figures from the Center for Research on the Epidemiology of Disasters (CRED), there were 194 flood events in 2019, accounting for 49% of the natural disasters in that period [1]. This number was 35 higher than the annual average between 2009 and 2018. Floods were also the deadliest type of disaster in 2019, accounting for 44% of deaths. Storms were the second most frequent events, affecting 35% of the people involved, followed by floods with 33% [1]. Japan suffered from two damaging events in 2019: Typhoon Faxai in September and Typhoon Hagibis in October. The economic losses of these two events reached 26 billion dollars [1]. Typhoon Hagibis also caused the largest economic losses among global disasters in 2019.
Typhoon Hagibis developed from a tropical disturbance on October 1, 2019. Rapid intensification started on October 5, and the storm became a typhoon early on October 6. Hagibis made landfall in Shizuoka Prefecture, Japan, at 19:00 on October 12 (JST) and made a second landfall in the Greater Tokyo area one hour later. Before landfall, the central pressure of the typhoon reached 955 hPa, with 10 min sustained winds of 40 m/s. The typhoon passed from the Kanto region to the Tohoku region and became an extratropical low-pressure system on October 13. The path of Typhoon Hagibis around Japan is shown in Figure 1a. As the typhoon passed through the country, it brought heavy rainfall to half of Japan. According to the 613 stations of the Automated Meteorological Data Acquisition System (AMeDAS) in Northern and Eastern Japan, the cumulative rainfall until October 12 was over 73,075 mm [2]. Due to this heavy rainfall in a short period, many rivers overflowed, and wide areas of agricultural and urban land were flooded. Fourteen banks of seven state-managed rivers broke, and 128 banks of 67 prefecture-managed rivers collapsed. Due to these floods and the accompanying landslides, 97 people were killed, and three people went missing [3]. Moreover, a total of 3308 buildings were heavily damaged, and more than 30 thousand buildings were flooded.
The International Charter Space and Major Disasters (Charter) is a worldwide collaboration scheme to share satellite data with countries affected by natural or man-made disasters. The Charter was activated for this event on October 11, 2019, in response to a request from the Japanese Government [4]. Many optical and synthetic-aperture radar (SAR) satellite images were provided through the website of the Charter. Various agencies and experts then generated damage maps using these satellite images. The rapid response products provided useful information for lifesaving and subsequent reconstruction.
Remote sensing is an effective tool to analyze damage after natural disasters [5,6,7,8,9,10,11]. Klemas [12] and Lin et al. [13] summarized recent studies on flood assessments using optical and SAR sensors. Koshimura et al. [14] published a review on the application of remote sensing to tsunami disasters. Optical satellites are often used to collect post-flood information but are limited by weather conditions, making it difficult to monitor flood situations continuously. The most common techniques used for identifying water bodies from optical images can be categorized into two basic types: (1) single or multispectral band thresholding [15,16,17,18] and (2) classification methods [19,20,21,22].
SAR sensors, however, can penetrate cloud cover. Thus, SAR imagery has been widely used for estimating and monitoring various floods. Because water bodies produce specular reflection and show low backscattering intensity in SAR images, water surfaces can be identified easily in such imagery. The thresholding method is commonly used to extract water from single or multi-temporal SAR images [23,24,25,26,27,28], and the threshold values for water or floods are defined by supervised or unsupervised classification. In [8,24,25], existing water and/or land areas were adopted as training data to determine the threshold values. In [26,27,28], the threshold values were defined automatically from sub-tiled images. However, the results from a single SAR image are complex and affected by various factors, such as radar shadows and land cover. The authors used the thresholding method to extract inundation and its recession in Joso City, Ibaraki Prefecture, Japan, after the torrential rain in Kanto and Tohoku in 2015 [25]. Because floods generally occur in flat areas with low elevation, topographic information is useful for flood detection. Even without introducing elevation data, more than 60% of the inundation could be detected in our previous study. By introducing high-resolution terrain data, Martinis' group [26,27] successfully extracted more than 80% of the inundation automatically from TerraSAR-X intensity images. Change detection is another common method used to highlight surface changes due to floods. There are two types of change detection methods: amplitude (intensity) change detection and coherence change detection [29]. Coherence change detection methods, which use both the amplitude and the phase information, are more sensitive for identifying flooded urban areas and produce less misclassification in agricultural fields [29,30,31,32]. However, their applicability is limited by their strict requirements on temporal and spatial baselines [33].
In addition, most emergency SAR data are provided as intensity images through the Charter. Thus, coherence change detection is difficult to conduct in emergency phases. For another torrential rain event in July 2018 in Western Japan, the authors tried both the thresholding method and coherence change detection [32]. Overall, 54% of inundation was identified using the thresholding method, and 72% of the inundated area was extracted by the coherence method. Both studies used L-band ALOS-2 images. Notably, a 10 m land-cover map was introduced to assist the analysis [34]. The commission errors will increase, however, if these procedures are applied to countries without land-cover maps.
Besides traditional methods, machine learning and statistical models have been used to estimate the probability of flooding [35,36,37,38]. Different topographical, hydrological, and geological conditioning factors were applied to existing machine learning models; the best conditioning factors and the best-performing models were then adopted to generate flood susceptibility maps [35,36]. Moya et al. [37] introduced a supervised machine learning classifier that learned from a past event to identify flooded areas during Typhoon Hagibis. Using the pre- and co-event coherences, the overall accuracy of the prediction was 0.77, and the kappa coefficient was 0.54. The major drawback of machine learning models is the need to collect numerous ground truth data for training and testing. The damage conditions and affected areas vary widely from disaster to disaster. When a model trained on one event is applied to a new event with different conditions, e.g., different satellite images and different land cover, the accuracy of the prediction is likely to decrease.
In this study, we propose a simple procedure to extract the inundation due to the 2019 Typhoon Hagibis from pre- and post-event Sentinel-1 intensity images. The proposed procedure can generate a quick inundation map using only one pre- and post-event intensity pair via several additions and subtractions. Because it requires a minimum number of satellite images and little processing time, it can be applied to various flood events for emergency response. First, the target area and the Sentinel-1 images are introduced in Section 2. The restoration conditions of three inundated areas visited by the present authors are described in Section 3. Then, both the thresholding method (mono-temporal determination) and the change detection method (multi-temporal comparison) are applied to identify the completely inundated areas (Section 4). The backscatter models of partly inundated buildings are investigated in Section 5. According to these models, a new index is then proposed for detecting inundated built-up areas. Finally, the proposed procedure is applied to the whole target area. In Section 6, the obtained results are verified by comparing them with field survey data, a maximum boundary of inundation [39], and a visual interpretation result from aerial photos [40]. Section 7 provides a summary of the study.

2. Study Area and Sentinel-1 Imagery

2.1. Study Area in Ibaraki Prefecture, Japan

The study area is a part of Ibaraki Prefecture, Japan, and is located within the red frame shown in Figure 1. There are two state-managed rivers, the Naka and the Kuji, in this area. Typhoon Hagibis passed over the study area between 21:00 and 24:00 on 12 October 2019. According to the Hitachi–Omiya observation station of the AMeDAS, located upstream of the two rivers, the 24 h cumulative rainfall on October 12 was 216.0 mm [41]. The records of six flow observation stations from October 12 to 14 are shown in Figure 2 [42]. The water levels at all stations began to rise at 13:00 on October 12. The upstream station K1 along the Kuji River recorded a 5.7 m maximum water level at 4:00 on October 13, which was 2.2 m higher than the critical water level. The peaks of the water levels at stations K2 and K3 were missing. The recorded maximum values were 8.3 m at the midstream station K2 and 3.6 m at the downstream station K3, both at 5:00 on October 13. Some data from the stations along the Naka River were also missing. The upstream station N1 recorded a 6.5 m maximum water level at 6:00 on October 13. The midstream station N2 recorded its maximum water level at 8:00, whereas the downstream station N3 recorded its maximum at 11:00 on October 13, later than the stations on the Kuji River. The maximum values were 11.4 m at station N2 and 2.7 m at station N3. The heights of the banks at these stations are also shown in Figure 2. The peak water levels at stations N1, N2, N3, and K2 were higher than the heights of their banks, which caused overflows at these locations. Besides the overflows, three bank collapses occurred in the mainstreams of the two rivers, and six bank collapses occurred in their tributaries. Our study area includes four of these bank collapse locations.

2.2. Sentinel-1 Images and Pre-Processing

Two pairs of pre- and co-event Sentinel-1 (S1) intensity images were used to assess the inundation. One pair was acquired on the descending path and the other on the ascending path. The two pre-event images were taken on the same day (October 7, 2019), and the two post-event images were also taken on the same day (October 13, 2019). The descending pair was acquired at 5:42, and the ascending pair was taken at 17:34. The coverages of the four temporal images are shown in Figure 1a. All images were downloaded from the Copernicus Open Access Hub as Level-1 Ground Range Detected (GRD) products [43]. The images were taken in the Interferometric Wide Swath (IW) high-resolution mode, including VV and VH polarizations. The acquisition conditions of the S1 images used are shown in Table 1. In this study, only the VV polarization data, with the strongest backscattering intensity, were used for the inundation assessment. The pre-processing was conducted with the ENVI SARscape software. First, the geo-referenced GRD images were projected onto the WGS 84 reference ellipsoid by employing 30 m elevation data from the Shuttle Radar Topography Mission (SRTM). The pixel spacing was 10 m. Radiometric calibration was carried out to convert the amplitude data to backscattering coefficient (sigma naught) values. Then, the post-event images were co-registered to the pre-event images. Finally, an enhanced Lee filter with a 3 × 3 pixel window was applied to reduce the speckle noise while retaining details [44].
The color composites of the pre- and post-event S1 backscattering coefficient images after pre-processing are shown in Figure 3. Wide red regions, which indicate a decrease in backscatter, can be confirmed around the two target rivers. These regions were possibly inundated. In the descending pair at 5:42, a decrease in backscatter was observed along the whole Kuji River, from the upstream to the midstream of the Naka River, and at Hinuma River, a branch of the Naka River. On the other hand, an increase in backscatter was confirmed at Hinuma Lake. According to the neighboring AMeDAS station, the rainfall stopped before 3:00. The increase in backscatter might be due to waves on the lake caused by the typhoon wind. In the ascending pair at 17:34, the region of decreased backscatter around the Kuji and Hinuma Rivers became smaller but expanded in the midstream and downstream areas of the Naka River. The time lag of inundation between the Kuji and Naka Rivers matched the water level records shown in Figure 2.
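The red/cyan interpretation of such composites follows from a simple band assignment. The sketch below, in Python with NumPy, is an assumed implementation (the paper does not describe its display settings): assigning the pre-event image to red and the post-event image to green and blue makes a backscatter decrease appear red and an increase appear cyan.

```python
import numpy as np

def change_composite(pre_db, post_db, vmin=-25.0, vmax=0.0):
    """Build an RGB change composite from two backscatter images (dB).
    R = pre-event, G = B = post-event, so a decrease in backscatter
    shows up red and an increase cyan. The display stretch (vmin/vmax)
    is an assumed choice, not taken from the paper."""
    def scale(x):
        return np.clip((x - vmin) / (vmax - vmin), 0.0, 1.0)
    return np.dstack([scale(pre_db), scale(post_db), scale(post_db)])
```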

3. The Field Survey

A field survey was conducted by the authors on October 28, 2019, two weeks after the typhoon passed. The route of the survey is shown in Figure 1b. We visited four severely flooded locations: (a), (b), and (c), shown in Figure 1b, and another location near station N1. Bank failures occurred at the three locations other than location (a). An enlarged aerial photo of location (a), taken by the Geospatial Information Authority of Japan (GSI) on October 17, 2019, is shown in Figure 4a [39]. This area is at the intersection of the Naka River and its branch, the Tano River, 600 m away from the right bank of the Naka River. The bank of the Tano River collapsed near this location [45], and overflows from both the Naka and Tano Rivers caused wide inundation in this area. The GSI captured the aerial photo on October 17, four days after the typhoon passed. At that time, the flood water had already receded, but the mud on the ground indicated that this area had been flooded. When we visited this area on October 28, the debris brought by the flood still remained. Two ground photos of buildings (i) and (ii) are shown in Figure 4b. Building (i) is a two-story dental clinic. According to the debris on the roof, the maximum flood depth was estimated at about 3.9 m. Building (ii) is a do-it-yourself (DIY) store, which was closed for restoration work when we visited. The watermark on its exterior wall was around 2.1 m high. The elevation of the dental clinic is 5.5 m, whereas that of the DIY store is 7.1 m. Thus, the maximum water level at location (a) was estimated to be higher than approximately 9 m.
An enlarged aerial photo of location (b) is shown in Figure 5a. Location (b) is surrounded by the Naka River and its branch, the Fujii River. One of the two bank collapses along the Fujii River occurred here. In the aerial photo taken on October 17, the restoration work was still ongoing, and a temporary bank had been built by the time of our field survey. We took a photo from a UAV (DJI Phantom 4 Pro), as shown in Figure 5b. A ground photo of wooden house (iii) is also shown in Figure 5b. This building is located 160 m away from the collapsed bank, and its ground floor was severely damaged by the flood water from the Fujii River. According to the watermark, the flood depth was higher than 2 m in this area.
An enlarged aerial photo of location (c) is shown in Figure 6a. Location (c) is in the upstream area of the Kuji River, near flow observation station K1. Both the left and right banks broke at this location. The length of the broken right bank was 40 m, and its restoration had been finished when we visited. The length of the broken left bank was 390 m, and its restoration was finished on November 5. Two photos taken by the UAV are shown in Figure 6b. A watermark about 1.5 m high was observed on the fence of building (v).

4. Extraction of Completely Inundated Areas

A flowchart of all processing steps is shown in Figure 7. The completely inundated areas are easily identified from SAR images by the significant decrease in their backscattering intensity. Because it is important to grasp a damage situation quickly and accurately, two simple methods were applied in this study: multi-temporal comparison and mono-temporal determination.

4.1. Multi-Temporal Comparison

The water surface shows the lowest backscattering due to specular reflection. Thus, the backscattering intensity of other land covers decreases if they are flooded completely. As the color composites in Figure 3 indicate, a decrease in backscattering intensity was confirmed around the rivers after the typhoon passed. The differences between the pre- and post-event SAR intensity images (multi-temporal comparison) were calculated and are shown in Figure 8a,b. Because the unflooded regions were much larger than the inundated ones, the histograms of the differences for the two pairs were close to Gaussian distributions. Regions whose differences deviated significantly from the mean were considered to be inundated. Thus, a combination of the average value and the standard deviation (STD) was used to extract the inundated areas. The average value of the descending pair was 0.15 dB, with an STD of 2.50 dB. The average value of the ascending pair was −0.19 dB, with an STD of 2.34 dB. The threshold values for the differences were set by subtracting n1 times the STD from the average value. We tried several n1 values, such as 0.5, 1.0, 1.5, and 2.0. Comparing the results at locations (a) and (b), n1 was ultimately set as 1.0. The threshold values are shown in Table 2. Then, the areas with differences lower than the threshold value were extracted as completely inundated. To reduce noise, extracted regions smaller than 100 pixels (0.01 km2) were removed from the results. Slope information was introduced to reduce commission (false positive) errors. The slope was calculated using the 30 m SRTM data, which were also used in the pre-processing step. Extracted regions located on slopes larger than 10 degrees were removed. From the descending path, 52.3 km2 of regions were extracted as inundation. From the ascending path, the inundated regions totaled 40.2 km2.
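The steps above (difference thresholding at the mean minus n1 times the STD, removal of regions smaller than 100 pixels, and masking of slopes over 10 degrees) can be sketched in Python as follows; the function and its inputs are illustrative, not the authors' actual code:

```python
import numpy as np
from scipy import ndimage

def multi_temporal_inundation(pre_db, post_db, slope_deg,
                              n1=1.0, min_pixels=100, max_slope=10.0):
    """Flag pixels whose backscatter difference falls more than
    n1 standard deviations below the mean difference, then remove
    small regions (speckle noise) and detections on steep slopes."""
    diff = post_db - pre_db
    threshold = diff.mean() - n1 * diff.std()   # e.g. 0.15 - 1.0 * 2.50 dB
    flooded = diff < threshold

    # Keep only connected regions of at least min_pixels pixels
    labels, n = ndimage.label(flooded)
    sizes = ndimage.sum(flooded, labels, index=np.arange(1, n + 1))
    keep = np.isin(labels, np.flatnonzero(sizes >= min_pixels) + 1)

    # Drop detections on slopes steeper than max_slope degrees
    return keep & (slope_deg <= max_slope)
```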
The inundated regions totaled 72.9 km2 after merging the two temporal results. The obtained results are shown in Figure 8c, where the inundation at 5:42 is presented in blue, the inundation at 17:34 is shown in green, and areas inundated at both times are shown in cyan. From these results, we can see that the inundated regions moved from upstream to downstream as time passed. The flooded regions in the midstream persisted for a longer period, as they were detected in both pairs.

4.2. Mono-Temporal Determination

The thresholding of low backscattering intensity (mono-temporal determination) was also applied to extract the completely inundated areas. We previously used this method to extract inundation caused by the 2015 Kanto and Tohoku torrential rain event in Japan and the July 2018 Western Japan torrential rain event [25]. These studies showed positive results, in which the threshold value for water regions could be estimated from the average value and the STD of existing water regions. In this study, we used Hinuma Lake as the existing water region to define the threshold value, and the shape data of Hinuma Lake released by the GSI [46] were downloaded for this purpose. The threshold value for the ascending images was obtained as the average value plus twice the STD within Hinuma Lake. The threshold value for the descending images was initially defined in the same way from the October 7 image. However, this method did not work well for the descending image: as shown in Figure 3a, the water surface of Hinuma Lake was not calm at 5:42 on October 13, so its backscattering intensity did not represent normal water regions. Thus, several pixels from the calm part of Hinuma Lake and several pixels from the existing water regions of the Naka and Kuji Rivers were selected, and the threshold value was derived from the same combination of the average value and the STD. The threshold value of the water region used in each S1 intensity image is shown in Table 2. The accuracy of the threshold values for the two pre-event images was verified within the dashed frame in Figure 2. The overall accuracies for both the descending and ascending pre-event images were higher than 96%, and the kappa coefficients were higher than 0.92, indicating very good agreement.
Then, the water regions were extracted from the four temporal intensity images using these threshold values. As in the extraction using the differences, extracted regions smaller than 100 pixels (0.01 km2) or located on slopes larger than 10 degrees were removed from the results. Color composites of the extracted water regions from the pre- and post-event images are shown in Figure 9a,b. The existing water regions, extracted from both the pre- and post-event images, are shown in white. The inundation, representing the increased water regions, is shown in red. Because strong backscattering intensity was observed in Hinuma Lake at 5:42 on October 13, Hinuma Lake appears in cyan in Figure 9a. The extracted inundation area at 5:42 on October 13 was 32.7 km2, and that at 17:34 was 23.0 km2. These results are shown in Figure 9c. The inundated areas extracted by the mono-temporal determination were about 20 km2 smaller than those extracted by the multi-temporal comparison.
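A minimal sketch of the mono-temporal determination in Python, assuming a boolean raster mask of an existing water body (such as one rasterized from the GSI shape data of Hinuma Lake) is available; the names and parameters are illustrative:

```python
import numpy as np

def mono_temporal_water(image_db, water_mask, k=2.0):
    """Derive a water threshold from an existing water body as
    mean + k * STD of its backscatter (dB), then classify every
    pixel below that threshold as water."""
    ref = image_db[water_mask]
    threshold = ref.mean() + k * ref.std()
    return image_db < threshold, threshold
```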

4.3. Comparison and Verification

A comparison of the inundation extracted by the two methods is shown in Figure 10, which provides a close-up of the black frame in Figure 8c and Figure 9c. This area includes the inundated locations (a) and (b), which we visited during the field survey. It also includes two bank collapses on the Fujii River and one on the Tano River, as shown by the red crosses. The extracted results around the bank failure locations were similar. However, the inundation surrounding buildings (i) and (ii) was extracted only by the multi-temporal comparison. To verify these results, a maximum inundation boundary was introduced. This boundary was created by the GSI [39] and was estimated from several inundation locations and their elevation data. The inundation limits were obtained from aerial photos and from information posted on social networking services (SNS), and are shown as red lines in Figure 10.
The confusion matrices for the close-up area are shown in Table 3, using the GSI boundary as the ground truth for inundation and the pixels outside the boundary as the ground truth for the other class. In the results obtained from the multi-temporal comparison, 61% of the inundated areas could be accurately extracted as inundation (recall), and 54% of the extracted areas corresponded to real inundation (precision). The overall accuracy was 85%, and the kappa coefficient was 0.48. In the results obtained from the mono-temporal determination of the S1 image pairs, the recall of the inundation was 40%, and the precision was 60%. The mono-temporal determination had more omission (false negative) errors but fewer commission (false positive) errors. Its overall accuracy was 86%, and its kappa coefficient was 0.40. These results show a moderate level of agreement with the GSI's flood boundary.
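The scores quoted above follow directly from a binary confusion matrix. A compact helper (not the authors' code) that computes recall, precision, overall accuracy, and the kappa coefficient from boolean truth and prediction maps:

```python
import numpy as np

def accuracy_metrics(truth, pred):
    """Recall, precision, overall accuracy, and Cohen's kappa for a
    binary inundation map evaluated against a ground-truth boundary."""
    tp = np.sum(truth & pred)
    fp = np.sum(~truth & pred)
    fn = np.sum(truth & ~pred)
    tn = np.sum(~truth & ~pred)
    n = tp + fp + fn + tn
    oa = (tp + tn) / n
    # Expected chance agreement for Cohen's kappa
    pe = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n**2
    return {"recall": tp / (tp + fn),
            "precision": tp / (tp + fp),
            "overall": oa,
            "kappa": (oa - pe) / (1 - pe)}
```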
Most of the commission errors were caused by the inundation of high-water channels, which are dry and used as playgrounds or crop fields during normal periods. Because the water surfaces within the rivers are narrow in normal periods, their expansion during the flood was extracted by our methods. However, these areas were not included in the flood boundary of the GSI, and their extraction lowered the precision in the confusion matrices. The omission errors were mainly caused by buildings. The building footprints, downloaded from the base information of the GSI [46], are also shown in Figure 10. Compared with the maximum inundation boundary, the undetected areas surrounded by the extracted water pixels were mainly built-up regions. Because our objective was to extract inundation, we selected the results obtained by the multi-temporal comparison for further analysis. Both the multi-temporal comparison and the mono-temporal determination detected the completely inundated areas, which changed to water surfaces, or mostly inundated ones, such as buildings (i) and (ii). Thus, partly inundated buildings could not be identified by these methods.

5. Extraction of the Partly Inundated Built-Up Areas

As shown in Figure 10, the buildings within the inundation could not be identified by either the multi-temporal comparison or the mono-temporal determination because their backscattering intensities did not decrease significantly. We previously used the increase in backscatter to extract inundated buildings after the 2015 Kanto and Tohoku torrential rain event in Japan [13]. The decrease in coherence is another effective way to identify inundated buildings and has been used in several studies [25,29,30,31,32,37]. However, coherence change detection needs three or more temporal SAR images in complex format acquired under the same conditions. In addition, SAR data are mostly provided as intensity images for emergency analyses [4]. Thus, a method to identify inundated buildings using intensity images remains useful. In this study, we investigate a backscatter change model for inundated buildings. Then, an index is proposed to extract the partly inundated buildings.

5.1. Backscatter Model of Partly Inundated Buildings

Inundated buildings (i) and (ii) at location (a) of the field survey were used for the modeling. A close-up of the color composite in the ascending path at location (a) is shown in Figure 11a. At 17:34 on October 13, this area was completely inundated. The backscatter models before and after the flood for building (i), which was inundated up to the roof level, are shown at the top of Figure 11b. According to this model, a decrease in backscattering occurred in the layover, whereas an increase occurred within the footprint. Because the walls sank completely into the water, the double-bounce and surface reflections from the walls disappeared. Although the water surface showed low backscattering, it was still stronger than the radar shadow due to the high sensitivity of the C-band. The speckle profile along the red line crossing building (i) in the range direction is shown at the bottom of Figure 11b. A decrease in backscattering was observed in the layover and far-range directions, whereas a slight increase was confirmed within the building footprint. The speckle profile matched our backscattering model.
The model for building (ii), which was partly inundated, is shown at the top of Figure 11c. Because parts of the walls were higher than the water surface, double-bounce still occurred but shifted toward the near-range. The shifted double-bounce and the shortened radar shadow produced an increase in backscattering, whereas the other areas showed a decrease. The speckle profile along the red line crossing building (ii) in the range direction is shown at the bottom of Figure 11c. A decrease in backscattering was confirmed in the layover. However, the shift of the double-bounce could not be observed. According to the watermark, the inundation depth (h) for this building was about 2 m. The incident angle (θ) for the ascending images was 39.0 degrees. Thus, the movement of the double-bounce should be 2.5 m (h/tan θ). Because the pixel spacing of the S1 images was 10 m, the backscattering intensity of the double-bounce was averaged out by the mixed-pixel effect. Since the backscattering intensity within the footprint did not change much, the building could not be identified as inundated by the method described in the previous section.
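The geometry behind this estimate is a one-line calculation. With the surveyed depth of about 2 m and the 39.0-degree incidence angle, h/tan θ gives roughly 2.5 m, well below the 10 m pixel spacing, which is why the shift is lost to the mixed-pixel effect:

```python
import math

def double_bounce_shift(depth_m, incidence_deg):
    """Near-range shift of the double-bounce line for a partly
    submerged wall: h / tan(theta)."""
    return depth_m / math.tan(math.radians(incidence_deg))

shift = double_bounce_shift(2.0, 39.0)  # about 2.5 m for building (ii)
```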

5.2. Index for Inundated Buildings

According to the backscatter models of buildings (i) and (ii), we confirmed the changes in backscattering intensity after buildings were inundated. The layover and surroundings of a single inundated building showed a decrease in backscattering, whereas its footprint showed no change or an increase. This is why the buildings within the flood water could not be detected, as shown in Figure 10. We then investigated the characteristics of built-up areas containing many buildings. A close-up of Figure 8b within the white frame is shown in Figure 12a. Two built-up areas were selected as examples. One area was close to the bank failure of the Fujii River and surrounded by inundated agricultural fields; the other was located outside the flooded area. Aerial photos of the two areas, taken on October 17, 2019 by the GSI, are also shown in Figure 12a. Histograms of the backscatter differences for the two sample areas are shown in Figure 12b. The peak of the differences for the unflooded built-up area was around 0, and its variation was small. Conversely, the variation in the flooded built-up area was large on both the positive and negative sides.
First, the increase in backscattering intensity was used to identify inundated buildings, which had produced good results in a previous study [15]. Areas with an increase larger than the average value plus n2 times the STD were extracted as inundated buildings, where n2 was set to 0.5, 1.0, 1.5, and 2.0. When n2 was small, the extracted pixels included considerable noise; when n2 was large, some inundated buildings could not be extracted. In all cases, the results included many commission errors. These results were not as good as those in the previous study, which may be due to the short wavelength of the C-band SAR.
Thus, an index (I) was proposed to identify inundated built-up areas in the C-band. The index is calculated by the following equation:
I = μ_dw + σ_dw
where μ_dw and σ_dw are the average value and the standard deviation of the differences calculated within a moving window.
The window size was tested at 7 × 7, 15 × 15, and 21 × 21 pixels. Comparing these results, the 15 × 15 pixel window was adopted for the extraction of inundated built-up areas: with this window size, the differences in I between flooded and unflooded built-up areas were significant, and neighboring flooded buildings were merged into one built-up area. The index obtained for the same area as in Figure 12a is shown in Figure 13a. The flooded built-up area shows a high value, whereas the unflooded built-up area shows a low value. Because the index was designed to capture the increased part of the backscattering intensity within flooded buildings, the threshold for μ_dw was expected to be larger than the average value (μ_d) plus one standard deviation (σ_d) of the differences over the whole target area; this threshold value was used in a previous study [15]. According to the histograms in Figure 12b, the standard deviation of the flooded built-up area was twice that of the unflooded built-up area, so the threshold for σ_dw was set to be larger than 2σ_d. Thus, the index for a flooded built-up area should be larger than μ_d + 3σ_d, which corresponds to 7.65 dB for the descending pair and 6.83 dB for the ascending pair.
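The index can be computed for every pixel with two box filters; a minimal sketch using `scipy.ndimage.uniform_filter` (one possible implementation, not necessarily the authors' code):

```python
import numpy as np
from scipy.ndimage import uniform_filter

def inundation_index(diff: np.ndarray, window: int = 15) -> np.ndarray:
    """I = mu_dw + sigma_dw: local mean plus local standard deviation
    of the pre/post backscatter difference in a moving window."""
    mean = uniform_filter(diff, size=window, mode="reflect")
    mean_sq = uniform_filter(diff * diff, size=window, mode="reflect")
    var = np.clip(mean_sq - mean * mean, 0.0, None)  # clip tiny negatives
    return mean + np.sqrt(var)
```

Pixels whose index exceeds the whole-scene threshold (7.65 dB for the descending pair, 6.83 dB for the ascending pair) then become candidates for flooded built-up areas.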
However, the boundary of the inundation also showed high index values. Thus, we used building footprints to delimit the built-up areas. Considering the 10 m pixel spacing of the S1 images, a 10 m buffer was created around each building so that most neighboring buildings merged into one area. Pixels whose index exceeded the threshold value within a built-up area were identified as flooded built-up area. The inundation extracted in the previous section was then updated by adding the flooded built-up areas. The improved inundation at 17:34 is shown in Figure 13b, where the green pixels represent inundated areas with decreased backscattering and the light green pixels represent flooded built-up areas. The buildings within the maximum flood boundary of the GSI were detected successfully. The improved results at 5:42 are shown in Figure 13c, and the merged maximum inundation is shown in Figure 13d.
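The study buffers the GSI vector footprints by 10 m; on a 10 m raster grid, the same merging behavior can be sketched with a one-pixel morphological dilation (the toy footprints below are illustrative only):

```python
import numpy as np
from scipy.ndimage import binary_dilation, label

# Hypothetical raster of building footprints (10 m pixels): True = building.
footprints = np.zeros((20, 40), dtype=bool)
footprints[8:12, 2:5] = True    # building A
footprints[8:12, 6:9] = True    # building B, one pixel gap from A
footprints[8:12, 25:28] = True  # building C, far away

# A one-pixel (10 m) buffer merges neighbouring buildings into
# contiguous built-up areas.
built_up = binary_dilation(footprints, iterations=1)
areas, n_areas = label(built_up)
print(n_areas)  # A and B merge into one area; C stays separate -> 2
```

The labeled built-up areas then act as the mask within which high index values are accepted as flooded built-up pixels.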

5.3. Verification

The confusion matrix of the improved total inundation shown in Figure 13d is summarized in Table 4. Compared with Table 3, both the recall and the precision of inundation increased. Although the extraction of flooded built-up areas introduced new commission errors, the gain in detected inundation outweighed them, and the precision increased by 1%. The recall increased significantly, from 61% to 70%, which indicates the notable effect of the proposed inundation index for flooded buildings. The overall accuracy increased slightly, but the kappa coefficient improved significantly, from 0.48 to 0.53. The low precision was mainly caused by the inundation extracted in the high-water channel (between the banks), which was not counted in the maximum flood boundary of the GSI. If it is removed from our results, the precision improves to 68%, and the overall accuracy and the kappa coefficient increase to 90% and 0.65, respectively. According to this kappa coefficient, our results show good agreement with the flood boundary of the GSI. A comparison of the recalls and precisions for the two classes in the different extraction steps is shown in Figure 14. Case 4, the complete inundation extracted via the multi-temporal comparison plus the partly inundated built-up areas after masking the high-water channel, shows the best accuracy.
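The accuracy measures used throughout the verification follow the standard two-class definitions; a short helper, checked here against the multi-temporal figures in Table 3(a):

```python
def metrics(tp, fp, fn, tn):
    """Precision, recall, overall accuracy, and kappa from a
    two-class confusion matrix (areas in km2 behave like counts)."""
    total = tp + fp + fn + tn
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    oa = (tp + tn) / total
    # chance agreement from the row/column marginals
    pe = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / total ** 2
    kappa = (oa - pe) / (1 - pe)
    return precision, recall, oa, kappa

# Table 3(a): multi-temporal comparison vs. the GSI boundary [km2]
p, r, oa, k = metrics(tp=5.76, fp=4.98, fn=3.73, tn=44.83)
print(f"{p:.1%} {r:.1%} {oa:.1%} {k:.2f}")  # 53.6% 60.7% 85.3% 0.48
```

The computed kappa of 0.48 matches the pre-improvement value quoted in the text.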

6. Final Inundation Maps and Discussion

The final inundation maps of the study area at 5:42 and 17:34 on October 13, 2019, were obtained by combining the results in Section 4 and Section 5; they are shown in Figure 15a,b, respectively. A total of 4.95 km2 of built-up area was extracted as inundated at 5:42, whereas 5.15 km2 was extracted at 17:34. Because more people live in the downstream area of the Naka and Kuji rivers, the number of affected buildings increased as time elapsed. The total inundation, obtained by merging the two inundation maps, is shown in Figure 15c. The total flooded built-up area reached 8.18 km2, and the total inundated area was 80.88 km2 in the study area.
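Merging the two temporal maps into the total inundation is a pixelwise union of the co-registered binary masks; a toy sketch (the arrays are hypothetical stand-ins for the full rasters):

```python
import numpy as np

# Hypothetical co-registered binary inundation maps from the
# descending and ascending passes (True = inundated pixel).
flood_desc = np.array([[1, 0], [0, 0]], dtype=bool)
flood_asc = np.array([[1, 1], [0, 0]], dtype=bool)

total = flood_desc | flood_asc       # union = maximum inundation
pixel_area_km2 = (10 * 10) / 1e6     # 10 m x 10 m pixels
print(total.sum() * pixel_area_km2)  # inundated area in km2
```

Because the union keeps every pixel flagged in either pass, the merged area can exceed either single-date map, as seen in the totals above.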

6.1. Verification Using the GSI’s Boundary

The confusion matrix of the total inundation was calculated via comparison with the maximum flood boundary of the GSI, as shown in Table 5. The verification was conducted in the light-red colored regions, representing the coverage of the GSI data. The recall of the inundation extraction was 73%, higher than that of the close-up region in Section 5; this high recall means that most of the inundated area could be identified by the proposed method. However, several regions enclosed by the flood boundary of the GSI could not be extracted in our results; one such region is enlarged in Figure 16. The precision here was 37%, which indicates many commission errors, but it improved to 45% after masking the inundation in the high-water channel. Before the masking, the overall accuracy was 83% and the kappa coefficient was 0.40; after the masking, they increased to 86% and 0.48, respectively.
Because the GSI’s boundary was developed from the locations of the reported inundation limits and their elevations, it does not represent the real inundated areas exactly. If no inundation was reported at a given site during the flood, that area would be missed in the GSI data. Several agricultural fields were identified as inundated in our study but were not included in the GSI data.

6.2. Verification Using the MLIT’s Boundary

Then, another inundation map provided by the Ministry of Land, Infrastructure, Transport, and Tourism (MLIT) of Japan was introduced to verify our results. This map was created from observations made during a helicopter flight at around 11:00 on October 13, 2019. As shown by the water levels in Figure 2, 11:00 is considered the time of maximum inundation. The inundation outline is shown by purple lines in Figure 15c. Compared with the GSI data, our results show better agreement with the outline from the MLIT.
The confusion matrix using the MLIT’s outline as the truth is shown in Table 5. The verification region is shown in light purple, representing the coverage of the MLIT’s data. The recall is 74%, slightly higher than that obtained using the GSI’s data as the truth. The precision is 48.5%, an increase of more than 10 percentage points. Moreover, the overall accuracy is 85% and the kappa coefficient is 0.50, both higher than the values obtained using the GSI’s data. The expanded water surface in the high-water channel was again counted as inundation; when these pixels are masked, the precision increases to 57%, the overall accuracy to 88%, and the kappa coefficient to 0.57. These numbers indicate that the proposed method possesses good potential for inundation assessment.
However, 25% of the pixels could not be identified even after the extraction of completely inundated areas and flooded built-up areas. Figure 16a provides a close-up of the black frame in Figure 15c, including the flooding outlines from both the GSI and the MLIT. Compared with these outlines, most of the inundated areas were identified successfully; however, the area beside the right bank enclosed by the two outlines could not be extracted by our method. Figure 16b shows an optical Pleiades image taken at 10:06 on October 13, pansharpened to 0.5 m spacing. Because the acquisition time of the optical image was close to that of the helicopter flight, the inundation in the optical image matches the MLIT’s outline well, and it confirms that the areas missed by our method were indeed inundated at that time. Figure 16c shows the inundation depth estimated by the GSI. Comparing the three images in Figure 16, we can confirm that most of the inundation deeper than 1.0 m was extracted, whereas shallow puddles were difficult to identify from the S1 intensity images. Because the acquisition times of the optical image and the MLIT data fall between the post-event descending and ascending images, this region is estimated to have been submerged around 11:00 but out of the water by 17:34. Another problem in our results is the omission of flooded trees. Because most of the trees were higher than residential buildings, the backscatter model of flooded trees is similar to the building model; however, since our method focuses only on the increase of backscattering within built-up areas, the flooded trees could not be identified. As shown in Figure 16b, the flooded trees along the rivers were not included in our results. Thus, the accuracy of the proposed method may decrease when it is applied to flooded areas that include forests.

6.3. Discussion

The proposed method was also applied to the emergency analysis of the 2020 Kyushu floods in Japan. From July 3 to 8, 2020, heavy rainfall hit the Kyushu Region and caused flooding over a wide area. The International Charter announced an emergency analysis on July 6. We downloaded the Sentinel-1 image taken at 6:17 on July 5 (JST) along with the closest pre-event image and created an inundation map using the proposed method, applying the same rules for the threshold values as in this study. The result was presented on the website of the Charter. Because the post-event image was acquired after the peak of the floods, our results showed a smaller inundation area than the outline of the GSI; however, the inundated built-up area was extracted successfully. Because the proposed procedure involves only a few simple calculations, we were able to generate inundation maps within several minutes after the preprocessing step.
Compared with existing research on the same event [37], the recall and precision of the flooded area in this study were slightly lower. The average accuracies of our results after masking the high-water channels were 0.82 for the recall, 0.76 for the precision, and 0.79 for the F1-measure, which are at the same level as the existing research. However, our procedure needs only one pre-event and one post-event SAR intensity image, whereas the previous study used two pre-event and one post-event set of SAR complex data. Machine learning techniques can obtain higher accuracy, but the accuracy depends on the training data; our method can be applied to any location and is thus more versatile.
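For reference, the F1-measure quoted above is the harmonic mean of precision and recall; with the stated averages:

```python
def f1(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

print(round(f1(0.76, 0.82), 2))  # 0.79
```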

7. Conclusions

In this study, a simple procedure to create accurate inundation maps for emergency response was proposed. Descending and ascending pre- and post-event Sentinel-1 intensity images were used to estimate the inundated areas in Ibaraki Prefecture, Japan, where two state-managed rivers flooded due to the heavy rainfall of the 2019 Typhoon Hagibis. The completely inundated areas were extracted by both the multi-temporal comparison and the mono-temporal determination. Compared with the results of our field survey and the maximum flood boundary of the GSI, the multi-temporal comparison extracted 61% of the inundation and showed the better accuracy. Thus, the results based on the difference of the pre- and post-event backscattering intensities were adopted for the subsequent steps.
The backscatter models of two inundated buildings were also investigated. Based on the characteristics of these models, an index was proposed to identify inundated buildings, calculated within a moving window as the sum of the average value and the standard deviation of the difference. Building footprints were introduced to identify the built-up regions, and built-up regions with high index values were extracted as inundated built-up areas. After adding the extracted flooded built-up areas, the recall of the inundation increased to more than 70%. In this study, we used the building footprints provided by the GSI; because OpenStreetMap has recently begun to provide building footprints around the world, our proposed method can be applied globally.
Finally, the inundation maps at 5:42 and 17:34 on October 13, 2019, were generated. By comparing the two temporal inundation maps, the movement of the flooded areas from upstream to downstream could be observed. The total inundated areas were verified by comparison with the two flood outlines provided by the GSI and the MLIT. In the two comparisons, 73% and 74% of the inundated areas, respectively, were identified successfully by the proposed simple procedure. The comparison with the MLIT’s outline showed good agreement, with an 88% overall accuracy and a 0.57 kappa coefficient.
Our method has also been applied to other events for emergency analysis, with positive results. Generally, the member space agencies provide archived satellite images and plan an emergency observation soon after the International Charter is activated. The threshold values in the proposed procedure are all defined by statistical values (average and STD) that can be obtained automatically from the images. Thus, our procedure can generate an inundation map within one hour of downloading the images, which would be helpful for emergency response. However, our results still include many commission errors, and the proposed procedure still has difficulty identifying flooded forests. In the future, these problems could be addressed by improving the built-up masking for the index.

Author Contributions

Data curation, Y.M. and F.Y.; Formal analysis, W.L.; Funding acquisition, F.Y.; Investigation, W.L., K.F., Y.M. and F.Y.; Methodology, W.L.; Project administration, F.Y.; Software, W.L.; Validation, W.L., K.F. and Y.M.; Writing—original draft, W.L.; Writing—review & editing, Y.M. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by JSPS KAKENHI Grant Number 17H02066.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

No new data were created in this study. Data sharing is not applicable to this article.

Acknowledgments

The Sentinel-1 images are owned and provided by the European Space Agency (ESA). The Pleiades images are owned by the Centre National d’Études Spatiales (CNES) and distributed by Airbus Defence and Space.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Centre for Research on the Epidemiology of Disasters—CRED. Natural Disasters 2019: Now Is the Time to Not Give Up. 2020. Available online: https://cred.be/sites/default/files/adsr_2019.pdf (accessed on 20 December 2020).
  2. Japan Meteorological Agency. Breaking News of the Features and the Factors of the 2020 Typhoon Hagibis. 2019. Available online: https://www.jma.go.jp/jma/press/1910/24a/20191024_mechanism.pdf (accessed on 20 December 2020). (In Japanese)
  3. Cabinet Office, Government of Japan. Report of the Damage Situation Related to the Typhoon No. 19 (Hagibis) until 09:00 on 10 April 2020. Available online: http://www.bousai.go.jp/updates/r1typhoon19/pdf/r1typhoon19_45.pdf (accessed on 20 December 2020). (In Japanese)
  4. The International Charter Space and Major Disaster. Typhoon Hagibis in Japan. Available online: https://disasterscharter.org/web/guest/activations/-/article/storm-hurricane-urban-in-japan-activation-625- (accessed on 20 December 2020).
  5. Dell’Acqua, F.; Gamba, P. Remote sensing and earthquake damage assessment: Experiences, limits, and perspectives. Proc. IEEE 2012, 100, 2876–2890. [Google Scholar] [CrossRef]
  6. Nakmuenwai, P.; Yamazaki, F.; Liu, W. Multi-temporal correlation method for damage assessment of buildings from high-resolution SAR images of the 2013 Typhoon Haiyan. J. Disaster Res. 2016, 11, 557–592. [Google Scholar] [CrossRef]
  7. Wieland, M.; Liu, W.; Yamazaki, F. Learning change from Synthetic Aperture Radar images: Performance evaluation of a Support Vector Machine to detect earthquake and tsunami-induced changes. Remote Sens. 2016, 8, 792. [Google Scholar] [CrossRef] [Green Version]
  8. Nakmuenwai, P.; Yamazaki, F.; Liu, W. Automated extraction of inundated areas from multi-temporal dual-polarization RADARSAT-2 images of the 2011 central Thailand flood. Remote Sens. 2017, 9, 78. [Google Scholar] [CrossRef] [Green Version]
  9. Karimzadeh, S.; Matsuoka, M. Building damage assessment using multisensor dual-polarized synthetic aperture radar data for the 2016 M 6.2 Amatrice earthquake, Italy. Remote Sens. 2017, 9, 330. [Google Scholar] [CrossRef] [Green Version]
  10. Fan, Y.; Wen, Q.; Wang, W.; Wang, P.; Li, L.; Zhang, P. Quantifying disaster physical damage using remote sensing data—A technical work flow and case study of the 2014 Ludian earthquake in China. Int. J. Disaster Risk Sci. 2017, 8, 471–492. [Google Scholar] [CrossRef]
  11. Ferrentino, E.; Marino, A.; Nunziata, F.; Migliaccio, M. A dual-polarimetric approach to earthquake damage assessment. Int. J. Remote Sens. 2019, 40, 197–217. [Google Scholar] [CrossRef]
  12. Klemas, V. Remote sensing of floods and flood-prone areas: An overview. J. Coast. Res. 2015, 31, 1005–1013. [Google Scholar] [CrossRef]
  13. Lin, L.; Di, L.; Yu, E.G.; Kang, L.; Shrestha, R.; Rahman, M.S.; Tang, J.; Deng, M.; Sun, Z.; Zhang, C.; et al. A review of remote sensing in flood assessment. In Proceedings of the 2016 Fifth International Conference on Agro-Geoinformatics, Tianjin, China, 18–20 July 2016. [Google Scholar] [CrossRef]
  14. Koshimura, S.; Moya, L.; Mas, E.; Bai, Y. Tsunami damage detection with remote sensing: A review. Geosciences 2020, 10, 177. [Google Scholar] [CrossRef]
  15. McFeeters, S.K. The use of the normalized difference water index (NDWI) in the delineation of open water features. Int. J. Remote Sens. 1996, 17, 1425–1432. [Google Scholar] [CrossRef]
  16. Xu, H. Modification of normalised difference water index (NDWI) to enhance open water features in remotely sensed imagery. Int. J. Remote Sens. 2006, 27, 3025–3033. [Google Scholar] [CrossRef]
  17. Feyisa, G.L.; Meilby, H.; Fensholt, R.; Proud, S.R. Automated water extraction index: A new technique for surface water mapping using Landsat imagery. Remote Sens. Environ. 2014, 140, 23–35. [Google Scholar] [CrossRef]
  18. Xie, H.; Luo, X.; Xu, X.; Tong, X.; Jin, Y.; Pan, H.; Zhou, B. New hyperspectral difference water index for the extraction of urban water bodies by the use of airborne hyperspectral images. J. Appl. Remote Sens. 2014, 8, 085098. [Google Scholar] [CrossRef] [Green Version]
  19. Ko, B.C.; Kim, H.H.; Nam, J.Y. Classification of potential water bodies using Landsat 8 OLI and a combination of two boosted random forest classifiers. Sensors 2015, 15, 13763–13777. [Google Scholar] [CrossRef] [Green Version]
  20. Ogilvie, A.; Belaud, G.; Delenne, C.; Bailly, J.-S.; Bader, J.-C.; Oleksiak, A.; Ferry, L.; Martin, D. Decadal monitoring of the Niger Inner Delta flood dynamics using MODIS optical data. J. Hydrol. 2015, 523, 368–383. [Google Scholar] [CrossRef] [Green Version]
  21. Hakkenberg, C.; Dannenberg, M.; Song, C.; Ensor, K. Characterizing multi-decadal, annual land cover change dynamics in Houston, TX based on automated classification of Landsat imagery. Int. J. Remote Sens. 2018, 40, 693–718. [Google Scholar] [CrossRef]
  22. Nandi, I.; Srivastava, P.K.; Shah, K. Floodplain mapping through Support Vector Machine and optical/infrared images from Landsat 8 OLI/TIRS sensors: Case study from Varanasi. Water Resour. Manag. 2017, 31, 1157–1171. [Google Scholar] [CrossRef]
  23. Pulvirenti, L.; Pierdicca, N.; Chini, M.; Guerriero, L. An algorithm for operational flood mapping from Synthetic Aperture Radar (SAR) data using fuzzy logic. Nat. Hazards Earth Syst. Sci. 2011, 11, 529–540. [Google Scholar] [CrossRef] [Green Version]
  24. Westerhoff, R.S.; Kleuskens, M.P.; Winsemius, H.C.; Huizinga, H.J.; Brakenridge, G.R.; Bishop, C. Automated global water mapping based on wide-swath orbital synthetic-aperture radar. Hydrol. Earth Syst. Sci. 2013, 17, 651–663. [Google Scholar] [CrossRef] [Green Version]
  25. Liu, W.; Yamazaki, F. Detection of inundation areas due to the 2015 Kanto and Tohoku torrential rain in Japan based on multi-temporal ALOS-2 imagery. Nat. Hazards Earth Syst. Sci. 2018, 18, 1905–1918. [Google Scholar] [CrossRef] [Green Version]
  26. Martinis, S.; Twele, A.; Voigt, S. Towards operational near real-time flood detection using a split-based automatic thresholding procedure on high resolution TerraSAR-X data. Nat. Hazards Earth Syst. Sci. 2009, 9, 303–314. [Google Scholar] [CrossRef]
  27. Martinis, S.; Kersten, J.; Twele, A. A fully automated TerraSAR-X based flood service. ISPRS J. Photogramm. Remote Sens. 2015, 104, 203–212. [Google Scholar] [CrossRef]
  28. Giustarini, L.; Hostache, R.; Kavetski, D.; Chini, M.; Corato, G.; Schlaffer, S.; Matgen, P. Probabilistic Flood Mapping Using Synthetic Aperture Radar Data. IEEE Trans. Geosci. Remote Sens. 2016, 54, 6958–6969. [Google Scholar] [CrossRef]
  29. Nico, G.; Pappalepore, M.; Pasquariello, G.; Refice, A.; Samarelli, S. Comparison of SAR amplitude vs. coherence flood detection methods—A GIS application. Int. J. Remote Sens. 2010, 21, 1619–1631. [Google Scholar] [CrossRef]
  30. Chini, M.; Pulvirenti, L.; Pierdicca, N. Analysis and interpretation of the COSMO-SkyMed observation of the 2011 Tsunami. IEEE Trans. Geosci. Remote Sens. 2012, 9, 467–571. [Google Scholar] [CrossRef]
  31. Pulvirenti, L.; Chini, M.; Pierdicca, N.; Boni, G. Use of SAR data for detecting floodwater in urban and agricultural areas: The role of the interferometric coherence. IEEE Trans. Geosci. Remote Sens. 2016, 54, 1532–1544. [Google Scholar] [CrossRef]
  32. Liu, W.; Yamazaki, F.; Maruyama, Y. Extraction of inundation areas due to the July 2018 Western Japan torrential rain event using multi-temporal ALOS-2 images. J. Disaster Res. 2019, 14, 445–455. [Google Scholar] [CrossRef]
  33. Ohki, M.; Yamamoto, K.; Tadono, T.; Yoshimura, K. Automated Processing for Flood Area Detection Using ALOS-2 and Hydrodynamic Simulation Data. Remote Sens. 2020, 12, 2709. [Google Scholar] [CrossRef]
  34. Hashimoto, S.; Tadono, T.; Onosata, M.; Hori, M.; Shiomi, K. A new method to derive precise land-use and land-cover maps using multi-temporal optical data. J. Remote Sens. Soc. Jpn. 2014, 34, 102–112. (In Japanese) [Google Scholar] [CrossRef]
  35. Esfandiari, M.; Jabari, S.; McGrath, H.; Coleman, D. Flood mapping using Random Forest and identifying the essential conditioning factors; A case study in Fredericton, New Brunswick, Canada. ISPRS Ann. Photogramm. Remote Sens. Spat. Inf. Sci. 2020, 3, 609–615. [Google Scholar] [CrossRef]
  36. Esfandiari, M.; Abdi, G.; Jabari, S.; McGrath, H.; Coleman, D. Flood hazard risk mapping using a pseudo supervised Random Forest. Remote Sens. 2020, 12, 3206. [Google Scholar] [CrossRef]
  37. Moya, L.; Mas, E.; Koshimura, S. Learning from the 2018 Western Japan heavy rains to detect floods during the 2019 Hagibis Typhoon. Remote Sens. 2020, 12, 2244. [Google Scholar] [CrossRef]
  38. Elmahdy, S.; Ali, T.; Mohamed, M. Flash flood susceptibility modeling and magnitude index using machine learning and geohydrological models: A modified hybrid approach. Remote Sens. 2020, 12, 2695. [Google Scholar] [CrossRef]
  39. Geospatial Information Authority of Japan. Information about the 2019 Typhoon Hagibis. Available online: https://www.gsi.go.jp/BOUSAI/R1.taihuu19gou.html (accessed on 25 December 2020). (In Japanese)
  40. Geospatial Information Authority of Japan. Digital Map (Basic Geospatial Information). Available online: https://maps.gsi.go.jp/ (accessed on 25 December 2020). (In Japanese)
  41. Japan Meteorological Agency. Available online: https://www.data.jma.go.jp/obd/stats/etrn/index.php (accessed on 25 December 2020). (In Japanese).
  42. Water Information System, Ministry of Land, Infrastructure, Transport and Tourism. Available online: http://www1.river.go.jp/ (accessed on 25 December 2020).
  43. Copernicus Open Access Hub. Available online: https://scihub.copernicus.eu/dhus/#/home (accessed on 25 December 2020).
  44. Lopes, A.; Touzi, R.; Nezry, E. Adaptive Speckle Filters and Scene Heterogeneity. IEEE Trans. Geosci. Remote Sens. 1990, 28, 992–1000. [Google Scholar] [CrossRef]
  45. Ibaraki University. The First Report of the Typhoon Hagibis Investigation Team. Available online: https://www.ibaraki.ac.jp/hagibis2019/researchers/IU2019hagibisresearch1streport.pdf (accessed on 25 December 2020). (In Japanese).
  46. Geospatial Information Authority of Japan. Basic Map information. Available online: https://fgd.gsi.go.jp/download/menu.php (accessed on 25 December 2020). (In Japanese)
Figure 1. (a) The path of Typhoon Hagibis and the coverage of the four temporal Sentinel-1 images. The study area is enclosed by the red frame. (b) Close-up of the study area with the field survey route on October 28, 2019 (by the present authors). Stars (a), (b), and (c) are the locations we visited in the field survey.
Figure 2. One-hour records of the water levels from Oct. 6 to 8 and Oct. 12 to 14, 2019 observed in the upstream, midstream, and downstream stations of the Kuji and Naka Rivers, respectively.
Figure 3. Color composites of the pre- and post-event Sentinel-1 backscattering coefficient images after pre-processing.
Figure 4. (a) Aerial photo of location a in Figure 1b, taken by GSI on October 17, 2019. (b) Ground photos of two flooded buildings (i and ii) taken by the authors on October 28, 2019.
Figure 5. (a) Aerial photo of location (b) in Figure 1b, taken by GSI on October 17, 2019. (b) A ground photo of a flooded house (iii), and a UAV photo of the temporary bank (iv), which failed due to the flood.
Figure 6. (a) Aerial photo of location (c) in Figure 1b, taken by GSI on October 17, 2019. (b) A ground photo of a flooded house (v) and two UAV photos of the repaired banks (vi) and (vii), which had collapsed due to the flood.
Figure 7. Flowchart of the processing steps conducted in this study.
Figure 8. (a,b) Difference in the backscattering intensity obtained by subtracting the pre-event image from the post-event one. (c) Extracted inundation at 5:42 and 17:34 on October 13 using the multi-temporal comparison.
Figure 9. (a,b) Color composite of the extracted water regions from the pre- and post-event images, where the increased water regions were completely inundated. (c) Extracted inundation at 5:42 and 17:34 on October 13 using the increase in low backscatter regions.
Figure 10. A close-up of the extracted results in Figure 8c and Figure 9c: (a) using the multi-temporal comparison and (b) using the mono-temporal determination. The red crosses are the locations of bank failures in the Fujii and Tano rivers.
Figure 11. (a) Color composite of the ascending pair at location a of the field survey. (b,c) Backscatter models and speckle profiles of the inundated building (i) and (ii).
Figure 12. (a) Close-up of Figure 8b within the white frame and the aerial photos for the samples of flooded and unflooded built-up areas. (b) Histogram of the backscatter differences for the flooded and unflooded built-up areas within the squares in (a).
Figure 13. (a) Proposed index for inundated buildings; (b) improved results of inundation by adding inundated buildings at 17:34 on October 13; (c) improved results of inundation at 5:42 on October 13; (d) total inundation by merging the two temporal results.
Figure 14. Comparison of the recall and the precision obtained in four different cases: 1. Multi-temporal comparison; 2. Mono-temporal determination; 3. The combination of the result from case 1 and the inundated built-up areas; 4. Masking the high-water channels from the result of case 3.
Figure 15. (a) Inundation map at 5:42 on October 13, 2019; (b) inundation map at 17:34 on October 13, 2019, where the red lines are the maximum flood boundary of the GSI; (c) total inundations by merging the two temporal inundation maps, where the purple lines are the flood polygons from the MLIT.
Figure 16. (a) Close-up of Figure 15c within the black frame; (b) pansharpened Pleiades image taken at 10:06 on October 13; (c) estimated inundation depth according to GSI.
Table 1. Acquisition conditions of the four temporal Sentinel-1 Ground Range Detected (GRD) images.
Path                    Descending                     Ascending
Date                    Oct. 7, 2019   Oct. 13, 2019   Oct. 7, 2019   Oct. 13, 2019
Time                    5:42                           17:34
Sensors                 S1A            S1B             S1B            S1A
Incident angle [°]      39.0                           39.0
Heading angle [°]       −166.8                         −13.1
Polarizations           VV + VH
Resolution [m]          10 × 10 (R × A)
Table 2. Threshold values used to extract the completely inundated areas.
(a) Multi-Temporal Comparison
Path                                   Descending     Ascending
Average μd [dB]                        0.15           −0.19
Standard deviation σd [dB]             2.50           2.34
Threshold value [dB] (< μd − σd)       −2.35          −2.53

(b) Mono-Temporal Determination
Path                                   Descending                    Ascending
Date                                   Oct. 7, 2019   Oct. 13, 2019  Oct. 7, 2019   Oct. 13, 2019
Average of water μw [dB]               −19.50         −20.15         −19.58         −19.79
Standard deviation of water σw [dB]    1.54           1.67           1.45           1.52
Threshold value [dB] (< μw + 2σw)      −16.42         −16.81         −16.68         −16.75
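The two threshold rules in Table 2 can be sketched in a few lines. This is a minimal illustration, not the authors' code: the function names are invented here, and the statistics are simply the values reported in the table.

```python
# Threshold rules from Table 2, applied to SAR backscatter in dB.
# The per-path statistics (mu, sigma) are assumed to have been estimated
# beforehand from the Sentinel-1 intensity images.

def multi_temporal_threshold(mu_d, sigma_d):
    """Change detection: pixels whose backscatter difference falls below
    mu_d - sigma_d are labeled as completely inundated."""
    return mu_d - sigma_d

def mono_temporal_threshold(mu_w, sigma_w):
    """Thresholding: pixels darker than mu_w + 2*sigma_w on a single
    post-event image are labeled as water."""
    return mu_w + 2.0 * sigma_w

# Descending path, multi-temporal comparison:
print(round(multi_temporal_threshold(0.15, 2.50), 2))    # -2.35 dB
# Descending path, Oct. 13, mono-temporal determination:
print(round(mono_temporal_threshold(-20.15, 1.67), 2))   # -16.81 dB
```

Plugging in the remaining columns of the table reproduces the other threshold values in the same way.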
Table 3. Confusion matrices of the extracted inundations using the boundary of the maximum inundation by GSI.
(a) Multi-Temporal Comparison
                                         Ground Truth [km²]
Extracted results [km²]      Inundation    Others     Total     Precision
  Inundation                 5.76          4.98       10.74     53.6%
  Others                     3.73          44.83      48.56     92.1%
  Total                      9.49          49.81      59.30
  Recall                     60.7%         90.0%                85.3% (overall accuracy)

(b) Mono-Temporal Determination
                                         Ground Truth [km²]
Extracted results [km²]      Inundation    Others     Total     Precision
  Inundation                 3.75          2.47       6.22      60.3%
  Others                     5.74          47.34      53.08     89.2%
  Total                      9.49          49.81      59.30
  Recall                     39.5%         95.0%                86.2% (overall accuracy)
Table 4. Confusion matrix of the improved inundations using the maximum inundation boundary from the GSI.
                                         Ground Truth [km²]
Extracted results [km²]      Inundation    Others     Total     Precision
  Inundation                 6.68          5.55       12.23     54.6%
  Others                     2.81          44.26      47.07     94.0%
  Total                      9.49          49.81      59.30
  Recall                     70.4%         88.9%                85.9% (overall accuracy)
Table 5. Confusion matrices of the final inundation maps.
(a) Verification Using the Inundation Boundary of the GSI
                                         Ground Truth [km²]
Extracted results [km²]      Inundation    Others     Total     Precision
  Inundation                 20.28         34.44      54.72     37.1%
  Others                     7.71          180.98     188.69    95.9%
  Total                      27.99         215.42     243.41
  Recall                     72.5%         84.0%                82.7% (overall accuracy)

(b) Verification Using the Inundation Polygon of the MLIT
                                         Ground Truth [km²]
Extracted results [km²]      Inundation    Others     Total     Precision
  Inundation                 29.79         31.67      61.46     48.5%
  Others                     10.42         211.58     222.00    95.3%
  Total                      40.21         243.25     283.46
  Recall                     74.1%         87.0%                85.2% (overall accuracy)
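The precision, recall, and overall-accuracy figures in Tables 3–5 follow directly from the confusion-matrix areas. A minimal sketch, with function names of our own choosing and the area values taken from Table 5(b):

```python
# Accuracy metrics from confusion-matrix areas (km^2).
# tp = correctly extracted inundation, fp = false alarms,
# fn = missed inundation, tn = correctly extracted "others".

def precision(tp, fp):
    return tp / (tp + fp)

def recall(tp, fn):
    return tp / (tp + fn)

def overall_accuracy(tp, tn, total):
    return (tp + tn) / total

# Values from Table 5(b), verification against the MLIT flood polygons:
tp, fp = 29.79, 31.67
fn, tn = 10.42, 211.58
total = tp + fp + fn + tn   # 283.46 km^2

print(f"precision = {precision(tp, fp):.1%}")                 # 48.5%
print(f"recall    = {recall(tp, fn):.1%}")                    # 74.1%
print(f"accuracy  = {overall_accuracy(tp, tn, total):.1%}")   # 85.2%
```

The recall of 74.1% is the figure quoted in the abstract: about 74% of the MLIT-mapped inundation was identified by the proposed procedure.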
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Share and Cite

MDPI and ACS Style

Liu, W.; Fujii, K.; Maruyama, Y.; Yamazaki, F. Inundation Assessment of the 2019 Typhoon Hagibis in Japan Using Multi-Temporal Sentinel-1 Intensity Images. Remote Sens. 2021, 13, 639. https://doi.org/10.3390/rs13040639
