Article

Efficient Detection of Earthquake-Triggered Landslides Based on U-Net++: An Example of the 2018 Hokkaido Eastern Iburi (Japan) Mw = 6.6 Earthquake

1 Key Laboratory of Compound and Chained Natural Hazards Dynamics, Ministry of Emergency Management, Beijing 100085, China
2 National Institute of Natural Hazards, Ministry of Emergency Management of China, Beijing 100085, China
* Author to whom correspondence should be addressed.
Remote Sens. 2022, 14(12), 2826; https://doi.org/10.3390/rs14122826
Submission received: 22 March 2022 / Revised: 30 May 2022 / Accepted: 10 June 2022 / Published: 13 June 2022

Abstract
Efficient detection of earthquake-triggered landslides is crucial for emergency response and risk assessment. With the development of multi-source remote sensing imagery, artificial intelligence has gradually become a powerful landslide detection tool, aiming to alleviate time-consuming manual mapping and meet emergency requirements. In this study, a relatively new deep learning (DL) network, U-Net++, was designed to detect landslides in the region affected by the Iburi, Japan Mw = 6.6 earthquake using only a small number of training samples. ResNet50 was selected as the feature extraction backbone, and transfer learning was adopted to introduce pre-trained weights and accelerate model convergence. To prove the feasibility and validity of the proposed model, the random forest (RF) algorithm was selected as the benchmark, and the F1-score, Kappa coefficient, and IoU (Intersection over Union) were chosen to quantitatively evaluate performance. In addition, the proposed model was trained with different sample sizes (256, 512) and network depths (3, 4, 5) to analyze their impacts on performance. The results showed that both models detected the majority of landslides, while the proposed model obtained the highest metric values (F1-score = 0.7580, Kappa = 0.7441, and IoU = 0.6104) and was capable of resisting noise. The proposed model trained with a sample size of 256 achieved the best performance, showing that sample size is a non-negligible parameter in U-Net++, and U-Net++ trained with the shallower depth of 3 yielded better results than the standard depth of 5. Finally, the outstanding performance of the proposed model on a public landslide dataset demonstrated the generalization of U-Net++.

1. Introduction

Strong earthquakes can induce large-scale landslides in mountainous areas and are considered a primary cause of slope failures. Typically, the area affected by earthquake-triggered landslides ranges from 0 to 500,000 km² as the earthquake magnitude increases from Mw 4 to Mw 9.2 [1,2]. In recent years, many major earthquakes have occurred in China, resulting in large-scale landslides and sometimes barrier lakes. Such consequences have caused substantial traffic disruption and building destruction, thus posing a threat to lives. For example, more than 48,000 densely distributed landslides in an area of 48,678 km² were induced by the 12 May 2008 Ms = 8.0 Wenchuan, China earthquake, causing about 20,000 deaths [3]. At least 22,528 landslides in an area of 5400 km² were triggered by the 20 April 2013 Mw = 6.6 Lushan, China earthquake [4]. About 3130 landslides in a 2410 km² area were induced by the 18 November 2017 Ms = 6.9 Milin, China earthquake [5]. Therefore, a comprehensive and detailed inventory of earthquake-triggered landslides is essential for disaster management: it allows landslide locations and distribution ranges to be determined quickly, and it supports subsequent landslide risk analysis and post-disaster reconstruction planning [6,7].
As high-resolution remote sensing images increasingly become the primary data source for emergency managers to examine disaster situations, numerous landslide detection methods have been proposed based on these images [8]. At present, remote sensing-based landslide identification methods are broadly classified into (1) visual interpretation [9], (2) change detection [10], (3) machine learning [11], and (4) deep learning [12].
Manual visual interpretation consists of image preprocessing, establishment of landslide interpretation keys, expert delineation of landslide boundaries and sliding directions, and landslide inventory preparation [13]. For instance, Chigira and Yagi [9], Saba et al. [14], Gorum et al. [15], and Sato et al. [16] extracted earthquake-triggered landslide information and generated landslide inventories through manual visual interpretation for events such as the 23 October 2004 Mid Niigata prefecture (Japan) MJMA = 6.8 earthquake, the 8 October 2005 Northern Pakistan Mw = 7.6 earthquake, and the 12 May 2008 Wenchuan Mw = 7.9 earthquake. However, the method relies heavily on expertise and is labor-intensive; because of its high accuracy, it is therefore often used as a validation tool for other methods [13].
Change detection refers to extracting change information from multi-temporal images by setting specific thresholds through image processing or pattern recognition. Band subtraction, post-classification comparison, and principal component analysis are the most commonly used change detection techniques [17]. For example, Li et al. [18] used ROCSAT-2 multispectral pre- and post-earthquake images to detect landslides based on changes in image texture. Nichol et al. [10] employed SPOT panchromatic and IKONOS images to generate a landslide inventory by maximum likelihood classification and change detection. Regarding band subtraction, Li et al. [19] set a threshold to select candidate landslide areas and then obtained landslide boundary information through level-set evolution. However, rich feature information can be difficult to derive from optical images under adverse weather conditions. Plank et al. [20] and Rodriguez et al. [21] therefore attempted to extract landslide information from SAR (Synthetic Aperture Radar) images using features such as the polarization covariance matrix, the α value, and the cross-entropy H. More landslide-related SAR features can be found in references [22,23,24].
Machine learning (ML) is a collection of algorithms (Support Vector Machine (SVM) [25], Decision Tree [26], Random Forest (RF) [27], Artificial Neural Network [28], etc.) that can handle high-dimensional data and map classes with very complex characteristics [29]. ML-based models have been widely used in science and engineering [30,31]. Among these algorithms, RF is one of the most powerful for remote sensing imagery. For example, Zhou et al. [32] used RF with geo-detectors and recursive feature elimination (RFE) to evaluate landslide susceptibility. Hang et al. [11] used several hybrid ML-based models to generate landslide susceptibility maps. Chen et al. [33] first used an object-oriented method to segment images and mark landslides, then trained an RF to extract the corresponding features, and finally obtained the landslide distribution of the Three Gorges Reservoir. Ghorbanzadeh et al. [34] selected RGB and near-infrared (NIR) bands, the Normalized Difference Vegetation Index (NDVI), and a Digital Elevation Model (DEM) to extract landslides via RF, SVM, and artificial neural networks, respectively; the results showed that RF achieved the highest accuracy.
Deep learning (DL), as an extension of ML, uses networks that usually involve more than two hidden layers and can automatically learn deep features from big data. DL-based models have proven applicable to target detection and semantic segmentation, and they have become popular in medical image processing [35,36,37]. The aim of landslide detection is essentially to extract landslides from complex background information, which coincides with the semantic segmentation task. Furthermore, apart from a few necessary hyperparameters, data features and processing parameters do not need to be set manually in DL, which greatly enhances the portability of landslide detection. In recent years, with the continuous development of DL algorithms, various DL-based models have been increasingly applied to landslide identification and landslide susceptibility mapping [38,39]. For instance, Lei et al. [12] proposed an approach based on an FCN (Fully Convolutional Network) with embedded pyramid pooling for landslide detection, and Shi et al. [40] integrated a CNN (convolutional neural network) with change detection for landslide identification, obtaining satisfactory results with accuracy exceeding 80%. Sameen and Pradhan [41] designed a residual network for landslide detection using RGB bands, altitude, slope, aspect, and curvature layers; by introducing layer stacking and feature-level fusion, the network achieved higher accuracy in landslide detection.
On the other hand, U-Net is one of the most commonly used networks for semantic segmentation. Originally proposed by Ronneberger et al. [42], its capability has been validated in landslide detection. For example, Soares et al. [43] explored the impact of image size on landslide detection with a U-Net. Zhang et al. [44] carried out landslide identification for the 2018 Hokkaido eastern Iburi (Japan) Mw = 6.6 earthquake using the DL-based module built into the ENVI software, which is a type of U-Net compiled in the TensorFlow framework. Qi et al. [45] used an improved ResU-Net (Residual U-Net) to map rainfall-triggered landslides with higher accuracy than the traditional U-Net. Yi et al. [46] built on the ResU-Net and introduced an attention mechanism to construct LandsNet for landslide identification, which alleviated the model's limited generalization caused by different landslide morphologies and produced more robust and feasible results. Furthermore, by modifying the encoder and decoder of U-Net, Su et al. [47] developed a network called LanDCNN for detecting landslides, in which the feature extraction layers fuse multiple feature maps and the decoder contains fewer parameters. Ghorbanzadeh et al. [48] first used free Sentinel-2 data for landslide identification by evaluating the performance of U-Net and ResU-Net in three different landslide areas; the results indicated that ResU-Net obtained the highest F1-score. Prakash et al. [49] designed a modified U-Net with ResNet34 blocks as feature extraction layers to detect landslides; compared with object-based methods, the designed network obtained higher accuracy. Recently, Ghorbanzadeh et al. [50,51] proposed a new strategy for landslide detection that integrates a rule-based OBIA model with ResU-Net, enhancing and refining the ResU-Net results and addressing the problem of fuzzy landslide boundaries caused by highly abstracted features. Compared with the traditional ResU-Net and OBIA models, the designed models improved the mIoU by more than 22% in the same study area and obtained higher precision, recall, and F1-score.
Different from supervised DL models, in which samples must correspond to labels, unsupervised DL models can detect landslides without labels, which reduces the time needed to generate a labeled dataset. Despite the paucity of relevant studies, researchers have demonstrated their effectiveness. Among them, the autoencoder is an unsupervised architecture embedded in DL models and has been successfully used in the field of remote sensing [52]. For landslide detection, Shahabi et al. [53] first proposed an unsupervised model based on a convolutional autoencoder (CAE), aiming to extract high-level features without using training data. By stacking refined Sentinel-2 data, NDVI, and DEM as CAE inputs, the model output reconstructed features, which were then clustered for landslide detection by mini-batch K-means. The results indicated that the proposed model achieved satisfactory performance.
Most related work has used around 60% of the data as training samples, which is not practical in emergency response because adequate labeled data are hard to generate in a short time. In addition, U-Net only fuses the feature maps generated by the corresponding encoder layer in its skip connection and may therefore fail to utilize the full contextual information, whereas U-Net++, proposed by Zhou et al. [54], modifies the skip connection strategy of U-Net to retain semantic information from both the upper and lower layers. Furthermore, since little work has employed U-Net++ for this purpose, it is worth exploring its capacity for landslide detection. Hence, in this work, we adopted U-Net++ to detect landslides in the Iburi district, Japan, with only 30% of the data selected as training data. For the backbone, ResNet50 was selected to extract deep features [11]. To verify the model performance, RF was constructed as the benchmark, and several metrics were introduced to evaluate the model quantitatively. In addition, we explored the impact of different sample sizes and network depths on the performance of U-Net++. Finally, a public landslide dataset was used to demonstrate the model's generalization. This paper is structured as follows: (1) overview of the study area; (2) preparation of the dataset; (3) introduction of RF; (4) introduction of U-Net++ and ResNet50; (5) experimental process; (6) evaluation indexes; (7) results analysis; (8) discussion; and (9) conclusions.

2. Study Area

The study area is a rectangle with an area of 630 km², extending from 141.84° E to 142.13° E longitude and from 42.64° N to 42.89° N latitude. It is located in the Iburi district, Hokkaido, Japan, with a total population of less than 23,000 and an annual rainfall of 1200–1800 mm. The area has an average elevation of 160 m, as shown in Figure 1, and is dominated by hills with moderate slopes, which are mainly composed of Quaternary sediments and Neogene rocks [55]. At 03:08 a.m. on 6 September 2018 (Japan time), an Mw = 6.6 earthquake struck the Oshima Belt region east of Tomakomai on the island of Hokkaido, Japan, with an epicenter at 42.72° N, 142.0° E. The event killed 41 people and injured 691; in total, 394 houses were destroyed, and 1061 buildings were damaged. Strong ground shaking induced about 9000 landslides, most of which occurred at altitudes of 100–250 m. The vast majority were contiguous small- and medium-sized shallow detrital slides covering areas of about 1000–10,000 m². The landslides severely damaged the mountains, changed the landform, destroyed large areas of farmland, blocked traffic, and caused 36 deaths [56,57,58].

3. Dataset and Pre-Processing

The landslide dataset was generated by visual interpretation of 3 m resolution Planet images, whose bands cover blue (455–515 nm), green (500–590 nm), and red (590–670 nm) wavelengths. Specifically, level 3B images taken on 3 August and 11 September 2018 were used in the experiment, which had been orthorectified and atmospherically corrected. To avoid misidentification of landslides, we referred to the landslide drone images released by the Geospatial Information Authority of Japan for this earthquake. In total, 9295 landslides were visually interpreted in ArcMap 10.6, with a total area of 30.96 km² [58], as shown in Figure 2. Before training the models, we confirmed that the input images had a depth of 8 bits and normalized them to 0–1. The landslide label is a binary map in which 1 represents a landslide pixel and 0 represents a background pixel.
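As a minimal illustration of this pre-processing step, the sketch below reads an 8-bit tile, scales it to 0–1, and binarizes the label raster. It assumes the tiles are GeoTIFFs readable with rasterio; the file names and helper functions are hypothetical rather than part of the original workflow.

```python
import numpy as np
import rasterio  # assumed to be available for reading the Planet GeoTIFF tiles

def load_normalized_image(path):
    """Read an 8-bit RGB tile and scale pixel values to the 0-1 range."""
    with rasterio.open(path) as src:
        img = src.read().astype(np.float32)   # shape: (bands, rows, cols)
    return img / 255.0                        # 8-bit depth -> 0-1

def load_binary_label(path):
    """Read a rasterized label where 1 = landslide pixel, 0 = background."""
    with rasterio.open(path) as src:
        return (src.read(1) > 0).astype(np.uint8)

# hypothetical file names, for illustration only
image = load_normalized_image("post_event_tile.tif")
label = load_binary_label("landslide_label_tile.tif")
```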

4. Random Forest Algorithm

Random forest is an ensemble algorithm in machine learning. Based on the bagging model, in which data are sampled randomly, RF adds a step of random feature selection to the procedure. The algorithm combines multiple decision trees to form a strong learner and makes the final decision by majority vote, substantially increasing the accuracy of the overall results and the generalization of the model. Specifically, each decision tree classifies the training samples separately, after which RF generates the final decision based on the majority of the decision trees' results [27]. The detailed generation steps of each decision tree are shown in Figure 3.
For a dataset with N samples and M attributes, m attributes (m << M) are randomly selected from the M attributes as candidate features, and one of these candidates is then selected as the splitting condition at a node using a certain strategy. In OpenCV, RF is constructed from a series of CART decision trees [59,60], whose criterion for selecting the optimal split feature at a node is the Gini index, defined as follows:
$$\text{Gini\_Index}(D, a) = \sum_{k=1}^{K} \frac{|D^{k}|}{|D|}\,\text{Gini}(D^{k})$$
Before training, histogram equalization was performed on each image to enhance the contrast of pixel grayscale values and thus highlight detailed objects. As features, we calculated the HOG (Histogram of Oriented Gradients), the LBP (Local Binary Pattern), the grayscale value of each pixel in the red, green, and blue bands, and the mean and variance of all pixels in these bands within a 4 × 4 window. HOG [61] is an image feature that has been successfully applied in ML-based models for pedestrian and vehicle detection. It computes and accumulates gradients of individual blocks in different directions within a local region of the image, normalizes the resulting histograms, and then concatenates the gradient histograms of all blocks. LBP likewise describes the local texture features of an image [62]. It compares the gray value Ic of each pixel with the gray values Ik (k = 1, …, 8) of the eight adjacent pixels in a local region; if Ik is greater than Ic, position k is marked as 1, otherwise 0. An 8-bit binary number is then formed as the LBP value of that pixel, and the frequency of each LBP value in each region is counted to build a histogram. Finally, the normalized histograms of all regions are concatenated to obtain the final LBP features of the image. A detailed diagram of the RF workflow is shown in Figure 4.
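The RF in this paper was built with the OpenCV module in C++; the sketch below is only an approximate Python analogue, using scikit-image for HOG/LBP and scikit-learn for the forest, with the tree parameters of Section 6 mapped onto scikit-learn arguments (an assumption, since the two libraries do not correspond exactly, and features are computed per patch here for simplicity).

```python
import numpy as np
from skimage.feature import hog, local_binary_pattern
from sklearn.ensemble import RandomForestClassifier

def patch_features(rgb_patch):
    """Feature vector for one small window (here 16 x 16):
    HOG + LBP histogram + per-band mean and variance."""
    gray = rgb_patch.mean(axis=2)
    # HOG: gradient-orientation histograms accumulated over local cells
    hog_vec = hog(gray, orientations=9, pixels_per_cell=(8, 8),
                  cells_per_block=(2, 2), feature_vector=True)
    # LBP: 8-neighbour binary pattern describing local texture
    lbp = local_binary_pattern(gray, P=8, R=1, method="uniform")
    lbp_hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)
    # per-band mean and variance inside the window
    stats = np.concatenate([rgb_patch.mean(axis=(0, 1)), rgb_patch.var(axis=(0, 1))])
    return np.concatenate([hog_vec, lbp_hist, stats])

# X: one feature vector per labelled window, y: 0 (background) / 1 (landslide)
rf = RandomForestClassifier(
    n_estimators=500,       # maximum number of trees (Section 6)
    max_depth=100,          # maximum tree depth
    min_samples_split=100,  # minimum samples required to split a node
    max_features=20,        # features considered at each split
)
# rf.fit(X, y); prediction: rf.predict(X_test)
```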

5. U−Net++ and ResNet50

Compared with traditional DL models, which append a fully connected layer at the end to flatten the feature maps for classification, an FCN (Fully Convolutional Network) applies deconvolution to up-sample the feature maps generated by multiple convolution layers, recovering them to the size of the input image and thus predicting every pixel.
U-Net is an FCN-based model that has been widely used in landslide detection [43,44,45]. It has an encoder–decoder structure and, unlike the FCN with only one deconvolution in the decoder, its architecture is symmetrical and also performs several convolution operations in the decoder. The skip connection is a core part of U-Net; it preserves more semantic information in the decoder by fusing feature maps generated by the encoder. U-Net++ is an adaptation of U-Net, as shown in Figure 5. It effectively integrates U-Net structures of different depths, making it more flexible to reduce the depth when a shallower network can obtain higher accuracy. In U-Net++, the feature maps generated at each stage are half the size of those from the previous layer, and the number of channels is doubled. In addition, the decoder layers are intertwined at different levels, which reduces the semantic gap that may exist in U-Net [54]. On the one hand, U-Net++ generates feature maps under different receptive fields and integrates them with the up-sampled feature maps from the next layer. On the other hand, the skip connection in U-Net can only fuse encoder and decoder outputs of the same scale, whereas the skip connection in U-Net++ provides feature maps from different encoder nodes at the decoder nodes to retain more semantic information. As shown in Figure 6, the network fuses the feature maps from the previous convolutional layers with the corresponding up-sampled feature maps from the next layer.
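To make the dense skip connection concrete, the following PyTorch sketch shows one decoder node X(i, j) that concatenates all same-level feature maps X(i, 0)…X(i, j−1) with the up-sampled output of the deeper node X(i+1, j−1). The class name, channel arguments, and the choice of bilinear up-sampling are our assumptions, not the exact implementation used in the paper.

```python
import torch
import torch.nn as nn

class NestedDecoderNode(nn.Module):
    """One U-Net++ node X(i, j): fuse all same-level maps X(i, 0..j-1)
    with the up-sampled map from the deeper node X(i+1, j-1)."""
    def __init__(self, in_channels, skip_channels, out_channels):
        super().__init__()
        self.up = nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False)
        self.conv = nn.Sequential(
            nn.Conv2d(in_channels + skip_channels, out_channels, 3, padding=1),
            nn.BatchNorm2d(out_channels),
            nn.ReLU(inplace=True),
        )

    def forward(self, below, same_level):
        # below: X(i+1, j-1); same_level: list of X(i, 0) ... X(i, j-1)
        x = torch.cat(same_level + [self.up(below)], dim=1)
        return self.conv(x)
```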
With layer stacking, a network can capture complex features; however, it may suffer from saturation or even degradation in deeper layers [48]. Thus, ResNet50 was selected as the backbone to avoid the vanishing gradient by retaining shallow feature information. ResNet50 contains two basic residual blocks, the Conv block and the Identity block, as shown in Figure 7. Both are composed of multiple convolutions, BN, and ReLU. BN aims to speed up convergence and to avoid vanishing or exploding gradients and overfitting [63]. In addition, robust ResNet50 weights pre-trained on ImageNet were introduced to accelerate convergence and avert dying ReLU [64]. Regarding the basic residual blocks, the Conv block changes the dimension of the feature maps when they are fed into the next stage, while the Identity block avoids the vanishing gradient through the residual rule when multiple convolution operations are stacked. During feature extraction, the images first pass through a convolution layer and a max-pooling layer that reduce the size to 1/4 and set the depth to 64. The outputs are then fed into four stages, in which the basic residual blocks are executed cyclically to extract features; the residual block is repeated 3, 4, 6, and 3 times in the respective stages. The detailed, full structure of ResNet50 can be found in the attachment.
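In practice, this combination of U-Net++ and a pre-trained ResNet50 encoder can be instantiated with the segmentation_models_pytorch library cited in the Acknowledgments [71]. The sketch below shows one plausible configuration; the specific argument values (depth, activation) are assumptions rather than the authors' exact settings.

```python
import segmentation_models_pytorch as smp

# U-Net++ with a ResNet50 encoder initialized from ImageNet weights.
# Parameter names follow the library in [71]; the exact configuration
# used by the authors is not published here.
model = smp.UnetPlusPlus(
    encoder_name="resnet50",     # backbone used for feature extraction
    encoder_weights="imagenet",  # transfer learning: pre-trained weights
    encoder_depth=5,             # number of encoder stages (depths 3-5 tested in the paper)
    in_channels=3,               # RGB Planet imagery
    classes=1,                   # binary landslide / background map
    activation="sigmoid",
)
```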

6. Experimental Process

In this study, RF and U-Net++ were implemented using the OpenCV module compiled in C++ and the PyTorch framework, respectively. For training, one-third of the images were selected as samples, of which 70% were used to train the models and 30% for validation. The images and labels were fed directly into RF, whereas they were cropped into 256 × 256 and 512 × 512 tiles with a sliding window before being fed into U-Net++ [34,43]. To augment the samples and avoid overfitting, flipping and rotation operations were performed on each cropped sample and label, as shown in Figure 8. Finally, 6232 and 1496 images were generated for U-Net++256 and U-Net++512, respectively.
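A minimal sketch of this tiling and augmentation step is given below. It assumes channel-first NumPy arrays and a non-overlapping sliding window; the exact stride and set of flips/rotations used in the paper are not specified, so these are illustrative choices.

```python
import numpy as np

def sliding_crops(image, label, size=256, stride=256):
    """Crop an image/label pair into fixed-size tiles with a sliding window."""
    h, w = label.shape
    for r in range(0, h - size + 1, stride):
        for c in range(0, w - size + 1, stride):
            yield image[:, r:r + size, c:c + size], label[r:r + size, c:c + size]

def augment(img, lab):
    """Flip/rotation augmentation applied identically to image and label."""
    out = [(img, lab)]
    out.append((np.flip(img, axis=2).copy(), np.flip(lab, axis=1).copy()))  # horizontal flip
    out.append((np.flip(img, axis=1).copy(), np.flip(lab, axis=0).copy()))  # vertical flip
    for k in (1, 2, 3):  # 90, 180, 270 degree rotations
        out.append((np.rot90(img, k, axes=(1, 2)).copy(), np.rot90(lab, k).copy()))
    return out
```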
The trial-and-error approach was used to tune the parameters of RF and U-Net++. In RF, the maximum depth of each tree was set to 100, the minimum number of samples in each node was 100, 20 features were considered at each node, and the maximum number of trees was set to 500. In U-Net++, Adam was selected as the optimizer for updating the neuron weights, with the default betas = (0.9, 0.999). Training stopped after 200 epochs, and the learning rate was set to 0.001 to avoid large gradient variations, which may lead to dying neurons [64]; it was reduced to one-fifth of its value after epoch 150. The batch size was set to 32, and the activation function was the sigmoid. Ldice was selected as the loss function because it can alleviate the imbalance between positive and negative samples [65]. Ldice is defined as:
$$\text{Dice} = \frac{2\,|X \cap Y|}{|X| + |Y|}$$
$$L_{\text{dice}} = 1 - \text{Dice}$$
where the numerator of Dice is the intersection of the prediction and the label, and |X| and |Y| represent the sums of their pixel values. In binary classification, Dice is similar to the F1-score; thus, minimizing Ldice is equivalent to maximizing the F1-score.
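A PyTorch version of this loss, together with the Adam optimizer and the step-wise learning-rate reduction described above, might look like the sketch below. The small epsilon term is a common numerical-stability addition not mentioned in the paper, and `model` stands for the U-Net++ network defined earlier.

```python
import torch

def dice_loss(pred, target, eps=1e-6):
    """L_dice = 1 - Dice, computed on sigmoid outputs in [0, 1]."""
    pred = pred.reshape(pred.size(0), -1)
    target = target.reshape(target.size(0), -1).float()
    intersection = (pred * target).sum(dim=1)  # |X ∩ Y|
    dice = (2 * intersection + eps) / (pred.sum(dim=1) + target.sum(dim=1) + eps)
    return 1.0 - dice.mean()

# Adam with default betas; learning rate 0.001, cut to one-fifth after epoch 150
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3, betas=(0.9, 0.999))
scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer, milestones=[150], gamma=0.2)
```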
After training, the model output a weight file in a specific format that could be loaded directly by the code. During prediction, the same model was used with the trained weights, and the remaining two-thirds of the images were fed to the model to identify landslide areas. Finally, all predicted tiles were mosaicked and projected to WGS 1984 UTM Zone 54N using the GDAL module.
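Writing the mosaicked prediction to a georeferenced raster with GDAL could be done roughly as follows. The function name and arguments are ours, the geotransform of the source imagery is assumed to be available, and EPSG:32654 corresponds to WGS 1984 UTM Zone 54N.

```python
import numpy as np
from osgeo import gdal, osr

def write_geotiff(mask, path, geotransform, epsg=32654):
    """Write a mosaicked binary prediction to a GeoTIFF in WGS 1984 UTM Zone 54N."""
    rows, cols = mask.shape
    driver = gdal.GetDriverByName("GTiff")
    ds = driver.Create(path, cols, rows, 1, gdal.GDT_Byte)
    ds.SetGeoTransform(geotransform)  # (x_min, pixel_w, 0, y_max, 0, -pixel_h)
    srs = osr.SpatialReference()
    srs.ImportFromEPSG(epsg)          # EPSG:32654 = WGS 84 / UTM zone 54N
    ds.SetProjection(srs.ExportToWkt())
    ds.GetRasterBand(1).WriteArray(mask.astype(np.uint8))
    ds.FlushCache()
```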
The device used in this work was a Dell Precision 7920 Tower with 512 GB of RAM, an Intel Xeon Gold processor, and an Nvidia Quadro RTX 6000 GPU (24 GB), running Windows 10.

7. Evaluation of Performance

To evaluate the models, we used the confusion matrix for quantitative analysis, as shown in Table 1. To describe the results more explicitly, we calculated accuracy, precision, recall, F1-score, the Kappa coefficient, and IoU (Intersection over Union). Precision is the proportion of predicted positives that are truly positive, recall is the proportion of actual positives that are correctly predicted, and accuracy is the number of correctly predicted samples over the number of all samples. The F1-score is the harmonic mean of precision and recall, which is more appropriate for assessing the result. IoU is the ratio of the intersection to the union of the prediction and the ground truth, which in terms of the confusion matrix is the ratio of true positives to the sum of true positives, false negatives, and false positives [66].
$$\text{Accuracy} = \frac{TP + TN}{TP + FN + FP + TN}$$
$$\text{Precision} = \frac{TP}{TP + FP}$$
$$\text{Recall} = \frac{TP}{TP + FN}$$
$$\text{F1-score} = \frac{2\,TP}{2\,TP + FP + FN}$$
$$\text{IoU} = \frac{TP}{TP + FP + FN}$$
Considering the unbalanced proportions of landslide and non-landslide samples, the Kappa coefficient, which can penalize model bias, was also introduced. It represents the proportional reduction in classification error compared with a completely random classification. It is generally evaluated on a scale from 0 to 1, and the higher the value, the stronger the consistency of the model. The Kappa coefficient is expressed as:
$$\text{Kappa} = \frac{P_o - P_e}{1 - P_e}$$
where P_o is the accuracy defined above. Assuming that the total number of samples is n, the numbers of true samples in each category are a_1, a_2, …, a_c, and the numbers of predicted samples in each category are b_1, b_2, …, b_c, P_e is expressed as:
$$P_e = \frac{a_1 b_1 + a_2 b_2 + \cdots + a_c b_c}{n \times n}$$
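For reference, the sketch below computes all of the above metrics from a pair of binary prediction and label arrays. In the two-class case, P_e reduces to the products of the per-class true and predicted counts, as coded in the `pe` line; the variable and function names are ours.

```python
import numpy as np

def evaluate(pred, truth):
    """Compute the metrics of Section 7 from binary prediction and label arrays."""
    pred, truth = pred.astype(bool).ravel(), truth.astype(bool).ravel()
    tp = np.sum(pred & truth)
    tn = np.sum(~pred & ~truth)
    fp = np.sum(pred & ~truth)
    fn = np.sum(~pred & truth)
    n = tp + tn + fp + fn
    accuracy = (tp + tn) / n
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * tp / (2 * tp + fp + fn)
    iou = tp / (tp + fp + fn)
    # Kappa for the two-class case: Pe from per-class true/predicted counts
    pe = ((tp + fn) * (tp + fp) + (fp + tn) * (fn + tn)) / (n * n)
    kappa = (accuracy - pe) / (1 - pe)
    return dict(accuracy=accuracy, precision=precision, recall=recall,
                f1=f1, iou=iou, kappa=kappa)
```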

8. Results and Evaluation

Figure 9 shows that the models performed relatively well in the test area, extracting the majority of landslides. In addition, U-Net++ showed the capability to resist noise, whereas RF was slightly vulnerable to it; landslide-like bright areas such as bare soil, buildings, and roads were therefore more likely to be misclassified as landslides by RF, as shown in Figure 10. The results of U-Net++ were also more complete, with the majority of landslides maintaining a good outline, while some landslides detected by RF exhibited breakage and incomplete structures.
To further analyze the results of RF and U-Net++, four sub-regions were selected for detailed comparison. As shown in Figure 11, the peach areas are the landslide areas identified by the three models, and the red polygons represent the manually interpreted landslide areas. U-Net++ predicted the interior and boundary of landslides more completely. In addition, both RF and U-Net++256 recognized the large landslides in the four regions, while U-Net++512 missed some landslide regions in Figure 11(b2). It can also be seen from Figure 11 that RF was prone to misclassifying pixels with spectral characteristics similar to landslides. In Figure 12, the yellow and green areas represent the FN and FP parts of the confusion matrix, respectively; U-Net++256 obtained fewer FP and FN than RF and U-Net++512. Overall, the landslides identified by U-Net++256 were more complete.
As shown in Table 2, all metrics of the DL-based models were higher than those of the ML-based model, and U-Net++256 achieved the best performance. To quantify the improvement, the F1-score, Kappa, and IoU of U-Net++256 were 12.38%, 13.10%, and 14.61% higher than those of RF, and 5.45%, 5.72%, and 6.78% higher than those of U-Net++512. In summary, all models are capable of landslide identification, while U-Net++256 exhibits the best performance.

9. Discussion

Rapid detection of earthquake-triggered landslides can provide essential support for emergency rescue. Deep learning, supported by big data, makes it possible to classify landslides more efficiently and precisely. In this study, we explored the capability of U-Net++ for landslide detection with only one-third of the dataset selected as training samples. As a relatively new encoder–decoder model, U-Net++ fuses the feature maps of the current and upper layers to fully utilize the semantic information under different receptive fields. Furthermore, ResNet [35] has been shown to alleviate the vanishing gradient and allow deeper networks; hence, ResNet50 was selected as the backbone of U-Net++ for feature extraction. The results showed that U-Net++256 obtained the highest F1-score, Kappa, and IoU of 75.80%, 74.42%, and 61.04% compared with the other models. In addition, U-Net++ was able to resist the impact of noise on the results compared with RF. Hence, this work demonstrates that U-Net++ with ResNet50 can achieve better performance in landslide detection.

9.1. Model Comparison

In this experiment, the complexity of the models was evaluated by recording training time and prediction time. During the training phase, RF took the shortest time at 2.3 h, U-Net++512 took more than 5 h, and U-Net++256 required 6.7 h. In the prediction phase, U-Net++ was faster than RF, taking only about 2 min, whereas RF took more than 35 min. In terms of evaluation, U-Net++ with both sample sizes performed better than RF. Owing to its dense network structure, U-Net++ has more parameters than U-Net; considering the overall performance of U-Net++256, this complexity is acceptable. In addition, it can be inferred that the performance of U-Net++ is related to image size, as shown in Table 2: the model trained with small-scale samples outperformed the one trained with large-scale samples, which is consistent with the results of Ghorbanzadeh et al. and Soares et al. [43,48].

9.2. Result Analysis

In this section, we analyze the behavior shown in Table 2 and the impact of network depth on the proposed model. Generally, the landslide area accounts for only a small proportion of the whole image, which inflates accuracy; it is therefore reasonable that the accuracy of all models exceeds 0.90. We counted the confusion matrices of all models to briefly discuss why they performed relatively poorly in terms of recall and precision. As Equations (5) and (6) show, precision and recall are highly sensitive to FP and FN. As shown in Table 3, U-Net++ produced fewer FP than RF, which tended to misclassify more FN and FP; these errors led all models to obtain lower precision and recall, a phenomenon similar to the results of related studies [34,51]. Based on a preliminary analysis, this may be attributed to the way the labels were generated, where pixels along the landslide boundary are likely to be misjudged after rasterization. The image resolution may also affect precision and recall. Finally, most landslides in the study area are small or medium-sized, so their feature information may be lost after repeated convolution and max-pooling. To address this problem, the characteristics of the landslides should be taken into account when selecting an appropriate sample size, and it is preferable to include landslide-related features such as DEM, NDVI, and slope as input data.
To further evaluate the impact of depth on U-Net++256, models with depths of 3, 4, and 5 were trained in the same environment. The IoU on the prediction dataset was used to search for the optimal depth. After training, the metrics introduced in Section 7 were applied to quantitatively analyze performance. As shown in Table 4, the model with depth 3 achieved the highest precision, F1-score, Kappa, and IoU, while the model with depth 5 obtained the best recall. In terms of time consumption, the model with depth 3 took the least time.

9.3. Comparison with Previous Work

In this section, we briefly compare our results with those of previous studies in the same area. Most DL-based landslide detection approaches selected U-Net as the network structure, and the datasets were usually generated from high-resolution images. In addition, to pursue better performance, most experiments used more than 50% of the samples for training and only a small proportion for evaluation, which reduces the efficiency of emergency response. For instance, Zhang et al. [44] used the ENVI software module for landslide identification in the same study area; the model obtained relatively complete results, but the training data exceeded 60% of the total samples, and the model was inclined to misclassify landslide-like pixels as landslides. Ghorbanzadeh et al. [48] used U-Net and ResU-Net, respectively, for landslide detection in the Iburi district; ResU-Net obtained higher metric values than U-Net, but the training samples also exceeded 50% of the total dataset.

9.4. Generalization Analysis

To validate the generalization of the proposed model on datasets with different landslide characteristics, a public landslide dataset located in Bijie City, China, which has the same spectral bands as the Iburi samples, was used. All landslides were extracted from 0.8 m resolution TripleSat images by geologists through visual interpretation combined with field surveys [39]. In the experiment, 15 landslides were randomly selected as the prediction dataset to evaluate performance; 70% of the remaining landslides were used as training samples and 30% as validation samples. Before training, samples were resized to 256 × 256 to fit the model, and the same data augmentation strategy was applied to all samples. The results in Table 5 show that the proposed model obtained good results, with an F1-score of 90.93%, a Kappa of 88.25%, and an IoU of 83.37%, indicating that U-Net++ can be used in areas with different landslide characteristics. Furthermore, the model can delineate relatively complete landslide boundaries and avoid cavities in the landslide body, as shown in Figure 13. Specifically, the model can extract complete landslides with diverse morphologies in a complex geological environment including roads, bare soil, and bright buildings, as shown in Figure 13b,c,f,g,i,j. These results demonstrate the generalization of U-Net++ in landslide detection.

9.5. Advantages and Limitations

This study explored the capability of U-Net++ in landslide detection. As the results show, U-Net++ achieved impressive performance even with only approximately 30% of the dataset used as training samples. In addition, the proposed model effectively suppresses the impact of noise on the detected landslide regions. In the prediction stage, it can extract landslides over an area of more than 400 km² in only about 2 min. With the continuously increasing availability of optical and microwave remote sensing data, the acquisition of high-quality samples and the application scenarios of U-Net++ will become more extensive. Overall, the proposed model is a relatively new approach to landslide detection that fuses the feature maps generated by different layers, increasing the information contained in the feature maps. U-Net++ can be applied to detect landslides triggered by earthquakes or rainfall in other regions, suggesting that it has promising potential for emergency response to natural disasters. We recommend that researchers consider U-Net++ for remote sensing semantic segmentation tasks such as building or ship extraction.
Although the proposed model showed excellent performance in the two study areas, this work is still a preliminary exploration of U-Net++ for landslide detection, and the limitations of the model are evident. Several issues were not addressed in the experiments:
  • The image size used in U-Net++ is fixed, and multi-scale and multi-source remote sensing images were not considered for extracting richer information.
  • The high performance of U-Net++ comes at the expense of a significant amount of time, and the training speed depends on the computer hardware.
  • Although only one-third of the samples were used in the training stage, data augmentation was conducted to ensure enough data for learning, which increased the GPU overhead.
  • Previous studies have shown that sample size has a non-negligible impact on DL-based models [44]. In this work, only two sample sizes were tested, and the impact of a wider range of sizes on U-Net++ is not discussed.
  • The quality of the results obtained by the proposed model as a source for preparing earthquake-triggered landslide susceptibility maps is not discussed further.
In addition, sensitivity analysis was not taken into account in this work; such analysis reflects how the model outputs are influenced by the inputs and indicates the importance of the input parameters [68,69]. For instance, Asheghi et al. [70] analyzed the impact of different inputs using several sensitivity analysis methods and updated the models accordingly.
In future studies, the parameters of U-Net++ that affect performance should be tested further to systematically evaluate the relationship between inputs and outputs. In addition, the quality of the landslide data detected by U-Net++ should be evaluated as a data source for generating earthquake-triggered landslide susceptibility maps. Furthermore, training U-Net++ without data augmentation should be considered in future studies to evaluate how the model handles scale invariance. Finally, more attention should be paid to enabling the model to capture more sophisticated landslide information in images with poor or differing resolutions.

10. Conclusions

In this study, a relatively new DL-based model, U-Net++, was applied to detect landslides in an earthquake-affected region with only 30% of the data selected as training samples. The proposed model combines the strengths of two components, ResNet50 and U-Net++: the former extracts abundant features and alleviates the vanishing gradient, while the latter leverages the feature maps generated by different layers. Pre-trained weights were introduced to accelerate model convergence. To evaluate the performance of the proposed model, a widely used machine learning algorithm, RF, was selected for comparison. In the visual comparison, both models were clearly capable of detecting landslides with complex shape characteristics, while U-Net++ was more capable of resisting the effect of noise. Six criteria were selected to evaluate the models quantitatively, and the results showed that U-Net++ achieved the best performance. Additionally, the effects of sample size and depth on the proposed model were explored; the results indicated that sample size has a non-negligible impact on U-Net++, and the model with depth 3 obtained the highest performance. Finally, U-Net++ was further tested on a public landslide dataset to validate its generalization. Overall, U-Net++ has great potential for regional landslide detection.

Author Contributions

Z.Y. designed the framework of the research, conducted the experiment, and wrote the manuscript. C.X. proposed the research concept, organized landslide interpretation work, and offered basic data. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by the National Institute of Natural Hazards, Ministry of Emergency Management of China (ZDJ2021-14), the Lhasa National Geophysical Observation and Research Station (NORSLS20-07), and the National Key Research and Development Program of China (2018YFC1504703).

Data Availability Statement

The complete structure of ResNet50 and part of the training samples are available and can be found here: [https://drive.google.com/drive/folders/1x7umIkEnKqZlXS7KSmzDqQRPUWsfQPqB?usp=sharing] (accessed on 12 March 2022).

Acknowledgments

We are grateful to Planet for providing the images. We also thank the anonymous reviewers for their constructive comments and suggestions, which improved the quality of the manuscript. The implementation of the U-Net++ structure referred to the open-source PyTorch code available on GitHub [71].

Conflicts of Interest

The authors declare that there is no conflict of interest.

References

  1. Zhu, S.; Shi, Y.; Lu, M.; Xie, F. Dynamic mechanisms of earthquake−triggered landslides. China Earth Sci. 2013, 56, 1769–1779. [Google Scholar] [CrossRef]
  2. Keefer, D.K. Landslides caused by earthquakes. GSA Bulletin 1984, 95, 406–421. [Google Scholar] [CrossRef]
  3. Yao, X.; Xu, C.; Dai, F.; Zhang, Y. Contribution of strata lithology and slope gradient to landslides triggered by Wenchuan Ms 8 earthquake, Sichuan, China. Geol. Bull. China 2009, 28, 1156–1162. [Google Scholar]
  4. Xu, C. Catalogue of landslides and the amount of slope material lost due to the 2013 Lushan earthquake in China. In Proceedings of the Annual Meeting of Chinese Geoscience Union (2014), Beijing, China, 20–23 October 2014. [Google Scholar]
  5. Huang, Y.; Xu, C.; Zhang, X.; Xue, C.; Wang, S. An updated database and spatial distribution of landslides triggered by the Milin, Tibet Mw6.4 Earthquake of 18 November 2017. Earth Sci. 2021, 32, 1069–1078. [Google Scholar] [CrossRef]
  6. Harp, E.L.; Keefer, D.K.; Sato, H.P.; Yagi, H. Landslide inventories: The essential part of seismic landslide hazard analyses. Eng. Geology. 2011, 122, 9–21. [Google Scholar] [CrossRef]
  7. Bacha, A.S.; Shafique, M.; van der Werff, H. Landslide inventory and susceptibility modelling using geospatial tools, in Hunza−Nagar valley, northern Pakistan. Mt. Sci. 2018, 15, 1354–1370. [Google Scholar] [CrossRef]
  8. Peng, L.; Xu, S.; Mei, J.; Su, F. Earthquake−induced landslide recognition using high-resolution remote sensing images. J. Remote Sens. 2017, 21, 509–518. [Google Scholar] [CrossRef]
  9. Chigira, M.; Yagi, H. Geological and geomorphological characteristics of landslides triggered by the 2004 Mid Niigata prefecture earthquake in Japan. Eng. Geology 2006, 82, 202–221. [Google Scholar] [CrossRef]
  10. Nichol, J.; Wong, M. Satellite remote sensing for detailed landslide inventories using change detection and image fusion. Int. J. Remote Sens. 2005, 26, 1913–1926. [Google Scholar] [CrossRef]
  11. Hang, H.; Tung, H.; Hoa, P.; Phuong, N.; Phong, T.; Costache, R.; Nguyen, H.; Amiri, M.; Le, H.; Le, H.; et al. Spatial prediction of landslides along National Highway−6, Hoa Binh province, Vietnam using novel hybrid models. Geocarto Int. 2021, 1–26. [Google Scholar] [CrossRef]
  12. Lei, T.; Zhang, Y.; Lv, Z.; Li, S.; Liu, S.; Nandi, A. Landslide Inventory Mapping from Bitemporal Images Using Deep Convolutional Neural Networks. IEEE Geosci. Remote. Sens. Lett. 2019, 16, 982–986. [Google Scholar] [CrossRef]
  13. Zhang, D.; Wu, Z.; Li, J.; Jiang, Y. An overview on earthquake−induced landslide research. J. Geomech. 2013, 19, 225–241. [Google Scholar]
  14. Saba, S.B.; van der Meijde, M.; van der Werff, H. Spatiotemporal landslide detection for the 2005 Kashmir earthquake region. Geomorphology 2010, 124, 17–25. [Google Scholar] [CrossRef]
  15. Gorum, T.; Fan, X.; van Westen Cees, J.; Huang, R.; Xu, Q.; Tang, C.; Wang, G. Distribution pattern of earthquake−induced landslides triggered by the 12 May 2008 Wenchuan earthquake. Geomorphology 2011, 133, 152–167. [Google Scholar] [CrossRef]
  16. Sato, H.P.; Hasegawa, H.; Fujiwara, S.; Tobita, M.; Koarai, M.; Une, H.; Iwahashi, J. Interpretation of landslide distribution triggered by the 2005 Northern Pakistan earthquake using SPOT−5 imagery. Landslides 2007, 4, 113–122. [Google Scholar] [CrossRef]
  17. Lu, D.; Mausel, P.; Brondízio, E.; Moran, E. Change detection techniques. Int. J. Remote Sens. 2004, 25, 2365–2401. [Google Scholar] [CrossRef]
  18. Li, S.; Hua, H. Automatic recognition of landslides based on change detection. In Proceedings of the International Symposium on Photoelectronic Detection and Imaging 2009: Advances in Imaging Detectors and Applications, Beijing, China, 17–19 June 2009; Volume 7384. [Google Scholar] [CrossRef]
  19. Li, Z.; Shi, W.; Myint, S.; Lu, P.; Wang, Q. Semi−automated landslide inventory mapping from bitemporal aerial photographs using change detection and level set method. Remote Sens. Environ. 2016, 175, 215–230. [Google Scholar] [CrossRef]
  20. Plank, S.; Twele, A.; Martinis, S. Landslide Mapping in Vegetated Areas Using Change Detection Based on Optical and Polarimetric SAR Data. Remote Sens. 2016, 8, 307. [Google Scholar] [CrossRef] [Green Version]
  21. Rodriguez, K.; Weissel, J.; Kim, Y. Classification of landslide surfaces using fully polarimetric SAR: Examples from Taiwan. IEEE Geosci. Remote Sens. Lett. 2002, 5, 2918–2920. [Google Scholar] [CrossRef]
  22. Yonezawa, C.; Watanabe, M.; Saito, G. Polarimetric Decomposition Analysis of ALOS PALSAR Observation Data before and after a Landslide Event. Remote Sens. 2012, 4, 2314–2328. [Google Scholar] [CrossRef] [Green Version]
  23. Shibayama, T.; Yamaguchi, Y.; Yamada, H. Polarimetric Scattering Properties of Landslides in Forested Areas and the Dependence on the Local Incidence Angle. Remote Sens. 2015, 7, 15424–15442. [Google Scholar] [CrossRef] [Green Version]
  24. Raspini, F.; Ciampalini, A.; Del Conte, S.; Lombardi, L.; Nocentini, M.; Gigli, G.; Ferretti, A.; Casagli, N. Exploitation of Amplitude and Phase of Satellite SAR Images for Landslide Mapping: The Case of Montescaglioso (South Italy). Remote Sens. 2015, 7, 14576–14596. [Google Scholar] [CrossRef] [Green Version]
  25. Mountrakis, G.; Im, J.; Ogole, C. Support vector machines in remote sensing: A review. ISPRS J. Photogramm. Remote Sens. 2011, 66, 247–259. [Google Scholar] [CrossRef]
  26. Myles, A.J.; Feudale, R.N.; Liu, Y.; Woody, N.A.; Brown, S.D. An introduction to decision tree modeling. J. Chemom. A J. Chemom. Soc. 2004, 18, 275–285. [Google Scholar] [CrossRef]
  27. Breiman, L. Random forests. Mach. Learn. 2001, 45, 5–32. [Google Scholar] [CrossRef] [Green Version]
  28. Atkinson, P.M.; Tatnall, A.R.L. Introduction neural networks in remote sensing. Int J Remote Sens. 1997, 18, 699–709. [Google Scholar] [CrossRef]
  29. Maxwell, A.E.; Warner, T.A.; Fang, F. Implementation of machine-learning classification in remote sensing: An applied review. Int. J. Remote Sens. 2018, 39, 2784–2817. [Google Scholar] [CrossRef] [Green Version]
  30. Lary, D.; Alavi, A.; Gandomi, S.; Walker, A. Machine learning in geosciences and remote sensing. Geosci. Front. 2016, 7, 3–10. [Google Scholar] [CrossRef] [Green Version]
  31. Mahesh, B. Machine Learning Algorithms—A Review. Int. J. Sci. Res. 2019, 9, 381–386. [Google Scholar] [CrossRef]
  32. Zhou, X.; Wen, H.; Zhang, Y.; Xu, J.; Zhang, W. Landslide susceptibility mapping using hybrid random forest with GeoDetector and RFE for factor optimization. Geosci. Front. 2021, 12, 101211. [Google Scholar] [CrossRef]
  33. Chen, T.; Trinder, J.C.; Niu, R. Object−Oriented Landslide Mapping Using ZY−3 Satellite Imagery, Random Forest and Mathematical Morphology, for the Three−Gorges Reservoir, China. Remote Sens. 2017, 9, 333. [Google Scholar] [CrossRef] [Green Version]
  34. Ghorbanzadeh, O.; Blaschke, T.; Gholamnia, K.; Meena, S.R.; Tiede, D.; Aryal, J. Evaluation of Different Machine Learning Methods and Deep−Learning Convolutional Neural Networks for Landslide Detection. Remote Sens. 2019, 11, 196. [Google Scholar] [CrossRef] [Green Version]
  35. Zhu, X.; Tuia, D.; Mou, L.; Xia, G.; Zhang, L.; Xu, F.; Fraundorfer, F. Deep Learning in Remote Sensing: A Comprehensive Review and List of Resources. IEEE Geosci. Remote Sens. Lett. 2017, 5, 8–36. [Google Scholar] [CrossRef] [Green Version]
  36. Milletari, F.; Ahmadi, S.; Kroll, C.; Plate, K.; Rozanski, V.; Maiostre, M.; Levin, J.; Dietrich, O.; Ertl-Wagner, B.; Bötzel, K.; et al. Hough−CNN: Deep learning for segmentation of deep brain regions in MRI and ultrasound. Comput. Vis. Image Underst. 2017, 164, 92–102. [Google Scholar] [CrossRef] [Green Version]
  37. Hesamian, M.; Jia, W.; He, X.; Kennedy, P. Deep Learning Techniques for Medical Image Segmentation: Achievements and Challenges. Digit. Imaging 2019, 32, 582–596. [Google Scholar] [CrossRef] [Green Version]
  38. Shahri, A.; Moud, F. Landslide susceptibility mapping using hybridized block modular intelligence model. Bull. Eng. Geol. Environ. 2021, 80, 267–284. [Google Scholar] [CrossRef]
  39. Ji, S.; Yu, D.; Shen, C.; Li, W.; Xu, Q. Landslide detection from an open satellite imagery and digital elevation model dataset using attention boosted convolutional neural networks. Landslides 2020, 17, 1337–1352. [Google Scholar] [CrossRef]
  40. Shi, W.; Zhang, M.; Ke, H.; Fang, X.; Zhan, Z.; Chen, S. Landslide Recognition by Deep Convolutional Neural Network and Change Detection. IEEE Trans. Geosci. Remote Sens. 2021, 59, 4654–4672. [Google Scholar] [CrossRef]
  41. Sameen, M.; Pradhan, B. Landslide Detection Using Residual Networks and the Fusion of Spectral and Topographic Information. IEEE Access. 2019, 7, 114363–114373. [Google Scholar] [CrossRef]
  42. Ronneberger, O.; Fischer, P.; Brox, T. U−Net: Convolutional Networks for Biomedical Image Segmentation. In Proceedings of the 18th Medical Image Computing and Computer−Assisted Intervention (MICCAI 2015), Munich, Germany, 5–9 October 2015; pp. 234–241. [Google Scholar] [CrossRef]
  43. Soares, L.; Dias, H.; Grohmann, C. Landslide Segmentation with U−Net: Evaluating Different Sampling Methods and Patch Sizes. arXiv 2020, arXiv:2007.06672. [Google Scholar] [CrossRef]
  44. Zhang, P.; Xu, C.; Ma, S.; Shao, X.; Tian, Y.; Wen, B. Automatic Extraction of Seismic Landslides in Large Areas with Complex Environments Based on Deep Learning: An Example of the 2018 Iburi Earthquake, Japan. Remote Sens. 2020, 12, 3992. [Google Scholar] [CrossRef]
  45. Qi, W.; Wei, M.; Yang, W.; Xu, C.; Ma, C. Automatic Mapping of Landslides by the ResU−Net. Remote Sens. 2020, 12, 487. [Google Scholar] [CrossRef]
  46. Yi, Y.; Zhang, W. A New Deep−Learning−Based Approach for Earthquake−Triggered Landslide Detection from Single−Temporal RapidEye Satellite Imagery. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2020, 13, 6166–6176. [Google Scholar] [CrossRef]
  47. Su, Z.; Chow, J.; Tan, P.; Wu, J.; Ho, Y.; Wang, Y. Deep convolutional neural network-based pixel-wise landslide inventory mapping. Landslides 2021, 18, 1421–1443. [Google Scholar] [CrossRef]
  48. Ghorbanzadeh, O.; Crivellari, A.; Ghamisi, P.; Shahabi, H.; Blaschke, T. A comprehensive transferability evaluation of U−Net and ResU−Net for landslide detection from Sentinel−2 data (case study areas from Taiwan, China, and Japan). Sci. Rep. 2021, 11, 14629. [Google Scholar] [CrossRef] [PubMed]
  49. Prakash, N.; Manconi, A.; Loew, S. Mapping Landslides on EO Data: Performance of Deep Learning Models vs. Traditional Machine Learning Models. Remote Sens. 2020, 12, 346. [Google Scholar] [CrossRef] [Green Version]
  50. Ghorbanzadeh, O.; Gholamnia, K.; Ghamisi, P. The application of ResU−net and OBIA for landslide detection from multi−temporal sentinel−2 images. Big Earth Data 2022, 1–26. [Google Scholar] [CrossRef]
  51. Ghorbanzadeh, O.; Shahabi, H.; Crivellari, A.; Homayouni, S.; Blaschke, T.; Ghamisi, P. Landslide detection using deep learning and object−based image analysis. Landslides 2022, 19, 929–939. [Google Scholar] [CrossRef]
  52. Rahimzad, M.; Homayouni, S.; Alizadeh Naeini, A.; Nadi, S. An Efficient Multi−Sensor Remote Sensing Image Clustering in Urban Areas via Boosted Convolutional Autoencoder (BCAE). Remote Sens. 2021, 13, 2501. [Google Scholar] [CrossRef]
  53. Shahabi, H.; Rahimzad, M.; Tavakkoli Piralilou, S.; Ghorbanzadeh, O.; Homayouni, S.; Blaschke, T.; Lim, S.; Ghamisi, P. Unsupervised Deep Learning for Landslide Detection from Multispectral Sentinel−2 Imagery. Remote Sens. 2021, 13, 4698. [Google Scholar] [CrossRef]
  54. Zhou, Z.; Rahman Siddiquee, M.; Tajbakhsh, N.; Liang, J. UNet++: A Nested U−Net Architecture for Medical Image Segmentation. In Proceedings of the 4th International Workshop, DLMIA 2018, and 8th International Workshop, ML−CDS 2018, Held in Conjunction with MICCAI 2018, Granada, Spain, 20 September 2018; pp. 3–11. [Google Scholar] [CrossRef]
  55. Yamagishi, H.; Ito, Y.; Kawamura, M. Characteristics of deep−seated landslides of Hokkaido: Analyses of a database of landslides of Hokkaido, Japan. Environ. Eng. Geosci. 2002, 8, 35–46. [Google Scholar] [CrossRef]
  56. Yamagishi, H.; Yamazaki, F. Landslides by the 2018 Hokkaido Iburi−Tobu Earthquake on September 6. Landslides 2018, 15, 2521–2524. [Google Scholar] [CrossRef] [Green Version]
  57. Zhang, S.; Li, R.; Wang, F.; Iio, K. Characteristics of landslides triggered by the 2018 Hokkaido Eastern Iburi earthquake, Northern Japan. Landslides 2019, 16, 1691–1708. [Google Scholar] [CrossRef]
  58. Shao, X.; Ma, S.; Xu, C.; Zhang, P.; Wen, B.; Tian, Y.; Zhou, Q.; Cui, Y. Planet Image−Based Inventorying and Machine Learning−Based Susceptibility Mapping for the Landslides Triggered by the 2018 Mw6.6 Tomakomai, Japan Earthquake. Remote Sens. 2019, 11, 978. [Google Scholar] [CrossRef] [Green Version]
  59. Pal, M. Random Forest classifier for remote sensing classification. Int J Remote Sens. 2005, 26, 217–222. [Google Scholar] [CrossRef]
  60. Jaiswal, J.; Samikannu, R. Application of Random Forest Algorithm on Feature Subset Selection and Classification and Regression. In Proceedings of the World Congress on Computing and Communication Technologies (WCCCT,2017), Tiruchirappalli, India, 2–4 February 2017; pp. 65–68. [Google Scholar] [CrossRef]
  61. Dalal, N.; Triggs, B. Histograms of oriented gradients for human detection. In Proceedings of the 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'05), San Diego, CA, USA, 20–25 June 2005; pp. 886–893. [Google Scholar] [CrossRef] [Green Version]
  62. Ojala, T.; Pietikäinen, M.; Harwood, D. A comparative study of texture measures with classification based on featured distributions. Pattern Recognit. 1996, 29, 51–59. [Google Scholar] [CrossRef]
  63. Santurkar, S.; Tsipras, D.; Ilyas, A.; Madry, A. How does batch normalization help optimization? arXiv 2018, arXiv:1805.11604. [Google Scholar] [CrossRef]
  64. Lu, L.; Shin, Y.; Su, Y.; Karniadakis, G.E. Dying relu and initialization: Theory and numerical examples. arXiv 2019, arXiv:1903.06733. [Google Scholar] [CrossRef]
  65. Milletari, F.; Navab, N.; Ahmadi, S. V−Net: Fully Convolutional Neural Networks for Volumetric Medical Image Segmentation. In Proceedings of the 2016 Fourth International Conference on 3D Vision (3DV), Stanford, CA, USA, 25–28 October 2016; pp. 565–571. [Google Scholar] [CrossRef] [Green Version]
  66. Garcia−Garcia, A.; Orts−Escolano, S.; Oprea, S.; Villena−Martinez, V.; Garcia−Rodriguez, J. A review on deep learning techniques applied to semantic segmentation. arXiv 2017, arXiv:1704.06857. [Google Scholar] [CrossRef]
  67. Susmaga, R. Confusion matrix visualization. In Intelligent Information Processing and Web Mining; Springer: Berlin/Heidelberg, Germany, 2004; Volume 25, pp. 107–116. [Google Scholar] [CrossRef]
  68. Razavi, S.; Jakeman, A.; Saltelli, A.; Prieur, C.; Iooss, B.; Borgonovo, E.; Plischke, E.; Piano, S.L.; Iwanaga, T.; Becker, W.; et al. The future of sensitivity analysis: An essential discipline for systems modeling and policy support. Environ. Model. Softw. 2021, 137, 104954. [Google Scholar] [CrossRef]
  69. Saltelli, A.; Aleksankina, K.; Becker, W.; Fennell, P.; Ferretti, F.; Holst, N.; Li, S.; Wu, Q. Why so many published sensitivity analyses are false: A systematic review of sensitivity analysis practices. Environ. Model. Softw. 2019, 114, 29–39. [Google Scholar] [CrossRef]
  70. Asheghi, R.; Hosseini, S.A.; Saneie, M.; Shahri, A.A. Updating the neural network sediment load models using different sensitivity analysis methods: A regional application. J. Hydroinform. 2020, 22, 562–577. [Google Scholar] [CrossRef] [Green Version]
  71. Yakubovskiy, P. Segmentation Models Pytorch. GitHub Repository, 2020. Available online: https://github.com/qubvel/segmentation_models.pytorch (accessed on 11 March 2022).
Figure 1. Location of the study area (a) and its topography (b). Aerial photos of landslides (c,d). The solid white boxes in (d) show the locations of landslides.
Figure 2. Map showing landslides identified by manual visual interpretation based on ArcMap. Brown polygons are the boundaries of individual landslides. The upper left of the red line represents the training and validation data, and the lower left represents the prediction data. (a–c) show the specific ranges of landslides.
Figure 3. Random forest algorithm. Sub−datasets are generated from the dataset by random selection and fed into several decision trees, each of which outputs its own classification result. The final result is obtained by majority vote.
Figure 4. Detailed procedures in RF. The input is the labeled image, which is subsequently normalized and equalized. After training, the model outputs the prediction results and calculates the accuracy.
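As a concrete illustration of the RF workflow in Figures 3 and 4, the snippet below sketches pixel-wise landslide classification with scikit-learn; the feature set (raw image bands per pixel) and hyper-parameters are illustrative assumptions, and texture descriptors such as HOG or LBP [61,62] could be appended as extra feature columns.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def train_rf(image, labels, n_trees=100):
    """image: (H, W, bands) array; labels: (H, W) array, 1 = landslide, 0 = background."""
    X = image.reshape(-1, image.shape[-1])   # one feature vector per pixel
    y = labels.reshape(-1)
    # Each tree is fitted on a bootstrap sub-dataset drawn from (X, y).
    rf = RandomForestClassifier(n_estimators=n_trees, n_jobs=-1, random_state=0)
    rf.fit(X, y)
    return rf

def predict_rf(rf, image):
    """Classify every pixel; the forest's majority vote gives the final label."""
    X = image.reshape(-1, image.shape[-1])
    return rf.predict(X).reshape(image.shape[:2])
```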
Figure 5. The architecture of U−Net++ [54]. The purple part represents the feature extraction layers, which are replaced with ResNet50. X^{0,0} is the input data, and X^{0,4} is the final binary result. The skip connections fuse the feature maps generated from the upper and current layers.
Figure 6. The detailed structure of U−Net++, illustrated with the feature maps X^{i,j}, X^{i+1,j}, and X^{i,j+1}. The left part is the structure of ResNet50; *N denotes the number of repetitions of the blocks in each stage, which is [3, 4, 6, 3].
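As a minimal sketch of how the network in Figures 5 and 6 can be assembled, the snippet below builds U−Net++ with a ResNet50 encoder and ImageNet pre-trained weights using the segmentation_models.pytorch library [71]; the input channel count (3, for an RGB patch) is an assumption, and all other arguments are left at the library defaults.

```python
import torch
import segmentation_models_pytorch as smp

# Nested U-Net (U-Net++) with a ResNet50 encoder; "imagenet" loads the
# pre-trained weights used for transfer learning.
model = smp.UnetPlusPlus(
    encoder_name="resnet50",
    encoder_weights="imagenet",
    in_channels=3,      # assumption: 3-band (RGB) input patches
    classes=1,          # single-channel binary output (landslide vs. background)
)

# Forward pass on one 256 x 256 patch (the sample size that performed best).
x = torch.randn(1, 3, 256, 256)
with torch.no_grad():
    logits = model(x)   # shape: (1, 1, 256, 256); apply a sigmoid for probabilities
```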
Figure 7. The architecture of the basic blocks of ResNet50. The branch with the 1×1 Conv represents the Conv block, which is used at the beginning of every stage in ResNet50 to expand the channels of the feature maps. The branch without the 1×1 Conv is the identity block, which follows the Conv block in every stage.
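For readers unfamiliar with the two block types in Figure 7, here is a small illustrative PyTorch re-implementation of the ResNet50 bottleneck (a sketch, not the authors' code): the Conv block places a 1×1 convolution on the shortcut to change the channel count, while the identity block passes the shortcut through unchanged.

```python
import torch
import torch.nn as nn

class Bottleneck(nn.Module):
    """1x1 -> 3x3 -> 1x1 bottleneck; the shortcut decides Conv vs. identity block."""
    def __init__(self, in_ch, mid_ch, out_ch, stride=1):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(in_ch, mid_ch, 1, bias=False),
            nn.BatchNorm2d(mid_ch), nn.ReLU(inplace=True),
            nn.Conv2d(mid_ch, mid_ch, 3, stride=stride, padding=1, bias=False),
            nn.BatchNorm2d(mid_ch), nn.ReLU(inplace=True),
            nn.Conv2d(mid_ch, out_ch, 1, bias=False),
            nn.BatchNorm2d(out_ch),
        )
        if stride != 1 or in_ch != out_ch:
            # Conv block: a 1x1 convolution reshapes the shortcut.
            self.shortcut = nn.Sequential(
                nn.Conv2d(in_ch, out_ch, 1, stride=stride, bias=False),
                nn.BatchNorm2d(out_ch),
            )
        else:
            # Identity block: the shortcut is passed through unchanged.
            self.shortcut = nn.Identity()
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.relu(self.body(x) + self.shortcut(x))

# First stage of ResNet50 (cycle count 3): one Conv block, then identity blocks.
stage = nn.Sequential(Bottleneck(64, 64, 256),
                      Bottleneck(256, 64, 256),
                      Bottleneck(256, 64, 256))
```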
Figure 8. Data augmentation. All samples are augmented with the GDAL library in Python, mainly by flipping the images and rotating them by 90°, 180°, and 270° clockwise.
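A minimal sketch of this augmentation step is shown below, assuming a GeoTIFF patch read with GDAL (the file name is hypothetical) and NumPy for the flips and rotations; the exact set of transforms and the step that writes the augmented arrays back to disk are omitted or assumed.

```python
import numpy as np
from osgeo import gdal

def augment(patch):
    """patch: (bands, H, W) array; return flipped and rotated copies."""
    copies = [np.flip(patch, axis=2),               # horizontal flip
              np.flip(patch, axis=1)]               # vertical flip
    for k in (1, 2, 3):                             # rotations by 90, 180, 270 degrees
        copies.append(np.rot90(patch, k=k, axes=(1, 2)))
    return copies

ds = gdal.Open("sample_0001.tif")                   # hypothetical sample patch
patch = ds.ReadAsArray()                            # shape: (bands, H, W)
augmented = augment(patch)
```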
Figure 9. Prediction results of the three models.
Figure 10. Comparison of the models’ sensitivity to noise. (a,b) represent two subsets of the results that contain noise (e.g., buildings, bare soil, and roads); 1 represents the result of the random forest, 2 the result of U−Net++512, and 3 the result of U−Net++256.
Figure 11. Comparison of results using different models. (a–d) Planet image with ground truth of landslides (red boundary polygons); identified landslides (peach polygons) by RF (a1–d1), U−Net++512 (a2–d2), and U−Net++256 (a3–d3).
Figure 12. Visualization of the confusion matrix. (a–d) Planet image with ground truth of landslides (red boundary polygons); identified landslides (peach polygons) by RF (a1–d1), U−Net++512 (a2–d2), and U−Net++256 (a3–d3). The yellow and green parts represent FN and FP, respectively.
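An overlay of this kind can be reproduced with a few lines of NumPy and Matplotlib; the sketch below is illustrative, the colour values are assumptions chosen to mirror the caption (yellow for FN, green for FP, peach for detected landslides), and pred_mask/gt_mask are hypothetical binary arrays.

```python
import numpy as np
import matplotlib.pyplot as plt

def overlay(pred, truth):
    """Colour-code each pixel by its confusion-matrix category."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    rgb = np.ones(pred.shape + (3,))            # TN: white background
    rgb[pred & truth]  = (0.96, 0.76, 0.69)     # TP: peach (detected landslide)
    rgb[~pred & truth] = (1.00, 1.00, 0.00)     # FN: yellow (missed landslide)
    rgb[pred & ~truth] = (0.00, 0.80, 0.00)     # FP: green (false alarm)
    return rgb

# plt.imshow(overlay(pred_mask, gt_mask)); plt.axis("off"); plt.show()
```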
Figure 13. Prediction results. (a1–o1) The landslides detected by U−Net++256 (orange zones). (a2–o2) Ground truth (red polygons).
Table 1. Confusion matrix for binary classification [67]; 1 represents a landslide pixel, and 0 represents a background pixel.

                  Predicted 1    Predicted 0
Ground Truth 1    TP             FN
Ground Truth 0    FP             TN
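As a minimal sketch, the four cells of Table 1 can be counted directly from a predicted mask and a ground-truth mask (binary NumPy arrays; the function name is illustrative):

```python
import numpy as np

def confusion_counts(pred, truth):
    """pred, truth: binary (H, W) masks with 1 = landslide, 0 = background."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    tp = int(np.sum(pred & truth))     # landslide pixels correctly detected
    fn = int(np.sum(~pred & truth))    # landslide pixels missed
    fp = int(np.sum(pred & ~truth))    # background pixels flagged as landslide
    tn = int(np.sum(~pred & ~truth))   # background pixels correctly rejected
    return tp, fn, fp, tn
```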
Table 2. Performance (%) of RF, U−Net++256, and U−Net++512.

Type               Accuracy   Precision   Recall    F1−Score   Kappa     IoU
RF                 96.03      62.81       64.03     63.42      61.32     46.43
U−Net++512         96.88      71.92       68.84     70.35      68.70     54.26
Compared to RF     ↑0.85      ↑9.11       ↑4.81     ↑6.93      ↑7.38     ↑7.83
U−Net++256         97.38      75.26       76.36     75.80      74.42     61.04
Compared to RF     ↑1.35      ↑12.45      ↑12.33    ↑12.38     ↑13.10    ↑14.61
Compared to 512    ↑0.50      ↑3.34       ↑7.52     ↑5.45      ↑5.72     ↑6.78
Table 3. Values of the confusion matrix for the different models.

Type          TP           FN          FP          TN            Total
RF            1,480,314    831,425     876,568     39,819,693    43,008,000
U−Net++512    1,591,441    720,298     621,240     40,075,021    43,008,000
U−Net++256    1,765,174    546,565     580,304     40,115,957    43,008,000
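The metrics in Table 2 follow directly from these counts; the sketch below recomputes them for the U−Net++256 row of Table 3 using the standard definitions of precision, recall, F1, IoU, and Cohen's Kappa, and the printed values match Table 2 to two decimal places.

```python
def metrics(tp, fn, fp, tn):
    total = tp + fn + fp + tn
    accuracy  = (tp + tn) / total
    precision = tp / (tp + fp)
    recall    = tp / (tp + fn)
    f1        = 2 * precision * recall / (precision + recall)
    iou       = tp / (tp + fn + fp)
    # Cohen's Kappa: observed agreement vs. agreement expected by chance.
    p_e = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / total ** 2
    kappa = (accuracy - p_e) / (1 - p_e)
    return accuracy, precision, recall, f1, kappa, iou

# U-Net++256 row of Table 3:
print(metrics(1_765_174, 546_565, 580_304, 40_115_957))
# ~ (0.9738, 0.7526, 0.7636, 0.7580, 0.7441, 0.6104)
```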
Table 4. Performance (%) of U−Net++256 with different network depths. An asterisk (*) marks the maximum value of each indicator among the three depths.

Depth    Precision    Recall     F1−Score    Kappa      IoU        Time
3        76.33 *      76.11      76.22 *     74.87 *    61.58 *    6.7 h
4        73.95        75.44      74.69       73.24      59.60      10 h
5        75.26        76.36 *    75.80       74.42      61.04      13.3 h
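One way to obtain the shallower variants compared in Table 4 is to shorten the encoder and decoder; the snippet below is a hedged sketch of a depth-3 U−Net++ with segmentation_models.pytorch [71], where encoder_depth and decoder_channels are library parameters and the specific decoder widths are assumptions rather than the authors' values.

```python
import segmentation_models_pytorch as smp

# Shallower (3-stage) U-Net++; decoder_channels must provide one width per
# decoder stage. The widths below are assumptions, not the authors' values.
model_depth3 = smp.UnetPlusPlus(
    encoder_name="resnet50",
    encoder_weights="imagenet",
    encoder_depth=3,
    decoder_channels=(64, 32, 16),
    in_channels=3,
    classes=1,
)
```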
Table 5. Performance (%) of U−Net++ in the Bijie landslide dataset.

Type          Accuracy    Precision    Recall    F1−Score    Kappa    IoU
U−Net++256    95.86       89.60        92.30     90.93       88.25    83.37