1. Introduction
Thailand ranks sixth among rice/paddy-producing countries from 2017 to 2022 and third in terms of exports in 2022 [1]. However, disasters such as floods and droughts wreak havoc on rice production and can lower the country's overall export ranking [2]. The toll hits farmers hard, who then claim compensation from the government. The existing system for verifying the legitimacy of farmers' claims relies on field visits by authorities, which are ambiguous, expensive, and slow. In this paper, we explore different sensing techniques to build an automated system for precise per-parcel detection of disasters that will assist the decision processes of government and insurance agencies.
Remotely sensed satellite images are a fundamental data source for delineating floods and droughts over large areas. These images are provided through different platforms, such as the Copernicus Open Access Hub [3], Google Earth Engine (GEE) [4], the United States Geological Survey (USGS) [5], EO Browser [6], etc. GEE is a cloud-based platform that offers a data catalog of multi-petabyte geospatial datasets and satellite images [7]. It includes Sentinel-1 [8] and Sentinel-2A [9] data for analyzing disasters such as floods and droughts. These images carry multi-spectral bands. For instance, Sentinel-2A acquires 13 spectral bands [10] that allow an effective visualization of flood and drought with empirical indices, such as the Normalized Difference Vegetation Index (NDVI) [11] and the Normalized Difference Water Index (NDWI) [12], or with machine learning (ML) algorithms, all of which are available in the GEE code editor. The code editor in GEE provides a web-based interactive development environment (IDE) for writing scripts that perform geospatial work quickly and with less complexity [4].
Sentinel-2 satellites produce images with spatial resolutions ranging from 10 to 60 m across 13 spectral bands. The Sentinel-2A product is an orthorectified, atmospherically corrected surface reflectance image provided by the Sentinel-2 satellite [13]. Of the 13 bands in a Sentinel-2A image, the visible red and green bands and the near-infrared (NIR) band are used to derive indices such as NDVI and NDWI. These bands have a spatial resolution of 10 m, which is relatively high compared to the 30 m visible and NIR bands provided by Landsat-8. Therefore, Sentinel-2A provides higher-resolution NDVI and NDWI information. Cavallo et al. (2021) [14] used both datasets to monitor continuous floods, combining NDWI, the Modified Normalized Difference Water Index (MNDWI), and NDVI for automatic classification, where Sentinel-2 achieved higher accuracy (98.5%) than Landsat-8 (96.5%). Astola et al. (2019) [15] compared Sentinel-2 and Landsat-8 images in the northern region to predict forest variables, and Sentinel-2 outperformed Landsat-8 even though the Sentinel-2 data were downsampled to a 30 m pixel resolution to match the Landsat-8 data. However, clouds and cloud shadows present in optical images affect the spectral bands [16] and alter vegetation index values [17], such as NDVI and NDWI.
In such cases, Synthetic Aperture Radar (SAR) is instrumental in gathering the necessary data regardless of cloud cover [18]. SAR penetrates clouds and provides observations of earth features obscured from other sensing systems [19]. The Sentinel-1 satellite carries a C-band SAR instrument that collects Ground Range Detected (GRD) scenes, which are available in GEE. The Sentinel-1 SAR GRD C-band has a wavelength of approximately 6 cm [18]. Sentinel-1 images are used to detect disasters by applying ML algorithms, such as Classification and Regression Trees (CART) [20], Support Vector Machine (SVM) [21], Random Forest (RF) [22], etc.
Using mobile images is an alternative way to detect floods and droughts under cloud cover. In our previous work [23], we implemented two deep learning (DL) methods to detect flood and drought events from the mobile photographs submitted by farmers for insurance claims: pixel-based semantic segmentation [24] using DeepLabv3+ [25] and object-based scene recognition using PlacesCNN [26]. These two DL networks are widely used for semantic segmentation and image recognition of ground and remotely sensed images [27].
The objective of this paper is to validate farmers' insurance claims during disaster events using sensing methods. Remotely sensed optical imagery from Sentinel-2A and SAR imagery from Sentinel-1 are the main priorities in building such methods. NDVI and NDWI are derived from the optical imagery, and an ML method is applied to the SAR images. The experiments show that ML on SAR is an effective method for remote sensing images as it is unaffected by clouds. However, SAR images are not always available. Therefore, the optical-sensor methods need to be reinforced during cloud coverage. To increase the efficiency of the optical-image-based method under clouds, we use mobile images of the land plots. A DL method is developed to detect disaster events from the mobile images, and its predictions are combined with the NDVI and NDWI predictions from Sentinel-2A. The combined method therefore provides an alternative to the SAR-based method. The overall method accurately and efficiently evaluates disaster claims and ensures that no farmers are left behind in the decision processes involved in compensation.
3. Methodology
We perform our work in the GEE Code Editor, which supports training and visualization, and use the client libraries in Python to fully automate the system. Our proposed methods consist of the following steps: (i) acquisition of the Sentinel-2A dataset and cloud masking, (ii) thresholding NDVI and NDWI values in Sentinel-2A, (iii) scene recognition on mobile images, (iv) acquisition of the Sentinel-1 GRD dataset, and (v) supervised learning with the CART model and classification on Sentinel-1 GRD images.
Figure 2 presents the detailed workflow, and the following sections describe the steps in order.
3.1. Acquisition of Sentinel-2A Dataset and Cloud Masking
A total of 859 claims are received for flood and drought events together. We access the land plots in Sentinel-2A images through the GEE API using the polygons of the land plots provided by the farmers with their claims. We select an interval of five days for each report, starting from the date of the disaster (flood or drought), as the revisit time of the Sentinel-2 satellites is five days. To tackle the cloud cover problem, we select the least cloudy image from the image collection for each claim. However, we still need to mask clouds from the satellite images to reduce false detections, as cloud cover affects the band values and propagates the error into the NDVI and NDWI calculations.
GEE offers cloud masking for Sentinel-2A images with the “Sentinel-2 Cloud Probability” (also known as s2cloudless) dataset [50]. For the same date and region of interest, the Sentinel-2 cloud probability image collection is filtered and joined with the Sentinel-2A image collection. After adding the cloud probability layer, we derive a cloud mask as a band of the Sentinel-2A image. We also mask cloud shadows by identifying dark pixels. However, not all dark pixels are shadow pixels, so we project shadows from the clouds and intersect the projected cloud shadows with the dark pixels to identify true shadows. Finally, we mask both the clouds and the cloud shadows.
Figure 3 clarifies how the cloud and cloud shadow layer is implemented and then masked from the Sentinel-2A image in GEE.
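For illustration, a minimal sketch of this join-and-mask step with the GEE Python API is given below, following the widely used s2cloudless pattern; the probability threshold, NIR darkness threshold, and projection distance are illustrative assumptions, not necessarily the exact settings of our pipeline.

```python
# Sketch of s2cloudless-based cloud and cloud shadow masking (assumed values).
import ee

ee.Initialize()

CLD_PROB_THRESH = 60   # cloud probability (%) above which a pixel is cloud
NIR_DRK_THRESH = 0.15  # SR NIR reflectance below which a pixel may be shadow
CLD_PRJ_DIST = 1       # maximum cloud-to-shadow projection distance (km)

def join_s2_with_cloud_prob(aoi, start, end):
    """Join Sentinel-2A SR images with their s2cloudless probability images."""
    s2 = (ee.ImageCollection('COPERNICUS/S2_SR')
          .filterBounds(aoi).filterDate(start, end))
    prob = (ee.ImageCollection('COPERNICUS/S2_CLOUD_PROBABILITY')
            .filterBounds(aoi).filterDate(start, end))
    join = ee.Join.saveFirst('s2cloudless')
    cond = ee.Filter.equals(leftField='system:index', rightField='system:index')
    return ee.ImageCollection(join.apply(s2, prob, cond))

def mask_clouds_and_shadows(img):
    """Mask clouds and projected cloud shadows out of one joined image."""
    cld_prob = ee.Image(img.get('s2cloudless')).select('probability')
    is_cloud = cld_prob.gt(CLD_PROB_THRESH)
    # Dark, non-water NIR pixels are shadow candidates (SCL class 6 = water).
    not_water = img.select('SCL').neq(6)
    dark = img.select('B8').lt(NIR_DRK_THRESH * 1e4).multiply(not_water)
    # Project clouds along the solar azimuth, then intersect with dark pixels.
    azimuth = ee.Number(90).subtract(
        ee.Number(img.get('MEAN_SOLAR_AZIMUTH_ANGLE')))
    proj = (is_cloud.directionalDistanceTransform(azimuth, CLD_PRJ_DIST * 10)
            .reproject(crs=img.select(0).projection(), scale=100)
            .select('distance').mask())
    shadow = proj.multiply(dark)
    return img.updateMask(is_cloud.Or(shadow).Not())
```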
3.2. Thresholding NDVI and NDWI Values in Sentinel-2A
The ranges of NDVI and NDWI values describe the characteristics of the land surface. Based on previous studies [51,52,53], we divide the range of NDVI values into the following categories: NDVI below 0.17 as “flood”; NDVI between 0.17 and 0.45 as “drought”; and NDVI above 0.45 as “green paddy”. Using Equation (1), we find the NDVI value of each pixel in a land plot. Then, we compute the area coverage of the three categories for each land plot.
We follow similar steps using per-pixel NDWI values calculated with Equation (2), but with different ranges and categories. Although the categorization of NDWI values is not abundant in the literature, [35] divides the range into water surface, flooding, moderate drought, and high drought. We use these interpretations to place NDWI values below 0 in the “drought” category and NDWI values above 0 in the “flood” category. Subsequently, we determine the proportionate area of the “flood” and “drought” categories for each land plot to visualize its condition, as sketched below.
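A minimal sketch of the thresholding and area computation with the GEE Python API follows, assuming the standard index definitions NDVI = (NIR - Red)/(NIR + Red) (Equation (1)) and NDWI = (Green - NIR)/(Green + NIR) (Equation (2)); plot_geom is a hypothetical stand-in for a claimed land-plot polygon.

```python
# Sketch of NDVI/NDWI thresholding and per-category area measurement.
import ee

def index_areas(img, plot_geom):
    """img: cloud-masked Sentinel-2A image; plot_geom: land-plot polygon."""
    ndvi = img.normalizedDifference(['B8', 'B4'])  # (NIR - Red)/(NIR + Red)
    ndwi = img.normalizedDifference(['B3', 'B8'])  # (Green - NIR)/(Green + NIR)

    categories = {
        # NDVI: < 0.17 flood, 0.17-0.45 drought, > 0.45 green paddy.
        'ndvi_flood': ndvi.lt(0.17),
        'ndvi_drought': ndvi.gte(0.17).And(ndvi.lte(0.45)),
        'ndvi_green_paddy': ndvi.gt(0.45),
        # NDWI: < 0 drought, > 0 flood.
        'ndwi_flood': ndwi.gt(0),
        'ndwi_drought': ndwi.lt(0),
    }

    def area_m2(mask):
        # Sum the pixel areas (m^2) where the category mask is 1.
        return (ee.Image.pixelArea().updateMask(mask)
                .reduceRegion(reducer=ee.Reducer.sum(),
                              geometry=plot_geom, scale=10)
                .get('area'))

    return {name: area_m2(mask) for name, mask in categories.items()}
```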
3.3. Scene Recognition using PlacesCNN on Mobile Images
The scene recognition method using PlacesCNN provides one more layer of validation for the farmers' claims, especially in the presence of clouds. PlacesCNN is trained on the Places365-Standard dataset with 365 scene classes. The list of classes, however, does not contain a class named flood or drought. As adopted from our previous work [23], we formulate a decision based on existing classes, such as swamp, lagoon, desert, tundra, etc., that carry the properties of flood and drought. The decisions are formed with the following flood and drought categories:
Flood categories = [‘swamp’, ‘lagoon’, ‘watering hole’, ‘swimming hole’, ‘pond’, ‘canal/natural’]
Drought categories = [‘hayfield’, ‘tundra’, ‘desert road’, ‘desert/vegetation’, ‘archaeological excavation’, ‘trench’]
The pre-trained PlacesCNN model works as a classifier that assigns each mobile image a probability for each of the 365 classes. The cumulative probability of the classes in the flood and drought categories forms the final decision on whether the scene presents a flood or a drought event. If the probability of the flood categories is higher than that of the drought categories, the prediction of the adopted method is flood, and vice versa. If the probabilities of both categories are zero to four decimal places (i.e., below 0.0001), then we consider the event to be neither flood nor drought. A minimal sketch of this aggregation follows.
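The sketch below assumes the publicly released ResNet-18 Places365 checkpoint and label file from the Places365 demo code; the local file names are assumptions, and the class names use the underscores of the Places365 label file.

```python
# Sketch of the flood/drought decision rule on PlacesCNN outputs.
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

FLOOD = ['swamp', 'lagoon', 'watering_hole', 'swimming_hole',
         'pond', 'canal/natural']
DROUGHT = ['hayfield', 'tundra', 'desert_road', 'desert/vegetation',
           'archaeological_excavation', 'trench']

# Pre-trained Places365 ResNet-18 (checkpoint downloaded beforehand; recent
# PyTorch versions may require torch.load(..., weights_only=False)).
model = models.resnet18(num_classes=365)
ckpt = torch.load('resnet18_places365.pth.tar', map_location='cpu')
state = {k.replace('module.', ''): v for k, v in ckpt['state_dict'].items()}
model.load_state_dict(state)
model.eval()

# The 365 class names, one per line, ordered by model output index.
classes = [line.strip().split(' ')[0][3:]   # drop the leading '/x/' prefix
           for line in open('categories_places365.txt')]

preprocess = T.Compose([
    T.Resize((256, 256)), T.CenterCrop(224), T.ToTensor(),
    T.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])])

def classify_claim(path):
    """Return 'flood', 'drought', or 'neither' for one mobile photograph."""
    img = preprocess(Image.open(path).convert('RGB')).unsqueeze(0)
    with torch.no_grad():
        probs = torch.softmax(model(img), dim=1).squeeze(0)
    p = dict(zip(classes, probs.tolist()))
    p_flood = sum(p.get(c, 0.0) for c in FLOOD)
    p_drought = sum(p.get(c, 0.0) for c in DROUGHT)
    if p_flood < 1e-4 and p_drought < 1e-4:  # zero to four decimal places
        return 'neither'
    return 'flood' if p_flood > p_drought else 'drought'
```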
3.4. Acquisition of Sentinel-1 SAR GRD Dataset
We access the Sentinel-1 SAR GRD images of the land plots through the GEE API for 660 flood claims and 199 drought claims. To filter the images, we select an interval of 6 days (12 days for satellite images obtained after 23 December 2021), which equals the revisit time of the Sentinel-1 mission, for each report, starting from the date of the disaster. In addition, we filter by IW mode with VV + VH polarization because of its superiority over the land surface compared with SM mode and EW mode, as discussed earlier.
Unlike Sentinel-2A images, Sentinel-1 images are hardly affected by clouds, which raises questions about the necessity of using mobile images. Sentinel-1 provides ortho-corrected and calibrated products. However, granular noise, such as speckle caused by the interference of signals during back-scattering [54], can still obstruct the detection of disaster events. To make the most of Sentinel-1, we tackle the speckle problem by filtering the images in the GEE API with a function (ee.Image.focal_mean) that smooths the image using a kernel. These pre-processed Sentinel-1 images are used to train a classifier to predict flood and drought areas, as explained in the next section; a minimal sketch of the acquisition and smoothing step is given below.
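The sketch assumes a claim polygon and disaster date; the 50 m smoothing radius is an assumed kernel size rather than our exact setting.

```python
# Sketch of Sentinel-1 SAR GRD filtering and speckle smoothing.
import ee

def get_s1_image(plot_geom, disaster_date, window_days=6):
    """Fetch one IW-mode, VV+VH Sentinel-1 image within the claim window."""
    start = ee.Date(disaster_date)
    col = (ee.ImageCollection('COPERNICUS/S1_GRD')
           .filterBounds(plot_geom)
           .filterDate(start, start.advance(window_days, 'day'))
           .filter(ee.Filter.eq('instrumentMode', 'IW'))
           .filter(ee.Filter.listContains(
               'transmitterReceiverPolarisation', 'VV'))
           .filter(ee.Filter.listContains(
               'transmitterReceiverPolarisation', 'VH')))
    img = ee.Image(col.first()).select(['VV', 'VH'])
    # Reduce speckle with a focal mean over a circular kernel.
    return img.focal_mean(radius=50, kernelType='circle', units='meters')
```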
3.5. Supervised Learning with CART Model and Classification on Sentinel-1 GRD Images
The probability of both flood and drought events occurring on a single satellite image (a single event) is extremely low. Therefore, we choose two satellite images of two dates, one containing a flood and the other a drought. We run a supervised classification on both satellite images by collecting training data to train the classifier. We add different layers for feature collection, each holding a group of related features. However, identifying and labeling features manually on a SAR image is a puzzle. Thus, we obtain an optical satellite image of the same date as the Sentinel-1 SAR GRD image and collect the features manually, then trace the exact same locations in the Sentinel-1 SAR GRD image to collect the same features.
In the case of flood, we collect features from two layers/classes: “flood” and “not flood”. We collect 75 points as a feature collection for the “flood” layer and 49 points for the “not flood” layer in a single Sentinel-1 SAR GRD image. The two classes are merged into a single collection. Then, we select the VV + VH polarization bands for training by extracting the back-scatter values of VV and VH for each class. Afterward, we train the CART classifier with the training data using the ee.Classifier package and apply it to classify the image. However, we have to run the classification on numerous satellite images of different dates, as the floods claimed by the farmers occur on different dates. Thus, we run the classification on several disaster dates, each having a separate satellite image covering the farmers' claimed locations. After the classification, we clip the land plot of each farmer according to the polygon sent by the respective farmer using a GEE function (ee.Image.clip). Subsequently, we measure the area of the flooded and normal regions by counting the pixels belonging to each class. The same procedure is followed for drought land plots with the classes “drought”, “green paddy”, and “neither”, having 43, 28, and 33 training points, respectively. As for flood, the areas of drought, green paddy, and other features are measured by counting pixels.
Table 2 simplifies our discussion about the training data of this section.
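A minimal sketch of the training and per-plot area measurement described above follows; the labeled point collections (flood_pts and not_flood_pts, each with a 'class' property of 1 or 0) are hypothetical names for the features digitized as described.

```python
# Sketch of CART training, classification, and per-plot area measurement.
import ee

def classify_flood(s1_img, flood_pts, not_flood_pts, plot_geom):
    """Return the flooded percentage of one land plot."""
    bands = ['VV', 'VH']
    # Extract VV/VH back-scatter values at the labeled points.
    training = s1_img.select(bands).sampleRegions(
        collection=flood_pts.merge(not_flood_pts),
        properties=['class'], scale=10)

    classifier = ee.Classifier.smileCart().train(
        features=training, classProperty='class', inputProperties=bands)

    # Classify the image, then clip to the farmer's polygon.
    classified = s1_img.select(bands).classify(classifier).clip(plot_geom)

    # Area (m^2) of the flooded class, obtained by summing pixel areas.
    flooded = (ee.Image.pixelArea().updateMask(classified.eq(1))
               .reduceRegion(ee.Reducer.sum(), plot_geom, 10).get('area'))
    total = (ee.Image.pixelArea()
             .reduceRegion(ee.Reducer.sum(), plot_geom, 10).get('area'))
    return ee.Number(flooded).divide(total).multiply(100)  # % flooded
```

The drought case follows the same pattern with three labeled classes instead of two.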
3.6. Accuracy Assessment
We evaluate the accuracy of all the implemented methods with a two-step validation. In the first step, we verify the location-tagged mobile images provided by the farmers through visual inspection for both flood and drought claims. We accept the images that display the correct disaster for each claim and filter out the junk photographs that farmers unknowingly upload. In total, 660 flood claims and 199 drought claims are verified for the experiments. For these verified claims, we determine the accuracy of flood and drought detection using NDVI and NDWI on Sentinel-2A, CART classification on Sentinel-1 SAR GRD, and object-based scene recognition on mobile images. However, imaging gaps in geographical areas may occur due to transition time constraints during measurement-mode switches of the Sentinel-1 SAR instrument [55] or due to maintenance activity, leading to failed image collection. Some of the land plots are likely to reside in these imaging gaps. Because Sentinel-1 images are unavailable for such plots, we measure the accuracy over the plots with available images using Equation (3):

\[ \text{Accuracy} = \frac{N_{\text{correct}}}{N_{\text{claims}} - N_{\text{unavailable}}} \times 100\% \quad (3) \]
The second step performs field validation of the satellite-based methods on Sentinel-2A and Sentinel-1. We measure the damaged area of disaster-affected land plots by visiting the fields. High-quality images of the land plots are captured by an unmanned aerial vehicle (UAV) on the exact date of the disaster. Afterward, the QGIS [56] tool is used to measure the damaged area of each land plot. The affected area obtained from the UAV image and QGIS is used to validate the area coverage reported by the methods. The rate of difference \(D\) between the area proportions from the fields and from the methods is calculated using Equation (4). The difference \(D\) is subtracted from 100% to obtain the correctness (Equation (5)) of the methods against the field validation:

\[ D = \frac{\sum_{i} \left| P_i - \hat{P}_i \right|}{\sum_{i} P_i} \times 100\% \quad (4) \]

\[ \text{Correctness} = 100\% - D \quad (5) \]

where \(P_i\) is the proportion of the disaster obtained from field measurements of an individual land plot \(i\), and \(\hat{P}_i\) is the proportion of the disaster obtained from our methods for the same land plot. \(\sum_{i} |P_i - \hat{P}_i|\) is the sum of the absolute differences between \(P_i\) and \(\hat{P}_i\) over all land plots, and \(\sum_{i} P_i\) is the sum of the proportions of the disaster from the field measurements.
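For concreteness, a minimal sketch of Equations (4) and (5) on hypothetical numbers:

```python
# Sketch of the field-validation metrics; proportions are in percent.
def correctness(field_props, method_props):
    """field_props: per-plot disaster proportions from field measurement (P);
    method_props: per-plot proportions from a method (P-hat)."""
    diff = sum(abs(p - p_hat) for p, p_hat in zip(field_props, method_props))
    d_rate = diff / sum(field_props) * 100   # Equation (4)
    return 100 - d_rate                      # Equation (5)

# Example: two plots measured 100% and 80% flooded in the field and
# predicted as 100% and 70% -> D is about 5.56%, correctness about 94.44%.
print(correctness([100, 80], [100, 70]))
```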
4. Results
The following sections present the results obtained from the experiments, separately for flood and drought events. For each event type, the results are split into three parts: NDVI and NDWI on Sentinel-2A, CART classification on Sentinel-1 SAR GRD, and scene recognition on mobile images.
4.1. Flood Detection
4.1.1. Flood Detection from NDVI and NDWI on Sentinel-2A
Out of 660 verified flood claims, we are able to collect cloud-free Sentinel-2A images for 167 reports within the five consecutive days following the day of the claim. We perform the NDVI and NDWI calculations on each pixel of the Sentinel-2A images for these 167 claims.
Figure 4 shows a sample land plot in QGIS. The results are not instantaneous, and near-real-time verification is limited by Sentinel-2A's revisit time of 5 days. Of the 167 flood claims with Sentinel-2A images, 100 and 84 claims are correctly verified using NDVI and NDWI, respectively. The results of the experiment are shown in Table 3. With less than 60% accuracy in verifying claims using Sentinel-2A-based NDVI and NDWI, we further verify the same claims using Sentinel-1 SAR GRD images, as shown in the next section.
4.1.2. Flood Detection with CART Classification on Sentinel-1 SAR GRD
The same 167 claims from the previous Sentinel-2A experiment are verified using Sentinel-1 SAR GRD images. Due to the data unavailability caused by the Sentinel-1B anomaly, we collect the SAR images within six consecutive days for claims before 23 December 2021 and within twelve days for claims after that date. Out of 167 cases, our method could not find images for two while filtering in IW mode with VV + VH polarization, leaving images for 165 claims. Of these 165 claims, 164 show the presence of flood using CART.
Figure 5 shows a sample plot reproduced in QGIS. With 99.39% flood detection, CART classification on Sentinel-1 SAR GRD outperforms the NDVI and NDWI methods on Sentinel-2A, as shown in Table 3.
Table 3. Accuracy assessment using verified mobile images of flood detection obtained from CART, NDVI and NDWI methods for land plots unaffected by cloud in Sentinel-2A images.
| Methods | Number of Flood Claims | Inaccessible Land Plots | Correctly Detected | Accuracy |
|---|---|---|---|---|
| CART on Sentinel-1 | 167 | 2 | 164 | 99.39% |
| NDVI on Sentinel-2A | 167 | - | 100 | 59.88% |
| NDWI on Sentinel-2A | 167 | - | 84 | 50.30% |
4.1.3. Flood Detection Using Scene Recognition on Mobile Images
Of the 660 flood claims, only 167 can be verified using Sentinel-2A images due to cloud and cloud shadow effects. This gives a solid reason to incorporate the scene recognition method as one of the three verification tiers for claimed disasters. The PlacesCNN method is used to validate the remaining 493 claims, of which 469 are correctly predicted.
Figure 6 presents sample images and the PlacesCNN outputs for flood detection, along with the activation maps used by the model to make the predictions. An activation map shows how strongly different parts of the image contribute to a prediction: red regions contribute to the predicted class, and blue regions contribute little.
Table 4 shows the overall accuracy of flood detection on all land plots using the combination of the DL method on mobile images (for cloud-affected areas) with the NDVI and NDWI methods, alongside the overall accuracy obtained from the CART method on Sentinel-1 SAR GRD images. This covers all 660 flood claims reported by the farmers; Sentinel-1 fails to provide images for seven of them.
The inaccuracy of NDVI and NDWI from Sentinel-2A propagates into their integration with the scene recognition method on mobile images.
Table 5 shows the difference in accuracy between the PlacesCNN method on mobile images and the CART classification on Sentinel-1 SAR GRD images for land plots affected by cloud cover, without including the Sentinel-2A product. In this scenario, the Sentinel-1 satellite lacks images for 5 flood-claimed locations. In terms of accuracy, the CART method on Sentinel-1 SAR GRD images is superior to PlacesCNN on mobile images; nevertheless, scene recognition with PlacesCNN performs adequately.
4.2. Drought Detection
4.2.1. Drought Detection from NDVI and NDWI on Sentinel-2A
The process of drought detection is the same as that of flood detection. Of the 199 verified drought claims, only 49 are unaffected by cloud cover.
Figure 7 presents a sample land plot for drought detection using the NDVI and NDWI methods.
Table 6 shows the accuracy of NDVI and NDWI on the land plots unaffected by clouds and their shadows. The experiments show 100% accuracy for NDWI and 89.80% accuracy for NDVI.
4.2.2. Drought Detection Using CART Classification on Sentinel-1 SAR GRD
The land plots unaffected by clouds are also evaluated on Sentinel-1 SAR GRD images using the CART method, under the same conditions used for flood detection. We then compare the accuracy against NDVI and NDWI, as tabulated in Table 6. A classified sample plot is presented in Figure 8. Among the 49 drought claims, the CART classification could not be performed for seven due to the unavailability of images. For the remaining 42 drought claims, CART achieves 100% accuracy, matching the NDWI method on Sentinel-2A.
4.2.3. Drought Detection Using Scene Recognition on Mobile Images
We implement the PlacesCNN method on mobile images to predict drought; sample images are shown in Figure 9. As in the flood detection experiments, we integrate the PlacesCNN method with the NDVI and NDWI methods to cover all land plots and compare the accuracy with the CART method, as tallied in Table 7.
Table 8 exhibits the accuracy of the PlacesCNN and CART methods for the land plots that are cloud-affected in the Sentinel-2A images. We also observe that Sentinel-1 SAR GRD images cannot be gathered for some of the land plots with drought claims.
4.3. Field Verification for Flood and Drought Detection
In the second validation phase, we perform field visits to validate our satellite-based methods for extracting disaster-affected areas. As the area affected by a disaster cannot be measured from mobile images, the area proportion is field-validated only for the satellite-based methods. We validate flood detection on seven land plots in the Nakhon Sawan province of Thailand. Similarly, we validate the drought area on four land plots in different locations in Thailand against the area proportions provided by the farmers. The number of ground samples is low because few disaster events occurred during the period of field validation.
Table 9 shows the area percentage of flood obtained from NDVI and NDWI on Sentinel-2A images and from CART classification on Sentinel-1 SAR GRD images. The NDVI and NDWI methods exhibit nearly accurate results for fully flooded parcels; however, Sentinel-2A fails to detect flood for some of the land plots because of cloud cover. CART classification accurately measures the area percentage for entirely flooded parcels, and its predictions for partially flooded plots are close in percentage. In the case of field ids 14007 and 14008, the field measurement is one day prior to the actual flood event, so the methods indicate 0% flood on the day of the visit.
As only two of the plots are unaffected by clouds, the correctness of the NDVI, NDWI, and CART methods on these plots is 99.97%, 96.57%, and 100%, respectively. We compute the correctness over all land plots using the CART method, excluding field ids 14007 and 14008 because the field measurement precedes the flood and the satellite image acquisition day, and achieve a correctness of 90.88% (D = 9.12%).
The validation of drought-affected land plots is performed by collecting the area proportion of the disaster from the farmers themselves because of the extremely low number of drought events and the remoteness of the claim locations. The farmers provide the estimated damaged area in local Thai land measurement units (rai, ngan, and square wah; 1 sq. wah = 4 sq. m, 1 ngan = 400 sq. m, 1 rai = 1600 sq. m). Four claims reported between 30 August and 19 September 2022 could be validated for drought, as shown in Table 10. Adding to the challenge, Sentinel-2A could not provide images for validation for the next 15 days, and Sentinel-1 could only provide images at 12-day intervals because of the unavailability of Sentinel-1B. We compare the correctness obtained from our method against the farmers' claimed drought areas and observe a correctness of 90.63% (D = 9.35%).
5. Discussion
5.1. Predicted Flood and Drought Distribution from Satellite Images
From the two-step validation, we conclude that our methods are suitable for detecting flood and drought as well as for measuring the affected area percentage of land plots. In this section, we show the distribution of disasters on the land plots as predicted by the three satellite-based methods. A sample of the three methods, NDVI and NDWI on Sentinel-2A and CART on Sentinel-1 SAR GRD, on a single land plot is shown in Figure 10. The flood/water distribution is nearly the same for NDVI and NDWI, as both indices use the same satellite image. CART on Sentinel-1 exhibits a similar distribution as well. The same property is seen for drought detection in Figure 11, where most of the land plot is affected by drought according to all three methods. This observation signifies that all three methods are consistent in visualizing the distribution of flood and drought.
Despite the precision with which the methods visualize the disaster distribution on the land plots, the methods do not always agree in detecting disasters. Therefore, in the following sections, we compare the accuracy of all methods and discuss their limitations in real-world implementation in terms of accuracy, data acquisition, and coverage area of the plots.
5.2. Findings from the Experiments
This section compares the accuracy of each implemented method based on the results in Section 4. For the land plots unaffected by either clouds or cloud shadows in Sentinel-2A images, we apply the NDVI and NDWI methods. Studies [14,57] claim that spectral indices derived from the NIR, green, and red bands, such as NDVI and NDWI, are appropriate for wetland detection. These two indices also have a strong relationship with drought conditions [58]. In our experiments, NDVI outperforms NDWI for flood detection (59.88% vs. 50.30% accuracy), whereas NDWI surpasses NDVI for drought detection (100% vs. 89.80% accuracy). Both methods perform poorly for flood detection yet exceptionally well for drought detection. Such disparity is not seen in CART classification on Sentinel-1 SAR GRD, where detection is high for both flood (99.39%) and drought (100%).
The DL method of PlacesCNN for detecting flood and drought on mobile images is an object-based scene recognition method. It is more efficient than the state-of-the-art pixel-based semantic segmentation with the DL network DeepLabv3+ [23]. The use of a pre-trained ResNet-18 as the CNN backbone in PlacesCNN further reduces the computational expense and complexity compared to DeepLabv3+. The comparison of PlacesCNN and DeepLabv3+ is shown in Table 11.
The PlacesCNN method on mobile images combined with the NDVI method achieves an accuracy of 86.21% for flood detection, while the same combination achieves only 73.40% for drought detection. For the PlacesCNN and NDWI composite, the accuracy is 83.79% for flood detection and 81.91% for drought detection. In either combination, flood is detected better than drought. However, these composites are inadequate in accuracy compared with the CART method on Sentinel-1 SAR GRD images, which achieves 98.77% and 99.44% for flood and drought detection, respectively. Unlike the composites of PlacesCNN with NDVI and NDWI, the difference in CART's accuracy between the two disasters is insignificant. For a fair comparison, we also compare the PlacesCNN method on mobile images directly with the CART method on Sentinel-1 SAR GRD images. In both flood and drought detection scenarios, the CART method performs better. However, the difference is narrow for flood detection (98.57% vs. 95.13%) and significant for drought detection (99.26% vs. 76%). The results indicate that the PlacesCNN method falls short in drought detection.
All experiments show that the CART method on Sentinel-1 SAR GRD is superior in terms of accuracy and complexity. However, other ML methods on Sentinel-1 SAR GRD images may not perform up to the standard of the CART method, or they may possibly outperform it. Thus, it is necessary to investigate other ML methods.
5.3. Performance of Other ML Methods on Sentinel-1 SAR GRD Images
The exploration of ML on Sentinel-1 SAR images for flood and drought detection has been limited to CART, SVM, and RF in previous experiments [44,45]. We compare the CART method with SVM and RF in terms of accuracy and correctness on the same flood and drought datasets, as illustrated in Table 12. Although RF performs better than CART and SVM for flood detection, CART shows superiority in drought detection and in correctness against field measurements. Despite CART's superiority and consistency, the use of Sentinel-1 SAR GRD is not free of limitations, as discussed in the next section.
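Since the rest of the pipeline is unchanged when comparing classifiers, the comparison amounts to swapping the GEE classifier constructor; in the sketch below, the RF tree count and the SVM kernel are assumed values, not the paper's tuned settings.

```python
# Sketch of swapping the classifier while keeping the pipeline unchanged.
import ee

classifiers = {
    'CART': ee.Classifier.smileCart(),
    'RF': ee.Classifier.smileRandomForest(numberOfTrees=50),  # assumed count
    'SVM': ee.Classifier.libsvm(kernelType='RBF'),            # assumed kernel
}

def train_all(training, bands):
    """Train each candidate classifier on the same sampled training data."""
    return {name: c.train(features=training, classProperty='class',
                          inputProperties=bands)
            for name, c in classifiers.items()}
```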
5.4. Limitations
Although clouds pose no challenge to the CART method on Sentinel-1 SAR GRD images, our experiments are still affected by the unavailability of images. VV and VH polarizations are not both available for all parts of the Earth, and the IW-mode images with VV + VH polarization used in our experiments are not available at all locations at all times. Our experiments show that images are missing for 28 out of 851 locations, which hinders the reliability of the method. The scene recognition method on mobile images has two limitations: (i) the inability of farmers to upload correct photographs with the required angle of view, and (ii) the inability to calculate the proportion of the area affected by the disaster. Junk photographs, such as personal photographs and indoor images, are filtered out by PlacesCNN itself, as it can already classify such images. However, the field of view can still affect the prediction of the correct disaster. Despite these limitations, the methods and their combinations can significantly automate the process of validating claims, reducing costs. The key strengths of our methods and experiments are highlighted in the next section.
5.5. Key Strengths of Our Experiments
As our experiments suggest, the performance of NDVI and NDWI on Sentinel-2A's optical images lacks the robustness in accuracy needed to support the decision processes of government agencies and insurance companies, and the inaccuracy is further aggravated by cloud coverage. In contrast, the CART method is unaffected by clouds and performs well for both flood and drought detection. In addition, the CART method on Sentinel-1 SAR GRD images provides a precise percentage of the disaster-affected region claimed by the farmers at the parcel level. However, the imaging gaps in Sentinel-1 SAR images affect information collection for some of the land plots, which cannot be neglected. When images are unavailable from both Sentinel-1 and Sentinel-2 (due to cloud coverage), the mobile-based method is the alternative, and the performance of the CNN-based method on mobile images is sufficient for such cases. To summarize, SAR is the most accurate method for detecting disasters and calculating the damaged area, while the mobile-based method is the best alternative when images are unavailable. The overall method is cost-effective and replaces the manual inspection required to validate claims. The experiments are also robust, as the flood and drought claims are distributed all over Thailand.