Article

Mapping Mangroves Extents on the Red Sea Coastline in Egypt using Polarimetric SAR and High Resolution Optical Remote Sensing Data

1 Center for Remote Sensing of Land Surfaces (ZFL), University of Bonn, 53113 Bonn, Germany
2 Environmental Studies Department, National Authority for Remote Sensing and Space Sciences (NARSS), Cairo 1564, Egypt
3 Remote Sensing Research Group (RSRG), University of Bonn, 53115 Bonn, Germany
* Author to whom correspondence should be addressed.
Sustainability 2018, 10(3), 646; https://doi.org/10.3390/su10030646
Submission received: 30 December 2017 / Revised: 5 February 2018 / Accepted: 27 February 2018 / Published: 28 February 2018
(This article belongs to the Special Issue Sustainability Assessment of Land Use and Land Cover)

Abstract

Mangroves ecosystems dominate the coastal wetlands of tropical and subtropical regions throughout the world. They are among the most productive forest ecosystems and provide various ecological and economic ecosystem services. Despite their economic and ecological importance, mangroves experience high yearly loss rates. There is a growing demand for mapping and assessing changes in mangroves extents, especially in the context of climate change, land use change, and related threats to coastal ecosystems. The main objective of this study is to develop an approach for mapping mangroves extents on the Red Sea coastline in Egypt through the integration of L-band SAR data from ALOS/PALSAR and high resolution optical data from RapidEye. This was achieved using an object-based image analysis method, applying different machine learning algorithms, and evaluating various features, such as spectral properties, texture features, and SAR-derived parameters, for the discrimination of mangroves ecosystem classes. Three non-parametric machine learning algorithms were tested for mangroves mapping: random forest (RF), support vector machine (SVM), and classification and regression trees (CART). As input for the classifiers, we tested various features, including vegetation indices (VIs) and texture analysis using the gray-level co-occurrence matrix (GLCM). The object-based analysis method allowed clear discrimination of the different land cover classes within the mangroves ecosystem. The highest overall accuracy (92.15%) was achieved with the integrated SAR and optical data. Among all classifiers tested, RF performed best. Using L-band SAR data integrated with high resolution optical data was beneficial for mapping and characterizing mangroves growing in small patches.
The maps produced represent an important updated reference suitable for developing a regional action plan for the conservation and management of mangroves resources along the Red Sea coastline.

1. Introduction

Mangroves are highly productive ecosystems located in the intertidal tropical and subtropical regions. They act as buffer zones between terrestrial and marine ecosystems and therefore play an important role in the functioning of adjacent ecosystems, such as salt marshes, seagrass beds, and coral reefs [1]. Mangroves support the conservation of biological diversity by providing habitats, nurseries, and nutrients for a number of animals and marine organisms [2]. They are considered of great ecological importance for coastline stabilization, reduction of coastal erosion, sediment and nutrient retention, flood and flow control, and water quality [3,4,5]. They are also of high economic importance and often provide valuable ecosystem goods and services for local communities [6].
Despite their economic and ecological importance, mangroves experience high yearly loss rates of 1–2% [7,8]. High population pressure in coastal areas has led to the conversion of many mangroves areas to other uses, and numerous case studies have described mangroves losses over time [9]. Among the reasons for this loss are overexploitation, unsustainable wood extraction, conversion by urbanization and aquaculture, and alteration of the hydrological system [10,11]. In addition, long-term climatic change and the interacting effects associated with global temperature increase and sea level rise are deemed a global hazard to mangroves [12]. Degradation and loss of these coastal buffering systems due to climate change and direct human impacts undermine the coastal protection they provide and increase their vulnerability, with significant environmental, economic, and social consequences for indigenous people in coastal areas [13].
Mangroves in Egypt represent the northern latitudinal limits of the Endo-Pacific East African mangroves. Due to the area’s extreme environmental conditions (high salinity, low rainfall, and extreme temperatures), the trees are generally stunted, rarely exceeding five meters in height [9]. They are found scattered along the Red Sea coastline, and their usual habitat is shallow water, such as lagoons, or sand bars parallel to the shoreline. Two of the four mangrove species known to occur in the Red Sea have been recorded in Egypt: Avicennia marina (Forssk.) Vierh. and Rhizophora mucronata [14].
There is a growing demand for integrated assessment to address the risks to mangroves ecosystems, especially in the context of climate change, sea level rise, and related threats to coastal ecosystems. Therefore, mapping and retrieval of up-to-date information regarding the extent and condition of mangroves is essential for the conservation and sustainable management of mangroves ecosystems on the Red Sea coastline. Mapping of mangroves requires frequent and spatially detailed assessments; such information can be prohibitively expensive to collect directly. Remote sensing is the most appropriate tool for mapping and assessment of mangroves, due to its ability to capture high spatio-temporal variability over large geographical scales [15]. Remote sensing provides fast, cost-effective, and efficient methods for mangroves mapping; it is particularly useful since many mangroves extents are located in remote areas, where field measurements are difficult, time-consuming, and expensive [16]. Remote sensing data offer many advantages in this respect and have been used in several studies for mangroves mapping [17]. However, the accuracy of mangroves maps is affected by the ability of the classification procedure to discriminate between various elements and vegetation types in the mangroves ecosystem, which is partly a function of the sensors’ resolution and the image processing method or classification procedure adopted. This gap remains to be addressed in order to produce more accurate, spatially explicit information based on remote sensing data and eventually support the sustainable management of mangroves ecosystems.
A variety of sensors and image processing methods have been used in the remote sensing of mangroves, such as SPOT (Système Pour l’Observation de la Terre) [18,19] and Landsat Thematic Mapper (TM) [20,21]. Since mangroves along the Red Sea coastline often grow in narrow, small patches, high-resolution remotely sensed data are required to capture the newly colonized individual stands or relatively small patches of mangroves that cannot be captured with medium spatial resolution satellite data, e.g., Landsat imagery. High-resolution data provide opportunities for mangroves mapping and interpretation and can be used to recognize, identify, and delineate mangroves at the individual tree level [22].
Object-based image analysis (OBIA) is a suitable approach for the classification of high resolution satellite images, as it allows the extraction of meaningful objects, rather than single pixels, during image segmentation, because the spectral response of individual pixels no longer represents the characteristics of a target of interest, e.g., tree canopies [23,24]. OBIA involves the identification of homogeneous groups of pixels that have similar spectral and/or spatial characteristics [25]. In addition, object-based image analysis offers the advantages of using spectral characteristics, texture, shape, and context information of adjacent image objects for mapping mangroves extents [26].
Although there are many advantages of using optical remote sensing data for mapping of mangroves, a major limitation is the availability of cloud-free scenes, which stimulates the usage of other remote sensing datasets, such as Synthetic Aperture Radar (SAR) data and its more advanced operational mode, polarimetric SAR (PolSAR), for mapping and characterization of mangroves extents. Several studies have demonstrated the potential of SAR data for mapping of mangroves, indicating that SAR-based methods can potentially outperform optical remote sensing data, especially in tropical areas where cloud coverage restrains the application of optical satellite images [27,28,29].
SAR sensors are particularly suited for woody structural mapping because of their capacity to capture within-canopy properties [30,31,32] and their insensitivity to cloudy conditions. L-band SAR provided by the Advanced Land Observing Satellite (ALOS) Phased Array L-band Synthetic Aperture Radar (PALSAR) has been proven to be the most effective in forest mapping and characterization [33,34,35] due to the high penetration of the L-band into the canopy. L-band SAR data have been shown to be useful for mangroves mapping as a function of structural differences between species and growth stages [36]. Using SAR data in mapping of mangroves is challenging because of the complexity of the backscatter signal received from mangroves ecosystems. Different bands of radar backscatter are affected differently by the interactions between the transmitted signal and the biophysical properties of mangroves, such as the size, geometry, and orientation of leaves, trunks, branches, and different types of roots, and the moisture content of both mangroves trees and the underlying soil [36,37].
The main objective of this study was to develop an approach for mapping mangroves extents on the Red Sea coastline through the integration of L-band SAR data from ALOS/PALSAR and high resolution optical data from RapidEye. We applied an object-based image analysis method, assessed different machine learning algorithms, and evaluated various input features for the discrimination of mangroves ecosystem classes in the study area. The performed tests allowed us to produce accurate mangroves maps of the study area and to make recommendations on the suitability of remote sensing data and the selection of classification methods for accurate mangroves mapping. This information is important for the development of a regional action plan for the conservation and management of mangroves resources along the Red Sea coastline, and the approach could be applied to similar applications worldwide.

2. Materials and Methods

2.1. Study Area

Wadi Lehmy stand is located on the Red Sea coastline in Egypt (latitude 24°13′, longitude 35°25′, Figure 1). It is one of the few mangroves stands growing along the Red Sea coastline. The study area is divided into three main units: the Red Sea Mountains, the coastal plain, and the Red Sea coast. The coastal plain ranges in width from 15 to 25 km and consists of undulating sand and gravels that separate the mountainous range from the Red Sea coast. Mangroves grow in small patches along the Red Sea coast. As halophytes, mangroves thrive in saline water, but require fresh water to a certain extent in order to maintain an optimum salinity balance and to obtain nutrients. This explains why mangroves grow at the mouths of wadis (seasonal riverbeds), where suitable sediments and sources of freshwater allow them to grow in a high-saline substrate frequently inundated by seawater.
Wadi Lehmy stand is dominated by Avicennia marina, a species tolerant of the relatively high salinity, low rainfall, and high temperature conditions [38]. Avicennia grows on a sandy substrate at the mouth of W. Lehmy, surrounding a shallow lagoon. The trees have developed morphological, physiological, and reproductive adaptations to the high salinity of the swamp. Climatologically, the Egyptian Red Sea coast, which supports the distribution of mangroves, belongs to the category of warm coastal deserts. The climate of Ras Banas in the south shows annual mean minimum and maximum temperatures of 19.1 °C and 32.4 °C, with a mean annual rainfall of 17.4 mm year−1.

2.2. Remote Sensing Data

This work is based on the integration of three different remote sensing datasets: SAR data provided by ALOS/PALSAR, high resolution optical data from RapidEye, and very high resolution data from WorldView-1. The ALOS/PALSAR data used in this work were acquired at L-band (~23.6 cm wavelength), in a fine-beam dual-polarization mode (HH and HV) and a fine-beam single-polarization mode (HH), in an ascending orbit, with an off-nadir angle of 34.3°. Data were acquired in slant range single-look complex format (SLC, L1.1) on 22 June 2007. Each scene covers an area of approximately 60 × 70 km.
RapidEye is a commercial optical Earth observation mission that consists of a constellation of five satellites [39]. The sensors deliver high spatial resolution imagery with a ground sampling distance of 6.5 m at nadir. The RapidEye image was acquired on 25 February 2015. It is a 3A product from the RapidEye Science Archive (RESA) of the German Aerospace Centre (DLR). At this processing stage, the image was already radiometrically and geometrically corrected for sensor-related issues and aligned to a cartographic map projection (WGS 1984/UTM zone 36N) [40]. WorldView-1 data with 0.5 m spatial resolution were acquired on 21 November 2015; the image was radiometrically and geometrically corrected. Our focus in the current study was on SAR data availability, particularly L-band; therefore, there is an unavoidable gap of eight years between the acquisition times of the PALSAR data and the optical data, which is assumed to be insignificant for the observation and mapping of mangroves as woody vegetation [41].
ALOS/PALSAR images were pre-processed from the SLC format according to the following steps; (a) SAR images for a given acquisition mode (FBD) were co-registered using a cross-correlation algorithm [40]; (b) each SLC image was then calibrated to convert the DN values to radar backscatter coefficients (σ°) in decibels (dB) using the following formula [42]:
σ° (dB) = 10 × log10(DN²) − 83.4,
where DN is the image pixel digital number measured in the SAR image, and −83.4 dB is the calibration factor of ALOS/PALSAR data; (c) multi-looking was carried out using mode-specific factors, aiming at achieving roughly square pixels of 12.5 m in range and azimuth [42,43], followed by subsetting the obtained backscatter images to the boundaries of the study area; (d) a 5 × 5 Lee filter was applied to the images to reduce the effect of speckle noise; the Lee filter was selected because it preserves polarimetric information [44]; (e) H/A/Alpha polarimetric decomposition parameters were extracted based on the eigenvector decomposition of the (2 × 2) complex covariance matrix [C2] [45,46], to be used along with the other derived SAR parameters for mangroves mapping; (f) terrain correction was carried out to remove the effect of geometric distortions, such as foreshortening, layover, and shadow; and (g) the images were geocoded to the UTM projection (zone 36N, WGS-84 datum) using the SRTM Digital Elevation Model (DEM) in the Sentinel Application Platform (SNAP) V5.0 toolbox provided by the European Space Agency (ESA). In addition to HH and HV, the ratio HV/HH, the total power HV + HH, and the difference HV − HH were calculated. Finally, the HH and HV polarizations and all the derived parameters were stacked into one multi-layered image for further analyses.
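As a minimal illustration of step (b), the calibration formula above can be applied per pixel. The DN values below are purely illustrative; only the −83.4 dB calibration factor comes from the text.

```python
import numpy as np

# PALSAR calibration factor cited in the text.
CAL_FACTOR_DB = -83.4

def dn_to_sigma0_db(dn):
    """Convert image digital numbers to backscatter coefficients (dB)."""
    dn = np.asarray(dn, dtype=np.float64)
    sigma0 = np.full(dn.shape, np.nan)
    valid = dn > 0                       # avoid log10 of zero pixels
    sigma0[valid] = 10.0 * np.log10(dn[valid] ** 2) + CAL_FACTOR_DB
    return sigma0

# Illustrative DN values, e.g. a dark (water-like) and a bright pixel.
print(dn_to_sigma0_db([200.0, 4000.0]))
```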
In addition to remote sensing data, ancillary data were used in the current study, including SRTM DEM data, which were used to identify potential mangrove areas based on elevation; since mangroves normally survive in the intertidal zones, the SRTM DEM could be used to delineate potential mangrove areas [47].
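A hedged sketch of this DEM-based screening step: the elevation threshold below is an illustrative assumption (the study does not report one); it only encodes the idea that mangroves occupy the low-lying intertidal fringe.

```python
import numpy as np

def potential_mangrove_mask(dem, max_elevation_m=5.0):
    """Return a boolean mask of low-lying pixels that could host mangroves.

    max_elevation_m is a hypothetical cutoff, not a value from the study.
    """
    dem = np.asarray(dem, dtype=np.float64)
    return (dem >= 0.0) & (dem <= max_elevation_m)

# Tiny illustrative DEM tile (elevations in meters).
dem = np.array([[0.5, 3.0, 12.0],
                [1.2, 7.5, 0.0]])
print(potential_mangrove_mask(dem))
```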

Data Analysis

Following image pre-processing, various procedures were performed to prepare the multi-sensor, multi-resolution input bands, along with the derived and calculated features and parameters of both SAR and optical data, for the subsequent analysis and classification for mangroves mapping and characterization.
Two different methods were used to combine information from the remote sensing datasets used in the current study. The first method is data fusion, which combines information from multi-source images to obtain a new image with more information than can be separately derived from the original images [48,49,50]. This method was used to combine the optical data of the RapidEye image, with five-meter spatial resolution, and the WorldView-1 image, with 0.5 m spatial resolution, to enhance visual interpretation and improve the quantitative analysis performance of both datasets for mangroves mapping. Several data fusion techniques have been tested, including the intensity-hue-saturation (IHS), Brovey transformation, Gram-Schmidt fusion, and Ehlers fusion techniques [48,49,50,51,52]; the IHS method was applied in this study as it preserves the multispectral characteristics and improves the spatial features in the output. The second method is data integration, which combines images in different layers algorithmically, without creating a new set of images [48,49]. It was applied to combine the SAR data of ALOS/PALSAR and the optical data of RapidEye to investigate the potential of the integrated SAR and optical data for mapping mangroves: while optical data represent the reflective properties of ground cover, SAR data are sensitive to the shape, roughness, and moisture content of the observed objects [49] (Figure 2).
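The IHS-style fusion described above can be sketched as follows, assuming the multispectral bands and the panchromatic image are already co-registered on a common pixel grid. This additive intensity-substitution variant is one common formulation of fast IHS fusion, not necessarily the exact implementation used in the study.

```python
import numpy as np

def ihs_fusion(msi, pan):
    """Inject panchromatic spatial detail into a 3-band multispectral image.

    msi: float array of shape (rows, cols, 3); pan: float array (rows, cols).
    Fast IHS variant: each band gains the difference (pan - intensity).
    """
    msi = np.asarray(msi, dtype=np.float64)
    pan = np.asarray(pan, dtype=np.float64)
    intensity = msi.mean(axis=2)            # I component of the IHS transform
    return msi + (pan - intensity)[..., np.newaxis]

# Illustrative values: flat 0.5 reflectance bands, brighter pan detail.
msi = np.full((2, 2, 3), 0.5)
pan = np.full((2, 2), 0.8)
print(ihs_fusion(msi, pan))
```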

2.3. Field Data

Field data were obtained from a field expedition carried out in the study area in June 2013. During this expedition, field measurements, including ground truth data (GPS points) with corresponding short descriptions, were collected randomly to determine topographic reference points and the main geographical features in the study area, as well as vegetation patterns, species distribution, and the identification of the mangroves ecosystem classes (Figure 3). Five classes were identified within the mangroves ecosystem in the study area: mangroves (MV), water (WT), intertidal zone (TZ), waterlogged areas (WG), and coastal plain (CP). Additionally, apart from the GPS-based sampling, the higher resolution optical data were utilized for ground truthing purposes, especially within the mangroves swamps and remote areas that were not accessible during the field work (Figure 4).

2.4. Object-Based Image Analysis and Feature Extraction

An object-oriented approach was applied in this study for mapping mangroves extents. This approach is based on classifying objects that are delineated as homogeneous units with similar spectral characteristics, called segments. Segmentation provides the building blocks of object-based image analysis [25,51]. This approach is suitable for the classification of high-resolution satellite images, as it allows the extraction of meaningful objects, rather than classifying individual pixels [23]. Segmentation enables the acquisition of a variety of spectral, spatial, and textural features, resulting in improved classification accuracy [52,53].
Image pixels of the optical and SAR data with relative homogeneity were clustered using the multi-resolution segmentation (MRS) algorithm [54] in eCognition Developer V9 [55]. MRS is an ascending area-merging technique in which smaller objects are progressively merged into larger objects, controlling the growth in heterogeneity based on three user-defined parameters: scale, shape, and compactness [56]. The scale parameter, which regulates the size and homogeneity of image objects [57], was adjusted until the obtained image objects visually represented the features of interest (canopy cover of mangroves trees). All layers were given equal importance in the segmentation settings, except the near-infrared (NIR) and red bands, which received double weighting to increase their response to vegetation greenness. A bottom-up, region-growing segmentation approach was used to produce consistent results across the relatively heterogeneous study area [58]. Figure 5 provides an overview of the entire approach adopted in this study.
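eCognition's multi-resolution segmentation is proprietary; as a hedged open-source analogue, the sketch below uses Felzenszwalb's graph-based segmentation from scikit-image, which likewise grows image objects bottom-up under a scale parameter that controls object size. The toy image and parameter values are illustrative only.

```python
import numpy as np
from skimage.segmentation import felzenszwalb

# Toy scene: two homogeneous halves separated by a sharp boundary,
# standing in for, e.g., mangroves canopy next to open water.
image = np.zeros((60, 60))
image[:, 30:] = 1.0

# scale plays a role similar to eCognition's scale parameter:
# larger values yield fewer, larger image objects.
segments = felzenszwalb(image, scale=100, sigma=0.5, min_size=20)
print(np.unique(segments).size)   # number of image objects produced
```

In an OBIA workflow, per-object features (mean reflectance, texture, SAR backscatter) would then be aggregated over each segment label before classification.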
Several object-based features can be calculated and extracted in eCognition and applied during the classification procedure. In this study, various features were extracted, including vegetation indices (VIs), principal component analysis (PCA) [59], and the gray-level co-occurrence matrix (GLCM) [60]. VIs and PCA are widely used for the retrieval of vegetation structure as well as land cover classification [61]. We selected the following VIs (Table 1): the Normalized Difference Vegetation Index (NDVI), Green Normalized Difference Vegetation Index (gNDVI), Enhanced Vegetation Index (EVI2), Soil-adjusted Vegetation Index (SAVI), and Modified Soil-adjusted Vegetation Index (MSAVI). NDVI was selected to separate mangroves from other, non-vegetated areas. To address the limitations of NDVI, which is affected by soil brightness [62] and saturates in high biomass areas [63,64], EVI2 was calculated, as it shows greater sensitivity to vegetation and reduces atmospheric effects on vegetation index values [64]. SAVI [65] was computed as a corrective index for soil brightness in areas with low vegetation cover and exposed soil surface. The brightness algorithm was calculated to represent the reflectance intensity of bare rocks and soils among other features sharing similar spectral radiance [66,67].
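The vegetation indices listed above follow standard published formulas; a minimal sketch, assuming band values are surface reflectances scaled to 0-1 (the exact coefficients used in the study's Table 1 may differ slightly):

```python
import numpy as np

def ndvi(nir, red):
    """Normalized Difference Vegetation Index."""
    return (nir - red) / (nir + red)

def gndvi(nir, green):
    """Green NDVI, substituting the green band for red."""
    return (nir - green) / (nir + green)

def evi2(nir, red):
    """Two-band Enhanced Vegetation Index (standard coefficients)."""
    return 2.5 * (nir - red) / (nir + 2.4 * red + 1.0)

def savi(nir, red, L=0.5):
    """Soil-adjusted Vegetation Index with soil factor L."""
    return (1.0 + L) * (nir - red) / (nir + red + L)

def msavi(nir, red):
    """Modified SAVI with a self-adjusting soil factor."""
    return (2.0 * nir + 1.0
            - np.sqrt((2.0 * nir + 1.0) ** 2 - 8.0 * (nir - red))) / 2.0

# Illustrative reflectances for a vegetated pixel.
nir, red, green = 0.5, 0.1, 0.15
print(ndvi(nir, red), gndvi(nir, green), savi(nir, red))
```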
GLCM texture measures make use of a grey-tone spatial dependence matrix to quantify texture features. The GLCM expresses texture within a user-defined kernel size and considers the spatial co-occurrence of pixel grey levels [68]. A kernel size of 7 × 7 was applied to avoid the exaggeration of variations that occurs with smaller kernel sizes, and the inefficient quantification of texture caused by smoothing with larger kernel sizes [69]. In addition to the mean (MEN) and variance (VAR), other texture features were also computed for each band, including homogeneity (HOM), contrast (CON), entropy (ENT), dissimilarity (DIS), correlation (COR), and second moment (SEC), as shown in Table 1.
Backscattering characteristics of mangroves were investigated using the HH and HV polarizations offered by the FBD mode of ALOS/PALSAR data, and their polarimetric parameters were retrieved. Decomposition features of co- and cross-polarized SAR data were retrieved according to the Alpha-Entropy decomposition proposed by Cloude and Pottier [45], including entropy (H), anisotropy (A), and alpha angle (α), and the class separability offered by their different feature spaces was analyzed. GLCM texture measures were also applied to the SAR bands and their derived parameters to evaluate their influence on the classification of the mangroves ecosystem. The extracted and calculated features were then evaluated to determine the importance of the predefined variables of both SAR and optical data for distinguishing between the different classes in the mangroves ecosystem.
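A simplified sketch of the dual-pol H/A/alpha decomposition from a 2 × 2 complex covariance matrix [C2], following the eigenvector approach the text describes. The example matrix values are illustrative, and the dual-pol anisotropy definition used here is one common convention; it is not claimed to reproduce the study's exact processing chain.

```python
import numpy as np

def h_a_alpha(C2):
    """Entropy, anisotropy, and mean alpha angle from a Hermitian 2x2 C2."""
    eigval, eigvec = np.linalg.eigh(C2)            # ascending, real eigenvalues
    eigval = np.clip(eigval[::-1], 1e-12, None)    # sort descending, keep > 0
    eigvec = eigvec[:, ::-1]
    p = eigval / eigval.sum()                      # pseudo-probabilities
    H = float(-np.sum(p * np.log2(p)))             # entropy in [0, 1] for 2x2
    A = float((eigval[0] - eigval[1]) / eigval.sum())  # dual-pol anisotropy
    # Alpha angle of each mechanism from the first eigenvector component.
    alphas = np.degrees(np.arccos(np.clip(np.abs(eigvec[0, :]), 0.0, 1.0)))
    alpha = float(np.sum(p * alphas))              # mean alpha (degrees)
    return H, A, alpha

# Illustrative near-diagonal C2, typical of a surface-dominated pixel.
C2 = np.array([[1.0, 0.1 + 0.05j],
               [0.1 - 0.05j, 0.2]])
print(h_a_alpha(C2))
```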

2.5. Image Classification

Three classification scenarios were applied in the current study. Scenario 1 (GA) used only optical image information: the five multispectral bands of RapidEye (blue, green, red, red-edge, and near-infrared) and their calculated features and indices, including VIs and PCA, as well as GLCM texture features. Scenario 2 (GB) used all the available features from the ALOS/PALSAR data: the SAR bands and SAR-derived parameters, including PolSAR and decomposition parameters, as well as the GLCM texture features. Scenario 3 (GC) integrated both optical and SAR data, as well as the calculated and derived features, parameters, and texture of both datasets, to examine the potential of the integrated optical and SAR data in mapping mangroves extents. Table 2 shows the three proposed scenarios with their categories, datasets, and selected features and combinations.
Because of the variability of the data proposed in the classification schemes (Table 2), a comparison of machine learning algorithms was conducted to choose suitable classification algorithms. We evaluated three different non-parametric classifiers: random forest (RF), support vector machines (SVM), and classification and regression trees (CART). RF is a machine ensemble approach that makes use of multiple self-learning decision trees to parameterize models and use them for estimating categorical or continuous variables [75]. SVM is a non-parametric statistical learning approach that can resolve complex class distributions in high dimensional feature spaces [76]. SVM algorithms discriminate the classes by fitting an optimal separating hyperplane (OSH) between classes using the training samples within the feature space, maximizing the margins between the OSH and the closest training samples [77,78]. The points lying on the boundaries are called support vectors, and the middle of the margin is the OSH [79]. CART is increasingly being used for the analysis and classification of remotely sensed data. It has been used successfully for the classification of multispectral imagery [80,81,82], the incorporation of ancillary data with multispectral imagery for increased classification accuracy [83], and change detection analysis [84].
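The three-classifier comparison can be sketched with scikit-learn on synthetic data; in the actual workflow the inputs would be the per-object spectral, texture, and SAR features, and the hyperparameters shown here are illustrative, not the tuned values from the study.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for the per-object feature table (5 ecosystem classes).
X, y = make_classification(n_samples=500, n_features=10, n_informative=6,
                           n_classes=5, random_state=0)
# 70/30 training/testing split, as in the accuracy assessment section.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

classifiers = {
    "RF": RandomForestClassifier(n_estimators=500, random_state=0),
    "CART": DecisionTreeClassifier(random_state=0),
    "SVM": SVC(kernel="rbf", gamma="scale"),
}
scores = {}
for name, clf in classifiers.items():
    clf.fit(X_train, y_train)
    scores[name] = accuracy_score(y_test, clf.predict(X_test))
print(scores)
```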

2.6. Accuracy Assessment

Accuracy assessment was carried out for the classified images using the high spatial resolution pan-sharpened RapidEye image to check the correspondence of the produced classes to real objects in the study area. If the results were not satisfactory, the classification was improved by selecting more features for better separation of the different classes of the mangroves ecosystem. During classifier training, an independent test set was kept for assessing the classification accuracy. Tuning of the classifiers was carried out systematically to identify the most suitable tuning parameters: the number of trees built in the forest (ntree) and the number of possible splitting variables for each node (mtry). Finally, a statistical accuracy assessment was conducted using a confusion matrix. The accuracy assessment was carried out by splitting the data into 70% for training the classifiers and 30% for testing the classification results [84,85]. The overall accuracy, producer’s accuracy, user’s accuracy, and Kappa coefficient [86] were derived from the confusion matrix and used for the accuracy assessment of the classified images.
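A minimal sketch of the accuracy measures derived from the confusion matrix, using a small illustrative set of reference and classified labels (not the study's data):

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score, confusion_matrix

# Illustrative reference (ground truth) vs. classified labels.
reference = np.array(["MV", "MV", "WT", "WT", "TZ", "WG", "CP", "CP"])
classified = np.array(["MV", "MV", "WT", "TZ", "TZ", "WG", "CP", "WG"])

labels = ["MV", "WT", "TZ", "WG", "CP"]
cm = confusion_matrix(reference, classified, labels=labels)

overall = np.trace(cm) / cm.sum()                       # overall accuracy
producer = np.diag(cm) / cm.sum(axis=1)                 # per-class, omission view
user = np.diag(cm) / np.clip(cm.sum(axis=0), 1, None)   # per-class, commission view
kappa = cohen_kappa_score(reference, classified)        # chance-corrected agreement
print(overall, kappa)
```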

3. Results

3.1. Backscattering Characterization and Polarimetric Parameters Description

Mean backscattering coefficients were identified for the five main land-cover types of the mangroves ecosystem in the study area: mangroves (MV), water (WT), intertidal zone (TZ), waterlogged areas (WG), and coastal plain (CP). Table 3 shows the variation of the backscattering coefficients for each class at the HH and HV polarizations. The water class (WT) showed the lowest backscattered intensities, with the majority of pixels lying in the range from −26.54 dB to −18.59 dB for HH and from −29.40 dB to −25.96 dB for HV; a smooth water surface results in little scattering back to the sensor in either polarization. Coastal plain (CP) also showed low backscattered values, but slightly higher than WT due to soil surface roughness. Higher values were observed over dense, homogeneous mangroves cover (MV): −8.19 dB and −16.86 dB for HH and HV, respectively. HV backscatter is closely correlated with mangroves structure and above-ground biomass, with HV reaching higher σ° values due to its higher sensitivity to the volumetric scattering caused by the random distribution of branches and leaves. The intertidal zone (TZ) and waterlogged areas (WG) also showed high backscattering values. Reflections between the vertical aerial roots of mangroves and the water surface of the flooded intertidal zone can result in a strong HH backscatter (Table 3).
The Alpha-Entropy decomposition was generated from the eigenvalue-based target decomposition (Figure 6). The different classes of the mangroves ecosystem overlap each other, showing predominantly surface scattering with moderate alpha values and relatively high entropy values in the dual polarization mode. Volume scattering characterizes the mangroves stands with higher biomass and multiple canopy layers [45]. Low alpha values represent the surface scattering characterizing the plane surfaces found in the WT and CP classes (Figure 6).

3.2. Segmentation and Feature Extraction

The pan-sharpened RapidEye image (Figure 7a) achieved the best segmentation results and produced fine units (Figure 7b). Shape and compactness were weighted at 0.1 and 0.5, respectively, and the scale was set to five due to the land cover heterogeneity in the study area. The segments were relatively well divided where mangroves appeared on the coastline. This is because the pan-sharpened RapidEye image has the highest spatial resolution of the optical and SAR data used in this study, while at the same time keeping the spectral information of the RapidEye multi-spectral image. The segmentation of the SAR data, in contrast, did not divide the vegetation units in as much detail, despite using the same parameter settings. Furthermore, since SAR data contain speckle noise, using SAR imagery for segmentation resulted in objects that do not correspond to real-world objects.

3.3. Classification Results and Accuracy Assessment

Table 4 provides a summary of the classification results achieved for all data categories: classification of optical data, classification of SAR data, and classification of the integrated SAR and optical data. In category GA (optical data only), the highest overall accuracy achieved was 86.78%, in subgroup GA1. The addition of the derived features (VIs and PCA) to the spectral bands improved the overall accuracy to 89.26% in subgroup GA2, showing the influence of these derived features on the classification accuracy. The classification results indicate that the high resolution optical data of RapidEye have the potential to categorize land cover classes within the mangroves ecosystem efficiently (Table 4, Figure 8).
In category GB (SAR data only), the overall accuracies of the RF, CART, and SVM classifications with SAR inputs were 59.92%, 53.31%, and 38.43%, respectively, in subgroup GB1, and improved to 69.83%, 54.65%, and 45.04%, respectively, in subgroup GB2 after using PolSAR parameters and texture features derived from both polarizations (Table 4, Figure 8).
In category GC (integrated SAR and optical data), the results indicated that the overall classification accuracies for subgroups GC1–GC5 ranged between 74% and 92% using the RF classifier. SAR-derived parameters combined with the derived features of the optical data improved the classification accuracy, as indicated in subgroup GC2. The highest overall classification accuracy (92.15%) was achieved in subgroup GC3, based on the integration of SAR backscattering and PolSAR parameters with the surface reflectance of the optical data. These results indicate that separability tends to increase, and an improvement in classification accuracy can be achieved, when SAR backscattering and SAR-derived parameters are combined with the surface reflectance of optical data. The use of VIs in GC2 and GC4, and of texture features in GC5, slightly improved the accuracy to 84.71%, 87.60%, and 84.30%, respectively (Figure 8).
In general, the classification results and accuracy assessments of the three datasets indicated that the derived vegetation indices improved the classification accuracy of the optical dataset (GA); similarly, for the SAR dataset (GB), accuracy increased after the derived PolSAR parameters and texture features were added to the SAR backscatter. The integrated SAR and optical datasets achieved the highest overall accuracies, and the addition of PolSAR parameters and texture features improved the accuracy further, as shown in GC3 and GC5, respectively (Figure 8).
Figure 9 shows the producer’s accuracy (PA) and user’s accuracy (UA) of the RF, CART, and SVM classifications. In terms of producer’s accuracy, the class MV achieved the highest value (above 90%), followed by the class WA, while the class WG had the lowest producer’s accuracy. In terms of user’s accuracy, the class WA achieved the highest value (above 90%), followed by the class CP, while the class MV had the lowest user’s accuracy (below 70%).
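Producer’s and user’s accuracies are read directly from the error (confusion) matrix: each diagonal count is divided by its reference (row) total for PA and by its mapped (column) total for UA [86]. A minimal sketch with a hypothetical three-class error matrix (illustrative counts, not our actual matrix):

```python
def producer_user_accuracy(cm):
    """Per-class accuracies from an error matrix.
    Rows = reference (ground truth), columns = mapped (classified)."""
    n = len(cm)
    col_sums = [sum(cm[r][c] for r in range(n)) for c in range(n)]
    pa = [cm[i][i] / sum(cm[i]) for i in range(n)]   # 1 - omission error
    ua = [cm[i][i] / col_sums[i] for i in range(n)]  # 1 - commission error
    return pa, ua

# Hypothetical error matrix for three classes
cm = [[45, 3, 2],
      [4, 40, 6],
      [1, 5, 34]]
pa, ua = producer_user_accuracy(cm)
```

A class can therefore score high PA (few reference samples missed) yet low UA (many mapped objects actually belong elsewhere), which is the pattern reported above for the class MV.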
Comparing the performance of the three machine learning classifiers used in this study, RF achieved the highest overall classification accuracy (92.15%), followed by CART (88.43%) and SVM (80.23%). Similarly, RF achieved the highest kappa coefficient (90.18% at GC3), while CART and SVM reached 85.5% and 75.3% at GC3 and GC2, respectively. The best performance was obtained with the integrated SAR and optical data (GC), where RF increased the accuracy significantly compared to CART and SVM. For the SAR data (GB), RF improved the accuracy by about 9.91%, compared to 6.61% for SVM and 1.34% for CART (Table 4). We also calculated the importance of the different variables evaluated in this study: among the optical variables, the vegetation indices, particularly gNDVI, were the most valuable for class discrimination, while among the SAR variables, the HV band contributed most to class separation. The classification results showing the distribution of the different land cover classes over the study area and the mapped mangrove extents are presented in Figure 10; the map was produced from the integrated SAR and optical dataset.
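The overall accuracy and kappa coefficient reported above follow from the same error matrix; kappa discounts the observed agreement by the agreement expected from chance alone [86]. A sketch with hypothetical counts:

```python
def overall_accuracy_and_kappa(cm):
    """Overall accuracy and kappa coefficient from an error matrix."""
    n = len(cm)
    total = sum(sum(row) for row in cm)
    po = sum(cm[i][i] for i in range(n)) / total        # observed agreement
    row_sums = [sum(r) for r in cm]
    col_sums = [sum(cm[r][c] for r in range(n)) for c in range(n)]
    # chance agreement from the marginal totals
    pe = sum(row_sums[i] * col_sums[i] for i in range(n)) / total ** 2
    return po, (po - pe) / (1 - pe)

# Hypothetical error matrix for three classes
cm = [[45, 3, 2],
      [4, 40, 6],
      [1, 5, 34]]
oa, kappa = overall_accuracy_and_kappa(cm)
```

Because pe rises with unbalanced class totals, kappa is always at or below the overall accuracy, which matches the pattern in Table 4 (e.g., 90.18% kappa versus 92.15% overall accuracy for RF at GC3).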

4. Discussion

This study demonstrates how polarimetric SAR data and high resolution optical data can be used to map mangrove extents growing in small patches. L-band dual-polarization SAR data from ALOS/PALSAR and high resolution optical data from RapidEye were used to accurately map mangrove extents on the Red Sea coastline in Egypt, showing the effectiveness of integrating different data types for mangrove mapping.
We found that the integrated SAR and optical data achieved the highest accuracy in classifying the five land cover types of the mangrove ecosystem in the study area. The obtained accuracy is similar to that of other classifications based on integrated SAR and optical data in African environments [87,88]. Data integration improved the classification accuracy by drawing on different data sources to increase the dimensionality of the available information [89,90,91]. Integrating SAR and optical data has been shown to be a good strategy for overcoming the limitations of optical data, including the scarcity of cloud-free scenes, while also compensating for the limited information content of single-date SAR data. Our findings agree with [91], who mapped eight land cover classes near the mouth of the Amazon River using supervised classification of Landsat ETM and a merged ETM–SAR product; the integrated product increased the accuracy and provided additional information, permitting more efficient identification and mapping of tropical coastal wetlands.
The random forest classifier achieved the highest accuracy, improving on the results of the other classifiers used. In some studies, SVM and RF have achieved similar performance [92]; in others, SVM has outperformed RF [93]. Unlike traditional, fast-learning CART decision trees, RF is insensitive to small changes in the training data and is not prone to overfitting [94,95]. Additionally, RF is less complex and less computationally intensive than SVM, which requires long training times [96]. Other studies, such as [97], have successfully used an ANN algorithm to map mangrove species from multi-seasonal IKONOS imagery. The decision to adopt a machine learning algorithm should take into consideration the expert knowledge needed to optimize the classifier and avoid overfitting.
We applied an object-based approach for mangrove mapping in the current study. This approach allowed the different land cover classes within the mangrove ecosystem to be clearly discriminated. Our results agree with [98], who applied an object-based approach for multi-scale mangrove composition mapping using multi-resolution data (Landsat TM, ALOS AVNIR-2, and WorldView-2) at two study sites in Australia and Indonesia and demonstrated the effectiveness of object-based analysis for mangrove mapping. Our results also agree with [5], who applied multi-resolution segmentation for mangrove mapping in the Mekong Delta using SPOT5 data and successfully detected areas of mixed aquaculture-mangrove land cover with high accuracy. The main limitation of the mapping approach, related to the classification rule set, was its site, sensor, and time dependency, which stems from spectral reflectance variations between images captured by different sensors and from variations in the mangrove environmental settings. The only uncertainty introduced into the accuracy assessment of the current study was attributed to the eight-year gap between the acquisition times of the PALSAR data and the optical data. Some change in mangrove condition may have occurred within this gap; however, we observed no major disturbance affecting the study areas during this period, and the difference is assumed to be insignificant for observing and mapping woody vegetation in arid and semi-arid environments [41,98].
With regard to the techniques tested in the current study to improve the mapping accuracy, we found that adding features derived from both optical and SAR data effectively improved the overall accuracy: VIs improved the classification accuracy of the optical data, while texture features improved the classification accuracy of the SAR data and of the integrated SAR and optical data, similar to the findings of [99] in tropical forests, which confirm the value of textural information when SAR data are used alone, especially for discriminating classes of dense and tall vegetation. Our results agree with several studies that used derived features and parameters to improve classification accuracy; for example, [100] found that including image texture information improved classification accuracy and produced promising results. On the other hand, texture features were much less effective when combined with optical data, perhaps because accuracy was already very high and the margin for improvement was limited. PolSAR parameters improved the classification accuracy of both the SAR data and the integrated SAR and optical data. We also found that adding features improved the overall accuracy only up to a certain point, reached in GC3; adding further features beyond this point reduced the overall accuracy, as seen in GC4 and GC5 (Table 4).
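The GLCM texture features follow [60]: a matrix of co-occurrence probabilities of gray-level pairs at a fixed pixel offset, from which statistics such as contrast and homogeneity are computed. A minimal pure-Python sketch for a single offset on a small, hypothetical quantized image (real workflows quantize the imagery, e.g., to 32 or 64 gray levels, and may average several offsets):

```python
def glcm(image, levels, dx=1, dy=0):
    """Symmetric, normalized gray-level co-occurrence matrix
    for one pixel offset (dx, dy)."""
    m = [[0] * levels for _ in range(levels)]
    rows, cols = len(image), len(image[0])
    for r in range(rows):
        for c in range(cols):
            r2, c2 = r + dy, c + dx
            if 0 <= r2 < rows and 0 <= c2 < cols:
                a, b = image[r][c], image[r2][c2]
                m[a][b] += 1
                m[b][a] += 1  # count both directions (symmetric GLCM)
    total = sum(sum(row) for row in m)
    return [[v / total for v in row] for row in m]

def contrast(p):
    """High for images with frequent dissimilar neighboring gray levels."""
    return sum(p[i][j] * (i - j) ** 2
               for i in range(len(p)) for j in range(len(p)))

def homogeneity(p):
    """High for smooth images whose co-occurrences sit near the diagonal."""
    return sum(p[i][j] / (1 + abs(i - j))
               for i in range(len(p)) for j in range(len(p)))

# Hypothetical 3x3 image quantized to 3 gray levels
image = [[0, 0, 1],
         [0, 1, 1],
         [1, 2, 2]]
p = glcm(image, levels=3)
```

Contrast and homogeneity move in opposite directions on the same matrix, which is why texture statistics help separate structurally different but spectrally similar classes.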
In this study we showed that SAR data alone can still provide important landscape information. This result confirms the role of SAR in forest and vegetation mapping of tropical regions and suggests that, in areas where optical data are lost to atmospheric conditions, SAR data can be integrated with whatever optical data are available. L-band SAR was beneficial for mangrove mapping because of its high canopy penetration: HH polarization has been found to be sensitive to water beneath the canopy [101], whereas HV polarization is known for its sensitivity to volume scattering and biomass [102]. Compared to single-polarization SAR data, dual-polarimetric SAR data provided much more information about the backscattering mechanisms of the mangrove ecosystem and therefore enabled more detailed mapping of mangroves. These results agree with [103], who determined the best polarization configurations (HH, VV, or HV) for discriminating types of coastal Amazon wetlands; classification accuracy improved slightly when polarizations were combined, with an overall accuracy of 83% versus single-polarization accuracies between 78% and 81%.
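The complementary sensitivities of the two polarizations can be illustrated with simple channel combinations: backscatter is converted to decibels, so the co-/cross-polarization ratio becomes a difference in dB, which responds differently to surface and volume scattering. A sketch with hypothetical linear backscatter values (not calibrated values from our scenes):

```python
import math

def to_db(sigma0):
    """Convert a linear backscatter coefficient to decibels."""
    return 10 * math.log10(sigma0)

def copol_crosspol_ratio_db(hh, hv):
    """HH/HV ratio expressed as a difference in dB; lower values
    indicate a larger volume-scattering (cross-pol) contribution."""
    return to_db(hh) - to_db(hv)

# Hypothetical linear backscatter for a mangrove stand
hh, hv = 0.10, 0.02
print(to_db(hh))                       # HH backscatter in dB
print(copol_crosspol_ratio_db(hh, hv)) # dB difference between channels
```

Because the ratio cancels much of the common scene brightness, it carries information that neither channel provides alone, which is one reason dual-polarization data separate classes better than a single channel.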
The current study shows that L-band SAR data from ALOS/PALSAR combined with high-resolution optical data from RapidEye is very useful for mapping mangrove extents growing in fragments and small patches on the Red Sea coastline, as it provides information on the spatial distribution of mangroves and the mangrove ecosystem that is required for their conservation and management in this area. In future work, we will use the next generation of L-band SAR data, ALOS-2 PALSAR, which provides more detailed SAR data than ALOS and will allow continued mapping and monitoring of mangrove ecosystems at higher spatial and temporal resolution.

5. Conclusions

In this study, we investigated the contribution of dual-polarimetric L-band SAR data from ALOS/PALSAR and high resolution optical data from RapidEye to mapping mangrove extents on the Red Sea coastline. The study also assessed the contribution to overall accuracy of various features derived from the SAR and optical datasets, including VIs, PCA, and GLCM texture, as well as the PolSAR parameters derived from the ALOS/PALSAR data, using different machine learning algorithms and comparing their performance. The highest overall classification accuracy (92.15%) was achieved with the integrated SAR and optical data. The VIs improved the overall accuracy of the optical data classification, while texture features and PolSAR parameters improved the overall accuracy of the SAR data and the integrated SAR and optical data classifications. The RF classifier performed best, followed by CART, with the lowest accuracy achieved by SVM. The study showed that SAR data of ALOS/PALSAR integrated with optical data of RapidEye was very useful for mapping mangrove extents on the Red Sea coastline. The produced maps provide accurate information on the spatial distribution of mangroves in the study area, which is required for their conservation and sustainable management.

Acknowledgments

The authors would like to thank the German Aerospace Center (DLR) for providing the RapidEye data, the Digital Globe Foundation for providing the WorldView-1 data, and the Japan Aerospace Exploration Agency (JAXA) for providing the ALOS/PALSAR data. Thanks to the team members of ZFL for their support during the preparation of this work, and to the team members of Wadi El-Gemal National Park for their support during the field work. The authors also thank the reviewers for their valuable comments and suggestions for improving the quality of the manuscript.

Author Contributions

Ayman Abdel-Hamid and Olena Dubovyk conceived and designed the experiments; Ayman Abdel-Hamid performed the experiments, analyzed the data, and wrote the paper; Olena Dubovyk contributed to the manuscript preparation and discussion; Islam Abou El-Magd contributed to the discussion of the mangrove extents on the Red Sea coastline; and Gunter Menz supervised the research and participated in the discussion.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Spalding, M.D.; Blasco, F.; Field, C.D. World Mangrove Atlas; International Society for Mangrove Ecosystems: Okinawa, Japan, 1997. [Google Scholar]
  2. Barbier, E.B.; Sathiratai, S. Shrimp Farming and Mangrove Loss in Thailand; Edward Elgar: Cheltenham, UK, 2004. [Google Scholar]
  3. Giri, C.; Pengra, B.; Zhu, Z.; Singh, A.; Tieszen, L.L. Monitoring Mangrove forest dynamics of the Sundarbans in Bangladesh and India using multi-temporal satellite data from 1973 to 2000. Estuar. Coast. Shelf Sci. 2007, 73, 91–100. [Google Scholar] [CrossRef]
  4. Gedan, K.B.; Kirwan, M.L.; Wolanski, E.; Barbier, E.B.; Silliman, B.R. The present and future role of coastal wetland vegetation in protecting shorelines: Answering recent challenges to the paradigm. Clim. Chang. 2011, 106, 7–29. [Google Scholar] [CrossRef]
  5. Vo, Q.T.; Oppelt, N.; Leinenkugel, P.; Kuenzer, C. Remote sensing in mapping mangrove ecosystems—An object-based approach. Remote Sens. 2013, 5, 183–201. [Google Scholar] [CrossRef] [Green Version]
  6. Walters, B.B.; Roennbaeck, P.; Kovacs, J.; Crona, B.; Hussain, S.; Badola, R.; Primavera, J.H.; Barbier, E.B.; Dahdouh-Guebas, F. Ethnobiology, socio-economics and adaptive management of mangroves: A review. Aquat. Bot. 2008, 89, 220–236. [Google Scholar] [CrossRef]
  7. Beaumont, L.J.; Pitman, A.; Perkins, S.; Zimmermann, N.E.; Yoccoz, N.G.; Thuiller, W. Impacts of climate change on the world’s most exceptional ecoregions. Proc. Natl. Acad. Sci. USA 2011, 108, 2306–2311. [Google Scholar] [CrossRef] [PubMed]
  8. Feller, I.C.; Lovelock, C.E.; Berger, U.; McKee, K.L.; Joye, S.B.; Ball, M.C. Biocomplexity in mangrove ecosystems. Annu. Rev. Mar. Sci. 2010, 2, 395–417. [Google Scholar] [CrossRef] [PubMed]
  9. FAO. The World’s Mangroves 1980–2005; FAO Forestry 153; FAO: Rome, Italy, 2007. [Google Scholar]
  10. Lacerda, L.D. (Ed.) American mangroves. In Mangrove Ecosystems; Springer-Verlag: Berlin/Heidelberg, Germany, 2001; pp. 1–62. [Google Scholar]
  11. Hogarth, P.J. The Biology of Mangroves; Oxford University Press: Oxford, UK, 1999. [Google Scholar]
  12. Gilman, E.; Ellison, J.; Duke, N.C.; Field, C. Threats to mangroves from climate change and adaptation options: A review. Aquat. Bot. 2008, 89, 237–250. [Google Scholar] [CrossRef]
  13. Ellison, J.C. Vulnerability assessment of mangroves to climate change and sea-level rise impacts. Wetl. Ecol. Manag. 2015, 23, 115. [Google Scholar] [CrossRef]
  14. Galal, N. Studies on the Coastal Ecology and Management of the Nabq Protected Area, South Sinai, Egypt. Ph.D. Thesis, University of York, York, UK, 1999. [Google Scholar]
  15. Archibald, S.; Scholes, R.J. Leaf green-up in a semi-arid African savanna—Separating tree and grass responses to environmental cues. J. Veg. Sci. 2007, 181, 583–594. [Google Scholar]
  16. Held, A.; Ticehurst, C.; Lymburner, L.; Williams, N. High resolution mapping of tropical mangrove ecosystems using hyperspectral and radar remote sensing. Int. J. Remote Sens. 2003, 24, 2739–2759. [Google Scholar] [CrossRef]
  17. Green, E.P.; Mumby, P.J.; Edwards, A.J.; Clark, C.D. A review of remote sensing for the assessment and management of tropical coastal resources. Coast. Manag. 1996, 24, 1–40. [Google Scholar] [CrossRef]
  18. Jensen, J.R.; Lin, H.; Yang, X.; Ramsey, E.; Davis, B.A.; Thoemke, C.W. The measurement of mangrove characteristics in southwest Florida using SPOT multispectral data. Geocarto Int. 1991, 2, 13–21. [Google Scholar] [CrossRef]
  19. Rasolofoharinoro, M.; Blasco, F.; Bellan, M.; Aizpuru, M.; Gauquelin, T.; Denis, J. A remote sensing based methodology for mangrove studies in Madagascar. Int. J. Remote Sens. 1998, 19, 1873–1886. [Google Scholar] [CrossRef]
  20. Green, E.P.; Clark, C.D.; Mumby, P.J.; Edwards, A.J.; Ellis, A. Remote sensing techniques for mangrove mapping. Int. J. Remote Sens. 1998, 19, 935–956. [Google Scholar] [CrossRef]
  21. Long, B.G.; Skewes, T.D. A technique for mapping mangroves with Landsat TM satellite data and geographic information system. Estuar. Coast. Shelf Sci. 1996, 43, 373–381. [Google Scholar] [CrossRef]
  22. Neukermans, G.; Dahdouh-Guebas, F.; Kairo, J.G.; Koedam, N. Mangrove species and stand mapping in Gazi Bay (Kenya) using Quickbird satellite imagery. J. Spat. Sci. 2008, 53, 75–86. [Google Scholar] [CrossRef]
  23. Blaschke, T. Object based image analysis for remote sensing. ISPRS J. Photogramm. Remote Sens. 2010, 65, 2–16. [Google Scholar] [CrossRef]
  24. Ke, Y.; Quackenbush, L.J.; Im, J. Synergistic use of QuickBird multispectral imagery and LIDAR data for object-based forest species classification. Remote Sens. Environ. 2010, 114, 1141–1154. [Google Scholar] [CrossRef]
  25. Hay, G.J.; Castilla, G. Geographic Object-Based Image Analysis (GEOBIA): A new name for a new discipline. In Object Based Image Analysis; Blaschke, T., Lang, S., Hay, G., Eds.; Springer: Heidelberg/Berlin, Germany; New York, NY, USA, 2008; pp. 93–112. [Google Scholar]
  26. Liu, Y.; Li, M.; Mao, L.; Xu, F. Review of remotely sensed imagery classification patterns based on object oriented image analysis. Chin. Geogr. Sci. 2006, 16, 282–288. [Google Scholar] [CrossRef]
  27. Almeida-Filho, R.; Shimabukuro, Y.E.; Rosenqvist, A.; Sanchez, G.A. Using dual-polarized ALOS PALSAR data for detecting new fronts of deforestation in the Brazilian Amazonia. Int. J. Remote Sens. 2009, 30, 3735–3743. [Google Scholar] [CrossRef]
  28. Hoekman, D.H.; Vissers, M.A.M.; Wielaard, N. PALSAR wide-area mapping of Borneo: Methodology and map validation. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2010, 3, 605–617. [Google Scholar] [CrossRef]
  29. Häme, T.; Rauste, Y.; Sirro, L.; Stach, N. Forest cover mapping in French Guiana since 1992 using satellite radar imagery. In Proceedings of the International Symposium on Remote Sensing of Environment (ISRSE 33), Stresa, Italy, 4–8 May 2009. [Google Scholar]
  30. Le Toan, T.; Quegan, S.; Davidson, M.W.J.; Balzter, H.; Paillou, P.; Papathanassiou, K.; Plummer, S.; Rocca, F.; Saatchi, S.; Shugart, H.; et al. The BIOMASS mission: Mapping global forest biomass to better understand the terrestrial carbon cycle. Remote Sens. Environ. 2011, 115, 2850–2860. [Google Scholar] [CrossRef]
  31. Sun, G.; Ranson, K.J.; Guo, Z.; Zhang, Z.; Montesano, P.; Kimes, D. Forest Biomass Mapping from Lidar and Radar Synergies. Remote Sens. Environ. 2011, 115, 2906–2916. [Google Scholar] [CrossRef]
  32. Santoro, M.; Shvidenko, A.; McCallum, I.; Askne, J.; Schmullius, C. Properties of ERS-1/2 coherence in the Siberian boreal forest and implications for stem volume retrieval. Remote Sens. Environ. 2007, 106, 154–172. [Google Scholar] [CrossRef]
  33. Carreiras, J.M.B.; Melo, J.B.; Vasconcelos, M.J. Estimating the above-ground biomass in Miombo Savanna Woodlands (Mozambique, East Africa) using L-band synthetic aperture radar data. Remote Sens. 2013, 5, 1524–1548. [Google Scholar] [CrossRef]
  34. Naidoo, L.; Mathieu, R.; Main, R.; Kleynhans, W.; Wessels, K.; Asner, G.; Leblon, B. Savannah woody structure modelling and mapping using multi-frequency (X-, C- and L-band) Synthetic Aperture Radar data. ISPRS J. Photogramm. Remote Sens. 2015, 105, 234–250. [Google Scholar] [CrossRef]
  35. Mitchard, E.T.A.; Saatchi, S.S.; Lewis, S.L.; Feldpausch, T.R.; Woodhouse, I.H.; Sonke, B.; Rowland, C.; Meir, P. Measuring biomass changes due to woody encroachment and deforestation/degradation in a forest-savanna boundary region of central Africa using multi-temporal L-band radar backscatter. Remote Sens. Environ. 2011, 115, 2861–2873. [Google Scholar] [CrossRef]
  36. Lucas, R.M.; Mitchell, A.L.; Rosenqvist, A.; Proisy, C.; Melius, A.; Ticehurst, C. The potential of L-band SAR for quantifying mangrove characteristics and change: Case studies from the tropics. Aquat. Conserv. Mar. Freshw. Ecosyst. 2007, 17, 245–264. [Google Scholar] [CrossRef]
  37. Aslan, A.; Abdullah, F.R.; Matthew, W.W.; Scott, M.R. Mapping spatial distribution and biomass of coastal wetland vegetation in Indonesian Papua by combining active and passive remotely sensed data. Remote Sens. Environ. 2016, 183, 65–81. [Google Scholar] [CrossRef]
  38. Zahran, M.A.; Willis, A.J. The Vegetation of Egypt, 2nd ed.; Chapman & Hall: London, UK, 2009; 424p. [Google Scholar]
  39. Tyc, G.; Tulip, J.; Schulten, D.; Krischke, M.; Oxfort, M. The RapidEye mission. Des. Acta Astronaut. 2005, 56, 213–219. [Google Scholar] [CrossRef]
  40. Wegmüller, U. Automated terrain corrected SAR geocoding. In Proceedings of the IGARSS 1999, Hamburg, Germany, 28 June–2 July 1999. [Google Scholar]
  41. Santini, N.S.; Hua, Q.; Schmitz, N.; Lovelock, C.E. Radiocarbon Dating and Wood Density Chronologies of Mangrove Trees in Arid Western Australia. PLoS ONE 2013, 8, e80116. [Google Scholar] [CrossRef] [PubMed]
  42. Shimada, M.; Isoguchi, O.; Tadono, T.; Isono, K. PALSAR radiometric and geometric calibration. IEEE Trans. Geosci. Remote Sens. 2009, 47, 3915–3932. [Google Scholar] [CrossRef]
  43. Santoro, M.; Eriksson, L.E.B.; Fransson, J.E.S. Reviewing ALOS PALSAR backscatter observations for stem volume retrieval in Swedish forest. Remote Sens. 2015, 7, 4290–4317. [Google Scholar] [CrossRef]
  44. Lee, J.S.; Grunes, M.R.; de Grandi, G. Polarimetric SAR speckle filtering and its implication for classification. IEEE Trans. Geosci. Remote Sens. 1999, 37, 2363–2373. [Google Scholar]
  45. Cloude, S.R.; Pottier, E. An entropy based classification scheme for land applications of polarimetric SAR. IEEE Trans. Geosci. Remote Sens. 1997, 35, 68–78. [Google Scholar] [CrossRef]
  46. Pottier, E.; Ferro-Famil, L.; Allain, S.; Cloude, S.; Hajnsek, I.; Papathanassiou, K.; Moreira, A.; Williams, M.; Minchella, A.; Lavalle, M.; et al. Overview of the PolSARpro V4.0 software: The open source toolbox for polarimetric and interferometric polarimetric SAR data processing. In Proceedings of the IEEE International Geoscience and Remote Sensing Symposium, Cape Town, South Africa, 12–17 July 2009; pp. IV-936–IV-939. [Google Scholar]
  47. Treuhaft, R.N.; Law, B.E.; Asner, G.P. Forest attributes from radar interferometric structure and its fusion with optical remote sensing. Bioscience 2004, 54, 561–571. [Google Scholar] [CrossRef]
  48. Amarsaikhana, D.; Blotevogelb, H.H.; van Genderenc, J.L.; Ganzoriga, M.; Gantuyaa, R.; Nerguia, B. Fusing high-resolution SAR and optical imagery for improved urban land cover study and classification. Int. J. Image Data Fusion 2010, 1, 83–97. [Google Scholar] [CrossRef]
  49. Pohl, C.; Van Genderen, J.L. Multisensor image fusion in remote sensing: Concepts, methods and applications. Int. J. Remote Sens. 1998, 19, 823–854. [Google Scholar] [CrossRef]
  50. Ehlers, M.; Klonus, S.; Johan, Ã.; Strand, P.R.; Rosso, P. Multi-sensor image fusion for pan-sharpening in remote sensing. Int. J. Image Data Fusion 2010, 1, 25–45. [Google Scholar] [CrossRef]
  51. Lang, S. Object-based image analysis for remote sensing applications: Modeling reality—Dealing with complexity. In Object-Based Image Analysis; Blaschke, T., Lang, S., Hay, G.J., Eds.; Springer: Heidelberg/Berlin, Germany; New York, NY, USA, 2008; pp. 1–25. [Google Scholar]
  52. Benz, U.C.; Hofmann, P.; Willhauck, G.; Lingenfelder, I.; Heynen, M. Multi-resolution, object-oriented fuzzy analysis of remote sensing data for GIS-ready information. ISPRS J. Photogramm. Remote Sens. 2004, 58, 239–258. [Google Scholar] [CrossRef]
  53. Baatz, M.; Schäpe, A. Multiresolution Segmentation—An optimization approach for high quality multi-scale image segmentation. In Angewandte Geographische Informationsverarbeitung XII; Strobl, J., Blaschke, T., Griesebner, G., Eds.; Wichmann: Heidelberg, Germany, 2000; pp. 12–23. [Google Scholar]
  54. Xiaoxiao, L.; Soe, W.M.; Yujia, Z.; Chritopher, G.; Xiaoxiang, Z.; Billie, L.T. Object-based land cover classification for metropolitan Phoenix, Arizona, using aerial photography. Int. J. Appl. Earth Obs. Geoinf. 2014, 33, 321–330. [Google Scholar]
  55. Definiens. Ecognition. 2009. Available online: http://www.ecognition.com (accessed on 10 June 2015).
  56. Laliberte, A.S.; Fredrickson, E.L.; Rango, A. Combining decision trees with hierarchical object-oriented image analysis for mapping arid rangelands. Photogramm. Eng. Remote Sens. 2007, 73, 197–207. [Google Scholar] [CrossRef]
  57. Marceau, D. The scale issue in the social and natural sciences. Can. J. Remote Sens. 1999, 25, 347–356. [Google Scholar] [CrossRef]
  58. Münch, Z.; Okoye, P.I.; Gibson, L.; Mantel, S.; Palmer, A. Characterizing Degradation Gradients through Land Cover Change Analysis in Rural Eastern Cape, South Africa. Geosciences 2017, 7, 7. [Google Scholar] [CrossRef]
  59. Taylor, P.J. Quantitative Methods in Geography: An Introduction to Spatial Analysis; Houghton Mifflin Boston: Boston, MA, USA, 1977. [Google Scholar]
  60. Haralick, R.M.; Shanmugam, K.; Dinstein, I. Textural features for image classification. IEEE Trans. Syst. Man Cybern 1973, 6, 269–285. [Google Scholar] [CrossRef]
  61. Hurni, K.; Hett, C.; Epprecht, M.; Messerli, P.; Heinimann, A. A texture-based land cover classification for the delineation of a shifting cultivation landscape in the Lao PDR using landscape metrics. Remote Sens. 2013, 5, 3377–3396. [Google Scholar] [CrossRef] [Green Version]
  62. Carlson, T.N.; Ripley, D.A. On the relation between NDVI, fractional vegetation cover, and Leaf Area Index. Remote Sens. Environ. 1997, 62, 241–252. [Google Scholar] [CrossRef]
  63. Wang, Z.; Liu, C.; Huete, A. From AVHRR-NDVI to MODIS-EVI: Advances in vegetation index research. Acta Ecol. Sin. 2002, 23, 979–987. [Google Scholar]
  64. Huete, A.; Didan, K.; Miura, T.; Rodriguez, E.P.; Gao, X.; Ferreira, L.G. Overview of the radiometric and biophysical performance of the MODIS vegetation indices. Remote Sens. Environ. 2002, 83, 195–213. [Google Scholar] [CrossRef]
  65. Haboudane, D.; Miller, J.R.; Pattey, E.; Zarco-Tejada, P.J.; Strachan, I.B. Hyperspectral vegetation indices and novel algorithms for predicting green LAI of crop canopies: Modelling and validation in the context of precision agriculture. Remote Sens. Environ. 2004, 90, 337–352. [Google Scholar] [CrossRef]
  66. Kauth, R.J.; Thomas, G.S. The tasseled cap-A graphic description of the spectral-temporal development of agricultural crops as seen by Landsat. In Proceedings of the Symposium on Machine Processing of Remotely Sensed Data, West Lafayette, IN, USA, 9 June–1 July 1976; pp. 4B-41–4B-50. [Google Scholar]
  67. Schönert, M.; Weichelt, H.; Zillmann, E.; Jürgens, C. Derivation of tasseled cap coefficients for RapidEye data. In Proceedings of the SPIE 9245, Earth Resources and Environmental Remote Sensing/GIS Applications V, Amsterdam, The Netherlands, 23–25 September 2014; p. 92450Q. [Google Scholar] [CrossRef]
  68. Dorigo, W.; Lucieer, A.; Podobnikar, T.; Carni, A. Mapping invasive Fallopia japonica by combined spectral, spatial, and temporal analysis of digital orthophotos. Int. J. Appl. Earth Obs. Geoinf. 2012, 19, 185–195. [Google Scholar] [CrossRef]
  69. Lu, D.; Batistella, M. Exploring TM image texture and its relationships with biomass estimation in Rondônia, Brazilian Amazon. Acta Amaz. 2005, 35, 249–257. [Google Scholar] [CrossRef]
  70. Tucker, C.J. Red and Photographic Infrared Linear Combinations for Monitoring Vegetation. Remote Sens. Environ. 1979, 8, 127–150. [Google Scholar] [CrossRef]
  71. Gitelson, A.; Kaufman, Y.; Merzlyak, M. Use of a green channel in remote sensing of global vegetation from EOS-MODIS. Remote Sens. Environ. 1996, 58, 289–298. [Google Scholar] [CrossRef]
  72. Jiang, Z.; Huete, A.R.; Didan, K.; Miura, T. Development of a two-band enhanced vegetation index without a blue band. Remote Sens. Environ. 2008, 112, 3833–3845. [Google Scholar] [CrossRef]
  73. Huete, A.R. A soil-adjusted vegetation index (SAVI). Remote Sens. Environ. 1988, 25, 295–309. [Google Scholar] [CrossRef]
  74. Qi, J.; Chehbouni, A.; Huete, A.R.; Kerr, Y.H.; Sorooshian, S. A Modified Soil Adjusted Vegetation Index. Remote Sens. Environ. 1994, 48, 119–126. [Google Scholar] [CrossRef]
  75. Gislason, P.O.; Benediktsson, J.A.; Sveinsson, J.R. Random forests for land cover classification. Pattern Recogn. Lett. 2006, 27, 294–300. [Google Scholar] [CrossRef]
  76. Vapnik, V. Estimation of Dependences Based on Empirical Data; Springer Series in Statistics; Springer: Secaucus, NJ, USA, 1982. [Google Scholar]
  77. Foody, G.M.; Mathur, A. Toward intelligent training of supervised image classifications: Directing training data acquisition for SVM classification. Remote Sens. Environ. 2004, 93, 107. [Google Scholar] [CrossRef]
  78. Van Der Linden, S.; Hostert, P. The influence of urban structures on impervious surface maps from airborne hyperspectral data. Remote Sens. Environ. 2009, 113, 2298–2305. [Google Scholar] [CrossRef]
  79. Meyer, D. Support Vector Machines. 2014. Available online: http://cran.rproject.org/web/packages/e1071/vignettes/svm-doc.pdf (accessed on 27 February 2018).
  80. Friedl, M.A.; Brodley, C.E. Decision tree classification of land cover from remotely sensed data. Remote Sens. Environ. 1997, 61, 399–409. [Google Scholar] [CrossRef]
  81. Hansen, M.; Dubayah, R.; Defries, R. Classification trees: An alternative to traditional land cover classifiers. Int. J. Remote Sens. 1996, 17, 1075–1081. [Google Scholar] [CrossRef]
  82. Lawrence, R.L.; Wright, A. Rule-based classification systems using classification and regression tree (CART) analysis. Photogramm. Eng. Remote Sens. 2001, 67, 1137–1142. [Google Scholar]
  83. Foody, G.M. Status of Land Cover Classification Accuracy Assessment. Remote Sens. Environ. 2002, 80, 185–201. [Google Scholar] [CrossRef]
  84. Rogan, J.; Miller, J.; Stow, D.; Franklin, J.; Levien, L.; Fisher, C. Land-cover change monitoring with classification trees using Landsat TM and ancillary data. Photogramm. Eng. Remote Sens. 2003, 69, 793–804. [Google Scholar] [CrossRef]
  85. McCoy, R.M. Field Methods in Remote Sensing; Guildford Press: New York, NY, USA, 2005. [Google Scholar]
  86. Congalton, R.G.; Green, K. Assessing the Accuracy of Remotely Sensed Data: Principles and Practices; Lewis Publishers: Boca Raton, FL, USA, 1999. [Google Scholar]
  87. Haack, B.; Bechdol, M. Integrating multisensor data and RADAR texture measures for land cover mapping. Comput. Geosci. 2000, 26, 411–421. [Google Scholar] [CrossRef]
  88. Laurin, G.V.; Liesenberg, V.; Chen, Q.; Guerriero, L.; Frate, F.D.; Bartolini, A.; Coomes, D.; Wilebore, B.; Lindsell, J.; Valentini, R. Optical and SAR sensor synergies for forest and land cover mapping in a tropical site in West Africa. Int. J. Appl. Earth Obs. Geoinf. 2013, 21, 7–16. [Google Scholar] [CrossRef]
  89. Lang, M.; McCarty, G. Wetland Mapping: History and Trends. In Wetlands: Ecology, Conservation and Management; Russo, R.E., Ed.; Nova Publishers: New York, NY, USA, 2008; pp. 74–112. [Google Scholar]
  90. Ramsey, E.W., III; Chappell, D.K.; Jacobs, D.; Sapkota, S.K.; Baldwin, D.G. Resource management of forested wetlands: Hurricane impact and recovery mapping by combining Landsat TM and NOAA AVHRR data. Photogramm. Eng. Remote Sens. 1998, 64, 733–738. [Google Scholar]
  91. Rodrigues, S.W.P.; Souza-Filho, P.W.M. Use of Multi-Sensor Data to Identify and Map Tropical Coastal Wetlands in the Amazon of Northern Brazil. Wetlands 2011, 31, 11–23. [Google Scholar]
  92. Ghosh, A.; Fassnacht, F.E.; Joshi, P.K.; Koch, B. A framework for mapping tree species combining hyperspectral and LiDAR data: Role of selected classifiers and sensor across three spatial scales. Int. J. Appl. Earth Obs. Geoinf. 2014, 26, 49–63. [Google Scholar] [CrossRef]
  93. Burai, P.; Deak, B.; Valko, O.; Tomor, T. Classification of herbaceous vegetation using airborne hyperspectral imagery. Remote Sens. 2014, 7, 2046–2066. [Google Scholar] [CrossRef]
  94. Ismail, R.; Mutanga, O.; Kumar, L. Modelling the potential distribution of pine forests susceptible to Sirex noctilio infestations in Mpumalanga, South Africa. Trans. GIS 2010, 14, 709–726. [Google Scholar] [CrossRef]
  95. Prasad, A.M.; Iverson, L.R.; Liaw, A. Newer classification and regression tree techniques: Bagging and random forests for ecological prediction. Ecosystems 2006, 9, 181–199. [Google Scholar] [CrossRef]
  96. Anguita, D.; Ghio, A.; Greco, N.; Oneto, L.; Ridella, S. Model Selection for Support Vector Machines: Advantages and Disadvantages of the Machine Learning Theory. In Proceedings of the International Joint Conference on Neural Networks, Barcelona, Spain, 18–23 July 2010; pp. 1–8. [Google Scholar]
  97. Wang, L.; Silván-Cárdenas, J.L.; Sousa, W.P. Neural Network Classification of Mangrove Species from Multi-seasonal Ikonos Imagery. Photogramm. Eng. Remote Sens. 2008, 74, 921–927. [Google Scholar] [CrossRef]
  98. Kamal, M.; Phinn, S.; Johansen, K. Object-Based Approach for Multi-Scale Mangrove Composition Mapping Using Multi-Resolution Image Datasets. Remote Sens. 2015, 7, 4753–4783. [Google Scholar] [CrossRef]
  99. Longepe, N.; Rakwatin, P.; Isoguchi, O.; Shimada, M.; Uryu, Y.; Yulianto, K. Assessment of ALOS PALSAR 50 m orthorectified FBD data for regional land cover classification by support vector machines. IEEE Trans. Geosci. Remote Sens. 2011, 49, 2135–2150. [Google Scholar] [CrossRef]
  100. Wang, L.; Sousa, W.P.; Gong, P.; Biging, G.S. Comparison of IKONOS and QuickBird images for mapping mangrove species on the Caribbean coast of Panama. Remote Sens. Environ. 2004, 91, 432–440. [Google Scholar] [CrossRef]
  101. Hess, L.L.; Melack, J.M.; Filoso, S.; Wang, Y. Delineation of inundated area and vegetation along the Amazon floodplain with the SIR-C synthetic aperture radar. IEEE Trans. Geosci. Remote Sens. 1995, 33, 896–904. [Google Scholar] [CrossRef]
  102. Bourgeau-Chavez, L.L.; Riordan, K.; Powell, R.B.; Miller, N.; Nowels, M. Improving wetland characterization with multi-sensor, multi-temporal SAR and optical/infrared data fusion. In Advances in Geoscience and Remote Sensing; Jedlovec, G., Ed.; InTech: Rijeka, Croatia, 2009. [Google Scholar]
  103. Souza-Filho, P.W.M.; Paradella, W.R.; Rodrigues, S.W.P.; Costa, F.R.; Mura, J.C.; Gonçalves, F.D. Discrimination of coastal wetland environments in the Amazon region based on multi-polarized L-band airborne Synthetic Aperture Radar imagery. Estuar. Coast. Shelf Sci. 2011, 95, 88–98. [Google Scholar] [CrossRef]
Figure 1. (a) Location of the study area on the Red Sea coastline in Egypt; (b) the study area overlaid on SRTM digital elevation model, and footprints of the remote sensing data used in this study; FBD: fine beam dual-polarization of ALOS/PALSAR, FBS: fine beam single polarization, RE: RapidEye, and WV-1: WorldView-1; and (c) Subset of the RapidEye high resolution image showing W. Lehmy stand.
Figure 2. Remote sensing data covering the study area (a) WorldView-1 image; (b) RapidEye image (RGB); (c) Pan-sharpened RapidEye image (NIR, R, G); and (d) dual-polarization ALOS/PALSAR data in RGB (HH, HV, HV/HH).
Figure 3. (a) RapidEye image showing mangroves at W. Lehmy stand on the Red Sea coastline, and the locations of the field photographs; (b–f) Field photographs collected during the field work in the study area, showing the mangroves swamp and the surrounding habitats.
Figure 4. The distribution of the sampling points overlaid on the pan-sharpened RapidEye image.
Figure 5. Flowchart of the proposed methodology, WV-1: WorldView-1 data, VIs: Vegetation Indices, RF: Random forest, SVM: Support-vector machine, and CART: classification and regression trees.
Figure 6. (a) H-Alpha plane plot of Cloude–Pottier decomposition; and (b) entropy of the entire scene as retrieved from ALOS/PALSAR data; the square box highlights the location of the mangroves stand.
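The entropy and alpha angle shown in Figure 6 derive from the eigen-decomposition of the polarimetric coherency matrix. A minimal numpy sketch of the Cloude–Pottier H/A/α parameters (assuming a 3 × 3 Hermitian coherency matrix T has already been estimated from the PALSAR data; function and variable names are illustrative, not from the paper):

```python
import numpy as np

def h_a_alpha(T):
    """Cloude-Pottier parameters from a 3x3 Hermitian coherency matrix T."""
    eigvals, eigvecs = np.linalg.eigh(T)      # eigenvalues in ascending order
    lam = eigvals[::-1].clip(min=0)           # lambda1 >= lambda2 >= lambda3
    vecs = eigvecs[:, ::-1]                   # reorder eigenvectors to match
    p = lam / lam.sum()                       # pseudo-probabilities
    # Entropy H in [0, 1]; log base 3 for the three scattering mechanisms
    H = -sum(pi * np.log(pi) / np.log(3) for pi in p if pi > 0)
    # Anisotropy A from the two minor eigenvalues
    A = (lam[1] - lam[2]) / (lam[1] + lam[2]) if (lam[1] + lam[2]) > 0 else 0.0
    # Mean alpha angle (degrees), weighted by the pseudo-probabilities
    alphas = np.degrees(np.arccos(np.abs(vecs[0, :])))
    alpha = float(np.sum(p * alphas))
    return H, A, alpha
```

For a fully depolarized target (T proportional to the identity matrix) this yields H = 1, A = 0, and a mean alpha of 60°, the center of the random-scattering region of the H-Alpha plane.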
Figure 7. (a) Pan-sharpened RapidEye image used for segmentation; (b) the results of segmentation at [0.1, 0.5, and 5] for shape, compactness, and scale parameters; (c) using the object-feature (NIR band) for mangroves feature extraction; and (d) delineation of the canopies of mangroves trees in the study area on the Red Sea coastline based on the selected segmentation parameters and extracted object-feature.
Figure 8. (a) Overall classification accuracies; and (b) Kappa coefficients of the classified categories based on the RF, CART, and SVM classifiers.
Figure 9. (a) Producer’s accuracy (PA%); and (b) user’s accuracy (UA%) for the different mangroves ecosystem classes based on the RF, CART, and SVM classifications.
Figure 10. (a) Classification output of the integrated SAR and optical dataset using the RF classifier, displaying the different land cover classes of the mangroves ecosystem in the study area; (b) shows the delineation of the canopies of the mangroves trees.
Table 1. Vegetation indices, and texture features used in this study.
| Category | Variable | Algorithm | Reference |
|---|---|---|---|
| VIs | NDVI | (NIR − R)/(NIR + R) | [70] |
| | gNDVI | (NIR − G)/(NIR + G) | [71] |
| | EVI2 | 2.5(NIR − R)/(NIR + 2.4R + 1.0) | [72] |
| | SAVI | 1.5(NIR − R)/(NIR + R + 0.5) | [73] |
| | MSAVI | [2NIR + 1 − √((2NIR + 1)² − 8(NIR − R))]/2 | [74] |
| GLCM texture | MEN | f_men = Σ_{i,j=0}^{N−1} i·p_ij | [60] |
| | VAR | f_var = Σ_{i,j=0}^{N−1} p_ij (i − μ_i)² | |
| | HOM | f_hom = Σ_{i,j=0}^{N−1} p_ij / [1 + (i − j)²] | |
| | CON | f_con = Σ_{i,j=0}^{N−1} p_ij (i − j)² | |
| | ENT | f_ent = −Σ_{i,j=0}^{N−1} p_ij ln p_ij | |
| | DIS | f_dis = Σ_{i,j=0}^{N−1} p_ij |i − j| | |
| | COR | f_cor = Σ_{i,j=0}^{N−1} p_ij [(i − μ_i)(j − μ_j)/√(σ_i² σ_j²)] | |
| | SEC | f_sec = Σ_{i,j=0}^{N−1} p_ij² | |

NDVI: normalized difference vegetation index; gNDVI: green normalized difference vegetation index; EVI2: two-band enhanced vegetation index; SAVI: soil-adjusted vegetation index; MSAVI: modified soil-adjusted vegetation index; NIR: near infrared; R: red; G: green. p_ij = v_ij / Σ_{i,j=0}^{N−1} v_ij, where v_ij is the value in cell (i, j) of the moving window and N is the number of rows or columns.
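The indices and GLCM measures in Table 1 reduce to band math and window statistics over the normalized co-occurrence matrix p_ij. A minimal numpy sketch (function names, the quantization scheme, and the single-offset GLCM are our own illustrative choices, not the paper's implementation):

```python
import numpy as np

def vegetation_indices(nir, r, g):
    """Band math for the VIs in Table 1 (arrays in surface reflectance)."""
    return {
        "NDVI":  (nir - r) / (nir + r),
        "gNDVI": (nir - g) / (nir + g),
        "EVI2":  2.5 * (nir - r) / (nir + 2.4 * r + 1.0),
        "SAVI":  1.5 * (nir - r) / (nir + r + 0.5),
        "MSAVI": (2 * nir + 1 - np.sqrt((2 * nir + 1) ** 2 - 8 * (nir - r))) / 2,
    }

def glcm_features(img, levels=8, offset=(0, 1)):
    """Normalized GLCM p_ij for one pixel offset, plus Table 1 texture measures."""
    edges = np.linspace(img.min(), img.max(), levels + 1)[1:-1]
    q = np.digitize(img, edges)                 # quantize to `levels` gray levels
    dy, dx = offset
    a = q[: q.shape[0] - dy, : q.shape[1] - dx]  # reference pixels
    b = q[dy:, dx:]                              # neighbor pixels
    p = np.zeros((levels, levels))
    np.add.at(p, (a.ravel(), b.ravel()), 1)      # accumulate co-occurrences
    p /= p.sum()                                 # normalize to probabilities
    i, j = np.indices(p.shape)
    return {
        "MEN": (i * p).sum(),
        "CON": (p * (i - j) ** 2).sum(),
        "HOM": (p / (1 + (i - j) ** 2)).sum(),
        "DIS": (p * np.abs(i - j)).sum(),
        "ENT": -(p[p > 0] * np.log(p[p > 0])).sum(),
        "SEC": (p ** 2).sum(),
    }
```

For a perfectly homogeneous window the texture measures take their limiting values (zero contrast and entropy, homogeneity and second moment equal to one), which is a quick sanity check for any GLCM implementation.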
Table 2. Proposed scenarios for the classification schemes using optical data of RapidEye, SAR data of ALOS/PALSAR, and the integrated optical and SAR data.
| Category | Subgroup | Datasets | Selected Features and Combinations |
|---|---|---|---|
| GA | GA1 | Spectral bands | B, G, R, Red Edge, NIR |
| GA | GA2 | Spectral bands, VIs, and PCA | B, G, R, Red Edge, NIR, VIs, pc1, pc2 |
| GA | GA3 | Spectral bands, VIs, PCA, and texture | B, G, R, Red Edge, NIR, VIs, pc1, pc2, texture |
| GB | GB1 | SAR bands | HH, HV |
| GB | GB2 | SAR bands, PolSAR parameters, and GLCM texture | HH, HV, HV/HH, HV + HH, HV − HH, H, A, α, GLCM texture |
| GC | GC1 | Spectral bands and SAR bands | B, G, R, Red Edge, NIR, HH, HV |
| GC | GC2 | Spectral bands, VIs, SAR bands, and PolSAR parameters | B, G, R, Red Edge, NIR, VIs, HH, HV, HV/HH, HV + HH, HV − HH, H, A, α |
| GC | GC3 | Spectral bands, SAR bands, and PolSAR parameters | B, G, R, Red Edge, NIR, HH, HV, HV/HH, HV + HH, HV − HH, H, A, α |
| GC | GC4 | Spectral bands, VIs, and SAR bands | B, G, R, Red Edge, NIR, VIs, HH, HV |
| GC | GC5 | Spectral bands, VIs, SAR bands, PolSAR parameters, and texture | B, G, R, Red Edge, NIR, VIs, HH, HV, HV/HH, HV + HH, HV − HH, H, A, α, GLCM texture |

GA: optical data; GB: SAR data; GC: integrated optical and SAR data; PCA: principal components; B: blue; G: green; R: red; NIR: near infrared; PolSAR: polarimetric SAR; VIs: vegetation indices; H: entropy; A: anisotropy; and α: alpha angle.
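The scenarios of Table 2 amount to different feature stacks fed to the same classifiers. A bookkeeping sketch of how such scenarios might be declared before stacking the corresponding raster layers (band names are placeholders for the actual layers, not an API from the paper):

```python
# Feature groups from Table 2; a classification run for a scenario would stack
# the named raster layers in this order before training RF/CART/SVM.
OPTICAL = ["B", "G", "R", "RedEdge", "NIR"]
SAR = ["HH", "HV"]
VIS = ["NDVI", "gNDVI", "EVI2", "SAVI", "MSAVI"]
POLSAR = ["HV/HH", "HV+HH", "HV-HH", "H", "A", "alpha"]

SCENARIOS = {
    "GA1": OPTICAL,                      # optical bands only
    "GB1": SAR,                          # SAR bands only
    "GC1": OPTICAL + SAR,                # integrated bands
    "GC3": OPTICAL + SAR + POLSAR,       # + PolSAR parameters (best OA in Table 4)
}
```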
Table 3. Backscatter statistics of PALSAR data for each of the five classes in the mangroves ecosystem.
| Class | HH Range (dB) | HH Mean | HH SD | HV Range (dB) | HV Mean | HV SD |
|---|---|---|---|---|---|---|
| WA | −26.54 to −18.59 | −23.66 | 2.15 | −29.40 to −25.96 | −28.04 | 1.07 |
| MV | −10.98 to −5.72 | −8.19 | 1.35 | −20.37 to −14.60 | −16.86 | 1.18 |
| TZ | −18.77 to −15.51 | −16.10 | 3.36 | −27.96 to −24.99 | −26.38 | 0.65 |
| WG | −18.67 to −15.30 | −17.15 | 1.14 | −28.81 to −26.68 | −27.77 | 0.55 |
| CP | −24.27 to −20.58 | −22.46 | 0.94 | −29.11 to −27.44 | −28.26 | 0.48 |
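Statistics like those in Table 3 are computed from calibrated backscatter within each class mask. A sketch of the standard PALSAR amplitude-to-σ⁰ conversion and the per-class summary (the calibration factor of −83.0 dB is the commonly used value for PALSAR level 1.5 products; in practice it should be read from the product metadata):

```python
import numpy as np

def sigma0_db(dn, cf=-83.0):
    """Convert PALSAR amplitude digital numbers to sigma-naught in dB."""
    dn = np.asarray(dn, dtype=float)
    return 10.0 * np.log10(dn ** 2) + cf

def class_stats(sigma0, mask):
    """Min, max, mean, and SD of backscatter inside a class mask (as in Table 3)."""
    vals = sigma0[mask]
    return vals.min(), vals.max(), vals.mean(), vals.std()
```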
Table 4. Classification Overall accuracies and Kappa coefficient of the classified datasets based on RF, CART, and SVM classifiers.
| Category | Subgroup | OA% (RF) | OA% (CART) | OA% (SVM) | K% (RF) | K% (CART) | K% (SVM) |
|---|---|---|---|---|---|---|---|
| GA (optical data) | GA1 | 86.78 | 74.42 | 60.33 | 83.44 | 68.14 | 50.73 |
| | GA2 | 89.26 | 83.72 | 74.79 | 86.57 | 79.68 | 68.63 |
| | GA3 | 82.23 | 26.03 | 76.45 | 77.86 | 9.02 | 70.63 |
| GB (SAR data) | GB1 | 59.92 | 53.31 | 38.43 | 50.15 | 42.71 | 21.19 |
| | GB2 | 69.83 | 54.65 | 45.04 | 62.21 | 44.36 | 32.31 |
| GC (integrated optical and SAR data) | GC1 | 74.42 | 63.95 | 75.97 | 68.28 | 54.74 | 70.23 |
| | GC2 | 84.71 | 68.18 | 80.23 | 80.89 | 60.62 | 75.29 |
| | GC3 | 92.15 | 88.43 | 62.02 | 90.18 | 85.53 | 52.32 |
| | GC4 | 87.60 | 84.11 | 52.71 | 84.55 | 80.04 | 38.83 |
| | GC5 | 84.30 | 78.10 | 79.25 | 80.42 | 72.78 | 74.04 |
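The overall accuracy and Kappa coefficient reported in Table 4 follow directly from the confusion matrix of each classification; a minimal sketch:

```python
import numpy as np

def oa_kappa(cm):
    """Overall accuracy and Cohen's kappa (both in %) from a confusion matrix
    with rows = reference classes and columns = classified classes."""
    cm = np.asarray(cm, dtype=float)
    n = cm.sum()
    po = np.trace(cm) / n                         # observed agreement (OA)
    pe = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / n ** 2  # chance agreement
    return 100 * po, 100 * (po - pe) / (1 - pe)
```

For example, a balanced two-class matrix [[40, 10], [10, 40]] gives an OA of 80% but a Kappa of only 60%, illustrating why the two measures diverge in Table 4.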

Abdel-Hamid, A.; Dubovyk, O.; Abou El-Magd, I.; Menz, G. Mapping Mangroves Extents on the Red Sea Coastline in Egypt using Polarimetric SAR and High Resolution Optical Remote Sensing Data. Sustainability 2018, 10, 646. https://doi.org/10.3390/su10030646
