Article

Classification of Land Cover in Complex Terrain Using Gaofen-3 SAR Ascending and Descending Orbit Data

1 College of Computer and Information Engineering, Henan University, Kaifeng 475004, China
2 Henan Engineering Research Center of Intelligent Technology and Application, Henan University, Kaifeng 475004, China
3 Key Laboratory of Big Data Analysis and Processing, Henan University, Kaifeng 475004, China
4 School of Artificial Intelligence, Henan University, Kaifeng 475004, China
* Author to whom correspondence should be addressed.
Remote Sens. 2023, 15(8), 2177; https://doi.org/10.3390/rs15082177
Submission received: 28 February 2023 / Revised: 16 April 2023 / Accepted: 16 April 2023 / Published: 20 April 2023

Abstract: Synthetic aperture radar (SAR) imagery is an effective remote sensing data source for geographic surveys. However, accurate land cover mapping based on SAR images in areas of complex terrain remains a challenge, due to serious geometric distortions and the inadequate separation ability of dual-polarization data. To address these issues, a new land cover mapping framework suitable for complex terrain is proposed, based on Gaofen-3 data from ascending and descending orbits. Firstly, the geometric distortion areas, including layover and shadow, are determined according to the local incidence angle, based on analysis of the SAR imaging mechanism, and the correct polarization information from the opposite orbit is used to compensate for them. Then, the dual orbital polarization characteristics (DOPC) and dual polarization radar vegetation index (DpRVI) of the dual-pol SAR data are extracted, and the optimal feature combination is found by means of Jeffries–Matusita (J-M) distance analysis. Finally, the deep learning method 2D convolutional neural network (2D-CNN) is applied to classify the compensated images. The proposed method was applied to a mountainous region of the Danjiangkou ecological protection area in China. The accuracy and reliability of the method were experimentally compared with those obtained from the uncompensated images and from the images without DpRVI. Quantitative evaluation revealed that the proposed method achieved better performance in complex terrain areas, with an overall accuracy (OA) of 0.93 and a Kappa coefficient of 0.92. Compared with the uncompensated image, OA increased by 5% and Kappa increased by 6%. Compared with the images without DpRVI, OA increased by 4% and Kappa increased by 5%. In summary, the results demonstrate the importance of ascending and descending orbit data for compensating geometric distortion and reveal the effectiveness of the optimal feature combination including DpRVI. Its simple and effective polarization information compensation capability can broaden the promising application prospects of SAR images.

Graphical Abstract

1. Introduction

Accurate and timely land-cover mapping can provide a reliable data basis for regional development and environmental management [1]. Many land-cover products have been developed with traditional optical images. However, high-quality optical images cannot always be obtained, due to weather restrictions. Fortunately, with the development of active microwave remote sensing technology, synthetic aperture radar (SAR) is becoming an important new data source for land-cover mapping, owing to its all-weather imaging capability. However, the backscattering energy of ground objects is influenced not only by the surface roughness but also by the surrounding ground objects, the local topography, and the incidence angle of the SAR sensor. These uncertain factors limit the classification accuracy of SAR images, especially in complex terrain areas. Complex terrain refers to areas with significant variations in terrain height, shape, and detail, which can make land-cover identification and area quantification challenging. Research on such areas has attracted widespread interest [2,3].
Recently, the effectiveness of SAR images in land-cover classification has been demonstrated [4,5]. However, most of these methods focus on the whole region without considering the geometric distortion of complex terrain, which leads to errors in land-cover classification. Mountain areas cover about 24% of the earth’s land area. The side-looking and range-imaging characteristics of SAR sensors cause geometric distortion in the images [6,7], such as layover and shadow, which may make the scattering characteristics within the images inconsistent with the actual scene. This issue is especially acute in rolling mountains or other complicated terrain. For example, the layover phenomenon can raise the backscattering coefficient to a level comparable with that of building areas, and it is impossible to perceive the real types of ground objects in the layover area [8]. Moreover, the intensity and scattering angle of SAR responses are sensitive to the land-surface conditions, especially in regions of complex terrain with varied land-surface conditions and diverse vegetation types. Exploring more effective feature combinations of SAR data in complex terrain regions is therefore an important task for further improving land-cover classification accuracy [9,10].
At present, research into land-cover classification in complex terrain areas concentrates mainly on two aspects. First, research on geometric distortion regions focuses mainly on their detection [11,12,13]. Second, some researchers are trying to extract efficient features to further increase the land-object discriminative capacity of SAR data.
In view of the image-quality degradation caused by the deformation inherent to complex terrain, some researchers have focused on interferogram analysis and geometric modeling to detect geometric distortion areas [14,15,16]. Ren et al. (2013) used the correlation coefficient and interference amplitude to detect layover and shadow areas [17]. Zhang et al. (2019) determined the layover and shadow areas by using a geometric model of radar satellite imaging and a morphological method [18]. These studies showed that both the interferogram detection method and the geometric model can detect the geometric distortion region to a certain extent. However, the polarization information of the geometric distortion areas is not supplemented or corrected, so it cannot properly describe the types of ground objects and cannot be directly applied to classification. Hence, compensation of the geometric distortion areas is essential for enhancing the accuracy of land-cover classification based on SAR data. Mahdavi et al. (2019) found that the cross-track image can collect most of the information lost by the sensor in the line of sight of the opposite orbit [19]. Borlaf-Mena et al. (2021) treated the layover and shadow areas separately to avoid the influence of geometric distortion, but the real information contained in these areas remained unknown [20]. To date, only a few researchers have tried to use multi-track data to compensate for the effects of geometric distortion [21,22].
An effective combination of features is one of the key factors in land-cover classification using SAR images [23,24,25,26]. Up to now, the backscattering coefficient and polarization characteristics have been the main features used for SAR land-cover classification. Several researchers have investigated feature combinations using full-polarimetric SAR data. Guo et al. (2018) extracted twenty-two feature parameters from Radarsat-2 full-pol SAR images and then selected the best features with a feature selection method to identify rice [27]. Sayedain et al. (2020) concluded that better land-cover classification results could be obtained by combining NDVI with full-pol features [28]. Shen et al. (2020) used GF-3 data to extract polarization decomposition features and the backscattering coefficient for water extraction [29]. Compared with the full-pol modes, dual-pol modes are easier to obtain, with larger swath width and lower data volumes. However, the feature separability of different land objects in dual-polarization data is lower and more susceptible to the impact of complex terrain. Therefore, an appropriate feature combination is important to ensure the accuracy of land-cover classification based on dual-pol modes with limited polarimetric information. Backscatter intensity from polarized channels has been widely used to identify different types of land cover [30,31,32]. Other derived information, such as the H-α polarization decomposition parameters, the grey level co-occurrence matrix (GLCM) [33,34,35], and the co-pol ratio, has been explored for classification and valuable classification results have been obtained [36]. Recently, DpRVI has been proposed for dual-pol SAR data and has been successfully utilized in crop-growth monitoring and studies of soil moisture [37,38]. However, the effect of DpRVI and its feature combinations on land-cover classification in complex terrain areas still needs to be investigated. Moreover, these assessments have mainly been conducted using VV-VH SAR data. Further experiments using the HH-HV mode are required, as the land-object responses to horizontally and vertically polarized transmitted waves could be different.
Based on the analysis of the research status described above, the main problems faced in the classification of ground objects in complex terrain areas can be summarized as follows: on the one hand, current research mainly focuses on the detection of geometric distortion areas, and the real classification of ground objects within these areas has not been explored; on the other hand, the effect of DpRVI on land-cover classification has not been investigated. In view of these two problems, this research aims to develop a classification method for complex terrain based on the dual-pol mode (HH-HV) SAR data from the ascending and descending orbits of the Gaofen-3 satellite, to improve the accuracy of land-cover classification. Firstly, by analyzing the geometric model of radar satellite imaging, the local incidence angle is obtained from the radar look angle and the slope angle; the layover and shadow areas are identified by conditional judgment and compensated with the data of the opposite orbit. Then, the relevant features are extracted for each ground object. Based on the dual-orbit polarization data, the DpRVI is introduced to characterize the vegetation-cover type, and Jeffries–Matusita (J-M) distance analysis is performed to select the optimized feature combination. Finally, the deep learning method 2D-CNN, with its excellent classification ability, is applied to classify the compensated images. The main contributions of this paper are as follows: a framework is designed that compensates the geometric distortion areas using the data of the opposite trajectory and determines the specific types of ground objects within the detected geometric distortion areas. This broadens the application range of SAR remote sensing technology and has good application prospects in the field of land-cover classification. Furthermore, the paper demonstrates the application of DpRVI to land-cover classification using dual-pol HH-HV SAR data, especially its ability to identify vegetation types.

2. Materials and Methods

2.1. Study Area

The study area is located in the Danjiangkou reservoir area (111°24′35″E~111°52′16″E, 32°31′6″N~32°50′21″N), which is the core water source area of the South-to-North Water Transfer Project and an important ecological protection area in China, as shown in Figure 1. Accurate land-cover investigation is of great significance for further protection of the ecological environment.
According to official data for 2020 released by the Bureau of Land and Resources Utilization, buildings, woodlands, farmland, and water are the main types of ground elements. The slope-angle map on the right side of Figure 1b shows that the terrain fluctuates greatly. In the west and south of the study area, there are continuous mountainous areas which cause serious geometric distortion in SAR images. Major land-cover types in the eastern and northern parts of the study area are towns and farmland. The study area is characterized by its complex terrain and relatively rich land-cover types.

2.2. Data and Preprocessing

2.2.1. Gaofen-3 Data and Preprocessing

Gaofen-3 is a remote sensing satellite of China’s High-Resolution Earth Observation System Project. It is the first C-band multi-polarization SAR imaging satellite in China and offers multiple imaging modes, a large imaging width (10~650 km), multiple resolutions (1~100 m), and multiple polarization modes. This study utilizes two images from the Gaofen-3 fine stripe II (FSII) SLC product, one from the ascending orbit and one from the descending orbit, with a resolution of 10 m in dual-polarization (HH-HV) mode. The detailed data parameters are shown in Table 1.

2.2.2. Auxiliary Data

In this study, shuttle radar topography mission (SRTM) DEM data with a resolution of 30 m were selected as auxiliary data. The DEM was used for terrain correction and for constructing the geometric model of SAR imaging.
The study area is mostly mountainous, and it is difficult to obtain sufficient actual land-cover information from field trips. Hence, corresponding 1 m resolution optical images from Google Earth acquired in the same period were selected as reference annotation sources to provide land-cover samples for visual interpretation. These images were also used to qualitatively evaluate the relative accuracy of the land-cover classification. The distribution of land-cover samples is shown in Figure 2. Specifically, 148 parcels were selected as samples, including 33 building, 58 farmland, 11 water, and 46 forest parcels. In this study, 60% of the samples were randomly selected as training samples, and the remaining 40% were used as test samples.

2.3. Geometric Distortion Region Analysis

After standard image pre-processing, such as geocoding and topographic correction, geometric distortion remains present in a large number of SAR images, which limits their classification accuracy in complex terrain. The premise of dealing with a geometric distortion area is to identify its specific location. In this paper, the spatial geometric relationship of synthetic aperture radar imaging is utilized for this purpose.
The imaging geometry of SAR is shown in Figure 3. The projection order of the ground-target points depends on their slant range: more distant ground objects are projected onto the image later, so the image position of a target is determined by the order in which its echo is received. The image distance between two targets B and C can be expressed as:
$B'C' = K\left(R_B - R_C\right)$ (1)
where B′ and C′ are the slant-range image coordinates of targets B and C, $R_B$ and $R_C$ are the slant ranges of the target points, and K is the scale coefficient between the actual distance and the image distance. The difference between $\beta_2$ and $\beta_3$ is very small. Denoting the oblique irradiation angle as $\lambda = \frac{\pi}{2} - \beta_3 + \alpha$, the image distance can also be calculated as follows:
$B'C' = BC\cos\left(\frac{\pi}{2} - \beta_3 + \alpha\right)$ (2)
It can be seen from Equation (2) that the larger the incidence angle, the smaller the scale coefficient K. The phenomenon in Figure 3 reflects the near-range compression (foreshortening) characteristic of SAR imaging.
As the slope angle increases, the relief of the terrain becomes more pronounced. As shown in Figure 4a, the slant range from the top of the slope decreases continuously and the local incidence angle decreases continuously, until the slant range from the top of the slope B2 to the radar satellite equals the slant range from the bottom of the slope A to the satellite. At this point, the positions of the bottom and the top of the slope coincide in the image, and the local incidence angle is 0°. Beyond this critical point, as the slope angle increases further, the top of the slope B3 is recorded before the bottom of the slope A in the SAR image, thus producing a layover area.
When the satellite illuminates the back slope of a mountain, the tilt angle of the back slope is too large and the radar beam is blocked by the mountain top. The beam therefore cannot illuminate the target area on the back slope, so no echo signal is returned from this area, which becomes a shadow area. As shown in Figure 4b, when the back-slope angle is $\alpha_2$, the slope is just at the critical point of the shaded area. For the back slope, the local incidence angle $\theta$ can be expressed as:
$\theta = \beta + \alpha$ (3)
When the local incidence angle $\theta$ is greater than 90°, the radar signal is blocked by the mountain and the area becomes a shadow area.
In general, the geometry of SAR images is determined by the local incidence angle of the target point. When the local incidence angle is less than 0°, the radar incidence angle is smaller than the slope angle. The imaging position of the top of the slope in the SAR image is in front of the bottom of the slope, which is the layover area. When the local incidence angle is larger than 90°, the radar signal is occluded and no data are generated; this is the shaded region. Mountainous areas in complex terrain fluctuate greatly, and the problem of layover and shadow is more prominent.

2.4. Land-Cover Classification Method for Complex Terrain Area

The flowchart of the proposed method is shown in Figure 5. After data preprocessing, the land-cover classification process proceeds as follows. Firstly, the geometric model of SAR satellite imaging is analyzed and the geometrically distorted regions are identified from the range of the local incidence angle. Secondly, the geometric distortion areas are compensated using images from the opposite orbit, in order to reduce the adverse effects of layover and shadow on the classification accuracy of ground objects. Thirdly, the dual-orbit polarization characteristics (DOPC) and DpRVI are extracted from the compensated image, and J-M distance analysis is carried out to verify the sample separability of different feature combinations. Finally, the proposed feature combination is classified with a 2D-CNN to generate the land-cover classification model. Its accuracy is evaluated and compared with the DOPC + DpRVI feature combination based on uncompensated images and with DOPC without DpRVI based on compensated images.

2.4.1. Geometric Distortion Area Detection

On the basis of the above analysis, the geometric conditions of the layover and shadow areas were analyzed according to the spatial geometric relationships at SAR imaging time. Satellite orbit data and DEM data of the study area were used to construct the SAR imaging model, a process that mainly involves spatial coordinate transformation and morphological filtering. Firstly, the position parameters of Gaofen-3 were calculated to fit the orbit, and the coordinate system and the three-dimensional coordinates of the study area obtained from the DEM were unified to construct the imaging model. Secondly, the local incidence angle was evaluated against threshold conditions to determine the rough geometric distortion area. Finally, morphological filtering was performed to reduce the influence of noise and obtain the refined geometric distortion area. The detailed implementation steps are shown in Figure 6.
In detail, the satellite orbit parameters and a polynomial fitting method were used to fit the orbit of the satellite. The orbit parameter file contains the position vector (XS, YS, ZS) and velocity vector (XV, YV, ZV) of a series of satellite state records, and the fitted expressions are shown in Table 2, where $a_i$, $b_i$, $c_i$, $A_i$, $B_i$, and $C_i$ are the coefficients to be determined and t is the azimuth time. The coefficients were obtained by fitting the polynomial orbit equations to the actual data.
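As a minimal illustration of this fitting step, the sketch below fits one polynomial per state-vector component with NumPy. The cubic order, the sample epochs, and the state values are assumptions for illustration only; they are not the actual Gaofen-3 orbit records.

```python
import numpy as np

# Placeholder orbit records: epoch times (s) and X position / X velocity samples.
# In practice these come from the Gaofen-3 orbit parameter file.
t = np.array([0.0, 10.0, 20.0, 30.0, 40.0, 50.0])
xs = np.array([7071000.0, 7070500.0, 7069800.0, 7068900.0, 7067800.0, 7066500.0])
xv = np.array([-45.0, -55.0, -65.0, -75.0, -85.0, -95.0])

# Fit an independent cubic polynomial to each state component; the paper fits
# position coefficients (a_i, b_i, c_i) and velocity coefficients (A_i, B_i, C_i)
# separately, as listed in Table 2.
a_coeff = np.polyfit(t, xs, deg=3)   # position coefficients (highest order first)
A_coeff = np.polyfit(t, xv, deg=3)   # velocity coefficients

# Interpolate the satellite position and velocity at an arbitrary azimuth time.
t_query = 23.7
x_pos = np.polyval(a_coeff, t_query)
x_vel = np.polyval(A_coeff, t_query)
print(f"X position at t={t_query}s: {x_pos:.1f} m, X velocity: {x_vel:.2f} m/s")
```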
The three-dimensional coordinates of the research area were obtained from the DEM, an ordered numerical array that describes the surface morphology and the spatial distribution of elevation within the whole scene. The satellite orbit coordinates are given in the WGS-84 Cartesian coordinate system, while the DEM adopts a geodetic coordinate system. Therefore, the satellite orbit coordinates (X, Y, Z) were converted into geodetic latitude, longitude, and height (B, L, H) to establish the unified coordinate system required for the experiment. The specific coordinate conversion expressions are:
$B = \arctan\dfrac{z + N e^{2}\sin B}{\sqrt{x^{2} + y^{2}}}$ (4)
$L = \arctan\dfrac{y}{x}$ (5)
$H = \dfrac{\sqrt{x^{2} + y^{2}}}{\cos B} - N$ (6)
where N is the radius of curvature in the prime vertical, that is:
$N = \dfrac{a}{\sqrt{1 - e^{2}\sin^{2} B}}$ (7)
where a = 6,378,137 m is the semi-major axis of the WGS-84 ellipsoid and e is its first eccentricity.
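A compact sketch of Equations (4)–(7) is given below. Because B appears on both sides of Equation (4), the latitude is solved iteratively; the WGS-84 constants are standard values, and the sample point is purely illustrative.

```python
import math

A_WGS84 = 6378137.0          # WGS-84 semi-major axis a (m)
E2 = 0.00669437999014        # WGS-84 first eccentricity squared e^2

def xyz_to_blh(x, y, z, tol=1e-12, max_iter=50):
    """Convert WGS-84 Cartesian (X, Y, Z) to geodetic (B, L, H) via Eqs. (4)-(7)."""
    p = math.hypot(x, y)
    L = math.atan2(y, x)                                       # Eq. (5)
    B = math.atan2(z, p * (1.0 - E2))                          # initial latitude guess
    for _ in range(max_iter):
        N = A_WGS84 / math.sqrt(1.0 - E2 * math.sin(B) ** 2)   # Eq. (7)
        B_new = math.atan2(z + N * E2 * math.sin(B), p)        # Eq. (4), solved iteratively
        if abs(B_new - B) < tol:
            B = B_new
            break
        B = B_new
    N = A_WGS84 / math.sqrt(1.0 - E2 * math.sin(B) ** 2)
    H = p / math.cos(B) - N                                    # Eq. (6)
    return math.degrees(B), math.degrees(L), H

# Illustrative point only (roughly on the ellipsoid surface).
print(xyz_to_blh(-2148744.0, 4426641.0, 4044655.0))
```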
According to actual needs, the geodetic coordinates of the known target points were further converted into Gauss plane coordinates. Thus, the geometric relationship between the radar satellite and the illuminated target can be solved.
On the basis of the unified coordinate system, a geometric model was constructed to obtain the instantaneous local incident angle at this time, as shown in Figure 7.
The local incidence angle θ is expressed as:
$\theta = \beta - \alpha$ (8)
where β is the radar look angle and α is the terrain inclination angle at the target point. When θ < 0°, the target area exhibits top–bottom displacement, which corresponds to the layover area. When θ > 90°, the target area produces no return signal, which corresponds to the shadow area.
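This conditional judgment can be written as simple array operations. The sketch below assumes that the look-angle and slope rasters (in degrees, with slopes facing the sensor taken as positive) have already been derived from the orbit model and the DEM; the variable names and toy values are hypothetical.

```python
import numpy as np

def detect_distortion(look_angle_deg, slope_deg):
    """Flag layover and shadow pixels from the local incidence angle, Eq. (8)."""
    theta = look_angle_deg - slope_deg    # local incidence angle: theta = beta - alpha
    layover = theta < 0.0                 # top of the slope imaged before its base
    shadow = theta > 90.0                 # back slope blocked by the mountain
    return layover, shadow

# Toy rasters: a constant 32-degree look angle and a synthetic slope field
# (negative slopes face away from the sensor).
look = np.full((4, 4), 32.0)
slope = np.array([[5, 20, 40, 60],
                  [5, 20, 40, 60],
                  [-10, -40, -70, -80],
                  [0, 10, 30, 50]], dtype=float)
layover_mask, shadow_mask = detect_distortion(look, slope)
print(layover_mask.sum(), "layover pixels,", shadow_mask.sum(), "shadow pixels")
```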
In the detected rough geometric distortion area, morphological methods were then used to remove burr noise. Image erosion and dilation algorithms are the basis of morphological image processing; they extract the geometric features of useful objects from images according to the logical relationships between pixels. The method used in this paper applies dilation first and then erosion (morphological closing).
  • Dilation operation:
    $A \oplus B = \left\{(x, y) \mid (B)_{xy} \cap A \neq \varnothing\right\}$ (9)
The rough geometric distortion region is the dataset A, where 1 denotes the geometric distortion region and 0 denotes the normal region. This formula shows that the dataset A is dilated by the structuring element B: the origin of B is translated to the image pixel position (x, y), and if the intersection of B and A at (x, y) is not empty, the corresponding unit (x, y) of the output image is assigned 1; otherwise it is assigned 0.
  • Erosion operation:
    $A \ominus B = \left\{(x, y) \mid (B)_{xy} \subseteq A\right\}$ (10)
This formula shows that the structuring element B is used to erode A. When the origin of B is translated to the pixel position (x, y) of image A, if B at (x, y) is completely contained in the distortion region of A, then the pixel (x, y) of the output image is identified as a geometric distortion region with value 1; otherwise it is identified as a normal region with value 0.
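A minimal sketch of this dilation-then-erosion (morphological closing) step with SciPy is shown below; the 3 × 3 structuring element and the toy mask are assumptions, since the paper does not state the kernel size.

```python
import numpy as np
from scipy import ndimage

# Rough geometric-distortion mask A: 1 = distorted, 0 = normal (toy example).
rough_mask = np.array([[0, 1, 0, 1, 1],
                       [0, 1, 1, 1, 1],
                       [0, 0, 1, 1, 0],
                       [0, 0, 0, 0, 0]], dtype=bool)

structure = np.ones((3, 3), dtype=bool)   # structuring element B (assumed size)

# Dilation first (Eq. (9)), then erosion (Eq. (10)) = morphological closing,
# which fills small gaps and removes burr noise in the detected region.
dilated = ndimage.binary_dilation(rough_mask, structure=structure)
refined = ndimage.binary_erosion(dilated, structure=structure)
# Equivalent one-liner: ndimage.binary_closing(rough_mask, structure=structure)
print(refined.astype(int))
```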

2.4.2. Geometric Distortion Area Compensation

Detection identifies the position of the geometric distortion areas, but it cannot recover effective polarization information; the erroneous scattering characteristics in layover and shadow areas therefore still lead to incorrect classification. Images of a specific area obtained by SAR sensors from different tracks may contain different polarization information and have different geometric distortion areas because of the different incidence angles. That is, areas that are geometrically distorted in the image of the current trajectory may not be deformed in the image of the opposite orbit. Therefore, most of the information lost in the line of sight of the sensor can be collected from the opposite orbit.
An image from any orbit can be selected as the primary image, and the image of the opposite orbit is selected as the secondary image. In this paper, the descending orbit image was set as the main image and the ascending orbit image was set as the secondary image. The geometric distortion area in the main image is replaced by the pixel value at the corresponding position in the secondary image. The image-fusion formula of ascending orbit and descending orbit is as follows:
$R(i, j) = \begin{cases} S(i, j), & M(i, j) \in A_L \cup A_S \\ M(i, j), & M(i, j) \in A_N \end{cases}$ (11)
where M(i,j) is the pixel value in the main image; $A_L$, $A_S$, and $A_N$ are the layover area, shadow area, and normal area in the main image, respectively; S(i,j) is the pixel value at the corresponding position in the secondary image; and R(i,j) is the pixel value in the compensated image based on the data of the ascending and descending orbits.
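Equation (11) amounts to a masked pixel replacement. The sketch below assumes the ascending (secondary) image has already been co-registered to the descending (main) image grid; the array names and values are illustrative only.

```python
import numpy as np

def compensate(main_img, secondary_img, layover_mask, shadow_mask):
    """Replace layover/shadow pixels of the main (descending) image with the
    co-registered secondary (ascending) image, following Eq. (11)."""
    distorted = layover_mask | shadow_mask            # A_L union A_S
    return np.where(distorted, secondary_img, main_img)

# Toy example with a 3 x 3 HV backscatter patch (values in dB).
main = np.array([[-12.0, -6.0, -5.0],
                 [-11.0, -4.0, -3.0],
                 [-13.0, -14.0, -15.0]])
secondary = np.full_like(main, -10.0)
layover = np.zeros_like(main, dtype=bool); layover[0:2, 1:] = True
shadow = np.zeros_like(main, dtype=bool);  shadow[2, 1:] = True
print(compensate(main, secondary, layover, shadow))
```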

2.4.3. Feature Extraction

The features used in the current study mainly include the dual-orbit polarization characteristics (DOPC) and DpRVI. DOPC is a dual-pol feature combination comprising the backscattering coefficient (σ0), total scattering power (SPAN), differential intensity (DI), power ratio (PR), and H-α polarization decomposition parameters. These features were selected because they help to distinguish land-cover types with different surface roughness, such as water, farmland, buildings, and woodland.
The framework of feature extraction used in the current study is shown in Figure 8. After compensation, the polarimetric data of the image were converted into the covariance matrix C2; that is, each pixel in the image is represented by a covariance matrix, as shown in Formula (12):
$Z = \begin{bmatrix} S_{HH} & S_{HV} \end{bmatrix}^{T}, \qquad C_2 = \begin{bmatrix} C_{11} & C_{12} \\ C_{12}^{*} & C_{22} \end{bmatrix}$ (12)
where * stands for the complex conjugate, $C_{11}$ and $C_{22}$ are real numbers, Z is the target scattering vector composed of the scattering components $S_{HH}$ and $S_{HV}$, and H and V denote the horizontal and vertical transmit/receive polarizations.
Parameters such as eigenvalues and eigenvectors are used to represent the polarization information of ground objects in SAR images. The C2 polarization matrix in Formula (12) can also be written as:
$C_2 = \sum_{i=1}^{2} \lambda_i\, e_i e_i^{*T}$ (13)
where λ i and e i are the eigenvalues and eigenvectors of the C2 matrix, respectively.
SPAN is an effective parameter for representing the total scattering intensity of each pixel. The SPAN value of urban areas is high, while that of water areas is low. It is expressed as follows:
$SPAN = \frac{1}{2}\left(\left|S_{HH}\right|^{2} + \left|S_{HV}\right|^{2}\right)$ (14)
where $|S_{HH}|^2$ and $|S_{HV}|^2$ are the intensities of the HH and HV channels; intensities in dB (decibels) are obtained as $10\log_{10}x$ of a given intensity x.
In addition, the difference between the co-polarization and cross-polarization intensity is also reflected in DI and PR.
$DI = \frac{1}{2}\left(\left|S_{HH}\right|^{2} - \left|S_{HV}\right|^{2}\right)$ (15)
$PR = \left|S_{HV}\right|^{2}\,(\mathrm{dB}) - \left|S_{HH}\right|^{2}\,(\mathrm{dB})$ (16)
H-α polarization decomposition was applied to analyze the polarization matrix, which is mainly based on the eigenvalues and eigenvectors of C2. The physical quantities of eigenvalues and eigenvectors are defined as follows:
  • Entropy indicates the randomness of target scattering:
    $H = -\sum_{i=1}^{2} P_i \log_2 P_i, \quad 0 \le H \le 1$ (17)
    where $P_i = \dfrac{\lambda_i}{\sum_{j=1}^{2}\lambda_j}$, $i = 1, 2$, and $\lambda_i$ are the eigenvalues of the polarization matrix C2;
  • Alpha is the average scattering mechanism from surface scattering to volume scattering and then to dihedral angle scattering:
    $\alpha = P_1\alpha_1 + P_2\alpha_2$ (18)
    where α i is the internal degree of freedom of the scatterer, with a value ranging from 0° to 90°;
  • Anisotropy indicates the degree of anisotropy of target scattering:
    $A = \dfrac{\lambda_1 - \lambda_2}{\lambda_1 + \lambda_2}$ (19)
A dual polarization radar vegetation index (DpRVI) for monitoring crop conditions was previously developed based on the eigen-decomposition of the dual-pol SAR covariance matrix [38]. In the current study, we aimed to utilize the DpRVI extracted from HH-HV SAR data to enhance the distinguishability between farmland and forest. Its expression is:
$DpRVI = 1 - \mathrm{DoP} \cdot P$ (20)
where $P = \frac{\lambda_1}{\lambda_1 + \lambda_2}$ is the normalized dominant eigenvalue, $\lambda_1$ represents the even and odd scattering intensity, and $\lambda_2$ represents the multiple scattering intensity. When the target is dominated by odd or even scattering, $\lambda_1 \gg \lambda_2$. The degree of polarization (DoP) in Equation (21) is used to quantify the relative intensity of the different polarization scattering mechanisms. For dual-polarization data, its expression is:
$\mathrm{DoP} = \sqrt{1 - \dfrac{4\det\left(C_2\right)}{\left[\mathrm{Tr}\left(C_2\right)\right]^{2}}} = \dfrac{\lambda_1 - \lambda_2}{\lambda_1 + \lambda_2}$ (21)
where Tr(·) is the trace of the matrix and det(·) is its determinant. According to the definition of the degree of polarization, the relative intensities of the different scattering mechanisms are quantified. When DoP = 1, the target is considered to be in a completely polarized state dominated by a single scattering mechanism. When DoP = 0, the target is considered to be in a completely depolarized state dominated by multiple co-existing scattering mechanisms.
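To make the eigenvalue-based definitions concrete, the sketch below computes SPAN, DI, PR, H, A, DoP, and DpRVI for a single pixel from its 2 × 2 covariance matrix, following Equations (14)–(21). The toy C2 values are illustrative; in practice C2 would be estimated by multi-look averaging over a local window.

```python
import numpy as np

def dual_pol_features(C2):
    """Eigenvalue-based dual-pol features for one pixel covariance matrix C2 (2x2)."""
    # Diagonal terms of C2 are the HH and HV intensities.
    span = 0.5 * (C2[0, 0].real + C2[1, 1].real)                       # Eq. (14)
    di = 0.5 * (C2[0, 0].real - C2[1, 1].real)                         # Eq. (15)
    pr = 10 * np.log10(C2[1, 1].real) - 10 * np.log10(C2[0, 0].real)   # Eq. (16), dB
    lam = np.sort(np.linalg.eigvalsh(C2))[::-1]                        # lambda_1 >= lambda_2
    p = lam / lam.sum()
    entropy = -np.sum(p * np.log2(p))                                  # Eq. (17)
    anis = (lam[0] - lam[1]) / (lam[0] + lam[1])                       # Eq. (19)
    dop = (lam[0] - lam[1]) / (lam[0] + lam[1])                        # Eq. (21)
    beta = lam[0] / (lam[0] + lam[1])                                  # parameter P
    dprvi = 1.0 - dop * beta                                           # Eq. (20)
    return span, di, pr, entropy, anis, dop, dprvi

# Toy C2 (vegetation-like: weaker cross-pol channel, partial depolarization).
C2 = np.array([[0.20, 0.02 + 0.01j],
               [0.02 - 0.01j, 0.08]], dtype=complex)
print(dual_pol_features(C2))
```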

2.4.4. J-M Distance

After extracting the above features from the SAR images, the J-M distance was used to analyze and verify the separability of these features; it is a useful tool for evaluating the effectiveness of features. The J-M distance varies from 0 to 2. When the value is greater than 1.9, the features are well separable. When the value is between 1.8 and 1.9, the separability is moderate. When the value is less than 1, the features are inseparable. The J-M distance is defined in Equation (22):
$D_{ij} = \left\{\int_{x}\left[\sqrt{P\left(x \mid w_i\right)} - \sqrt{P\left(x \mid w_j\right)}\right]^{2} dx\right\}^{0.5}$ (22)
where $D_{ij}$ represents the separability of category i and category j, and $P(x \mid w_i)$ and $P(x \mid w_j)$ are the conditional probability densities of feature x under category i and category j, respectively.
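In practice, the J-M distance is commonly evaluated under a Gaussian assumption for each class via the Bhattacharyya distance. The sketch below follows that common formulation, which is an assumption here since the paper gives only the integral definition of Equation (22); the sample feature values are synthetic.

```python
import numpy as np

def jm_distance(x_i, x_j):
    """Jeffries-Matusita distance between two sample sets (rows = samples,
    columns = features), assuming Gaussian class distributions."""
    m_i, m_j = x_i.mean(axis=0), x_j.mean(axis=0)
    s_i = np.cov(x_i, rowvar=False)
    s_j = np.cov(x_j, rowvar=False)
    s_m = 0.5 * (s_i + s_j)
    diff = (m_i - m_j).reshape(-1, 1)
    # Bhattacharyya distance between the two Gaussians.
    b = 0.125 * float(diff.T @ np.linalg.inv(s_m) @ diff) \
        + 0.5 * np.log(np.linalg.det(s_m) / np.sqrt(np.linalg.det(s_i) * np.linalg.det(s_j)))
    return 2.0 * (1.0 - np.exp(-b))      # J-M distance, ranging from 0 to 2

# Synthetic two-feature samples (e.g., DpRVI and HV backscatter in dB).
rng = np.random.default_rng(0)
farmland = rng.normal([0.45, -14.0], [0.05, 1.0], size=(200, 2))
forest = rng.normal([0.70, -11.0], [0.05, 1.0], size=(200, 2))
print(round(jm_distance(farmland, forest), 3))
```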

2.4.5. 2D-CNN Classifier

In order to effectively exploit the optimal feature combination, an appropriate classifier was selected for image processing. As a widely used deep learning method, the 2D-CNN has been shown to be effective in feature mining and representation. The structure of the 2D-CNN is shown in Figure 9; it includes three convolution layers, three pooling layers, three batch normalization layers, and two fully connected layers. The convolution and pooling layers build the feature extractor, and the fully connected layers build the classifier.
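One possible PyTorch layout matching the described structure (three convolution + batch normalization + pooling stages followed by two fully connected layers) is sketched below. The kernel sizes, channel counts, input patch size, and number of input bands (here 8, i.e., DOPC plus DpRVI) are assumptions, as the paper does not list them.

```python
import torch
import torch.nn as nn

class LandCoverCNN(nn.Module):
    """2D-CNN with 3 conv + BN + pooling stages and 2 fully connected layers."""
    def __init__(self, in_channels=8, n_classes=4, patch=16):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 32, kernel_size=3, padding=1), nn.BatchNorm2d(32),
            nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.BatchNorm2d(64),
            nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(64, 128, kernel_size=3, padding=1), nn.BatchNorm2d(128),
            nn.ReLU(), nn.MaxPool2d(2),
        )
        flat = 128 * (patch // 8) ** 2
        self.classifier = nn.Sequential(
            nn.Flatten(), nn.Linear(flat, 64), nn.ReLU(), nn.Linear(64, n_classes),
        )

    def forward(self, x):
        return self.classifier(self.features(x))

# One batch of 16 x 16 patches with 8 input feature bands.
model = LandCoverCNN(in_channels=8, n_classes=4, patch=16)
logits = model(torch.randn(4, 8, 16, 16))
print(logits.shape)   # torch.Size([4, 4]): four classes per patch
```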
To verify the separability of the feature combinations and the effectiveness of the proposed land-cover classification method based on images from the ascending and descending orbits, three feature combinations of the dual-pol HH-HV mode of Gaofen-3 data were constructed and used as inputs. Uncompensated_DOPC_DpRVI and Compensated_DOPC were selected for comparison with the Compensated_DOPC_DpRVI feature combination of the proposed method.
  • Uncompensated_DOPC_DpRVI: The feature combination of DOPC (backscattering coefficient, SPAN, DI, PR, H, A, α ) and DpRVI based on the uncompensated HH and HV polarization images of Gaofen-3.
  • Compensated_DOPC: The feature combination of DOPC based on compensated HH and HV polarization images of Gaofen-3.
  • Compensated_DOPC_DpRVI: The feature combination of DOPC and DpRVI based on the compensated HH and HV polarization images of Gaofen-3.

2.4.6. Quantitative Analysis

In order to measure the accuracy of the classification results, quantitative and qualitative evaluations were made of the identification results. To quantitatively evaluate the accuracy of each model, parameters such as Precision, Recall, F1_Score, OA, and the Kappa coefficient were considered.
  • Precision:
    $\mathrm{Precision} = \dfrac{TP}{TP + FP}$ (23)
  • Recall:
    $\mathrm{Recall} = \dfrac{TP}{TP + FN}$ (24)
  • F1_Score:
    $F1\_Score = \dfrac{2 \times \mathrm{Precision} \times \mathrm{Recall}}{\mathrm{Precision} + \mathrm{Recall}}$ (25)
  • OA:
    $OA = \dfrac{TP + TN}{TP + TN + FP + FN}$ (26)
  • Kappa coefficient:
    $\mathrm{Kappa} = \dfrac{OA - p_e}{1 - p_e}$ (27)
    $p_e = \dfrac{(TP + FN) \times (TP + FP) + (FP + TN) \times (FN + TN)}{\left(TP + TN + FP + FN\right)^{2}}$ (28)
    where TP is a positive sample predicted by the model to be positive, TN is a negative sample predicted by the model to be negative, FP is a negative sample predicted by the model to be positive, and FN is a positive sample predicted by the model to be negative.
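For reference, a sketch of these accuracy measures computed from binary confusion-matrix counts is given below; the counts are illustrative only.

```python
def binary_metrics(tp, tn, fp, fn):
    """Precision, recall, F1, OA and Kappa from binary confusion-matrix counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    n = tp + tn + fp + fn
    oa = (tp + tn) / n
    # Expected agreement p_e of the Kappa coefficient, Eq. (28).
    pe = ((tp + fn) * (tp + fp) + (fp + tn) * (fn + tn)) / n ** 2
    kappa = (oa - pe) / (1 - pe)
    return precision, recall, f1, oa, kappa

# Illustrative counts for one class (e.g., "building" vs. "not building").
print([round(v, 3) for v in binary_metrics(tp=910, tn=2700, fp=70, fn=90)])
```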
Qualitative evaluation was applied to assess the classification results intuitively: the classification results were interpreted and verified by comparison with the optical images, and the superiority of the proposed method was verified by comparing the three feature combinations.

3. Results and Discussion

3.1. Geometric Distortion Region Analysis

Experiments were carried out in the Danjiangkou area, where the complex terrain is mostly mountainous and widespread layover and shadow problems are encountered. The preprocessed ascending- and descending-orbit Gaofen-3 images of the Danjiangkou area are shown in Figure 10. The dimensions of the study area are 4611 × 3208 pixels. The HV polarization image of the descending track was selected as the primary image, and the HV image of the ascending track was selected as the secondary image, as shown in Figure 10a,c.
Figure 10b,d shows enlargements of areas A and B, marked by the red rectangular frames in Figure 10a,c. As can be seen from Figure 10, the backscattering coefficient in the layover zones is high and the image is bright due to the superposition of signals, while the backscattering coefficient is low in the shadow areas, which appear dark due to the loss of the signal blocked by the mountains. In addition, different images of the same area were obtained by the SAR sensor on the ascending and descending trajectories, so for the same locations the information lost in one track can be collected from the cross-track image. The geometric distortion areas in the primary image are normal in the secondary image, and the information confusion in the primary image can therefore be corrected by geometric area compensation.

3.2. Geometric Distortion Area Detection

The geometric model of satellite imaging was established by combining the three-dimensional coordinates from the DEM with the orbital parameters of Gaofen-3. The geometric distortion region of the main image is shown in Figure 11. Figure 11b shows the local incidence angle. Subsequently, the geometric distortion area was determined by morphological processing according to the local incidence angle range (Figure 11c). In Figure 11c, white represents the regions of geometric distortion and black indicates the normal region. It can be observed that the geometric distortion areas are mainly distributed near the mountains, seriously affecting the accuracy of land-cover classification.

3.3. Geometric Distortion Area Compensation

The geometric distortion areas in the primary image were compensated using the secondary image. Figure 12a shows the original polarized image and Figure 12c shows the compensation result. Enlarged areas were selected to analyze the compensation effect, as shown in Figure 12b,d. Compared with the original polarized image, it can be seen intuitively that most of the geometric distortion areas in Figure 12b have been compensated in Figure 12d.

3.4. J-M Distance Analysis of Feature Combinations

Buildings, water, farmland, and woodland show different separability in different combinations of features. To evaluate the separability of DpRVI and DOPC after compensation, the J-M distances of the samples in three cases were calculated according to different combinations of features, as shown in Figure 13.
According to the J-M distance analysis of the feature combinations, the Compensated_DOPC_DpRVI combination proposed in this paper has the highest feature separability compared with Uncompensated_DOPC_DpRVI and Compensated_DOPC. The J-M distance of Compensated_DOPC_DpRVI is almost always above 1.9, which indicates that the samples are highly separable. Geometric distortion compensation and the introduction of DpRVI improve the separability of the different types of ground objects.
Generally speaking, the separability of buildings, water, and other features in Compensated_DOPC is significantly higher than that in Uncompensated_DOPC_DpRVI and Compensated_DOPC_DpRVI, while the separability of farmland and forest land in Compensated_DOPC_DpRVI is higher than that in Compensated_DOPC. In other words, the treatment of geometric distortion improves the recognition accuracy for buildings and water areas, while DpRVI is more effective for woodland and farmland.

3.5. Quantitative Analysis and Discussion

3.5.1. Quantitative Evaluation

The feature combinations were input into the model in turn, and the predicted results were compared with the actual samples to obtain the confusion matrices and the accuracy measures defined above. The accuracy results are shown in Figure 14 and Table 3.
In terms of overall accuracy, the Compensated_DOPC_DpRVI feature combination has the highest accuracy, with an OA of 0.93 and a Kappa score of 0.92. Compensated_DOPC also achieved good classification results, with an OA score of 0.89 and a Kappa score of 0.87, but its accuracy was slightly lower than that of Compensated_DOPC_DpRVI because DpRVI was not added. Uncompensated_DOPC_DpRVI introduces DpRVI but does not deal with the geometric distortion area, which led to the lowest accuracy: the OA score was 0.88 and the Kappa score was 0.86. The experiments show that OA and Kappa can be improved by about 5% through geometric distortion processing and the introduction of DpRVI.
In terms of the accuracy of the different land-cover types, the accuracy of the Compensated_DOPC_DpRVI method for all land types was higher than that of Uncompensated_DOPC_DpRVI or Compensated_DOPC. In Uncompensated_DOPC_DpRVI, the accuracy values for farmland and forest land were slightly higher than those of Compensated_DOPC, by about 7%. This indicates that DpRVI has a good effect on the identification of woodland and farmland.

3.5.2. Qualitative Evaluation

Figure 15 shows the qualitative evaluation results for the three sets of feature combinations and the optical reference images. Generally speaking, the classification results for flat areas are similar, and all the feature combinations can describe the ground objects in the study area. However, there are many geometric distortions in complex terrain areas, and the classification results for the feature combinations are quite different. Two areas in the mountainous region, A and B, were selected for further analysis and magnified for observation. Enlarged classification results are shown in Figure 16.
As shown in Figure 16b,f, due to the influence of layover and shadow, a large number of ground objects were wrongly classified as buildings or water in the classification results based on Uncompensated_DOPC_DpRVI. After the geometric distortion areas were compensated, the misclassification of water and buildings improved to some extent; the classification results based on Compensated_DOPC are shown in Figure 16c,g. However, compared with Compensated_DOPC_DpRVI, Compensated_DOPC did not introduce DpRVI, so its feature separability for different vegetation types is weak, resulting in lower classification accuracy for forest land and farmland.
In this study, the images of the opposite track were used to compensate for the geometric distortion. The compensation replaced 86.3% of the erroneous information in the geometric distortion areas; see Table 4 for details. The compensation ratio is calculated as:
$\text{Compensation Ratio} = \dfrac{\text{Compensation\_Num}}{\text{Geometric\_distortion\_Num}}$ (29)
where Compensation_Num is the number of pixels in the compensated area, and Geometric_distortion_Num is the number of pixels in the geometric distortion area.
However, due to the imaging angles of the SAR sensors, even sensors on opposite orbits cannot guarantee that all the collected information is correct. As shown in Figure 17, the image is compensated using the ascending- and descending-orbit data, but there are still a few geometric distortion areas that cannot be compensated.

4. Discussion

In this study, the classification of ground objects in complex terrain areas was studied by using ascending- and descending-orbit HH-HV dual-pol mode SAR data from the Gaofen-3 satellite. Through geometric distortion processing and effective feature combination, the classification confusion associated with complex terrain areas can be effectively reduced.
There are vast mountains and hills on the earth’s land surface, and the SAR side-looking imaging mechanism leads to serious geometric distortion in areas of complex terrain. The experimental results show that it is difficult to identify land objects from the scattered signal of the distorted areas without processing. In this study, 86.3% of the geometric distortion was compensated by the data of the opposite orbit, greatly reducing the incorrect scattering information in the geometric distortion areas.
Because of the imaging angles of the SAR sensors, even sensors on opposite orbits cannot guarantee that all the collected information is correct. In this study, Gaofen-3 dual-orbit data were used for compensation and fusion, but there are still a few geometric distortion areas that cannot be compensated. Orbit data from different viewing angles should be added to ensure the completeness of the collected information.
Furthermore, an effective feature combination is an important guarantee of classification accuracy. To enhance the separability between land-cover types, DpRVI was introduced and assessed. DpRVI has mainly been used to reflect the growth state of vegetation and for the inversion of soil moisture, while its role in land-cover classification has rarely been investigated, especially with HH-HV dual-pol mode SAR data. It was found that the introduction of DpRVI in the HH-HV dual-pol mode is effective and can significantly increase the J-M distance between woodland and farmland. The application of DpRVI deserves more attention in follow-up research into land-cover classification.
Finally, a 2D-CNN classification model was constructed. The results show that the classification accuracy was greatly improved by geometric distortion compensation and effective feature combination with DpRVI.
Existing research mainly focuses on detecting areas of terrain distortion. In view of the classification errors caused by geometric distortion, this study explores the identification of the correct types of objects in geometric distortion areas through compensation with opposite-orbit data. This research also verifies the effectiveness of DpRVI in land-cover classification, especially for farmland and forest areas.

5. Conclusions

In this study, a land-cover classification method based on Gaofen-3 ascending- and descending-orbit data is proposed, which determines the specific types of ground objects on the basis of detecting geometric distortion areas. Firstly, by establishing the geometric model of SAR imaging, the layover and shadow areas are detected. Then, the geometric distortion regions are compensated using the data of the opposite trajectory. Finally, effective feature combinations with DpRVI are extracted from the compensated image, and a 2D-CNN model is constructed to realize the classification. The effectiveness of this method is verified by qualitative and quantitative evaluation.
In the quantitative evaluation, compared with the results of the other methods, the F1 score, OA, and Kappa were all improved by about 5%, and all the evaluation indicators are above 90%. In the qualitative evaluation, two typical mountain areas were selected for visual inspection, and the classification results were very similar to the optical images. This demonstrates that the method is applicable to complex terrain.
Briefly, the land-cover classification method based on ascending- and descending-orbit data compensation in complex terrain areas can improve the separability of the polarization characteristics and the classification accuracy. In addition, the dual-polarization data required by this method are simple and easy to obtain. Compared with full polarization, land-cover classification based on dual-pol data lacks the expressive power of a vegetation index; the experiments show that the introduction of DpRVI can improve the classification accuracy based on dual-pol data. The method broadens the application scope of SAR remote sensing technology and has good application prospects in the field of land-cover classification.

Author Contributions

Conceptualization, H.W., Y.H. and N.L.; data curation, L.W. and Z.G.; investigation, H.W., Y.H. and N.L.; methodology, H.W., Y.H. and L.W.; supervision, H.W., Y.H., H.Y. and Z.G.; writing—original draft, Y.H. and L.W.; writing—review and editing, H.W., Y.H., H.Y. and N.L. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the Plan of Science and Technology of Henan Province (232102211043, 222102110439, 212102210093, 212102210101), the National Natural Science Foundation of China (42101386), the College Key Research Project of Henan Province (22A520021, 21A520004), the Key R&D Project of Science and Technology of Kaifeng City (22ZDYF006), and the Key Laboratory of Natural Resources Monitoring and Regulation in Southern Hilly Region, Ministry of Natural Resources of the People’s Republic of China (NRMSSHR2022Z01).

Data Availability Statement

All data and models presented in this study are available.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Ghazifard, A.; Akbari, E.; Shirani, K.; Safaei, H. Evaluating land subsidence by field survey and D-InSAR technique in Damaneh City, Iran. J. Arid. Land. 2017, 9, 778. [Google Scholar] [CrossRef]
  2. Bauer-Marschallinger, B.; Cao, S.; Tupas, M.E.; Roth, F.; Navacchi, C.; Melzer, T.; Freeman, V.; Wagner, W. Satellite-Based Flood Mapping through Bayesian Inference from a Sentinel-1 SAR Datacube. Remote Sens. 2022, 14, 3673. [Google Scholar] [CrossRef]
  3. Hall-Beyer, M. Practical guidelines for choosing GLCM textures to use in landscape classification tasks over a range of moderate spatial scales. Int. J. Remote Sens. 2017, 38, 1312. [Google Scholar] [CrossRef]
  4. Yu, R.; Wang, G.; Shi, T.; Zhang, W.; Lu, C.; Zhang, T. Potential of Land Cover Classification Based on GF-1 and GF-3 Data. In Proceedings of the IGARSS 2020—2020 IEEE International Geoscience and Remote Sensing Symposium, Waikoloa, HI, USA, 26 September–2 October 2020; pp. 2747–2750. [Google Scholar] [CrossRef]
  5. Shi, X.; Xu, F. Land Cover Semantic Segmentation of High-Resolution Gaofen-3 SAR Image. In Proceedings of the 2021 IEEE International Geoscience and Remote Sensing Symposium IGARSS, Brussels, Belgium, 11–16 July 2021; pp. 3049–3052. [Google Scholar] [CrossRef]
  6. Dingle, R.; Davidson, A.; McNairn, H.; Hosseini, M. Synthetic Aperture Radar (SAR) image processing for operational space-based agriculture mapping. Int. J. Remote Sens. 2020, 41, 7112. [Google Scholar] [CrossRef]
  7. Cui, Z.; Zhang, M.; Cao, Z.; Cao, C. Image Data Augmentation for SAR Sensor via Generative Adversarial Nets. IEEE Access 2019, 7, 42255. [Google Scholar] [CrossRef]
  8. Wu, L.; Wang, H.; Li, Y.; Guo, Z.; Li, N. A Novel Method for Layover Detection in Mountainous Areas with SAR Images. Remote Sens. 2021, 13, 4882. [Google Scholar] [CrossRef]
  9. Mishra, V.N.; Prasad, R.; Rai, P.K. Performance evaluation of textural features in improving land use/land cover classification accuracy of heterogeneous landscape using multi-sensor remote sensing data. Earth Sci. Inform. 2019, 12, 71. [Google Scholar] [CrossRef]
  10. Luo, S.; Tong, L. A Fast Identification Algorithm for Geometric Distorted Areas of Sar Images. IEEE Int. Geosci. Remote Sens. Symp. 2021, 7, 5111. [Google Scholar] [CrossRef]
  11. Huanxin, Z.; Cai, B.; Fan, C.; Ren, Y. Layover and shadow detection based on distributed spaceborne single-baseline InSAR. IOP Conf. Ser. Earth Environ. Sci. 2014, 17, 22–26. [Google Scholar] [CrossRef]
  12. Wang, S.; Xu, H.; Yang, B.; Luo, Y. Improved InSAR Layover and Shadow Detection using Multi-Feature. In Proceedings of the IGARSS 2020—2020 IEEE International Geoscience and Remote Sensing Symposium, Waikoloa, HI, USA, 26 September–2 October 2020; pp. 28–31. [Google Scholar] [CrossRef]
  13. Rossi, C.; Eineder, M. High-Resolution InSAR Building Layovers Detection and Exploitation. IEEE Trans. Geosci. Remote Sens. 2015, 53, 6457. [Google Scholar] [CrossRef]
  14. Gini, F.; Lombardini, F.; Montanari, M. Layover solution in multibaseline SAR interferometry. IEEE Trans. Aerosp. Electron. Syst. 2002, 38, 1344. [Google Scholar] [CrossRef]
  15. Eineder, M.; Adam, N. A maximum-likelihood estimator to simultaneously unwrap, geocode, and fuse SAR interferograms from different viewing geometries into one digital elevation model. IEEE Trans. Geosci. Remote Sens. 2005, 43, 24. [Google Scholar] [CrossRef]
  16. Wan, Z.; Shao, Y.; Xie, C.; Zhang, F. Ortho-rectification of high resolution SAR image in mountain area by DEM. Int. Conf. Geoinf. 2010, 6, 1. [Google Scholar] [CrossRef]
  17. Ren, Y.; Zou, H.X.; Qin, X.X.; Ji, K.F. A method for layover and shadow detecting in InSAR. J. Cent. South Univ. (Sci. Technol.) 2013, 44, 396. [Google Scholar]
  18. Zhang, T.T.; Yang, H.L.; Li, D.M.; Li, Y.J.; Liu, J.N. Identification of layover and shadows regions in SAR images: Taking Badong as an example. Bull. Surv. Mapp. 2019, 11, 85. [Google Scholar] [CrossRef]
  19. Mahdavi, S.; Amani, M.; Maghsoudi, Y. The Effects of Orbit Type on Synthetic Aperture RADAR (SAR) Backscatter. Remote Sens. Lett. 2019, 10, 120–128. [Google Scholar] [CrossRef]
  20. Borlaf-Mena, O.; Badea, M.; Tanase, A. Influence of the Mosaicking Algorithm on Sentinel-1 Land Cover Classification over Rough Terrain. In Proceedings of the 2021 IEEE International Geoscience and Remote Sensing Symposium (IGARSS), Brussels, Belgium, 11–16 July 2021; p. 6646. [Google Scholar] [CrossRef]
  21. Cheng, J.; Sun, G.; Zhang, A. Synergetic Use of Descending and Ascending SAR with Optical Data for Impervious Surface Mapping. In Proceedings of the 2021 IEEE International Geoscience and Remote Sensing Symposium (IGARSS), Brussels, Belgium, 11–16 July 2021; pp. 4272–4275. [Google Scholar] [CrossRef]
  22. Khan, J.; Ren, X.; Hussain, M.A.; Jan, M.Q. Monitoring Land Subsidence Using PS-InSAR Technique in Rawalpindi and Islamabad, Pakistan. Remote Sens. 2022, 14, 3722. [Google Scholar] [CrossRef]
  23. Mestre-Quereda, A.; Lopez-Sanchez, J.M.; Vicente-Guijalba, F.; Jacob, A.W.; Engdahl, M.E. Time-Series of Sentinel-1 Interferometric Coherence and Backscatter for Crop-Type Mapping. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2020, 13, 4070. [Google Scholar] [CrossRef]
  24. Zhi, F.; Dong, Z.; Guga, S.; Bao, Y.; Han, A.; Zhang, J.; Bao, Y. Rapid and Automated Mapping of Crop Type in Jilin Province UsingHistorical Crop Labels and the Google Earth Engine. Remote Sens. 2022, 14, 4028. [Google Scholar] [CrossRef]
  25. Amani, M.; Salehi, B.; Mahdavi, S.; Granger, J.E.; Brisco, B.; Hanson, A. Wetland Classification Using Multi-source and Multi-temporal Optical Remote Sensing Data in Newfoundland and Labrador, Canada. Can. J. Remote Sens. 2017, 43, 360. [Google Scholar] [CrossRef]
  26. Amarsaikhan, D.; Blotevogel, H.H.; Van Genderen, J.L.; Ganzorig, M.; Gantuya, R.; Nergui, B. Fusing High-resolution SAR and Optical Imagery for Improved Urban Land Cover Study and Classification. Int. J. Image Data Fusion. 2010, 1, 83. [Google Scholar] [CrossRef]
  27. Guo, X.; Li, K.; Wang, Z.; Li, H.; Yang, Z. Fine classification of rice by multi-temporal compact polarization SAR based on SVM+SFS strategy. Remote Sens. Land Resour. 2018, 30, 5060. [Google Scholar] [CrossRef]
  28. Sayedain, S.A.; Maghsoudi, Y.; Eini-Zinab, S. Assessing the use of cross-orbit Sentinel-1 images in land cover classification. Int. J. Remote Sens. 2020, 41, 7801. [Google Scholar] [CrossRef]
  29. Shen, G.; Fu, W. Water Body Extraction using GF-3 Polsar Data—A Case Study in Poyang Lake. In Proceedings of the IEEE International Geoscience and Remote Sensing Symposium (IGRSS), Waikoloa, HI, USA, 26 September–2 October 2020; p. 4762. [Google Scholar] [CrossRef]
  30. Li, D.; Zhang, Y. Unified huynen phenomenological decomposition of radar targets and its classification applications. IEEE Trans. Geosci. Remote Sens. 2016, 54, 723. [Google Scholar] [CrossRef]
  31. Miao, Y.; Wu, J.; Li, Z.; Yang, J. A Generalized Wavefront Curvature Corrected Polar Format Algorithm to Focus Bistatic SAR Under Complicated Flight Paths. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2020, 13, 3757. [Google Scholar] [CrossRef]
  32. Baghermanesh, S.S.; Jabari, S.; McGrath, H. Urban Flood Detection Using TerraSAR-X and SAR Simulated Reflectivity Maps. Remote Sens. 2022, 14, 6154. [Google Scholar] [CrossRef]
  33. Doulgeris, A.P. An automatic u-distribution and markov random field segmentation algorithm for PolSAR images. IEEE Trans. Geosci. Remote Sens. 2015, 53, 1819–1827. [Google Scholar] [CrossRef]
  34. Wang, X.; Zhou, C.; Feng, X.; Cheng, C.; Fu, B. Testing the Efficiency of Using High-Resolution Data From GF-1 in Land Cover Classifications. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2018, 11, 3051. [Google Scholar] [CrossRef]
  35. Tassi, A.; Vizzari, M. Object-Oriented LULC Classification in Google Earth Engine Combining SNIC, GLCM, and Machine Learning Algorithms. Remote Sens. 2020, 12, 3776. [Google Scholar] [CrossRef]
  36. Atwood, D.K.; Thirion-Lefevre, L. Polarimetric phase and implications for urban classification. IEEE Trans. Geosci. Remote Sens. 2017, 56, 1278. [Google Scholar] [CrossRef]
  37. Mandal, D.; Kumar, V.; Ratha, D. Dual polarimetric radar vegetation index for crop growth monitoring using sentinel-1 SAR data. Remote Sens. Environ. 2020, 247, 111954. [Google Scholar] [CrossRef]
  38. Bhogapurapu, N.; Dey, S.; Mandal, D.; Bhattacharya, A.; Karthikeyan, L.; McNairn, H.; Rao, Y.S. Soil moisture retrieval over croplands using dual-pol L-band GRD SAR data. Remote Sens. Environ. 2022, 271, 112900. [Google Scholar] [CrossRef]
Figure 1. Study area: (a) Location map of study area; (b) Slope angle of the study area.
Figure 2. Distribution diagram of samples.
Figure 3. SAR imaging model.
Figure 4. Geometric distortion phenomenon model: (a) layover. (b) shadow.
Figure 5. The framework of the proposed method.
Figure 6. The flowchart of layover and shadow area detection.
Figure 7. Local incident angle model.
Figure 8. Framework of feature extraction.
Figure 9. Framework of 2D-CNN.
Figure 10. Gaofen-3 images of ascending and descending orbits: (a) descending orbit HV polarized SAR image as primary image; (b) enlarged area in (a); (c) ascending orbit HV polarized SAR image as secondary image; (d) enlarged area in (c). Regions A and B are the selected regions for subsequent quantitative analysis.
Figure 11. Gaofen-3 image and detected geometric distortion area: (a) Gaofen-3 HV polarized image; (b) local incident angle; (c) detected layover and shadow area.
Figure 12. Gaofen-3 image and compensated geometric distortion area: (a) Gaofen-3 original HV polarized image; (b) enlarged area in (a); (c) compensated image of Gaofen-3 HV polarized image; (d) enlarged area in (c).
Figure 13. J-M distance based on the different feature combinations.
Figure 14. Confusion matrix of the feature combinations: (a) Compensated_DOPC; (b) Uncompensated_DOPC_DpRVI; (c) Compensated_DOPC_DpRVI.
Figure 15. Classification results: (a) optical image; (b) classification result of Uncompensated_DOPC_DpRVI; (c) classification result of Compensated_DOPC; (d) classification result of Compensated_DOPC_DpRVI.
Figure 16. Enlarged classification results of regions A and B based on different feature combinations: (a) optical image of area A; (b) classification result of Uncompensated_DOPC_DpRVI area A; (c) classification result of area A in the result diagram of Compensated_DOPC; (d) classification result of area A in the result diagram of Compensated_DOPC_DpRVI; (e) optical image of area B; (f) classification result of area B in the result diagram of Uncompensated_DOPC_DpRVI; (g) classification result of area B in the result diagram of Compensated_DOPC; (h) classification result of area B in the result diagram of Compensated_DOPC_DpRVI.
Figure 17. Original and compensated SAR images: (a) Original HV polarized Gaofen-3 image; (b) part of the enlarged area of (a); (c) compensated image of (a); (d) part of the enlarged area of (c).
Table 1. Parameters for Gaofen-3.
| Gaofen-3 Parameter | Master Image | Slave Image |
| --- | --- | --- |
| Product | SLC | SLC |
| Image mode | Fine stripe mode II | Fine stripe mode II |
| Incidence angle | 31.597256° | 31.783311° |
| Polarization | HH, HV | HH, HV |
| Pixel interval | 10 × 10 m | 10 × 10 m |
| Band | C | C |
| Pass direction | Ascending | Descending |
| Date | 9 July 2017 | 9 July 2017 |
Table 2. Parameters for track data.
| Parameter | Formula |
| --- | --- |
| XS | $X_S = \sum_{i=0}^{n} a_i t^{i}$ |
| YS | $Y_S = \sum_{i=0}^{n} b_i t^{i}$ |
| ZS | $Z_S = \sum_{i=0}^{n} c_i t^{i}$ |
| XV | $X_V = \sum_{i=1}^{n} i A_i t^{i-1}$ |
| YV | $Y_V = \sum_{i=1}^{n} i B_i t^{i-1}$ |
| ZV | $Z_V = \sum_{i=1}^{n} i C_i t^{i-1}$ |
Table 3. Quantitative evaluation of the model generated by the combinations of features.
| Feature Combination | Metric | Building | Farmland | Woodland | Water | OA | Kappa |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Uncompensated_DOPC_DpRVI | Precision | 0.83 | 0.86 | 0.84 | 0.98 | 0.88 | 0.86 |
| | Recall | 0.73 | 0.94 | 0.86 | 0.99 | | |
| | F1_Score | 0.78 | 0.90 | 0.85 | 0.99 | | |
| Compensated_DOPC | Precision | 0.90 | 0.86 | 0.76 | 0.99 | 0.89 | 0.87 |
| | Recall | 0.88 | 0.92 | 0.80 | 0.98 | | |
| | F1_Score | 0.89 | 0.88 | 0.78 | 0.99 | | |
| Compensated_DOPC_DpRVI | Precision | 0.93 | 0.89 | 0.90 | 0.99 | 0.93 | 0.92 |
| | Recall | 0.91 | 0.95 | 0.93 | 0.98 | | |
| | F1_Score | 0.92 | 0.92 | 0.91 | 0.98 | | |
Table 4. Compensation situation.
| Image | Number of Pixels | Area (km²) |
| --- | --- | --- |
| Geometric distortion region | 3,460,644 | 346.1 |
| Compensation area | 2,986,287 | 298.6 |
| Study area | 14,792,088 | 1479.2 |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
