Article

Meticulous Land Cover Classification of High-Resolution Images Based on Interval Type-2 Fuzzy Neural Network with Gaussian Regression Model

1
School of Software, Liaoning Technical University, Huludao 125105, China
2
Beijing Key Laboratory of Information Service Engineering, Beijing Union University, Beijing 100101, China
3
School of Robotics, Beijing Union University, Beijing 100027, China
4
School of Precision Instruments, Tsinghua University, Beijing 100062, China
*
Author to whom correspondence should be addressed.
Remote Sens. 2022, 14(15), 3704; https://doi.org/10.3390/rs14153704
Submission received: 23 June 2022 / Revised: 30 July 2022 / Accepted: 31 July 2022 / Published: 2 August 2022
(This article belongs to the Section Remote Sensing Image Processing)

Abstract
This paper proposes a land cover classification method that combines a Gaussian regression model (GRM) with an interval type-2 fuzzy neural network (IT2FNN) as the classification decision model. It addresses the increased complexity of ground cover, the growing heterogeneity within homogeneous regions, and the rising similarity between different regions, all of which make classification more difficult. First, the local spatial information between adjacent pixels was introduced into the Gaussian model in the image gray space to construct the GRM. Then, the GRM was used as the base model to construct the interval type-2 fuzzy membership function model and characterize the classification uncertainty caused by meticulous land cover data. Third, the upper and lower boundaries of the membership degrees of the training samples in all categories, together with the principal membership degree, were used as input to build the IT2FNN model. Finally, in the membership space, the neighborhood relationship was processed again to further overcome the classification difficulties caused by the increased complexity of spatial information and to reach a classification decision. The classical methods and the proposed method were used to conduct qualitative and quantitative experiments on synthetic and real images of coastal, suburban, urban, and agricultural areas. Compared with a method considering only one spatial neighborhood relationship and a classical classification method without a classification decision model, the accuracy of the interval type-2 fuzzy neural network Gaussian regression model (IT2FNN_GRM) improved by 1.3% and 8%, respectively, for images with relatively simple spatial information, and by 5.0% and 16%, respectively, for images with complex spatial information.
The experimental results prove that the IT2FNN_GRM method effectively suppressed the influence of regional noise in land cover classification, with a fast running speed, high generalization ability, and high classification accuracy.

1. Introduction

Complex surface features and rich land cover information are the two major characteristics of high-resolution remote sensing images [1], and accurate classification results are widely used in the national economy and social services [2], especially in land use [3], land cover mapping, and disaster prediction [4]. At present, accurate land cover mapping, especially high-resolution remote sensing mapping, faces challenges such as increased heterogeneity among the same features and increased similarity between different features [5]. High spatial resolution remote sensing images allow the type of land cover to be analyzed more accurately; however, the improvement in spatial resolution also brings new challenges to existing classification methods: on the one hand, the uncertainty between different land cover types increases, and, on the other hand, remote sensing is applied in more heterogeneous areas (such as suburbs and urban areas), which contain a large number of shadow effects, multiple materials, and color differences [6]. Prior knowledge therefore plays an important part in land cover classification. There are three main difficulties in research on land cover classification of high-resolution remote sensing images. (1) The uncertainty that a pixel belongs to a certain class increases. Complex spatial information changes the characteristic function curve of homogeneous areas, so that the histogram of the same ground object shows asymmetric, multi-peak, or single-peak irregular distribution characteristics [7]. Moreover, the fine spatial structure increases the heterogeneity of homogeneous regions and the similarity of heterogeneous regions, and the overlapping area of the pixel gray characteristic function curves of different land cover types increases [8]. (2) There is uncertainty in modeling. The improvement in image resolution increases the uncertainty of the gray level of feature pixels in a homogeneous area; a model built on these uncertain pixels is itself uncertain, which further increases the difficulty of a classification decision. (3) The amount of data increases dramatically compared to images of medium and low resolution. Under the same land cover, the number of ground objects in a remote sensing image increases sharply with the increase in spatial resolution [9].
At present, various algorithms exist for land cover classification of high-resolution remote sensing images, as shown in Table 1. Supervised algorithms are overly dependent on training samples and have an insufficient ability to deal with uncertainty. Unsupervised algorithms require too many iterations, and their segmentation accuracy cannot meet the needs of land cover mapping. Deep learning algorithms have a high training cost: parameter tuning takes considerable time, and the hardware requirements are high. Fuzzy clustering methods are among the most effective approaches for dealing with uncertainty [10,11]. Among them, the FCM method has been widely used for low- and medium-resolution images and medical images [12,13,14] but is easily affected by outliers [15]. Researchers have integrated the spatial correlation between pixels into the FCM method to achieve better performance in land cover classification of high-resolution remote sensing images [16,17]. For example, to address salt-and-pepper noise, a modified FCM method used the neighborhood effect as a regularization term [18]. A hidden Markov random field FCM (HMRF_FCM) method [19,20,21] was proposed to integrate statistical models into fuzzy structures. These modeling methods solve, to a certain extent, the uncertainty of pixel classification caused by the spatial correlation of pixels and improve classification accuracy. However, the classification accuracy of this type of method is strongly affected by the initial values. Moreover, because the distance between the cluster center and each pixel in the neighborhood window is computed repeatedly, the time cost for high-resolution remote sensing data, with their large data volume, is very high. Finally, such methods cannot effectively handle the impact of modeling uncertainty on classification results in high-resolution remote sensing images.
To effectively describe the modeling uncertainty caused by differences in local characteristic data, a method of constructing an interval type-2 fuzzy model (IT2FM) for uncertain data was proposed [22]. The IT2FM can describe the uncertainty of the pixel category and the uncertainty of the model at the same time; it describes uncertainty more accurately, provides richer information, and has a stronger ability to deal with uncertain information [23]. The main construction forms of the IT2FM include the IT2FM based on information entropy [24,25], the IT2FM based on the FCM method [26,27], the IT2FM based on a Gaussian mixture model [28], and the IT2FNN model [29,30,31]. As the Euclidean distance between the principal, upper, and lower membership degrees of the training samples and the corresponding histogram frequency values decreases, the membership degree increases. Based on this principle, Wang proposed in 2018 a neighborhood weighted average method for the interval type-2 fuzzy membership function for high-resolution remote sensing image classification. This method uses the weighted average of the three membership degrees as the new membership degree of the pixel to be classified and then combines the neighborhood relationship to build a classification decision model. However, when gray values differ greatly, the Euclidean distance grows, and classification errors still occur after the weighted calculation; considerable regional noise remains in the classification results of images with complex ground object distributions, and the improvement in classification accuracy is limited [32]. In existing land cover classification algorithms, spatial neighborhood relationships are used to incorporate the relationship between the target pixel and its neighboring pixels into the modeling.
Both a fixed 3 × 3 window spatial relationship and a weighted neighborhood relationship solve the noise problem to some extent [32,33]. To ensure the adaptability and anti-noise robustness of the proposed algorithm for different land cover distributions and noise sources, appropriate local spatial information is introduced in this paper. In addition, a 3 × 3 sliding window is used to make the proposed algorithm more applicable to various high-resolution remote sensing data. The IT2FNN has been used to implement fuzzy control of the non-linear and uncertain dynamics of the attitude angle of an unmanned aerial vehicle [34] and to study the interpolation of medical images [35], improving image quality. It has also been applied to time series decision making, demonstrating its ability to handle uncertain data and its good approximation properties [36]. Although convolutional neural networks have achieved high classification accuracy on low-resolution hyperspectral images with few training samples, they still need a large number of training samples on high-resolution images to obtain good results [37,38]. Using the upper and lower boundary information of the IT2FM and the principal membership function of a type-1 fuzzy model (T1FM) as the neurons of the input layer of the IT2FNN model effectively solves the time cost of large-scale training of the classification model [39]. Although the above IT2FNN model achieves high accuracy, the influence of the spatial correlation of the data on the classification decision is not considered when constructing the principal membership function.
Table 1. Inadequacies of different land cover classification methods.
Method Principle | Major Inadequacies
Type-1 fuzzy clustering algorithm [40,41]. | Low ability to deal with uncertainty.
Integration of statistical models into fuzzy structures [20,21]; integrated adaptive interval-valued modeling and spatial information [42]. | Vulnerable to initial values, outliers, and noise; too many iterations.
Application of an interval type-2 fuzzy neural network to deal with uncertainty [29]. | The classification result contains more regional noise, and the anti-noise ability needs further improvement.
Spatial–spectral attention fusion network using a four-branch multiscale block [37]; end-to-end framework using convolutional neural networks for pixel-level classification [38]. | Large training samples and time consumption.
Supervised maximum likelihood classification using mean vectors and covariance measures [43]. | Low universality and low robustness.
The rest of this article is organized as follows. Section 2 presents the components of the proposed method. Section 3 reports the comparative experiments and analyzes the results. Section 4 discusses the land cover classification results in comparison with the contrasting methods. Section 5 concludes the paper.

2. Materials and Methods

2.1. Type-1 Gaussian Regression Model

Most researchers have used the Gaussian model [44,45] as the T1FM of homogeneous regions. This paper constructs the GRM as the fuzzy membership function model of homogeneous regions. An image $RSI=\{rsi_i,\,i=1,2,3,\ldots,n\}$ is defined, where $i$, $n$, and $rsi_i$ represent the pixel index, the maximum number of pixels, and the gray value, respectively. The principal membership function of the T1FM is as follows:
$$W_{ij}(rsi;\mu_j,\sigma_j)=\frac{\lambda_j}{\sqrt{2\pi}\,\sigma_j}\exp\left\{-\frac{(rsi_i-\mu_j)^2+\frac{1}{\#R_i}\sum_{i'\in R_i}(rsi_{i'}-\mu_j)^2}{2\sigma_j^2}\right\} \tag{1}$$
where $W_{ij}$ represents the principal membership degree; $j$ $(j=1,2,3,\ldots,k)$ represents the category index; $\lambda_j$ represents the model coefficient of the homogeneous region of the $j$th class and satisfies the constraint $0<\lambda_j\le 1$; $\mu_j$ and $\sigma_j$ represent the mean and standard deviation of all pixel gray values in the $j$th class, respectively; $R_i$ is the set of eight neighborhood pixels in a 3 × 3 window centered on the $i$th pixel; $\#$ is the operator that counts the elements in $R_i$; and $i'$ represents the pixel index within the window. The GRM, which takes the membership degrees of neighborhood pixels as weights, considers the influence of neighborhood pixels while also emphasizing the role of the central pixel in the gray space.
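As an illustration, the GRM membership of Equation (1) can be evaluated over a whole image with a simple sliding-window sketch. This is a hypothetical NumPy implementation, not the paper's code; in particular, edge padding at the image borders is an assumption, since the paper does not state how border pixels are handled.

```python
import numpy as np

def grm_membership(img, mu, sigma, lam):
    """Type-1 GRM membership (sketch of Eq. (1)): the squared deviation of
    each pixel from the class mean is augmented by the mean squared
    deviation of its 8 neighbours in a 3x3 window."""
    H, W = img.shape
    # pad with edge values so border pixels also have 8 neighbours (assumption)
    p = np.pad(img.astype(float), 1, mode="edge")
    sq = (p - mu) ** 2
    # average of the 8 neighbour terms (centre excluded)
    neigh = np.zeros((H, W))
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            if dy == 0 and dx == 0:
                continue
            neigh += sq[1 + dy : 1 + dy + H, 1 + dx : 1 + dx + W]
    neigh /= 8.0
    centre = (img.astype(float) - mu) ** 2
    return lam / (np.sqrt(2 * np.pi) * sigma) * np.exp(-(centre + neigh) / (2 * sigma ** 2))
```

For a perfectly homogeneous image whose gray value equals the class mean, both the centre and neighbourhood terms vanish and every pixel receives the peak membership $\lambda_j/(\sqrt{2\pi}\sigma_j)$.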
In each homogeneous region, training samples $T_j=\{T_{qj},\,q=1,2,\ldots,L\}$ are selected, where $q$ and $L$ represent the pixel index and the maximum number of pixels in the training samples, respectively. The frequency set $F=\{F_{sj},\,s=0,1,\ldots,2^b;\,b=1,2,3,\ldots,16\}$ of each gray value within the gray range of the sample is calculated, where $s$ represents the pixel gray value, $b$ represents the gray level, and $F_{sj}=\frac{1}{L}\#\{T_{qj}=s\}$ represents the histogram frequency of gray value $s$ in the $j$th class. Taking $F_{sj}$ as the expected value and the membership degree $W_{sj}$ of $s$ in Equation (1) as the actual value, the model parameters are solved as follows:
$$(\lambda_j,\mu_j,\sigma_j)=\arg\min\sum_{s=0}^{2^b}\left(F_{sj}-W_{sj}\right)^2 \tag{2}$$
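Equation (2) is a least-squares fit of the model parameters to the training-sample histogram. Below is a minimal sketch using a hypothetical grid search; the spatial term of the GRM is dropped so the model reduces to a scaled Gaussian over gray levels, and the paper's actual optimization procedure is not reproduced here.

```python
import numpy as np

def fit_grm_params(gray_values, freqs, mu_grid, sigma_grid, lam_grid):
    """Grid-search least-squares fit of (lam, mu, sigma) against the
    training-sample histogram (sketch of Eq. (2), spatial term omitted)."""
    best, best_err = None, np.inf
    for mu in mu_grid:
        for sigma in sigma_grid:
            # scaled-Gaussian membership over the gray axis
            g = np.exp(-((gray_values - mu) ** 2) / (2 * sigma ** 2)) / (np.sqrt(2 * np.pi) * sigma)
            for lam in lam_grid:
                err = np.sum((freqs - lam * g) ** 2)
                if err < best_err:
                    best, best_err = (lam, mu, sigma), err
    return best
```

When the histogram is itself generated by a scaled Gaussian whose parameters lie on the grid, the fit recovers them exactly.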

2.2. Interval Type-2 Gaussian Regression Model

2.2.1. IT2FM with an Uncertain Mean Value

The IT2FM with an uncertain mean changes $\mu_j$ in Equation (1) to an interval $[\mu_j^-,\mu_j^+]$, where $\mu_j^-$ and $\mu_j^+$ represent the left and right boundaries of the interval mean of the $j$th class, respectively:
$$W_{ij}^{+}(rsi;\mu_j,\sigma_j)=\begin{cases}\dfrac{\lambda_j}{\sqrt{2\pi}\,\sigma_j}\exp\left[-\dfrac{(rsi_i-\mu_j^-)^2+\frac{1}{\#R_i}\sum_{i'\in R_i}(rsi_{i'}-\mu_j^-)^2}{2\sigma_j^2}\right]&\text{if } rsi_i<\mu_j^-\\[2mm]\dfrac{\lambda_j}{\sqrt{2\pi}\,\sigma_j}&\text{if } \mu_j^-\le rsi_i\le\mu_j^+\\[2mm]\dfrac{\lambda_j}{\sqrt{2\pi}\,\sigma_j}\exp\left[-\dfrac{(rsi_i-\mu_j^+)^2+\frac{1}{\#R_i}\sum_{i'\in R_i}(rsi_{i'}-\mu_j^+)^2}{2\sigma_j^2}\right]&\text{if } rsi_i>\mu_j^+\end{cases} \tag{3}$$
$$W_{ij}^{-}(rsi;\mu_j,\sigma_j)=\begin{cases}\dfrac{\lambda_j}{\sqrt{2\pi}\,\sigma_j}\exp\left[-\dfrac{(rsi_i-\mu_j^+)^2+\frac{1}{\#R_i}\sum_{i'\in R_i}(rsi_{i'}-\mu_j^+)^2}{2\sigma_j^2}\right]&\text{if } rsi_i\le\dfrac{\mu_j^-+\mu_j^+}{2}\\[2mm]\dfrac{\lambda_j}{\sqrt{2\pi}\,\sigma_j}\exp\left[-\dfrac{(rsi_i-\mu_j^-)^2+\frac{1}{\#R_i}\sum_{i'\in R_i}(rsi_{i'}-\mu_j^-)^2}{2\sigma_j^2}\right]&\text{if } rsi_i>\dfrac{\mu_j^-+\mu_j^+}{2}\end{cases} \tag{4}$$
where, in the case of an uncertain mean, $W_{ij}^{-}$ and $W_{ij}^{+}$ represent the lower and upper boundaries of the pixel membership degree, respectively.

2.2.2. IT2FM with an Uncertain Standard Deviation

The IT2FM with an uncertain standard deviation changes $\sigma_j$ in Equation (1) to an interval $[\sigma_j^-,\sigma_j^+]$, where $\sigma_j^-$ and $\sigma_j^+$ represent the left and right boundaries of the interval standard deviation of the $j$th class, respectively:
$$P_{ij}^{+}(rsi;\mu_j,\sigma_j)=\frac{\lambda_j}{\sqrt{2\pi}\,\sigma_j^-}\exp\left[-\frac{(rsi_i-\mu_j)^2+\frac{1}{\#R_i}\sum_{i'\in R_i}(rsi_{i'}-\mu_j)^2}{2(\sigma_j^-)^2}\right] \tag{5}$$
$$P_{ij}^{-}(rsi;\mu_j,\sigma_j)=\frac{\lambda_j}{\sqrt{2\pi}\,\sigma_j^+}\exp\left[-\frac{(rsi_i-\mu_j)^2+\frac{1}{\#R_i}\sum_{i'\in R_i}(rsi_{i'}-\mu_j)^2}{2(\sigma_j^+)^2}\right] \tag{6}$$
where, in the case of an uncertain standard deviation, $P_{ij}^{-}$ and $P_{ij}^{+}$ represent the lower and upper boundaries of the pixel membership degree, respectively.
In Equations (3)–(6), $\mu_j^-$, $\mu_j^+$, $\sigma_j^-$, and $\sigma_j^+$ are calculated according to Equations (7) and (8):
$$\mu_j^-=\mu_j-\eta_j\,\sigma_j,\qquad \mu_j^+=\mu_j+\eta_j\,\sigma_j,\qquad \eta_j\in[0,3] \tag{7}$$
$$\sigma_j^-=\sigma_j/\kappa_j,\qquad \sigma_j^+=\sigma_j\times\kappa_j,\qquad \kappa_j\in[0.3,1] \tag{8}$$
where $\eta_j$ and $\kappa_j$ are the interval adjustment factors of the $j$th mean and standard deviation, respectively. Assuming that each membership function in the interval occurs with equal probability, the probability that the Gaussian "true membership function" falls in $[\mu-3\sigma,\mu+3\sigma]$ is 99.7%. Hence, $\eta_j\in[0,3]$ and $\kappa_j\in[0.3,1]$ are taken to limit the variation of the footprint of uncertainty of the IT2FM in the X-axis and Y-axis directions, respectively.
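The interval bounds of Equations (7) and (8) and the uncertain-standard-deviation memberships of Equations (5) and (6) can be sketched as follows. This is an illustrative sketch following the equations verbatim; function names are assumptions, and the neighborhood term of the GRM is omitted for brevity.

```python
import numpy as np

def interval_bounds(mu, sigma, eta, kappa):
    """Interval mean and standard deviation from Eqs. (7)-(8)."""
    mu_lo, mu_hi = mu - eta * sigma, mu + eta * sigma
    sig_lo, sig_hi = sigma / kappa, sigma * kappa
    return (mu_lo, mu_hi), (sig_lo, sig_hi)

def uncertain_sigma_membership(x, mu, sig_lo, sig_hi, lam=1.0):
    """P+ uses sigma^- and P- uses sigma^+, following Eqs. (5)-(6) verbatim
    (neighbourhood term of the GRM omitted)."""
    p_plus = lam / (np.sqrt(2 * np.pi) * sig_lo) * np.exp(-((x - mu) ** 2) / (2 * sig_lo ** 2))
    p_minus = lam / (np.sqrt(2 * np.pi) * sig_hi) * np.exp(-((x - mu) ** 2) / (2 * sig_hi ** 2))
    return p_minus, p_plus
```

For example, with $\mu_j=100$, $\sigma_j=10$, $\eta_j=2$, and $\kappa_j=0.5$, the interval mean is $[80,120]$ and the interval standard deviation boundaries are $\sigma_j^-=20$ and $\sigma_j^+=5$.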

2.2.3. Fitting Model

In Figure 1a,b, the gray values of the training samples present a bimodal distribution around 50 and 75 and an asymmetric unimodal distribution between 40 and 60, respectively. The single-peak T1FM membership function therefore cannot accurately characterize the gray distribution; it fails to fit the two peaks and the tail. To address these problems of the T1FM, the IT2FM membership function places the above uncertainties within a bounded region, solving the modeling problems caused by the uncertain characteristics. From the right part of Figure 1a,b, it can be seen that the footprint of uncertainty of the IT2FM with an uncertain mean or standard deviation changes along the X-axis or Y-axis, respectively, as the adjustment factor changes. Therefore, the IT2FM is used to describe images whose pixel grayscale characteristics in homogeneous regions are uncertain, with small grayscale changes but large frequency differences within homogeneous regions.

2.3. Classification Decision Model

The structure of the interval type-2 fuzzy neural network is shown in Figure 2, which includes the input layer, fuzzification layer, fuzzy inference layer (membership function layer), deblurring layer, and output layer. The input vector of the input layer is the gray value of each training sample; the input layer in this model directly transmits the data to the fuzzification layer, that is, there is no weight parameter between the input layer and the fuzzification layer. A fuzzy layer can define fuzzy membership functions (Equations (3)–(6)) for each neuron node, including type-1 fuzzy membership functions and upper and lower membership functions of interval type-2 fuzzy models, and perform fuzzy operations. In this layer, the number of neuron nodes is three times the number of categories; the completed function is to model the input data and the spatial correlation of the data in the image gray space and realize the uncertain expression of the degree of membership of the input vector. The fuzzy inference layer performs fuzzy operations according to the characteristics of the input data, and the piecewise linear function is used for fuzzy inference in this paper. The deblurring layer expresses the neighborhood features of the fuzzy inference results in the membership space; the output layer outputs the decision membership degrees of the data in each category.
  1. Input layer
The input layer receives the pixels $m$ $(m=1,2,3,\ldots,L)$ of the training samples. The expected output corresponding to each training sample is the histogram frequency of the training sample in each category, $y_m=[y_{m1},y_{m2},\ldots,y_{mk}]$.
  2. Fuzzification layer
The $\mu_j^-$ and $\mu_j^+$ in Equations (3) and (4) are computed from Equation (7), and $\sigma_j^-$ and $\sigma_j^+$ in Equations (5) and (6) from Equation (8). The IT2FNN_GRM method defines each neuron node according to Equations (1), (3) and (4) or Equations (1), (5) and (6). The membership degrees of each gray value of the training samples in the lower and upper membership functions of all categories, together with the membership degree in the original membership function, form the feature vector:
$$E_m=\left(E_{m1}^{-},E_{m1},E_{m1}^{+},\ldots,E_{mk}^{-},E_{mk},E_{mk}^{+}\right) \tag{9}$$
where $E_m$ denotes the $3\times k$-dimensional feature vector formed by all the membership degrees of $m$ in the first to $k$th categories.
  3. Fuzzy inference layer (membership function layer)
The feature vectors of all gray values of the training samples are used as the output of the neuron nodes in the fuzzification layer, and the following fuzzification operation is carried out:
$$M_{mj}=f\left(e_j\cdot E_m+\omega_j\right) \tag{10}$$
where $M_{mj}$ is the membership degree of training sample $m$ in the fuzzy inference layer, which satisfies the constraint $\sum_{s=0}^{2^b}M_{mj}=1$; $e_j=[e_{1j}^{+},e_{1j},e_{1j}^{-},\ldots,e_{kj}^{+},e_{kj},e_{kj}^{-}]$ represents the weight vector of the input neuron nodes of the $j$th class; $\omega_j$ represents the offset; and $f$, the fuzzy operation function of the output node, is a piecewise linear function that meets the following conditions:
$$M_{mj}=\begin{cases}e_j\cdot E_m+\omega_j&\text{if } 0\le M_{mj}\le 1\\0&\text{if } M_{mj}<0\\\max(y_j)&\text{if } M_{mj}>1\end{cases} \tag{11}$$
  4. Deblurring layer
In the image gray space, the membership degree of pixel $m$ in the $j$th class is related not only to its own three membership degrees but also to the membership degrees of its neighboring pixels in the $j$th class. The greater the membership degree of a neighborhood pixel in the $j$th class, the higher the membership degree of the pixel itself in that class. That is, the category of a pixel is determined jointly by its own membership degree and those of its neighbors. According to this principle, a fuzzy classification decision model incorporating spatial relationships is established in the deblurring layer:
$$X_{mj}=\frac{1}{\#H_m}\sum_{m'\in H_m}M_{m'j} \tag{12}$$
where $X_{mj}$ represents the classification decision membership degree of $m$ in the $j$th class, and $H_m$ denotes the neighborhood of pixel $m$.
  5. Output layer
The classification decisions of the training samples in all categories are obtained, with the membership degree matrix:
$$A^{*}=\left[X_{mj}\right]_{L\times k} \tag{13}$$
The parameters to be trained are $\kappa_j$ or $\eta_j$, $e_j$, and $\omega_j$. The expected values are the histogram frequency values of each class of training data, and $M_{sj}$ represents the actual value. Gradient descent solves the parameters from the actual and expected values. All pixels of the image to be classified are then input into the trained model, and the classification decision is realized following the maximum-membership criterion:
$$B_i=\arg\max_j\left\{X_{ij}\right\} \tag{14}$$
where $B=\{B_1,B_2,B_3,\ldots,B_n\}$ represents the classification result. In the output layer, the category with the maximum membership degree is taken as the category of the pixel.
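Putting the layers together, the forward pass of the decision model can be sketched as follows. This is a minimal sketch with hypothetical shapes and names: the upper clamp of Equation (11) is simplified to 1 rather than max(y_j), and the gradient-descent training of $e_j$ and $\omega_j$ is omitted.

```python
import numpy as np

def it2fnn_forward(E, e, omega, neighbours):
    """One forward pass of the decision model (sketch).
    E:          (n, 3k) feature vectors (lower/principal/upper memberships, Eq. (9))
    e:          (k, 3k) weight vectors, omega: (k,) offsets  (Eqs. (10)-(11))
    neighbours: list of index arrays H_m for the deblurring layer (Eq. (12))
    Returns the class label of each pixel (Eq. (14))."""
    M = E @ e.T + omega                  # linear part of Eq. (10)
    M = np.clip(M, 0.0, 1.0)             # piecewise-linear squashing (simplified Eq. (11))
    X = np.empty_like(M)
    for m, H in enumerate(neighbours):   # neighbourhood averaging, Eq. (12)
        X[m] = M[H].mean(axis=0)
    return X.argmax(axis=1)              # maximum-membership criterion, Eq. (14)
```

With trivial neighbourhoods (each pixel its own neighbour), the decision reduces to the argmax of the inference-layer memberships; larger neighbourhoods smooth the decision, which is exactly the noise-suppression role of the deblurring layer.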

2.4. Local Neighborhood Pixel Information

Figure 3b shows the change of membership degree of the No. 5 pixel and its neighboring pixels in the fuzzy inference layer and the output layer in Figure 3a.
The yellow pixel is the No. 5 pixel, the green pixels are its neighbors, and the gray values of the neighbor pixels are similar to those of classes II and III. As shown in Figure 3b, the outputs of the fuzzy inference layer for the No. 5 pixel in classes I–III are 0.0753, 0.0221, and 0.0706, respectively; by the maximum-membership principle, the pixel would belong to class I. The output-layer results for the No. 5 pixel in classes I–III are 0.0083, 0.0805, and 0.0251, respectively; by the maximum-membership principle, the No. 5 pixel finally belongs to class II, which matches the expected classification result.
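The flip described above can be reproduced numerically. The centre memberships are the inference-layer values from Figure 3b; the neighbour memberships below are assumed values chosen only to illustrate how the deblurring layer's averaging (Equation (12)) changes the decision.

```python
import numpy as np

# Inference-layer memberships of the No. 5 pixel in classes I-III (Figure 3b)
centre = np.array([0.0753, 0.0221, 0.0706])
# Hypothetical memberships of the 8 neighbours, dominated by class II
neighbours = np.array([[0.01, 0.09, 0.03]] * 8)

assert centre.argmax() == 0           # inference layer alone picks class I
window = np.vstack([neighbours, centre])
decision = window.mean(axis=0)        # deblurring layer, Eq. (12)
assert decision.argmax() == 1         # neighbourhood averaging flips it to class II
```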

2.5. Flow of the IT2FNN_GRM Method

The detailed flow of the classification results obtained by the algorithm processing of remote sensing images is shown in Figure 4.
  • Step 1: The test images with different scales, resolutions, and multiple scenes are selected from different remote sensing satellite images.
  • Step 2: Supervised sampling is adopted for the real images and random sampling is used for the synthetic images, and different training samples’ quantities, areas, and densities are selected.
  • Step 3: The histogram frequency of each training sample in the homogeneous region is calculated. The T1FM membership function model of the homogeneous region (Equation (1)) is constructed, and the model parameters of each category are estimated by Equation (2).
  • Step 4: According to Equations (3)–(8), the IT2FM is established, and the initial parameters κ j or η j are given.
  • Step 5: The T1FM membership degree of the training samples in all categories and the upper and lower membership degrees in the IT2FM membership function are taken as the input. The IT2FNN model is established according to Equations (11) and (12). Then, the adjustment factor κ j or η j , the weight parameter e j , and the offset ω j are adaptively determined.
  • Step 6: The high-resolution remote sensing images are divided according to Equation (14).

3. Land Cover Classification Experiments

In order to verify the feasibility and effectiveness of the IT2FNN_GRM method, the FCM method, HMRF-FCM method [20], type-2 fuzzy model of Gaussian membership function (IT2FM_GM) method, interval type-2 fuzzy neural network model of Gaussian regression membership function (IT2FNN) method [29], and interval type-2 fuzzy membership function neighborhood weighted average (IT2FM_NWA) method [32] were used as comparative experiments to compare the classification performance and classification accuracy. The two unsupervised classification methods, the FCM method and the HMRF-FCM method, were run 30 times, and the overall accuracy (OA), kappa value, and average accuracy of synthetic images and real images were calculated according to the confusion matrix results. The area used for accuracy evaluation was the same for each set of experiments. The initial values of the parameters in each group of experiments were kept the same. The higher each evaluation index was, the more accurate the classification result was.
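The OA and kappa values reported throughout this section are both derived from the confusion matrix. A minimal sketch of the standard formulas (illustrative code, not from the paper):

```python
import numpy as np

def oa_and_kappa(conf):
    """Overall accuracy and kappa from a confusion matrix
    (rows: reference classes, columns: predicted classes)."""
    conf = np.asarray(conf, dtype=float)
    n = conf.sum()
    po = np.trace(conf) / n                         # observed agreement = OA
    pe = (conf.sum(0) * conf.sum(1)).sum() / n ** 2 # chance agreement
    return po, (po - pe) / (1 - pe)                 # (OA, kappa)
```

For instance, a two-class matrix [[45, 5], [10, 40]] gives OA = 0.85 and kappa = 0.7.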

3.1. Land Cover Classification for Synthetic Images

Figure 5a is a composite image composed of four types of ground objects extracted from a WorldView-2 panchromatic image of Dalian, China, with a resolution of 0.5 m. Figure 5b is a composite image composed of four types of ground objects extracted from a QuickBird panchromatic image of Panjin, China, with a resolution of 0.6 m. The two composite images contain eight common ground object categories: paddy fields, forests, cement pavements, water, roofs, suburbs, wetlands, and grasslands.
The image sizes of Figure 5a,b are both 256 × 256 pixels. Labels 1–8 represent the eight types of land cover. In the first category, the gray values of the two paddy fields differ, and the gray characteristics of the lower paddy field are similar to those of the forest. In the second category, the forest spans a large range of gray values. In the third category, cement pavement, the grayscale characteristics of many pixels differ considerably from the rest of the area. In the fourth category, the gray characteristics of floating ice are similar to those of cement pavement. The fifth category is building roofs, which mainly contain two different gray features. The sixth category includes sparse trees and housing areas. The seventh category is wetland, with a little soil in the water. The eighth category includes vegetation and animals. The experimental results obtained by applying the different classification methods are shown in Figure 5(a1–b6).
In Table 2, markers of different colors represent different types of land cover. The spatial resolutions of Figure 5a,b are 0.5 m and 0.6 m, respectively, and each image contains 65,536 ground feature pixels. A total of 7.5% of the pixels of each ground feature in the two images were randomly selected as training samples; the user accuracy (UA), producer accuracy (PA), OA, and kappa value of the training samples were calculated from the confusion matrix (Table 3 and Table 4).
Figure 5(a1,b1) converged after 43 and 59 iterations with solution parameters (6.5, 10⁻⁴) and (5.5, 10⁻⁴), respectively. Figure 5(a2,b2) converged after 33 and 47 iterations with solution parameters (0.21, 1.96, 10⁻³) and (0.27, 1.9, 10⁻³), respectively.
In order to compare the classification performance and accuracy of each classification method, all methods were tested with the same training samples. Table 3 and Table 4 show the classification accuracy of synthetic image land cover for each method. According to Table 3 and Table 4, it can be seen that, for Figure 5a, the OA value of the IT2FNN_GRM method was 30.0%, 12.5%, 16.0%, 3.8%, and 0.9% higher than that of the FCM method, HMRF-FCM method, IT2FM_GM method, IT2FNN method, and IT2FM_NWA method. For Figure 5b, the OA value was 40.3%, 16.7%, 18.8%, 1.0%, and 1.3% higher than that of the FCM method, HMRF-FCM method, IT2FM_GM method, IT2FNN method, and IT2FM_NWA method.
The FCM method did not incorporate the spatial neighborhood relationship, so its robustness was weaker than that of the HMRF-FCM method, and its OA value was the lowest. Although the OA value of the IT2FM_GM method was higher than that of the FCM method, it was lower than that of all methods incorporating spatial relationships. After the classification decision model was integrated into the IT2FNN method, its ability to deal with noise improved further: for images with simple and complex spatial information, its OA values exceeded those of the IT2FM_GM method by more than 15.0% and 12.0%, respectively. Compared with the IT2FM_GM, IT2FM_NWA, and IT2FNN methods, the kappa value of the IT2FNN_GRM method increased by at least 21.4%, 1.1%, and 1.5%, respectively. The experimental results showed that the classification effect was related to the fitted histogram model and that ground object categories that were not fitted were judged as noise.

3.2. Land Cover Classification for Real Images

3.2.1. QuickBird Satellite Images

Figure 6a–c, each 256 × 256 pixels, contains a large number of pixels with similar gray characteristics in the vegetation and asphalt roads, and many traffic sign lines and small cars on the asphalt roads have gray characteristics similar to those of bare land. The grayscale characteristics of the water area are uniform and evenly distributed, and the grayscale of the ice–water mixing area is similar to that of the water area and the vegetation on the bank. The gray values of shadows produced by buildings and trees are highly consistent with those of the water, and the grayscale characteristics of the snow and frozen soil regions are similar. Owing to differences in illumination and acquisition time, the grayscale on one side of a roof is similar to that of the road. The experimental results obtained by applying the different classification methods are shown in Figure 6(a1–c6).
In Table 5, markers of different colors represent different ground cover categories. Figure 6a–c has a spatial resolution of 0.6 m, and each image contains 65,536 ground feature pixels. Some pixels of each feature in the three images were manually selected as training samples, and the OA and kappa values of the training samples were calculated from the confusion matrix (Table 6).
Figure 6(a1–c1) converged after 56, 89, and 37 iterations with solution parameters (8.0, 10⁻⁴), (10.0, 10⁻⁴), and (5.0, 10⁻⁴), respectively. Figure 6(a2–c2) converged after 35, 63, and 26 iterations with solution parameters (0.25, 1.8, 10⁻³), (0.30, 2.0, 10⁻³), and (0.27, 1.9, 10⁻³), respectively.
For the unsupervised classification methods, the misclassified pixels were concentrated in the snow, permafrost, and building areas in Figure 6b. The gray value difference between the ice–snow and permafrost regions was very small, and the unsupervised methods divided them into the same category, which reduced the classification quality. Moreover, the ice–water mixture area was highly similar to the water features, and the grayscale features of the building area varied with the illumination. The FCM method failed to divide the ice–water mixture area and the buildings, and more than 35% of the pixels were classified incorrectly. In general, water and shadow features cannot be separated by pixel-based classification methods; thus, none of the five methods divided the shadows and water in Figure 6b.
The IT2FM_GM method makes a classification decision according only to the single Gaussian feature of an image. The fuzzy linear neural network method models the membership degrees of pixels in all categories, which contains the richest information and membership relationships. For the complex internal features of homogeneous regions, the richer the information provided, the higher the classification quality; therefore, the IT2FNN method was better than the IT2FM_GM method. The IT2FNN_GRM method had the best results for the classification of the vegetation, buildings, water areas, and asphalt pavement in Figure 6a.
To complete the coverage classification of the three real high-resolution images in Figure 6, the average time costs of the IT2FNN_GRM method were 2.26 s, 3.27 s, and 3.17 s; those of the FCM method were 37.31 s, 105.21 s, and 34.54 s; and those of the HMRF-FCM method were 5.32 s, 35.75 s, and 6.35 s, respectively. The experimental results proved that, under the premise of ensuring the classification quality, the time cost of the IT2FNN_GRM method was less affected by the number of categories, and its classification accuracy was the highest. According to Table 6, compared with the IT2FM_GM, HMRF-FCM, IT2FNN, and IT2FM_NWA methods, the OA value of the IT2FNN_GRM method improved by more than 4.0%, 3%, 1%, and 0.9%, respectively.

3.2.2. WorldView-2 Satellite Images

Figure 7a is a panchromatic remote sensing image of Sydney, Australia, from the WorldView-2 satellite, with an image size of 512 × 512 pixels. The image covers part of the city center and contains steel buildings, cement buildings, greenbelt, and ground. The ground and the cement buildings have many pixels with similar gray values, and the greenbelt has the same gray value as some ground pixels, which brings great difficulty to the coverage classification experiments.
Figure 7b is a panchromatic remote sensing image of the Rakaia River in New Zealand, captured by the WorldView-2 satellite, with an image size of 1940 × 1940 pixels. The image mainly contains farmland, water, and forest; the farmland contains a small housing area, which was not treated as a separate category during the experiment but was treated as noise. The water area includes the seawater area, with uniform gray characteristics, and the river area, in which sediment and seawater are mixed. Many pixels in the river area are similar in gray value to the farmland. The experimental results obtained by applying the different classification methods are shown in Figure 7(a1–b6). Regions 1 and 2 are partially enlarged and placed below the experimental results.
In Table 7, markers of different colors represent different types of ground cover. Figure 7a,b has a spatial resolution of 0.5 m, and the two images contain 262,144 and 3,763,600 ground feature pixels, respectively. Some pixels were manually selected as training samples for each feature category in the two images, and the OA and kappa values of the training samples were calculated from the confusion matrix (Table 8).
With the sharp increase in the number of image pixels, the number of iterations of the two unsupervised methods increased greatly. To avoid infinite loops, we set the maximum number of iterations to 100. Figure 7(a1,b1) converged after 78 and 100 iterations with the solution parameters 5.0, 10⁻⁴ and 5.5, 10⁻⁴. Figure 7(a2,b2) converged after 35 and 89 iterations with the solution parameters 0.25, 1.75, 10⁻³ and 0.37, 1.9, 10⁻³.
In Figure 7a, the gray values of some cement pavements are similar to those of green belts. The FCM method did not accurately identify the two types of ground objects and misclassified the ground as green belts, resulting in the lowest OA value, of only 77.7%. The HMRF-FCM method misclassified the greenbelt as the ground. Although both methods had a large number of misclassified pixels, the ground pixels accounted for a large proportion; so, the OA value of the HMRF-FCM method was 8.8% higher than that of the FCM method. The IT2FM_GM method successfully divided the four types of ground objects, but there was a lot of salt-and-pepper noise, and the OA value was 95.2%. The IT2FNN method turned the salt-and-pepper noise in the IT2FM_GM method into partial area noise, and the OA value was only 0.1% higher than that of the IT2FM_GM method. The OA value of the IT2FNN_GRM method was the highest among all methods, at 98.6%.
In Figure 7a, the gray characteristics of the asphalt pavement and the green belt are similar. The IT2FNN method successfully separated the green belt and the ground and also dealt with the vehicle noise; however, a small amount of salt-and-pepper noise remained in some parts, although the classification effect was better than that of the HMRF-FCM method. Among the supervised classification methods, the IT2FNN method was better than the IT2FM_GM method, while the classification result of the IT2FNN_GRM method was better than that of the IT2FNN method, which considered only one spatial adjacency relationship. In particular, the classification accuracy of the ground and the green belt in areas 1 and 2 of Figure 7a was significantly improved.
In Figure 7b, the pixels incorrectly classified by the different methods are mainly concentrated in forest areas with large gray changes and at the junction of land and grassland. Observing areas 1 and 2 in Figure 7b, the IT2FNN and IT2FNN_GRM methods can realize the division of these two kinds of land cover. The FCM and IT2FM_GM methods did not separate the forest from land with a gray value similar to that of the forest in Figure 7b, resulting in low classification accuracy. The classification accuracy of the IT2FNN_GRM method was the highest among the six classification methods, for example, for the pavement and water in Figure 7(a6,b6). The experiments proved that the time cost of the unsupervised classification methods increased sharply with the number of pixels and that the IT2FNN_GRM method had greater advantages in time and accuracy for image classification with large numbers of pixels and object categories. In Figure 7b, the OA values of all classification methods exceeded 90% because of the obvious grayscale differences between the categories, except for the river part. The gray value of the lower right corner of the water area was similar to that of the farmland, and classification errors occurred in the method based on a single Gaussian model, which had the lowest OA value of 91.7%. Compared with the FCM, HMRF-FCM, IT2FM_GM, and IT2FNN methods, the OA value of the IT2FNN_GRM method improved by 4.6%, 1.4%, 6.8%, and 0.4%, respectively.
Figure 7b contains a large number of pixels, and the river accounts for about 30% of the image. The classification results of the comparison methods contained a large number of misclassified pixels in the river area, while the IT2FNN_GRM method had the best classification effect on the river and the highest OA value, 98.5%. According to Figure 7(b1–b5), except for the river, the classification methods differed little in their classification of the ground objects. Observing the classification results for the river, it can be concluded that the IT2FNN_GRM method deals better with noise and uncertainty.
The average times required by the IT2FNN_GRM method to complete the classification of Figure 7a,b were 13.38 s and 116.52 s, respectively; the FCM method required 105.37 s and 1371.65 s, and the HMRF-FCM method required 14.85 s and 186.01 s, respectively. The time cost of the IT2FNN_GRM method was much less affected by the number of pixels than that of the unsupervised classification methods. When the image range increased, the time cost of the proposed method grew modestly (the pixel count increased by about 14 times, while the consumption time increased only from 13.38 s to 116.52 s); therefore, the method in this paper can be applied to large-scale remote sensing image coverage classification.

4. Discussion

4.1. Effect of Fitting Model and Classification Decision Model on Classification Performance

The IT2FNN modeling method links all the information provided by the interval type-2 fuzzy model in the form of a weighted sum and uses the sign of each weight to judge whether the corresponding information contributes to the objective function: a positive weight means that the membership function information is activated, a negative or zero weight means that it is inhibited, and the magnitude of the weight measures the size of its role. This modeling method greatly improves the quality of the decision-making model. For the IT2FM_NWA method, it is difficult to accurately fit the independent multimodal distribution characteristics of homogeneous regions in high-resolution remote sensing images. Following the principle that the richer the classification decision information provided, the higher the classification quality, two kinds of uncertainty were introduced into the modeling process. Taking the three membership degrees of the training samples in all categories as input and the corresponding histogram as output, a fuzzy linear neural network classification decision model was constructed. The model can accurately fit any complex characteristic curve of high-resolution remote sensing images. Compared with the IT2FM_NWA method, the proposed improved fuzzy linear neural network model is more universal and has higher classification accuracy.
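The role of the weight signs can be illustrated with a minimal linear stand-in for the fuzzy linear neural network (the dimensions and random data here are hypothetical; the actual network and its training procedure are those described earlier in the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical inputs: for each training pixel, the lower, principal, and
# upper membership degrees in each of C classes (3 * C features per pixel).
C, N = 4, 200
memberships = rng.uniform(0.0, 1.0, size=(N, 3 * C))

# Toy target: histogram frequencies generated by a hidden linear rule plus noise.
target = memberships @ rng.normal(size=3 * C) + 0.01 * rng.normal(size=N)

# A fuzzy *linear* neural network reduces to one linear layer over the
# membership features; least squares yields its weights in closed form.
w, *_ = np.linalg.lstsq(memberships, target, rcond=None)

# The sign of each weight tells whether that membership channel is
# activated (positive) or inhibited (negative/zero); the magnitude
# measures the size of its role, as described in the text.
activated = w > 0
```

The weighted-sum structure is what lets the model fit multimodal histograms that a single parametric curve (as in IT2FM_NWA) cannot.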
To verify the performance of the proposed method, its classification performance under random training samples was compared with that of the other classification methods. For the synthetic images, 30% of the training samples were randomly extracted to obtain Figure 8(a1,b1), respectively. The “*”, “+”, “☆”, and “◊” markers represent the frequency values of the pixel gray values of the training samples of the four regions, respectively. The histograms of the second and eighth categories conformed to a Gaussian distribution and were distributed symmetrically. The histogram distributions of the other six kinds of land cover were complex and did not conform to the Gaussian distribution. The gray distributions of the first, fourth, and fifth types of training samples were concentrated, and their histograms were bimodal. The gray distribution spans of the third, sixth, and seventh types of land cover were large and asymmetric.
In Figure 8, the eight categories in the fitted model all overlap to different degrees, and the misclassified pixels are mainly concentrated in the overlapping areas. Accurately fitting the histogram distribution alone could not significantly improve the quality of pixel classification in the overlapping regions. Therefore, we integrated the neighborhood relationship of pixels into the membership space of the image, which effectively solved the problem of misclassification of pixels in the overlapping areas of the fitted model and significantly improved the cover classification quality of the remote sensing images.
The histogram of the two paddy fields and the building roof in the first-class area in Figure 5a shows a bimodal distribution, which led the FCM method to divide the paddy fields and the roof into two categories. Because the gray features of the dense part of the tree crown were similar to those of the water area, the overall classification accuracy of the FCM method was greatly reduced. The HMRF_FCM method considers the correlation of spatial neighborhood pixels and overcomes the influence of noise to a certain extent; its classification results for the eight ground objects in Figure 5a were better than those of the FCM and IT2FM_GM methods. However, because this method does not consider the influence of a classification decision and cannot deal with cluster noise, such as ice floes in water, the accuracy and kappa values of its classification results were lower than those of the IT2FNN and IT2FNN_GRM methods. Moreover, because the asphalt roads carried traffic sign lines and asphalt pavement with very different gray characteristics, the HMRF_FCM method amplified the salt-and-pepper noise and produced a large area of misclassified pixels. Although the other methods incorporate a neighborhood relationship, they lack a classification decision model; they can improve the classification accuracy for high-resolution images with simple spatial information, but for images with complex spatial information, they lack the ability to deal with noise. Salt-and-pepper noise and regional noise were therefore retained in their classification results, such as the tree canopy in Figure 5a and the wetland in Figure 5b. The IT2FNN_GRM method considers the influence of the central pixel and achieved the best classification result while preserving the details of the ground features.
The pixels misclassified by the IT2FM_GM method were mainly concentrated in the second peak of the fitted curve, which corresponds to the lower right corner of the first type of paddy field area in Figure 5a. The misclassified part of the paddy field was the area whose gray values overlapped heavily with those of the forest area. Since the IT2FNN method considers the neighborhood relationship in the grayscale feature space, compared with the IT2FM_GM method, it effectively handled the salt-and-pepper noise in the forest of the second category and the grassland of the eighth category. Figure 5(a5–b5) shows the results obtained by the IT2FM_NWA method for the classification of the synthetic images. Since the IT2FM_NWA method classifies on the basis of pixel gray values, even when the neighborhood relationship is considered, the limitation of the Euclidean distance caused serious boundary blurring between categories in the classification results.
Without considering the neighborhood relationship, the classification accuracy of an unsupervised classification method is lower than that of a supervised classification method. Owing to the introduction of the neighborhood relationship in the feature space, compared with the methods that do not consider the neighborhood relationship, the IT2FM model accurately fit the histogram of each area. Therefore, the IT2FNN and IT2FNN_GRM methods correctly divided the ice in the fourth category of waters, the dark areas in the third category of cement pavement, and the sludge in the seventh category of wetlands. In addition, there were many areas with grayscale differences in the second class of forest area, so salt-and-pepper noise remained in the classification result of the IT2FNN method. Since the influence of neighborhood features was considered again in the membership degree space, the IT2FNN_GRM method effectively suppressed regional noise, especially small regional noise, and its classification effect was further improved compared with that of the IT2FNN method. Overall, the land cover types with the fewest misclassifications by the IT2FNN_GRM method were buildings, farmland, roads, and water.
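The effect of re-applying the neighborhood relationship in membership space can be sketched with a simple box filter over per-class membership maps; this stand-in (not the paper's quadratic neighborhood model) shows how an isolated salt-and-pepper outlier is flipped back to its neighborhood's class:

```python
import numpy as np

def smooth_memberships(mu, passes=1):
    """Average each class's membership map over a 3x3 neighborhood.

    mu has shape (C, H, W): per-class membership degrees for every pixel.
    Smoothing in membership space, rather than in gray space, is the idea
    illustrated here; the box filter is only an illustrative stand-in."""
    for _ in range(passes):
        padded = np.pad(mu, ((0, 0), (1, 1), (1, 1)), mode="edge")
        acc = np.zeros_like(mu)
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                acc += padded[:, 1 + dy:1 + dy + mu.shape[1],
                                 1 + dx:1 + dx + mu.shape[2]]
        mu = acc / 9.0
    return mu

# Two classes on a 5x5 patch; one noisy pixel favors the wrong class.
mu = np.zeros((2, 5, 5))
mu[0], mu[1] = 0.9, 0.1
mu[0, 2, 2], mu[1, 2, 2] = 0.1, 0.9       # salt-and-pepper outlier
labels = smooth_memberships(mu).argmax(axis=0)  # outlier reassigned to class 0
```

Because the averaging happens on membership degrees rather than gray values, a pixel whose gray value overlaps two histograms can still be resolved by the class support of its neighbors.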

4.2. Limitations and Prospects of Interval Type-2 in High-Resolution Land Cover Classification Research

As a simplified form of general type-2 fuzzy set theory, interval type-2 fuzzy theory overcomes the computational cost of general type-2 fuzzy sets while still providing three-dimensional membership function information, which allows a more accurate characterization of uncertain data. Model design also has greater degrees of freedom and a wider application space [46]. Therefore, data modeling based on interval type-2 fuzzy theory has become a frontier and hotspot of current fuzzy theory research.
Compared with the type-1 fuzzy model and the existing interval type-2 modeling methods, the proposed land cover classification method for high-resolution remote sensing imagery under the framework of interval type-2 fuzzy theory has higher classification accuracy, but some details still need to be improved and extended.
On the one hand, the frequency histogram of the training data can easily and intuitively represent the distribution and shape of the grayscale data of different pixels of the same feature, and the overall distribution can be estimated from the frequency distribution of the training samples. On the other hand, the training data can exclude the interference of other pixel noise and prevent overfitting. Although the proposed method can accurately fit the complex histogram distribution curves of high-resolution remote sensing images and improve the classification accuracy, in the quantitative evaluation of real high-resolution remote sensing images, the training data are used as the standard for evaluating the accuracy. This can reflect the relative classification quality of the different methods, but a quantitative evaluation based on local data cannot accurately reflect the classification accuracy of the entire image. Therefore, visual interpretation of the real images was mainly used in this paper, supplemented by the training sample accuracy evaluation indices. In future work, we will explore new quantitative evaluation methods that accurately reflect the accuracy over entire real images.
The method proposed in this paper considers only the membership of the category attribute and the ambiguity of the membership degree in decision making from the grayscale features of the image and does not consider other features, such as texture. Therefore, in future work, the texture features of the image can be embedded in the proposed method to further improve the model quality [47].
Another open problem is the adaptive determination of the optimal uncertainty region of the interval type-2 fuzzy model. The interval type-2 fuzzy model is an interval membership function model with upper and lower boundaries, and the interval range can be adjusted as needed. The research in this paper proves that changing the range of the uncertainty region of the interval type-2 fuzzy model changes the classification accuracy of high-resolution remote sensing images and that the optimal classification result must correspond to an optimal range of the uncertainty region. However, relevant practical and theoretical research is still lacking [48,49,50]. Therefore, the use of interval type-2 fuzzy models for modeling generic uncertainty requires further theoretical study.
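For context, the two common interval type-2 Gaussian membership function families referred to in conclusion (4), uncertain standard deviation versus uncertain mean, can be sketched from their standard definitions (an illustrative reconstruction, not the paper's exact model):

```python
import numpy as np

def gauss(x, m, s):
    return np.exp(-0.5 * ((x - m) / s) ** 2)

def it2_uncertain_std(x, m, s1, s2):
    """IT2 Gaussian MF with fixed mean and std in [s1, s2], s1 < s2:
    the narrower Gaussian is the lower bound, the wider one the upper."""
    return gauss(x, m, s1), gauss(x, m, s2)

def it2_uncertain_mean(x, m1, m2, s):
    """IT2 Gaussian MF with fixed std and mean in [m1, m2]:
    the upper bound saturates at 1 between the two means."""
    x = np.asarray(x, dtype=float)
    upper = np.where(x < m1, gauss(x, m1, s),
                     np.where(x > m2, gauss(x, m2, s), 1.0))
    lower = np.minimum(gauss(x, m1, s), gauss(x, m2, s))
    return lower, upper

x = np.linspace(-3.0, 3.0, 121)
lo_m, up_m = it2_uncertain_mean(x, -0.5, 0.5, 1.0)
lo_s, up_s = it2_uncertain_std(x, 0.0, 0.8, 1.2)
```

The area between the lower and upper curves is the footprint of uncertainty; widening [m1, m2] or [s1, s2] enlarges it, which is exactly the "range of the uncertainty region" whose optimal value remains an open question.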
The proposed method is theoretically universal. In follow-up work, the decision-making method based on the interval type-2 fuzzy theory proposed in this paper will be applied to the classification of different types of high-resolution remote sensing images (such as multispectral images, SAR images, etc.) and other fields such as feature extraction, object recognition, etc. to demonstrate its wide applicability.
Compared with traditional medium- and low-resolution remote sensing images, high resolution largely eliminates the influence of mixed pixels, making the details of the objects more abundant and the texture features more obvious. Therefore, high-resolution remote sensing images have greater potential and advantages in large-scale accurate object classification [51].

4.3. Limitations and Prospects of Land Cover Classification in High-Resolution Remote Sensing Images

Three factors can contribute to misclassification in high-resolution land cover classification: the increased uncertainty that a pixel belongs to a certain class, uncertainty in modeling, and overlapping gray value characteristic functions. First, misclassification of the housing area and the ground surface occurred in some areas in this study. Second, the limitation of classification based on gray values alone introduces some bias into the classification results: although the proposed method achieved high classification accuracy, some problems urgently need improvement; for example, although the classification decision model considered the local spatial neighborhood relationship, building shadows and water were error sources that confused the classification, and the proposed method had difficulty in successfully dividing these two categories. Third, the IT2FNN_GRM method had a good recognition effect on urban areas and suburbs, but it recognized a small area of a vehicle body as land.
With the development of satellite technology, the use of remote sensing image classification to obtain land cover is both an opportunity and a challenge. With the gradual improvement of the spatial resolution of satellite data, it is becoming easier to obtain large amounts of high-resolution Earth observation data, and more high-spatial-resolution land cover algorithms will be proposed [52,53]. How to quickly interpret remote sensing data into the required information with limited human and material resources is one of the focuses of the development of the global geographic information industry. In this paper, a fast, high-resolution remote sensing image classification algorithm is proposed. IT2FNN_GRM can accurately model the statistical distribution of complex pixel gray values and overcome the misclassification caused by the inaccurate modeling of traditional models. The proposed method can also be applied to image modeling of different types of complex statistical distribution laws.
High-resolution remote sensing image classification has important research value for urban planning, forest and wetland coverage detection, and the monitoring of land resources and disasters [54]. Existing high-resolution remote sensing image classification is mainly based on traditional unsupervised and semi-supervised classification methods and deep learning algorithms. The method in this paper achieves high classification accuracy while consuming very little time, which deep learning methods cannot match in a short time; the time complexity of the proposed method is small, and its generalization ability is stronger. In this paper, images of different resolutions, area coverages, and scales were selected for the experiments. The method can classify and map the land cover in protected areas, agricultural areas, and urban areas, which can help authorities solve problems of land cover and utilization.

5. Conclusions

This paper proposes a new IT2FNN model for the meticulous land cover classification of high-resolution images. The effectiveness of the algorithm was verified by coverage classification tests on two synthetic remote sensing images of complex ground objects and anti-noise performance tests on five real remote sensing images with noise of different sizes and intensities. The IT2FNN_GRM method takes the fast acquisition of the training model as its modeling principle, using the principal membership function of the T1FM and the upper and lower boundary information of the membership function of the IT2FM as input neurons. The gray space correlation of neighborhood pixels was introduced into a Gaussian model to effectively improve the IT2FM. Furthermore, a quadratic neighborhood relationship in the pixel membership space was introduced in the IT2FNN_GRM method, which effectively solved the problem that the complexity of the spatial neighborhood relationship of remote sensing images increases the uncertainty of pixel categories. By analyzing the classification results of seven sets of high-resolution images, the following conclusions can be drawn. (1) The GRM can better solve the classification problem of overlapping histograms between different land features. (2) The IT2FNN model can alleviate the problem of "different gray values for the same land cover". (3) The adjustment factor allows the coverage classification to be completed more quickly and accurately, making the classification results more accurate while ensuring lower time complexity. (4) When the distribution of the image histogram is relatively concentrated, an IT2FM with an uncertain standard deviation is usually selected; otherwise, an IT2FM with an uncertain mean is selected.
(5) When the histogram of a uniform area presents a continuous asymmetric single-peak, continuous multi-peak, or irregular distribution, the IT2FNN_GRM method can fit these characteristic distributions and achieves higher classification accuracy. The FCM, HMRF-FCM, IT2FM_GM, IT2FNN, and IT2FM_NWA methods were used as comparative experiments on the synthetic and real images, respectively. The experimental results prove that, compared with the above methods, the IT2FNN_GRM method not only effectively suppresses the influence of regional noise but also runs faster and has higher classification accuracy in the land cover experiments.

Author Contributions

Conceptualization, D.W. and M.K.; methodology, C.W.; software, C.W., X.W. and Z.L.; validation, C.W., X.W., D.W., M.K. and Z.L.; formal analysis, C.W.; investigation, X.W. and Z.L.; resources, M.K.; data curation, X.W.; writing—original draft preparation, X.W.; writing—review and editing, X.W., C.W. and Z.L.; visualization, D.W. and M.K.; supervision, C.W.; project administration, X.W.; funding acquisition, C.W. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China Youth Project, grant number 41801368, and Fundamental Research Youth Project of the Education Department of Liaoning Province, grant number LJKQZ2021154.

Data Availability Statement

Not applicable.

Acknowledgments

The authors thank the editors and anonymous reviewers for their valuable comments.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Benediktsson, J.A.; Chanussot, J.; Moon, W.M. Very High-Resolution Remote Sensing: Challenges and Opportunities [Point of View]. Proc. IEEE 2012, 100, 1907–1910. [Google Scholar] [CrossRef]
  2. Comber, A.; Fisher, P.; Brunsdon, C.; Khmag, A. Spatial Analysis of Remote Sensing Image Classification Accuracy. Remote Sens. Environ. 2012, 127, 237–246. [Google Scholar] [CrossRef] [Green Version]
  3. Shi, Y.; Qi, Z.; Liu, X.; Niu, N.; Zhang, H. Urban Land Use and Land Cover Classification Using Multisource Remote Sensing Images and Social Media Data. Remote Sens. 2019, 11, 2719. [Google Scholar] [CrossRef] [Green Version]
  4. Zhong, Y.; Zhu, Q.; Zhang, L. Scene Classification Based on the Multifeature Fusion Probabilistic Topic Model for High Spatial Resolution Remote Sensing Imagery. IEEE Trans. Geosci. Remote Sens. 2015, 53, 6207–6222. [Google Scholar] [CrossRef]
  5. Yang, Y.; Yang, D.; Wang, X.; Zhang, Z.; Nawaz, Z. Testing Accuracy of Land Cover Classification Algorithms in the Qilian Mountains Based on GEE Cloud Platform. Remote Sens. 2021, 13, 5064. [Google Scholar] [CrossRef]
  6. Schindler, K. An Overview and Comparison of Smooth Labeling Methods for Land-Cover Classification. IEEE Trans. Geosci. Remote Sens. 2012, 50, 4534–4545. [Google Scholar] [CrossRef]
  7. Wang, C.; Shao, F.; Zhang, Z.; Sui, Y.; Li, S. Mining the Features of Spatial Adjacency Relationships to Improve the Classification of High Resolution Remote Sensing Images Based on Complex Network. Appl. Soft Comput. 2021, 102, 107089. [Google Scholar] [CrossRef]
  8. Li, M.; Zang, S.; Zhang, B.; Li, S.; Wu, C. A Review of Remote Sensing Image Classification Techniques: The Role of Spatio-Contextual Information. Eur. J. Remote Sens. 2014, 47, 389–411. [Google Scholar] [CrossRef]
  9. Martins, V.S.; Kaleita, A.L.; Gelder, B.K.; da Silveira, H.L.F.; Abe, C.A. Exploring Multiscale Object-Based Convolutional Neural Network (Multi-OCNN) for Remote Sensing Image Classification at High Spatial Resolution. ISPRS J. Photogramm. Remote Sens. 2020, 168, 56–73. [Google Scholar] [CrossRef]
  10. Lu, D.; Weng, Q. A Survey of Image Classification Methods and Techniques for Improving Classification Performance. Int. J. Remote Sens. 2007, 28, 823–870. [Google Scholar] [CrossRef]
  11. Gorokhovatskyi, V.O.; Tvoroshenko, I.S.; Vlasenko, N.V. Using Fuzzy Clustering in Structural Methods of Image Classification. Telecommun. Radio Eng. 2020, 79, 781–791. [Google Scholar] [CrossRef]
Figure 1. (a) Grassland and its corresponding T1FM (left) and IT2FM with an uncertain mean (right); (b) desert and its corresponding T1FM (left) and IT2FM with an uncertain standard deviation (right).
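Figure 1 contrasts a type-1 membership function (T1FM) with interval type-2 membership functions (IT2FMs) built from a Gaussian with an uncertain mean or an uncertain standard deviation. As a minimal NumPy sketch (following the standard interval type-2 Gaussian formulation, not the paper's exact GRM construction; function names are illustrative), the upper and lower membership bounds can be written as:

```python
import numpy as np

def it2_gaussian_uncertain_mean(x, m1, m2, sigma):
    """IT2 Gaussian with uncertain mean m in [m1, m2] and fixed sigma.
    Returns (lower, upper) membership bounds."""
    g = lambda v, m, s: np.exp(-0.5 * ((v - m) / s) ** 2)
    # Upper MF equals 1 between the two means, otherwise the nearest Gaussian.
    upper = np.where(x < m1, g(x, m1, sigma),
                     np.where(x > m2, g(x, m2, sigma), 1.0))
    # Lower MF is the pointwise minimum of the two boundary Gaussians.
    lower = np.minimum(g(x, m1, sigma), g(x, m2, sigma))
    return lower, upper

def it2_gaussian_uncertain_sigma(x, m, s1, s2):
    """IT2 Gaussian with fixed mean and uncertain sigma in [s1, s2], s1 < s2.
    The narrow Gaussian gives the lower bound, the wide one the upper bound."""
    g = lambda v, m, s: np.exp(-0.5 * ((v - m) / s) ** 2)
    return g(x, m, s1), g(x, m, s2)
```

The band between the two curves is the footprint of uncertainty that the IT2FNN later consumes as its upper/lower membership inputs.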
Figure 2. Classification decision model.
Figure 3. (a) Image local neighborhood pixel information; (b) the change in the pixel membership degree as data are processed by the IT2FNN_GRM method; the red square is the center pixel, and the white squares are the neighboring pixels.
Figure 4. Flow chart of the IT2FNN_GRM method.
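The final stage of the flow chart reprocesses neighborhood relationships in membership space before the classification decision. A generic sketch of that idea (a plain local-window smoothing of the per-class membership maps followed by an argmax; this is an illustrative simplification, not the paper's exact decision model) looks like:

```python
import numpy as np

def neighborhood_decision(memberships, size=3):
    """memberships: (H, W, C) array of per-class membership degrees.
    Averages each class map over a size x size window (edge-padded),
    then assigns each pixel to the class with the largest smoothed value."""
    H, W, C = memberships.shape
    p = size // 2
    padded = np.pad(memberships, ((p, p), (p, p), (0, 0)), mode="edge")
    smoothed = np.zeros_like(memberships, dtype=float)
    for dy in range(size):
        for dx in range(size):
            smoothed += padded[dy:dy + H, dx:dx + W, :]
    smoothed /= size * size
    return smoothed.argmax(axis=2)
```

Because the decision is taken on smoothed memberships rather than raw ones, an isolated noisy pixel surrounded by a homogeneous region is pulled back to the region's class, which is the effect the neighborhood reprocessing is meant to achieve.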
Figure 5. (a,b) Synthetic images, (a1,b1) FCM method, (a2,b2) HMRF-FCM method, (a3,b3) IT2FM_GM method, (a4,b4) IT2FNN method (d1 = d2 = d3 = d4 = 0.35; c1 = c2 = c3 = c4 = 2.0), (a5,b5) IT2FM_NWA method (b1 = b2 = b3 = 0.45, b4 = 0.55; a1 = a2 = a4 = 2.5, a3 = 1.5), (a6,b6) IT2FNN_GRM method (κ1 = 0.35, κ2 = 0.42, κ3 = 0.54, κ4 = 0.40; η1 = 3.0, η2 = 2.5, η3 = 2.0, η4 = 2.7).
Figure 6. (a–c) QuickBird real images, (a1–c1) FCM method, (a2–c2) HMRF-FCM method, (a3–c3) IT2FM_GM method, (a4–c4) IT2FNN method (d1 = d2 = d3 = d4 = 0.30; c1 = c2 = c3 = c4 = c5 = c6 = 1.5; d1 = d2 = d3 = d4 = 0.45), (a5–c5) IT2FM_NWA method (b1 = b2 = 0.3, b3 = b4 = 0.45; a1 = a2 = 3.0, a3 = a5 = 2.5, a4 = a6 = 2.0; b1 = 0.35, b2 = b3 = 0.4, b4 = 0.3), (a6–c6) IT2FNN_GRM method (κ1 = 0.32, κ2 = 0.61, κ3 = 0.34, κ4 = 0.68; η1 = 1.5, η2 = 2.0, η3 = 2.5, η4 = 3.0, η5 = 2.7, η6 = 1.5; κ1 = 0.45, κ2 = 0.4, κ3 = 0.35, κ4 = 0.50).
Figure 7. (a,b) WorldView-2 real images, (a1,b1) FCM method, (a2,b2) HMRF-FCM method, (a3,b3) IT2FM_GM method, (a4,b4) IT2FNN method (c1 = c2 = c3 = c4 = 2.0; d1 = d2 = d3 = 0.60), (a5,b5) IT2FM_NWA method (a1 = a2 = 3.0, a3 = a4 = 2.5; b1 = b2 = 0.55, b3 = 0.45), (a6,b6) IT2FNN_GRM method (η1 = 2.7, η2 = 2.5, η3 = 2.0, η4 = 3.0; κ1 = 0.65, κ2 = 0.35, κ3 = 0.40).
Figure 8. Training data histogram, membership function, and objective function of synthetic images, (a1,b1) type-1 Gaussian model, (a2) the IT2FNN_GRM method with an uncertain standard deviation (κ1 = 0.35, κ2 = 0.42, κ3 = 0.54, κ4 = 0.40), (b2) the IT2FNN_GRM method with an uncertain mean (η1 = 3.0, η2 = 2.5, η3 = 2.0, η4 = 2.7).
Table 2. Composition of land cover categories in synthetic imagery.
| Images | Land Cover Description | Training Pixels' Percentage | Total |
|---|---|---|---|
| Figure 5(a1–a6) | Paddy fields | 7.5% | 30.0% |
| | Forests | 7.5% | |
| | Cement pavements | 7.5% | |
| | Water | 7.5% | |
| Figure 5(b1–b6) | Roofs | 7.5% | 30.0% |
| | Suburbs | 7.5% | |
| | Wetlands | 7.5% | |
| | Grasslands | 7.5% | |
Table 3. Quantitative evaluation of Figure 5a.
| Algorithms | OA (%) | Kappa | Measurement (%) | Paddy Fields | Forest | Cement Pavement | Water |
|---|---|---|---|---|---|---|---|
| FCM | 69.9 | 0.596 | PA | 73.1 | 27.4 | 99.8 | 62.3 |
| | | | UA | 63.1 | 17.7 | 98.6 | 92.9 |
| HMRF-FCM | 87.4 | 0.830 | PA | 70.2 | 95.7 | 87.5 | 94.9 |
| | | | UA | 99.9 | 58.2 | 99.7 | 86.3 |
| IT2FM_GM | 83.9 | 0.784 | PA | 78.4 | 65.6 | 99.6 | 90.5 |
| | | | UA | 91.2 | 66.0 | 89.1 | 86.2 |
| IT2FNN | 96.1 | 0.948 | PA | 92.1 | 97.3 | 95.5 | 99.8 |
| | | | UA | 99.4 | 87.9 | 97.2 | 99.8 |
| IT2FM_NWA | 99.0 | 0.987 | PA | 98.4 | 99.5 | 99.5 | 98.5 |
| | | | UA | 99.8 | 97.2 | 99.6 | 99.4 |
| IT2FNN_GRM | 99.9 | 0.998 | PA | 100.0 | 99.9 | 99.8 | 100.0 |
| | | | UA | 99.8 | 100.0 | 100.0 | 100.0 |
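The accuracy indices reported in Table 3 (overall accuracy OA, Cohen's kappa, producer's accuracy PA, and user's accuracy UA) all derive from a per-class confusion matrix. A short sketch using the standard definitions (assuming rows index the reference classes and columns the predicted classes):

```python
import numpy as np

def accuracy_metrics(cm):
    """cm: square confusion matrix, rows = reference, cols = predicted.
    Returns (OA, kappa, per-class PA, per-class UA)."""
    cm = np.asarray(cm, dtype=float)
    n = cm.sum()
    oa = np.trace(cm) / n                       # overall accuracy
    pa = np.diag(cm) / cm.sum(axis=1)           # producer's accuracy (omission)
    ua = np.diag(cm) / cm.sum(axis=0)           # user's accuracy (commission)
    # Expected agreement by chance, then Cohen's kappa.
    pe = (cm.sum(axis=1) * cm.sum(axis=0)).sum() / n ** 2
    kappa = (oa - pe) / (1 - pe)
    return oa, kappa, pa, ua
```

Because kappa discounts chance agreement, it can stay low even when OA looks high for imbalanced class sizes, which is why the tables report both.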
Table 4. Quantitative evaluation of Figure 5b.
| Algorithms | OA (%) | Kappa | Measurement (%) | Roofs | Suburbs | Wetlands | Grasslands |
|---|---|---|---|---|---|---|---|
| FCM | 59.6 | 0.468 | PA | 95.2 | 45.5 | 68.7 | 0.61 |
| | | | UA | 68.5 | 84.9 | 92.6 | 0.24 |
| HMRF-FCM | 83.2 | 0.770 | PA | 97.0 | 74.7 | 99.4 | 65.8 |
| | | | UA | 99.9 | 71.2 | 73.3 | 80.6 |
| IT2FM_GM | 81.1 | 0.745 | PA | 97.0 | 62.5 | 92.5 | 67.3 |
| | | | UA | 99.5 | 64.4 | 84.7 | 67.9 |
| IT2FNN | 98.9 | 0.985 | PA | 99.1 | 99.0 | 97.7 | 99.6 |
| | | | UA | 99.8 | 98.8 | 99.1 | 97.8 |
| IT2FM_NWA | 98.6 | 0.981 | PA | 99.7 | 97.9 | 98.2 | 99.1 |
| | | | UA | 99.6 | 98.8 | 98.4 | 97.6 |
| IT2FNN_GRM | 99.9 | 0.998 | PA | 99.9 | 99.8 | 99.9 | 99.9 |
| | | | UA | 99.9 | 99.9 | 99.9 | 99.8 |
Table 5. Composition of land cover categories in QuickBird real images.
| Images | Land Cover Description | Training Pixels' Percentage | Total |
|---|---|---|---|
| Figure 6(a1–a6) | Paddy fields | 7.0% | 20.0% |
| | Forest | 3.0% | |
| | Cement floor | 4.0% | |
| | Water | 6.0% | |
| Figure 6(b1–b6) | Roofs | 1.0% | 20.0% |
| | Suburbs | 3.0% | |
| | Wetlands | 4.0% | |
| | Grasslands | 3.0% | |
| | Snow | 4.0% | |
| | Ice–water mixture | 5.0% | |
| Figure 6(c1–c6) | Buildings | 7.0% | 20.0% |
| | Roads | 3.0% | |
| | Grasslands | 6.0% | |
| | Shadows | 4.0% | |
Table 6. Quantitative evaluation of QuickBird images.
| Images | Precision Index | FCM | HMRF-FCM | IT2FM_GM | IT2FNN | IT2FM_NWA | IT2FNN_GRM |
|---|---|---|---|---|---|---|---|
| Figure 6a | OA (%) | 92.2 | 93.6 | 91.9 | 97.8 | 96.7 | 98.9 |
| | Kappa | 0.879 | 0.905 | 0.867 | 0.963 | 0.945 | 0.978 |
| | Time (s) | 37.31 | 5.32 | 1.56 | 2.15 | 1.84 | 2.26 |
| Figure 6b | OA (%) | 73.5 | 86.9 | 96.1 | 97.6 | 98.4 | 99.2 |
| | Kappa | 0.634 | 0.822 | 0.944 | 0.967 | 0.975 | 0.987 |
| | Time (s) | 105.21 | 35.75 | 2.45 | 3.19 | 2.85 | 3.27 |
| Figure 6c | OA (%) | 79.4 | 95.4 | 83.9 | 98.5 | 98.9 | 99.5 |
| | Kappa | 0.707 | 0.921 | 0.755 | 0.974 | 0.982 | 0.991 |
| | Time (s) | 34.54 | 6.35 | 1.78 | 2.91 | 2.05 | 3.17 |
Table 7. Composition of land cover categories in WorldView-2 real images.
| Images | Land Cover Description | Training Pixels' Percentage | Total |
|---|---|---|---|
| Figure 7(a1–a6) | Steel buildings | 8.0% | 25.0% |
| | Cement buildings | 4.0% | |
| | Greenbelt | 6.0% | |
| | Ground | 7.0% | |
| Figure 7(b1–b6) | Farmland | 10.0% | 30.0% |
| | Water | 11.0% | |
| | Forest | 9.0% | |
Table 8. Quantitative evaluation of WorldView-2 images.
| Images | Precision Index | FCM | HMRF-FCM | IT2FM_GM | IT2FNN | IT2FM_NWA | IT2FNN_GRM |
|---|---|---|---|---|---|---|---|
| Figure 7a | OA (%) | 77.7 | 86.5 | 95.2 | 95.3 | 97.6 | 98.6 |
| | Kappa | 0.692 | 0.806 | 0.931 | 0.933 | 0.964 | 0.976 |
| | Time (s) | 105.37 | 14.85 | 10.21 | 12.21 | 10.98 | 13.38 |
| Figure 7b | OA (%) | 93.9 | 97.1 | 91.7 | 98.1 | 97.5 | 98.5 |
| | Kappa | 0.887 | 0.945 | 0.850 | 0.963 | 0.954 | 0.971 |
| | Time (s) | 1371.65 | 186.01 | 104.23 | 112.24 | 106.84 | 116.52 |
Wang, C.; Wang, X.; Wu, D.; Kuang, M.; Li, Z. Meticulous Land Cover Classification of High-Resolution Images Based on Interval Type-2 Fuzzy Neural Network with Gaussian Regression Model. Remote Sens. 2022, 14, 3704. https://doi.org/10.3390/rs14153704