
Forest Change Monitoring Based on Block Instance Sampling and Homomorphic Hypothesis Margin Evaluation

1 Department of Remote Sensing Science and Technology, School of Electronic Engineering, Xidian University, Xi’an 710071, China
2 Xi’an Key Laboratory of Advanced Remote Sensing, Xi’an 710071, China
3 Key Laboratory of Collaborative Intelligence Systems, Ministry of Education, Xidian University, Xi’an 710071, China
4 Shaanxi Academy of Forestry, Xi’an 710003, China
5 Laboratory of Information Processing and Transmission (L2TI), Institut Galilée, University Paris XIII, 93430 Villetaneuse, France
6 Academy of Advanced Interdisciplinary Research, Xidian University, Xi’an 710071, China
* Author to whom correspondence should be addressed.
Remote Sens. 2024, 16(18), 3483; https://doi.org/10.3390/rs16183483
Submission received: 10 July 2024 / Revised: 7 September 2024 / Accepted: 15 September 2024 / Published: 19 September 2024
(This article belongs to the Section Forest Remote Sensing)

Abstract

Forests play a crucial role in maintaining the integrity of natural ecosystems. Accurate mapping of windfall damage following storms is essential for effective post-disaster management. While remote sensing image classification offers substantial advantages over ground surveys for monitoring changes in forests, it encounters several challenges. Firstly, training samples in classification algorithms are typically selected through pixel-based random sampling or manual regional sampling. This approach struggles with accurately modeling complex patterns in high-resolution images and often results in redundant samples. Secondly, the limited availability of labeled samples compromises the classification accuracy when they are divided into training and test sets. To address these issues, two innovative approaches are proposed in this paper. The first is a new sample selection method which combines block-based sampling with spatial features extracted by single or multiple windows. Second, a new evaluation criterion is proposed by using the homomorphic hypothesis margin map with out-of-bag (OOB) accuracy. The former can not only assess the confidence level of each pixel category but also make regional boundaries clearer, and the latter can replace the test set so that all samples can be used for change detection. The experimental results show that the OOB accuracy obtained by spatial features with whole block sampling was 7.2% higher than that obtained by spectral features with pixel-based sampling and 2–3% higher than that for block center sampling, with the highest value reaching 98.8%. Additionally, the feasibility of identifying storm-damaged forests using only post-storm images has been demonstrated.

1. Introduction

Forests serve multiple critical functions, including soil protection against wind and precipitation, regulation of air and water cycles, and mitigation of environmental pollution impacts on human health. The destruction of forests leads to several detrimental ecological consequences: enhanced runoff and erosion, diminished biodiversity, and contributions to global warming [1]. Consequently, prompt monitoring of forest changes and implementing appropriate management strategies are vital for ecosystem conservation.
In the context of climate change, storms have become more frequent. Hurricanes can significantly disrupt forest ecosystems, impacting their composition, structure, and ecological succession [2,3]. The Food and Agriculture Organization reports that from 2015 to 2020, approximately 10 million hectares of forest were lost annually [4]. Therefore, mapping the distribution of unexpected forest loss and estimating the affected area is of great significance for post-disaster management [5]. Ground investigations are difficult due to the remoteness of disaster-affected areas and the obstacles posed by fallen trees; they also demand considerable time and manpower, making them subject to many practical limitations. In contrast, remote sensing technology, which has been widely used in oceanographic, terrestrial, and atmospheric research [6], offers a swift, cost-effective, and efficient alternative for monitoring extensive and inaccessible areas. For instance, L. Lei et al. introduced a nonlocal patch similarity-based method which constructs graphs for image patches, enabling effective change detection between heterogeneous remote sensing images by measuring the similarity of graph structures [7]. Y. Sun et al. developed a method which incorporates both similarity and dissimilarity relationships into a multimodal change detection framework, enhancing accuracy by leveraging both low-frequency and high-frequency information in the image regression process [8]. These advanced methodologies demonstrate the potential of remote sensing technology in overcoming the challenges of ground-based investigations, especially in disaster-affected areas.
A wide range of methods for monitoring forest change using remote sensing image classification have been designed [9,10]. Among them, classification based on spectral information is the traditional approach, in which the initial number of spectral features corresponds to the number of bands in the remote sensing image. For instance, Zarco-Tejada et al. utilized high-resolution hyperspectral and Sentinel-2a imagery to detect forest decay [11]. Bar et al. used medium-resolution optical satellite data to identify pre-monsoon forest fire patches in the Himalayan region of Uttarakhand [12]. Additionally, White et al. assessed the effectiveness of a restoration spectral index derived from the Normalized Burn Ratio (NBR) as a measure of forest vegetation recovery post harvesting (clearcutting) [13]. However, because spectral features are often similar across different objects, relying solely on pixel-derived spectral features frequently results in reduced classification accuracy.
Recent studies have expanded beyond spectral information to include texture features, which are statistical representations of the spatial distribution of pixel intensities within an image and have proven effective in enhancing image classification where spectral data alone are insufficient [14]. These texture features are particularly valuable in land cover and forestry mapping, enhancing both classification accuracy and reliability [15]. They offer substantial improvements in classifying forest disturbances and mapping hurricane impacts on forests. For example, Beguet et al. identified a strong correlation between forest spatial structures and textures extracted from high-resolution imagery at the stand level, underscoring the utility of texture analysis in forest inventory applications [16]. Jiang et al. recognized forest fire smog by calculating the texture features of images. The recognition accuracy of this method reached 98%, with robustness in their smog image database [17]. J. Balling et al. combined the Gray-Level Co-Occurrence Matrix (GLCM) textural features with traditional SAR backscatter data, which significantly reduced omission errors and improved the timeliness of detecting tropical forest disturbances [18]. Additionally, edge refinement and multi-feature extraction techniques, such as those used in the ERMF model, have been shown to enhance change detection accuracy by capturing crucial edge information in remote sensing imagery [19]. Furthermore, texture-based approaches in few-shot semantic segmentation have demonstrated the ability to generalize forest cover identification across different geographical regions, thereby improving the adaptability and accuracy of global forest monitoring systems [20]. That aside, Y. Sun et al. proposed a structure consistency-based method which detects changes by comparing the structures of two images rather than their pixel values, making it highly robust against noise and other interference factors [7].
The research methods mentioned above all use pixel-based sampling, which randomly selects individual pixels, regardless of whether spectral or texture features are used. This sampling approach has roughly two limitations. On the one hand, the samples obtained in this way lack independence and cannot solve the problem of data redundancy. On the other hand, relying on individual pixels fails to effectively represent the complex patterns of built-up areas in high-resolution imagery [21]. Moreover, the feature extraction techniques previously employed typically utilize a single window, neglecting the variable scales of different objects. This approach invariably results in a compromise between window size and classification accuracy [22]. Specifically, while larger windows might capture more relevant patterns, they also increase the risk of edge effects overpowering the classification outcomes [23]. Therefore, enhancing sample representativeness through improved texture feature extraction is an essential strategy for boosting the accuracy of image classification.
Overall, detecting forest changes poses several significant challenges. First, the inherent complexity and variability of forest ecosystems make it difficult to accurately identify changes using traditional methods. Second, the similarity of spectral features across different objects within the forest environment often results in reduced classification accuracy, complicating the detection of specific types of changes. Finally, traditional sampling methods used in remote sensing often lead to data redundancy and fail to capture the complex spatial patterns within high-resolution images, which are crucial for accurately mapping forest changes. To overcome these challenges, a new methodology is proposed in this paper, integrating advanced sampling techniques with enhanced texture feature extraction to improve the accuracy of forest change detection.
In the realm of image classification, machine learning techniques have become indispensable. During the last decade, several classifiers have been extended for the supervised classification of remote sensing images [6]. Supervised classification is an important tool in remote sensing image analysis, where pixels are assigned to one of the available classes according to a set of given training pixels. In recent years, the focus has shifted toward employing multiple classifier systems or classifier ensembles, which are known for enhancing accuracy [24,25]. Such ensembles leverage the collective strength of various classifiers to achieve superior accuracy compared with single-classifier systems [26]. Among these, the random forest (RF) method, an ensemble of decision trees built with the Classification and Regression Trees (CART) algorithm, is frequently utilized in remote sensing classification [27,28]. A hybrid CNN-RF approach effectively identified burned areas with 97% accuracy by combining optical and SAR data, demonstrating the utility of SAR in diverse weather conditions [29]. Similarly, in flood-prone regions of Bangladesh, RF classifiers enhanced land use mapping and flood damage assessment, achieving 90% accuracy [30]. Given its rapid training and minimal parameterization, RF was also chosen for forest disaster monitoring in this study. In this paper, the optimal feature structure for detecting wind-induced forest change is studied, and the classification of damaged and undamaged forest in the Nezer forest area is realized. Different feature structures are used as inputs to an RF classifier.
In RFs, diversity among classifiers is achieved through the decision tree, which acts as the base classifier. This diversity is generated by randomly sampling both the training and feature sets multiple times. The outputs of the decision trees are then aggregated by majority voting. An added advantage of decision tree training is the automatic derivation of feature importance, requiring no additional computational effort. However, when labeled samples are limited, forcing a division into training and test sets according to the traditional machine learning workflow results in poor classification accuracy. Therefore, finding an effective evaluation method for small-sample settings such as the one in this paper, so that all samples can be used for training while accuracy improves, is an important research direction in image classification.
The contribution of this paper has two parts. First, a novel sample selection method is proposed which combines block-based sampling with spatial features computed, using single or multiple windows to address the issues of sample redundancy and insufficient representation of complex patterns. Second, a new evaluation criterion is introduced through the use of a homomorphic hypothesis margin map, which can assess the confidence level of each pixel category and enhance regional boundary clarity when combined with out-of-bag (OOB) accuracy. This approach can replace the traditional test set, enabling the use of all samples for change detection and binary mapping. Experimental results demonstrate the effectiveness of the proposed methods.

2. Materials and Methods

This study was conducted in the Nezer Forest, encompassing an area of approximately 60 km2 located adjacent to the extensive European maritime pine (Pinus pinaster Ait.) forests along the Atlantic coast in southwestern France (Figure 1). Bitemporal Formosat-2 imagery captured before and after the windstorm Klaus, on 22 December 2008 and 4 February 2009, respectively, was utilized (Figure 2). The area is a mix of agricultural and forested lands. After the storm, significant damage to the forest was visible, with marked differences in texture and reflectance, particularly in the NIR band, highlighting the degradation in forest density and structure. The images feature a spatial resolution of 8 m across four spectral bands: red (R), green (G), blue (B), and near infrared (NIR). These images were processed to convert the image irradiance into Top-of-Atmosphere (TOA) reflectance and subsequently rescaled to a range between 0 and 255. Comprehensive orthorectification and georeferencing were applied to both datasets.

3. Methodology

The methodology of this study is depicted in Figure 3. Initially, a multispectral image was segmented into separate images, each representing a distinct wavelength. These images were then subjected to feature extraction, where one or two scanning windows were employed to compute five spatial feature values per window. Subsequently, training samples were selected using either the block center sampling or whole block sampling technique. These samples, along with the extracted feature values, were input into a random forest (RF) ensemble classifier to categorize the forest areas into damaged and undamaged classes. The output included a hypothesis margin map of the study area, which was used to assess the confidence level of each pixel’s classification. Furthermore, classification accuracy was evaluated using out-of-bag (OOB) accuracy and the binary map method.

3.1. Spatial Features Extracted

This section details the feature extraction process for the training samples, illustrated in Figure 4. The features extracted for the purpose of change detection were classified into two primary categories—spectral and spatial features—as outlined in Table 1.
Among these, the spectral features of a selected sample were the pixel values of the multispectral image, and the spatial features were calculated from the pixels covered by a square window of a specific size centered on the sample. Four kinds of windows are used in this paper, with sizes of 3 × 3, 4 × 4, 5 × 5, and a combination of 3 × 3 and 4 × 4. For convenience, this article only details the last configuration, with two square windows sharing the same center. The calculation methods for the other cases are similar to, or simpler than, the one described in this section.

3.1.1. Composite Window Scanning Technique

The methodology employs a composite window comprising two concentric windows centered at the same point (Figure 5).
The dimensions of the larger window are $(2q) \times (2q)$, while the smaller window measures $(2p+1) \times (2p+1)$, with $q = 2$ and $p = 1$ for this discussion. Multi-spectral images of size $M \times N$ are processed using these windows in a raster scanning sequence with a stride of one pixel. Samples located at the center of a window which extends beyond the image boundary are excluded, as depicted by the blue dashed line in the figure. The composite windows are jointly moved across the image, resulting in $K = \#\mathcal{K}$ potential positions for processing, where $\mathcal{K}$ represents the set of all feasible indices:

$K = (M - 2q + 1)(N - 2q + 1) \quad \text{and} \quad \mathcal{K} = \{1, \ldots, K\}$ (1)

The coordinates of their common centers are expressed as

$m_k = q - 1 + \left\lfloor \frac{k-1}{N-2q+1} \right\rfloor, \qquad n_k = q - 1 + k - (N - 2q + 1) \left\lfloor \frac{k-1}{N-2q+1} \right\rfloor$ (2)

where $k \in \mathcal{K}$ and $\lfloor \cdot \rfloor$ denotes the floor function, indicating the greatest integer less than or equal to the given value. The index sets of the pixels covered by the smaller and larger windows are

$\mathcal{V} = \{(i, j) \mid 1 \le i \le 2p+1 \text{ and } 1 \le j \le 2p+1\}, \qquad \mathcal{W} = \{(i, j) \mid 1 \le i \le 2q \text{ and } 1 \le j \le 2q\}$ (3)

The values extracted from the $k$th composite window are

$A(i, j, d, k) = I(m_k + i - p - 1,\ n_k + j - p - 1,\ d) \text{ for } (i, j) \in \mathcal{V}, \qquad B(i, j, d, k) = I(m_k + i - q + 1,\ n_k + j - q + 1,\ d) \text{ for } (i, j) \in \mathcal{W}$ (4)
In the notation $I(m, n, d)$, the term represents the pixel value at location $(m, n)$ for the wavelength index d. Figure 4 illustrates a flowchart of the feature extraction process, which was specifically adapted for the Nezer dataset. This dataset comprises a multispectral image with four wavelengths. Consequently, four corresponding sets of values can be extracted, denoted as $I(m, n, 1), \ldots, I(m, n, 4)$. The image processing involves $2 \times D \times K$ windows, where two windows are applied per captured wavelength at each of the K positions and D represents the number of spectral bands in the multispectral image. As feature extraction occurs within these moving windows, the actual training samples should be considered as sets of pixels encompassed by each window rather than individual pixels. To enhance clarity, training samples which include common pixels will be described in subsequent sections as feature pixels instead of pixel sets.
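To make the indexing above concrete, the following is a minimal NumPy sketch of the raster-scan extraction of the patch pairs A and B. It is an illustrative reconstruction rather than the authors' code: it translates the 1-indexed formulas into 0-indexed array slicing, and the function name and parameters are chosen for this example only.

import numpy as np

def composite_window_patches(image, p=1, q=2):
    """Yield (k, A, B): the (2p+1)x(2p+1) and (2q)x(2q) patches for every
    valid composite-window position, scanned in raster order with stride 1.
    image: ndarray of shape (M, N, D). Positions whose large window would
    extend beyond the image boundary are skipped, as in Figure 5.
    """
    M, N, D = image.shape
    n_rows, n_cols = M - 2 * q + 1, N - 2 * q + 1   # K = n_rows * n_cols
    for k in range(n_rows * n_cols):
        r, c = divmod(k, n_cols)           # row and column index of position k
        m, n = r + q - 1, c + q - 1        # shared centre, cf. Equation (2)
        A = image[m - p : m + p + 1, n - p : n + p + 1, :]           # small window
        B = image[m - q + 1 : m + q + 1, n - q + 1 : n + q + 1, :]   # large window
        yield k, A, B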

3.1.2. Spatial Features Extracted

Figure 4 delineates a flowchart illustrating the extraction of five statistical features—the median, mean, variance, kurtosis, and skewness—from each of the $2 \times D$ windows at each of the K indexed locations. These features are formulated as follows, with calculations confined to the pixel values within the specified window. Features extracted from the smaller window are denoted as $f_{r,d,k}$, and those from the larger window as $g_{r,d,k}$, where r represents the feature index, which ranges from 1 to 5:
  • The median values are denoted as $f_{1,d,k}$ and $g_{1,d,k}$ and are calculated by sorting the pixel values within each window and selecting the middle value.
  • The means of the pixel values within the respective windows are represented by $f_{2,d,k}$ and $g_{2,d,k}$ and are calculated by summing all pixel values and dividing by the total number of pixels in the window:

    $f_{2,d,k} = \frac{1}{(2p+1)^2} \sum_{(i,j) \in \mathcal{V}} A(i, j, d, k), \qquad g_{2,d,k} = \frac{1}{(2q)^2} \sum_{(i,j) \in \mathcal{W}} B(i, j, d, k)$ (5)
  • The variances are denoted as $f_{3,d,k}$ and $g_{3,d,k}$ and are determined by averaging the squared deviations of the pixel values from their mean:

    $f_{3,d,k} = \frac{1}{(2p+1)^2} \sum_{(i,j) \in \mathcal{V}} \left( A(i, j, d, k) - f_{2,d,k} \right)^2, \qquad g_{3,d,k} = \frac{1}{(2q)^2} \sum_{(i,j) \in \mathcal{W}} \left( B(i, j, d, k) - g_{2,d,k} \right)^2$ (6)
  • Kurtosis is represented by $f_{4,d,k}$ and $g_{4,d,k}$, which reflects the sharpness of the distribution’s peak and is calculated as the mean of the ratio of the pixel values’ fourth-order central moment to the square of the variance:

    $f_{4,d,k} = \frac{1}{(2p+1)^2} \sum_{(i,j) \in \mathcal{V}} \frac{\left( A(i, j, d, k) - f_{2,d,k} \right)^4}{f_{3,d,k}^2}, \qquad g_{4,d,k} = \frac{1}{(2q)^2} \sum_{(i,j) \in \mathcal{W}} \frac{\left( B(i, j, d, k) - g_{2,d,k} \right)^4}{g_{3,d,k}^2}$ (7)
  • The estimates of the skewness are $f_{5,d,k}$ and $g_{5,d,k}$:

    $f_{5,d,k} = \frac{1}{(2p+1)^2\, f_{3,d,k}^{3/2}} \sum_{(i,j) \in \mathcal{V}} \left[ A(i, j, d, k) - f_{2,d,k} \right]^3, \qquad g_{5,d,k} = \frac{1}{(2q)^2\, g_{3,d,k}^{3/2}} \sum_{(i,j) \in \mathcal{W}} \left[ B(i, j, d, k) - g_{2,d,k} \right]^3$ (8)
For each sample, the extracted features are compiled into a row vector in a predefined sequence:
$x_{k,l} = x_{k,\, r_l + 5(d_l - 1) + 5 D s_l} = \begin{cases} f_{r_l, d_l, k} & \text{if } s_l = 0 \\ g_{r_l, d_l, k} & \text{if } s_l = 1 \end{cases} \quad \text{with } 1 \le r_l \le 5,\ 1 \le d_l \le D,\ 0 \le s_l \le 1$ (9)

where $l \in \{1, \ldots, 10D\}$, $s_l = \left\lfloor \frac{l-1}{5D} \right\rfloor$, $d_l = 1 + \left\lfloor \frac{l - 1 - 5 D s_l}{5} \right\rfloor$, and $r_l = l - 5(d_l - 1) - 5 D s_l$.
The row vectors are then stacked into a matrix X of size $(\#\mathcal{I}) \times (10D)$. Each row of the matrix is associated with a specific position, and each column is a feature defined by the windows, wavelengths, and Equations (5)–(8). Here, $\#\mathcal{I}$ is the number of training sample locations. Each row is linked to a truth label c, and these labels are collated into a column vector Y of dimension $(\#\mathcal{I}) \times 1$.
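As a continuation of the earlier sketch, the five statistics of Equations (5)–(8) and the assembly of the feature matrix X of Equation (9) might be computed as follows. This is again illustrative: composite_window_patches refers to the previous listing, and the biased (1/n) moment estimators and non-excess kurtosis of scipy.stats are assumed to match the normalizations in the equations.

import numpy as np
from scipy import stats

def window_features(patch):
    # Median, mean, variance, kurtosis, and skewness per band -> shape (5, D).
    v = patch.reshape(-1, patch.shape[-1]).astype(float)   # pixels x bands
    return np.stack([
        np.median(v, axis=0),
        v.mean(axis=0),
        v.var(axis=0),                              # population variance
        stats.kurtosis(v, axis=0, fisher=False),    # m4 / m2^2, Equation (7)
        stats.skew(v, axis=0),                      # m3 / m2^(3/2), Equation (8)
    ])

def build_feature_matrix(image, locations, p=1, q=2):
    # One row of 10*D features per sampled location, ordered as in Equation (9):
    # small-window features band by band, then large-window features.
    rows = []
    for k, A, B in composite_window_patches(image, p, q):
        if k in locations:
            rows.append(np.concatenate([window_features(A).ravel(order="F"),
                                        window_features(B).ravel(order="F")]))
    return np.asarray(rows)   # shape (#I, 10*D)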

3.2. Sampling Methods

Previous studies typically employed pixel-based random sampling or manually defined region sampling to select training samples for classification algorithms. Traditional methods concentrate on precise modeling of spectral, textural, and spatial patterns to classify built-up areas. Nonetheless, accurately capturing the intricate patterns in high-resolution images using single-pixel methodologies poses substantial challenges. Block-based methods are more effective than object-based methods in modeling complex texture and structural patterns [31], and thus they are more commonly used for this purpose [21]. Manually defined region sampling, for its part, readily produces redundant samples because nearby pixels are strongly correlated. To solve these problems, two kinds of block-based sampling methods, block center sampling and whole block sampling, are studied in this section.

3.2.1. Block Center Sampling

Block center sampling subdivides the original image into small non-overlapping blocks (n × n pixels) and then selects the center pixel of each block as a training sample. Compared with traditional pixel-based random sampling, the instances produced by block center sampling are more independent, which significantly reduces the negative impact of redundant data. These reference instances are not split in this study; when they are numerous, they can also be divided into training and test sets in a manner similar to traditional random partitioning. Figure 6a shows an example of block center sampling with a window size of 3 × 3.
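As a brief illustrative sketch (not the authors' implementation), block center sampling can be expressed as follows, where label_mask marks the manually labeled reference regions of Figure 7 and n is assumed odd so that each block has a true center pixel:

import numpy as np

def block_center_sampling(label_mask, n=3):
    # Centre pixel of every non-overlapping n x n block whose centre is labeled.
    M, N = label_mask.shape
    half = n // 2
    return np.array([(r + half, c + half)
                     for r in range(0, M - n + 1, n)
                     for c in range(0, N - n + 1, n)
                     if label_mask[r + half, c + half]])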

3.2.2. Whole Block Sampling

The first step of whole block sampling is still to divide the original image into small non-overlapping blocks, while the second step is to select blocks which meet certain criteria as training samples. Since all the pixels of non-adjacent blocks are taken as reference instances, the block sampling method not only improves the independence of training samples but also obtains more reference data than the block center-based sampling. It is important to note that when dividing the generated reference instances into a training set and a test set, all pixels within the same block should be consistently allocated to the same subset. Figure 6b shows an example of whole block sampling using a 3 × 3 size window.
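A corresponding sketch of whole block sampling is given below; the qualifying criterion, left unspecified above, is assumed here to be that every pixel of the block is labeled, so that each block carries a single unambiguous class. Any train/test split should then be made over blocks, never over individual pixels:

def whole_block_sampling(label_mask, n=3):
    # Return, per qualifying block, the coordinates of all n x n pixels.
    M, N = label_mask.shape
    blocks = []
    for r in range(0, M - n + 1, n):
        for c in range(0, N - n + 1, n):
            if label_mask[r:r + n, c:c + n].all():   # fully labeled block
                blocks.append([(r + i, c + j)
                               for i in range(n) for j in range(n)])
    return blocks

# Splitting by block keeps all pixels of one block in the same subset:
# train_blocks, test_blocks = blocks[:n_train], blocks[n_train:]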

3.2.3. Instruction and Sampling Process

Figure 7 shows the complete sampling process. Through detailed observation and comparison of Figure 2a and Figure 2c, the damaged and undamaged parts of the study area were manually delineated, and the training samples were labeled accordingly. The dotted box in Figure 7 shows the reference instances. It is worth noting that the algorithm in this paper only uses block center pixels (or whole blocks) which overlap the white part of the image as training samples, and thus the total number of training samples is extremely small. It should also be noted that the size of the sample block is independent of the size of the window in which the features are calculated. For example, when extracting features through a 3 × 3 window, the sample block size can be either 3 × 3 or 5 × 5.

3.3. Random Forest Algorithm

In this study, we employed a pixel-based supervised random forest (RF) machine learning algorithm, which classifies datasets by constructing multiple decorrelated decision trees. These trees collectively guide and aggregate prediction patterns, enhancing the robustness and accuracy of the classification [32]. RF classifiers are adept at handling high-dimensional data and typically outperform alternative methods such as maximum likelihood classifiers, single decision trees, and single-layer neural networks in terms of accuracy [33,34,35]. Furthermore, RF classifiers are relatively resistant to data noise and overfitting, and they offer a quantitative analysis of each variable’s influence on the classification outcome, proving particularly effective in the classification of remote sensing data. Moreover, RF classifiers incorporate an “out-of-bag” (OOB) technique for internal accuracy assessment, which is invaluable for evaluating classification results, especially when the sample size is insufficient for conventional training and test set divisions.
We employed the window size combination of 3 × 3 and 4 × 4 , as detailed in Section 3.1. The RF algorithm iteratively executes the following steps V times, where V denotes the number of decision trees to be generated (Algorithm 1):
  • Resampling of Rows: The rows of matrix X are randomly resampled using the constrained sampling method referenced in Figure 7. This operation is analogous to left multiplying X by a row selection matrix $S_v$ of size $H \times (\#\mathcal{I})$. Here, each row of $S_v$ contains a single non-zero element (equal to one), positioned randomly, and H represents the number of samples used to train each tree. The observed labels are similarly transformed to $S_v Y$.
  • Feature Selection: A subset of G features is randomly selected from the total features. This process is equivalent to right multiplying the matrix $S_v X$ by a column selection matrix $F_v$ of size $(10D) \times G$, which resembles an identity matrix with $10D - G$ columns omitted; the label vector $S_v Y$ is left unchanged.
  • Decision Tree Training: Each decision tree is trained using the pair $(S_v X F_v, S_v Y)$. The trained tree $h_v$ maps the row vectors, which represent training samples, to an integer $c \in \{1, \ldots, C\}$, where C is the number of classes.
The outcomes from the V decision trees are combined based on a majority rule to classify each sample, where x is a row vector and $\mathbf{1}(\text{proposition})$ equals one when the “proposition” is true and zero otherwise:

$h(x) = \underset{c \in \{1, \ldots, C\}}{\arg\max} \sum_{v=1}^{V} \mathbf{1}\left(h_v(x) = c\right)$ (10)
Algorithm 1 Feature extraction for RF training.
      Input:
          I: multispectral image, dimensions $M \times N \times D$
          $\mathcal{I}$: index set of specific pixel locations
          $(2p+1) \times (2p+1)$: smaller window size
          $(2q) \times (2q)$: larger window size
          V: number of decision trees in the random forest
          G: number of features used for training each tree
          H: number of samples used to train each decision tree

      for k = 1 : K do
          Calculate $A(i,j,d,k)$ and $B(i,j,d,k)$ from I with Equation (4)
          for d = 1 : D do
              Calculate $f_{1,d,k}, \ldots, f_{5,d,k}$ from $A(i,j,d,k)$ with Equations (5)–(8)
              if $q \neq 0$ then
                  Calculate $g_{1,d,k}, \ldots, g_{5,d,k}$ from $B(i,j,d,k)$ with Equations (5)–(8)
              end if
          end for
          Record the values above as $x_k$ according to Equation (9)
      end for
      Combine $[x_k]_{k \in \mathcal{I}}$ and $[y_k]_{k \in \mathcal{I}}$ into X and Y

      for v = 1 : V do
          Randomly sample H rows of (X, Y) and define $S_v$
          Randomly sample G features and define $F_v$
          Train decision tree $h_v$ with $(S_v X F_v, S_v Y)$
      end for
      Aggregate the $h_v$ into the ensemble classifier h with Equation (10)

      Output:
          h: random forest ensemble classifier
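In practice, the training loop of Algorithm 1 corresponds closely to an off-the-shelf random forest. The sketch below uses scikit-learn for illustration; X and Y are the feature matrix and label vector defined above, X_all denotes the features for every valid position, and note that scikit-learn draws the random feature subset per split rather than once per tree, a slight departure from the $F_v$ description:

from sklearn.ensemble import RandomForestClassifier

rf = RandomForestClassifier(
    n_estimators=100,      # V decision trees (100 in Section 4.1)
    max_features="sqrt",   # size of the random feature subset
    bootstrap=True,        # row resampling, the role of S_v
    oob_score=True,        # out-of-bag estimate used in Section 3.4.2
    random_state=0,
)
rf.fit(X, Y)
print("OOB accuracy:", rf.oob_score_)

labels = rf.predict(X_all)   # per-position classes for the binary map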

3.4. Evaluation Methods

As indicated in Table 2 and Section 3.2.3, the dataset available for training in this study is limited. Consequently, segregating the samples into distinct training and testing subsets would compromise the classifier’s performance due to inadequate training data. OOB accuracy can be used instead of a test set to evaluate the generalization ability of the RF algorithm. A hypothesis margin map can evaluate the classification confidence of each pixel and make the boundaries of the classified areas clearer. Therefore, a hypothesis margin map combined with the OOB accuracy and a binary map were used as evaluation methods; they are introduced below.

3.4.1. Hypothesis Margin Map

The ensemble margin, a concept introduced by Schapire to elucidate the effectiveness of boosting, serves as a robust indicator of classification confidence, emerging from the collective voting mechanisms of ensemble methods [36]. Unlike the sampling margin, which quantifies the proximity of feature vectors to decision boundaries, the ensemble margin was employed to evaluate the reliability of the classification accuracy in our analysis. Hypothesis margin maps were generated from the margin-based unsupervised maximum-vote operation defined below; every classification map has a corresponding hypothesis margin map. The higher the ensemble margin of a sample, the more confidently that sample is classified. In general, margins near the center of a class have significantly higher values, while smaller margins mainly correspond to class boundaries.
Ensemble margins are categorized into two types: supervised and unsupervised [37]. The supervised margin, as defined by Schapire et al., measures the vote disparity between the true category of a pixel and the category receiving the next highest number of votes [24,36,37,38]. In contrast, the unsupervised margin, introduced in [39], assesses the difference in votes between the two most favored categories, regardless of their accuracy in reflecting the true category. This study employs the unsupervised margin articulated in Equation (14) and detailed in [39].
First, we must define some notations:
  • $\mathcal{C} = \{1, \ldots, C\}$ is the set of all pixel classes.
  • The number of decision trees among the V models which map instance x to class c is $v(x, c)$:

    $v(x, c) = \sum_{v=1}^{V} \mathbf{1}\left(h_v(x) = c\right)$ (11)

  • The class that instance x is most likely to belong to is $c_1(x)$:

    $c_1(x) = \underset{c \in \mathcal{C}}{\arg\max}\ v(x, c)$ (12)

  • The class that instance x is second-most likely to belong to is $c_2(x)$:

    $c_2(x) = \underset{c \in \mathcal{C} \setminus \{c_1(x)\}}{\arg\max}\ v(x, c)$ (13)

The unsupervised ensemble margin is defined as follows:

$\mathrm{margin}(x) = \frac{1}{V} \left[ v(x, c_1(x)) - v(x, c_2(x)) \right]$ (14)
Note that this ensemble margin, with values ranging from 0 to 1, is unsupervised, and it does not require the true label. A high margin ( x ) indicates a dominant preference for c 1 ( x ) among the decision trees, significantly exceeding the votes for c 2 ( x ) and any other category. This occurs as c 1 ( x ) and c 2 ( x ) , by definition, receive the highest number of votes. In other words, this instance is relatively easy to classify.
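Given a fitted forest, the unsupervised margin of Equation (14) can be computed per pixel as sketched below. This is an illustration assuming a scikit-learn forest; note that in scikit-learn, a fitted forest's individual trees predict encoded indices into rf.classes_:

import numpy as np

def unsupervised_margin(rf, X):
    # Normalized vote gap between the two most-voted classes, in [0, 1].
    V = len(rf.estimators_)
    votes = np.zeros((X.shape[0], len(rf.classes_)))
    for tree in rf.estimators_:
        idx = tree.predict(X).astype(int)        # class index per sample
        votes[np.arange(X.shape[0]), idx] += 1
    top2 = np.sort(votes, axis=1)[:, -2:]        # second-best and best counts
    return (top2[:, 1] - top2[:, 0]) / V

# Reshaping the returned values onto the image grid yields the margin map.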

3.4.2. OOB Accuracy

Aggregating predictions can improve accuracy when the prediction method is unstable (i.e., when small changes in the training set or classifier parameters lead to large changes in the final prediction result) [40]. Even if the same classifier is used, different divisions of the training and test sets will lead to different classification results. In addition, reference cases are often limited in remote sensing image research. Because the samples in the test set are not used in training, the performance of the trained classifier is poor when samples are scarce. The OOB accuracy has been shown to estimate the generalization error of an algorithm without the need for a separate test set.
In the process of generating each decision tree, a fixed number of samples is randomly drawn from the training set by bootstrap sampling, i.e., sampling with replacement. These collected samples are used to train one tree in the random forest. The unsampled data are referred to as out-of-bag (OOB) data. These samples are not involved in fitting the model, and thus they can be used to test its generalization ability.
The OOB accuracy is calculated as follows:
  • For each sample, find the decision trees which treat it as an OOB sample and obtain the trees’ classification results for it.
  • Calculate the RF classification result of the sample with a majority voting rule.
  • Finally, take the ratio between the number of correctly classified samples and the total number of samples as the OOB accuracy value of the RF.
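The three steps can be reproduced with a self-contained bagging loop, sketched below for illustration using plain bootstrap sampling and decision trees rather than scikit-learn internals:

import numpy as np
from sklearn.tree import DecisionTreeClassifier

def oob_accuracy(X, Y, n_trees=100, seed=0):
    rng = np.random.default_rng(seed)
    n, classes = len(Y), np.unique(Y)
    votes = np.zeros((n, len(classes)))
    for _ in range(n_trees):
        boot = rng.integers(0, n, size=n)          # bootstrap: with replacement
        oob = np.setdiff1d(np.arange(n), boot)     # samples left out of the bag
        tree = DecisionTreeClassifier().fit(X[boot], Y[boot])
        pred = tree.predict(X[oob])
        for c_idx, c in enumerate(classes):        # step 1: collect OOB votes
            votes[oob[pred == c], c_idx] += 1
    voted = votes.sum(axis=1) > 0                  # OOB for at least one tree
    majority = classes[votes.argmax(axis=1)]       # step 2: majority vote
    return np.mean(majority[voted] == Y[voted])    # step 3: correct ratio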

3.4.3. Binary Map

Accurate classification maps serve as critical tools for extensive environmental and forest monitoring initiatives. Binary maps provide a straightforward approach to assessing binary classifications. In this study, damaged areas are represented in yellow, while non-damaged regions are shown in green on these maps. Analyzing the spatial distribution of damaged forests within the binary maps allows for the evaluation of various feature combinations’ effectiveness.

4. Results and Discussions

4.1. Spectral versus Spatial Classification

In order to select the best features for the RF in monitoring wind-damaged forest change, the classification performance of spectral and spatial features was first compared and analyzed. Eight primary bands, comprising the images captured before and after the storm, served as spectral features. Additionally, 16 first-order features, specifically the mean and variance calculated from these eight bands, were utilized as spatial features. The spatial feature classification was combined with the block center and whole block sampling methods. The effect of the window size on spatial classification was also evaluated: the window sizes w for spatial computation and block-based data sampling were simultaneously set to 3 × 3, 4 × 4, and 5 × 5. The random forest consisted of 100 decision trees, and all recorded OOB accuracy values are averages over 10 runs.
The experimental results of the random forest classifier using spectral and spatial features combined with the block-based sampling methods are shown in Table 2. As can be seen from the table, spatial features yield better random forest performance than spectral features, with the best spatial features raising the OOB accuracy to 98.8%. The binary maps of the damaged and undamaged areas for spectral and spatial feature classification are shown in Figure 8 and Figure 9, where yellow and green indicate damaged and undamaged areas, respectively. The spatial classification was clearly better than the spectral classification. However, there are also some differences from the OOB accuracy conclusion: the texture classification maps obtained by block center sampling were smoother, used fewer samples, and had higher overall accuracy than those obtained using whole block sampling.
That aside, a larger window size can identify forest textures more efficiently, resulting in a smoother binary classification map. The hypothesis margin maps for spectral and spatial classification are shown in Figure 10. Areas which can easily be classified correctly are shown in yellow, which represents high margin values. In an optimal hypothesis margin map, all areas of the image would appear yellow. At the same time, it is relatively difficult to obtain clear boundaries with traditional evaluation methods. The pixels which were difficult to classify (shown in blue on the map) can, to some extent, represent the boundaries between damaged and undamaged areas. By comparing the distribution of blue pixels between two maps, we can discern which method is more suitable. Careful observation of Figure 10 further confirms the conclusions drawn from the binary maps.

4.2. One Window versus Two Windows for Spatial Feature Analysis

In order to further analyze the performance of spatial features in damaged-forest change detection, in this section, we mainly compare first-order feature combinations extracted with one and two feature extraction windows. The effect of the number of spatial features on the RF classification results is also analyzed. In this experiment, the random forest consisted of 250 decision trees. First, an RF classification with a window size of 3 × 3 was performed using 16 spatial features (the mean and variance of the images before and after the storm, labeled test 1) and 40 spatial features (the mean, variance, median, kurtosis, and skewness of the 8 initial bands, labeled test 2). Then, two windows with sizes of 3 × 3 and 4 × 4 were used to extract the first-order spatial features from the images before and after the storm to construct the third feature structure (labeled test 3), giving 40 + 40 = 80 features. The sampling block sizes for the three tests under block center sampling were set to 3 × 3, 3 × 3, and 4 × 4, respectively.
The OOB accuracy of the RF classifier under the three spatial feature structures is shown in Table 3, with 16, 40, and 80 features, respectively. The data in the table show that there was no significant difference between 16 and 40 features in terms of OOB accuracy. In other words, when a single window is used to calculate the spatial features, adding the median, kurtosis, and skewness does not significantly improve the OOB accuracy of the random forest. The performance of the third feature structure was clearly the best. This means that first-order features obtained from windows of different sizes provide more meaningful information for RF classification than features obtained from a single window.
In order to further verify this conclusion, the corresponding binary maps (Figure 11) and hypothesis margin maps (Figure 12) were calculated. Figure 11 shows that test 3 was slightly better than tests 1 and 2 in terms of map smoothness. The hypothesis margin maps clearly reflect the different properties of the three feature structures. Test 2 produced a better hypothesis margin map than test 1, and test 3 obtained the best hypothesis margin map in this experiment. The image in Figure 12c has the most yellow areas, indicating that these areas were more likely to be correctly classified. It is also worth mentioning that due to the larger sampling block size of the third test, the number of training samples was smaller than in the other tests. Therefore, it can be concluded that the multi-window feature structure can achieve better classification performance with fewer labeled training instances.

4.3. One Image versus Two Images

In some cases, pre-storm data for the study area are lacking, which makes it difficult to detect damaged forests. Therefore, in this section, we explore whether it is possible to accurately distinguish between damaged and undamaged areas of the forest based solely on the post-storm image. Given the positive conclusion in Section 4.2, this experiment focuses on the comparison of tests 2 and 3. Using one image means that only the spatial features corresponding to the four initial bands of the post-storm image were used in feature selection, with all other operations unchanged, as shown in Table 4. We compared the outcomes for 20 features (test 4) and 40 features (test 5), which correspond to the test 2 and test 3 scenarios in Section 4.2. The only change in these tests was the reduction in the number of images. The random forest OOB accuracy using one image (the post-storm image) and two images (the pre- and post-storm images) is shown in Table 5. The data showed no significant difference in OOB accuracy between the two tests.
The corresponding full binary maps and hypothesis margin maps are presented in Figure 13 and Figure 14, respectively. Detailed observation shows that the random forest achieved similar forest change monitoring performance with one image or two images, regardless of whether one window or two windows were used to calculate the first-order spatial features. The question posed above is therefore answered in the affirmative; that is, damaged forests can be effectively identified with only a post-storm image.

5. Discussion

This study provides a detailed examination of storm-damaged and undamaged areas within Nezer Forest using bitemporal Formosat-2 imagery captured before and after the windstorm Klaus. The use of hypothesis margin maps, alongside out-of-bag (OOB) accuracy and binary map analyses, offers valuable insights into the effectiveness of various sampling methodologies and feature configurations in remote sensing image classification.
  • Interpretation of Results
    • Our findings indicate that spatial features derived from block sampling outperformed spectral features based on pixel sampling. The window sizes w for spatial computation and block-based data sampling were simultaneously set to 3 × 3, 4 × 4, and 5 × 5, with the 5 × 5 window performing best. Specifically, the whole block sampling method achieved an OOB accuracy of up to 98.8%, higher than the block center sampling method, which reached 98.6%. In comparison, the spectral features only achieved an OOB accuracy of 91.6%. This suggests that block sampling methods can better capture the spatial and structural complexities of storm-damaged areas compared with pixel-based methods, which often fail to account for such nuances.
    • This study also demonstrated that extracting features using two windows significantly improves classification performance. The OOB accuracy achieved with 80 features from two windows was 96.9%, compared with 94.2% with 16 and 40 features from a single window. This improvement highlights the advantage of incorporating multi-scale feature extraction, which can capture a broader range of spatial information and enhance the robustness of the classification model.
    • Furthermore, the feasibility of identifying storm-damaged forests using only post-storm imagery was confirmed by the negligible difference in OOB accuracy and classification maps between using one image and two images. This finding is particularly important for practical applications, as it suggests that effective post-disaster assessment can be conducted with limited temporal data, reducing the need for extensive pre-storm imagery.
    • The superior performance of spatial features over spectral features aligns with the findings of Jiang et al. and Kulkarni et al., who highlighted the importance of texture and spatial information in forest disturbance monitoring. Our results extend these findings by demonstrating that block-based sampling methods can further enhance classification accuracy, supporting the notion that the sampling methodology plays a crucial role in remote sensing analysis. Moreover, using hypothesis margin maps as a new evaluation criterion, combined with OOB accuracy, provides a comprehensive framework for assessing classification confidence and delineating regional boundaries more clearly. This approach addresses the issue of limited labeled samples and enhances overall classification reliability.
    • While the presented methodology demonstrated significant improvements in classification accuracy using spatial features and block-based sampling methods, it is not without limitations. One of the primary constraints is the reliance on high-resolution imagery, which may not be readily available in all regions or for all events, potentially limiting the generalizability of the approach. Additionally, the computational complexity associated with multi-scale feature extraction and the analysis of large datasets could be a challenge for real-time applications, particularly in resource-constrained environments. The methodology’s effectiveness in different forest types or under varying storm conditions also requires further exploration to confirm its broader applicability. Future work should focus on optimizing the computational efficiency of the proposed methods and validating their performance across diverse ecological settings.
  • Implications and Practical Applications
    • The findings of this study have significant implications for post-disaster management and forest conservation. Accurate mapping of windfall damages using remote sensing technology can facilitate timely and effective response strategies, mitigating the long-term impacts of storm events on forest ecosystems. The proposed sampling and feature extraction methods can be integrated into existing remote sensing frameworks to enhance the accuracy and efficiency of forest monitoring programs.
    • Additionally, the demonstrated feasibility of using post-storm images alone for damage assessment suggests a cost-effective and time-efficient approach to disaster response. This capability is particularly valuable in scenarios where pre-storm imagery may not be readily available, enabling rapid deployment of monitoring efforts in the aftermath of a storm.
  • Future Research Directions
    • Future research should focus on expanding the application of the proposed methods to larger and more diverse study areas to validate their robustness and scalability. Exploring the integration of additional texture and spatial features, as well as advanced machine learning algorithms, could further improve classification performance. Moreover, investigating the potential of combining multi-temporal imagery with ancillary data sources, such as meteorological and topographical information, could enhance the comprehensiveness and accuracy of forest disturbance monitoring.

6. Conclusions

This study presented the classification of storm-damaged and undamaged areas within Nezer Forest by utilizing bitemporal Formosat-2 imagery captured before and after the windstorm Klaus, on 22 December 2008 and 4 February 2009, respectively. Subsequently, a hypothesis margin map, complemented by out-of-bag (OOB) accuracy and binary map analyses, was employed to assess various sampling methodologies and feature configurations. The classification results of spatial features based on block sampling were much better than those of spectral features based on pixel sampling. The whole block sampling method achieved a higher OOB accuracy than the block center sampling method when using one window to extract features, with the highest value reaching 98.8%. However, the latter’s binary maps and hypothesis margin maps were smoother, with fewer samples and a higher overall accuracy than the former’s. In addition, extracting features with two windows improves the results significantly, far more than extracting additional features with only one window: when using 16 and 40 features from one window and 80 features from two windows, the OOB accuracy was 94.2%, 94.2%, and 96.9%, respectively. Finally, the feasibility of identifying damaged forests using only a post-storm image was demonstrated by showing that there was no obvious change in the OOB accuracy or classification maps between one image and two images.

Author Contributions

W.F. and G.D. conceived and designed the experiments; W.F. performed the experiments and wrote the paper; F.B. revised the paper and edited the manuscript; Y.Q., P.W. and M.X. provided equipment and funding support. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the National Natural Science Foundation of China (Nos. 62201438, 62331019, and 12005169); Natural Science Basic Research Program of Shaanxi (No. 2021JC-23); and Shaanxi Forestry Science and Technology Innovation Key Project (No. SXLK2022-02-8).

Data Availability Statement

No new data were created or analyzed in this study. Data sharing is not applicable to this article.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. De Luis, M.; González-Hidalgo, J.; Raventós, J. Effects of fire and torrential rainfall on erosion in a Mediterranean gorse community. Land Degrad. Dev. 2003, 14, 203–213. [Google Scholar] [CrossRef]
  2. Foster, D.R. Species and stand response to catastrophic wind in central New England, USA. J. Ecol. 1988, 76, 135–151. [Google Scholar] [CrossRef]
  3. Boutet, J.C.; Weishampel, J.F. Spatial pattern analysis of pre-and post-hurricane forest canopy structure in North Carolina, USA. Landsc. Ecol. 2003, 18, 553–559. [Google Scholar] [CrossRef]
  4. Mills, A.; Christophersen, T.; Wilkie, M.; Mansur, E. The United Nations Decade on ecosystem restoration: Catalysing a global movement. Unasylva 2020, 252, 119–126. [Google Scholar]
  5. Camarretta, N.; Harrison, P.A.; Bailey, T.; Potts, B.; Lucieer, A.; Davidson, N.; Hunt, M. Monitoring forest structure to guide adaptive management of forest restoration: A review of remote sensing approaches. New For. 2020, 51, 573–596. [Google Scholar] [CrossRef]
  6. Mather, P.; Tso, B. (Eds.) Classification Methods for Remotely Sensed Data, 2nd ed.; CRC Press: Boca Raton, FL, USA, 2016; p. 376. [Google Scholar]
  7. Sun, Y.; Lei, L.; Li, X.; Tan, X.; Kuang, G. Structure consistency-based graph for unsupervised change detection with homogeneous and heterogeneous remote sensing images. IEEE Trans. Geosci. Remote Sens. 2021, 60, 4700221. [Google Scholar] [CrossRef]
  8. Sun, Y.; Lei, L.; Li, Z.; Kuang, G. Similarity and dissimilarity relationships based graphs for multimodal change detection. ISPRS J. Photogramm. Remote Sens. 2024, 208, 70–88. [Google Scholar] [CrossRef]
  9. Cohen, W.B.; Yang, Z.; Healey, S.P.; Kennedy, R.E.; Gorelick, N. A LandTrendr multispectral ensemble for forest disturbance detection. Remote Sens. Environ. 2018, 205, 131–140. [Google Scholar] [CrossRef]
  10. Fokeng, R.M.; Forje, W.G.; Meli, V.M.; Bodzemo, B.N. Multi-temporal forest cover change detection in the Metchie-Ngoum protection forest reserve, West Region of Cameroon. Egypt. J. Remote Sens. Space Sci. 2020, 23, 113–124. [Google Scholar] [CrossRef]
  11. Zarco-Tejada, P.; Hornero, A.; Hernández-Clemente, R.; Beck, P. Understanding the temporal dimension of the red-edge spectral region for forest decline detection using high-resolution hyperspectral and Sentinel-2a imagery. ISPRS J. Photogramm. Remote Sens. 2018, 137, 134–148. [Google Scholar] [CrossRef]
  12. Bar, S.; Parida, B.R.; Pandey, A.C. Landsat-8 and Sentinel-2 based Forest fire burn area mapping using machine learning algorithms on GEE cloud platform over Uttarakhand, Western Himalaya. Remote Sens. Appl. Soc. Environ. 2020, 18, 100324. [Google Scholar] [CrossRef]
  13. White, J.C.; Saarinen, N.; Kankare, V.; Wulder, M.A.; Hermosilla, T.; Coops, N.C.; Pickell, P.D.; Holopainen, M.; Hyyppä, J.; Vastaranta, M. Confirmation of post-harvest spectral recovery from Landsat time series using measures of forest cover and height derived from airborne laser scanning data. Remote Sens. Environ. 2018, 216, 262–275. [Google Scholar] [CrossRef]
  14. Salas, E.A.L.; Boykin, K.G.; Valdez, R. Multispectral and texture feature application in image-object analysis of summer vegetation in Eastern Tajikistan Pamirs. Remote Sens. 2016, 8, 78. [Google Scholar] [CrossRef]
  15. Regniers, O.; Bombrun, L.; Guyon, D.; Samalens, J.C.; Germain, C. Wavelet-Based Texture Features for the Classification of Age Classes in a Maritime Pine Forest. IEEE Geosci. Remote Sens. Lett. 2015, 12, 621–625. [Google Scholar] [CrossRef]
  16. Beguet, B.; Chehata, N.; Boukir, S.; Guyon, D. Retrieving forest structure variables from very high resolution satellite images using an automatic method. ISPRS Ann. Photogramm. Remote Sens. Spat. Inf. Sci. 2017, I-7, 1–6. [Google Scholar]
  17. Jiang, W.; Rule, H.; Ziyue, X.; Ning, H. Forest fire smog feature extraction based on Pulse-Coupled neural network. In Proceedings of the 2011 6th IEEE Joint International Information Technology and Artificial Intelligence Conference, Chongqing, China, 20–22 August 2011; Volume 1, pp. 186–189. [Google Scholar]
  18. Balling, J.; Herold, M.; Reiche, J. How textural features can improve SAR-based tropical forest disturbance mapping. Int. J. Appl. Earth Obs. Geoinf. 2023, 124, 103492. [Google Scholar] [CrossRef]
  19. Song, Z.; Li, X.; Zhu, R.; Wang, Z.; Yang, Y.; Zhang, X. ERMF: Edge refinement multi-feature for change detection in bitemporal remote sensing images. Signal Process. Image Commun. 2023, 116, 116964. [Google Scholar] [CrossRef]
  20. Puthumanaillam, G.; Verma, U. Texture based prototypical network for few-shot semantic segmentation of forest cover: Generalizing for different geographical regions. Neurocomputing 2023, 538, 126201. [Google Scholar] [CrossRef]
  21. Hu, Z.; Li, Q.; Zhang, Q.; Wu, G. Representation of Block-Based Image Features in a Multi-Scale Framework for Built-Up Area Detection. Remote Sens. 2016, 8, 155. [Google Scholar] [CrossRef]
  22. Murray, H.; Lucieer, A.; Williams, R. Texture-based classification of sub-Antarctic vegetation communities on Heard Island. Int. J. Appl. Earth Obs. Geoinf. 2010, 12, 138–149. [Google Scholar] [CrossRef]
  23. Puissant, A.; Hirsch, J.; Weber, C. The utility of texture analysis to improve per-pixel classification for high to very high spatial resolution imagery. Int. J. Remote Sens. 2005, 26, 733–745. [Google Scholar] [CrossRef]
  24. Feng, W.; Dauphin, G.; Huang, W.; Quan, Y.; Bao, W.; Wu, M.; Li, Q. Dynamic synthetic minority over-sampling technique-based rotation forest for the classification of imbalanced hyperspectral data. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2019, 12, 2159–2169. [Google Scholar] [CrossRef]
  25. Li, Q.; Feng, W.; Quan, Y.H. Trend and forecasting of the COVID-19 outbreak in China. J. Infect. 2020, 80, 469–496. [Google Scholar] [PubMed]
  26. Du, P.; Xia, J.; Chanussot, J.; He, X. Hyperspectral remote sensing image classification based on the integration of support vector machine and random forest. In Proceedings of the IEEE International Geoscience and Remote Sensing Symposium, Munich, Germany, 22–27 July 2012; pp. 174–177. [Google Scholar]
  27. Du, P.; Xia, J.; Zhang, W.; Tan, K.; Liu, Y.; Liu, S. Multiple Classifier System for Remote Sensing Image Classification: A Review. Sensors 2012, 12, 4764–4792. [Google Scholar] [CrossRef] [PubMed]
  28. Gislason, P.; Benediktsson, J.; Sveinsson, J. Random Forests for land cover classification. Pattern Recognit. Lett. 2006, 27, 294–300. [Google Scholar] [CrossRef]
  29. Sudiana, D.; Lestari, A.I.; Riyanto, I.; Rizkinia, M.; Arief, R.; Prabuwono, A.S.; Sri Sumantyo, J.T. A hybrid convolutional neural network and random forest for burned area identification with optical and synthetic aperture radar (SAR) data. Remote Sens. 2023, 15, 728. [Google Scholar] [CrossRef]
  30. Billah, M.; Islam, A.S.; Mamoon, W.B.; Rahman, M.R. Random forest classifications for landuse mapping to assess rapid flood damage using Sentinel-1 and Sentinel-2 data. Remote Sens. Appl. Soc. Environ. 2023, 30, 100947. [Google Scholar] [CrossRef]
  31. Pesaresi, M.; Huadong, G.; Blaes, X.; Ehrlich, D.; Ferri, S.; Gueguen, L.; Halkia, M.; Kauffmann, M.; Kemper, T.; Lu, L.; et al. A Global Human Settlement Layer From Optical HR/VHR RS Data: Concept and First Results. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2013, 6, 2102–2131. [Google Scholar] [CrossRef]
  32. Breiman, L. Random forests. Mach. Learn. 2001, 45, 5–32. [Google Scholar] [CrossRef]
  33. Belgiu, M.; Drăguţ, L. Random forest in remote sensing: A review of applications and future directions. ISPRS J. Photogramm. Remote Sens. 2016, 114, 24–31. [Google Scholar] [CrossRef]
  34. Lawrence, R.L.; Wood, S.D.; Sheley, R.L. Mapping invasive plants using hyperspectral imagery and Breiman Cutler classifications (RandomForest). Remote Sens. Environ. 2006, 100, 356–362. [Google Scholar] [CrossRef]
  35. Na, X.; Zhang, S.; Li, X.; Yu, H.; Liu, C. Improved land cover mapping using random forests combined with landsat thematic mapper imagery and ancillary geographic data. Photogramm. Eng. Remote Sens. 2010, 76, 833–840. [Google Scholar] [CrossRef]
  36. Schapire, R.E.; Freund, Y.; Bartlett, P.; Lee, W.S. Boosting the Margin: A New Explanation for the Effectiveness of Voting Methods. Ann. Stat. 1998, 26, 1651–2080. [Google Scholar]
  37. Feng, W.; Boukir, S.; Guo, L. Identification and correction of mislabeled training data for land cover classification based on ensemble margin. In Proceedings of the IEEE International, Geoscience and Remote Sensing Symposium (IGARSS), Milan, Italy, 13–18 July 2015; pp. 4991–4994. [Google Scholar] [CrossRef]
  38. Feng, W.; Bao, W. Weight-based rotation forest for hyperspectral image classification. IEEE Geosci. Remote Sens. Lett. 2017, 14, 2167–2171. [Google Scholar] [CrossRef]
  39. Feng, W.; Huang, W.; Ren, J. Class imbalance ensemble learning based on the margin theory. Appl. Sci. 2018, 8, 815. [Google Scholar] [CrossRef]
  40. Breiman, L. Out-of-Bag Estimation; Technical report; University of California: Berkeley, CA, USA, 1996. [Google Scholar]
Figure 1. Location of study area.
Figure 2. Formosat-2 multispectral images acquired before and after windstorm Klaus. (a) Before storm (RGB). (b) Before storm (NIR). (c) After storm (RGB). (d) After storm (NIR).
Figure 3. Technical flowchart.
Figure 4. The feature extraction procedure.
Figure 5. The centers of windows of different sizes.
Figure 6. Two sampling methods for training sample selection. (a) Block center sampling; (b) whole block sampling. The shaded pixels are the selected sample points.
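For readers who wish to experiment with the two strategies in Figure 6, the sketch below shows one plausible Python implementation, assuming a 2-D label image whose pixels carry class annotations; the homogeneity test, window size, and function name are illustrative choices rather than the authors' exact procedure.

```python
import numpy as np

def block_sample_coords(label_img, w=3, whole_block=False):
    """Select training-pixel coordinates from non-overlapping w x w blocks.

    whole_block=False -> block center sampling: one center pixel per block.
    whole_block=True  -> whole block sampling: every pixel of the block.
    Only blocks whose pixels share a single label are kept (an assumption).
    """
    rows, cols = label_img.shape
    coords = []
    for r in range(0, rows - w + 1, w):
        for c in range(0, cols - w + 1, w):
            block = label_img[r:r + w, c:c + w]
            if not np.all(block == block[0, 0]):
                continue  # skip mixed-label blocks
            if whole_block:
                coords.extend((r + i, c + j) for i in range(w) for j in range(w))
            else:
                coords.append((r + w // 2, c + w // 2))  # block center only
    return coords
```

Under this reading, whole block sampling yields up to w² times as many training pixels per homogeneous block as block center sampling.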
Figure 7. Sampling technical flowchart.
Figure 8. Binary map of random forests. (a) Spectral classification. (b) Spatial classification with block center sampling (w = 3). (c) Spatial classification with block center sampling (w = 4). (d) Spatial classification with block center sampling (w = 5). (e) Spatial classification with whole block sampling (w = 3). (f) Spatial classification with whole block sampling (w = 4). (g) Spatial classification with whole block sampling (w = 5).
Figure 9. Enlarged details of the results.
Figure 10. Margin map of random forests. (a) Spectral classification. (b) Spatial classification with block center sampling (w = 3). (c) Spatial classification with block center sampling (w = 4). (d) Spatial classification with block center sampling (w = 5). (e) Spatial classification with whole block sampling (w = 3). (f) Spatial classification with whole block sampling (w = 4). (g) Spatial classification with whole block sampling (w = 5).
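The margin maps of Figures 10, 12 and 14 visualize per-pixel classification confidence, with high values where the ensemble agrees and low values where it hesitates. As a hedged stand-in (not the paper's homomorphic hypothesis margin itself), a classic soft-vote ensemble margin can be derived from a trained random forest as the gap between its two largest averaged class probabilities; the data below are synthetic placeholders.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Toy stand-in for per-pixel feature vectors (400 pixels = a 20 x 20 image).
X, y = make_classification(n_samples=400, n_features=16, random_state=0)
rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

proba = rf.predict_proba(X)            # (n_pixels, n_classes), averaged tree votes
top2 = np.sort(proba, axis=1)[:, -2:]  # two largest class probabilities per pixel
margin = top2[:, 1] - top2[:, 0]       # in [0, 1]: high = confident, low = ambiguous
margin_map = margin.reshape(20, 20)    # arrange per-pixel margins as an image
```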
Figure 11. Binary map of random forests. (a) Test 1: 16 spatial features (w = 3). (b) Test 2: 40 spatial features (w = 3). (c) Test 3: 80 spatial features (w1 = 3, w2 = 4).
Figure 12. Margin map of random forests. (a) Test 1: 16 spatial features (w = 3). (b) Test 2: 40 spatial features (w = 3). (c) Test 3: 80 spatial features (w1 = 3, w2 = 4).
Figure 13. Binary map of random forests. (a) 20 spatial features (w = 3). (b) 40 spatial features (w = 3). (c) 40 spatial features (w1 = 3, w2 = 4). (d) 80 spatial features (w1 = 3, w2 = 4).
Figure 14. Margin map of random forests. (a) 20 spatial features (w = 3). (b) 40 spatial features (w = 3). (c) 40 spatial features (w1 = 3, w2 = 4). (d) 80 spatial features (w1 = 3, w2 = 4).
Table 1. Spectral and first-order textural features.

Spectral features:   Red, Green, Blue, NIR
Textural features:   Median, Mean, Variance, Skewness, Kurtosis
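The first-order textural features of Table 1 can be computed for each band over a sliding window. The sketch below, assuming NumPy/SciPy and a 2-D float band, is one straightforward (unoptimized) way to obtain the five statistics; the window size and function name are illustrative.

```python
import numpy as np
from scipy import ndimage
from scipy.stats import skew, kurtosis

def first_order_textures(band, w=3):
    """Per-pixel median, mean, variance, skewness, and kurtosis over a w x w window."""
    med = ndimage.median_filter(band, size=w)
    mean = ndimage.uniform_filter(band, size=w)
    var = ndimage.uniform_filter(band ** 2, size=w) - mean ** 2
    var = np.clip(var, 0.0, None)  # guard against tiny negative round-off values
    # generic_filter is slow but simple; constant windows yield NaN skewness/kurtosis.
    skw = ndimage.generic_filter(band, lambda v: skew(v), size=w)
    krt = ndimage.generic_filter(band, lambda v: kurtosis(v), size=w)
    return np.stack([med, mean, var, skw, krt])  # shape (5, H, W)
```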
Table 2. OOB accuracy, overall accuracy, and quantity of samples of the classifier using spectral and spatial features (combined with block center and whole block sampling, respectively).

                          Spectral Classification   Spatial Classification (16 Spatial Features)
                          8 Spectral Features       Center Sampling            Whole Block Sampling
                                                    3 × 3    4 × 4    5 × 5    3 × 3    4 × 4    5 × 5
OOB accuracy (%)          91.6                      93.2     95.4     95.5     96.1     97.1     98.8
Overall accuracy (%)      91.9                      96.0     96.7     98.6     86.0     87.8     87.8
Quantity of samples       2742                      278      96       45       1125     1181     1251
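The OOB accuracy reported above is the out-of-bag estimate of bagging: each sample is scored only by the trees that did not draw it in their bootstrap set, so no separate test split is needed and all labeled pixels remain available for change detection. A minimal sketch with scikit-learn's random forest follows (the data and parameters are synthetic placeholders, not the study's configuration):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Synthetic stand-in for labeled pixel samples; 16 features mimics Table 2's
# spatial-feature case, but all values here are placeholders.
X, y = make_classification(n_samples=1000, n_features=16, random_state=0)

rf = RandomForestClassifier(n_estimators=200, oob_score=True, random_state=0)
rf.fit(X, y)
print(f"OOB accuracy: {100 * rf.oob_score_:.1f}%")  # out-of-bag estimate, no test set
```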
Table 3. OOB accuracy of the random forests classifier using three spatial feature structures with 16, 40, and 80 features.

                       Test 1:                Test 2:                Test 3:
                       16 Spatial Features    40 Spatial Features    80 Spatial Features
OOB accuracy (%)       94.2                   94.2                   96.9
Table 4. Parameter settings of the experiments.

Experiment Number                  Test 1    Test 2    Test 3          Test 4    Test 5
Window size w for data sampling    3 × 3     3 × 3     4 × 4           3 × 3     4 × 4
w for feature calculation          3 × 3     3 × 3     3 × 3, 4 × 4    3 × 3     3 × 3, 4 × 4
Number of windows                  1         1         2               1         2
Images used                        2         2         2               1         1
Spatial features extracted         2         5         5               5         5
Feature quantity                   16        40        80              20        40
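The feature quantities in Table 4 appear to follow a simple product of the rows above them; the check below makes that inferred relation explicit (this is our reading of the table, not a formula stated by the authors).

```python
def feature_quantity(bands, feature_types, windows, images):
    # Inferred from Table 4: total features = bands x feature types x windows x images.
    return bands * feature_types * windows * images

assert feature_quantity(4, 2, 1, 2) == 16  # Test 1
assert feature_quantity(4, 5, 1, 2) == 40  # Test 2
assert feature_quantity(4, 5, 2, 2) == 80  # Test 3
assert feature_quantity(4, 5, 1, 1) == 20  # Test 4
assert feature_quantity(4, 5, 2, 1) == 40  # Test 5
```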
Table 5. Comparison of the classifier's OOB accuracy between using one image and using two images.

                       16 Spatial Features        40 Spatial Features
                       One Image    Two Images    One Image    Two Images
OOB accuracy (%)       94.2         94.6          96.9         96.9
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
