Article

Instance-Level Scaling and Dynamic Margin-Alignment Knowledge Distillation for Remote Sensing Image Scene Classification †

by Chuan Li, Xiao Teng, Yan Ding and Long Lan *
College of Computer Science and Technology, National University of Defense Technology, Changsha 410073, China
* Author to whom correspondence should be addressed.
† It should be noted that this article is a revised and expanded version of a paper entitled "Instance-level Scaling and Dynamic Margin-alignment Knowledge Distillation" [33], which has been accepted for presentation at the 7th Chinese Conference on Pattern Recognition and Computer Vision (PRCV 2024), to be held in Urumqi, China, 18–20 October 2024. The proceedings of the conference are not yet available. Our initial conference paper did not address the scene classification task for remote sensing images (RSI). This manuscript provides a detailed analysis of the impact of RSI data on existing knowledge distillation methods and demonstrates how the instance-level scaling and dynamic margin-alignment (ISDM) approach effectively addresses this issue. Additionally, the paper constructs a framework for RSI scene classification based on knowledge distillation methods, analyzes related work in RSI scene classification, and introduces more datasets along with further experimental validation of the proposed method's superiority.
Remote Sens. 2024, 16(20), 3853; https://doi.org/10.3390/rs16203853
Submission received: 9 September 2024 / Revised: 2 October 2024 / Accepted: 11 October 2024 / Published: 17 October 2024
(This article belongs to the Section AI Remote Sensing)

Abstract
Remote sensing image (RSI) scene classification aims to identify semantic categories in RSI using neural networks. However, high-performance deep neural networks typically demand substantial storage and computational resources, making practical deployment challenging. Knowledge distillation has emerged as an effective technique for developing compact models that maintain high classification accuracy in RSI tasks. Existing knowledge distillation methods often overlook the high inter-class similarity in RSI scenes, leading to low-confidence soft labels from the teacher model, which can mislead the student model. Conversely, overly confident soft labels may discard valuable non-target information. Additionally, the significant intra-class variability in RSI contributes to instability in the model’s decision boundaries. To address these challenges, we propose an efficient method called instance-level scaling and dynamic margin-alignment knowledge distillation (ISDM) for RSI scene classification. To balance the target and non-target class influence, we apply an entropy regularization loss to scale the teacher model’s target class at the instance level. Moreover, we introduce dynamic margin alignment between the student and teacher models to improve the student’s discriminative capability. By optimizing soft labels and enhancing the student’s ability to distinguish between classes, our method reduces the effects of inter-class similarity and intra-class variability. Experimental results on three public RSI scene classification datasets (AID, UCMerced, and NWPU-RESISC) demonstrate that our method achieves state-of-the-art performance across all teacher–student pairs with lower computational costs. Additionally, we validate the generalization of our approach on general datasets, including CIFAR-100 and ImageNet-1k.

1. Introduction

Rapid advancements in remote sensing technology have spurred the development of diverse algorithms designed to efficiently manage extensive Earth observation data. Scene classification, a core component of remote sensing image analysis, is crucial for accurately interpreting land use changes [1,2,3,4], optimizing agricultural practices [5,6,7,8], managing forest resources [9,10], and monitoring hydrological dynamics [11].
Traditional machine learning methods like SIFT [12,13], HOG [14], and LBP [15] have been employed to determine land cover types. While effective, these methods often require expert knowledge and manual feature extraction, making them costly and inefficient. In contrast, recent advances in deep neural networks have significantly enhanced the performance of tasks such as image classification [16,17,18], object recognition and tracking [19,20,21], and person re-identification [22,23,24]. Specifically, deep learning has revolutionized remote sensing image (RSI) scene classification, yielding substantial performance improvements [25,26,27,28,29,30].
Despite their effectiveness, high-performance models often entail significant computational and storage demands, posing substantial challenges for deployment in real-world applications. To mitigate these issues, knowledge distillation has been adopted. This method facilitates the transfer of knowledge from a large, complex model (the teacher) to a smaller, more efficient model (the student). Through this technique, the student model can reach performance levels comparable to those of the teacher by minimizing the Kullback–Leibler divergence between their predictions.
However, RSI scene classification is marked by high inter-class similarity and large intra-class variability, presenting significant challenges to existing knowledge distillation methods. In particular, samples with high inter-class similarity often yield soft labels with diminished confidence in the target class, potentially leading to misguidance of the student model.
Conversely, samples with low inter-class similarity may lead to overly sharp soft target distributions, which suppress the effectiveness of logit knowledge distillation [31,32] (the reason overly sharp soft target distributions can diminish the potential benefits of knowledge distillation is explained in Appendix A). Similar to one-hot labels, these soft labels exhibit low confidence for non-target classes and lack rich category knowledge. This underscores the importance of non-target logits, which may contain valuable “dark knowledge” essential for effective model training, as illustrated by the blue dashed box in Figure 1.
Additionally, as shown in the green dashed box within Figure 1, large intra-class variability can compel the student model to develop poor decision boundaries to accommodate diverse within-class samples, adversely affecting the model’s performance during testing.
To address these challenges, we propose a straightforward and efficient logits-based distillation technique termed instance-level scaling and dynamic margin-alignment knowledge distillation (ISDM), tailored for RSI scene classification. ISDM scales the target class of the teacher's soft label at the instance level via an entropy regularization loss. As shown in the blue solid box within Figure 1, for hard samples with high inter-class similarity, instance-level scaling improves the teacher model's confidence in the target class to prevent inadvertent misguidance of the student. For easy samples with low inter-class similarity, instance-level scaling reduces the excessive confidence in the target class, amplifying the differences among the non-target classes while preserving the information of the target class. Moreover, a dynamic margin is adopted for alignment to accommodate the large intra-class variability, as shown in the green solid box within Figure 1. It dynamically yields more rational and stable decision boundaries based on sample differences.
It should be noted that this article is an expanded version of the paper “Instance-level Scaling and Dynamic Margin-alignment Knowledge Distillation” [33] accepted for presentation at the 7th Chinese Conference on Pattern Recognition and Computer Vision (PRCV 2024). It analyzes the impact of RSI data on existing knowledge distillation methods, introduces a framework for RSI scene classification, and presents additional datasets and experimental validation to demonstrate the effectiveness of the proposed ISDM approach.
In summary, our contributions are as follows:
  • We scale the target class of the teacher's soft label at the instance level via an entropy regularization loss to address the high inter-class similarity inherent in RSI.
  • We introduce dynamic margin alignment for the probabilistic prediction scores, allowing the student model to establish more logical and adaptive decision boundaries, effectively addressing large intra-class variability.
  • We propose an effective logits-based distillation method named ISDM, which achieves state-of-the-art performances across all datasets with minimal additional computational costs.

2. Related Work

2.1. Remote Sensing Image Scene Classification

The primary goal of scene classification is to automatically determine land cover types within remote sensing image patches. Scene classification using supervised learning algorithms can be divided into three categories: low-level, mid-level, and high-level methods. Low-level methods focus on extracting handcrafted features, such as SIFT (scale-invariant feature transform) [12,13], HOG (histogram of oriented gradients) [14], and LBP (local binary patterns) [15], and use classifiers like support vector machines (SVM) [34] or K-nearest neighbors (KNN) [35] for scene classification. Although these methods work well for specific structures and arrangements, they often fall short when dealing with the varied and intricate spatial distributions present in images. Mid-level methods create scene representations by encoding low-level local features. The Bag-of-Visual-Words (BoVW) [36] model is frequently utilized and is often augmented with different low-level descriptors [37], along with Gaussian mixture models (GMMs) [38] and pyramid-based approaches [39,40,41]. Moreover, topic models are used to integrate higher-order spatial relationships between local visual words [42,43,44]. High-level methods leverage deep learning models, which have set new benchmarks in image recognition, speech recognition, semantic segmentation, and remote sensing scene classification. Prominent deep learning models, such as VGG [45], ResNet [16], and WRN [46], have demonstrated superior performance in remote sensing scene classification, surpassing both shallow models and low-level techniques by extracting deep visual features from large-scale training datasets.
However, deep neural networks require substantial computational resources and storage due to their large number of parameters. This makes them impractical for resource-constrained environments like embedded systems or real-time processing. To address this, knowledge distillation techniques can be used to compress models, balancing efficiency with performance.

2.2. Knowledge Distillation

As an effective model compression method, knowledge distillation was first proposed by Hinton et al. [31] and aims to transfer the knowledge of a teacher model to a student model. Based on the type of transferred knowledge, knowledge distillation methods are predominantly divided into two categories: (1) logits-based distillation and (2) feature-based distillation.
Logits-based methods employ only the teacher model's logits to transfer knowledge. The original knowledge distillation [31] transfers knowledge by minimizing the Kullback–Leibler (KL) divergence between the probabilistic prediction scores of the teacher and the student, and has been adopted by many subsequent works due to its simplicity and efficiency. To exploit the teacher's knowledge more fully, much research attention has been devoted to transferring knowledge from the deep intermediate layers [47,48,49,50]. Feature-based methods therefore usually perform well; however, they are often infeasible in practice. In some applications, the internal architecture and intermediate features of the teacher model may be unavailable due to safety and privacy concerns. Moreover, feature-based methods are computationally expensive.
Research on applying knowledge distillation to RSI scene classification is still limited. A key challenge is that existing KD methods struggle with the significant intra-class variability and inter-class similarity present in RSI data. Traditional distillation techniques often generate low-quality soft labels because they do not effectively manage these variations and similarities, which adversely affects the performance of the student model. Additionally, the limited capacity of student models can exacerbate this issue, resulting in overly complex decision boundaries and diminished accuracy. To address these challenges, we propose an effective approach that combines improved soft label techniques with dynamic-alignment distillation methods. This approach aims to enhance both the quality of the soft labels and the decision boundaries in RSI scene classification.

2.3. Optimization of Soft Labels

Soft labels are the predicted probability scores obtained by applying softmax to logits at temperature τ , where the target class reflects the confidence in the correct category and the non-target classes reflect the confidence in other categories. In logit-based methods, student models rely heavily on soft labels for training, making their quality crucial. Enhancing soft label quality has, thus, become a key area of research.
When soft labels are overly confident in the target class, they degenerate into one-hot labels, preventing full utilization of the knowledge in non-target classes. Many studies have addressed this issue [51,52,53,54,55,56,57]. For smoother soft labels, ATS [51] and NTCE-KD [52] reduce the target class values, while SFKD [53] uses attention mechanisms for smoothing. Label smoothing research [54,55,56,57] shows that it provides benefits similar to knowledge distillation (KD) in optimizing soft labels. For instance, [57] applied label smoothing to achieve softer labels, improving student model performance. On the other hand, insufficient confidence in the target class also hinders student training [58], as it misleads the student model into incorrect classifications. CKD [58] addresses this by replacing incorrect soft labels with hard labels, avoiding the transfer of erroneous knowledge.
However, existing studies have primarily focused on either excessive or insufficient confidence in soft labels, often relying on manual optimizations. To address this, we propose an IS module that comprehensively optimizes soft labels at the instance level, ensuring that they capture both the correct knowledge from the target class and rich information from the non-target classes.

3. Materials and Methods

3.1. Preliminaries

In an image classification task with $C$ classes, the logits can be denoted as $Z = [z_1, z_2, \ldots, z_i, \ldots, z_t, \ldots, z_C] \in \mathbb{R}^{1 \times C}$, where $z_i$ represents the logit of the $i$-th class and $z_t$ represents the target class. The probabilistic prediction scores $P$ are defined by applying the softmax function to the logits as follows:

$$p_i = \frac{\exp(z_i/\tau)}{\sum_{j=1}^{C} \exp(z_j/\tau)}, \qquad (1)$$

where $p_i$ is the $i$-th probabilistic prediction score and $\tau$ is the temperature hyperparameter used to scale the soft labels. In original knowledge distillation, the student model is forced to mimic the teacher model's behavior by minimizing the KL divergence between the probabilistic prediction scores of the teacher model and the student model:

$$\mathcal{L}_{KD} = \mathrm{KL}(P^{tea} \,\|\, P^{stu}) = \sum_{i=1}^{C} p_i^{tea} \log(p_i^{tea}/p_i^{stu}), \qquad (2)$$

where $\mathcal{L}_{KD}$ is the knowledge distillation loss, and $p_i^{tea}$ and $p_i^{stu}$ are the $i$-th entries of the probabilistic prediction scores of the teacher model and the student model, respectively. $P^{tea}$ is known as the soft labels.
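To make Equations (1) and (2) concrete, the following is a minimal PyTorch sketch of the temperature-scaled softmax and the classical KD loss. The function name kd_loss and the $\tau^2$ gradient rescaling (standard KD implementation practice, not part of Equation (2)) are our own additions.

```python
import torch
import torch.nn.functional as F

def kd_loss(z_tea: torch.Tensor, z_stu: torch.Tensor, tau: float = 4.0) -> torch.Tensor:
    """KL(P_tea || P_stu) between temperature-scaled predictions.

    z_tea, z_stu: logits of shape (batch, C). The tau**2 factor keeps gradient
    magnitudes comparable across temperatures, as is common in KD code.
    """
    p_tea = F.softmax(z_tea / tau, dim=1)          # soft labels P^tea, Equation (1)
    log_p_stu = F.log_softmax(z_stu / tau, dim=1)  # log P^stu
    return F.kl_div(log_p_stu, p_tea, reduction="batchmean") * tau ** 2

# Usage: loss = kd_loss(teacher(x).detach(), student(x))
```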

3.2. Instance-Level Scaling (IS)

In practical scenarios, traditional knowledge distillation methods often produce low-quality soft labels due to inter-class similarity and intra-class variability in remote sensing imagery. Furthermore, due to their limited capacity, student models may struggle to effectively manage intra-class variability, leading to complex decision boundaries and, consequently, reduced accuracy of the student network.
Therefore, we propose a novel instance-level scaling (IS) method to tailor an improved soft label for each instance, as shown in Figure 2.
Specifically, the IS module, a single-layer perceptron, generates a new target-class logit $\tilde{z}_t$ as follows:

$$\tilde{z}_t = F_{IS}(\theta_{IS}, Z). \qquad (3)$$

Let the optimized logits be $\tilde{Z} = [z_1, z_2, \ldots, z_i, \ldots, \tilde{z}_t, \ldots, z_C] \in \mathbb{R}^{1 \times C}$, where $z_t$ is replaced by $\tilde{z}_t$. We normalize $\tilde{Z}$ through the softmax function and obtain the scaled soft labels $\tilde{P}$:

$$\tilde{p}_i = \mathrm{Softmax}(\tilde{Z}) = \begin{cases} \dfrac{\exp(z_i/\tau)}{\sum_{j=1, j \neq t}^{C} \exp(z_j/\tau) + \exp(\tilde{z}_t/\tau)} & \text{for } i \neq t, \\[2ex] \dfrac{\exp(\tilde{z}_t/\tau)}{\sum_{j=1, j \neq t}^{C} \exp(z_j/\tau) + \exp(\tilde{z}_t/\tau)} & \text{for } i = t, \end{cases} \qquad (4)$$

where $t$ is the target index of a training sample. Different from existing methods that apply the same soft-label scaling strategy to all samples in the dataset, our proposed IS module generates an instance-level target class for each sample via the perceptron. The design of the perceptron loss is therefore crucial to our IS module, and we carefully analyze its optimization objectives from the following two aspects.
For easy samples, the teacher model generates a high value on the target class due to overconfidence. Therefore, directly aligning the output probabilities of the teacher and student models through the KL divergence may depreciate the information about the non-target classes. To tackle this, we aim to increase the entropy of the soft labels to enrich the information about the non-target classes. In the implementation, this is achieved by minimizing the negative entropy of the soft labels output by the IS module:

$$\mathcal{L}_{NE} = \min_{\theta_{IS}} (\tilde{P}^{tea} \log \tilde{P}^{tea}) = \min_{\theta_{IS}} \sum_{i=1}^{C} \tilde{p}_i^{tea} \log \tilde{p}_i^{tea}, \qquad (5)$$

where $\theta_{IS}$ denotes the parameters of the IS module.
For hard samples, the teacher model tends to generate a relatively small value on the target class, which is insufficient for the student model to obtain discriminative representations. To tackle this, the cross-entropy loss is utilized as the optimization objective of the IS module to further enhance the discrimination of its soft labels:

$$\mathcal{L}_{CE} = \min_{\theta_{IS}} (-Y \log \tilde{P}^{tea}) = \min_{\theta_{IS}} \left( -\sum_{i=1}^{C} y_i \log \tilde{p}_i^{tea} \right), \qquad (6)$$

where $y_i$ is the one-hot representation of the ground truth:

$$y_i = \begin{cases} 0 & \text{for } i \neq t, \\ 1 & \text{for } i = t. \end{cases} \qquad (7)$$
To address both of the above situations, we optimize the soft labels via an entropy regularization loss consisting of $\mathcal{L}_{NE}$ and $\mathcal{L}_{CE}$:

$$\mathcal{L}_{IS} = \mathcal{L}_{NE} + \omega \mathcal{L}_{CE} = \sum_{i=1}^{C} \tilde{p}_i^{tea} \log \tilde{p}_i^{tea} - \omega \sum_{i=1}^{C} y_i \log \tilde{p}_i^{tea}, \qquad (8)$$

where $\omega$ is the balance weight.
In summary, we design the entropy regularization loss to obtain a better target-class logit $\tilde{z}_t$ and then obtain an optimized soft label $\tilde{P}$ via the softmax function.
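The following is a sketch of the IS module under our reading of this section: a single-layer perceptron maps the teacher's logits to a new target-class logit (Equations (3) and (4)), trained with the entropy regularization loss of Equation (8). The class and function names (ISModule, is_loss) are illustrative, not taken from the paper's code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ISModule(nn.Module):
    """Single-layer perceptron F_IS mapping teacher logits Z to a scalar z_t~."""
    def __init__(self, num_classes: int):
        super().__init__()
        self.fc = nn.Linear(num_classes, 1)

    def forward(self, z_tea: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
        z_t_new = self.fc(z_tea)                            # (batch, 1) new target logit
        mask = F.one_hot(target, z_tea.size(1)).bool()      # marks the target entry
        # Optimized logits Z~: target entry replaced, non-target entries kept
        return torch.where(mask, z_t_new.expand_as(z_tea), z_tea)

def is_loss(z_opt: torch.Tensor, target: torch.Tensor,
            tau: float = 4.0, omega: float = 0.1) -> torch.Tensor:
    """L_IS = L_NE + omega * L_CE (Equation (8)), on the scaled soft labels."""
    p = F.softmax(z_opt / tau, dim=1)                                # P~ via Equation (4)
    l_ne = (p * p.clamp_min(1e-12).log()).sum(dim=1).mean()         # negative entropy, Eq. (5)
    l_ce = F.cross_entropy(z_opt / tau, target)                      # -sum y_i log p_i, Eq. (6)
    return l_ne + omega * l_ce
```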

3.3. Dynamic Margin-Alignment (DM)

Benefiting from the IS module, the refined soft labels from the teacher model can balance the effect of the target and non-target classes for each sample, thereby mitigating the impact of high inter-class similarity to some extent. However, high intra-class variability can still cause smaller student models to struggle to fit most samples effectively. To address this issue, we propose the dynamic margin-alignment (DM) module (see Figure 2), which generates a specific margin for each sample to create a more reasonable boundary. Specifically, for easy samples, the margin can be large to ensure sufficient discrimination by the student model, while for hard samples that are difficult to classify, the margin is kept small (or negative) to prevent overfitting.

3.3.1. Margin-Alignment for KD

For better understanding, we first introduce the margin-alignment module, which explicitly encourages intra-class compactness and inter-class separability and produces a reasonable decision boundary for the target class with a marginal region.
Firstly, we introduce a hyperparameter $\Delta$ that sets the marginal region and subtract $\Delta$ from the target class within the logits while keeping the non-target classes unchanged:

$$\hat{z}_i^{stu} = \begin{cases} z_i^{stu} & \text{for } i \neq t, \\ z_t^{stu} - \Delta & \text{for } i = t. \end{cases} \qquad (9)$$

The margin-enhanced probabilistic prediction scores of the student model $\hat{P}^{stu}$ are then calculated via the softmax function:

$$\hat{P}^{stu} = \mathrm{Softmax}(\hat{Z}^{stu}), \qquad (10)$$

where $\hat{z}_i^{stu}$ is the logit value of the $i$-th class and $\hat{P}^{stu}$ is the resulting probability vector.
Secondly, we align the margin-enhanced student probabilistic prediction scores $\hat{P}^{stu}$ with the optimized soft labels $\tilde{P}^{tea}$ via the KL divergence:

$$\mathcal{L}_{KD} = \mathrm{KL}(\tilde{P}^{tea} \,\|\, \hat{P}^{stu}) = \sum_{i=1}^{C} \tilde{p}_i^{tea} \log(\tilde{p}_i^{tea}/\hat{p}_i^{stu}). \qquad (11)$$
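A minimal PyTorch sketch of Equations (9)–(11), assuming the optimized teacher soft labels $\tilde{P}^{tea}$ are already available. The name margin_kd_loss is ours; delta may be a fixed scalar (this subsection) or a per-sample tensor (Section 3.3.2).

```python
import torch
import torch.nn.functional as F

def margin_kd_loss(z_stu, p_tea_opt, target, delta, tau=4.0):
    """KL(P~_tea || P^_stu) with a margin subtracted from the student's target logit.

    z_stu: student logits (batch, C); p_tea_opt: optimized soft labels (batch, C);
    delta: scalar (fixed margin) or tensor of shape (batch,) (dynamic margin).
    """
    delta = torch.as_tensor(delta, dtype=z_stu.dtype, device=z_stu.device).reshape(-1, 1)
    mask = F.one_hot(target, z_stu.size(1)).bool()
    z_hat = torch.where(mask, z_stu - delta, z_stu)       # Equation (9)
    log_p_hat = F.log_softmax(z_hat / tau, dim=1)         # Equation (10), in log space
    return F.kl_div(log_p_hat, p_tea_opt, reduction="batchmean") * tau ** 2  # Equation (11)
```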

3.3.2. Dynamic Adjustment of Margin

Taking into account the influence of sample difficulty on the decision boundary, we adaptively adjust the hyperparameter $\Delta$ to obtain a dynamic margin.
In order to clearly explore the relationship between the margin and sample difficulty, we conduct a geometric analysis of a binary classification task, focusing on both easy and hard samples. Consider the features $x$ of a sample belonging to class 1. The fixed alignment criterion ensures that $z_1 - \Delta > z_2$ (in Equation (9)), where $z_i = W_i^T x$ and $W_i$ denotes the classifier parameters for class $i$; that is,

$$\|W_2\| \|x\| \cos(\theta_2) < \|W_1\| \|x\| \cos(\theta_1) - \Delta = \|W_1\| \|x\| \left( \cos(\theta_1) - \frac{\Delta}{\|W_1\| \|x\|} \right). \qquad (12)$$

For simplicity, we analyze the scenario where $\|W_1\| = \|W_2\|$, as shown in Figure 3. Thus, the marginal region $m$ can be calculated as follows:

$$m = \frac{\Delta}{\|W_1\| \|x\|} < \cos(\theta_1) - \cos(\theta_2). \qquad (13)$$
Specifically, for easy samples, a fixed margin results in a decision boundary as shown in Figure 3a: the model can classify them correctly but lacks sufficient discrimination. Thus, the margin needs to be increased to further reduce the intra-class distance and expand the inter-class distance, which yields a more stringent decision boundary (Figure 3b), expressed as

$$\cos(\theta_1) - m > \cos(\theta_1) - m' > \cos(\theta_2), \qquad (14)$$

where $m' > m$ denotes the enlarged margin.
For hard samples, the model struggles with accurate classification. Keeping the margin fixed in such cases may lead to overfitting:

$$\cos(\theta_1) - m < \cos(\theta_1) < \cos(\theta_2), \qquad (15)$$

resulting in overlapping decision boundaries, as depicted in Figure 3c. Therefore, it becomes necessary to set the margin negative for hard samples. This adjustment encourages the model to sacrifice the accuracy of certain hard samples,

$$\cos(\theta_1) - m > \cos(\theta_2), \qquad (16)$$

ultimately enhancing overall accuracy during the testing phase, as illustrated in Figure 3d.
In summary, since $m \propto \Delta$, increasing $\Delta$ for easy samples corresponds to $\tilde{z}_t$ being greater than $z_t$; for difficult samples, relaxing $\Delta$ corresponds to $\tilde{z}_t$ approaching or falling below $z_t$.
Because the target-class value reflects the difficulty of a sample [32], we use its variation as the margin value. In the implementation, $\Delta$ (in Equation (9)) is adjusted dynamically as follows:

$$\Delta_j = \tilde{z}_{t,j} - z_{t,j}, \qquad (17)$$

where $t$ is the target-class index and $j$ is the sample index.
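Continuing the sketches above, Equation (17) amounts to a one-line computation. The detach calls reflect our reading of Algorithm 1, where $\theta_{IS}$ is updated only by $\mathcal{L}_{IS}$, so the student loss should not backpropagate into the IS module.

```python
# z_tea: teacher logits, z_opt: IS-optimized logits, z_stu: student logits
idx = torch.arange(z_tea.size(0))
delta = (z_opt[idx, target] - z_tea[idx, target]).detach()   # Delta_j = z~_t,j - z_t,j
p_tea_opt = F.softmax(z_opt.detach() / tau, dim=1)           # P~_tea as fixed targets
loss_kd = margin_kd_loss(z_stu, p_tea_opt, target, delta, tau=tau)
```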
The overall workflow of our proposed ISDM can be found in Algorithm 1.
Algorithm 1 Instance-level scaling and dynamic margin-alignment distillation
Require: Dataset $D$, teacher model $F_T$
Ensure: Student model $F_S$, IS module $F_{IS}$
 1: Initialize student model $F_S$ and IS module $F_{IS}$ with parameters $\theta_S$, $\theta_{IS}$
 2: Set hyperparameters: learning rate $\eta$, batch size $B$, number of epochs $E$, cross-entropy weight $\alpha$, KD weight $\beta$, entropy weight $\omega$
 3: for $epoch = 1$ to $E$ do
 4:   Shuffle dataset $D$
 5:   for each batch $(x, y)$ in $D$ of size $B$ do
 6:     % Forward pass
 7:     $Z^{tea} \leftarrow F_T(x)$, $Z^{stu} \leftarrow F_S(x)$
 8:     % Instance-level scaling
 9:     $\tilde{z}_t^{tea} \leftarrow F_{IS}(Z^{tea})$
10:     $\tilde{Z}^{tea} \leftarrow \mathrm{Concat}(Z_{non}^{tea}, \tilde{z}_t^{tea})$
11:     $\tilde{P}^{tea} \leftarrow \mathrm{Softmax}(\tilde{Z}^{tea})$, $P^{stu} \leftarrow \mathrm{Softmax}(Z^{stu})$
12:     % Dynamic margin-alignment
13:     $\Delta \leftarrow \tilde{z}_t^{tea} - z_t^{tea}$
14:     $\hat{z}_t^{stu} \leftarrow z_t^{stu} - \Delta$
15:     $\hat{Z}^{stu} \leftarrow \mathrm{Concat}(Z_{non}^{stu}, \hat{z}_t^{stu})$
16:     $\hat{P}^{stu} \leftarrow \mathrm{Softmax}(\hat{Z}^{stu})$
17:     % Compute losses and update parameters
18:     $\mathcal{L}_{IS} \leftarrow \mathcal{L}_{NE} + \omega \mathcal{L}_{CE}$ (Equation (8))
19:     $\mathcal{L}_{CE} \leftarrow \mathrm{CE}(P^{stu}, y)$, $\mathcal{L}_{KD} \leftarrow \mathrm{KL}(\tilde{P}^{tea} \,\|\, \hat{P}^{stu})$
20:     $\mathcal{L}_{total} \leftarrow \alpha \mathcal{L}_{CE} + \beta \mathcal{L}_{KD}$
21:     $\theta_S \leftarrow \theta_S - \eta \nabla_{\theta_S} \mathcal{L}_{total}$, $\theta_{IS} \leftarrow \theta_{IS} - \eta \nabla_{\theta_{IS}} \mathcal{L}_{IS}$
22:   end for
23: end for
24: Return trained student model $F_S$
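Below is a condensed, runnable rendering of Algorithm 1 as a single PyTorch training step, wiring together the ISModule, is_loss, and margin_kd_loss sketches from above. The update scheme (one backward pass over the two disjoint loss graphs) is our implementation choice, not necessarily the authors' exact code.

```python
import torch
import torch.nn.functional as F

def train_step(teacher, student, is_module, x, y, opt_s, opt_is,
               tau=4.0, alpha=1.0, beta=4.0, omega=0.1):
    with torch.no_grad():
        z_tea = teacher(x)                          # frozen teacher forward (line 7)
    z_stu = student(x)

    # Instance-level scaling (lines 9-11)
    z_opt = is_module(z_tea, y)
    loss_is = is_loss(z_opt, y, tau=tau, omega=omega)

    # Dynamic margin-alignment (lines 13-16); teacher branch detached
    idx = torch.arange(x.size(0))
    delta = (z_opt[idx, y] - z_tea[idx, y]).detach()
    p_tea_opt = F.softmax(z_opt.detach() / tau, dim=1)
    loss_kd = margin_kd_loss(z_stu, p_tea_opt, y, delta, tau=tau)

    # Losses and parameter updates (lines 18-21)
    loss_total = alpha * F.cross_entropy(z_stu, y) + beta * loss_kd
    opt_s.zero_grad(); opt_is.zero_grad()
    (loss_total + loss_is).backward()   # the two graphs are disjoint; one backward suffices
    opt_s.step(); opt_is.step()
    return loss_total.item(), loss_is.item()
```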

4. Results

4.1. Datasets

We evaluate our method on three popular RSI scene classification benchmark datasets and two widely-used general image classification benchmark datasets.

4.1.1. NWPU-RESISC45 Dataset

The NWPU-RESISC45 dataset [59] is a comprehensive resource for remote sensing image classification, featuring 31,500 images from over 100 countries. It includes 45 scene categories with 700 images per category, each sized at 256 × 256 pixels in RGB format. The dataset's challenge lies in its varying spatial resolutions (about 30 m to 0.2 m per pixel), which can result in significant inter-class similarities, necessitating advanced classification methods.

4.1.2. Aerial Image Dataset (AID)

The AID dataset [60] contains 10,000 high-resolution aerial images across 30 scene types. Each type is represented by 200 to 400 images at a resolution of 600 × 600 pixels in RGB format. The images have varying spatial resolutions (8 m to 0.5 m per pixel), adding complexity and relevance for testing sophisticated aerial image classification algorithms.

4.1.3. UC Merced Land-Use Dataset (UCM)

The UCM dataset [61] comprises 2100 images across 21 land-use categories, with 100 images per category. The images are uniformly sized at 256 × 256 pixels and are in RGB format. With a consistent spatial resolution of 30 cm per pixel, this dataset simplifies analysis while providing detailed land-use patterns for research purposes.

4.2. Settings and Implementation Details

For the three remote sensing (RS) datasets, ResNet34 is used as the teacher model and ResNet18 as the student model. We apply two data splitting ratios: one with 80% of the data for training and 20% for testing, and another with 50% for both training and testing, to evaluate the model performance with less training data. The top-1 and top-5 accuracy on the test set are used as evaluation metrics.
We compare our methods with various SOTA methods, including logit-based methods, such as KD [31], DKD [32], MLLD [62], LS [63], and SDD [64], and feature-based methods, such as FitNet [47], AT [50], RKD [65], OFD [66], CRD [67], ReviewKD [49], and CAT [68].
We set the training batch size to 64 and the testing batch size to 128. The temperature parameter is set to 4. The initial learning rate is 0.1, with a total of 200 epochs; the learning rate is reduced by a factor of 10 at epochs 60, 120, and 160. We use the SGD optimizer with a momentum of 0.9 and a weight decay of $5 \times 10^{-4}$. The base weight $\alpha$ for the cross-entropy loss is set to 1, and the base weight $\beta$ for the knowledge distillation loss is set to 4. The base balance weight $\omega$ for the entropy regularization loss, as defined in Equation (8), is set to 0.1. Experiments are conducted using Python 3.7 with PyTorch on an NVIDIA V100 GPU.
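These settings map onto standard PyTorch components as sketched below; train_one_epoch is a hypothetical placeholder for the per-epoch loop, and student stands for the ResNet18 model.

```python
import torch

optimizer = torch.optim.SGD(student.parameters(), lr=0.1,
                            momentum=0.9, weight_decay=5e-4)
scheduler = torch.optim.lr_scheduler.MultiStepLR(
    optimizer, milestones=[60, 120, 160], gamma=0.1)   # x0.1 decay at these epochs

for epoch in range(200):
    train_one_epoch(student, optimizer)   # one pass with batch size 64 (placeholder)
    scheduler.step()
```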

4.3. Main Results

In our experiments across the NWPU-RESISC45, AID, and UCM datasets, the ISDM method consistently outperforms other techniques, using ResNet34 as the teacher network and ResNet18 as the student network. On the NWPU-RESISC45 dataset presented in Table 1, ISDM achieves the highest top-1 and top-5 accuracy across both split ratios, demonstrating its robustness in knowledge distillation.
Similarly, on the AID dataset in Table 2, ISDM surpasses all competing methods, including those with feature-based and logits-based distillation, with significant improvements in top-1 accuracy (95.55%) and top-5 accuracy (99.75%) under the 8:2 split ratio.
The results are even more pronounced on the UCM dataset in Table 3, where ISDM sets new benchmarks with a top-1 accuracy of 92.62% and a top-5 accuracy of 99.76%, surpassing the second-best method by 1.43% in top-1 accuracy. This consistently superior performance across multiple datasets underscores ISDM's effectiveness in distilling knowledge.

4.4. Ablation Study

The results of the ablation experiments are shown in Table 4. The first row presents the results of the full ISDM. Removing the IS component causes a performance decrease of 0.65% (see ① and ②), removing the DM component results in a drop of 0.73% (see ① and ③), and removing both IS and DM leads to a significant decrease of 2.79% (see ① and ④).
Under equivalent conditions, we also compare DM with FM (fixed margin-alignment). We fix $\Delta$ (see Equation (9)) to 3, 5, and 7 in FM and contrast the best result ($\Delta = 5$) against DM (see ① and ⑤). The results indicate that DM outperforms FM, confirming its superiority.

4.5. Sensitivity of Hyperparameters

The selection of loss weights α and β is essential for effectively balancing the cross-entropy loss L C E and distillation loss L K D . Based on previous studies [31,32,62,64], we keep α constant at 1.0. To find the best value for β , we conduct a systematic grid search, testing various options: { 1.0 , 2.0 , 4.0 , 6.0 , 8.0 } . We choose the option that results in the highest accuracy, as shown in Table 5. Notably, β = 4.0 achieves the highest accuracy, indicating that increasing β from 1.0 to 4.0 improves performance, but further increases lead to smaller gains. This highlights the need for careful tuning of β to optimize knowledge distillation.
In Table 6, we observe that with an 8:2 dataset split, the model’s accuracy peaks at α = 1.0 . For a 5:5 split, the best performance is at α = 0.5 . Keeping α below 1.0 generally maintains good performance, but it drops sharply if α exceeds 1.0. This suggests that higher α values may overly rely on cross-entropy loss, especially with limited data in a 5:5 split, raising the risk of overfitting. Hence, setting α to 1.0 is a sensible choice.
The results in Table 7 indicate that the parameter ω achieves optimal performance at a value of 0.1, with the model attaining accuracies of 92.62% and 88.29% for the 8:2 and 5:5 split ratios, respectively. Although slight variations in accuracy are observed across different ω values, the overall performance remains relatively stable. This suggests a degree of robustness in the performance across various settings of ω .

4.6. Motivation Validation

To better understand the characteristics of the RS dataset, we performed experiments to measure feature similarity between categories in the UCM dataset. We trained ResNet18 models on the UCM dataset using different methods, calculated the average logits of each category to serve as category centers, and computed the cosine similarity between each sample's logits and all category centers. Figure 4b–d show the inter-class similarity results for the UCM dataset using models trained with Vanilla ("Vanilla" denotes the standard ResNet18 trained with only the cross-entropy loss), KD, and ISDM, respectively. Figure 4a shows the results of Vanilla on the CIFAR-100 dataset; for comparison, we only used data from 21 of its 100 classes.
Comparing Figure 4a,b, the RS dataset has more complex category-similarity patterns than CIFAR-100. Some categories, like 12, 19, and 20, are quite similar to other categories, while others, like 5 and 7, differ more from the rest. In contrast, CIFAR-100 has a more balanced similarity across categories, which helps the model learn features better. As shown in Figure 4b–d, KD can somewhat reduce the negative effects of the RS dataset, but ISDM largely removes these effects, coming close to the results seen with CIFAR-100.
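The similarity measurement described above can be sketched as follows, assuming logits (N × C) and labels (N) are collected from a trained model on the test set; the function name and implementation details are ours.

```python
import torch
import torch.nn.functional as F

def interclass_similarity(logits: torch.Tensor, labels: torch.Tensor,
                          num_classes: int) -> torch.Tensor:
    # Category centers: mean logits of each class, shape (C, C)
    centers = torch.stack([logits[labels == c].mean(dim=0)
                           for c in range(num_classes)])
    # Cosine similarity of every sample's logits to every center, shape (N, C)
    sims = F.cosine_similarity(logits.unsqueeze(1), centers.unsqueeze(0), dim=2)
    # Average over the samples of each class -> C x C inter-class similarity matrix
    return torch.stack([sims[labels == c].mean(dim=0) for c in range(num_classes)])
```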

4.7. Effect of Instance-Level Scaling

To further evaluate the effects of our proposed IS module, Figure 5 visualizes the soft labels processed by the IS module for both easy and hard samples, respectively.
Figure 5a illustrates the scaling process of soft labels for easy images in the beach and tennis-court classes. The second column shows the values of the target class before and after scaling for an easy sample. The third column shows the original soft labels; the teacher generates a high value on the target class due to its low inter-class similarity, thus depreciating the information about the non-target classes. The last column shows the soft labels processed by our IS module: the overconfidence in the target class is relieved and the information about the non-target classes is enhanced.
Figure 5b illustrates the scaling process of soft labels for hard images in the sparse-residential and baseball-diamond classes. The third column illustrates the raw soft labels, which have a small target-class value due to their high inter-class similarity. The last column shows the soft labels processed by our IS module, demonstrating how the insufficiency of the target class is alleviated.

4.8. Effect of Dynamic Margin-Alignment

To assess the effectiveness of DM, we employ t-SNE visualization. Figure 6a,b showcase features learned by KD and ISDM, respectively. Figure 6a shows that with KD, the features of most samples are mixed together and hard to distinguish, with more complex decision boundaries and smaller inter-class distances. As demonstrated in Figure 6b, the dynamic margin sets larger and negative margins for easy and hard samples, respectively. It achieves more dispersed inter-class distances for the vast majority of samples, resulting in decision boundaries with clearer marginal regions.
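A minimal sketch of such a t-SNE visualization, assuming feats is an (N × D) NumPy array of student features and labels their class indices; the scikit-learn and matplotlib parameters are illustrative.

```python
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE

emb = TSNE(n_components=2, init="pca", random_state=0).fit_transform(feats)  # (N, 2)
plt.scatter(emb[:, 0], emb[:, 1], c=labels, s=5, cmap="tab20")               # color by class
plt.title("t-SNE of student features")
plt.show()
```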

4.9. Distillation Fidelity

To provide a comprehensive understanding of distillation fidelity, we follow [32,49] and present our visualizations in Figure 7. Focusing on the ResNet34–ResNet18 pair trained on the UCM dataset, we calculate the absolute distance between the correlation matrices of the teacher and student models. Our findings show that ISDM enhances the alignment of the student model's predictions with those of the teacher: ISDM yields a maximum difference of 1.71 and a mean difference of 0.35, whereas KD yields a maximum difference of 1.95 and a mean difference of 0.42. The lower difference metrics indicate that ISDM achieves better teacher–student alignment than KD.
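A sketch of this fidelity measurement, following our reading of the protocol in [32,49]: p_tea and p_stu are (N × C) softmax predictions on the test set, and the correlation definition below is an assumption rather than the cited papers' exact formula.

```python
import torch

def correlation_matrix(p: torch.Tensor) -> torch.Tensor:
    """(C, C) inter-class correlation of a prediction matrix p of shape (N, C)."""
    return p.t() @ p / p.size(0)

diff = (correlation_matrix(p_tea) - correlation_matrix(p_stu)).abs()
print(f"max diff {diff.max():.2f}, mean diff {diff.mean():.2f}")
```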

4.10. Generalization Exploration

In evaluating the generalization of our ISDM across different datasets, we observed its robust performance on two general image classification datasets, beyond its initial remote sensing application.
Our method consistently outperforms or matches state-of-the-art methods across various teacher–student pairs shown in Table 8, demonstrating competitive results against both feature-based and logits-based methods. ISDM shows notable improvements over other approaches in most configurations, especially in the context of ResNet-56 to ResNet-20 and ResNet-32×4 to ResNet-8×4. This suggests that ISDM not only generalizes well within the CIFAR-100 dataset but also performs comparably to, or better than, existing methods that leverage intermediate feature representations and logits.
On the large-scale ImageNet-1k dataset shown in Table 9, ISDM continues to exhibit superior performance. It outperforms both feature-based and logits-based methods across different teacher–student configurations, including ResNet34 to ResNet18 and ResNet50 to MobileNetV2. This consistent performance across datasets of varying sizes and complexities indicates that ISDM’s effectiveness extends well beyond the original remote sensing tasks, highlighting its strong generalization capability.
In summary, these results confirm that ISDM is not only effective in its primary domain but also shows impressive versatility and robustness in other diverse and challenging scenarios, reinforcing its broad applicability.

4.11. Training Efficiency

We evaluate the training overhead and accuracy of SOTA methods, as shown in Figure 8. Our approach improves KD by enhancing the soft labels and the alignment strategy; consequently, it exhibits a time overhead similar to KD, providing a substantial advantage over other methods while achieving the highest model performance.
Furthermore, the ISDM method introduces only a perceptron with fewer than 0.01 M extra parameters, which is negligible compared to the trainable parameters during distillation. Notably, this perceptron is used exclusively to optimize the soft labels during training; it does not participate in the student's inference and therefore incurs no deployment overhead.

5. Conclusions

In this paper, we addressed the challenge of high inter-class similarity and large intra-class variance in remote sensing datasets by proposing a distillation method named ISDM. This method optimizes teacher soft labels through instance-level scaling and employs a margin-alignment strategy during the distillation process to enhance model generalization. The ISDM method showed significant improvements on the NWPU-RESISC45, AID, and UCM datasets while maintaining lower costs.
Additionally, we validated the effectiveness of our approach through extensive experiments and demonstrated its generalizability on standard datasets such as CIFAR-100 and ImageNet-1k. We hope that this paper will contribute to advancements in scene classification for remote sensing images and improvements in logits-based distillation methods.

Author Contributions

Conceptualization, C.L. and X.T.; methodology, C.L.; software, C.L.; validation, C.L. and X.T.; formal analysis, C.L.; investigation, C.L.; resources, L.L.; data curation, C.L.; writing—original draft preparation, C.L.; writing—review and editing, X.T. and Y.D.; visualization, C.L.; supervision, L.L.; project administration, L.L.; funding acquisition, L.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China (No. 62376282).

Data Availability Statement

The five datasets (NWPU-RESISC45, AID, UCM, CIFAR-100, and ImageNet-1k) used to illustrate and evaluate the proposed method are publicly available.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
RSI	Remote sensing image
KD	Knowledge distillation
ISDM	Instance-level scaling and dynamic margin-alignment knowledge distillation
KL	Kullback–Leibler
SIFT	Scale-invariant feature transform
HOG	Histogram of oriented gradients

Appendix A

The success of knowledge distillation hinges on the quality of the soft labels used. When these soft target distributions are overly sharp, they may inadvertently stifle the potential benefits of distillation. In this discussion, we will explore this issue from various perspectives, highlighting how the entropy of soft labels and the non-target class elements within them influence the effectiveness of distillation.
In the paper “Distilling the Knowledge in a Neural Network”, Hinton points out that soft labels with higher entropy provide more information, while sharper soft label distributions have lower entropy, yielding less informative signals for distillation [31]. This is encapsulated in the following description:
“When the soft targets have high entropy, they provide much more information per training case than hard targets and much less variance in the gradient between training cases, so the small model can often be trained on much less data than the original cumbersome model and using a much higher learning rate.”
Furthermore, Hinton observed that using softmax outputs as soft labels that are too sharp hampers the model’s performance. To address this, he proposed a temperature parameter to smooth the soft labels, demonstrating that smoother soft labels can enhance the performance of knowledge distillation. This further substantiates the assertion that overly sharp soft label distributions can indeed hinder the potential of knowledge distillation. The relevant description states:
“For tasks like MNIST in which the cumbersome model almost always produces the correct answer with very high confidence, much of the information about the learned function resides in the ratios of very small probabilities in the soft targets. For example, one version of a 2 may be given a probability of $10^{-6}$ of being a 3 and $10^{-9}$ of being a 7, whereas for another version it may be the other way around. This is valuable information that defines a rich similarity structure over the data (i.e., it says which 2’s look like 3’s and which look like 7’s), but it has very little influence on the cross-entropy cost function during the transfer stage because the probabilities are so close to zero. Our more general solution, called ‘distillation,’ is to raise the temperature of the final softmax until the cumbersome model produces a suitably soft set of targets. We then use the same high temperature when training the small model to match these soft targets. We show later that matching the logits of the cumbersome model is actually a special case of distillation.”
In the paper “Decoupled Knowledge Distillation”, the authors demonstrate that a significant component of the effectiveness of knowledge distillation lies in the non-target class elements of the soft labels [32]. The alignment weight for the non-target class elements is given by $(1 - p_t)$, where $p_t$ is the probability of the target class. Thus, as the soft labels become sharper, the probability $p_t$ increases, resulting in a smaller alignment weight for the non-target class elements and consequently diminishing the effectiveness of knowledge distillation. This is articulated in the following description:
“To revitalize logits-based methods, we start this work by delving into the mechanism of KD. Firstly, we divide a classification prediction into two levels: (1) a binary prediction for the target class and all the non-target classes, and (2) a multi-category prediction for each non-target class. Based on this, we reformulate the classical KD loss into two parts, as shown in Figure 1b. One is a binary logit distillation for the target class and the other is a multi-category logit distillation for non-target classes. For simplification, we respectively name them as target classification knowledge distillation (TCKD) and non-target classification knowledge distillation (NCKD). The reformulation allows us to study the effects of the two parts independently. TCKD transfers knowledge via binary logit distillation, which means only the prediction of the target class is provided while the specific prediction of each non-target class is unknown. A reasonable hypothesis is that TCKD transfers knowledge about the ‘difficulty’ of training samples, i.e., the knowledge describes how difficult it is to recognize each training sample. To validate this, we design experiments from three aspects to increase the ‘difficulty’ of training data, i.e., stronger augmentation, noisier labels, and inherently challenging datasets. NCKD only considers the knowledge among non-target logits. Interestingly, we empirically prove that applying NCKD alone achieves comparable or even better results than classical KD, indicating the vital importance of knowledge contained in non-target logits, which could be the prominent ‘dark knowledge’.”
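Both observations can be reproduced with a few lines of PyTorch: raising the temperature softens a sharp distribution, and the NCKD weight $(1 - p_t)$ shrinks as the soft label sharpens. The logits below are made up for illustration.

```python
import torch
import torch.nn.functional as F

z = torch.tensor([9.0, 3.0, 1.0, 0.5])   # sharp teacher logits; class 0 is the target
for tau in (1.0, 4.0):
    p = F.softmax(z / tau, dim=0)
    entropy = -(p * p.log()).sum()
    print(f"tau={tau}: p_t={p[0]:.3f}, entropy={entropy:.3f}, NCKD weight={1 - p[0]:.3f}")
# tau=1 gives p_t close to 1 (near one-hot, tiny NCKD weight);
# tau=4 moves probability mass to the non-target classes and raises the entropy.
```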

References

  1. Estoque, R.C.; Murayama, Y.; Akiyama, C.M. Pixel-based and object-based classifications using high- and medium-spatial-resolution imageries in the urban and suburban landscapes. Geocarto Int. 2015, 30, 1113–1129. [Google Scholar] [CrossRef]
  2. Helber, P.; Bischke, B.; Dengel, A.; Borth, D. Eurosat: A novel dataset and deep learning benchmark for land use and land cover classification. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2019, 12, 2217–2226. [Google Scholar] [CrossRef]
  3. Zhang, X.; Wang, Q.; Chen, G.; Dai, F.; Zhu, K.; Gong, Y.; Xie, Y. An object-based supervised classification framework for very-high-resolution remote sensing images using convolutional neural networks. Remote Sens. Lett. 2018, 9, 373–382. [Google Scholar] [CrossRef]
  4. Chen, G.; Zhang, X.; Wang, Q.; Dai, F.; Gong, Y.; Zhu, K. Symmetrical dense-shortcut deep fully convolutional networks for semantic segmentation of very-high-resolution remote sensing images. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2018, 11, 1633–1644. [Google Scholar] [CrossRef]
  5. Gualtieri, J.A.; Cromp, R.F. Support vector machines for hyperspectral remote sensing classification. In Proceedings of the 27th AIPR Workshop: Advances in Computer-Assisted Recognition, Washington, DC, USA, 14–16 October 1998; SPIE: Bellingham, WA, USA, 1999; Volume 3584, pp. 221–232. [Google Scholar]
  6. Duro, D.C.; Franklin, S.E.; Dubé, M.G. A comparison of pixel-based and object-based image analysis with selected machine learning algorithms for the classification of agricultural landscapes using SPOT-5 HRG imagery. Remote Sens. Environ. 2012, 118, 259–272. [Google Scholar] [CrossRef]
  7. Cheriyadat, A.M. Unsupervised feature learning for aerial scene classification. IEEE Trans. Geosci. Remote Sens. 2013, 52, 439–451. [Google Scholar] [CrossRef]
  8. Peña, J.M.; Gutiérrez, P.A.; Hervás-Martínez, C.; Six, J.; Plant, R.E.; López-Granados, F. Object-based image classification of summer crops with machine learning methods. Remote Sens. 2014, 6, 5019–5041. [Google Scholar] [CrossRef]
  9. Lu, D.; Li, G.; Moran, E.; Kuang, W. A comparative analysis of approaches for successional vegetation classification in the Brazilian Amazon. Giscience Remote Sens. 2014, 51, 695–709. [Google Scholar] [CrossRef]
  10. De Chant, T.; Kelly, M. Individual object change detection for monitoring the impact of a forest pathogen on a hardwood forest. Photogramm. Eng. Remote Sens. 2009, 75, 1005–1013. [Google Scholar] [CrossRef]
  11. Dribault, Y.; Chokmani, K.; Bernier, M. Monitoring seasonal hydrological dynamics of minerotrophic peatlands using multi-date GeoEye-1 very high resolution imagery and object-based classification. Remote Sens. 2012, 4, 1887–1912. [Google Scholar] [CrossRef]
  12. Lowe, D.G. Distinctive image features from scale-invariant keypoints. Int. J. Comput. Vis. 2004, 60, 91–110. [Google Scholar] [CrossRef]
  13. Yang, Y.; Newsam, S. Comparing SIFT descriptors and Gabor texture features for classification of remote sensed imagery. In Proceedings of the 2008 15th IEEE International Conference on Image Processing, San Diego, CA, USA, 12–15 October 2008; IEEE: Piscataway, NJ, USA, 2008; pp. 1852–1855. [Google Scholar]
  14. Dalal, N.; Triggs, B. Histograms of oriented gradients for human detection. In Proceedings of the 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR’05), San Diego, CA, USA, 20–25 June 2005; IEEE: Piscataway, NJ, USA, 2005; Volume 1, pp. 886–893. [Google Scholar]
  15. Ojala, T.; Pietikainen, M.; Maenpaa, T. Multiresolution gray-scale and rotation invariant texture classification with local binary patterns. IEEE Trans. Pattern Anal. Mach. Intell. 2002, 24, 971–987. [Google Scholar] [CrossRef]
  16. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778. [Google Scholar]
  17. Ma, N.; Zhang, X.; Zheng, H.T.; Sun, J. Shufflenet v2: Practical guidelines for efficient cnn architecture design. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018; pp. 116–131. [Google Scholar]
  18. Hu, J.; Shen, L.; Sun, G. Squeeze-and-excitation networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 7132–7141. [Google Scholar]
  19. Lan, L.; Wang, X.; Zhang, S.; Tao, D.; Gao, W.; Huang, T.S. Interacting tracklets for multi-object tracking. IEEE Trans. Image Process. 2018, 27, 4585–4597. [Google Scholar] [CrossRef]
  20. Lan, L.; Wang, X.; Hua, G.; Huang, T.S.; Tao, D. Semi-online multi-people tracking by re-identification. Int. J. Comput. Vis. 2020, 128, 1937–1955. [Google Scholar] [CrossRef]
  21. Tan, H.; Zhang, X.; Zhang, Z.; Lan, L.; Zhang, W.; Luo, Z. Nocal-siam: Refining visual features and response with advanced non-local blocks for real-time siamese tracking. IEEE Trans. Image Process. 2021, 30, 2656–2668. [Google Scholar] [CrossRef]
  22. Lan, L.; Teng, X.; Zhang, J.; Zhang, X.; Tao, D. Learning to purification for unsupervised person re-identification. IEEE Trans. Image Process. 2023, 32, 3338–3353. [Google Scholar] [CrossRef]
  23. Teng, X.; Lan, L.; Zhao, J.; Li, X.; Tang, Y. Highly Efficient Active Learning With Tracklet-Aware Co-Cooperative Annotators for Person Re-Identification. IEEE Trans. Neural Netw. Learn. Syst. 2023, 1–14. [Google Scholar] [CrossRef]
  24. Teng, X.; Li, C.; Li, X.; Liu, X.; Lan, L. TIG-CL: Teacher-Guided Individual and Group Aware Contrastive Learning for Unsupervised Person Re-Identification in Internet of Things. IEEE Internet Things J. 2024. [Google Scholar] [CrossRef]
  25. Hu, F.; Xia, G.S.; Hu, J.; Zhang, L. Transferring deep convolutional neural networks for the scene classification of high-resolution remote sensing imagery. Remote Sens. 2015, 7, 14680–14707. [Google Scholar] [CrossRef]
  26. Chen, Y.; Lin, Z.; Zhao, X.; Wang, G.; Gu, Y. Deep learning-based classification of hyperspectral data. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2014, 7, 2094–2107. [Google Scholar] [CrossRef]
  27. Marmanis, D.; Wegner, J.D.; Galliani, S.; Schindler, K.; Datcu, M.; Stilla, U. Semantic segmentation of aerial images with an ensemble of CNNs. ISPRS Ann. Photogramm. Remote Sens. Spat. Inf. Sci. 2016, 3, 473–480. [Google Scholar] [CrossRef]
  28. Nogueira, K.; Penatti, O.A.; Dos Santos, J.A. Towards better exploiting convolutional neural networks for remote sensing scene classification. Pattern Recognit. 2017, 61, 539–556. [Google Scholar] [CrossRef]
  29. Liu, Y.; Huang, C. Scene classification via triplet networks. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2017, 11, 220–237. [Google Scholar] [CrossRef]
  30. Li, W.; Fu, H.; Yu, L.; Gong, P.; Feng, D.; Li, C.; Clinton, N. Stacked Autoencoder-based deep learning for remote-sensing image classification: A case study of African land-cover mapping. Int. J. Remote Sens. 2016, 37, 5632–5646. [Google Scholar] [CrossRef]
  31. Hinton, G.; Vinyals, O.; Dean, J. Distilling the knowledge in a neural network. arXiv 2015, arXiv:1503.02531. [Google Scholar]
  32. Zhao, B.; Cui, Q.; Song, R.; Qiu, Y.; Liang, J. Decoupled knowledge distillation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, 18–24 June 2022; pp. 11953–11962. [Google Scholar]
  33. Li, C.; Teng, X.; Yan, D.; Lan, L. Instance-level Scaling and Dynamic Margin-alignment Knowledge Distillation. In Proceedings of the 7th Chinese Conference on Pattern Recognition and Computer Vision (PRCV 2024), Urumqi, China, 18–20 October 2024. Accepted for presentation; proceedings not yet available. [Google Scholar]
  34. Cortes, C.; Vapnik, V. Support-vector networks. Mach. Learn. 1995, 20, 273–297. [Google Scholar] [CrossRef]
  35. Altman, N.S. An introduction to kernel and nearest-neighbor nonparametric regression. Am. Stat. 1992, 46, 175–185. [Google Scholar] [CrossRef]
  36. Yang, Y.; Newsam, S. Bag-of-visual-words and spatial extensions for land-use classification. In Proceedings of the 18th SIGSPATIAL International Conference on Advances in Geographic Information Systems, San Jose, CA, USA, 3–5 November 2010; pp. 270–279. [Google Scholar]
  37. Chen, L.; Yang, W.; Xu, K.; Xu, T. Evaluation of local features for scene classification using VHR satellite images. In Proceedings of the 2011 Joint Urban Remote Sensing Event, Munich, Germany, 11–13 April 2011; IEEE: Piscataway, NJ, USA, 2011; pp. 385–388. [Google Scholar]
  38. Perronnin, F.; Dance, C. Fisher kernels on visual vocabularies for image categorization. In Proceedings of the 2007 IEEE Conference on Computer Vision and Pattern Recognition, Minneapolis, MN, USA, 17–22 June 2007; IEEE: Piscataway, NJ, USA, 2007; pp. 1–8. [Google Scholar]
  39. Yang, Y.; Newsam, S. Spatial pyramid co-occurrence for image classification. In Proceedings of the 2011 International Conference on Computer Vision, Barcelona, Spain, 6–13 November 2011; IEEE: Piscataway, NJ, USA, 2011; pp. 1465–1472. [Google Scholar]
  40. Lazebnik, S.; Schmid, C.; Ponce, J. Beyond bags of features: Spatial pyramid matching for recognizing natural scene categories. In Proceedings of the 2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR’06), New York, NY, USA, 17–22 June 2006; IEEE: Piscataway, NJ, USA, 2006; Volume 2, pp. 2169–2178. [Google Scholar]
  41. Chen, Y.; Zhao, X.; Jia, X. Spectral–spatial classification of hyperspectral data based on deep belief network. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2015, 8, 2381–2392. [Google Scholar] [CrossRef]
  42. Bosch, A.; Zisserman, A.; Munoz, X. Scene classification via pLSA. In Proceedings of the Computer Vision–ECCV 2006: 9th European Conference on Computer Vision, Graz, Austria, 7–13 May 2006; Proceedings, Part IV 9. Springer: Berlin/Heidelberg, Germany, 2006; pp. 517–530. [Google Scholar]
  43. Lienou, M.; Maitre, H.; Datcu, M. Semantic annotation of satellite images using latent Dirichlet allocation. IEEE Geosci. Remote Sens. Lett. 2009, 7, 28–32. [Google Scholar] [CrossRef]
  44. Zhong, Y.; Zhu, Q.; Zhang, L. Scene classification based on the multifeature fusion probabilistic topic model for high spatial resolution remote sensing imagery. IEEE Trans. Geosci. Remote Sens. 2015, 53, 6207–6222. [Google Scholar] [CrossRef]
  45. Simonyan, K.; Zisserman, A. Very deep convolutional networks for large-scale image recognition. arXiv 2014, arXiv:1409.1556. [Google Scholar]
  46. Zagoruyko, S.; Komodakis, N. Wide residual networks. arXiv 2016, arXiv:1605.07146. [Google Scholar]
  47. Adriana, R.; Nicolas, B.; Ebrahimi, K.S.; Antoine, C.; Carlo, G.; Yoshua, B. Fitnets: Hints for thin deep nets. arXiv 2014, arXiv:1412.6550. [Google Scholar]
  48. Komodakis, N.; Zagoruyko, S. Paying more attention to attention: Improving the performance of convolutional neural networks via attention transfer. In Proceedings of the ICLR, Toulon, France, 24–26 April 2017.
  49. Chen, P.; Liu, S.; Zhao, H.; Jia, J. Distilling Knowledge via Knowledge Review. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Nashville, TN, USA, 20–25 June 2021.
  50. Yim, J.; Joo, D.; Bae, J.; Kim, J. A gift from knowledge distillation: Fast optimization, network minimization and transfer learning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 4133–4141.
  51. Li, X.C.; Fan, W.S.; Song, S.; Li, Y.; Shao, Y.; Zhan, D.C. Asymmetric temperature scaling makes larger networks teach well again. Adv. Neural Inf. Process. Syst. 2022, 35, 3830–3842.
  52. Li, C.; Teng, X.; Ding, Y.; Lan, L. NTCE-KD: Non-Target-Class-Enhanced Knowledge Distillation. Sensors 2024, 24, 3617.
  53. Yuan, M.; Lang, B.; Quan, F. Student-friendly knowledge distillation. Knowl.-Based Syst. 2024, 296, 111915.
  54. Müller, R.; Kornblith, S.; Hinton, G.E. When does label smoothing help? Adv. Neural Inf. Process. Syst. 2019, 32.
  55. Yuan, L.; Tay, F.E.; Li, G.; Wang, T.; Feng, J. Revisiting knowledge distillation via label smoothing regularization. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 14–19 June 2020; pp. 3903–3911.
  56. Chandrasegaran, K.; Tran, N.T.; Zhao, Y.; Cheung, N.M. Revisiting label smoothing and knowledge distillation compatibility: What was missing? In Proceedings of the International Conference on Machine Learning, Baltimore, MD, USA, 17–23 July 2022; PMLR; pp. 2890–2916.
  57. Li, C.; Cheng, G.; Han, J. Boosting knowledge distillation via intra-class logit distribution smoothing. IEEE Trans. Circuits Syst. Video Technol. 2023.
  58. Meng, Z.; Li, J.; Zhao, Y.; Gong, Y. Conditional teacher-student learning. In Proceedings of the ICASSP 2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Brighton, UK, 12–17 May 2019; IEEE: Piscataway, NJ, USA, 2019; pp. 6445–6449.
  59. Cheng, G.; Han, J.; Lu, X. Remote sensing image scene classification: Benchmark and state of the art. Proc. IEEE 2017, 105, 1865–1883.
  60. Xia, G.S.; Hu, J.; Hu, F.; Shi, B.; Bai, X.; Zhong, Y.; Zhang, L.; Lu, X. AID: A benchmark data set for performance evaluation of aerial scene classification. IEEE Trans. Geosci. Remote Sens. 2017, 55, 3965–3981.
  61. Zou, Q.; Ni, L.; Zhang, T.; Wang, Q. Deep learning based feature selection for remote sensing scene classification. IEEE Geosci. Remote Sens. Lett. 2015, 12, 2321–2325.
  62. Jin, Y.; Wang, J.; Lin, D. Multi-Level Logit Distillation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Vancouver, BC, Canada, 17–24 June 2023; pp. 24276–24285.
  63. Sun, S.; Ren, W.; Li, J.; Wang, R.; Cao, X. Logit Standardization in Knowledge Distillation. arXiv 2024, arXiv:2403.01427.
  64. Wei, S.; Luo, C.; Luo, Y. Scaled Decoupled Distillation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 17–21 June 2024; pp. 15975–15983.
  65. Park, W.; Kim, D.; Lu, Y.; Cho, M. Relational knowledge distillation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 3967–3976.
  66. Heo, B.; Kim, J.; Yun, S.; Park, H.; Kwak, N.; Choi, J.Y. A comprehensive overhaul of feature distillation. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Seoul, Republic of Korea, 27–29 October 2019; pp. 1921–1930.
  67. Tian, Y.; Krishnan, D.; Isola, P. Contrastive Representation Distillation. In Proceedings of the International Conference on Learning Representations, Addis Ababa, Ethiopia, 30 April 2020.
  68. Guo, Z.; Yan, H.; Li, H.; Lin, X. Class Attention Transfer Based Knowledge Distillation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Vancouver, BC, Canada, 17–24 June 2023; pp. 11868–11877.
Figure 1. Motivation (dashed boxes) and main idea (solid boxes). Blue dashed box: high and low inter-class similarities lead to poor soft labels. Green dashed box: large intra-class variability leads to a poor decision boundary. Blue solid box: instance-level scaling can optimize the poor soft labels. Green solid box: dynamic margin-alignment can result in a more rational decision boundary. Pink circles and yellow triangles are samples from two classes. Orange arrows represent the margin.
Figure 2. Overview of our method. The left side depicts the overall framework of ISDM: the instance-level scaling module optimizes the teacher model's soft labels at the instance level, followed by distillation with a dynamic margin-alignment strategy. The right side illustrates the specific implementations of IS and DM. Different shapes represent samples from different classes, and color variation indicates soft-label optimization in IS. Blue and green denote the outputs of the student and teacher models, respectively.
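To connect the diagram to computation, the following PyTorch-style sketch mirrors the pipeline in Figure 2. It is a minimal illustration only: the per-instance boost rule in `scale_target_logits`, the margin schedule in `isdm_like_loss`, and the default values of `beta`, `temperature`, and `omega` are simplified assumptions, not the paper's exact formulation.

```python
# Hedged sketch of an ISDM-style distillation step (not the paper's exact math).
import torch
import torch.nn.functional as F

def scale_target_logits(t_logits, labels, beta=4.0):
    # Hypothetical instance-level rule: the less confident the teacher is on
    # the target class, the more its target logit is boosted.
    probs = F.softmax(t_logits, dim=1)
    target_prob = probs.gather(1, labels.unsqueeze(1)).squeeze(1)
    boost = (1.0 - target_prob) * beta
    scaled = t_logits.clone()
    idx = torch.arange(t_logits.size(0))
    scaled[idx, labels] = scaled[idx, labels] + boost
    return scaled

def isdm_like_loss(s_logits, t_logits, labels, temperature=4.0, omega=0.1):
    t_scaled = scale_target_logits(t_logits, labels)
    # KL distillation on the instance-scaled soft labels.
    kd = F.kl_div(F.log_softmax(s_logits / temperature, dim=1),
                  F.softmax(t_scaled / temperature, dim=1),
                  reduction="batchmean") * temperature ** 2
    # Dynamic margin: the tolerated student-teacher gap grows with the
    # sample's average gap, so harder samples are penalized less sharply.
    gap = (s_logits - t_scaled).abs()
    margin = omega * gap.mean(dim=1, keepdim=True).detach()
    align = F.relu(gap - margin).mean()
    return kd + align

# Toy usage with random logits for a 21-class problem (e.g., UCM).
s = torch.randn(4, 21)
t = torch.randn(4, 21)
y = torch.randint(0, 21, (4,))
print(isdm_like_loss(s, t, y))
```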
Figure 3. Examples of fixed and dynamic margins for easy and hard samples. Orange and blue circles are samples from two classes.
Figure 4. Motivation validation. (a) Inter-class similarity of a ResNet18 model trained with the vanilla method on the CIFAR-100 dataset. (b–d) Inter-class similarity of ResNet18 models trained with the vanilla, KD, and ISDM methods on the UCM dataset.
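The text does not spell out how the similarity matrices in Figure 4 are computed; a common recipe, sketched below under that assumption, is the pairwise cosine similarity between class-mean penultimate-layer features.

```python
# Hedged sketch: inter-class similarity as cosine similarity between
# class-mean features; in practice `feats`/`labs` come from a trained model.
import torch
import torch.nn.functional as F

def class_similarity_matrix(features, labels, num_classes):
    centroids = torch.stack([features[labels == c].mean(dim=0)
                             for c in range(num_classes)])
    centroids = F.normalize(centroids, dim=1)  # unit-norm class centroids
    return centroids @ centroids.t()           # [C, C], 1.0 on the diagonal

# Toy usage: 10 random 64-d features per class, 21 classes (as in UCM).
feats = torch.randn(210, 64)
labs = torch.arange(21).repeat(10)
print(class_similarity_matrix(feats, labs, 21).shape)
```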
Figure 5. Visualizations of the soft-label scaling process for an easy sample (top) and a hard sample (bottom). (a) Scaling process of the soft label of an easy sample; (b) scaling process of the soft label of a hard sample.
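As a concrete, self-contained illustration of the effect Figure 5 depicts, the toy snippet below shows how boosting only the target logit turns a low-confidence soft label into a confident one while leaving the ranking of the non-target classes intact. The logit values and the boost of 2.0 are arbitrary and chosen purely for the demonstration.

```python
import torch
import torch.nn.functional as F

logits = torch.tensor([2.0, 1.8, 0.5, 0.1])  # hard sample: classes 0 and 1 compete
print(F.softmax(logits, dim=0))               # ~[0.46, 0.37, 0.10, 0.07]: low confidence

scaled = logits.clone()
scaled[0] += 2.0                              # boost only the target logit
print(F.softmax(scaled, dim=0))               # target dominates; non-target ranking kept
```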
Figure 6. t-SNE of features learned by KD (left) and ISDM (right). (a) KD; (b) ISDM.
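A plot like Figure 6 can be produced with the standard scikit-learn t-SNE routine; the sketch below uses random placeholder features, whereas in practice they would be the student's penultimate-layer activations.

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE

# Placeholder features; substitute penultimate-layer activations in practice.
rng = np.random.default_rng(0)
features = rng.normal(size=(500, 64)).astype(np.float32)
labels = rng.integers(0, 21, size=500)  # 21 classes, as in UCM

emb = TSNE(n_components=2, init="pca", random_state=0).fit_transform(features)
plt.scatter(emb[:, 0], emb[:, 1], c=labels, cmap="tab20", s=5)
plt.axis("off")
plt.show()
```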
Figure 7. Difference between student and teacher logits, with ResNet34–ResNet18 as the teacher–student pair on the UCM dataset. (a) KD (max diff: 1.95; mean diff: 0.42); (b) ISDM (max diff: 1.71; mean diff: 0.35).
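The "max diff" and "mean diff" statistics reported in Figure 7 amount to element-wise absolute differences between the two logit tensors, as in this small sketch (shown here on dummy logits):

```python
import torch

def logit_gap_stats(s_logits: torch.Tensor, t_logits: torch.Tensor):
    # Element-wise absolute student-teacher gap, summarized by max and mean.
    diff = (s_logits - t_logits).abs()
    return diff.max().item(), diff.mean().item()

s = torch.randn(8, 21)  # dummy student logits (21 UCM classes)
t = torch.randn(8, 21)  # dummy teacher logits
print(logit_gap_stats(s, t))
```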
Figure 8. Training time vs. top-1 accuracy on CIFAR-100 with ResNet34 as teacher and ResNet18 as student.
Table 1. Results of NWPU-RESISC45 validation. ResNet34 and ResNet18 are adopted as the teacher and student networks, respectively. Bold and underline mark the best and second-best results.
Type    | Method            | Top-1 (8:2) | Top-5 (8:2) | Top-1 (5:5) | Top-5 (5:5)
--------|-------------------|-------------|-------------|-------------|------------
-       | Teacher           | 96.13       | 99.92       | 94.18       | 99.64
-       | Student           | 93.48       | 98.54       | 91.86       | 98.47
Feature | AT (ICLR17)       | 94.71       | 99.81       | 92.23       | 99.53
Feature | RKD (CVPR19)      | 95.02       | 99.67       | 93.20       | 99.56
Feature | ReviewKD (CVPR21) | 94.84       | 99.81       | 93.25       | 99.57
Logits  | KD (NIPS14)       | 95.63       | 99.89       | 94.11       | 99.59
Logits  | DKD (CVPR22)      | 95.79       | 99.84       | 94.25       | 99.64
Logits  | MLLD (CVPR23)     | 95.00       | 99.73       | 93.47       | 99.16
Logits  | LSKD (CVPR24)     | 95.97       | 99.79       | 94.28       | 99.59
Logits  | SDD (CVPR24)      | 96.05       | 99.81       | 94.17       | 99.66
Logits  | ISDM (ours)       | 96.27       | 99.96       | 94.58       | 99.68
Table 2. Results of AID validation. ResNet34 and ResNet18 are adopted as the teacher and student networks, respectively. Bold and underline mark the best and second-best results.
Type    | Method            | Top-1 (8:2) | Top-5 (8:2) | Top-1 (5:5) | Top-5 (5:5)
--------|-------------------|-------------|-------------|-------------|------------
-       | Teacher           | 95.00       | 99.40       | 91.66       | 99.24
-       | Student           | 92.95       | 99.25       | 88.86       | 98.56
Feature | AT (ICLR17)       | 94.15       | 99.45       | 91.08       | 99.12
Feature | RKD (CVPR19)      | 93.70       | 99.50       | 88.84       | 98.88
Feature | ReviewKD (CVPR21) | 93.85       | 99.65       | 90.94       | 99.02
Logits  | KD (NIPS14)       | 94.55       | 99.55       | 91.84       | 99.24
Logits  | DKD (CVPR22)      | 94.60       | 99.55       | 92.14       | 99.16
Logits  | MLLD (CVPR23)     | 93.25       | 99.30       | 90.16       | 98.84
Logits  | LSKD (CVPR24)     | 94.30       | 99.55       | 91.76       | 99.12
Logits  | SDD (CVPR24)      | 94.60       | 99.45       | 92.06       | 99.18
Logits  | ISDM (ours)       | 95.55       | 99.75       | 92.92       | 99.34
Table 3. Results of UCM validation. ResNet34 and ResNet18 are adopted as the teacher and student networks, respectively. Bold and underline mark the best and second-best results.
Type    | Method            | Top-1 (8:2) | Top-5 (8:2) | Top-1 (5:5) | Top-5 (5:5)
--------|-------------------|-------------|-------------|-------------|------------
-       | Teacher           | 90.95       | 99.52       | 89.71       | 99.71
-       | Student           | 88.10       | 99.29       | 83.81       | 98.45
Feature | AT (ICLR17)       | 90.81       | 99.05       | 85.90       | 99.24
Feature | RKD (CVPR19)      | 89.29       | 99.29       | 84.33       | 98.38
Feature | ReviewKD (CVPR21) | 91.19       | 99.52       | 84.64       | 98.57
Logits  | KD (NIPS14)       | 90.24       | 99.29       | 86.86       | 98.76
Logits  | DKD (CVPR22)      | 90.76       | 99.52       | 87.42       | 99.05
Logits  | MLLD (CVPR23)     | 86.14       | 98.81       | 85.38       | 98.95
Logits  | LSKD (CVPR24)     | 87.33       | 98.80       | 84.71       | 97.24
Logits  | SDD (CVPR24)      | 90.48       | 99.29       | 87.95       | 99.14
Logits  | ISDM (ours)       | 92.62       | 99.76       | 88.29       | 99.43
Table 4. Results of the ablation study on NWPU-RESISC45 with a split ratio of 8:2. ResNet34 and ResNet18 are adopted as the teacher and student networks, respectively. IS: instance-level scaling. DM: dynamic margin-alignment. FM: fixed margin-alignment. ✔ and × indicate whether a module is adopted.
Ablation  | IS | DM | FM | Acc
----------|----|----|----|------
ISDM      | ✔  | ✔  | ×  | 96.27
DM only   | ×  | ✔  | ×  | 95.62
IS only   | ✔  | ×  | ×  | 95.54
Baseline  | ×  | ×  | ×  | 93.48
IS + FM   | ✔  | ×  | ✔  | 95.95
Table 5. Results of different β values. The experiments are conducted on UCM, with ResNet34 as the teacher and ResNet18 as the student; α is set to 1.0 and ω to 0.1. The best results are shown in bold.
Split Ratio | β = 1.0 | β = 2.0 | β = 4.0 | β = 6.0 | β = 8.0
------------|---------|---------|---------|---------|--------
8:2         | 90.71   | 92.14   | 92.62   | 91.24   | 90.71
5:5         | 87.81   | 87.62   | 88.29   | 87.23   | 86.57
Table 6. Results of different α values. The experiments are conducted on UCM, with ResNet34 as the teacher and ResNet18 as the student; β is set to 4.0 and ω to 0.1. The best results are shown in bold.
Split Ratio | α = 0.2 | α = 0.5 | α = 1.0 | α = 1.5 | α = 2.0
------------|---------|---------|---------|---------|--------
8:2         | 90.95   | 91.43   | 92.62   | 91.19   | 89.29
5:5         | 88.00   | 88.38   | 88.29   | 87.14   | 81.33
Table 7. Results of different ω values. The experiments are conducted on UCM, with ResNet34 as the teacher and ResNet18 as the student; α is set to 1.0 and β to 4.0. The best results are shown in bold.
Split Ratio | ω = 0.05 | ω = 0.1 | ω = 0.2 | ω = 0.3 | ω = 0.4
------------|----------|---------|---------|---------|--------
8:2         | 91.19    | 92.62   | 91.43   | 90.48   | 90.71
5:5         | 87.90    | 88.29   | 87.33   | 87.05   | 87.90
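The sweeps in Tables 5–7 follow a one-factor-at-a-time pattern: two hyperparameters stay at their defaults (α = 1.0, β = 4.0, ω = 0.1) while the third is varied. A hedged sketch of that loop is below; `train_and_eval` is a hypothetical stand-in for the full training-and-validation run.

```python
# One-factor-at-a-time sweep over (alpha, beta, omega), mirroring Tables 5-7.
DEFAULTS = {"alpha": 1.0, "beta": 4.0, "omega": 0.1}
GRIDS = {
    "beta":  [1.0, 2.0, 4.0, 6.0, 8.0],
    "alpha": [0.2, 0.5, 1.0, 1.5, 2.0],
    "omega": [0.05, 0.1, 0.2, 0.3, 0.4],
}

def run_sweeps(train_and_eval):
    # `train_and_eval` is hypothetical: it trains the student under the given
    # hyperparameters and returns a top-1 accuracy.
    results = {}
    for name, values in GRIDS.items():
        for value in values:
            cfg = {**DEFAULTS, name: value}
            results[(name, value)] = train_and_eval(**cfg)
    return results

# Toy usage with a dummy evaluator.
print(run_sweeps(lambda **cfg: sum(cfg.values())))
```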
Table 8. Results of CIFAR-100 validation. Teacher–student pairs cover both same-architecture and cross-architecture settings. Bold and underline mark the best and second-best results.
Teacher–student pairs, left to right: (1) ResNet-56/ResNet-20, (2) ResNet-32×4/ResNet-8×4, (3) WRN-40-2/WRN-16-2, (4) VGG13/VGG8, (5) ResNet-32×4/ShuffleNetV1, (6) VGG13/MobileNetV2.

Type    | Method            | (1)   | (2)   | (3)   | (4)   | (5)   | (6)
--------|-------------------|-------|-------|-------|-------|-------|------
-       | Teacher           | 72.34 | 79.42 | 75.61 | 74.64 | 75.61 | 74.64
-       | Student           | 69.06 | 72.50 | 73.26 | 70.36 | 70.50 | 64.60
Feature | FitNet (ICLR15)   | 69.21 | 73.50 | 73.58 | 71.02 | 73.59 | 64.14
Feature | AT (ICLR17)       | 70.55 | 73.44 | 74.08 | 71.43 | 71.73 | 59.40
Feature | RKD (CVPR19)      | 69.61 | 71.90 | 73.35 | 71.48 | 72.28 | 64.52
Feature | ReviewKD (CVPR21) | 71.89 | 75.63 | 76.12 | 74.84 | 77.45 | 70.37
Feature | CAT (CVPR23)      | 71.62 | 76.91 | 75.60 | 74.65 | 78.26 | 69.13
Logits  | KD (NIPS14)       | 70.66 | 73.33 | 74.92 | 72.98 | 74.07 | 67.37
Logits  | DKD (CVPR22)      | 71.42 | 75.97 | 75.77 | 74.55 | 76.23 | 69.58
Logits  | CTKD (AAAI23)     | 71.19 | 73.39 | 75.45 | 73.52 | 74.48 | 68.46
Logits  | ISDM (ours)       | 72.02 | 76.98 | 76.26 | 74.87 | 76.72 | 70.26
Table 9. Results of ImageNet-1k validation. Bold and underline mark the best and second-best results.
Teacher–student pairs: ResNet34/ResNet18 and ResNet50/MobileNetV1 (MN-V1).

Type    | Method            | R34/R18 Top-1 | R34/R18 Top-5 | R50/MN-V1 Top-1 | R50/MN-V1 Top-5
--------|-------------------|---------------|---------------|-----------------|----------------
-       | Teacher           | 73.31         | 91.42         | 76.16           | 92.86
-       | Student           | 69.75         | 89.07         | 68.87           | 88.76
Feature | AT (ICLR17)       | 70.69         | 90.01         | 69.56           | 89.33
Feature | OFD (ICCV19)      | 70.81         | 89.98         | 71.25           | 90.34
Feature | CRD (ICLR20)      | 71.17         | 90.13         | 71.37           | 90.41
Feature | ReviewKD (CVPR21) | 71.61         | 90.51         | 72.56           | 91.00
Feature | CAT (CVPR23)      | 71.26         | 90.45         | 72.24           | 91.13
Logits  | KD (NIPS14)       | 70.66         | 89.88         | 68.58           | 88.98
Logits  | DKD (CVPR22)      | 71.70         | 90.41         | 72.05           | 91.05
Logits  | LS (CVPR24)       | 71.42         | 90.29         | 72.18           | 90.80
Logits  | ISDM (ours)       | 71.87         | 90.60         | 72.98           | 91.14