Article

Q-A2NN: Quantized All-Adder Neural Networks for Onboard Remote Sensing Scene Classification

by Ning Zhang, He Chen *, Liang Chen, Jue Wang, Guoqing Wang and Wenchao Liu
National Key Laboratory of Science and Technology on Space-Born Intelligent Information Processing, Beijing Institute of Technology, Beijing 100081, China
*
Author to whom correspondence should be addressed.
Remote Sens. 2024, 16(13), 2403; https://doi.org/10.3390/rs16132403
Submission received: 23 May 2024 / Revised: 20 June 2024 / Accepted: 26 June 2024 / Published: 30 June 2024

Abstract

Performing remote sensing scene classification (RSSC) directly on satellites can alleviate data downlink burdens and reduce latency. Compared to convolutional neural networks (CNNs), the all-adder neural network (A2NN) is a novel basic neural network that is more suitable for onboard RSSC, enabling lower computational overhead by eliminating multiplication operations in convolutional layers. However, the extensive floating-point data and operations in A2NNs still lead to significant storage overhead and power consumption during hardware deployment. In this article, a shared scaling factor-based de-biasing quantization (SSDQ) method tailored for the quantization of A2NNs is proposed to address this issue, including a powers-of-two (POT)-based shared scaling factor quantization scheme and a multi-dimensional de-biasing (MDD) quantization strategy. Specifically, the POT-based shared scaling factor quantization scheme converts the adder filters in A2NNs to quantized adder filters with hardware-friendly integer input activations, weights, and operations. Thus, quantized A2NNs (Q-A2NNs) composed of quantized adder filters have lower computational and memory overheads than A2NNs, increasing their utility in hardware deployment. Although low-bit-width Q-A2NNs exhibit significantly reduced RSSC accuracy compared to A2NNs, this issue can be alleviated by employing the proposed MDD quantization strategy, which combines a weight-debiasing (WD) strategy, which reduces performance degradation due to deviations in the quantized weights, with a feature-debiasing (FD) strategy, which enhances the classification performance of Q-A2NNs through minimizing deviations among the output features of each layer. Extensive experiments and analyses demonstrate that the proposed SSDQ method can efficiently quantize A2NNs to obtain Q-A2NNs with low computational and memory overheads while maintaining comparable performance to A2NNs, thus having high potential for onboard RSSC.

1. Introduction

Remote sensing scene classification (RSSC) is of critical importance in interpreting remote sensing imagery and has been widely employed in disaster detection, urban planning, environmental monitoring, and national security tasks [1]. Deep learning-based methods have recently gained widespread application in the context of RSSC due to their potent feature abstraction and generalization capabilities [2]. In particular, convolutional neural network (CNN)-based RSSC approaches have emerged as a key focus of RSSC research [3,4].
In traditional remote sensing processing, images collected by satellites are downlinked to the ground, where RSSC is performed for interpretation [5,6]. Recently, the volume and resolution of acquired remote sensing images have increased significantly. Unfortunately, the comparatively limited improvement in data downlink bandwidth imposes considerable transmission pressure [7,8]. Furthermore, the travel time of satellites increases the total latency before users obtain the processing results [9], which poses challenges for time-constrained applications such as military surveillance, natural disaster monitoring, and emergency response. Thus, deploying deep learning-based models on satellite edge devices for onboard processing is an intuitive solution [10]. Nevertheless, most existing CNN-based RSSC methods, although showcasing remarkable performance, necessitate billions of multiplication and addition operations, along with numerous parameters. Given the constrained computational and memory resources of space platforms, deploying these methods directly on their edge devices proves challenging [11].
Many researchers have employed model compression methods to reduce model complexity, mainly including low-rank decomposition [12,13], pruning [14,15], knowledge distillation [16], and quantization [17] approaches. Low-rank decomposition methods reduce the redundancies in specified layers by decomposing the large matrices of convolution kernels; however, these methods cannot perform global parameter compression and are not applicable to the common 1 × 1 convolution kernels [18]. Pruning methods reduce the computational complexity by removing redundant weights in CNN models; however, they require manual definition of the pruning criteria and fine-tuning to maintain model performance, which is time-consuming and sub-optimal [19,20]. Knowledge distillation methods aim to enable lightweight models to achieve performance similar to that of larger models by distilling knowledge from the large model to the lightweight model; however, the design of lightweight models remains an additional challenge [21]. Quantization approaches reduce the number of parameters and the computational demand by converting floating-point numerical representations of models into integer representations; however, ensuring the accuracy of low-bit-width quantized models poses a significant challenge [22,23]. Moreover, many studies have shown that deploying multiplication operations on edge devices entails greater resource and energy overheads than deploying addition operations [24,25]. Figure 1 shows the energy and area costs of various operations in 45 nm ASICs at 0.9 V [24]. Most of the multiplication operations in CNN models are concentrated in convolutional layers [26]. Although the above-mentioned model compression methods can reduce model complexity, optimized CNN-based models still heavily rely on convolutions involving billions of multiplication operations. Consequently, directly applying such models for onboard deployment remains challenging.
AdderNet, which is based on adder filters, has recently been proposed [27], allowing for a significant reduction in the number of multiplication operations in the network. However, AdderNet retains convolutional filters in the first and last layers. Subsequently, Zhang et al. [28] proposed a novel basic neural network, named the all-adder neural network (A2NN), which converts all multiplications in the convolutional layers into additions. Nevertheless, despite this breakthrough, A2NNs have a comparable number of parameters to CNNs. Given the limited memory resources of edge devices on space platforms, quantization approaches can be incorporated to further enhance the hardware efficiency when deploying A2NNs. Compared with other model compression methods, quantization methods can effectively reduce the memory overhead without changing the original network structure. Moreover, the use of integer operations can further reduce the computational overhead during deployment. The quantized A2NN (Q-A2NN) therefore has the advantages of low computational and memory overheads during deployment on edge devices and is thus more suitable for onboard RSSC than CNNs and A2NNs. However, directly applying traditional quantization schemes to quantize A2NNs re-introduces numerous multiplication operations, which forfeits their inherent advantage of low computational overhead. Moreover, low-bit-width quantized models usually suffer from significant performance degradation [25,29].
To address these issues, this article proposes a novel shared scaling factor-based de-biasing quantization (SSDQ) method to reduce the memory overhead of A2NNs while minimizing performance degradation. The proposed SSDQ method includes a powers-of-two (POT)-based shared scaling factor quantization scheme and a multi-dimensional de-biasing (MDD) quantization strategy. Specifically, the POT-based shared scaling factor quantization scheme converts the adder filters with 32-bit floating-point (FP32) input activations, weights, and operations to quantized adder filters with hardware-friendly integer input activations, weights, and operations. Thus, the Q-A2NNs composed of quantized adder filters have lower computational and memory overheads than existing basic networks such as CNNs and A2NNs. The reduced accuracy of Q-A2NN for RSSC tasks can be alleviated through the proposed MDD quantization strategy. The MDD strategy combines a weight-debiasing (WD) strategy, which reduces the performance degradation caused by deviations in the quantized weights, with a feature-debiasing (FD) strategy, which enhances the classification performance of Q-A2NNs through minimizing the deviation in the output features of each layer.
The main contributions of this article can be summarized as follows:
  • A novel shared scaling factor-based debiasing quantization method is proposed to reduce the hardware resource overheads of A2NNs while minimizing performance degradation, which includes a POT-based shared scaling factor quantization scheme and an MDD quantization strategy.
  • A POT-based shared scaling factor quantization scheme is proposed to quantize the adder filters in the A2NN. The proposed quantization scheme converts the input activations and weights of the adder filters from floating-point to integer type, thereby transforming the floating-point addition operations in the adder filters into hardware-friendly integer addition and bit-shift operations.
  • An MDD quantization strategy combining the WD and FD strategies is proposed to effectively prevent the decrease in accuracy of Q-A2NNs due to the deviations in weights and features during quantization. The WD strategy mitigates the performance degradation of Q-A2NNs by correcting deviations in the quantized weight distribution. It re-defines the weight scaling factor when the weight distribution is skewed and spans considerably beyond the target quantization range, ensuring an adequate quantization range for weights densely distributed near zero. The FD strategy enhances the classification performance of Q-A2NNs by minimizing deviations among the output features across layers, thus aligning the output features of the intermediate and last layers in the Q-A2NN with those of the corresponding layers in the A2NN, reducing quantization errors in a layer-by-layer manner, and improving the feature extraction ability of the Q-A2NN.
To evaluate the efficacy of the proposed SSDQ method tailored for quantizing A2NNs, exhaustive experiments are conducted using five commonly used RSSC data sets. The experimental results demonstrate that Q-A2NNs with low computational and memory overheads and minimal performance degradation for onboard RSSC can be obtained by employing the proposed SSDQ method.
The remainder of this article is structured as follows: Section 2 provides an overview of the related works. Section 3 covers preliminary knowledge about A2NNs and traditional quantization schemes for CNNs. Section 4 elaborates on the proposed SSDQ method tailored for quantizing A2NNs. Section 5 details and analyses the experimental results. Finally, Section 6 concludes the article.

2. Related Works

In this section, we present a brief overview of the related works on RSSC and quantization.

2.1. Remote Sensing Scene Classification

RSSC annotates remote sensing scene images with specific high-level semantic categories, which can be effectively analyzed to obtain meaningful and valuable semantic information.
Existing RSSC methods are primarily classified into three categories: handcrafted feature-based methods [30,31], feature encoding methods [32,33], and deep learning-based methods [34,35]. Recently, deep learning-based methods—particularly CNN-based methods—have made significant strides in RSSC due to their robust feature abstraction and generalization abilities. For instance, Sun et al. [36] integrated hierarchical feature aggregation and interference information elimination schemes into a deep network. The proposed method hierarchically aggregates complementary information and eliminates interference information among different convolutional features, markedly enhancing RSSC accuracy. Wang et al. [37] presented a sphere loss to learn the unique center for each class. Through incorporating a right-angle triangle constraint, the sphere loss aggregates intraclass features while effectively separating the centers of different classes, thereby enhancing RSSC accuracy through intraclass compactness and superior interclass discrimination abilities. Transformer models have recently demonstrated remarkable performance in computer vision tasks and have been utilized for RSSC. For instance, Bazi et al. [38] introduced the vision transformer (ViT) [39] to solve RSSC tasks. Sha et al. [40] presented a multi-instance vision transformer, which enhances the discriminative capability of traditional ViT models by highlighting the feature response of key local regions and simultaneously learning global features.
Although these deep learning-based RSSC methods can achieve remarkable classification performance, directly deploying them on edge devices is challenging due to their high computational complexity and large number of parameters.

2.2. Quantization

Quantization is a highly effective model compression method for reducing the resource overhead of CNNs. In quantized CNNs, the representations of input activations and weights in convolutional layers are converted from FP32 to integer type. Quantization methods are usually divided into two types: post-training quantization (PTQ) and quantization-aware training (QAT) [41,42,43]. PTQ methods directly convert pre-trained floating-point networks into integer representation networks without re-training. Although PTQ methods are fast and lightweight, they suffer from severe accuracy degradation. QAT methods model the quantization noise source by re-training the quantized CNN for a limited number of iterations, enabling the quantized CNN to find more optimized solutions than with the PTQ [41].
While existing quantization methods have achieved tremendous success with CNNs, directly applying traditional quantization schemes to quantize A2NNs re-introduces numerous multiplication operations, which forfeits their inherent advantage of low computational overhead. Several recent works have investigated quantizing AdderNets, which contain adder filters [27]. Wang et al. [25] proposed a quantization method for AdderNets that integrates the input activations and weights, adopting the same scaling factor to quantize them simultaneously. However, when the distributions of the weights and input activations differ significantly, or when low-bit-width quantization is applied, directly using the same scaling factor to quantize the adder kernels leads to quantized models with poor performance. Similarly, Zhang et al. [29] directly adopted the weight scaling factor to quantize the weights and input activations of the adder filters; however, this method led to low-bit-width quantized models with substantially decreased accuracy. This article proposes a novel SSDQ method to develop low-bit-width Q-A2NNs with comparable performance to A2NNs.

3. Preliminary Knowledge

In this section, the A2NN model and the quantization scheme for CNNs are briefly reviewed and analyzed.

3.1. All-Adder Neural Network

The convolution filter used in CNNs can be defined as:
$O_{t,x,y} = \sum_{k=0}^{c_{in}} \sum_{p=0}^{d} \sum_{q=0}^{d} \left( I_{k,x+p,y+q} \times F_{t,k,p,q} \right)$,
where $\mathbf{I} \in \mathbb{R}^{c_{in} \times h_{in} \times w_{in}}$ and $\mathbf{O} \in \mathbb{R}^{c_{out} \times h_{out} \times w_{out}}$ denote the input activation and output feature tensors, respectively, and $\mathbf{F} \in \mathbb{R}^{c_{out} \times c_{in} \times d \times d}$ denotes the weight tensor of the filter. The computation of the standard convolution kernel is shown in Figure 2a. The convolution filters use the L2-norm as a similarity measure. In contrast, the adder filters use the L1-norm to indicate the similarity between the filters and input activations to eliminate multiplication operations [27]. The adder filter is defined as:
$O_{t,x,y} = -\sum_{k=0}^{c_{in}} \sum_{p=0}^{d} \sum_{q=0}^{d} \left| I_{k,x+p,y+q} - F_{t,k,p,q} \right|$.
The computation of the adder kernel is shown in Figure 2b.
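For concreteness, the following is a minimal PyTorch sketch of the adder-filter computation described above (an L1-norm similarity with a negated sum, following AdderNet [27]); the function name and the dense unfold-based implementation are illustrative only and are far less efficient than a dedicated kernel.

    import torch
    import torch.nn.functional as F

    def adder_conv2d(inp, weight, stride=1, padding=0):
        """Adder filter: negated sum of absolute differences between each input
        patch and each filter (dense unfold-based version, for illustration)."""
        c_out, c_in, d, _ = weight.shape
        patches = F.unfold(inp, kernel_size=d, stride=stride, padding=padding)  # (B, c_in*d*d, L)
        batch, _, num_pos = patches.shape
        w = weight.view(c_out, -1)
        # |I - F| summed over the patch dimension, then negated
        diff = (patches.unsqueeze(1) - w.view(1, c_out, -1, 1)).abs()
        out = -diff.sum(dim=2)                                                   # (B, c_out, L)
        h_out = (inp.shape[2] + 2 * padding - d) // stride + 1
        w_out = (inp.shape[3] + 2 * padding - d) // stride + 1
        return out.view(batch, c_out, h_out, w_out)

    x = torch.randn(2, 3, 32, 32)
    w = torch.randn(16, 3, 3, 3)
    y = adder_conv2d(x, w, padding=1)   # -> (2, 16, 32, 32)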
CNNs employ backpropagation to compute the gradients and stochastic gradient descent for parameter updating. Given that the adder filter does not involve multiplication operations, its partial derivative remains constant. Therefore, the gradients of the adder filter are defined as:
$\frac{\partial O_{t,x,y}}{\partial F_{t,k,p,q}} = I_{k,x+p,y+q} - F_{t,k,p,q}$
$\frac{\partial O_{t,x,y}}{\partial I_{k,x+p,y+q}} = \mathrm{HT}\left( F_{t,k,p,q} - I_{k,x+p,y+q} \right)$,
where HT(·) represents the HardTanh function, defined as:
$\mathrm{HT}(x) = \begin{cases} -1, & x \leq -1 \\ x, & -1 < x < 1 \\ 1, & x \geq 1 \end{cases}$.
Based on adder filters, Zhang et al. [28] proposed an A2NN, which converts all convolution filters in the CNN into adder filters. The architectures of the CNN and A2NN models are depicted in Figure 2c,d, respectively.
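A rough sketch of how these modified gradients could be attached to autograd is shown below; it operates on a single flattened patch for clarity, and the class is a hypothetical illustration rather than the authors' implementation.

    import torch

    class AdderFilterFn(torch.autograd.Function):
        """Hypothetical autograd wrapper for one output element of an adder filter,
        using the full-precision gradient for the weights and the HardTanh-clipped
        gradient for the input, as described above."""

        @staticmethod
        def forward(ctx, patch, weight):
            # patch, weight: flattened (c_in*d*d,) vectors for one output position
            ctx.save_for_backward(patch, weight)
            return -(patch - weight).abs().sum()

        @staticmethod
        def backward(ctx, grad_out):
            patch, weight = ctx.saved_tensors
            # dO/dF uses the full-precision difference I - F
            grad_weight = grad_out * (patch - weight)
            # dO/dI uses the clipped difference HT(F - I)
            grad_patch = grad_out * torch.clamp(weight - patch, -1.0, 1.0)
            return grad_patch, grad_weight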

3.2. Quantization Scheme for CNNs

The core concept of quantization is an affine transformation that converts the floating-point operations in the network to efficient integer operations [41]. In this process, a floating-point vector R can be approximated as an integer vector multiplied by a scalar:
$\mathbf{R} = s_{\mathbf{R}} \cdot \mathbf{R}_{\mathrm{int}}$,
where $s_{\mathbf{R}}$ is a quantization parameter (the scaling factor) and $\mathbf{R}_{\mathrm{int}}$ is an integer vector. For N-bit quantization, the elements of $\mathbf{R}_{\mathrm{int}}$ are N-bit signed integers. In the commonly used hardware-friendly symmetric uniform quantization scheme, the scaling factor $s_{\mathbf{R}}$ is calculated as follows:
$s_{\mathbf{R}} = \frac{\max\left( \left| \max(\mathbf{R}) \right|, \left| \min(\mathbf{R}) \right| \right)}{2^{N-1}-1}$,
where max(·) and min(·) are functions utilized to determine the maximum and minimum elements in the given vector, respectively.
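As a concrete illustration, a minimal sketch of this symmetric uniform scheme in PyTorch is given below; the helper names are ours, and details such as per-channel scaling are omitted.

    import torch

    def symmetric_scale(x, num_bits):
        """Scaling factor of the symmetric uniform scheme: max(|max|, |min|) / (2**(N-1) - 1)."""
        return x.abs().max() / (2 ** (num_bits - 1) - 1)

    def quantize_symmetric(x, num_bits):
        """Map a floating-point tensor to N-bit signed integers so that x ~ s * x_int."""
        s = symmetric_scale(x, num_bits)
        q_max = 2 ** (num_bits - 1) - 1
        x_int = torch.clamp(torch.round(x / s), -q_max, q_max)
        return x_int, s

    x = torch.randn(64, 3, 3, 3)
    x_int, s = quantize_symmetric(x, num_bits=8)
    x_hat = s * x_int   # de-quantized approximation of x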
In CNNs, the convolution filter in Equation (1) can be re-written in a quantized form by quantizing the input activations and weights:
$O_{t,x,y} = \sum_{k=0}^{c_{in}} \sum_{p=0}^{d} \sum_{q=0}^{d} \left( s_{\mathbf{I}} \cdot I_{\mathrm{int},k,x+p,y+q} \right) \times \left( s_{\mathbf{F}} \cdot F_{\mathrm{int},t,k,p,q} \right)$,
where $\mathbf{I}_{\mathrm{int}} \in \mathbb{R}^{c_{in} \times h_{in} \times w_{in}}$ and $\mathbf{F}_{\mathrm{int}} \in \mathbb{R}^{c_{out} \times c_{in} \times d \times d}$ represent the quantized input activations and quantized weights, respectively; $s_{\mathbf{I}}$ and $s_{\mathbf{F}}$ represent the quantization parameter scaling factors for the input activations and weights, respectively. According to the distributive and associative laws of multiplication, the quantized convolution filters can use integer-type input activations and weights to perform integer multiplication and addition operations, resulting in quantized output features. Finally, the quantized output features of the quantized convolution filters can be de-quantized with the scaling factors for the input activations $s_{\mathbf{I}}$ and weights $s_{\mathbf{F}}$. Then, Equation (9) can be re-written as:
$O_{t,x,y} = s_{\mathbf{I}} s_{\mathbf{F}} \cdot \sum_{k=0}^{c_{in}} \sum_{p=0}^{d} \sum_{q=0}^{d} I_{\mathrm{int},k,x+p,y+q} \times F_{\mathrm{int},t,k,p,q} = s_{\mathbf{I}} s_{\mathbf{F}} \cdot O_{\mathrm{int},t,x,y}$,
where $\mathbf{O}_{\mathrm{int}} \in \mathbb{R}^{c_{out} \times h_{out} \times w_{out}}$ represents the quantized output feature tensor of the quantized convolution filters.
However, in A2NNs, due to discrepancies in the scaling factors for the input activations $s_{\mathbf{I}}$ and weights $s_{\mathbf{F}}$, the operation of the adder filter—as shown in Equation (3)—cannot be directly transformed into the form shown in Equation (10). Thus, a Q-A2NN quantized with traditional quantization schemes re-introduces numerous multiplication operations during deployment, leading to forfeiture of the A2NN’s inherent advantage of low computational overhead.

4. Method

This section details the SSDQ method tailored for quantizing A2NNs, including a POT-based shared scaling factor quantization scheme and an MDD quantization strategy. The framework of the SSDQ method is presented in Figure 3. Firstly, the POT-based shared scaling factor quantization scheme is devised to quantize the adder filters in the A2NN. This quantization scheme converts adder filters with FP32 input activations, weights, and operations into quantized adder filters with hardware-friendly integer input activations, weights, and operations. Consequently, Q-A2NNs obtained with the POT-based shared scaling factor quantization scheme are entirely composed of quantized adder filters, resulting in lower computational and memory overheads than A2NNs during hardware deployment, although with reduced accuracy for RSSC tasks. Then, the MDD quantization strategy is formulated to mitigate the performance degradation of Q-A2NNs due to the deviations in weights and features during the quantization process. The MDD strategy synergistically integrates the WD strategy, which prevents performance degradation caused by deviations in the quantized weights, and the FD strategy, which enhances the classification performance of Q-A2NNs by minimizing the deviations in the output features of each layer. As a result, the Q-A2NNs obtained with the proposed SSDQ method have low computational overhead, low memory overhead, and performance comparable to A2NNs, making them more suitable for hardware deployment than CNNs and A2NNs. The principle and details of the POT-based shared scaling factor quantization scheme are expounded in Section 4.1, and the WD and FD strategies of the MDD quantization strategy are detailed in Section 4.2 and Section 4.3, respectively.

4.1. POT-Based Shared Scaling Factor Quantization Scheme

To address the issue that traditional quantization schemes are not suitable for quantizing A2NNs, a POT-based shared scaling factor quantization scheme is proposed. This quantization scheme can quantize the adder filters in A2NNs, transitioning the input activations, weights, and operations from floating-point to hardware-friendly integer type.
Similar to the quantization of the convolution filter in Equation (9), the adder filter in Equation (3) can be re-written in a quantized form through quantizing the input activations $\mathbf{I}$ and weights $\mathbf{F}$:
$O_{t,x,y} = -\sum_{k=0}^{c_{in}} \sum_{p=0}^{d} \sum_{q=0}^{d} \left| s_{\mathbf{I}} \cdot I_{\mathrm{int},k,x+p,y+q} - s_{\mathbf{F}} \cdot F_{\mathrm{int},t,k,p,q} \right|$.
The scaling factors for the input activations $s_{\mathbf{I}}$ and weights $s_{\mathbf{F}}$ are usually not equal. To avoid introducing multiplication operations into the fully additive neural network during quantization, $s_{\mathbf{I}}$ and $s_{\mathbf{F}}$ can be approximated as POT values, which can be formulated as:
$s_{\mathbf{I}} = \frac{\max\left( \left| \max(\mathbf{I}) \right|, \left| \min(\mathbf{I}) \right| \right)}{2^{N-1}-1} \approx 2^{i}$
$s_{\mathbf{F}} = \frac{\max\left( \left| \max(\mathbf{F}) \right|, \left| \min(\mathbf{F}) \right| \right)}{2^{N-1}-1} \approx 2^{j}$,
where
$i = \mathrm{round}\left( \log_2 \frac{\max\left( \left| \max(\mathbf{I}) \right|, \left| \min(\mathbf{I}) \right| \right)}{2^{N-1}-1} \right)$
$j = \mathrm{round}\left( \log_2 \frac{\max\left( \left| \max(\mathbf{F}) \right|, \left| \min(\mathbf{F}) \right| \right)}{2^{N-1}-1} \right)$,
where round(·) denotes rounding the value to the nearest integer. According to Equations (12) and (13), the quantized input activations $\mathbf{I}_{\mathrm{int}}$ and weights $\mathbf{F}_{\mathrm{int}}$ can be calculated as:
$\mathbf{I}_{\mathrm{int}} = \mathrm{clamp}\left( \mathrm{INT}\left( 2^{-i} \, \mathbf{I} \right), -2^{N-1}+1, 2^{N-1}-1 \right)$
$\mathbf{F}_{\mathrm{int}} = \mathrm{clamp}\left( \mathrm{INT}\left( 2^{-j} \, \mathbf{F} \right), -2^{N-1}+1, 2^{N-1}-1 \right)$,
where clamp(·) confines the quantized elements within the target quantization range $\left[ -2^{N-1}+1, 2^{N-1}-1 \right]$.
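The sketch below illustrates one way to compute the POT exponents and the clamped integer tensors of Equations (14)–(17) in PyTorch; tensor shapes and helper names are illustrative, and round(·) stands in for the INT(·) operator.

    import torch

    def pot_exponent(x, num_bits):
        """Round log2 of the symmetric scaling factor to the nearest integer,
        so that the scaling factor is approximated by a power of two."""
        s = x.abs().max() / (2 ** (num_bits - 1) - 1)
        return int(torch.round(torch.log2(s)).item())

    def pot_quantize(x, exponent, num_bits):
        """Quantize a tensor with the power-of-two scaling factor 2**exponent,
        clamping to the symmetric N-bit range."""
        q_max = 2 ** (num_bits - 1) - 1
        return torch.clamp(torch.round(x * 2.0 ** (-exponent)), -q_max, q_max)

    # Hypothetical activation/weight tensors of one adder layer
    acts = torch.randn(1, 64, 32, 32) * 4.0
    weights = torch.randn(64, 64, 3, 3) * 0.5
    i = pot_exponent(acts, num_bits=8)      # s_I is approximated by 2**i
    j = pot_exponent(weights, num_bits=8)   # s_F is approximated by 2**j
    acts_int = pot_quantize(acts, i, num_bits=8)
    weights_int = pot_quantize(weights, j, num_bits=8)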
According to the above analysis, Equation (11) can be re-written as:
$O_{t,x,y} = -\sum_{k=0}^{c_{in}} \sum_{p=0}^{d} \sum_{q=0}^{d} \left| 2^{i} I_{\mathrm{int},k,x+p,y+q} - 2^{j} F_{\mathrm{int},t,k,p,q} \right|$.
To further convert the operation of the adder filter into a hardware-friendly form, similar to Equation (10), the POT-based shared scaling factor $s_A$ can be defined according to the relative magnitudes of the scaling factors for the input activations $s_{\mathbf{I}}$ and weights $s_{\mathbf{F}}$, as follows:
$s_A = \min\left( 2^{i}, 2^{j} \right)$.
According to Equations (18) and (19), the operation of the adder filter can be converted into a quantized adder filter with a de-quantized operation, similar to Equation (10). Specifically, the process is divided into the three following cases:
(1) If $i < j$:
$O_{t,x,y} = -s_A \sum_{k=0}^{c_{in}} \sum_{p=0}^{d} \sum_{q=0}^{d} \left| I_{\mathrm{int},k,x+p,y+q} - 2^{j-i} F_{\mathrm{int},t,k,p,q} \right| = s_A \cdot O_{\mathrm{int},t,x,y}$.
(2) If $i = j$:
$O_{t,x,y} = -s_A \sum_{k=0}^{c_{in}} \sum_{p=0}^{d} \sum_{q=0}^{d} \left| I_{\mathrm{int},k,x+p,y+q} - F_{\mathrm{int},t,k,p,q} \right| = s_A \cdot O_{\mathrm{int},t,x,y}$.
(3) If $i > j$:
$O_{t,x,y} = -s_A \sum_{k=0}^{c_{in}} \sum_{p=0}^{d} \sum_{q=0}^{d} \left| 2^{i-j} I_{\mathrm{int},k,x+p,y+q} - F_{\mathrm{int},t,k,p,q} \right| = s_A \cdot O_{\mathrm{int},t,x,y}$.
With this quantization scheme, all floating-point addition operations in the adder filter can be converted to integer addition operations. Notably, except for the hardware-friendly bit-shift operations, no additional operations are introduced. The quantized adder filter uses integer-type input activations and weights to perform integer addition operations, resulting in quantized output features. Then, the quantized output features are de-quantized using the POT-based shared scaling factor s A to obtain the final output features of the adder filter.
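For a single output element, the three alignment cases of Equations (20)–(22) could be sketched as follows; in hardware, the power-of-two multiplications correspond to bit shifts, and the helper is an illustration under these assumptions rather than the authors' kernel.

    import torch

    def quantized_adder_output(acts_int, weights_int, i, j):
        """Compute the integer adder-filter response for one output element and
        return it with the shared POT factor s_A = min(2**i, 2**j).
        acts_int / weights_int: flattened integer patch and filter tensors."""
        if i < j:
            # scale the weights up by 2**(j - i) (a left shift in hardware)
            out_int = -(acts_int - weights_int * (2 ** (j - i))).abs().sum()
            s_a = 2.0 ** i
        elif i == j:
            out_int = -(acts_int - weights_int).abs().sum()
            s_a = 2.0 ** i
        else:
            # i > j: scale the activations up by 2**(i - j) instead
            out_int = -(acts_int * (2 ** (i - j)) - weights_int).abs().sum()
            s_a = 2.0 ** j
        return out_int, s_a

    # De-quantization then recovers the adder-filter output: O = s_A * O_int.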

4.2. Weight-DeBiasing Strategy

During the quantization process, especially in low-bit-width quantization, discernible deviations emerge between the quantized weight distribution and the original weight distribution. The deviations in the quantized weight distribution cause severe performance degradation of the Q-A2NN model. Consequently, a WD strategy is devised to correct the quantized weight distribution. This subsection first analyzes the causes of the deviations in the weight distribution after quantization. Then, the details of the proposed WD strategy are presented.
The weights of adder filters in well-trained A2NN models often follow Laplacian distributions [27,28]; therefore, for a well-trained A2NN model, many weights are concentrated around zero. Figure 4a illustrates the parameter count histogram for the adder filter in the first layer of the well-trained A2NN-VGGNet-11 model [28]. It can be seen that the weight distribution of the adder filters is skewed. While the weight distribution of this filter ranges from -50 to 40, most weights are distributed between -2 and 2, accounting for over 80%. In contrast, weights with values below -30 or above 20 are rare, accounting for less than 1%. As the weight scaling factor $s_{\mathbf{F}}$ is determined according to the boundary values of the weight distribution, $F_{min}$ and $F_{max}$, the quantized weights may deviate seriously in a low-bit-width Q-A2NN. As shown in Figure 4b, when the target quantization range is much smaller than the actual weight distribution range, many weights that are densely distributed around zero are compressed to the integer zero after quantization. Thus, most of the information in the well-trained full-precision A2NN model is lost during quantization, resulting in a severe decrease in accuracy. Notably, the deviation in the weight distribution after quantization escalates as the quantization bit width decreases.
The proposed WD strategy can effectively alleviate the severe decrease in accuracy of the Q-A2NN caused by deviations in the weight distribution during quantization, the details of which are presented in Figure 4c. When the weight distribution is skewed and the weight distribution range is much more extensive than the target quantization range, the median value $F_{Me}$ reflects the actual weight distribution better than $F_{min}$ and $F_{max}$. Therefore, when the weight scaling factor $s_{\mathbf{F}}$ obtained using the boundary values $F_{min}$ and $F_{max}$ exceeds 1, the weight scaling factor $s_{\mathbf{F}}$ is re-defined as:
$s_{\mathbf{F}} = \frac{\max\left( \left| F_{Me^-} \right|, \left| F_{Me^+} \right| \right)}{2^{N-2}-1} \approx 2^{j}$,
where
$j = \mathrm{round}\left( \log_2 \frac{\max\left( \left| F_{Me^-} \right|, \left| F_{Me^+} \right| \right)}{2^{N-2}-1} \right)$,
where $F_{Me^-}$ and $F_{Me^+}$ represent the negative median value and positive median value in the weight distribution, respectively.
As shown in Figure 4c, the median value of the weight distribution $F_{Me}$ can be mapped to the median value of the quantization range $F_{\mathrm{int}}^{Me}$ using the re-defined weight scaling factor $s_{\mathbf{F}}$. The WD strategy ensures an adequate quantization range for the weights densely distributed near zero in the adder filters, effectively alleviating the performance degradation resulting from substantial information loss during the quantization process. In particular, the proposed WD strategy does not introduce additional computational or memory overhead during the inference phase of the Q-A2NN model.
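A compact sketch of this re-definition is given below; the threshold test on the boundary-based scaling factor and the use of the median-based denominator follow the description above, while the helper name and the assumption that both positive and negative weights are present are ours.

    import torch

    def wd_weight_exponent(weight, num_bits):
        """Weight-debiasing sketch: when the boundary-based scaling factor exceeds 1
        (skewed, very wide weight distribution), re-derive the POT exponent from the
        positive/negative medians so that weights near zero keep enough levels.
        Assumes the layer contains both positive and negative weights."""
        q_max = 2 ** (num_bits - 1) - 1
        s_boundary = weight.abs().max() / q_max
        if s_boundary <= 1.0:
            # default POT exponent from the boundary values
            return int(torch.round(torch.log2(s_boundary)).item())
        pos_med = weight[weight > 0].median()
        neg_med = weight[weight < 0].median()
        s_med = torch.maximum(pos_med.abs(), neg_med.abs()) / (2 ** (num_bits - 2) - 1)
        return int(torch.round(torch.log2(s_med)).item())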

4.3. Feature-DeBiasing Strategy

During the quantization process of A2NNs, not only does the weight distribution present discernible deviations, but the output features of each layer also have discernible deviations due to the accumulation of quantization errors. Consequently, another de-biasing strategy integrated into the multi-dimensional de-biasing strategy—namely, the FD strategy—is devised to improve the classification performance of the Q-A2NN by reducing the deviation among the output features of each layer. The framework of the FD strategy is illustrated in Figure 5. The well-trained A2NN is employed as the benchmark model, and the Q-A2NN is set as the target model. By aligning the output features of each intermediate layer in the Q-A2NN model with the corresponding layer’s output features in the A2NN model, the quantization errors of the middle layers can be effectively reduced in a layer-by-layer manner. With this strategy, the accumulation of quantization errors—which considerably affect the subsequent layers in the Q-A2NN model—can be mitigated. Furthermore, through aligning the output features of the last layer in the target Q-A2NN model with the output features of the last layer in the benchmark A2NN model, the feature extraction ability of the target Q-A2NN model can be brought as close as possible to that of the benchmark A2NN model.
Thus, a joint loss function L F D of the FD strategy is designed to optimize the Q-A2NN model, which can be defined as follows:
$L_{FD} = L_Q + \lambda \left( L_{IF} + L_{LF} \right)$,
where $L_Q$ represents the quantization loss, $L_{IF}$ represents the feature loss of the intermediate layers, and $L_{LF}$ represents the feature loss of the final layer. These losses are used to construct the joint loss function $L_{FD}$, and $\lambda$ is a hyperparameter for balancing these loss terms.
The quantization loss function L Q is used to minimize the distance between the ground truth (GT) and classification logits of the Q-A2NN model, which can be expressed as follows:
$L_Q = \mathrm{CE}\left[ f_{\text{Q-A2NN}}(\mathbf{S}), l \right]$,
where CE(·) represents the cross-entropy loss function, $f_{\text{Q-A2NN}}(\cdot)$ denotes the function set of the Q-A2NN model, $\mathbf{S}$ denotes the training samples, and $l$ denotes the GT of the training samples.
The intermediate feature loss function $L_{IF}$ is devised to measure the gap between the output features of the intermediate batch normalization (BN) layers in the Q-A2NN and benchmark A2NN models. The A2NN model employs a BN layer after each adder layer to stabilize the feature distribution. Thus, the deviation among the output features of each intermediate layer due to quantization errors can be reduced as much as possible through minimizing $L_{IF}$, thereby attenuating the adverse effects of accumulated quantization errors on subsequent layers. The intermediate feature loss function $L_{IF}$ is formulated as follows:
$L_{IF} = \frac{1}{M} \sum_{b=1}^{M} \mathrm{MSE}\left( \hat{F}_b, F_b \right)$,
where $M$ denotes the number of BN layers; $\hat{F}_b$ and $F_b$ are the output features of the $b$th BN layer in the Q-A2NN and benchmark A2NN models, respectively; and MSE(·) represents the mean squared error operator, expressed as follows:
$\mathrm{MSE}\left( \hat{F}_b, F_b \right) = \frac{1}{B} \sum_{i=0}^{B} \left\| \hat{F}_b - F_b \right\|_2^2$,
where B denotes the batch size in the training phase.
The last feature loss function $L_{LF}$ is formulated to determine the disparity between the class probabilities in the last layer of the Q-A2NN and A2NN models. Due to information loss during the quantization process, the feature extraction ability of the Q-A2NN model is worse than that of the A2NN model. $L_{LF}$ is minimized to optimize the Q-A2NN model by ensuring that the classification boundary of the last layer in the Q-A2NN model imitates that of the benchmark A2NN model. The Kullback–Leibler (KL) divergence is used to construct the loss function $L_{LF}$, which is formulated as follows:
$L_{LF} = T^2 \cdot \mathrm{KLDiv}\left[ P_{\text{Q-A2NN}}, P_{A2NN} \right] = T^2 \cdot \sum_{k=0}^{C} P_{\text{Q-A2NN}}(k) \log \frac{P_{\text{Q-A2NN}}(k)}{P_{A2NN}(k)}$,
where $C$ represents the number of classes represented by the training samples, $T$ is a crucial hyperparameter dictating the softness of the class probability, and $P_{\text{Q-A2NN}}$ and $P_{A2NN}$ denote the class probabilities of the target Q-A2NN model and the benchmark A2NN model, respectively. The $p$th elements of $P_{\text{Q-A2NN}}$ and $P_{A2NN}$ can be computed as:
$P_{\text{Q-A2NN}}(p) = \log \frac{ e^{\, \text{Q-A2NN}(p) / T} }{ \sum_{c=0}^{C} e^{\, \text{Q-A2NN}(c) / T} }$
$P_{A2NN}(p) = \frac{ e^{\, A2NN(p) / T} }{ \sum_{c=0}^{C} e^{\, A2NN(c) / T} }$.
Notably, determining the appropriate hyperparameter $T$ for Q-A2NN models with different bit widths is difficult and resource-intensive. Inspired by [44], an adaptive temperature is introduced for adaptation to Q-A2NN models with different bit widths. Furthermore, the adaptive temperature is learned utilizing a reverse gradient strategy that aims to maximize $L_{LF}$. Through gradually increasing the challenge posed to the Q-A2NN model in emulating the classification boundary of the well-trained full-precision A2NN model, an adversarial process is introduced to continually improve the Q-A2NN model’s feature extraction capability. Consequently, the feature extraction ability of the Q-A2NN model approaches that of the benchmark A2NN model. The adaptive temperature $T$ can be updated using the reversed gradient as follows:
$T_{p+1} = T_p - lr_T \times \left( -\frac{\partial L_{FD}}{\partial T_p} \right)$,
where $lr_T$ denotes the learning rate of the adaptive temperature, and $T_p$ and $T_{p+1}$ denote the values of the adaptive temperature in the $p$th and $(p+1)$th iterations, respectively.
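To make the training objective concrete, a sketch of the joint FD loss in PyTorch is given below; function and argument names are illustrative, and the adaptive-temperature update is indicated only in a comment because the exact optimizer wiring is not specified here.

    import torch
    import torch.nn.functional as F

    def fd_loss(q_logits, fp_logits, q_feats, fp_feats, labels, T, lam=1.0):
        """Joint FD loss: cross-entropy quantization loss + intermediate-feature
        MSE + temperature-scaled KL between the last-layer class probabilities.
        q_feats / fp_feats are lists of BN-layer outputs from the Q-A2NN and the
        full-precision benchmark A2NN (the A2NN forward pass carries no gradient)."""
        l_q = F.cross_entropy(q_logits, labels)
        l_if = sum(F.mse_loss(qf, ff.detach()) for qf, ff in zip(q_feats, fp_feats)) / len(q_feats)
        l_lf = (T ** 2) * F.kl_div(
            F.log_softmax(q_logits / T, dim=1),
            F.softmax(fp_logits.detach() / T, dim=1),
            reduction="batchmean",
        )
        return l_q + lam * (l_if + l_lf)

    # Reversed-gradient temperature update (ascend on the loss w.r.t. T):
    # with T as a learnable tensor, after loss.backward() one could step
    #   T.data = T.data + lr_T * T.grad
    # so that T is driven to maximize the last-feature loss, as described above.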
Through the above analysis, it can be concluded that Q-A2NN models with low computational overhead, low memory overhead, and minimal performance degradation can be obtained using the proposed SSDQ method tailored for the quantization of A2NNs.

5. Experiments

To assess the efficacy of the proposed SSDQ method tailored for quantizing A2NNs, extensive experiments were conducted using five commonly used RSSC data sets. These experiments were accelerated by an NVIDIA Titan RTX graphics processing unit and performed using PyTorch 1.8 [45].

5.1. Data Set Description and Pre-Processing

Five RSSC data sets were utilized to conduct the experiments: WHU-RS19 (WHU) [46], UC-Merced Land Use Data Set (UCM) [47], SIRI-WHU [33], RSSCN7 [48], and Aerial Image Data Set (AID) [49]. These five widely employed data sets encompass numerous scene classes, boasting high inter-class similarity and within-class diversity. A portion of images from each data set was randomly chosen as the training samples, while the remainder served as the testing samples. The essential information and settings of these five data sets for the following experiments are detailed in Table 1.
Moreover, some data augmentation methods were applied to the samples in each data set, following recent research [28,50]. Firstly, all samples used in the following experiments were resized to 256 × 256 pixels. Then, the training samples were randomly cropped and the testing samples were center-cropped to 224 × 224 patches. Furthermore, the cropped training samples were flipped horizontally.
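Assuming standard torchvision transforms, the pre-processing could be sketched as follows (normalization statistics are not specified in the text and are therefore omitted).

    from torchvision import transforms

    # Training pipeline: resize, random crop, horizontal flip
    train_transform = transforms.Compose([
        transforms.Resize((256, 256)),
        transforms.RandomCrop(224),
        transforms.RandomHorizontalFlip(),
        transforms.ToTensor(),
    ])
    # Testing pipeline: resize, center crop
    test_transform = transforms.Compose([
        transforms.Resize((256, 256)),
        transforms.CenterCrop(224),
        transforms.ToTensor(),
    ])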

5.2. Evaluation Metrics

In accordance with recent research in the RSSC field [28,51], we adopted five widely accepted metrics to evaluate the classification performance of the proposed method: overall accuracy (OA), confusion matrix, precision (Pre), recall (Rec), and F1 score (F1). Moreover, two widely used indicators were adopted to evaluate the computational and memory overheads of different models: the number of operations (OPs) and the model size.
The OA reflects the overall classification performance, which is defined as follows:
$OA = \frac{N_c}{N_T}$,
where $N_c$ denotes the number of correctly classified test samples, while $N_T$ denotes the number of test samples.
The confusion matrix illustrates the detailed confusion degree and between-class classification errors of the model [28]. The entry in the nth column and mth row in the confusion matrix can be calculated as:
$C_{m,n} = \frac{N_{m,n}}{N_m}$,
where $N_{m,n}$ represents the number of samples of the $m$th class classified as the $n$th class, and $N_m$ represents the total number of images in the $m$th class.
The Pre reflects the proportion of true positive samples among all samples predicted as positive. The Pre of the ith class is defined as follows:
$Pre_i = \frac{TP_i}{TP_i + FP_i}$,
where $TP$ and $FP$ represent the number of true positives and false positives, respectively.
The Rec reflects the proportion of true positive samples among all truly positive samples in the ground truth. The Rec of the ith class is defined as follows:
$Rec_i = \frac{TP_i}{TP_i + FN_i}$,
where $TP$ and $FN$ represent the number of true positives and false negatives, respectively.
The F1 balances Pre and Rec by calculating the harmonic mean of Pre and Rec, which can better reflect the generalization ability of the model. The F1 of the ith class is defined as follows:
$F1_i = \frac{2 \times Pre_i \times Rec_i}{Pre_i + Rec_i}$.
Notably, the macro-average method was used to obtain the performance metrics of Pre, Rec, and F1 across the entire data set by directly averaging the evaluation metrics of different categories.
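The sketch below shows one way to compute these metrics from predicted and true labels; it is illustrative, and equivalent routines from existing libraries could be used instead.

    import numpy as np

    def classification_metrics(y_true, y_pred, num_classes):
        """OA, row-normalized confusion matrix, and macro-averaged Pre/Rec/F1."""
        y_true = np.asarray(y_true)
        y_pred = np.asarray(y_pred)
        oa = np.mean(y_true == y_pred)
        cm = np.zeros((num_classes, num_classes))
        for t, p in zip(y_true, y_pred):
            cm[t, p] += 1
        cm_norm = cm / np.maximum(cm.sum(axis=1, keepdims=True), 1)  # C_{m,n}
        tp = np.diag(cm)
        prec = tp / np.maximum(cm.sum(axis=0), 1)   # per-class precision
        rec = tp / np.maximum(cm.sum(axis=1), 1)    # per-class recall
        f1 = 2 * prec * rec / np.maximum(prec + rec, 1e-12)
        return oa, cm_norm, prec.mean(), rec.mean(), f1.mean()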

5.3. Experimental Settings

5.3.1. Network

In our experiments, we employed two representative neural network architectures as network backbones: ResNet-18 and VGGNet-11. To avoid redundancy, only the final fully connected (FC) layer was retained as the classifier in VGGNet-11. To facilitate deployment, the FC layer in both backbones was replaced with a 1 × 1 convolutional layer. Since the output of the adder filter is always negative, the BN layer was introduced to normalize the output of the adder layers to an appropriate range. BN layers were added after each convolutional layer and FC layer in both backbones to maintain consistency across structures. Additionally, the pooling layers and activation functions in the network backbones were retained to enhance the robustness and feature extraction capabilities of the network model.
Six models (sets) were developed based on each backbone. For the ResNet-18 backbone, the following models (sets) were created: the CNN-ResNet-18 model, binary neural network (BNN)-ResNet-18 model, AdderNet-ResNet-18 model, A2NN-ResNet-18 model, quantized CNN (Q-CNN)-ResNet-18 model set, and Q-A2NN-ResNet-18 model set. Similarly, for the VGGNet-11 backbone, the CNN-VGGNet-11 model, BNN-VGGNet-11 model, AdderNet-VGGNet-11 model, A2NN-VGGNet-11 model, Q-CNN-VGGNet-11 model set, and Q-A2NN-VGGNet-11 model set were created. The BNN [52] is a lightweight model utilizing only two possible values to constrain weights. The Q-CNN and Q-A2NN model sets encompass multiple models with varying quantization bit widths, with the quantization bit width N taking values from { 4 , 5 , 6 , 7 , 8 , 10 } . The models with different quantization bit widths in the Q-CNN-ResNet-18 model set are denoted as Q-CNN-ResNet-18-Nbit models, while the models with different quantization bit widths in the Q-A2NN-ResNet-18 model set are denoted as Q-A2NN-ResNet-18-Nbit models. The models with different quantization bit widths in the Q-CNN-VGGNet-11 and Q-A2NN-VGGNet-11 model sets are denoted as Q-CNN-VGGNet-11-Nbit models and Q-A2NN-VGGNet-11-Nbit models, respectively. For the Q-CNN and Q-A2NN model sets, the input activations and weights of all convolutional or adder layers were quantized. To ensure fairness, none of the full-precision models were loaded with pre-trained parameters.

5.3.2. Hyperparameter Settings

Following [28], all full-precision models were trained for 300 epochs. They were trained using stochastic gradient descent (SGD) with a batch size of 32, momentum of 0.9, and weight decay of 0.0005. The learning rate had an initial value of 0.05, which changed according to the cosine learning rate decay [53]. Similarly, all quantized models were trained using the quantization-aware training (QAT) method for 60 epochs. They were trained using SGD with batch size of 64, momentum of 0.9, and weight decay of 0.0005. The learning rate had an initial value of 0.002, and the cosine learning rate decay was employed.
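A sketch of the corresponding optimizer and schedule configuration for the QAT phase is shown below; `q_model` is a placeholder for the quantization-aware Q-A2NN, and the training loop body is omitted.

    import torch

    q_model = torch.nn.Linear(512, 7)  # placeholder for the quantization-aware Q-A2NN

    optimizer = torch.optim.SGD(q_model.parameters(), lr=0.002,
                                momentum=0.9, weight_decay=0.0005)
    scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=60)

    for epoch in range(60):
        # ... one epoch of quantization-aware training with batch size 64 ...
        scheduler.step()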

5.4. Hyperparameter Analysis

We conducted hyperparameter sensitivity experiments on the RSSCN7 data set using four Q-A2NN models: Q-A2NN-ResNet-18-4bit, Q-A2NN-ResNet-18-8bit, Q-A2NN-VGGNet-11-4bit, and Q-A2NN-VGGNet-11-8bit. These experiments aimed to analyze the influence of the balance coefficient hyperparameter λ in the joint loss function L F D on the performance, in order to select the optimal value of this hyperparameter. Subsequently, this selected value was applied in the following experiments on the other four data sets with the remaining Q-A2NN models.
The λ value was varied within the set { 0.1 , 0.5 , 1 , 3 , 5 , 7 , 10 , 12 , 15 } . The variation in the OA with λ is depicted in Figure 6. The proposed SSDQ method demonstrated satisfactory OA across a wide range of λ values, with the best classification results of the four models achieved when λ was set to 1 within the given set. Therefore, in the following experiments, λ was set to 1.

5.5. Comparison with Other Approaches

To demonstrate the effectiveness and advancement of the proposed SSDQ method tailored for the quantization of A2NNs, extensive experiments were conducted on the five commonly used RSSC data sets. Table 2 presents the classification results of the different models on the five data sets. To validate the reliability of the classification results, each experiment was repeated three times. Moreover, to compare the computational and memory overheads during the inference phase, the OPs and Params of the aforementioned models were determined, which are presented in Table 3. The major computational layers in the table refer to convolutional/adder/quantized convolutional/quantized adder/binarized layers, while the other layers primarily consist of activation and BN layers. As the computational and memory overheads of the other layers are substantially lower than those of the major computational layers, they can be disregarded. Thus, the computational and memory overheads of the models were primarily influenced by the convolutional/adder/quantized convolutional/quantized adder/binarized layers.
As presented in Table 2 and Table 3, the Q-A2NN model sets (POT), obtained using only the proposed POT-based shared scaling factor quantization scheme, had lower memory overhead than the A2NN models and showed great classification performance with 8-bit quantization. However, as the quantization bit width decreased, the performance of the Q-A2NN model sets (POT) decreased sharply. For example, on the RSSCN7 data set, the Q-A2NN-ResNet-18-8bit model (POT) achieved 83.20% accuracy, which is 0.3% higher than that of the A2NN model, and its memory overhead was only 1/4 that of the A2NN model; in contrast, the Q-A2NN-ResNet-18-4bit model (POT) achieved only 20.64% accuracy.
As shown in Table 2, for most RSSC data sets, the Q-A2NN model sets (SSDQ) obtained with the proposed SSDQ method achieved comparable classification accuracies to the A2NN models. For some data sets, the accuracies of the Q-A2NN model sets (SSDQ) even surpassed those of the A2NN models. For instance, the OA of the Q-A2NN-ResNet-18-6bit model (SSDQ) was improved by approximately 0.28%, 0.42%, and 0.98% on the RSSCN7, AID, and WHU data sets, respectively. The Q-A2NN-ResNet-18-4bit model (SSDQ) achieved 90.73% accuracy on the WHU data set, which is the same accuracy as the A2NN model. Moreover, the computational and memory overheads of the Q-A2NN model sets (POT) and Q-A2NN model sets (SSDQ) during the inference phase were identical, as shown in Table 3.
According to Table 2 and Table 3, the Q-A2NN model sets (SSDQ) had the same or even lower performance degradation, when compared with the Q-CNN model sets quantized using the symmetric uniform quantization scheme [54,55]. Moreover, the memory overhead of the Q-A2NN model sets (SSDQ) and Q-CNN model sets during the inference phase were identical. For example, the performance degradation of the Q-A2NN-VGGNet-11-4bit models (SSDQ) was nearly 0.19%, 0.49%, and 1.28% lower than that of the Q-CNN-VGGNet-11-4bit models for the RSSCN7, WHU, and SIRI-WHU data sets, respectively. The memory overhead required to deploy these models was the same—approximately 4.42 MB. Notably, on some data sets, the accuracies of Q-A2NN model sets (SSDQ) even surpassed that of full-precision CNN models, and the computational and memory overheads were much lower than those of the CNN models.
The classification accuracy of the Q-A2NN model sets (SSDQ) surpassed that of the AdderNet models on most of the data sets, according to Table 2. For example, the Q-A2NN-ResNet-18-4bit models (SSDQ) enhanced the OA compared to the AdderNet models by approximately 1.42%, 1.37%, 3.51%, and 1.02% on the RSSCN7, SIRI-WHU, WHU, and UCM data sets, respectively; the Q-A2NN-VGGNet-11-4bit models (SSDQ) enhanced the OA by approximately 1.76%, 1.11%, and 0.17% on the WHU, RSSCN7, and UCM data sets, respectively. Furthermore, the computational and memory overheads of the Q-A2NN model sets (SSDQ) were lower than those of the AdderNet models, as presented in Table 3.
As illustrated in Table 2 and Table 3, while the BNN models exhibited smaller computational and memory overheads, compared to other models, their accuracy was significantly lower. For instance, the accuracy of the BNN-ResNet-18 model was only 60.29% on the WHU data set, while the Q-A2NN-ResNet-18-4bit model (SSDQ) achieved an accuracy of 90.73% on the same data set. As such, the reduced accuracy of BNN models can be considered unacceptable.
To evaluate the proposed SSDQ method from multiple dimensions, Table 4 presents a comparison of the Pre, Rec, and F1 metrics for CNN, A2NN, Q-CNN, Q-A2NN (POT), and Q-A2NN (SSDQ) models based on different backbones on the WHU data set. From the numerical comparisons in Table 4, it can be seen that the Q-A2NN (SSDQ) model achieves the best performance in the vast majority of data precision scenarios. For example, the Q-A2NN-ResNet-18-6bit model (SSDQ) outperforms the Q-CNN-ResNet-18-6bit model and the Q-A2NN-ResNet-18-6bit model (POT) by 0.97%/3.15%, 1.08%/3.36%, and 1.21%/3.36% in Pre, Rec, and F1, respectively. Notably, the A2NN-ResNet-18 model performs lower than the CNN-ResNet-18 model by 0.32% in Pre, Rec, and F1 metrics. This demonstrates that the Q-A2NN obtained through the proposed SSDQ method can achieve more comprehensive performance. Moreover, in most cases, the Q-A2NN (SSDQ) models show the least performance degradation compared to the floating-point precision model, and in some instances, even a slight improvement, at the same data precision. Overall, the above quantitative analysis proves the effectiveness of the proposed SSDQ method.
Figure 7 shows a comprehensive performance comparison of different models on the WHU data set. Figure 7a depicts the variation in the OA for quantization models with different quantization bit widths. The Q-A2NN model sets (POT), obtained solely using the proposed POT-based shared scaling factor quantization scheme, still demonstrated excellent classification performance at 7-bit quantization. However, as the quantization bit width decreased, the performance of the Q-A2NN model sets (POT) deteriorated sharply. In contrast, the Q-A2NN model sets (SSDQ) and the Q-CNN model sets maintained excellent classification performance, even at low quantization bit widths. Moreover, the classification accuracies of the Q-A2NN model sets (SSDQ) even surpassed those of the A2NN and CNN models for some quantization bit widths, achieving the best classification performance among these models. Figure 7b displays the OA against the computational overhead and memory overhead for different models. The computational overhead of each model is represented by the size of the circle. While the full-precision CNN model, positioned in the top right corner of the figure, demonstrated outstanding classification performance, it entails a substantial number of parameters and has high computational demand. The AdderNet and A2NN models effectively reduced the computational overhead, and the A2NN model achieved comparable classification performance to the CNN model. Nevertheless, the memory overhead of the A2NN model remains significant. Although the BNN model exhibited the smallest computational and memory overheads, its classification performance was insufficient for practical applications. The quantized model sets reduce the memory and computational overheads by converting the floating-point operations into low-bit-width integer operations. Among these quantized model sets, the Q-A2NN model sets (SSDQ), positioned in the top left corner, achieved similar classification accuracy as the full-precision CNN model with reduced computational and memory overheads. As a result, the resource-efficient and high-performance Q-A2NN model sets (SSDQ) are more suitable for edge hardware deployment than the other models. Notably, depending on the classification performance and resource overhead requirements in diverse application scenarios, a suitable model among the Q-A2NN model sets (SSDQ) can be selected for onboard RSSC.
Figure 8 presents the confusion matrices of six different models. Among them, the Q-A2NN-VGGNet-11-6bit model (SSDQ) presented the best performance. Specifically, the accuracies for all classes of the Q-A2NN-VGGNet-11-6bit model exceeded 80%, and the accuracies for more than half of the classes were greater than 90%. The BNN-VGGNet-11 and AdderNet-VGGNet-11 models achieved accuracies exceeding 80% for only 6 and 10 classes, respectively. Additionally, the CNN-VGGNet-11 and Q-CNN-VGGNet-11-6bit models demonstrated accuracies exceeding 90% for only three classes.

5.6. Ablation Studies

We designed four variants to obtain Q-A2NN model sets for ablation studies. The OAs of the Q-A2NN model sets quantized with different component combinations of the proposed SSDQ method are displayed in Table 5.
First, only using the proposed POT-based shared scaling factor quantization scheme, the Q-A2NN model sets (POT) exhibited remarkable classification performance, even when subjected to 7-bit quantization. Nonetheless, as the quantization bit width decreased, the performance of the Q-A2NN model sets (POT) sharply deteriorated. This performance decline can be attributed to the deviations in the quantized weights and output features of each layer due to the skewed weight distribution and accumulation of quantization errors.
Next, the WD and FD strategies in the MDD quantization strategy were independently introduced as variants to verify their respective effects. As illustrated in Table 5, each of the two proposed strategies positively impacted the classification performance of the Q-A2NN model sets. The FD strategy enhances the performance of the Q-A2NN model sets by reducing the deviation among the output features of each layer. For example, compared to the Q-A2NN model sets (POT), for the ResNet-18 backbone, the Q-A2NN model sets (POT + FD) improved the OA by approximately 0.39–28.34% on the UCM data set; for the VGGNet-11 backbone, the Q-A2NN model sets (POT + FD) improved the OA by approximately 0.08–19.49% on the UCM data set. However, at 4-bit and 5-bit quantization, the classification performance of the Q-A2NN model sets (POT + FD) was insufficient for practical applications. This issue is caused by the significant deviations in the low-bit-width quantized weights. The WD strategy can mitigate the severe performance degradation of the Q-A2NN model sets by reducing the deviations in the quantized weights. The performance of the Q-A2NN model sets (POT + WD) with low-bit-width quantization was significantly improved, compared with the baseline. For example, compared to the Q-A2NN-4bit models (POT), the Q-A2NN-ResNet-18-4bit model (POT + WD) improved the OA by approximately 80%, 59.06%, and 81.59% on the WHU, RSSCN7, and UCM data sets, respectively, and the Q-A2NN-VGGNet-11-4bit model (POT + WD) improved the OA by approximately 84.44%, 81.30%, and 63.28% on the UCM, WHU, and RSSCN7 data sets, respectively. However, the classification accuracy of the Q-A2NN model sets (POT + WD) still had a certain discrepancy, compared with that of the A2NN models, on some data sets.
Finally, the Q-A2NN model sets (SSDQ) achieved the best classification results across the above variants at most quantization bit widths. The ablation study results support the efficacy of the SSDQ method tailored for the quantization of A2NNs. The proposed SSDQ method presents an efficient and effective solution to attain Q-A2NN models with minimal performance degradation for onboard RSSC.

5.7. Visualization Analysis

To explain and verify that the proposed WD strategy can prevent the significant information loss caused by deviations in the quantized weights, parameter count histograms were employed to visualize the weight distributions of the adder filters or quantized adder filters in different models.
As shown in Figure 9a,b, for the A2NN models, the weight distribution of the adder filter was wide and skewed. The weight distribution ranged from -50 to 90 in the 16th adder layer of the A2NN-ResNet-18 model, while most weights were distributed between -2 and 2, accounting for over 80%. In contrast, weights with values below -25 or above 50 were rare, accounting for less than 0.1%. The weight distribution in the third adder layer of the A2NN-VGGNet-11 model presented the same phenomenon. The quantized adder filters cause many weights that are densely distributed around zero to be compressed to the integer zero after quantization, as illustrated in Figure 9c,d. For 4-bit quantization, the quantized weights can be represented by 15 integer values from -7 to 7. However, whether in the 16th adder layer of the Q-A2NN-ResNet-18-4bit model (POT) or the third adder layer of the Q-A2NN-VGGNet-11-4bit model (POT), more than 60% of the quantized weight values were equal to zero. In this case, most of the feature extraction ability of the full-precision A2NN model was lost after quantization, resulting in severely decreased accuracy. Notably, the above situation can be avoided by applying the proposed WD strategy. Figure 9e,f show that the quantized weight distributions were uniform after introducing the WD strategy. The number of quantization weights with values equal to zero in the 16th adder layer of the Q-A2NN-ResNet-18-4bit model (POT + WD) was reduced by 70%; similarly, the number of quantization weights with values equal to zero in the third adder layer of the Q-A2NN-VGGNet-11-4bit model (POT + WD) was reduced by 75%. In this way, the feature extraction ability possessed by the well-trained A2NN models can be preserved as much as possible after the quantization process.
Moreover, the gradient-weighted class activation mapping (Grad-CAM) method [56] was employed to visually assess the impact of the proposed SSDQ method. The resulting class activation map (CAM) is presented as a thermal map, illustrating how the models focus on specific regions of the input image. In the visual representation, regions with stronger responses are highlighted in red, whereas regions with lower responses are depicted in blue. Figure 10 shows the CAM results for eight classes of images randomly selected from the five public RSSC data sets. From top to bottom is the input image, the CAM corresponding to the A2NN-VGGNet-11 model, the CAM corresponding to the Q-A2NN-VGGNet-11-4bit model (POT), the CAM corresponding to the Q-A2NN-VGGNet-11-4bit model (POT + WD), and the CAM corresponding to the Q-A2NN-VGGNet-11-4bit model (SSDQ). The CAM generated by the Q-A2NN-VGGNet-11-4bit model (POT) did not effectively capture critical information in the remote sensing images, as the feature extraction ability possessed by the well-trained A2NN model was severely degraded due to the deviations in the quantized weights and features caused by low-bit-width quantization. With the introduction of the WD strategy, the CAM generated by the Q-A2NN-VGGNet-11-4bit model (POT + WD) could successfully concentrate on the crucial regions in most scene classes. However, it can still be observed, from the CAM results of certain scene classes, that critical parts (e.g., storage tanks and rivers) were not fully covered or even could not be extracted accurately, indicating a certain discrepancy compared with the A2NN model. In contrast, the Q-A2NN-VGGNet-11-4bit model (SSDQ) obtained through incorporating the proposed SSDQ method comprehensively covered the critical parts in the remote sensing scenes while minimizing interference from intricate backgrounds. Taking the center and tennis court as examples, the Q-A2NN-VGGNet-11-4bit model (SSDQ) located the critical regions more comprehensively than the A2NN model. The above analysis indicates that the classification performance of Q-A2NN models can be significantly enhanced through the use of the SSDQ method.

6. Conclusions

This article proposed an SSDQ method tailored for quantizing A2NNs, comprising a POT-based shared scaling factor quantization scheme and an MDD quantization strategy. The POT-based shared scaling factor quantization scheme converts the adder filters in A2NNs into quantized adder filters with hardware-friendly integer input activations, weights, and operations; thus, Q-A2NNs composed of quantized adder filters have lower computational and memory overheads than A2NNs during hardware deployment. The MDD quantization strategy is formulated to counteract the performance degradation of Q-A2NNs caused by deviations in the weights and features during the quantization process. It synergistically integrates the WD strategy, which mitigates performance degradation stemming from deviations in the quantized weights, with the FD strategy, which enhances the classification performance of Q-A2NNs by minimizing the deviations among the output features of each layer. Extensive experiments and analyses on five commonly used RSSC data sets demonstrated that Q-A2NN models with low computational overhead, low memory overhead, and minimal performance degradation can be obtained with the proposed SSDQ method; in particular, the obtained Q-A2NN models are more suitable for onboard RSSC than CNNs and A2NNs. In future work, we will attempt to further enhance the performance of Q-A2NN models by accounting for the complexity of the spatial distributions in remote sensing images. Additionally, we plan to investigate high-throughput and energy-efficient field-programmable gate array (FPGA)-based accelerators for Q-A2NN models to fulfill the requirements of onboard real-time RSSC.

Author Contributions

N.Z. designed and implemented the model and wrote the paper. H.C. and L.C. contributed to the supervision of the work, the analysis of the method, and the writing of the paper. J.W., G.W., and W.L. contributed to the analysis of the method and the writing of the paper. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded in part by the National Natural Science Foundation for Young Scientists of China (under Grant 62201059), in part by the Foundation (under Grant JCKY2021602B037), and in part by the BIT Research and Innovation Promoting Project (under Grant No. 2023YCXY006).

Data Availability Statement

The data used in this study are available upon request from the corresponding author due to privacy restrictions.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Wu, X.; Hong, D.; Chanussot, J. Convolutional neural networks for multimodal remote sensing data classification. IEEE Trans. Geosci. Remote Sens. 2021, 60, 5517010. [Google Scholar] [CrossRef]
  2. Du, X.; Zheng, X.; Lu, X.; Doudkin, A.A. Multisource remote sensing data classification with graph fusion network. IEEE Trans. Geosci. Remote Sens. 2021, 59, 10062–10072. [Google Scholar] [CrossRef]
  3. Cao, X.; Yao, J.; Xu, Z.; Meng, D. Hyperspectral image classification with convolutional neural network and active learning. IEEE Trans. Geosci. Remote Sens. 2020, 58, 4604–4616. [Google Scholar] [CrossRef]
  4. Wang, W.; Chen, Y.; Ghamisi, P. Transferring CNN With Adaptive Learning for Remote Sensing Scene Classification. IEEE Trans. Geosci. Remote Sens. 2022, 60, 5533918. [Google Scholar] [CrossRef]
  5. Tong, W.; Chen, W.; Han, W.; Li, X.; Wang, L. Channel-attention-based DenseNet network for remote sensing image scene classification. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2020, 13, 4121–4132. [Google Scholar] [CrossRef]
  6. Hong, D.; Gao, L.; Yokoya, N.; Yao, J.; Chanussot, J.; Du, Q.; Zhang, B. More diverse means better: Multimodal deep learning meets remote-sensing imagery classification. IEEE Trans. Geosci. Remote Sens. 2020, 59, 4340–4354. [Google Scholar] [CrossRef]
  7. Grøtte, M.E.; Birkeland, R.; Honoré-Livermore, E.; Bakken, S.; Garrett, J.L.; Prentice, E.F.; Sigernes, F.; Orlandić, M.; Gravdahl, J.T.; Johansen, T.A. Ocean color hyperspectral remote sensing with high resolution and low latency—The hypso-1 cubesat mission. IEEE Trans. Geosci. Remote Sens. 2021, 60, 1000619. [Google Scholar] [CrossRef]
  8. Caba, J.; Díaz, M.; Barba, J.; Guerra, R.; de la Torre, J.A.; López, S. Fpga-based on-board hyperspectral imaging compression: Benchmarking performance and energy efficiency against gpu implementations. Remote Sens. 2020, 12, 3741. [Google Scholar] [CrossRef]
  9. Wiehle, S.; Mandapati, S.; Günzel, D.; Breit, H.; Balss, U. Synthetic aperture radar image formation and processing on an MPSoC. IEEE Trans. Geosci. Remote Sens. 2022, 60, 5226814. [Google Scholar] [CrossRef]
  10. Zhang, B.; Wu, Y.; Zhao, B.; Chanussot, J.; Hong, D.; Yao, J.; Gao, L. Progress and challenges in intelligent remote sensing satellite systems. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2022, 15, 1814–1822. [Google Scholar] [CrossRef]
  11. Fu, C.; Cao, Z.; Li, Y.; Ye, J.; Feng, C. Onboard real-time aerial tracking with efficient Siamese anchor proposal network. IEEE Trans. Geosci. Remote Sens. 2021, 60, 5606913. [Google Scholar] [CrossRef]
  12. Jaderberg, M.; Vedaldi, A.; Zisserman, A. Speeding up convolutional neural networks with low rank expansions. arXiv 2014, arXiv:1405.3866. [Google Scholar]
  13. Zhang, X.; Zou, J.; Ming, X.; He, K.; Sun, J. Efficient and accurate approximations of nonlinear convolutional networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Boston, MA, USA, 7–12 June 2015; pp. 1984–1992. [Google Scholar]
  14. Han, S.; Pool, J.; Tran, J.; Dally, W. Learning both weights and connections for efficient neural network. In Proceedings of the Advances in Neural Information Processing Systems, Montreal, QC, USA, 7–12 December 2015; Volume 28. [Google Scholar]
  15. Li, H.; Kadav, A.; Durdanovic, I.; Samet, H.; Graf, H.P. Pruning Filters for Efficient ConvNets. In Proceedings of the International Conference on Learning Representations, Toulon, France, 24–26 April 2017. [Google Scholar]
  16. Hinton, G.; Vinyals, O.; Dean, J. Distilling the knowledge in a neural network. arXiv 2015, arXiv:1503.02531. [Google Scholar]
  17. Gupta, S.; Agrawal, A.; Gopalakrishnan, K.; Narayanan, P. Deep learning with limited numerical precision. In Proceedings of the International Conference on Machine Learning, Lille, France, 6 July–11 July 2015; pp. 1737–1746. [Google Scholar]
  18. Lin, S.; Ji, R.; Chen, C.; Tao, D.; Luo, J. Holistic cnn compression via low-rank decomposition with knowledge transfer. IEEE Trans. Pattern Anal. Mach. Intell. 2018, 41, 2889–2905. [Google Scholar] [CrossRef] [PubMed]
  19. Huang, Z.; Wang, N. Data-driven sparse structure selection for deep neural networks. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018; pp. 304–320. [Google Scholar]
  20. Zhang, Y.; Zhen, Y.; He, Z.; Yen, G.G. Improvement of efficiency in evolutionary pruning. In Proceedings of the 2021 International Joint Conference on Neural Networks (IJCNN), Shenzhen, China, 18–22 July 2021; IEEE: Piscataway, NJ, USA, 2021; pp. 1–8. [Google Scholar]
  21. Liu, Y.; Cao, J.; Li, B.; Yuan, C.; Hu, W.; Li, Y.; Duan, Y. Knowledge distillation via instance relationship graph. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 7096–7104. [Google Scholar]
  22. Zhuang, B.; Shen, C.; Tan, M.; Liu, L.; Reid, I. Towards effective low-bitwidth convolutional neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 7920–7928. [Google Scholar]
  23. Jacob, B.; Kligys, S.; Chen, B.; Zhu, M.; Tang, M.; Howard, A.; Adam, H.; Kalenichenko, D. Quantization and training of neural networks for efficient integer-arithmetic-only inference. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 2704–2713. [Google Scholar]
  24. Horowitz, M. 1.1 computing’s energy problem (and what we can do about it). In Proceedings of the 2014 IEEE International Solid-State Circuits Conference Digest of Technical Papers (ISSCC), San Francisco, CA, USA, 9–13 February 2014; IEEE: Piscataway, NJ, USA, 2014; pp. 10–14. [Google Scholar]
  25. Wang, Y.; Huang, M.; Han, K.; Chen, H.; Zhang, W.; Xu, C.; Tao, D. AdderNet and its minimalist hardware design for energy-efficient artificial intelligence. arXiv 2021, arXiv:2101.10015. [Google Scholar]
  26. Valueva, M.V.; Nagornov, N.; Lyakhov, P.A.; Valuev, G.V.; Chervyakov, N.I. Application of the residue number system to reduce hardware costs of the convolutional neural network implementation. Math. Comput. Simul. 2020, 177, 232–243. [Google Scholar] [CrossRef]
  27. Chen, H.; Wang, Y.; Xu, C.; Shi, B.; Xu, C.; Tian, Q.; Xu, C. AdderNet: Do we really need multiplications in deep learning? In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020; pp. 1468–1477. [Google Scholar]
  28. Zhang, N.; Wang, G.; Wang, J.; Chen, H.; Liu, W.; Chen, L. All Adder Neural Networks for On-board Remote Sensing Scene Classification. IEEE Trans. Geosci. Remote Sens. 2023, 61, 5607916. [Google Scholar] [CrossRef]
  29. Zhang, Y.; Sun, B.; Jiang, W.; Ha, Y.; Hu, M.; Zhao, W. WSQ-AdderNet: Efficient Weight Standardization based Quantized AdderNet FPGA Accelerator Design with High-Density INT8 DSP-LUT Co-Packing Optimization. In Proceedings of the 41st IEEE/ACM International Conference on Computer-Aided Design, San Diego, CA, USA, 29 October–3 November 2022; pp. 1–9. [Google Scholar]
  30. Li, H.; Gu, H.; Han, Y.; Yang, J. Object-oriented classification of high-resolution remote sensing imagery based on an improved colour structure code and a support vector machine. Int. J. Remote Sens. 2010, 31, 1453–1470. [Google Scholar] [CrossRef]
  31. Cheng, G.; Han, J.; Lu, X. Remote sensing image scene classification: Benchmark and state of the art. Proc. IEEE 2017, 105, 1865–1883. [Google Scholar] [CrossRef]
  32. Zhu, Q.; Zhong, Y.; Zhao, B.; Xia, G.S.; Zhang, L. Bag-of-visual-words scene classifier with local and global features for high spatial resolution remote sensing imagery. IEEE Geosci. Remote Sens. Lett. 2016, 13, 747–751. [Google Scholar] [CrossRef]
  33. Zhao, B.; Zhong, Y.; Xia, G.S.; Zhang, L. Dirichlet-derived multiple topic scene classification model for high spatial resolution remote sensing imagery. IEEE Trans. Geosci. Remote Sens. 2015, 54, 2108–2123. [Google Scholar] [CrossRef]
  34. Wang, S.; Guan, Y.; Shao, L. Multi-granularity canonical appearance pooling for remote sensing scene classification. IEEE Trans. Image Process. 2020, 29, 5396–5407. [Google Scholar] [CrossRef] [PubMed]
  35. Wang, Q.; Huang, W.; Xiong, Z.; Li, X. Looking closer at the scene: Multiscale representation learning for remote sensing image scene classification. IEEE Trans. Neural Netw. Learn. Syst. 2020, 33, 1414–1428. [Google Scholar] [CrossRef]
  36. Sun, H.; Li, S.; Zheng, X.; Lu, X. Remote sensing scene classification by gated bidirectional network. IEEE Trans. Geosci. Remote Sens. 2019, 58, 82–96. [Google Scholar] [CrossRef]
  37. Wang, J.; Chen, H.; Ma, L.; Chen, L.; Gong, X.; Liu, W. Sphere Loss: Learning Discriminative Features for Scene Classification in a Hyperspherical Feature Space. IEEE Trans. Geosci. Remote Sens. 2021, 60, 5601819. [Google Scholar] [CrossRef]
  38. Bazi, Y.; Bashmal, L.; Rahhal, M.M.A.; Dayil, R.A.; Ajlan, N.A. Vision transformers for remote sensing image classification. Remote Sens. 2021, 13, 516. [Google Scholar] [CrossRef]
  39. Dosovitskiy, A.; Beyer, L.; Kolesnikov, A.; Weissenborn, D.; Zhai, X.; Unterthiner, T.; Dehghani, M.; Minderer, M.; Heigold, G.; Gelly, S.; et al. An image is worth 16 × 16 words: Transformers for image recognition at scale. arXiv 2020, arXiv:2010.11929. [Google Scholar]
  40. Sha, Z.; Li, J. MITformer: A multiinstance vision transformer for remote sensing scene classification. IEEE Geosci. Remote Sens. Lett. 2022, 19, 6510305. [Google Scholar] [CrossRef]
  41. Nagel, M.; Fournarakis, M.; Amjad, R.A.; Bondarenko, Y.; Van Baalen, M.; Blankevoort, T. A white paper on neural network quantization. arXiv 2021, arXiv:2106.08295. [Google Scholar]
  42. Liang, T.; Glossner, J.; Wang, L.; Shi, S.; Zhang, X. Pruning and quantization for deep neural network acceleration: A survey. Neurocomputing 2021, 461, 370–403. [Google Scholar] [CrossRef]
  43. Yuan, Z.; Xue, C.; Chen, Y.; Wu, Q.; Sun, G. Ptq4vit: Post-training quantization for vision transformers with twin uniform quantization. In Proceedings of the Computer Vision—ECCV 2022: 17th European Conference, Tel Aviv, Israel, 23–27 October 2022; Part XII. Springer: Cham, Switzerland, 2022; pp. 191–207. [Google Scholar]
  44. Li, Z.; Li, X.; Yang, L.; Zhao, B.; Song, R.; Luo, L.; Li, J.; Yang, J. Curriculum temperature for knowledge distillation. In Proceedings of the AAAI Conference on Artificial Intelligence, Washington, DC, USA, 7–14 February 2023; Volume 37, pp. 1504–1512. [Google Scholar]
  45. Paszke, A.; Gross, S.; Chintala, S.; Chanan, G.; Yang, E.; DeVito, Z.; Lin, Z.; Desmaison, A.; Antiga, L.; Lerer, A. Automatic differentiation in pytorch. In Proceedings of the 31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA, 4–9 December 2017. [Google Scholar]
  46. Sheng, G.; Yang, W.; Xu, T.; Sun, H. High-resolution satellite scene classification using a sparse coding based multiple feature combination. Int. J. Remote Sens. 2012, 33, 2395–2412. [Google Scholar] [CrossRef]
  47. Yang, Y.; Newsam, S. Bag-of-visual-words and spatial extensions for land-use classification. In Proceedings of the 18th SIGSPATIAL International Conference on Advances in Geographic Information Systems, San Jose, CA, USA, 2–5 November 2010; pp. 270–279. [Google Scholar]
  48. Zou, Q.; Ni, L.; Zhang, T.; Wang, Q. Deep learning based feature selection for remote sensing scene classification. IEEE Geosci. Remote Sens. Lett. 2015, 12, 2321–2325. [Google Scholar] [CrossRef]
  49. Xia, G.S.; Hu, J.; Hu, F.; Shi, B.; Bai, X.; Zhong, Y.; Zhang, L.; Lu, X. AID: A benchmark data set for performance evaluation of aerial scene classification. IEEE Trans. Geosci. Remote Sens. 2017, 55, 3965–3981. [Google Scholar] [CrossRef]
  50. Hu, Y.; Huang, X.; Luo, X.; Han, J.; Cao, X.; Zhang, J. Variational Self-Distillation for Remote Sensing Scene Classification. IEEE Trans. Geosci. Remote Sens. 2022, 60, 5627313. [Google Scholar] [CrossRef]
  51. Xu, K.; Deng, P.; Huang, H. Vision Transformer: An Excellent Teacher for Guiding Small Networks in Remote Sensing Image Scene Classification. IEEE Trans. Geosci. Remote Sens. 2022, 60, 5618715. [Google Scholar] [CrossRef]
  52. Courbariaux, M.; Bengio, Y.; David, J.P. Binaryconnect: Training deep neural networks with binary weights during propagations. In Proceedings of the Advances in Neural Information Processing Systems, Montreal, QC, USA, 7–12 December 2015; Volume 28. [Google Scholar]
  53. Loshchilov, I.; Hutter, F. Sgdr: Stochastic gradient descent with warm restarts. arXiv 2016, arXiv:1608.03983. [Google Scholar]
  54. Wei, X.; Chen, H.; Liu, W.; Xie, Y. Mixed-precision quantization for CNN-based remote sensing scene classification. IEEE Geosci. Remote Sens. Lett. 2020, 18, 1721–1725. [Google Scholar] [CrossRef]
  55. Wei, X.; Liu, W.; Chen, L.; Ma, L.; Chen, H.; Zhuang, Y. FPGA-based hybrid-type implementation of quantized neural networks for remote sensing applications. Sensors 2019, 19, 924. [Google Scholar] [CrossRef]
  56. Selvaraju, R.R.; Cogswell, M.; Das, A.; Vedantam, R.; Parikh, D.; Batra, D. Grad-cam: Visual explanations from deep networks via gradient-based localization. In Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017; pp. 618–626. [Google Scholar]
Figure 1. Energy and area overheads for different operations in 45 nm ASICs under 0.9 V.
Figure 2. Comparison of the CNN and A2NN structures: (a) The convolution kernel used in the CNN; (b) the adder kernel used in the A2NN; (c) framework of CNN; and (d) framework of A2NN.
Figure 3. Framework of the proposed SSDQ method tailored for the quantization of A2NNs.
Figure 4. An overview of the proposed WD strategy: (a) Parameter count histogram for the adder filter in the first layer of the well-trained A2NN-VGGNet-11 model; (b) the reason for the deviation in the quantized weight distribution; and (c) demonstration of the WD strategy.
Figure 5. Framework of the proposed FD strategy. The well-trained A2NN model is employed as the benchmark model to enhance the classification performance of the target Q-A2NN model by reducing the deviation among the output features of each layer.
Figure 6. OA of four Q-A2NN models with different hyperparameter values on the RSSCN7 data set.
Figure 7. Comprehensive performance comparison of different models on the WHU data set: (a) OA of the Q-CNN-ResNet-18-Nbit models and Q-A2NN-ResNet-18-Nbit models (SSDQ); and (b) comparisons of the OA and resource overhead of various models.
Figure 8. Confusion matrices of different models on the SIRI-WHU data set: (a) The CNN-VGGNet-11 model; (b) the Q-CNN-VGGNet-11-6bit model; (c) the AdderNet-VGGNet-11 model; (d) the BNN-VGGNet-11 model; (e) the A2NN-VGGNet-11 model; and (f) the Q-A2NN-VGGNet-11-6bit model (SSDQ).
Figure 9. Parameter count histograms of the adder filter or quantized adder filters in different models on the AID data set: (a) The 16th layer in the A2NN-ResNet-18 model; (b) the third layer in the A2NN-VGGNet-11 model; (c) the 16th layer in the Q-A2NN-ResNet-18-4bit model (POT); (d) the third layer in the Q-A2NN-VGGNet-11-4bit model (POT); (e) the 16th layer in the Q-A2NN-ResNet-18-4bit model (POT + WD); and (f) the third layer in the Q-A2NN-VGGNet-11-4bit model (POT + WD).
Figure 10. Visualization of the CAM results for eight scene classes of images randomly selected from the five public RSSC data sets.
Table 1. Information and settings of five RSSC data sets used in the experiments.
Data Set | WHU | UCM | SIRI-WHU | RSSCN7 | AID
Classes | 19 | 21 | 12 | 7 | 30
Total images | 1005 | 2100 | 2400 | 2800 | 10,000
Images per class | ∼50 | 100 | 200 | 400 | 220∼420
Training sample ratio | 0.8 | 0.8 | 0.4 | 0.2 | 0.2
Testing sample ratio | 0.2 | 0.2 | 0.6 | 0.8 | 0.8
Resolution (m) | up to 0.5 | 0.3 | 2 | - | 0.5∼8
Image size | 600 × 600 | 256 × 256 | 200 × 200 | 400 × 400 | 600 × 600
Data source | Google Earth | USGS | Google Earth | Google Earth | Google Earth
Table 2. Classification accuracies of various models on the five RSSC data sets [OA ± STD(%)].
Data Set | Backbone | Precision | CNN [28] | Q-CNN [54,55] | BNN [52] | AdderNet [27] | A2NN [28] | Q-A2NN (POT) (Ours) | Q-A2NN (SSDQ) (Ours)
UCM | ResNet-18 | Floating-point/Binarize * | 96.00 ± 0.66 | 95.95 | 50.67 ± 0.92 | 94.14 ± 0.93 | 95.62 ± 0.27 | 95.71 | 95.71
UCM | ResNet-18 | 8-bit | - | 95.79 ± 0.14 | - | - | - | 94.29 ± 0.24 | 95.40 ± 0.14
UCM | ResNet-18 | 6-bit | - | 95.56 ± 0.14 | - | - | - | 88.02 ± 0.27 | 95.40 ± 0.14
UCM | ResNet-18 | 4-bit | - | 95.32 ± 0.14 | - | - | - | 11.98 ± 1.79 | 95.16 ± 0.14
UCM | VGGNet-11 | Floating-point/Binarize | 96.10 ± 0.89 | 96.67 | 89.09 ± 0.57 | 94.67 ± 0.46 | 96.76 ± 0.55 | 96.9 | 96.9
UCM | VGGNet-11 | 8-bit | - | 97.14 ± 0 | - | - | - | 95.48 ± 0 | 97.14 ± 0.24
UCM | VGGNet-11 | 6-bit | - | 97.06 ± 0.14 | - | - | - | 67.54 ± 0.60 | 96.51 ± 0.14
UCM | VGGNet-11 | 4-bit | - | 94.68 ± 0.50 | - | - | - | 10.16 ± 0.90 | 94.84 ± 0.28
WHU | ResNet-18 | Floating-point/Binarize | 92.58 ± 0.87 | 91.22 | 60.29 ± 1.12 | 87.22 ± 0.63 | 90.24 ± 0.49 | 90.73 | 90.73
WHU | ResNet-18 | 8-bit | - | 90.73 ± 0 | - | - | - | 90.40 ± 0.28 | 92.20 ± 0
WHU | ResNet-18 | 6-bit | - | 90.24 ± 0 | - | - | - | 88.45 ± 0.28 | 91.71 ± 0
WHU | ResNet-18 | 4-bit | - | 89.60 ± 0.29 | - | - | - | 10.73 ± 0 | 90.73 ± 0
WHU | VGGNet-11 | Floating-point/Binarize | 91.90 ± 0.74 | 91.71 | 82.73 ± 1.56 | 89.46 ± 0.74 | 92.30 ± 0.80 | 92.2 | 92.2
WHU | VGGNet-11 | 8-bit | - | 90.57 ± 0.28 | - | - | - | 90.89 ± 0.28 | 91.71 ± 0
WHU | VGGNet-11 | 6-bit | - | 90.24 ± 0 | - | - | - | 80.49 ± 0.49 | 91.22 ± 0
WHU | VGGNet-11 | 4-bit | - | 90.24 ± 0.97 | - | - | - | 9.76 ± 0 | 91.22 ± 0.49
RSSCN7 | ResNet-18 | Floating-point/Binarize | 84.16 ± 0.60 | 83.71 | 62.70 ± 0.49 | 79.98 ± 0.98 | 82.42 ± 0.60 | 82.9 | 82.9
RSSCN7 | ResNet-18 | 8-bit | - | 83.52 ± 0.63 | - | - | - | 83.20 ± 0.09 | 83.36 ± 0.05
RSSCN7 | ResNet-18 | 6-bit | - | 83.54 ± 0.09 | - | - | - | 70.61 ± 0.14 | 83.18 ± 0.11
RSSCN7 | ResNet-18 | 4-bit | - | 81.85 ± 0.16 | - | - | - | 20.64 ± 0.20 | 81.40 ± 0.18
RSSCN7 | VGGNet-11 | Floating-point/Binarize | 82.12 ± 0.42 | 82.19 | 78.03 ± 0.62 | 79.98 ± 0.82 | 83.29 ± 0.45 | 83.08 | 83.08
RSSCN7 | VGGNet-11 | 8-bit | - | 82.78 ± 0.07 | - | - | - | 83.62 ± 0.28 | 83.91 ± 0.14
RSSCN7 | VGGNet-11 | 6-bit | - | 82.66 ± 0.05 | - | - | - | 54.40 ± 0.49 | 83.96 ± 0.09
RSSCN7 | VGGNet-11 | 4-bit | - | 80.01 ± 0.29 | - | - | - | 17.49 ± 0 | 81.09 ± 0.30
AID | ResNet-18 | Floating-point/Binarize | 85.29 ± 0.56 | 84.74 | 42.67 ± 0.22 | 77.58 ± 0.63 | 79.15 ± 0.32 | 78.85 | 78.85
AID | ResNet-18 | 8-bit | - | 84.75 ± 0.03 | - | - | - | 79.33 ± 0.03 | 79.55 ± 0.10
AID | ResNet-18 | 6-bit | - | 84.75 ± 0.03 | - | - | - | 69.47 ± 0.54 | 79.27 ± 0.13
AID | ResNet-18 | 4-bit | - | 83.07 ± 0.03 | - | - | - | 5.64 ± 1.65 | 76.60 ± 0.05
AID | VGGNet-11 | Floating-point/Binarize | 83.28 ± 0.59 | 83.06 | 66.65 ± 0.85 | 81.28 ± 0.65 | 83.46 ± 0.23 | 83.36 | 83.36
AID | VGGNet-11 | 8-bit | - | 83.30 ± 0.08 | - | - | - | 84.49 ± 0.01 | 83.92 ± 0.07
AID | VGGNet-11 | 6-bit | - | 83.13 ± 0.08 | - | - | - | 72.00 ± 0.03 | 83.87 ± 0.11
AID | VGGNet-11 | 4-bit | - | 80.29 ± 0.07 | - | - | - | 5.14 ± 0.51 | 78.06 ± 0.09
SIRI-WHU | ResNet-18 | Floating-point/Binarize | 91.90 ± 0.24 | 91.94 | 57.07 ± 1.45 | 86.22 ± 0.91 | 89.61 ± 0.24 | 89.86 | 89.86
SIRI-WHU | ResNet-18 | 8-bit | - | 91.74 ± 0.07 | - | - | - | 89.65 ± 0.07 | 89.81 ± 0.08
SIRI-WHU | ResNet-18 | 6-bit | - | 91.58 ± 0.04 | - | - | - | 76.78 ± 0.14 | 89.05 ± 0.04
SIRI-WHU | ResNet-18 | 4-bit | - | 89.74 ± 0.15 | - | - | - | 16.41 ± 0.87 | 87.59 ± 0.23
SIRI-WHU | VGGNet-11 | Floating-point/Binarize | 87.53 ± 0.40 | 87.64 | 80.56 ± 0.21 | 86.68 ± 0.73 | 88.80 ± 0.61 | 88.33 | 88.33
SIRI-WHU | VGGNet-11 | 8-bit | - | 87.78 ± 0 | - | - | - | 89.17 ± 0.07 | 89.05 ± 0.23
SIRI-WHU | VGGNet-11 | 6-bit | - | 87.45 ± 0.11 | - | - | - | 31.69 ± 1.51 | 89.40 ± 0.35
SIRI-WHU | VGGNet-11 | 4-bit | - | 83.89 ± 0.24 | - | - | - | 11.39 ± 0.84 | 85.86 ± 0.21
* For the CNN, AdderNet, and A2NN models, this precision refers to floating-point type; for the BNN model, this precision refers to binary type.
Table 3. Computational and memory overheads of different models.
Backbone | Basic Network | Add (Major Layers) | Mul (Major Layers) | XNOR (Major Layers) | MACs (Other Layers) | OPs | Params | Memory Size (Major Layers) * | Memory Size (Other Layers) *
ResNet-18 | CNN [28] | 1.81 G | 1.81 G | 0 | 4.98 M | 3.63 G | 42.79 MB | 42.66 MB | 0.04 MB
ResNet-18 | AdderNet [27] | 3.56 G | 59.01 M | 0 | 4.98 M | 3.63 G | 42.79 MB | 42.66 MB | 0.04 MB
ResNet-18 | A2NN [28] | 3.62 G | 0 | 0 | 4.98 M | 3.63 G | 42.79 MB | 42.66 MB | 0.04 MB
ResNet-18 | BNN [52] | 1.81 G | 0 | 1.81 G | 4.98 M | 3.63 G | 1.37 MB | 1.33 MB | 0.04 MB
ResNet-18 | Q-CNN-8bit [54,55] | 1.81 G | 1.81 G | 0 | 4.98 M | 3.63 G | 10.70 MB | 10.66 MB | 0.04 MB
ResNet-18 | Q-CNN-6bit [54,55] | 1.81 G | 1.81 G | 0 | 4.98 M | 3.63 G | 8.04 MB | 8.00 MB | 0.04 MB
ResNet-18 | Q-CNN-4bit [54,55] | 1.81 G | 1.81 G | 0 | 4.98 M | 3.63 G | 5.37 MB | 5.33 MB | 0.04 MB
ResNet-18 | Q-A2NN-8bit (POT) (ours) | 3.62 G | 0 | 0 | 4.98 M | 3.63 G | 10.70 MB | 10.66 MB | 0.04 MB
ResNet-18 | Q-A2NN-6bit (POT) (ours) | 3.62 G | 0 | 0 | 4.98 M | 3.63 G | 8.04 MB | 8.00 MB | 0.04 MB
ResNet-18 | Q-A2NN-4bit (POT) (ours) | 3.62 G | 0 | 0 | 4.98 M | 3.63 G | 5.37 MB | 5.33 MB | 0.04 MB
ResNet-18 | Q-A2NN-8bit (SSDQ) (ours) | 3.62 G | 0 | 0 | 4.98 M | 3.63 G | 10.70 MB | 10.66 MB | 0.04 MB
ResNet-18 | Q-A2NN-6bit (SSDQ) (ours) | 3.62 G | 0 | 0 | 4.98 M | 3.63 G | 8.04 MB | 8.00 MB | 0.04 MB
ResNet-18 | Q-A2NN-4bit (SSDQ) (ours) | 3.62 G | 0 | 0 | 4.98 M | 3.63 G | 5.37 MB | 5.33 MB | 0.04 MB
VGGNet-11 | CNN [28] | 7.49 G | 7.49 G | 0 | 14.85 M | 15.00 G | 35.24 MB | 35.22 MB | 0.02 MB
VGGNet-11 | AdderNet [27] | 14.93 G | 43.36 M | 0 | 14.85 M | 15.00 G | 35.24 MB | 35.22 MB | 0.02 MB
VGGNet-11 | A2NN [28] | 14.97 G | 0 | 0 | 14.85 M | 15.00 G | 35.24 MB | 35.22 MB | 0.02 MB
VGGNet-11 | BNN [52] | 7.49 G | 0 | 7.49 G | 14.85 M | 15.00 G | 1.12 MB | 1.10 MB | 0.02 MB
VGGNet-11 | Q-CNN-8bit [54,55] | 7.49 G | 7.49 G | 0 | 14.85 M | 15.00 G | 8.83 MB | 8.81 MB | 0.02 MB
VGGNet-11 | Q-CNN-6bit [54,55] | 7.49 G | 7.49 G | 0 | 14.85 M | 15.00 G | 6.62 MB | 6.60 MB | 0.02 MB
VGGNet-11 | Q-CNN-4bit [54,55] | 7.49 G | 7.49 G | 0 | 14.85 M | 15.00 G | 4.42 MB | 4.40 MB | 0.02 MB
VGGNet-11 | Q-A2NN-8bit (POT) (ours) | 14.97 G | 0 | 0 | 14.85 M | 15.00 G | 8.83 MB | 8.81 MB | 0.02 MB
VGGNet-11 | Q-A2NN-6bit (POT) (ours) | 14.97 G | 0 | 0 | 14.85 M | 15.00 G | 6.62 MB | 6.60 MB | 0.02 MB
VGGNet-11 | Q-A2NN-4bit (POT) (ours) | 14.97 G | 0 | 0 | 14.85 M | 15.00 G | 4.42 MB | 4.40 MB | 0.02 MB
VGGNet-11 | Q-A2NN-8bit (SSDQ) (ours) | 14.97 G | 0 | 0 | 14.85 M | 15.00 G | 8.83 MB | 8.81 MB | 0.02 MB
VGGNet-11 | Q-A2NN-6bit (SSDQ) (ours) | 14.97 G | 0 | 0 | 14.85 M | 15.00 G | 6.62 MB | 6.60 MB | 0.02 MB
VGGNet-11 | Q-A2NN-4bit (SSDQ) (ours) | 14.97 G | 0 | 0 | 14.85 M | 15.00 G | 4.42 MB | 4.40 MB | 0.02 MB
* The memory size varies slightly for different data sets; here, the AID data set is used as an example.
Table 4. Comparison results on the WHU data sets [Pre/Rec/F1 (%)].
Backbone | Precision | Q-CNN [54,55] | Q-A2NN (POT) (Ours) | Q-A2NN (SSDQ) (Ours)
ResNet-18 | Floating-point | 92.13/91.23/91.27 | 91.81/90.91/90.95 | 91.81/90.91/90.95
ResNet-18 | 8-bit | 91.57/90.75/90.76 | 92.10/90.43/90.83 | 93.08/92.31/92.47
ResNet-18 | 6-bit | 91.58/90.75/90.76 | 89.40/88.47/88.61 | 92.55/91.83/91.97
ResNet-18 | 4-bit | 90.54/89.74/89.79 | 4.40/10.37/5.0 | 91.38/90.96/91.0
VGGNet-11 | Floating-point | 92.50/91.75/91.84 | 92.93/92.33/92.15 | 92.93/92.33/92.15
VGGNet-11 | 8-bit | 91.31/90.84/90.64 | 91.36/90.89/90.35 | 92.01/91.89/91.58
VGGNet-11 | 6-bit | 90.52/90.35/90.27 | 83.79/80.30/80.13 | 91.90/91.41/91.22
VGGNet-11 | 4-bit | 91.22/90.48/90.18 | 4.47/9.39/3.54 | 91.88/91.48/91.23
Table 5. Ablation study on the proposed SSDQ method [OA ± STD (%)].
Data Set | Backbone | Basic Network | Floating-Point | 10-bit | 8-bit | 7-bit | 6-bit | 5-bit | 4-bit
UCM | ResNet-18 | A2NN | 95.71 | - | - | - | - | - | -
UCM | ResNet-18 | Q-A2NN (POT) | - | 95.56 ± 0.13 | 94.29 ± 0.24 | 92.54 ± 0.14 | 88.02 ± 0.27 | 11.98 ± 1.79 | 11.98 ± 1.79
UCM | ResNet-18 | Q-A2NN (POT + FD) | - | 95.95 ± 0 | 95.24 ± 0 | 94.13 ± 0.36 | 90.48 ± 1.03 | 40.32 ± 2.08 | 12.94 ± 1.45
UCM | ResNet-18 | Q-A2NN (POT + WD) | - | 95.56 ± 0.13 | 94.29 ± 0.24 | 94.29 ± 0.24 | 93.97 ± 0.60 | 94.05 ± 0.24 | 93.57 ± 0.24
UCM | ResNet-18 | Q-A2NN (SSDQ) | - | 95.95 ± 0.24 | 95.40 ± 0.14 | 95 ± 0 | 95.40 ± 0.14 | 95.24 ± 0 | 95.16 ± 0.14
UCM | VGGNet-11 | A2NN | 96.9 | - | - | - | - | - | -
UCM | VGGNet-11 | Q-A2NN (POT) | - | 96.75 ± 0.13 | 95.48 ± 0 | 91.11 ± 0.14 | 67.54 ± 0.60 | 10.08 ± 0.72 | 10.16 ± 0.90
UCM | VGGNet-11 | Q-A2NN (POT + FD) | - | 96.67 ± 0 | 96.98 ± 0.36 | 94.76 ± 0.24 | 87.03 ± 0.84 | 11.67 ± 2.16 | 10.24 ± 0.24
UCM | VGGNet-11 | Q-A2NN (POT + WD) | - | 96.75 ± 0.13 | 95.48 ± 0 | 95.79 ± 0.14 | 96.35 ± 0.28 | 95.79 ± 0.14 | 94.60 ± 0.60
UCM | VGGNet-11 | Q-A2NN (SSDQ) | - | 96.67 ± 0 | 97.14 ± 0.24 | 96.51 ± 0.14 | 96.51 ± 0.14 | 96.03 ± 0.14 | 94.84 ± 0.28
WHU | ResNet-18 | A2NN | 90.73 | - | - | - | - | - | -
WHU | ResNet-18 | Q-A2NN (POT) | - | 90.57 ± 0.28 | 90.40 ± 0.28 | 91.38 ± 0.28 | 88.45 ± 0.28 | 72.68 ± 0 | 10.73 ± 0
WHU | ResNet-18 | Q-A2NN (POT + FD) | - | 91.87 ± 0.28 | 92.20 ± 0 | 92.68 ± 0.49 | 90.57 ± 0.74 | 79.51 ± 0.98 | 10.08 ± 0.28
WHU | ResNet-18 | Q-A2NN (POT + WD) | - | 90.73 ± 0 | 90.24 ± 0 | 91.38 ± 0.28 | 89.27 ± 0 | 91.55 ± 0.28 | 90.73 ± 0
WHU | ResNet-18 | Q-A2NN (SSDQ) | - | 91.87 ± 0.28 | 92.20 ± 0 | 92.68 ± 0.49 | 91.71 ± 0 | 92.36 ± 0.28 | 90.73 ± 0
WHU | VGGNet-11 | A2NN | 92.2 | - | - | - | - | - | -
WHU | VGGNet-11 | Q-A2NN (POT) | - | 91.54 ± 0.29 | 90.89 ± 0.28 | 87.48 ± 0.28 | 80.49 ± 0.49 | 9.92 ± 0.28 | 9.76 ± 0
WHU | VGGNet-11 | Q-A2NN (POT + FD) | - | 91.71 ± 0 | 91.71 ± 0 | 89.11 ± 0.28 | 86.01 ± 0.28 | 10.08 ± 0.28 | 9.76 ± 0
WHU | VGGNet-11 | Q-A2NN (POT + WD) | - | 91.87 ± 0.28 | 90.89 ± 0.28 | 87.48 ± 0.28 | 90.40 ± 0.28 | 92.04 ± 0.28 | 91.06 ± 0.28
WHU | VGGNet-11 | Q-A2NN (SSDQ) | - | 91.71 ± 0 | 91.71 ± 0 | 89.11 ± 0.28 | 91.22 ± 0 | 91.71 ± 0 | 91.22 ± 0.49
RSSCN7 | ResNet-18 | A2NN | 82.9 | - | - | - | - | - | -
RSSCN7 | ResNet-18 | Q-A2NN (POT) | - | 83.68 ± 0.05 | 83.20 ± 0.09 | 82.40 ± 0.07 | 70.61 ± 0.14 | 40.61 ± 0.13 | 20.64 ± 0.20
RSSCN7 | ResNet-18 | Q-A2NN (POT + FD) | - | 83.30 ± 0.14 | 83.21 ± 0.05 | 83.24 ± 0.05 | 77.90 ± 0.12 | 50.25 ± 0.14 | 24.84 ± 0.22
RSSCN7 | ResNet-18 | Q-A2NN (POT + WD) | - | 83.68 ± 0.05 | 83.20 ± 0.09 | 82.40 ± 0.07 | 83.32 ± 0.32 | 81.90 ± 0.07 | 79.70 ± 0.27
RSSCN7 | ResNet-18 | Q-A2NN (SSDQ) | - | 83.36 ± 0.11 | 83.36 ± 0.05 | 83.29 ± 0.17 | 83.18 ± 0.11 | 82.69 ± 0.02 | 81.40 ± 0.18
RSSCN7 | VGGNet-11 | A2NN | 83.08 | - | - | - | - | - | -
RSSCN7 | VGGNet-11 | Q-A2NN (POT) | - | 84.49 ± 0.29 | 83.62 ± 0.28 | 81.13 ± 0.28 | 54.40 ± 0.49 | 18.32 ± 0.28 | 17.49 ± 0
RSSCN7 | VGGNet-11 | Q-A2NN (POT + FD) | - | 84.27 ± 0.07 | 83.91 ± 0.14 | 82.75 ± 0.13 | 72.57 ± 0.33 | 19.70 ± 0.05 | 18.07 ± 0.05
RSSCN7 | VGGNet-11 | Q-A2NN (POT + WD) | - | 84.50 ± 0.07 | 83.63 ± 0.07 | 81.07 ± 0.05 | 84.55 ± 0.15 | 82.72 ± 0.09 | 80.77 ± 0.61
RSSCN7 | VGGNet-11 | Q-A2NN (SSDQ) | - | 84.30 ± 0.09 | 83.91 ± 0.14 | 82.75 ± 0.13 | 83.96 ± 0.09 | 83.16 ± 0.05 | 81.09 ± 0.30
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
