Article

Evaluation of Ten Deep-Learning-Based Out-of-Distribution Detection Methods for Remote Sensing Image Scene Classification

1 School of Geography and Ocean Science, Nanjing University, Nanjing 210023, China
2 Key Laboratory of Land and Ocean Safety Decision Technology, Ministry of Education, Nanjing University, Nanjing 210023, China
3 Situation Autonomous Awareness Integrated Research Platform for Key Technologies, Ministry of Education, Nanjing University, Nanjing 210023, China
4 Jiangsu Provincial Key Laboratory of Geographic Information Science and Technology, Nanjing 210023, China
5 Collaborative Innovation Center of South China Sea Studies, Nanjing 210023, China
* Author to whom correspondence should be addressed.
Remote Sens. 2024, 16(9), 1501; https://doi.org/10.3390/rs16091501
Submission received: 10 March 2024 / Revised: 19 April 2024 / Accepted: 21 April 2024 / Published: 24 April 2024

Abstract

Although deep neural networks have made significant progress in tasks related to remote sensing image scene classification, most of these tasks assume that the training and test data are independently and identically distributed. However, when remote sensing scene classification models are deployed in the real world, the model will inevitably encounter situations where the distribution of the test set differs from that of the training set, leading to unpredictable errors during the inference and testing phase. For instance, in the context of large-scale remote sensing scene classification applications, it is difficult to obtain all the feature classes in the training phase. Consequently, during the inference and testing phases, the model will categorize images of unidentified unknown classes into known classes. Therefore, the deployment of out-of-distribution (OOD) detection within the realm of remote sensing scene classification is crucial for ensuring the reliability and safety of model application in real-world scenarios. Despite significant advancements in OOD detection methods in recent years, there remains a lack of a unified benchmark for evaluating various OOD methods specifically in remote sensing scene classification tasks. We designed different benchmarks on three classical remote sensing datasets to simulate scenes with different distributional shifts. Ten different types of OOD detection methods were employed, and their performance was evaluated and compared using quantitative metrics. Numerous experiments were conducted to evaluate the overall performance of these state-of-the-art OOD detection methods under different test benchmarks. The comparative results show that the virtual-logit matching methods without additional training outperform the other types of methods on our benchmarks, suggesting that additional training methods are unnecessary for remote sensing image scene classification applications. Furthermore, we provide insights into OOD detection models and how their performance can be enhanced in the real world. To the best of our knowledge, this study is the first evaluation and analysis of methods for detecting out-of-distribution data in remote sensing. We hope that this research will serve as a fundamental resource for future studies on out-of-distribution detection in remote sensing.

1. Introduction

In the past five years, the field of remote sensing image scene classification has seen significant advancements through the use of deep-learning-based methods [1,2]. The goal of remote sensing image scene classification is to convert satellite images into clear, structured semantics that automatically identify the type of land and how it is used, such as for residential or industrial areas [3]. This technique is crucial for analyzing aerial and satellite images to categorize them into specific types of land use and land cover (LULC) based on what is in the image [4,5]. However, traditional models for remote sensing scene classification, which primarily depend on supervised learning, face several challenges. These methods usually train on closed datasets and struggle to correctly identify rare or previously unseen types of land cover in the real world. When encountering unfamiliar land covers, these models tend to misclassify these anomalies into existing categories with high confidence [6,7,8], leading to inaccurate scene classification. Figure 1 illustrates a case in remote sensing scene classification: when models trained on urban datasets confront unknown land cover types, they tend to over-assign confidence to unknown classes, limiting the reliability and safety of the model in real-world applications [9,10].
Supervised-learning-based remote sensing scene classification models are based on the closed-world assumption [11,12], that is, the test data are assumed to be independently and identically distributed with respect to the training data [13], a situation referred to as in-distribution (ID). However, when the model is deployed in an open real-world scenario, the test data may come from a distribution different from that of the training dataset, referred to as out-of-distribution (OOD) [13]. For large-scale remote sensing scene classification tasks, it is common for the distributions of training and test sets to exhibit shifts [14]. Given the intricate nature of surface categories across diverse landscapes, models are prone to encountering semantic shifts during their application and deployment phases [13]. Additionally, domain shift [15,16] occurs in the distribution of remote sensing images collected across different datasets, owing to sensor differences and geographical disparities. We illustrate the concepts of semantic shift and domain shift in Figure 2. In these situations, the model tends to assign excessively high confidence levels, raising security concerns [17].
Over the past five years, numerous OOD detection methods have been proposed to ensure the safety and reliability of models [13]. The goal of OOD detection is to detect samples to which the model cannot generalize [18]. Currently, the main OOD detection methods can be categorized as post hoc [19,20,21,22,23], training-time regularization [24,25], training with outlier exposure [26,27,28], and model uncertainty [29,30,31]. However, minimal attention has been given to OOD detection in remote sensing scene classification tasks. Previous research has focused on semantic shift due to the presence of new categories in the test set and addressed it using open-set-recognition (OSR) methods [32,33,34]. Al Rahhal et al. [35] proposed an end-to-end learning approach based on vision transformers and employed energy-based learning to jointly model the class labels and data distribution. Liu et al. [14] proposed a new loss function based on prototype learning and uncertainty measurement to enhance the interclass discrimination and intraclass compactness of the learned deep features. Gawlikowski et al. [16] developed a model based on a Dirichlet prior network to quantify the distributional uncertainty of deep-learning-based remote sensing models, utilizing this approach for OOD detection.
However, to the best of our knowledge, there is no unified benchmark for comparing and analyzing the effectiveness of various types of state-of-the-art OOD detection methods applied to remote sensing scene classification tasks, which leads to unfair comparisons and uncertain results. First, traditional evaluation benchmarks for OOD detection are not applicable to OOD detection on remote sensing imagery, as these benchmarks are designed for general image datasets [36,37]. Moreover, remote sensing images from different datasets not only encounter semantic shifts but also face domain shifts due to variations in sensors and spatial distributions [38]. In addition, the performance of OOD detection methods varies widely across different datasets and benchmark comparisons [28,39]. Many simple comparison benchmarks for general image datasets are close to saturation, rendering further improvements insignificant [18,40]. Therefore, it is crucial to design an out-of-distribution (OOD) detection benchmark and accurately assess the performance of existing methods on remote sensing datasets.
In this paper, we present the following key contributions:
  • We establish benchmarks for evaluating OOD detection in remote sensing scene classification, using ResNet-50 as the backbone for all methods;
  • We assess the effectiveness of various OOD detection methods across challenging datasets like AID, UCM, and EuroSAT, using metrics such as AUROC, FPR@95, and AUPR;
  • We analyze performance disparities and challenges in applying these OOD detection methods to large-scale scene classification.

2. Methodology

In this section, we introduce the concept of OOD detection and its related concepts and describe the selected OOD detection methods and evaluation metrics. We prioritized open-source code packages so that performance could be quantitatively evaluated and compared across different benchmarks. The selected methods align with the four most prominent research directions for OOD detection, as categorized in Table 1. Additionally, we present three benchmark remote sensing datasets and a backbone for scene classification and evaluate the performance of the models. Implementation details are provided in the final part of this section.

2.1. Definition and Related Concepts

2.1.1. Definition

Remote sensing scene classification is a typical supervised multi-class classification task [2,48]. For the remainder of this paper, we assume that $\mathcal{X}$ represents the input space of the remote sensing images and $\mathcal{Y} = \{1, 2, \ldots, C\}$ represents their label space. The training data can then be represented as $\mathcal{D}_{\text{in}} = \{(x_i, y_i)\}_{i=1}^{n}$, its joint distribution as $P_{\mathcal{X}\mathcal{Y}}$, and the marginal distribution over inputs as $P_{\text{in}}$. A model trained on these data is a mapping $f: \mathcal{X} \to \mathbb{R}^{|\mathcal{Y}|}$, whose output is the logit vector $z$ used to produce the model's predictions.
We aim to deploy the model for remote sensing scene classification in the real world with a trustworthy OOD detector that not only accurately classifies data from the distribution $P_{\mathcal{X}\mathcal{Y}}$ (ID) but also recognizes data that do not belong to that distribution (OOD). The problem is expressed as a binary classification problem; that is, at test time, the model must determine whether the input $x \in \mathcal{X}$ is from $P_{\text{in}}$. This can be expressed as follows:
$$G_{\lambda}(x) = \begin{cases} \text{ID} & S(x) \geq \lambda \\ \text{OOD} & S(x) < \lambda \end{cases}$$
where $S(x)$ represents the score of the sample; samples with scores above a threshold $\lambda$ are classified as ID, and otherwise as OOD. Scene classification models should not make predictions for OOD samples, as they have no corresponding label in $\mathcal{Y}$.
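To make this decision rule concrete, the following minimal Python sketch (our own illustration, not part of any released implementation) applies a generic score function and a threshold to a batch of logits; score_fn and lam are placeholders for the scoring functions and thresholds discussed in Section 2.2.

```python
import torch

def ood_decision(logits: torch.Tensor, score_fn, lam: float):
    """Label each sample as ID or OOD by thresholding a scalar score S(x).

    logits   -- (N, C) model outputs for a batch of test images
    score_fn -- maps logits to a per-sample score; higher means more ID-like
    lam      -- threshold lambda, typically chosen on a validation set
    """
    scores = score_fn(logits)                    # shape (N,)
    return ["ID" if s >= lam else "OOD" for s in scores.tolist()]
```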
To better evaluate the performance of different models on remote sensing scene classification and examine the labels of OOD samples, we initially segmented the whole data space $\mathcal{D}$ into four subspaces: $\mathcal{D}_{\text{ID}}$, $\mathcal{D}_{\text{Simi-OOD}}$, $\mathcal{D}_{\text{Near-OOD}}$, and $\mathcal{D}_{\text{Far-OOD}}$. This division simplifies analysis by organizing samples into categories based on their connection to certain distributions. Each category shows a different level of uncertainty, from low to high. Our approach to segmenting the data and defining these categories draws from the methodology proposed by Liang et al. [19].
  • ($\mathcal{D}_{\text{ID}}$). Let $X$ denote the input, $X \in \mathcal{D}$. $\{f(X, \theta) : \theta \in \Theta\}$ is a family of density functions on $\mathcal{D}$, where $\theta$ is the parameter and $\Theta$ denotes all the possible parameters that could generate samples in $\mathcal{D}$. $\mathcal{Y} = \{1, 2, \ldots, C\}$ represents the label space of the remote sensing images. Given a subset $\Theta_0 \subset \Theta$, we define the ID data space as:
    $\mathcal{D}_{\text{ID}} := \left\{ (X, y) \in \mathcal{D} \times \mathcal{Y} : \exists\, \theta \in \Theta_0,\ X = \int_{\mathcal{D}} u f(u, \theta)\, \mathrm{d}u \right\}$
  • ($\mathcal{D}_{\text{Simi-OOD}}$). We define the Simi-OOD data space as:
    $\mathcal{D}_{\text{Simi-OOD}} := \left\{ (X, y) \in \mathcal{D} \times \mathcal{Y}_{\text{Simi-OOD}} : \exists\, \theta \in \Theta \setminus \Theta_0,\ X = \int_{\mathcal{D}} u f(u, \theta)\, \mathrm{d}u \right\}$,
    where $\mathcal{Y}_{\text{Simi-OOD}} \subseteq \mathcal{Y}$.
  • ($\mathcal{D}_{\text{Near-OOD}}$). We define the Near-OOD data space as:
    $\mathcal{D}_{\text{Near-OOD}} := \left\{ (X, y) \in \mathcal{D} \times \mathcal{Y}_{\text{Near-OOD}} : \exists\, \theta \in \Theta \setminus \Theta_0,\ X = \int_{\mathcal{D}} u f(u, \theta)\, \mathrm{d}u \right\}$,
    where $\mathcal{Y}_{\text{Near-OOD}} \cap \mathcal{Y} = \emptyset$.
  • ($\mathcal{D}_{\text{Far-OOD}}$). We define the Far-OOD data space as:
    $\mathcal{D}_{\text{Far-OOD}} := \mathcal{D} \setminus \left( \mathcal{D}_{\text{ID}} \cup \mathcal{D}_{\text{Simi-OOD}} \cup \mathcal{D}_{\text{Near-OOD}} \right)$
For example, in UCM image scene classification, $\mathcal{D}$ is the collection of all possible 256 × 256 images and $\mathcal{D}_{\text{ID}}$ is UCM. If we consider each land cover category as a sample from a distribution, then $\{f(X, \theta) : \theta \in \Theta\}$ is the collection of all the distributions with land use labels as their expectations. Since UCM consists of 21 land cover categories, the density functions of UCM form a subset of $\{f(X, \theta) : \theta \in \Theta\}$, that is, $\{f(X, \theta) : \theta \in \Theta_0 \subset \Theta\}$.
In the context of remote sensing scene classification with the UCM dataset, $\mathcal{D}_{\text{Simi-OOD}}$ consists of remote sensing images similar to UCM but with different styles, $\mathcal{D}_{\text{Near-OOD}}$ includes remote sensing images whose classes do not appear in UCM, and $\mathcal{D}_{\text{Far-OOD}}$ comprises images unrelated to land cover or land use. Labels for OOD data are inaccessible.

2.1.2. Related Concepts

The domains pertinent to Out-of-Distribution (OOD) detection encompass Open Set Recognition (OSR), Outlier Detection (OD), and One-Class Classification (OCC). A schematic representation delineating the conceptual distinctions among these domains is provided in Figure 3. We elucidate the specific differences between these concepts in the following.
  • OOD detection vs. Open Set Recognition (OSR): In the context of remote sensing image scene classification tasks, OOD Detection and Open Set Recognition (OSR) share common ground, as both are concerned with identifying data points that deviate from the known distribution of training data. OSR focuses on distinguishing between known and unknown classes within classification problems, while OOD Detection involves a broader spectrum of learning tasks and extensive solution space;
  • OOD detection vs. Outlier Detection (OD): In remote sensing outlier detection, all data are presented at once rather than following the conventional train–test paradigm; this setting aligns with the framework of OOD detection when the principal data distribution is designated as ID;
  • OOD detection vs. One-Class Classification (OCC): In remote sensing one-class classification, normal or ID images are in one category; conversely, test images with semantic shift are classified as OOD, indicating they deviate from the norm.

2.2. Evaluating Methods

2.2.1. Post Hoc Methods

Post hoc methods do not require additional models or training data. These methods directly utilize the parameters of the original model to determine whether a sample x is from $P_{\text{in}}$. The advantages of this class of methods lie in their time efficiency and ease of use in practical production environments. Three widely used methods of this class were selected for evaluation.
Maximum Softmax Probability (MSP) [17] is the simplest baseline method. This method detects OOD samples based on maximum softmax category probability.
$$S_{\text{MSP}}(x) = \max_{i} \frac{e^{z_i}}{\sum_{c=1}^{C} e^{z_c}}$$
The MSP method performs better when the difference between ID and OOD is large. However, when the difference between ID and OOD is small, this method may classify samples overconfidently owing to pretrained neural networks, which limits its detection performance.
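For illustration, a minimal PyTorch sketch of the MSP score is given below; it can be plugged directly into the thresholding rule sketched in Section 2.1.1.

```python
import torch.nn.functional as F

def msp_score(logits):
    """Maximum softmax probability: S_MSP(x) = max_c softmax(z)_c."""
    return F.softmax(logits, dim=1).max(dim=1).values
```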
Virtual-logit matching (VIM) [41] responds to diverse OOD samples by combining multiple inputs. The method first defines a virtual logit $l_0$ to generalize the common logits. The residual subspace is taken to be the orthogonal complement $P^{\perp}$ of the $D$-dimensional principal subspace $P$ constructed from all training sample features. The larger the projection onto $P^{\perp}$, the more likely the sample is OOD.
$$l_0 := \alpha \left\| x^{P^{\perp}} \right\|$$
The obtained $l_0$ is combined with the other logits in a softmax to obtain the final predicted probability $p_0$ for each class; $l_0$ corresponds to $p_0$, which is the probability that the sample is OOD. Denoting the matrix whose columns form an orthonormal basis of $P^{\perp}$ as $R \in \mathbb{R}^{N \times (N-D)}$, the complete expression is as follows:
$$S_{\text{VIM}}(x) = \frac{e^{\alpha \sqrt{x^{T} R R^{T} x}}}{\sum_{i=1}^{C} e^{l_i} + e^{\alpha \sqrt{x^{T} R R^{T} x}}}$$
where α is the matching coefficient.
$$\alpha := \frac{\sum_{i=1}^{K} \max_{j=1,\ldots,C} l_{j}^{i}}{\sum_{i=1}^{K} \left\| x_{i}^{P^{\perp}} \right\|}$$
This method demonstrates better overall performance on various types of datasets, does not require additional data for retraining, and offers a good degree of convenience.
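The following NumPy sketch outlines the VIM pipeline under simplifying assumptions: penultimate-layer features and logits are assumed to be pre-extracted, the feature-centering offset used in the original implementation is omitted, and the principal-subspace dimension d is a hyperparameter.

```python
import numpy as np

def fit_vim(train_feats, train_logits, d=64):
    """Fit the residual subspace R and matching coefficient alpha for VIM.

    train_feats  -- (N, m) penultimate-layer features of ID training data
    train_logits -- (N, C) corresponding logits
    d            -- dimension of the principal subspace (hyperparameter)
    """
    cov = np.cov(train_feats, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)                      # ascending order
    R = eigvecs[:, : train_feats.shape[1] - d]                  # residual directions
    residual_norms = np.linalg.norm(train_feats @ R, axis=1)
    # alpha matches the scale of the virtual logit to the ID logits.
    alpha = train_logits.max(axis=1).sum() / residual_norms.sum()
    return R, alpha

def vim_score(feats, logits, R, alpha):
    """ID-score: negative softmax probability of the virtual logit l0."""
    l0 = alpha * np.linalg.norm(feats @ R, axis=1)              # virtual logit
    all_logits = np.concatenate([logits, l0[:, None]], axis=1)
    all_logits -= all_logits.max(axis=1, keepdims=True)         # numerical stability
    probs = np.exp(all_logits) / np.exp(all_logits).sum(axis=1, keepdims=True)
    return -probs[:, -1]                                        # higher = more ID-like
```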
Deep Nearest Neighbors (KNN) [42] utilizes a non-parametric nearest neighbor approach for OOD detection. It employs the normalized penultimate feature vector $z = \phi(x) / \|\phi(x)\|_2$, where $\phi: \mathcal{X} \to \mathbb{R}^{m}$ is a feature encoder. During testing, the normalized feature vector $z^*$ of a test sample $x^*$ is derived, and the Euclidean distances $\|z_i - z^*\|_2$ are calculated with respect to the embedding vectors $z_i \in \mathbb{Z}_n$, where $\mathbb{Z}_n$ represents the embedding set of the training data. The sequence $\mathbb{Z}_n$ is reordered by increasing distance $\|z_i - z^*\|_2$, denoted as $\mathbb{Z}_n = (z_{(1)}, z_{(2)}, \ldots, z_{(n)})$. The decision function for OOD detection is based on the negative distance to the k-th nearest neighbor:
$$S_{\text{KNN}}(x^*) = -\left\| z^* - z_{(k)} \right\|_2$$
The advantages of KNN-based OOD detection include distributional assumption-free testing, independence from unknown data information, user-friendly operation, and applicability to diverse model architectures.
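A minimal PyTorch sketch of the KNN score follows, assuming pre-extracted penultimate features; the neighbour rank k is a hyperparameter.

```python
import torch
import torch.nn.functional as F

def knn_score(test_feats, train_feats, k=50):
    """Negative distance to the k-th nearest normalized training feature.

    test_feats  -- (M, m) penultimate features of test samples
    train_feats -- (N, m) penultimate features of ID training samples
    Higher score = closer to the training set = more ID-like.
    """
    z_train = F.normalize(train_feats, dim=1)        # z = phi(x) / ||phi(x)||_2
    z_test = F.normalize(test_feats, dim=1)
    dists = torch.cdist(z_test, z_train)             # (M, N) Euclidean distances
    return -dists.kthvalue(k, dim=1).values          # distance to k-th neighbour
```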

2.2.2. Training-Time Regularization Methods

The training-time regularization class introduces additional setup conditions on top of the original model to solve the OOD detection problem using training-time regularization.
ConfBranch [43] incorporates an additional confidence branch to estimate a confidence $c$ and uses $c$ as $S_{\text{ConfBranch}}(x)$:
$$p, c = f(x, \theta)$$
The softmax prediction probability $p_i$ is adjusted using the confidence $c$ to obtain the new prediction probability $p_i'$:
$$p_i' = c \cdot p_i + (1 - c)\, y_i$$
With this method, the model can effectively learn the decision boundaries of ID samples, enabling OOD detection.
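The confidence-branch objective can be sketched as follows; the model is assumed to return class probabilities p and a per-sample confidence c, and details such as the confidence budget scheduling of the original method are omitted.

```python
import torch

def conf_branch_loss(p, c, y_onehot, lam=0.1, eps=1e-8):
    """ConfBranch objective: task loss on confidence-adjusted predictions plus
    a penalty that discourages relying on hints (low confidence).

    p        -- (N, C) softmax probabilities from the classification head
    c        -- (N, 1) confidence in [0, 1] from the confidence branch
    y_onehot -- (N, C) one-hot ground-truth labels
    """
    p_adj = c * p + (1.0 - c) * y_onehot                       # p_i' = c*p_i + (1-c)*y_i
    task_loss = -(y_onehot * torch.log(p_adj + eps)).sum(dim=1).mean()
    conf_loss = -torch.log(c + eps).mean()                     # pushes c toward 1
    return task_loss + lam * conf_loss
```

At test time, the predicted confidence c itself serves as $S_{\text{ConfBranch}}(x)$.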
Logit Normalization (LogitNorm) [44] was proposed to solve the problem of classifier overconfidence on OOD data. Specifically, the method limits the logit norm to a constant during the training process, while keeping the direction of the logit vector unchanged. The LogitNorm cross entropy can be expressed as:
$$\mathcal{L}_{\text{logitNorm}}\left(f(x;\theta), y\right) = -\log \frac{e^{f_y / (\tau \|f\|)}}{\sum_{i=1}^{k} e^{f_i / (\tau \|f\|)}}$$
This method does not require changes to the structure of the model and can be employed for OOD detection using scores from a variety of post hoc methods. In this study, to ensure a fair comparison, the maximum softmax probability computed by the baseline MSP method was used as $S_{\text{LogitNorm}}(x)$.
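A minimal sketch of the LogitNorm cross entropy is given below; tau is the temperature hyperparameter from the formula above.

```python
import torch
import torch.nn.functional as F

def logitnorm_ce(logits, targets, tau=0.04, eps=1e-7):
    """Cross entropy computed on L2-normalized logits scaled by temperature tau."""
    norms = logits.norm(p=2, dim=1, keepdim=True) + eps        # ||f|| per sample
    return F.cross_entropy(logits / (norms * tau), targets)
```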
Generalized ODIN (G-ODIN) [45] defines the logits of category i as:
$$f_i(x) = \frac{h_i(x)}{g(x)}$$
$g(x)$ can be computed using the following formula, where $f^{p}(x)$ is the feature of the penultimate layer, $\sigma$ is the sigmoid function, BN denotes batch normalization, and $w_g$ and $b_g$ represent learnable weights.
$$g(x) = \sigma\left( \text{BN}\left( w_g f^{p}(x) + b_g \right) \right)$$
For h i ( x ) , this can be realized by using a simple inner product (I):
$$h_i(x) = h_i^{I}(x) = w_i^{T} f^{p}(x) + b_i$$
The computational expression for S O D I N is given by:
$$S_{\text{ODIN}}(x) = \max_{i} \frac{\exp\left(f_i(x)\right)}{\sum_{j=1}^{C} \exp\left(f_j(x)\right)}$$
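The decomposed-confidence head can be sketched as a small PyTorch module, assuming the backbone exposes the penultimate feature vector $f^{p}(x)$; only the inner-product variant $h^{I}$ shown above is implemented.

```python
import torch
import torch.nn as nn

class GODINHead(nn.Module):
    """G-ODIN head: f_i(x) = h_i(x) / g(x) on top of penultimate features."""

    def __init__(self, in_dim, num_classes):
        super().__init__()
        self.h = nn.Linear(in_dim, num_classes)       # h_i(x) = w_i^T f^p(x) + b_i
        self.g = nn.Sequential(nn.Linear(in_dim, 1),
                               nn.BatchNorm1d(1),
                               nn.Sigmoid())          # g(x) = sigmoid(BN(w_g f^p(x) + b_g))

    def forward(self, feats):
        return self.h(feats) / self.g(feats)          # logits fed to softmax / S_ODIN
```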

2.2.3. Training with Outlier Exposure Methods

This class of methods exposes the model to outliers during training in an unsupervised manner. Outliers usually refer to OOD data that can be collected. In this study, experiments with these methods were performed using the Tiny-ImageNet dataset as the outlier set for model training.
Outlier Exposure (OE) [28] represents the baseline work for this branch. The method introduces a large-scale curated set of OOD data as outlier exposure and sets an additional training objective that encourages $f$ to produce uniform softmax scores for the added data. Given the original learning objective $\mathcal{L}$, OE can be formalized as minimizing the objective:
$$\mathbb{E}_{(x, y) \sim \mathcal{D}_{\text{in}}}\left[ \mathcal{L}\left(f(x), y\right) + \lambda\, \mathbb{E}_{x' \sim \mathcal{D}_{\text{out}}^{\text{OE}}}\left[ \mathcal{L}_{\text{OE}}\left( f(x'), f(x), y \right) \right] \right]$$
This approach improves the generalization ability of the OOD detector, making it better suited for outlier distributions not observed previously. Additionally, this approach is suitable for models with different architectures.
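A sketch of the OE objective for one mini-batch is shown below; the uniform-distribution term is written in its logsumexp form, and lam is the trade-off weight λ.

```python
import torch
import torch.nn.functional as F

def oe_loss(id_logits, id_targets, ood_logits, lam=0.5):
    """Outlier Exposure: standard cross entropy on ID data plus a term pushing
    softmax outputs on auxiliary outliers toward the uniform distribution."""
    ce = F.cross_entropy(id_logits, id_targets)
    # CE to the uniform distribution reduces to logsumexp(z) - mean(z) per sample.
    uniform_ce = (torch.logsumexp(ood_logits, dim=1) - ood_logits.mean(dim=1)).mean()
    return ce + lam * uniform_ce
```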
Maximum Classifier Discrepancy (MCD) [27] utilizes a two-head deep convolutional neural network (CNN) and maximizes the discrepancy between classifiers $F_1$ and $F_2$, detecting OOD samples based on the discrepancy between the outputs of the two classifiers. $p_1(y \mid x)$ and $p_2(y \mid x)$ denote the K-dimensional softmax class probabilities for input $x$ obtained by $F_1$ and $F_2$, respectively. We use $d\left(p_1(y \mid x), p_2(y \mid x)\right)$ to measure the divergence between the two softmax class probabilities for an input. The discrepancy loss can be defined using the following equation:
$$d\left(p_1(y \mid x), p_2(y \mid x)\right) = H\left(p_1(y \mid x)\right) - H\left(p_2(y \mid x)\right)$$
where H ( · ) is the entropy over the softmax distribution. The experimental results of this method indicate that it has good generalization in real-world scenarios.
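A minimal sketch of the entropy-based discrepancy is shown below; the sign convention follows the formula above.

```python
import torch
import torch.nn.functional as F

def entropy(logits):
    """Shannon entropy of the softmax distribution, per sample."""
    p = F.softmax(logits, dim=1)
    return -(p * torch.log(p + 1e-12)).sum(dim=1)

def mcd_discrepancy(logits_f1, logits_f2):
    """Discrepancy d(p1, p2) between the two classifier heads F1 and F2."""
    return entropy(logits_f1) - entropy(logits_f2)
```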

2.2.4. Model Uncertainty Methods

This class of methods allows the model to learn how uncertain it is about the input samples. For the test data, samples within the distribution exhibit low uncertainty, whereas those outside the distribution exhibit high uncertainty. Model uncertainty methods primarily use Bayesian modeling, or less-principled but practical approximations of it, to address model reliability.
Monte Carlo Dropout (MCDropout) [46] runs the same model on the same sample T times and computes the uncertainty from the variability of these T predictions. Specifically, the method samples from the posterior distribution of weights at test time using dropout to obtain a posterior distribution of softmax class probabilities. The mean of these samples is used to form the prediction, and the spread across samples quantifies the model uncertainty for each class. The probability over the T sub-predictions can be expressed as:
$$p(y = c \mid x, X, Y) \approx \frac{1}{T} \sum_{t=1}^{T} \text{Softmax}\left( f^{\hat{W}_t}(x) \right)$$
where $\hat{W}_t \sim q_{\theta}^{*}(W)$ denotes the model parameters drawn for each sample. The uncertainty can be measured using the following expression:
$$H(p) = -\sum_{c=1}^{C} p_c \log p_c$$
This method is easy to use and does not require modification of the existing neural network or additional training; it only requires that the network contain dropout layers.
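A hedged PyTorch sketch of MC Dropout inference follows; for simplicity the whole model is switched to train mode to keep dropout active, whereas in practice only the Dropout modules should be toggled so that BatchNorm statistics remain frozen.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def mc_dropout_predict(model, x, T=20):
    """T stochastic forward passes with dropout active; returns the mean class
    probabilities and the predictive entropy used as the uncertainty score."""
    model.train()   # keeps dropout sampling; restrict to Dropout layers in practice
    probs = torch.stack([F.softmax(model(x), dim=1) for _ in range(T)])
    mean_p = probs.mean(dim=0)                                   # (N, C)
    uncertainty = -(mean_p * torch.log(mean_p + 1e-12)).sum(dim=1)
    return mean_p, uncertainty
```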
TempScaling [47] learns and uses a temperature parameter T to calibrate the network. The calibrated predicted output is:
$$\hat{q}_i = \max_{k}\, \sigma_{\text{SM}}\left( z_i / T \right)^{(k)}$$
where $\sigma_{\text{SM}}$ denotes the softmax function. As $T$ tends to infinity, the probability $\hat{q}_i$ tends to $1/K$, representing maximum uncertainty, whereas $T = 1$ recovers the original softmax output. The parameter $T$ is learned on the validation set using the NLL loss function. Temperature scaling does not affect the model accuracy.
TempScaling is one of the earliest and simplest methods for calibrating uncertainty measures; it is a variant of Platt scaling and is very effective at calibrating predictions.
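A minimal sketch of learning the temperature on held-out validation logits is shown below; the optimizer choice and iteration count are illustrative.

```python
import torch
import torch.nn.functional as F

def fit_temperature(val_logits, val_labels, max_iter=200):
    """Learn a single temperature T by minimizing the NLL on validation logits.
    Scaling logits by T does not change the argmax, so accuracy is unchanged."""
    log_T = torch.zeros(1, requires_grad=True)      # optimize log T to keep T > 0
    opt = torch.optim.LBFGS([log_T], lr=0.01, max_iter=max_iter)

    def closure():
        opt.zero_grad()
        loss = F.cross_entropy(val_logits / log_T.exp(), val_labels)
        loss.backward()
        return loss

    opt.step(closure)
    return log_T.exp().item()
```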

2.3. Evaluating Metrics

OOD detection employs evaluation metrics distinct from those of conventional classification tasks [18]. First, the category distribution in OOD detection is typically imbalanced, with fewer instances of unknown categories; this imbalance predisposes models to favor known categories, thereby distorting accuracy metrics. Second, OOD detection places greater emphasis on the model's false alarm rate, as the misclassification of unknown samples as known categories is a critical concern. Referring to Hendrycks' metrics [17] and related assessments in remote sensing [16,49], we used the following five metrics to quantitatively assess the effectiveness of the OOD detection methods on the selected remote sensing datasets (a short computation sketch is given after the list):
  • Area Under the Receiver Operating Characteristic Curve (AUROC) is a common metric for evaluating the performance of a binary classification model; it represents the area enclosed by the Receiver Operating Characteristic (ROC) curve and the axes, with a value range of 0–1. The ROC curve plots the False Positive Rate (FPR) on the horizontal axis against the True Positive Rate (TPR) on the vertical axis, and AUROC is obtained by integrating the area under the ROC curve:
    $$\text{AUROC} = \int_{0}^{1} \text{TPR}\; \mathrm{d}(\text{FPR})$$
  • Area Under the Precision–Recall Curve (AUPR) represents the size of the area enclosed by the Precision–Recall (PR) curve and the coordinate axes, and has values ranging from 0 to 1. AUPR can be obtained by calculating the area enclosed under the PR curve, and its formula is:
    $$\text{AUPR} = \int_{0}^{1} \text{Precision}\; \mathrm{d}(\text{Recall})$$
  • False Positive Rate at 95% specificity (FPR@95) is the proportion of negative samples that are incorrectly predicted by the model when the model has a TPR of 95%. The formula for FPR@95 is as follows.
    $$\text{FPR@95} = \frac{FP}{FP + TN}$$
    where F P denotes the number of false positive classes (predicting negative classes as positive) and T N denotes the number of true negative classes (predicting negative classes as negative);
  • ID classification accuracy (ID ACC) measures the overall correctness of predictions made by a model across all ID classes. The formula for ID ACC is as follows.
    $$\text{Accuracy} = \frac{TP + TN}{TP + TN + FP + FN}$$
    where $FP$ denotes false positives (predicting negative classes as positive), $TN$ denotes true negatives (correctly classifying negative classes as negative), $FN$ denotes false negatives (misclassifying positive classes as negative), and $TP$ denotes true positives (correctly classifying positive classes as positive);
  • Computation time, measured in seconds, is a key factor affecting the method’s practicality and is detailed in the paper.
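As a concrete illustration of how these threshold-free metrics can be computed from per-sample scores, the following sketch uses scikit-learn; AUPR-OUT can be obtained analogously by treating OOD as the positive class and negating the scores.

```python
import numpy as np
from sklearn import metrics

def ood_metrics(id_scores, ood_scores):
    """AUROC, AUPR-IN and FPR@95 from 1-D arrays of ID-scores (higher = ID)."""
    y_true = np.concatenate([np.ones_like(id_scores), np.zeros_like(ood_scores)])
    y_score = np.concatenate([id_scores, ood_scores])
    auroc = metrics.roc_auc_score(y_true, y_score)
    aupr_in = metrics.average_precision_score(y_true, y_score)  # ID as positive class
    fpr, tpr, _ = metrics.roc_curve(y_true, y_score)
    fpr95 = float(fpr[np.argmax(tpr >= 0.95)])                   # FPR at 95% TPR
    return auroc, aupr_in, fpr95
```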

2.4. Remote Sensing Datasets and Scene Classification Models

We conducted experiments on three different datasets: the Aerial Image Dataset (AID), the UC-Merced Land Use (UCM) dataset, and the EuroSAT land use and land cover classification dataset based on Sentinel-2 imagery. We provide a brief description of these datasets below, and the categories of the different datasets are shown in Figure 4.
  • UCM Dataset: The UCM dataset [50] is a high-resolution aerial RGB image dataset. The size of each image is 256 × 256 pixels, and the dataset contains 21 categories with 100 samples in each category;
  • AID Dataset: The AID dataset [51] is a high-resolution aerial RGB image dataset. The size of each image is 600 × 600 pixels, the dataset contains 30 categories, with each category containing 300 samples;
  • EuroSAT Dataset: The EuroSAT Dataset [52] is a collection of images taken by the Sentinel-2 satellite, covering 13 spectral bands. Each image is 64 × 64 pixels in size, and the dataset consists of 10 categories, each containing 2000 to 3000 images, totaling 27,000 samples.
In the field of scene classification, numerous models have been developed for the UCM, AID, and EuroSAT datasets. Among these, ResNet [53] has achieved state-of-the-art results on all three datasets [2]. Based on Table 2, ResNet-50 has relatively few parameters, requires fewer FLOPs, and has a shorter training time. Considering both performance and efficiency, this study selects ResNet-50 as the backbone for all the tested out-of-distribution (OOD) detection models.

2.5. Implementation Details and Parameter Selection

To compare methods from different domains fairly, we used a unified setup and hyperparameter architecture. For the remote sensing scene classification models, we uniformly used ResNet-50 as the backbone. If an implemented method required training, we used the widely accepted settings of the SGD optimizer with a learning rate of 0.01, momentum of 0.9, and weight decay of 0.0005 for 100 epochs to prevent over-tuning. If a method required hyperparameter tuning, we explored only the five most common values and selected hyperparameters based on the AUROC on the validation set. The selection of the OOD validation set follows real-world practice; all of these choices are designed for fairness and utility in benchmark comparisons. The main benchmark development and testing were conducted on four Nvidia RTX 2080Ti cards.
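For reference, a hedged PyTorch sketch of this shared training configuration is given below; train_loader and num_classes stand in for the benchmark-specific data pipeline and are not part of a released setup.

```python
import torch
import torchvision

def train_backbone(train_loader, num_classes, epochs=100, device="cuda"):
    """ResNet-50 fine-tuning with the shared SGD settings described above."""
    model = torchvision.models.resnet50(num_classes=num_classes).to(device)
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01,
                                momentum=0.9, weight_decay=0.0005)
    model.train()
    for _ in range(epochs):
        for images, labels in train_loader:          # closed-set / ID training data
            images, labels = images.to(device), labels.to(device)
            optimizer.zero_grad()
            loss = torch.nn.functional.cross_entropy(model(images), labels)
            loss.backward()
            optimizer.step()
    return model
```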

2.6. Benchmarks

To test the performance of different OOD detection methods under semantic shift and domain shift, we followed the construction of OOD detection benchmarks for general images and designed a series of benchmarks on the AID, UCM, and EuroSAT datasets that account for the characteristics of remote sensing imagery, on which we conducted extensive experiments. Our benchmarks can be categorized into OSR benchmarks and OOD benchmarks. Refer to Figure 5 for a diagram illustrating these benchmarks.

2.6.1. OSR Benchmarks

In developing benchmarks for evaluating Open-Set-Recognition (OSR) performance, we drew inspiration from established benchmarks such as MNIST [61] and CIFAR [62]. These benchmarks typically use different class divisions, for example, 6/4 and 50/50, to test models’ abilities to identify instances of open set samples within the test set. To implement this, we divided dataset categories into two groups: closed and open sets. Models are then trained exclusively on data from the closed set and are evaluated on the entire test set to ascertain their proficiency in distinguishing between closed and open set samples.
For datasets like AID, UCM, and EuroSAT, we introduced specific configurations—namely AID7/3, AID6/4, AID5/5, UCM7/3, UCM6/4, UCM5/5, and EuroSAT 7/3, EuroSAT 6/4, EuroSAT 5/5. These configurations indicate the ratio of closed-set to open-set classes, such as 7:3, 6:4, and 5:5, respectively. A detailed methodology for one of these randomized divisions is presented in Table 3, illustrating our approach.
The benchmarks we developed are based on randomized class divisions. The performance metrics we utilize are derived from the average outcomes of five distinct splits, ensuring a comprehensive assessment of a model’s ability to handle open set samples. This methodology ensures that our benchmarks effectively measure a model’s OSR performance, contributing valuable insights into their capabilities in dealing with open set scenarios.
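A minimal sketch of such a randomized class division is shown below; the helper make_osr_split, the seed, and the 7/3 EuroSAT example are our own illustration rather than the exact splits reported in Table 3.

```python
import random

def make_osr_split(class_names, n_closed, seed=0):
    """Randomly split a dataset's classes into closed-set (ID) and open-set (OOD)
    groups, e.g. 7/3, 6/4 or 5/5 proportions of the class list."""
    rng = random.Random(seed)
    shuffled = class_names[:]
    rng.shuffle(shuffled)
    return shuffled[:n_closed], shuffled[n_closed:]   # (closed_set, open_set)

# Example: one random 7/3-style split of the 10 EuroSAT classes.
eurosat_classes = ["AnnualCrop", "Forest", "HerbaceousVegetation", "Highway",
                   "Industrial", "Pasture", "PermanentCrop", "Residential",
                   "River", "SeaLake"]
closed_set, open_set = make_osr_split(eurosat_classes, n_closed=7, seed=0)
```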

2.6.2. OOD Benchmarks

The common practice for building OOD detection benchmarks is to treat an entire dataset as in-distribution (ID) and then collect several datasets that are disjoint from all ID categories as OOD datasets [17]. To better evaluate the model under semantic and domain shifts, according to the definitions of $\mathcal{D}_{\text{Simi-OOD}}$, $\mathcal{D}_{\text{Near-OOD}}$, and $\mathcal{D}_{\text{Far-OOD}}$, we designed a total of nine out-of-distribution (OOD) benchmarks across three datasets: UCM, AID, and EuroSAT. These benchmarks are named using the dataset name followed by the distribution definition. Specifically, these benchmarks are UCM-Simi-OOD, UCM-Near-OOD, UCM-Far-OOD, AID-Simi-OOD, AID-Near-OOD, AID-Far-OOD, EuroSAT-Simi-OOD, EuroSAT-Near-OOD, and EuroSAT-Far-OOD. We provide detailed descriptions of these nine benchmarks below. In Table 4, we present the specific categorization of the Simi-OOD and Near-OOD classes across the different datasets.
  • In the Simi-OOD benchmark for the UCM dataset, we incorporated 20 categories exhibiting semantic overlap with RSI-CB256 [63] and UCM [50]. Conversely, the Near-OOD subset featured an additional 15 categories that do not intersect with this overlap. Our Far-OOD compilation encompasses datasets such as Places365 [64], ImageNet-O [65], and OpenImage-O [41], which we resized to 256 × 256 to align with our Far-OOD criterion;
  • In the Simi-OOD benchmark for the AID dataset, we focused on 20 categories sharing semantic traits between NWPU-RESISC45 [2] and AID [51]. In contrast, the Near-OOD category embraced an additional 15 categories with no overlap. To address Far-OOD scenarios, we integrated datasets like Places365 [64], ImageNet-O [65], and OpenImage-O [41], resizing images to 600 × 600 to match our definitions of Simi-OOD, Near-OOD and Far-OOD;
  • In the EuroSAT dataset within the Simi-OOD benchmark, we examined 10 categories sharing semantic features between RSI-CB128 [63] and EuroSAT [52]. Conversely, the Near-OOD subset encompassed an additional 35 categories devoid of overlap. Our Far-OOD consideration included datasets like MNIST [61], CIFAR-100 [62], and Tiny-Imagenet [12]. Resizing images to 64 × 64 aligned with our definitions of Near-OOD and Far-OOD.

3. Results

3.1. Results on OSR Benchmark

The methods we tested rely on a ResNet-50 backbone trained on the closed-set data of UCM, AID, and EuroSAT. To quantify the reliability of the OOD detection models, we used the AUROC, FPR@95, AUPR-IN, and AUPR-OUT metrics. In addition, the computation time, as an aspect of the applicability of the methods, was evaluated and recorded in seconds. For AUROC, AUPR-IN, and AUPR-OUT, higher scores are better; for FPR@95 and computation time, lower values are better.
According to Table 5, the VIM and KNN methods, which require no additional training, achieved the best results in AUROC, ranking in the top three across the different OSR-AID and OSR-UCM benchmark splits. Table 6 reveals that, in addition to VIM and KNN, the LogitNorm method also secured a top-three position in FPR@95 across different benchmarks, achieving the lowest values on some of them. As per Table 7 and Table 8, the OE method, alongside VIM and KNN, performed well in the AUPR-IN and AUPR-OUT metrics. According to Table 9, all evaluated methods scored highly on the ID ACC metric. Table 10 indicates that methods such as MSP, VIM, and KNN, which do not require additional training time, had the shortest computation times, making them more suitable for scenarios with high real-time requirements.
Overall, the VIM and KNN methods demonstrated excellent performance across all evaluation metrics without the need for additional training. Surprisingly, methods requiring substantial additional training time, such as OE and MCD, did not achieve better AUROC values, suggesting that additional training processes are unnecessary for the OSR benchmarks. The performance of ConfBranch and G-ODIN methods was suboptimal across all OSR benchmarks. An analysis of Table 5 shows that ConfBranch had higher AUROC values on the EuroSAT benchmark than on the AID benchmark, and even more so compared to the UCM dataset, indicating a propensity for overfitting on smaller-scale datasets (e.g., UCM) and suggesting its better suitability for larger datasets. The performance of the G-ODIN method, which requires an additional training process, was inferior to other methods, indicating the need for finer parameter tuning when applied to remote sensing imagery, such as exploring different distance functions h ( x ) .
Furthermore, results from Figure 6 illustrate that nearly all methods performed better on the 7/3 split for AUROC and AUPR metrics compared to the 6/4 split, and significantly better than the 5/5 split. This can likely be attributed to the openness of the different benchmarks, with the 5/5 split having the highest openness and thus presenting a greater challenge. Additionally, the analysis of the AUPR-IN and AUPR-OUT results indicates an inverse effect of category division on these metrics. The 5/5 split showed better performance in AUPR-IN compared to other splits, while its performance in AUPR-OUT was poorer, likely due to the higher probability of similar features being classified as in-distribution or out-of-distribution, leading to higher AUPR-IN and lower AUPR-OUT.

3.2. Results on OOD Benchmark

The methods we tested rely on a ResNet-50 backbone network trained on the entire UCM, AID, and EuroSAT datasets. Figure 7 illustrates the test results on the OOD benchmark, showing the AUROC, FPR@95, AUPR-IN, and AUPR-OUT metrics of 10 methods primarily on the OOD benchmark. Moreover, more detailed results are provided in Table 11, Table 12, Table 13 and Table 14. In addition to the previously mentioned metrics, the in-distribution accuracy (ID-ACC) and overall computation time are also presented in Table 15 and Table 16, respectively.
Overall, among all performance metrics, the VIM method without additional training and the OE method requiring extra training and auxiliary data achieve high AUROC values and low FPR@95 values. However, the ConfBranch and G-ODIN methods exhibit below-average performance across all OOD benchmarks, indicating limited applicability in OOD benchmarking. Surprisingly, the KNN method, which performs well on the OSR benchmark, demonstrates poor performance on our UCM and AID OOD benchmarks. Specifically, the AUPR-OUT scores are notably low on UCM and AID, indicating poor identification of out-of-distribution data. This result might arise from the unequal sample sizes between out-of-distribution and in-distribution data, necessitating further fine-tuning of hyperparameters, particularly K, to align with our OOD benchmarks.
Additionally, based on Figure 7, it is observed that nearly all methods demonstrate better performance in AUROC and AUPR metrics on Far-OOD compared to Near-OOD, which in turn outperforms Simi-OOD. This suggests that the Simi-OOD benchmarks, which identify domain shift exclusively, pose the greatest challenge. Conversely, simultaneously detecting both domain shift and semantic shift in Near-OOD benchmarks is relatively easier, while Far-OOD benchmarks, characterized by significantly greater semantic shift, are the most manageable. In AUPR-IN, different methods generally perform better on Near-OOD benchmarks. However, in AUPR-OUT, Far-OOD benchmarks exhibit superior performance overall. This indicates that models find it relatively more challenging to discern remote sensing image datasets compared to conventional image datasets, consistent with our expectations.

4. Discussion

Based on our research findings, we have discovered that existing methods for detecting out-of-distribution (OOD) instances are quite applicable to remote sensing scene classification tasks. However, the current research is predominantly based on common datasets, and there is a notable gap when it comes to applying these methods to real-world remote sensing scene classifications. Firstly, the semantic clarity under different scenes in remote sensing scene classification tasks is lacking, with insufficiently pronounced differences between classes, necessitating the detection of certain features at a finer granularity. Additionally, significant variance exists within the same category, and identical feature types can vary extensively due to time, geographical location, and spatial scale. Moreover, due to the resolution of images used in real-world scenarios and the substantial variance in the spectral reflectance of images, identifying sensor shifts caused by different sensors is a topic worth investigating.
To enhance the reliability of remote sensing scene classification models in open scenarios, we believe further exploration in the following directions could improve the performance of OOD detection models. Firstly, this research has validated the effectiveness of post hoc methods, which do not require an additional training process. Therefore, these methods can be further explored in real-world scenarios. Secondly, in real-world scene classification tasks, it is sometimes necessary to differentiate between in-distribution (ID) and OOD instances within a small range, such as distinguishing between airports and roads [66]. Hence, fine-grained features could be further extracted from a fine-grained classification perspective for OOD detection. Lastly, due to the semantic ambiguity in single-label classification of remote sensing samples [67], we believe that developing OOD detection methods suited for multi-label classification will be useful for large-scale remote sensing scene classification tasks.

5. Conclusions

We evaluated different classes of OOD detection methods that are highly representative of the corresponding research directions to improve the reliability and security of remote sensing scene classification models. To further compare the performance of the different methods under semantic and domain shifts, we set up a series of benchmarks on the AID, UCM, and EuroSAT datasets. We quantitatively evaluated them using the AUROC, AUPR, FPR@95, ID-ACC, and computation time metrics. We conducted numerous experiments to quantitatively evaluate the performance of these methods across different benchmarks. Based on the evaluation results, we found that virtual-logit matching methods, without extra training, perform better than other methods on both OSR and OOD benchmarks. This suggests that additional training methods are unnecessary for scene classification applications in remote sensing imagery. Our results show that existing OOD detection methods can provide reliability and security for the further deployment of remote sensing scene classification applications with large-scale, diverse ground coverage involving multiple types of sensors. Additionally, our findings provide valuable insights for exploring better OOD detection methods suitable for large-scale remote sensing applications.

Author Contributions

Conceptualization, S.L.; methodology, S.L.; software, S.L.; validation, S.L. and N.L.; formal analysis, S.L.; investigation, S.L.; resources, S.L. and M.J.; data curation, S.L.; writing— original draft preparation, S.L.; writing—review and editing, C.J.; visualization, S.L. and N.L.; supervision, C.J.; project administration, L.C.; funding acquisition, L.C. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Foundation of Science & Technology on Integrated Information System Laboratory (HLJGXQ20220916032) and by the National Key Research and Development Program of China (2022YFB3903603).

Data Availability Statement

Data associated with this research are available online. The UCM dataset is available at http://weegee.vision.ucmerced.edu/datasets/landuse.html (accessed on 5 March 2024). The AID datasets are available at https://opendatalab.com/OpenDataLab/AID (accessed on 5 March 2024). The EuroSAT dataset is available at https://opendatalab.com/OpenDataLab/EuroSAT (accessed on 5 March 2024).

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Ma, L.; Liu, Y.; Zhang, X.; Ye, Y.; Yin, G.; Johnson, B.A. Deep learning in remote sensing applications: A meta-analysis and review. ISPRS J. Photogramm. Remote Sens. 2019, 152, 166–177. [Google Scholar] [CrossRef]
  2. Cheng, G.; Han, J.; Lu, X. Remote sensing image scene classification: Benchmark and state of the art. Proc. IEEE 2017, 105, 1865–1883. [Google Scholar] [CrossRef]
  3. Nogueira, K.; Penatti, O.A.; Dos Santos, J.A. Towards better exploiting convolutional neural networks for remote sensing scene classification. Pattern Recognit. 2017, 61, 539–556. [Google Scholar] [CrossRef]
  4. Bouslihim, Y.; Kharrou, M.H.; Miftah, A.; Attou, T.; Bouchaou, L.; Chehbouni, A. Comparing pan-sharpened Landsat-9 and Sentinel-2 for land-use classification using machine learning classifiers. J. Geovis. Spat. Anal. 2022, 6, 35. [Google Scholar] [CrossRef]
  5. Dimitrovski, I.; Kitanovski, I.; Kocev, D.; Simidjievski, N. Current trends in deep learning for Earth Observation: An open-source benchmark arena for image classification. ISPRS J. Photogramm. Remote Sens. 2023, 197, 18–35. [Google Scholar] [CrossRef]
  6. Vernekar, S.; Gaurav, A.; Denouden, T.; Phan, B.; Abdelzad, V.; Salay, R.; Czarnecki, K. Analysis of confident-classifiers for out-of-distribution detection. arXiv 2019, arXiv:1904.12220. [Google Scholar]
  7. Tang, K.; Miao, D.; Peng, W.; Wu, J.; Shi, Y.; Gu, Z.; Tian, Z.; Wang, W. Codes: Chamfer out-of-distribution examples against overconfidence issue. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Montreal, QC, Canada, 10–17 October 2021; pp. 1153–1162. [Google Scholar]
  8. Berger, C.; Paschali, M.; Glocker, B.; Kamnitsas, K. Confidence-based out-of-distribution detection: A comparative study and analysis. In Proceedings of the Uncertainty for Safe Utilization of Machine Learning in Medical Imaging, and Perinatal Imaging, Placental and Preterm Image Analysis: 3rd International Workshop, UNSURE 2021, and 6th International Workshop, PIPPI 2021, Held in Conjunction with MICCAI 2021, Strasbourg, France, 1 October 2021; Proceedings 3. Springer: Berlin/Heidelberg, Germany, 2021; pp. 122–132. [Google Scholar]
  9. Hendrycks, D.; Carlini, N.; Schulman, J.; Steinhardt, J. Unsolved problems in ml safety. arXiv 2021, arXiv:2109.13916. [Google Scholar]
  10. Hendrycks, D.; Mazeika, M. X-risk analysis for ai research. arXiv 2022, arXiv:2206.05862. [Google Scholar]
  11. He, K.; Zhang, X.; Ren, S.; Sun, J. Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In Proceedings of the IEEE International Conference on Computer Vision, Santiago, Chile, 11–18 December 2015; pp. 1026–1034. [Google Scholar]
  12. Krizhevsky, A.; Sutskever, I.; Hinton, G.E. Imagenet classification with deep convolutional neural networks. Adv. Neural Inf. Process. Syst. 2012, 60, 84–90. [Google Scholar] [CrossRef]
  13. Yang, J.; Zhou, K.; Li, Y.; Liu, Z. Generalized out-of-distribution detection: A survey. arXiv 2021, arXiv:2110.11334. [Google Scholar]
  14. Liu, W.; Nie, X.; Zhang, B.; Sun, X. Incremental Learning With Open-Set Recognition for Remote Sensing Image Scene Classification. IEEE Trans. Geosci. Remote Sens. 2022, 60, 1–16. [Google Scholar] [CrossRef]
  15. Zhou, K.; Liu, Z.; Qiao, Y.; Xiang, T.; Loy, C.C. Domain generalization: A survey. IEEE Trans. Pattern Anal. Mach. Intell. 2022, 45, 4396–4415. [Google Scholar] [CrossRef] [PubMed]
  16. Gawlikowski, J.; Saha, S.; Kruspe, A.; Zhu, X.X. An advanced dirichlet prior network for out-of-distribution detection in remote sensing. IEEE Trans. Geosci. Remote Sens. 2022, 60, 1–19. [Google Scholar] [CrossRef]
  17. Hendrycks, D.; Gimpel, K. A baseline for detecting misclassified and out-of-distribution examples in neural networks. arXiv 2016, arXiv:1610.02136. [Google Scholar]
  18. Yang, J.; Wang, P.; Zou, D.; Zhou, Z.; Ding, K.; Peng, W.; Wang, H.; Chen, G.; Li, B.; Sun, Y.; et al. Openood: Benchmarking generalized out-of-distribution detection. Adv. Neural Inf. Process. Syst. 2022, 35, 32598–32611. [Google Scholar]
  19. Liang, S.; Li, Y.; Srikant, R. Enhancing the reliability of out-of-distribution image detection in neural networks. arXiv 2017, arXiv:1706.02690. [Google Scholar]
  20. Lee, K.; Lee, K.; Lee, H.; Shin, J. A simple unified framework for detecting out-of-distribution samples and adversarial attacks. Adv. Neural Inf. Process. Syst. 2018, 31. [Google Scholar]
  21. Liu, W.; Wang, X.; Owens, J.; Li, Y. Energy-based out-of-distribution detection. Adv. Neural Inf. Process. Syst. 2020, 33, 21464–21475. [Google Scholar]
  22. Sastry, C.S.; Oore, S. Detecting out-of-distribution examples with gram matrices. In Proceedings of the International Conference on Machine Learning. PMLR, Virtual, 13–18 July 2020; pp. 8491–8501. [Google Scholar]
  23. Sun, Y.; Li, Y. Dice: Leveraging sparsification for out-of-distribution detection. In Proceedings of the European Conference on Computer Vision, Tel Aviv, Israel, 23–27 October 2022; Springer: Berlin/Heidelberg, Germany, 2022; pp. 691–708. [Google Scholar]
  24. Du, X.; Wang, Z.; Cai, M.; Li, Y. Vos: Learning what you do not know by virtual outlier synthesis. arXiv 2022, arXiv:2202.01197. [Google Scholar]
  25. Tack, J.; Mo, S.; Jeong, J.; Shin, J. Csi: Novelty detection via contrastive learning on distributionally shifted instances. Adv. Neural Inf. Process. Syst. 2020, 33, 11839–11852. [Google Scholar]
  26. Yang, J.; Wang, H.; Feng, L.; Yan, X.; Zheng, H.; Zhang, W.; Liu, Z. Semantically coherent out-of-distribution detection. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Montreal, BC, Canada, 11–17 October 2021; pp. 8301–8309. [Google Scholar]
  27. Yu, Q.; Aizawa, K. Unsupervised out-of-distribution detection by maximum classifier discrepancy. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Seoul, Republic of Korea, 27 October–2 November 2019; pp. 9518–9526. [Google Scholar]
  28. Hendrycks, D.; Mazeika, M.; Dietterich, T. Deep anomaly detection with outlier exposure. arXiv 2018, arXiv:1812.04606. [Google Scholar]
  29. Lakshminarayanan, B.; Pritzel, A.; Blundell, C. Simple and scalable predictive uncertainty estimation using deep ensembles. Adv. Neural Inf. Process. Syst. 2017, 30. [Google Scholar]
  30. Scheirer, W.J.; Jain, L.P.; Boult, T.E. Probability models for open set recognition. IEEE Trans. Pattern Anal. Mach. Intell. 2014, 36, 2317–2324. [Google Scholar] [CrossRef] [PubMed]
  31. Smith, R. Extreme value theory. In Handbook of Applicable Mathematics; Wiley: Hoboken, NJ, USA, 1990; Volume 7. [Google Scholar]
  32. Ge, Z.; Demyanov, S.; Chen, Z.; Garnavi, R. Generative openmax for multi-class open set classification. arXiv 2017, arXiv:1707.07418. [Google Scholar]
  33. Neal, L.; Olson, M.; Fern, X.; Wong, W.K.; Li, F. Open set learning with counterfactual images. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018; pp. 613–628. [Google Scholar]
  34. Scheirer, W.J.; de Rezende Rocha, A.; Sapkota, A.; Boult, T.E. Toward open set recognition. IEEE Trans. Pattern Anal. Mach. Intell. 2012, 35, 1757–1772. [Google Scholar] [CrossRef]
  35. Al Rahhal, M.M.; Bazi, Y.; Al-Dayil, R.; Alwadei, B.M.; Ammour, N.; Alajlan, N. Energy-based learning for open-set classification in remote sensing imagery. Int. J. Remote Sens. 2022, 43, 6027–6037. [Google Scholar] [CrossRef]
  36. Li, C.L.; Sohn, K.; Yoon, J.; Pfister, T. Cutpaste: Self-supervised learning for anomaly detection and localization. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA, 20–25 June 2021; pp. 9664–9674. [Google Scholar]
  37. Hein, M.; Andriushchenko, M.; Bitterwolf, J. Why relu networks yield high-confidence predictions far away from the training data and how to mitigate the problem. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 41–50. [Google Scholar]
  38. da Silva, C.C.; Nogueira, K.; Oliveira, H.N.; dos Santos, J.A. Towards open-set semantic segmentation of aerial images. In Proceedings of the 2020 IEEE Latin American GRSS & ISPRS Remote Sensing Conference (LAGIRS), Santiago, Chile, 22–26 March 2020; pp. 16–21. [Google Scholar]
  39. Torralba, A.; Fergus, R.; Freeman, W.T. 80 million tiny images: A large data set for nonparametric object and scene recognition. IEEE Trans. Pattern Anal. Mach. Intell. 2008, 30, 1958–1970. [Google Scholar] [CrossRef] [PubMed]
  40. Zou, Y.; Yu, Z.; Liu, X.; Kumar, B.; Wang, J. Confidence regularized self-training. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Seoul, Republic of Korea, 27 October–2 November 2019; pp. 5982–5991. [Google Scholar]
  41. Wang, H.; Li, Z.; Feng, L.; Zhang, W. Vim: Out-of-distribution with virtual-logit matching. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, 18–24 June 2022; pp. 4921–4930. [Google Scholar]
  42. Sun, Y.; Ming, Y.; Zhu, X.; Li, Y. Out-of-distribution detection with deep nearest neighbors. In Proceedings of the International Conference on Machine Learning, PMLR, Baltimore, MD, USA, 17–23 July 2022; pp. 20827–20840. [Google Scholar]
  43. DeVries, T.; Taylor, G.W. Learning confidence for out-of-distribution detection in neural networks. arXiv 2018, arXiv:1802.04865. [Google Scholar]
  44. Wei, H.; Xie, R.; Cheng, H.; Feng, L.; An, B.; Li, Y. Mitigating neural network overconfidence with logit normalization. In Proceedings of the International Conference on Machine Learning, PMLR, Baltimore, MD, USA, 25–27 July 2022; pp. 23631–23644. [Google Scholar]
  45. Hsu, Y.C.; Shen, Y.; Jin, H.; Kira, Z. Generalized odin: Detecting out-of-distribution image without learning from out-of-distribution data. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020; pp. 10951–10960. [Google Scholar]
  46. Gal, Y.; Ghahramani, Z. Dropout as a bayesian approximation: Representing model uncertainty in deep learning. In Proceedings of the International Conference on Machine Learning, PMLR, New York, NY, USA, 19–24 June 2016; pp. 1050–1059. [Google Scholar]
  47. Guo, C.; Pleiss, G.; Sun, Y.; Weinberger, K.Q. On calibration of modern neural networks. In Proceedings of the International Conference on Machine Learning, PMLR, Sydney, Australia, 6–11 August 2017; pp. 1321–1330. [Google Scholar]
  48. Faqe Ibrahim, G.R.; Rasul, A.; Abdullah, H. Improving crop classification accuracy with integrated Sentinel-1 and Sentinel-2 data: A case study of barley and wheat. J. Geovis. Spat. Anal. 2023, 7, 22. [Google Scholar] [CrossRef]
  49. He, Y.; Zhao, Z.; Zhu, Q.; Liu, T.; Zhang, Q.; Yang, W.; Zhang, L.; Wang, Q. An integrated neural network method for landslide susceptibility assessment based on time-series InSAR deformation dynamic features. Int. J. Digit. Earth 2024, 17, 2295408. [Google Scholar] [CrossRef]
  50. Yang, Y.; Newsam, S. Bag-of-visual-words and spatial extensions for land-use classification. In Proceedings of the 18th SIGSPATIAL International Conference on Advances in Geographic Information Systems, San Jose, CA, USA, 2–5 November 2010; pp. 270–279. [Google Scholar]
  51. Xia, G.S.; Hu, J.; Hu, F.; Shi, B.; Bai, X.; Zhong, Y.; Zhang, L.; Lu, X. AID: A benchmark data set for performance evaluation of aerial scene classification. IEEE Trans. Geosci. Remote Sens. 2017, 55, 3965–3981. [Google Scholar] [CrossRef]
  52. Helber, P.; Bischke, B.; Dengel, A.; Borth, D. Eurosat: A novel dataset and deep learning benchmark for land use and land cover classification. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2019, 12, 2217–2226. [Google Scholar] [CrossRef]
  53. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 26 June–1 July 2016; pp. 770–778. [Google Scholar]
  54. Simonyan, K.; Zisserman, A. Very deep convolutional networks for large-scale image recognition. arXiv 2014, arXiv:1409.1556. [Google Scholar]
  55. Huang, G.; Liu, Z.; Van Der Maaten, L.; Weinberger, K.Q. Densely connected convolutional networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 4700–4708. [Google Scholar]
  56. Tan, M.; Le, Q. Efficientnet: Rethinking model scaling for convolutional neural networks. In Proceedings of the International Conference on Machine Learning, PMLR, Long Beach, CA, USA, 9–15 June 2019; pp. 6105–6114. [Google Scholar]
  57. Dosovitskiy, A.; Beyer, L.; Kolesnikov, A.; Weissenborn, D.; Zhai, X.; Unterthiner, T.; Dehghani, M.; Minderer, M.; Heigold, G.; Gelly, S.; et al. An image is worth 16x16 words: Transformers for image recognition at scale. arXiv 2020, arXiv:2010.11929. [Google Scholar]
  58. Tolstikhin, I.O.; Houlsby, N.; Kolesnikov, A.; Beyer, L.; Zhai, X.; Unterthiner, T.; Yung, J.; Steiner, A.; Keysers, D.; Uszkoreit, J.; et al. Mlp-mixer: An all-mlp architecture for vision. Adv. Neural Inf. Process. Syst. 2021, 34, 24261–24272. [Google Scholar]
  59. Liu, Z.; Mao, H.; Wu, C.Y.; Feichtenhofer, C.; Darrell, T.; Xie, S. A convnet for the 2020s. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, 18–24 June 2022; pp. 11976–11986. [Google Scholar]
  60. Liu, Z.; Hu, H.; Lin, Y.; Yao, Z.; Xie, Z.; Wei, Y.; Ning, J.; Cao, Y.; Zhang, Z.; Dong, L.; et al. Swin transformer v2: Scaling up capacity and resolution. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, 18–24 June 2022; pp. 12009–12019. [Google Scholar]
  61. Mu, N.; Gilmer, J. MNIST-C: A Robustness Benchmark for Computer Vision. arXiv 2019, arXiv:1906.02337. [Google Scholar]
  62. Krizhevsky, A.; Hinton, G. Learning Multiple Layers of Features from Tiny Images; Technical Report; University of Toronto: Toronto, ON, Canada, 2009. [Google Scholar]
  63. Li, H.; Dou, X.; Tao, C.; Wu, Z.; Chen, J.; Peng, J.; Deng, M.; Zhao, L. RSI-CB: A large-scale remote sensing image classification benchmark using crowdsourced data. Sensors 2020, 20, 1594. [Google Scholar] [CrossRef] [PubMed]
  64. Zhou, B.; Lapedriza, A.; Khosla, A.; Oliva, A.; Torralba, A. Places: A 10 million image database for scene recognition. IEEE Trans. Pattern Anal. Mach. Intell. 2017, 40, 1452–1464. [Google Scholar] [CrossRef] [PubMed]
  65. Hendrycks, D.; Zhao, K.; Basart, S.; Steinhardt, J.; Song, D. Natural adversarial examples. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA, 20–25 June 2021; pp. 15262–15271. [Google Scholar]
  66. Li, N.; Cheng, L.; Ji, C.; Dongye, S.; Li, M. An Improved Framework for Airport Detection Under the Complex and Wide Background. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2022, 15, 9545–9555. [Google Scholar] [CrossRef]
  67. Shao, Z.; Yang, K.; Zhou, W. Performance evaluation of single-label and multi-label remote sensing image retrieval using a dense labeling dataset. Remote Sens. 2018, 10, 964. [Google Scholar] [CrossRef]
Figure 1. A remote sensing scene classification model trained on a closed dataset struggles when it encounters unknown categories in open-world scenarios; it often assigns such images to known classes.
Figure 2. When a remote sensing scene classification model is deployed in the real world, challenges arise during inference and testing: images may contain land cover categories absent from the training dataset (semantic shift), or the same categories acquired with different sensors or in different geographic regions (domain shift). Models tend to classify such images as known categories.
Figure 3. Concepts of One-Class Classification, Open Set Recognition, Out-of-Distribution Detection, and Outlier Detection.
Figure 4. Classes and corresponding examples for the EuroSAT dataset, the UCM dataset, and the AID dataset. For the EuroSAT dataset, only images consisting of the three bands red, green, and blue are shown.
Figure 5. We established nine OSR benchmarks and nine OOD benchmarks on the UCM, AID, and EuroSAT datasets. Among them, OSR benchmarks only detect semantic shifts, while OOD benchmarks further detect domain shifts.
Figure 6. Performance on the OSR benchmarks.
Figure 7. Performance on the OOD benchmarks.
Table 1. List of different types of OOD detection methods for remote sensing scene classification tasks.
Category | Methodology | Reference
Post hoc | MSP | [17]
Post hoc | VIM | [41]
Post hoc | KNN | [42]
Training-time regularization | ConfBranch | [43]
Training-time regularization | LogitNorm | [44]
Training-time regularization | G-ODIN | [45]
Training with outlier exposure | OE | [28]
Training with outlier exposure | MCD | [27]
Model uncertainty | MCDropout | [46]
Model uncertainty | Tempscaling | [47]
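To make the grouping in Table 1 concrete, the simplest post hoc score, MSP [17], can be computed directly from a trained classifier's softmax output. The sketch below is a minimal illustration, assuming a PyTorch model that returns logits; the function names and the threshold-selection comment are illustrative, not the exact implementation evaluated in this paper.

```python
# Minimal MSP (maximum softmax probability) scoring sketch.
# Assumes `model` is a trained PyTorch classifier returning logits of shape (N, num_classes).
import torch
import torch.nn.functional as F

@torch.no_grad()
def msp_score(model, images):
    """Higher scores indicate more ID-like inputs; lower scores suggest OOD."""
    logits = model(images)
    return F.softmax(logits, dim=1).max(dim=1).values

def flag_ood(scores, threshold):
    """Flag samples whose confidence falls below a threshold chosen on ID validation data."""
    return scores < threshold
```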
Table 2. Summary of recent representative model architectures.
Model | Year | Layers | Parameters | FLOPs | Reference
AlexNet | 2012 | 8 | 57 × 10^6 | 0.72 G | [12]
VGG16 | 2014 | 16 | 134.2 × 10^6 | 15.47 G | [54]
ResNet50 | 2015 | 50 | 23.5 × 10^6 | 4.09 G | [53]
ResNet152 | 2015 | 152 | 23.5 × 10^6 | 11.52 G | [53]
DenseNet161 | 2017 | 161 | 26.4 × 10^6 | 7.73 G | [55]
EfficientNet B0 | 2019 | 237 | 5.2 × 10^6 | 0.39 G | [56]
Vision Transformer | 2020 | 12 | 86.5 × 10^6 | 17.57 G | [57]
MLPMixer | 2021 | 12 | 59.8 × 10^6 | 12.61 G | [58]
ConvNeXt | 2022 | 174 | 28 × 10^6 | 4.46 G | [59]
Swin Transformer | 2022 | 24 | 49.7 × 10^6 | 11.55 G | [60]
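The parameter counts in Table 2 can be sanity-checked with standard tooling; the sketch below is a minimal illustration using torchvision. The exact figure depends on whether the classification head is counted, and the FLOP column would additionally require a profiler (not shown), so treat this as an illustration rather than the accounting used for the table.

```python
# Count trainable parameters of a representative backbone from Table 2.
# torchvision's ResNet50 includes the 1000-class head, so the total (~25.6 M)
# is slightly larger than a head-free count.
import torchvision.models as models

def count_parameters(model):
    return sum(p.numel() for p in model.parameters() if p.requires_grad)

resnet50 = models.resnet50(weights=None)
print(f"ResNet50: {count_parameters(resnet50) / 1e6:.1f} M trainable parameters")
```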
Table 3. The 30 classes in AID, the 21 classes in UCM, and the 10 classes in EuroSAT were divided into closed-set and open-set classes according to defined proportions in various benchmarks. Five randomizations were conducted for AID, UCM, and EuroSAT during the evaluation. Here, we present an example of one random partition.
Benchmark | Closed-Set Classes (ID) | Open-Set Classes (OOD)
UCM-7/3 | agricultural airplane baseballdiamond buildings chaparral denseresidential forest freeway golfcourse mobilehomepark overpass parkinglot river sparseresidential tenniscourt | beach harbor intersection mediumresidential runway storagetanks
UCM-6/4 | agricultural baseballdiamond beach buildings chaparral forest freeway golfcourse intersection mediumresidential overpass sparseresidential storagetanks | airplane denseresidential harbor mobilehomepark parkinglot river runway tenniscourt
UCM-5/5 | agricultural buildings chaparral golfcourse harbor intersection mobilehomepark parkinglot river runway storagetanks | airplane baseballdiamond beach denseresidential forest freeway mediumresidential overpass sparseresidential tenniscourt
AID-7/3 | airport baseballfield bareland beach bridge denseresidential desert forest mediumresidential park parking playground pond port railwaystation river school square stadium storagetanks viaduct | center church commercial farmland industrial meadow mountain resort sparseresidential
AID-6/4 | baseballfield bareland bridge center desert denseresidential farmland industrial mediumresidential mountain parking port resort railwaystation school sparseresidential stadium storagetanks | airport beach church commercial forest meadow park playground pond river square viaduct
AID-5/5 | baseballfield beach center church desert farmland industrial mediumresidential mountain park parking pond port stadium viaduct | airport bareland bridge commercial denseresidential forest meadow playground railwaystation resort river school sparseresidential square storagetanks
EuroSAT-7/3 | AnnualCrop Industrial Pasture PermanentCrop Residential River SeaLake | HerbaceousVegetation Highway Industrial
EuroSAT-6/4 | AnnualCrop HerbaceousVegetation Industrial Residential River SeaLake | Forest Highway Pasture PermanentCrop
EuroSAT-5/5 | AnnualCrop Forest Highway Residential River | HerbaceousVegetation Industrial Pasture PermanentCrop SeaLake
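The closed/open partitions in Table 3 follow fixed proportions with repeated random draws. A minimal sketch of such a split is shown below, using the UCM class list and the 70/30 ratio of UCM-7/3; the seed handling is illustrative and not the exact randomization used in our experiments.

```python
# Randomly partition dataset classes into closed-set (ID) and open-set (OOD) groups.
import random

def split_classes(all_classes, closed_fraction=0.7, seed=0):
    rng = random.Random(seed)
    classes = sorted(all_classes)
    rng.shuffle(classes)
    n_closed = round(len(classes) * closed_fraction)
    return classes[:n_closed], classes[n_closed:]

ucm_classes = [
    "agricultural", "airplane", "baseballdiamond", "beach", "buildings",
    "chaparral", "denseresidential", "forest", "freeway", "golfcourse",
    "harbor", "intersection", "mediumresidential", "mobilehomepark",
    "overpass", "parkinglot", "river", "runway", "sparseresidential",
    "storagetanks", "tenniscourt",
]
closed_set, open_set = split_classes(ucm_classes, closed_fraction=0.7, seed=0)  # 15 ID / 6 OOD classes
```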
Table 4. Specific categorization of Semi-OOD and Near-OOD classes across UCM, AID, and EuroSAT.
ID Dataset | OOD Dataset | Semi-OOD Classes | Near-OOD Classes
UCM | RSI-CB256 | sea desert snow-mountain mangrove sparse-forest bare-land hirst sandbeach sapling artificial-grassland shrubwood mountain dam pipeline river-protection-forest container stream avenue lakeshore bridge | airport-runway residents marina crossroads green-farmland town parkinglot river forest coastline airplane dry-farm storage-room city-building highway
AID | NWPU-RESISC45 | snowberg wetland intersection runway island cloud basketball-court lake golf-course sea-ice roundabout mobile-home-park freeway terrace airplane thermal-power-station ship circular-farmland railway chaparral | parking-lot desert airport tennis-court church mountain medium-residential sparse-residential commercial-area river palace forest dense-residential storage-tank ground-track-field stadium railway-station meadow baseball-diamond overpass harbor industrial-area bridge beach rectangular-farmland
EuroSAT | RSI-CB128 | sea sparse-forest residents green-farmland river natural-grassland forest dry-farm city-building highway | turning-circle fork-road desert snow-mountain mangrove airport-runway bare-land hirst sandbeach marina crossroads sapling artificial-grassland shrubwood mountain town dam parkinglot rail city-avenue coastline tower city-green-tree mountain-road pipeline river-protection-forest container stream grave avenue storage-room overpass lakeshore city-road bridge
Table 5. Average AUROC scores (%) on the nine OSR benchmarks, calculated over seven runs and five random category splits. Bold highlights indicate the top three averages per benchmark.
Benchmark | MSP | VIM | KNN | ConfBranch | LogitNorm | G-ODIN | OE | MCD | MCDropout | Tempscaling
UCM-7/3 | 94.53 | 95.01 | 94.78 | 58.61 | 94.48 | 70.92 | 93.43 | 83.86 | 94.69 | 94.16
UCM-6/4 | 94.84 | 93.85 | 94.15 | 52.31 | 92.25 | 67.85 | 91.78 | 81.95 | 92.67 | 93.18
UCM-5/5 | 93.59 | 92.61 | 93.25 | 50.48 | 90.15 | 65.67 | 90.66 | 80.75 | 92.21 | 91.51
AID-7/3 | 94.80 | 95.26 | 95.62 | 75.18 | 93.21 | 80.84 | 92.54 | 88.42 | 93.96 | 94.30
AID-6/4 | 93.51 | 95.28 | 95.76 | 67.22 | 93.72 | 80.50 | 92.78 | 88.97 | 93.88 | 93.59
AID-5/5 | 91.70 | 94.03 | 94.74 | 59.07 | 92.79 | 79.45 | 92.29 | 88.43 | 93.54 | 92.93
EuroSAT-7/3 | 94.04 | 94.54 | 92.81 | 73.63 | 96.60 | 83.37 | 94.24 | 91.78 | 92.75 | 94.27
EuroSAT-6/4 | 91.40 | 94.95 | 90.53 | 73.44 | 96.23 | 78.60 | 91.55 | 90.77 | 90.61 | 91.90
EuroSAT-5/5 | 90.25 | 94.56 | 89.65 | 72.98 | 94.38 | 72.87 | 89.49 | 89.66 | 88.46 | 90.96
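The AUROC values in Table 5 are computed from the per-sample detection scores of ID and OOD test images; a minimal sketch with scikit-learn is given below. The label convention, with ID as the positive class and higher scores indicating ID, is an assumption consistent with common practice rather than a statement of our exact pipeline.

```python
# AUROC for OOD detection: rank ID (positive) against OOD (negative) samples by score.
import numpy as np
from sklearn.metrics import roc_auc_score

def ood_auroc(id_scores, ood_scores):
    scores = np.concatenate([id_scores, ood_scores])
    labels = np.concatenate([np.ones(len(id_scores)), np.zeros(len(ood_scores))])
    return roc_auc_score(labels, scores)
```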
Table 6. Average FPR@95 scores (%) on the nine OSR benchmarks, calculated over seven runs and five random category splits. Bold highlights indicate the top three averages per benchmark.
Benchmark | MSP | VIM | KNN | ConfBranch | LogitNorm | G-ODIN | OE | MCD | MCDropout | Tempscaling
UCM-7/3 | 26.33 | 22.15 | 17.33 | 88.00 | 21.00 | 75.67 | 36.67 | 72.33 | 22.67 | 25.33
UCM-6/4 | 32.31 | 23.46 | 25.38 | 89.62 | 28.46 | 72.31 | 41.54 | 75.38 | 41.92 | 29.62
UCM-5/5 | 35.82 | 28.18 | 31.55 | 87.73 | 32.95 | 73.18 | 45.00 | 79.55 | 58.64 | 34.27
AID-7/3 | 22.20 | 17.58 | 19.07 | 70.52 | 20.44 | 56.89 | 22.70 | 36.00 | 27.92 | 24.30
AID-6/4 | 24.34 | 22.83 | 23.93 | 87.24 | 23.42 | 60.14 | 24.05 | 33.59 | 25.95 | 23.97
AID-5/5 | 28.34 | 25.63 | 27.12 | 91.85 | 27.65 | 66.60 | 27.96 | 42.69 | 28.95 | 23.02
EuroSAT-7/3 | 39.57 | 23.86 | 33.73 | 80.51 | 18.63 | 69.49 | 35.32 | 36.27 | 36.78 | 55.63
EuroSAT-6/4 | 47.74 | 23.47 | 53.44 | 79.91 | 20.91 | 75.76 | 49.53 | 40.26 | 36.15 | 57.59
EuroSAT-5/5 | 53.79 | 22.43 | 78.00 | 83.54 | 26.93 | 80.57 | 67.39 | 61.04 | 46.43 | 61.24
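FPR@95 reports the fraction of OOD samples that are still accepted when the score threshold is set so that 95% of ID samples are retained; a minimal numpy sketch under the same score convention as above:

```python
import numpy as np

def fpr_at_95_tpr(id_scores, ood_scores):
    """False positive rate on OOD samples at the threshold that keeps 95% of ID samples."""
    threshold = np.percentile(id_scores, 5)  # 95% of ID scores lie at or above this value
    return float(np.mean(np.asarray(ood_scores) >= threshold))
```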
Table 7. Average AUPR-IN scores (%) on the nine OSR benchmarks, calculated over seven runs and five random category splits. Bold highlights indicate the top three averages per benchmark.
Benchmark | MSP | VIM | KNN | ConfBranch | LogitNorm | G-ODIN | OE | MCD | MCDropout | Tempscaling
UCM-7/3 | 85.62 | 90.23 | 89.61 | 36.27 | 85.95 | 47.19 | 91.81 | 68.33 | 87.00 | 85.75
UCM-6/4 | 88.20 | 92.51 | 91.14 | 35.96 | 87.01 | 55.95 | 96.97 | 71.52 | 89.47 | 90.35
UCM-5/5 | 89.88 | 94.65 | 92.89 | 48.28 | 85.84 | 64.18 | 97.14 | 76.46 | 92.09 | 93.90
AID-7/3 | 86.91 | 90.90 | 92.46 | 44.47 | 87.29 | 63.03 | 87.91 | 77.78 | 88.78 | 86.98
AID-6/4 | 90.09 | 94.44 | 94.04 | 38.67 | 90.62 | 68.71 | 89.22 | 80.97 | 91.92 | 90.25
AID-5/5 | 93.68 | 94.13 | 94.72 | 58.60 | 93.31 | 71.54 | 92.11 | 84.45 | 93.55 | 94.63
EuroSAT-7/3 | 81.03 | 85.80 | 86.77 | 57.06 | 88.37 | 54.39 | 78.98 | 75.54 | 77.33 | 82.13
EuroSAT-6/4 | 88.20 | 88.92 | 87.31 | 65.75 | 92.87 | 69.26 | 86.07 | 81.82 | 84.39 | 87.57
EuroSAT-5/5 | 94.01 | 93.56 | 89.95 | 72.54 | 92.24 | 78.51 | 93.98 | 90.59 | 90.99 | 93.74
Table 8. Average AUPR-OUT scores (%) on the nine OSR benchmarks, calculated over seven runs and five random category splits. Bold highlights indicate the top three averages per benchmark.
Benchmark | MSP | VIM | KNN | ConfBranch | LogitNorm | G-ODIN | OE | MCD | MCDropout | Tempscaling
UCM-7/3 | 97.72 | 96.80 | 98.09 | 76.77 | 97.67 | 85.40 | 97.10 | 89.70 | 97.72 | 97.54
UCM-6/4 | 94.64 | 95.91 | 96.37 | 67.40 | 94.17 | 74.69 | 95.84 | 86.45 | 94.52 | 94.44
UCM-5/5 | 90.31 | 93.54 | 94.70 | 57.14 | 90.77 | 68.95 | 91.54 | 81.83 | 90.57 | 91.30
AID-7/3 | 96.90 | 97.18 | 97.51 | 76.81 | 96.97 | 91.83 | 97.23 | 95.12 | 96.92 | 97.04
AID-6/4 | 94.70 | 96.49 | 95.26 | 69.28 | 94.75 | 85.55 | 95.05 | 92.73 | 94.63 | 94.68
AID-5/5 | 94.89 | 94.72 | 94.58 | 60.37 | 94.07 | 78.22 | 93.65 | 90.10 | 92.94 | 95.19
EuroSAT-7/3 | 93.95 | 97.44 | 96.43 | 83.25 | 97.68 | 84.46 | 94.48 | 91.68 | 95.69 | 94.02
EuroSAT-6/4 | 93.97 | 95.77 | 93.26 | 80.07 | 94.67 | 82.32 | 94.41 | 91.06 | 92.66 | 94.64
EuroSAT-5/5 | 94.36 | 94.53 | 91.76 | 75.12 | 92.41 | 81.14 | 94.88 | 91.82 | 93.47 | 93.96
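AUPR-IN and AUPR-OUT differ only in which side is treated as the positive class. The sketch below uses scikit-learn's average precision as a stand-in for the area under the precision-recall curve; this estimator choice is an assumption, not a statement of the exact implementation used here.

```python
# AUPR-IN treats ID as positive; AUPR-OUT treats OOD as positive (scores negated so
# that higher values indicate the positive class in both cases).
import numpy as np
from sklearn.metrics import average_precision_score

def aupr_in(id_scores, ood_scores):
    scores = np.concatenate([id_scores, ood_scores])
    labels = np.concatenate([np.ones(len(id_scores)), np.zeros(len(ood_scores))])
    return average_precision_score(labels, scores)

def aupr_out(id_scores, ood_scores):
    scores = -np.concatenate([id_scores, ood_scores])
    labels = np.concatenate([np.zeros(len(id_scores)), np.ones(len(ood_scores))])
    return average_precision_score(labels, scores)
```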
Table 9. Average ID-ACC scores (%) on the nine OSR benchmarks, calculated over seven runs and five random category splits. Bold highlights indicate the top three averages per benchmark.
Benchmark | MSP | VIM | KNN | ConfBranch | LogitNorm | G-ODIN | OE | MCD | MCDropout | Tempscaling
UCM-7/3 | 98.67 | 98.67 | 98.67 | 99.00 | 99.00 | 83.00 | 98.00 | 92.00 | 98.67 | 99.00
UCM-6/4 | 99.23 | 99.23 | 99.23 | 98.46 | 98.46 | 81.15 | 98.46 | 92.31 | 99.23 | 98.85
UCM-5/5 | 99.55 | 99.55 | 99.55 | 98.64 | 98.64 | 82.27 | 97.73 | 89.09 | 99.09 | 98.64
AID-7/3 | 97.08 | 97.08 | 97.08 | 96.80 | 96.24 | 86.21 | 96.10 | 91.57 | 96.52 | 97.14
AID-6/4 | 97.25 | 97.25 | 97.25 | 97.08 | 97.25 | 84.88 | 95.96 | 93.30 | 96.39 | 96.99
AID-5/5 | 98.81 | 98.81 | 98.81 | 99.11 | 98.91 | 89.53 | 98.42 | 95.45 | 98.91 | 99.01
EuroSAT-7/3 | 98.84 | 98.84 | 98.84 | 98.95 | 98.70 | 95.68 | 98.51 | 97.51 | 97.22 | 98.92
EuroSAT-6/4 | 99.62 | 99.62 | 99.62 | 99.62 | 99.41 | 96.56 | 99.03 | 98.03 | 96.65 | 99.56
EuroSAT-5/5 | 99.54 | 99.54 | 99.54 | 99.43 | 99.46 | 95.82 | 99.25 | 98.71 | 99.32 | 99.50
Table 10. Average computation time (seconds) on the nine OSR benchmarks, calculated over seven runs and five random category splits. Bold highlights indicate the top three averages per benchmark.
Benchmark | MSP | VIM | KNN | ConfBranch | LogitNorm | G-ODIN | OE | MCD | MCDropout | Tempscaling
UCMAvg.444668744638293015727365
AIDAvg.101010257026822397102057370264412
EuroSATAvg.5556988107324832569940216
Table 11. Average AUROC scores (%) on the nine OOD benchmarks, calculated over seven runs. Bold highlights indicate the top three averages per benchmark.
Benchmark | MSP | VIM | KNN | ConfBranch | LogitNorm | G-ODIN | OE | MCD | MCDropout | Tempscaling
UCM Semi-OOD | 79.46 | 85.43 | 45.41 | 66.07 | 79.70 | 66.13 | 77.43 | 68.03 | 79.46 | 78.93
UCM Near-OOD | 89.68 | 87.11 | 66.92 | 83.27 | 89.96 | 67.10 | 93.07 | 83.69 | 88.13 | 88.80
UCM Far-OOD | 97.03 | 99.54 | 14.72 | 54.75 | 97.14 | 78.02 | 98.48 | 76.07 | 97.98 | 97.22
AID Semi-OOD | 78.60 | 82.68 | 37.73 | 52.28 | 78.76 | 66.98 | 76.64 | 70.63 | 75.50 | 78.95
AID Near-OOD | 91.66 | 96.08 | 30.81 | 55.04 | 92.05 | 77.01 | 88.47 | 85.22 | 91.10 | 91.90
AID Far-OOD | 95.88 | 99.64 | 26.36 | 54.85 | 96.58 | 82.10 | 91.35 | 91.25 | 96.60 | 95.72
EuroSAT Semi-OOD | 93.43 | 98.54 | 96.40 | 86.89 | 93.98 | 90.73 | 99.55 | 91.84 | 84.60 | 92.79
EuroSAT Near-OOD | 89.12 | 97.70 | 95.37 | 85.24 | 89.39 | 87.44 | 98.84 | 94.95 | 71.20 | 90.72
EuroSAT Far-OOD | 95.59 | 99.95 | 99.35 | 65.29 | 96.06 | 95.52 | 99.98 | 97.26 | 44.76 | 82.26
Table 12. Average FPR@95 scores (%) on the nine OOD benchmarks, calculated over seven runs. Bold highlights indicate the top three averages per benchmark.
Benchmark | MSP | VIM | KNN | ConfBranch | LogitNorm | G-ODIN | OE | MCD | MCDropout | Tempscaling
UCM Semi-OOD | 76.90 | 59.29 | 98.33 | 89.05 | 75.95 | 82.62 | 100.00 | 88.10 | 72.38 | 76.90
UCM Near-OOD | 36.19 | 57.86 | 95.24 | 54.76 | 35.00 | 80.71 | 24.52 | 53.10 | 43.33 | 40.95
UCM Far-OOD | 12.06 | 1.98 | 100.00 | 85.87 | 11.03 | 68.17 | 6.27 | 96.83 | 9.37 | 10.95
AID Semi-OOD | 75.70 | 59.20 | 98.80 | 93.60 | 75.65 | 86.30 | 81.70 | 88.20 | 78.55 | 76.55
AID Near-OOD | 31.50 | 16.35 | 98.85 | 90.60 | 31.90 | 65.95 | 51.90 | 52.85 | 38.35 | 30.30
AID Far-OOD | 14.78 | 1.48 | 96.22 | 87.15 | 13.30 | 57.47 | 35.23 | 32.30 | 14.72 | 15.52
EuroSAT Semi-OOD | 24.48 | 6.39 | 16.31 | 51.20 | 22.31 | 30.91 | 3.07 | 33.07 | 100.00 | 34.48
EuroSAT Near-OOD | 100.00 | 12.19 | 23.74 | 61.78 | 100.00 | 56.15 | 6.37 | 17.43 | 100.00 | 100.00
EuroSAT Far-OOD | 15.34 | 0.14 | 3.70 | 81.74 | 13.75 | 18.25 | 1.06 | 8.28 | 100.00 | 52.31
Table 13. Average AUPR-IN scores (%) on the nine OOD benchmarks, calculated over seven runs. Bold highlights indicate the top three averages per benchmark.
Benchmark | MSP | VIM | KNN | ConfBranch | LogitNorm | G-ODIN | OE | MCD | MCDropout | Tempscaling
UCM Semi-OOD | 98.85 | 99.18 | 96.10 | 97.77 | 98.86 | 97.75 | 98.80 | 97.94 | 98.87 | 98.83
UCM Near-OOD | 99.61 | 99.53 | 98.52 | 99.33 | 99.62 | 98.45 | 99.73 | 99.32 | 99.56 | 99.58
UCM Far-OOD | 99.67 | 99.98 | 85.68 | 91.82 | 99.68 | 96.86 | 99.86 | 97.44 | 99.79 | 99.70
AID Semi-OOD | 96.56 | 97.17 | 86.78 | 90.85 | 96.65 | 94.18 | 96.28 | 94.88 | 96.14 | 96.66
AID Near-OOD | 98.37 | 99.29 | 80.06 | 88.46 | 98.51 | 95.04 | 97.82 | 97.02 | 98.40 | 98.48
AID Far-OOD | 97.90 | 99.91 | 68.70 | 77.29 | 98.29 | 91.47 | 96.28 | 95.69 | 98.49 | 97.92
EuroSAT Semi-OOD | 95.79 | 99.15 | 98.05 | 91.52 | 96.16 | 93.32 | 99.78 | 94.80 | 90.30 | 95.68
EuroSAT Near-OOD | 96.86 | 99.36 | 98.67 | 95.25 | 96.94 | 95.63 | 98.71 | 98.14 | 90.72 | 97.25
EuroSAT Far-OOD | 98.59 | 99.99 | 99.86 | 88.94 | 98.74 | 98.30 | 98.89 | 98.86 | 65.43 | 96.77
Table 14. Average AUPR-OUT scores (%) on the nine OOD benchmarks, calculated over seven runs. Bold highlights indicate the top three averages per benchmark.
Benchmark | MSP | VIM | KNN | ConfBranch | LogitNorm | G-ODIN | OE | MCD | MCDropout | Tempscaling
UCM Semi-OOD | 11.92 | 31.86 | 3.33 | 6.86 | 12.23 | 9.89 | 15.55 | 8.02 | 12.93 | 11.85
UCM Near-OOD | 56.90 | 31.53 | 4.24 | 28.11 | 57.85 | 9.44 | 54.23 | 23.58 | 47.50 | 45.08
UCM Far-OOD | 75.01 | 95.94 | 4.00 | 9.40 | 76.33 | 19.47 | 85.98 | 12.90 | 80.50 | 76.94
AID Semi-OOD | 28.93 | 45.53 | 7.56 | 11.12 | 28.48 | 17.83 | 24.81 | 18.65 | 25.26 | 29.37
AID Near-OOD | 69.64 | 86.17 | 8.41 | 15.60 | 68.22 | 39.14 | 54.57 | 53.09 | 64.15 | 69.76
AID Far-OOD | 86.77 | 98.28 | 17.69 | 26.50 | 87.83 | 53.38 | 67.37 | 74.31 | 85.86 | 84.70
EuroSAT Semi-OOD | 89.32 | 97.56 | 93.67 | 75.92 | 90.19 | 84.58 | 99.12 | 86.12 | 77.53 | 87.34
EuroSAT Near-OOD | 67.80 | 92.70 | 87.99 | 60.92 | 67.47 | 66.33 | 95.29 | 88.96 | 51.45 | 72.84
EuroSAT Far-OOD | 83.78 | 99.64 | 95.82 | 26.40 | 85.04 | 85.67 | 98.77 | 93.42 | 38.47 | 59.12
Table 15. Average ID-ACC scores (%) on the nine OOD benchmarks, calculated over seven runs. Bold highlights indicate the top three averages per benchmark.
Benchmark | MSP | VIM | KNN | ConfBranch | LogitNorm | G-ODIN | OE | MCD | MCDropout | Tempscaling
UCM (ID) | 98.33 | 98.33 | 98.33 | 98.10 | 98.33 | 83.81 | 97.38 | 90.24 | 98.10 | 98.33
AID (ID) | 96.35 | 96.35 | 96.35 | 96.85 | 96.45 | 84.75 | 95.40 | 92.00 | 96.25 | 96.50
EuroSAT (ID) | 98.28 | 98.28 | 98.28 | 98.39 | 98.30 | 94.54 | 97.81 | 96.85 | 71.48 | 98.26
Table 16. Average computation time (seconds) on the nine OOD benchmarks, calculated over seven runs. Bold highlights indicate the top three averages per benchmark.
Benchmark | MSP | VIM | KNN | ConfBranch | LogitNorm | G-ODIN | OE | MCD | MCDropout | Tempscaling
UCMSemi+Near+Far11212314211919871244411629401266114
AIDSemi+Near+Far1181582213705352576517,88512,3103822117
EuroSATSemi+Near+Far446216112681128125367635699121844
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
