Article

Hyperspectral Anomaly Detection Using Spatial–Spectral-Based Union Dictionary and Improved Saliency Weight

School of Aerospace Science and Technology, Xidian University, Xi’an 710126, China
* Author to whom correspondence should be addressed.
These authors contributed equally to this work.
Remote Sens. 2023, 15(14), 3609; https://doi.org/10.3390/rs15143609
Submission received: 2 June 2023 / Revised: 10 July 2023 / Accepted: 14 July 2023 / Published: 19 July 2023
(This article belongs to the Special Issue Computational Intelligence in Hyperspectral Remote Sensing)

Abstract:
Hyperspectral anomaly detection (HAD), which is widely used in military and civilian fields, aims to detect the pixels with large spectral deviation from the background. Recently, collaborative representation using a union dictionary (CRUD) has proved effective for HAD. However, the existing CRUD detectors generally use only the spatial or spectral information to construct the union dictionary (UD), which can lead to suboptimal performance and limits their use in practical scenarios. Additionally, anomalies can be treated as salient relative to the background in a hyperspectral image (HSI). In this article, a HAD method using a spatial–spectral-based UD and improved saliency weight (SSUD-ISW) is proposed. To construct a robust UD for each testing pixel, a spatial-based detector, a spectral-based detector and superpixel segmentation are jointly considered to yield the background set and anomaly set, which provide pure and representative pixels to form a robust UD. Unlike the conventional operation, which uses dual windows to construct the background dictionary in a local region and employs the RX detector to construct the anomaly dictionary in a global scope, we developed a robust UD construction strategy in a nonglobal range by sifting the pixels closest to the testing pixel from the background set and anomaly set to form the UD. With a preconstructed UD, a CRUD is performed, and the product of the anomaly dictionary and the corresponding representation coefficient is explored to yield the response map. Moreover, an improved saliency weight is proposed to fully mine the saliency characteristic of the anomalies. To further improve the performance, the response map and saliency weight are combined with a nonlinear fusion strategy.
Extensive experiments performed on five datasets (i.e., the Salinas, Texas Coast, Gainesville, San Diego and SpecTIR datasets) demonstrate that the proposed SSUD-ISW detector achieves satisfactory AUCdf values (i.e., 0.9988, 0.9986, 0.9939, 0.9945 and 0.9997), compared to the competing detectors, whose best AUCdf values are 0.9938, 0.9956, 0.9833, 0.9919 and 0.9991.

1. Introduction

A hyperspectral image (HSI) with plenty of narrow and continuous bands can provide rich spectral and spatial characteristics, and it may be used to discriminate various materials in different applications, such as target detection [1], image classification [2,3], and anomaly detection [4,5]. Among them, hyperspectral anomaly detection (HAD), which aims to detect the pixels with large spectral deviation from the background [6], has attracted considerable attention in relation to space exploration, battlefield reconnaissance and so on. Compared with hyperspectral target detection, in which prior spectral information about the target is known, HAD is more challenging, as no prior information about the anomaly can be employed.
Recently, the collaborative representation detectors using the union dictionary have attracted widespread attention from researchers [7,8]. Although a tremendous effort has been put into the CR detectors with union dictionary, these CR detectors use only the spatial or spectral information to construct the union dictionary, which possibly results in a suboptimal performance.
In addition to the widely used CR detectors with union dictionary, the visual saliency detection technique has also received much attention in recent years [9,10]. The visual saliency detection technique, which aims to highlight the most attractive and distinctive regions in a scene, is valuable for mining the potential anomalies in an HSI, thus improving the detection performance of HAD.
In this article, a HAD method using a spatial–spectral-based union dictionary and improved saliency weight (SSUD-ISW) is proposed. Specifically, the morphological-based spatial branch and the Mahalanobis distance-based spectral branch are first exploited to yield response maps, which are jointly used to construct the background set and anomaly set with the aid of the superpixels. Then, several pixels are separately selected from the background set and anomaly set to construct the union dictionary. Next, the testing pixel is expressed by the constructed union dictionary, and the response map is obtained from the product of the anomaly dictionary and the corresponding representation coefficient. In addition, to fully mine the saliency characteristic of the anomalies, an improved saliency weight is developed. Finally, the response map obtained by the CRUD and the saliency weight are combined with a nonlinear fusion strategy to further highlight the anomalies and suppress the background. The main contributions of this work are described as follows:
(1) To acquire a better representation effect, a spatial–spectral-based union dictionary construction strategy is proposed. Unlike the conventional operation that uses the sliding concentric dual windows to construct the background dictionary in a local region and employs the RX detector to construct the anomaly dictionary in a global scope, we innovatively construct the background set and anomaly set by means of the spatial and spectral detectors, providing the pixels in a nonglobal range to form the union dictionary for each testing pixel.
(2) Inspired by the human visual attention, an improved saliency weight is proposed to further enhance the performance of the proposed SSUD-ISW detector. Compared with the context-aware saliency weight, the improved saliency weight simultaneously considers the influence arising from both the background and the anomaly and ignores the spatial information, which may otherwise result in unstable performance.
The rest of this article is organized as follows. Section 2 gives the related work. Section 3 introduces the preliminary concepts. The methodology is presented in Section 4. Section 5 introduces the experiments and results. Section 6 gives the discussion. The conclusions are given in Section 7.

2. Related Work

2.1. Statistical-Theory-Based Anomaly Detection

To detect anomalies, the statistical theory (ST)-based method was proposed first. The classical ST-based method is the RX detector [11], which hypothesizes that the background obeys a multivariate Gaussian distribution. It uses all pixels of the HSI to calculate the mean vector and covariance matrix to model the background and utilizes the Mahalanobis distance to measure the anomaly degree of each pixel. In addition to the global RX detector mentioned above, the local RX detector (LRXD) [12], which employs the local pixels selected in a dual-window manner for background modeling, was also proposed. Nevertheless, the background distribution assumption is not suitable for complicated hyperspectral scenes, and the background modeling is susceptible to contamination by the anomalies. To address these issues, some variants of the RX detector, such as the random-selection-based anomaly detector (RS-AD) [13], the fractional Fourier entropy RX detector (FrFE-RXD) [14], etc., were developed in subsequent work. However, the statistical distribution assumption is often not ideal for hyperspectral scenes with complicated land-cover distributions [15].

2.2. Representation-Based Anomaly Detection

Recently, the representation-based detectors have achieved unprecedented success. To be concrete, the representation-based detectors are generally classified into three categories: low-rank representation (LRR), sparse representation (SR) and collaborative representation (CR). The LRR holds that the HSI data can be decomposed into a low-rank background and a sparse anomaly. Zhang et al. [16] proposed the low-rank and sparse matrix decomposition (LSMAD) detector, an early work leveraging the idea of LRR. Furthermore, the low-rank component was replaced by the product of the background dictionary and a representation matrix in [17]. To achieve better background and anomaly modeling, Huyan et al. [18] jointly employed dual dictionaries to perform the representation process. Moreover, other researchers have made tremendous efforts to improve the LRR performance, which can be classified into two categories: constructing more robust dictionaries [19,20] and imposing valuable regularization terms onto the LRR model [21,22].
The SR considers that the testing pixel can be expressed by several atoms of an overcomplete dictionary. The background joint SR detector (BJSRD) [23], which models the background by selecting the valuable bases covering all local areas, is a typical SR-based detector. Additionally, similar to the LRR, the improved versions of the SR also focus on constructing a robust overcomplete dictionary or imposing regularization terms on the SR model. To achieve better representation, Ma et al. [24] proposed discriminative feature learning with multiple-dictionary SR (DFL-MDSR) for HAD. In terms of introducing regularization terms, researchers have proposed some meaningful works to perform HAD, such as the archetypal analysis and structured SR model (AA-SSR) [25] and the constrained SR model (CSR) [26].
Similar to the SR, the CR holds that a background pixel under test can be represented by the background pixels (also called the “background dictionary”) in the local region, while this cannot be achieved for anomalous pixels. The CR detector (CRD) proposed in [27] is the benchmark work introducing CR into HAD. Nevertheless, the background dictionary is inevitably contaminated by the anomalies in scenes with a complex background or large anomalies, since the ideal scale of the sliding concentric dual windows is hard to set [28]. To cope with this problem, some background purification works that reject potential anomalous pixels occurring in the background dictionary were proposed, such as the CR detector with outlier removal (CRD-OR) [29], the CR-based with outlier removal anomaly detector (CR-ORAD) [30] and the density peak guided CR (DPGCR) [31]. Additionally, some researchers consider the strategy of sifting background pixels for the background dictionary (i.e., sifting the background pixels within the sliding concentric dual windows) to be unreliable. To address this issue, some works focusing on the background dictionary construction approach, such as the relaxed CR (RCR) [28] and the SR and CR (SR-CR) [5], were proposed. Furthermore, instead of using only the background dictionary to perform the CR, Chang et al. [7] proposed a nonnegative-constrained joint CR with union dictionary (NJCR) to perform HAD. Similarly, a CR detector with union dictionary was developed for HAD by considering the intrinsic nonlinear characteristics of HSIs [8]. Although tremendous effort has been put into the CR detectors with union dictionary, these CR detectors use only the spatial or spectral information to construct the union dictionary, which possibly results in a suboptimal performance.

2.3. Deep-Learning-Based Anomaly Detection

In recent years, the deep learning (DL) technique has become the mainstream method for various tasks, such as co-saliency detection [32,33], object detection [34,35,36,37], anomaly detection [18,38], etc. To fully exploit the potential of DL to extract abstract, hierarchical and high-level features, the DL technique has been introduced into the field of HAD. The transferred deep convolutional neural network (DCNN) proposed in [39] was a valuable exploration of introducing the CNN to solve the problem of HAD. Subsequently, Song et al. [40] developed a novel AD detector that combines the CNN and low-rank representation (LRR). Instead of the aforementioned methods that utilize the CNN to achieve HAD, some researchers focused on deep models composed of fully connected layers. Zhang and Cheng [41] developed an adaptive subspace model via a stacked autoencoder (SAE), which adopts the deep features of differences obtained from the SAE model to perform HAD. To make the extracted features more discriminative, a spectral constrained adversarial AE (SCAAE), proposed in [42], was developed to suppress the variational background while preserving the anomaly. Jiang et al. [43] similarly designed a novel generative adversarial network (GAN)-based model for HAD by imposing shrink and consistency-enhanced constraints, with the aim of learning a discriminative background reconstruction while the anomalies are suppressed. To keep the local intrinsic structure of the observed data in the latent features, the embedding manifold of the AE was explored by Lu et al. [44] to perform HAD. Similarly, a robust graph AE (RGAE) [45] was proposed to retain the geometric structure of the HSI data in the latent feature with the aid of the superpixels. In addition, other techniques, such as the deep belief network [46], variational AE [47], transformer [48], etc., were introduced into HAD.

3. Preliminary Concepts

3.1. Collaborative Representation Model

The CR, which was proposed in [27], assumes that the background pixel under test (PUT) can be represented by the background pixels (also known as the “background dictionary”) in the local region, while this cannot be achieved for anomalous pixels. Let $x$ represent the PUT, $D$ denote the background dictionary and $\alpha$ signify the representation vector. The CR determines the representation coefficient vector $\alpha$ by minimizing both $\|x-D\alpha\|_2^2$ and $\|\alpha\|_2^2$:

$$\arg\min_{\alpha}\ \|x-D\alpha\|_2^2+\beta\|\alpha\|_2^2 \tag{1}$$

where $\beta$ denotes a regularization parameter. To account for the importance of the pixels in $D$, a distance-weight-based optimization objective function is introduced:

$$\arg\min_{\alpha}\ \|x-D\alpha\|_2^2+\beta\|\Gamma\alpha\|_2^2 \tag{2}$$

where $\Gamma=\mathrm{diag}(\|x-d_1\|_2,\ \|x-d_2\|_2,\ \ldots,\ \|x-d_m\|_2)$ refers to the diagonal regularization matrix; $\mathrm{diag}(\cdot)$ is the operator that forms a diagonal matrix; and $d_1, d_2, \ldots, d_m$ stand for the atoms of $D$ (i.e., the columns of $D$). Then, the solution of Equation (2) is expressed as follows:

$$\hat{\alpha}=(D^{T}D+\beta\Gamma^{T}\Gamma)^{-1}D^{T}x \tag{3}$$

Finally, the $l_2$ norm of the residual vector (i.e., $x-D\hat{\alpha}$) is used to measure the anomaly degree of the PUT:

$$r=\|x-D\hat{\alpha}\|_2 \tag{4}$$
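As a concrete illustration, Equations (2)–(4) amount to a weighted ridge-regression solve followed by a residual norm. The following sketch is our own minimal NumPy illustration with synthetic data (function names, dimensions and parameter values are assumptions, not from the paper): a pixel lying in the span of the background dictionary yields a small residual, while a spectrally deviant pixel yields a large one.

```python
import numpy as np

def cr_residual(x, D, beta=1e-2):
    """Distance-weighted CR (Eqs. (2)-(4)): solve for alpha, return residual r."""
    g = np.linalg.norm(x[:, None] - D, axis=0)                         # distances to atoms
    alpha = np.linalg.solve(D.T @ D + beta * np.diag(g**2), D.T @ x)   # Eq. (3)
    return np.linalg.norm(x - D @ alpha)                               # Eq. (4)

rng = np.random.default_rng(0)
D = rng.random((50, 10))          # 10 background atoms, 50 spectral bands
x_bg = D @ rng.random(10)         # background pixel: representable by D
x_anom = rng.random(50) * 5.0     # anomalous pixel: far from the span of D
r_bg, r_anom = cr_residual(x_bg, D), cr_residual(x_anom, D)
```

Because the anomalous pixel cannot be well reconstructed from the background atoms, its residual dominates, which is exactly the detection statistic of Equation (4).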

3.2. Context-Aware Saliency Detection

Inspired by human visual attention, the context-aware saliency detection method holds that areas with distinctive patterns or colors have high saliency, while blurred or homogeneous regions have low saliency [9]. On this basis, the anomalies can be considered to have high saliency relative to the background in an HSI [10].
Let $x$ and $x_s$ represent the PUT and the sth pixel among the surrounding pixels of the PUT, respectively. The context-aware saliency detection is formulated as follows:

$$d_{sal}(x,x_s)=\frac{d_{spe}(x,x_s)}{1+c\,d_{spa}(x,x_s)} \tag{5}$$

where $d_{spe}$ and $d_{spa}$ refer to the spectral distance and spatial distance, respectively, and $c$ denotes a constant. The spectral distance, $d_{spe}$, is represented by the following:

$$d_{spe}(x,x_s)=\arccos\!\left(\frac{\sum_{t=1}^{d}x^{t}x_s^{t}}{\sqrt{\sum_{t=1}^{d}(x^{t})^{2}}\,\sqrt{\sum_{t=1}^{d}(x_s^{t})^{2}}}\right) \tag{6}$$

where $d$ refers to the number of spectral bands. Similarly, the spatial distance, $d_{spa}$, is denoted by the following:

$$d_{spa}(x,x_s)=\sqrt{(x^{r}-x_s^{r})^{2}+(x^{c}-x_s^{c})^{2}} \tag{7}$$

where $(x^{r},x^{c})$ and $(x_s^{r},x_s^{c})$ stand for the position coordinates of $x$ and $x_s$, respectively.
Finally, the saliency weight of the PUT is expressed as follows:

$$d_{sal}^{fin}(x)=\frac{1}{m}\sum_{s=1}^{m}d_{sal}(x,x_s) \tag{8}$$

where $m$ refers to the number of surrounding pixels for the PUT.
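To make Equations (5)–(8) concrete, the sketch below (our own illustration; the names, the 3 × 3 neighborhood layout and the value of $c$ are assumptions) computes the context-aware saliency of a pixel against its eight surrounding pixels. A pixel that matches a homogeneous context gets a near-zero weight, while a spectrally distinct one scores higher.

```python
import numpy as np

def context_saliency(x, neighbors, pos, neighbor_pos, c=0.5):
    """Context-aware saliency of one pixel (Eqs. (5)-(8)).
    x: (b,) PUT spectrum; neighbors: (m, b); pos: (2,); neighbor_pos: (m, 2)."""
    # spectral angle distance, Eq. (6)
    cos = neighbors @ x / (np.linalg.norm(neighbors, axis=1) * np.linalg.norm(x))
    d_spe = np.arccos(np.clip(cos, -1.0, 1.0))
    # Euclidean spatial distance, Eq. (7)
    d_spa = np.linalg.norm(neighbor_pos - pos, axis=1)
    # Eq. (5), averaged over the m neighbors as in Eq. (8)
    return np.mean(d_spe / (1.0 + c * d_spa))

rng = np.random.default_rng(2)
base = rng.random(30)
neighbors = base + rng.normal(0, 1e-3, size=(8, 30))   # homogeneous context
neighbor_pos = np.array([[i, j] for i in range(3) for j in range(3)
                         if (i, j) != (1, 1)], dtype=float)
center = np.array([1.0, 1.0])
s_flat = context_saliency(base, neighbors, center, neighbor_pos)       # blends in
s_anom = context_saliency(rng.random(30), neighbors, center, neighbor_pos)  # stands out
```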

3.3. Nonlinear Fusion Strategy

In existing research on HAD, a nonlinear fusion strategy is generally adopted to combine two or more detection results so as to highlight the anomaly and suppress the background [42,49]. Taking detection results $d_1$ and $d_2$ as examples, the fused detection result, $d$, can be obtained by the following formula:

$$d=d_1\left(1-e^{-\gamma d_2}\right) \tag{9}$$

where $\gamma$ represents the fusion coefficient.
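The effect of Equation (9) can be seen with a few toy values (our own, for illustration only): a pixel must respond in both $d_1$ and $d_2$ to survive the fusion, so a response present in only one map is suppressed.

```python
import numpy as np

def nonlinear_fuse(d1, d2, gamma=5.0):
    """Eq. (9): keep d1's response only where d2 also responds."""
    return d1 * (1.0 - np.exp(-gamma * d2))

d1 = np.array([0.9, 0.9, 0.1])   # detector 1: strong response on pixels 0 and 1
d2 = np.array([0.8, 0.0, 0.8])   # detector 2: agrees only on pixels 0 and 2
d = nonlinear_fuse(d1, d2)       # only pixel 0 keeps a strong fused response
```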

4. Methodology

In this section, a HAD method using a spatial–spectral-based union dictionary and improved saliency weight (SSUD-ISW) is proposed, as illustrated in Figure 1. Specifically, the spatial-based and spectral-based detectors are first used to detect the potential anomalies, which are adopted to guide the construction of the background set and anomaly set with the aid of the superpixels (the details of the construction process can be seen in Figure 2 and Section 4.1.2). Then, for each testing pixel, several pixels are separately sifted from the background set and anomaly set to form the union dictionary. Next, the union-dictionary-based collaborative representation is modeled, and the corresponding response values are obtained from the product of the anomaly dictionary and the anomaly representation coefficient. In addition, for the testing pixel mentioned above, the improved saliency weight is further computed by considering the pixels chosen from the background set and anomaly set. Finally, a nonlinear fusion strategy is employed to combine the response values and the improved saliency weight for better performance.

4.1. Union-Dictionary-Based Collaborative Representation

4.1.1. CR Model via Union Dictionary

In the conventional CR-based detectors, only background pixels participate in the representation of the PUT, ignoring the valuable anomalous pixels. To address this concern, a CR model via the union dictionary is exploited, in which the union dictionary is formed by concatenating some background pixels (also known as the “background dictionary”) and anomalous pixels (also known as the “anomaly dictionary”). Correspondingly, the representation coefficient is composed of a background representation coefficient and an anomaly representation coefficient, expressed as follows:

$$\arg\min_{\alpha_i}\ \|x_i-D_i\alpha_i\|_2^2+\beta\|\alpha_i\|_2^2 \quad \mathrm{s.t.}\ \ D_i=[D_i^{B},\ D_i^{A}],\ \ \alpha_i=[(\alpha_i^{B})^{T},\ (\alpha_i^{A})^{T}]^{T} \tag{10}$$

where $x_i\in\mathbb{R}^{b\times 1}$ is the ith pixel in the HSI; $D_i\in\mathbb{R}^{b\times(k_B+k_A)}$, $D_i^{B}\in\mathbb{R}^{b\times k_B}$ and $D_i^{A}\in\mathbb{R}^{b\times k_A}$ are the union dictionary, background dictionary and anomaly dictionary belonging to the ith pixel, respectively; and $\alpha_i\in\mathbb{R}^{(k_B+k_A)\times 1}$, $\alpha_i^{B}\in\mathbb{R}^{k_B\times 1}$ and $\alpha_i^{A}\in\mathbb{R}^{k_A\times 1}$ stand for the representation vectors of the union dictionary, background dictionary and anomaly dictionary, respectively, where $b$ represents the number of spectral bands, and $k_B$ and $k_A$ separately refer to the number of atoms in $D_i^{B}$ and $D_i^{A}$.
Pixels with high similarity to the PUT are expected to have a large coefficient; otherwise, the coefficient should be small [27]. To this end, a distance-weighted Tikhonov regularization is imposed on the representation coefficient vector, which can be formulated by the following:

$$\Gamma_i=\mathrm{diag}\!\left(\|x_i-d_i^{b_1}\|_2,\ \ldots,\ \|x_i-d_i^{b_{k_B}}\|_2,\ \|x_i-d_i^{a_1}\|_2,\ \ldots,\ \|x_i-d_i^{a_{k_A}}\|_2\right) \tag{11}$$

where $d_i^{b_1},\ldots,d_i^{b_{k_B}}$ are the columns of $D_i^{B}$, and $d_i^{a_1},\ldots,d_i^{a_{k_A}}$ are the columns of $D_i^{A}$. Then, the weighted optimization problem becomes the following:

$$\arg\min_{\alpha_i}\ \|x_i-D_i\alpha_i\|_2^2+\beta\|\Gamma_i\alpha_i\|_2^2 \quad \mathrm{s.t.}\ \ D_i=[D_i^{B},\ D_i^{A}],\ \ \alpha_i=[(\alpha_i^{B})^{T},\ (\alpha_i^{A})^{T}]^{T} \tag{12}$$

The solution of problem (12) can be obtained by setting the derivative with respect to $\alpha_i$ to zero, which yields the following:

$$\hat{\alpha}_i=(D_i^{T}D_i+\beta\Gamma_i^{T}\Gamma_i)^{-1}D_i^{T}x_i \tag{13}$$

Once $\hat{\alpha}_i$ is obtained, the response value corresponding to the ith pixel in the HSI can be calculated by the following:

$$d_{CRUD}(x_i)=\|D_i^{A}\hat{\alpha}_i^{A}\|_2 \tag{14}$$

For a background PUT, the background representation coefficients are expected to be large, while the anomaly representation coefficients remain small. Conversely, for an anomalous PUT, the background representation coefficients remain small, while the anomaly representation coefficients do the opposite. Therefore, Equation (14) clearly yields a highly anomaly-related response value.
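This behavior can be checked with a small synthetic sketch of Equations (10)–(14) (our own NumPy illustration; the dictionary sizes, offsets and names are assumptions). A pixel explained by the anomaly dictionary yields a much larger response than one explained by the background dictionary.

```python
import numpy as np

def crud_response(x, D_B, D_A, beta=1e-2):
    """Union-dictionary CR (Eqs. (10)-(14)): response = ||D_A @ alpha_A||_2."""
    D = np.hstack([D_B, D_A])
    g = np.linalg.norm(x[:, None] - D, axis=0)          # Tikhonov weights, Eq. (11)
    alpha = np.linalg.solve(D.T @ D + beta * np.diag(g**2), D.T @ x)  # Eq. (13)
    alpha_A = alpha[D_B.shape[1]:]                      # anomaly part of the coefficient
    return np.linalg.norm(D_A @ alpha_A)                # Eq. (14)

rng = np.random.default_rng(3)
D_B = rng.random((40, 8))                 # 8 background atoms, 40 bands
D_A = rng.random((40, 3)) + 2.0           # 3 spectrally distinct anomaly atoms
x_bg = D_B @ rng.random(8)                # pixel explained by the background dictionary
x_an = D_A @ np.array([0.5, 0.3, 0.2])    # pixel explained by the anomaly dictionary
resp_bg = crud_response(x_bg, D_B, D_A)
resp_an = crud_response(x_an, D_B, D_A)
```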

4.1.2. Construction of Union Dictionary

In the conventional CR detectors, the purity of the background dictionary is key to their performance [29,30,50]. Similarly, it is also important to construct a pure union dictionary customized for the CR. To this end, some research works focus on how to construct a pure union dictionary, such as [7,8]. However, these detectors use only spatial or spectral information to construct the union dictionary, which possibly leads to a suboptimal performance.
To resolve the aforementioned problem, a spatial–spectral-based union dictionary construction approach is proposed. The whole procedure of the union dictionary construction approach is composed of four parts: (1) spatial-based detector; (2) spectral-based detector; (3) construction of background set and anomaly set; and (4) construction of union dictionary. The details are described as follows.
(1)
Spatial-Based Detector
To effectively detect anomalies, the spatial-based detector is exploited for HAD, as shown in Figure 2. First, to decrease the computational complexity, principal component analysis (PCA) is conducted to reduce the spectral dimension of the HSI, and the first three principal components are retained. Then, morphological processing (i.e., an opening operation and a closing operation) and a differential operation are carried out for each principal component, so as to detect the small dark areas and bright connected parts [51], which can be summarized as follows:

$$D(B^{q})=\left|B^{q}-M_{O}(B^{q})\right|+\left|M_{C}(B^{q})-B^{q}\right|,\quad q=1,2,3 \tag{15}$$

where $M_{O}(\cdot)$ and $M_{C}(\cdot)$ represent the opening and closing operations, respectively; $D(\cdot)$ denotes the differential operation; and $B^{q}$ refers to the qth band of the dimension-reduced HSI. Once the above procedure is finished, an element-wise average over the three generated differential maps yields the initial response map, $d_1^{0}$.
Then, we use the guided filter to rectify $d_1^{0}$, with the principal components $B^{q}$ (q = 1, 2, 3) acting as the guidance images, so as to keep pixels belonging to the same object at similar values. The details of the guided filter are as follows:

$$d_{1,i}=\bar{a}_i B_i^{q}+\bar{b}_i,\quad q=1,2,3,\quad i\in\omega_j \tag{16}$$

$$a_j=(\sigma_j^{2}+\varepsilon)^{-1}\left[\frac{1}{|\omega|}\sum_{i\in\omega_j}B_i^{q}d_{1,i}^{0}-\mu_j\bar{d}_{1,j}^{0}\right] \tag{17}$$

$$b_j=\bar{d}_{1,j}^{0}-a_j\mu_j \tag{18}$$

where $\bar{a}_i=(\sum_{j\in\omega_i}a_j)/|\omega|$ and $\bar{b}_i=(\sum_{j\in\omega_i}b_j)/|\omega|$, in which $|\omega|$ indicates the number of pixels in the local region $\omega_j$, centered at pixel $j$, of the guidance image $B^{q}$ (q = 1, 2, 3); $\varepsilon$ represents the regularization parameter; and $\mu_j$ and $\sigma_j^{2}$ are, respectively, the mean and variance of the guidance image within $\omega_j$. After the rectification process terminates, the spatial-based response map, $d_1$, is obtained by averaging the three rectified maps pixel by pixel.
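The open/close differential of Equation (15) can be sketched in plain NumPy (a minimal grey-scale morphology with a 3 × 3 window; the helper names are ours, and the PCA and guided-filter stages are omitted). A single-pixel bright target survives the opening difference, while the flat background cancels out.

```python
import numpy as np

def _win_reduce(img, k, fn):
    """Slide a k x k window over img and reduce with fn (min = erosion, max = dilation)."""
    p = k // 2
    pad = np.pad(img, p, mode='edge')
    H, W = img.shape
    stack = np.stack([pad[i:i + H, j:j + W] for i in range(k) for j in range(k)])
    return fn(stack, axis=0)

def morph_diff(band, k=3):
    """Eq. (15): |B - M_O(B)| + |M_C(B) - B| for one principal-component band."""
    opening = _win_reduce(_win_reduce(band, k, np.min), k, np.max)   # M_O = dilate(erode)
    closing = _win_reduce(_win_reduce(band, k, np.max), k, np.min)   # M_C = erode(dilate)
    return np.abs(band - opening) + np.abs(closing - band)

band = np.zeros((9, 9))
band[4, 4] = 1.0            # one pixel-sized bright "anomaly" on a flat background
resp = morph_diff(band)     # response concentrates on the implanted pixel
```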
(2)
Spectral-Based Detector
We employed the Mahalanobis distance to execute the spectral-based detection, as shown in Figure 2. For the HSI $X=[x_1,x_2,\ldots,x_u,\ldots,x_N]$, in which $N$ is the number of pixels, the anomaly degree of a testing pixel can be calculated by the following formula:

$$d_{2,u}=(x_u-\mu)^{T}\Sigma^{-1}(x_u-\mu),\quad u\in[1,N] \tag{19}$$

where $\mu$ and $\Sigma$ are, respectively, the mean vector and covariance matrix of the pixels in the HSI. Once all pixels in the HSI are computed, the spectral-based response map, $d_2$, is generated.
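Equation (19) is the global RX-style score; a minimal sketch with synthetic data (our own names and dimensions) shows an implanted deviant pixel receiving the largest score.

```python
import numpy as np

def mahalanobis_scores(X):
    """Eq. (19): anomaly degree of every pixel. X: (N, b) matrix of pixel spectra."""
    mu = X.mean(axis=0)                                  # global mean vector
    Sigma_inv = np.linalg.inv(np.cov(X, rowvar=False))   # inverse covariance
    Z = X - mu
    return np.einsum('nb,bc,nc->n', Z, Sigma_inv, Z)     # quadratic form per pixel

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 5))    # 200 Gaussian background pixels, 5 bands
X[17] += 8.0                     # implant one spectrally deviant pixel
d2 = mahalanobis_scores(X)       # the implanted pixel dominates the response map
```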
(3)
Construction of Background Set and Anomaly Set
Figure 2 illustrates the construction of the background set and anomaly set. Concretely, to comprehensively employ the spectral and spatial information, the element-wise product $d$ of $d_1$ and $d_2$ is computed, which highlights the anomaly and suppresses the background and acts as the director for the subsequent background set and anomaly set construction. Subsequently, a thresholding operation on $d$ is performed with the OTSU algorithm [51], yielding a binary map, $d^{bin}$. Finally, the background set and anomaly set are constructed by combining the superpixels, which are generated by simple linear iterative clustering (SLIC) [21] on the first three principal components, with $d^{bin}$. The details can be summarized as follows:

$$D^{B}=\left\{x_p^{c}\ \middle|\ \textstyle\sum_{t=1}^{S_p}d_t^{bin}=0,\ \forall p\right\} \tag{20}$$

$$D^{A}=\left\{x_{(i-1)W+j}\ \middle|\ d_{i,j}^{bin}=1,\ \forall(i,j)\right\} \tag{21}$$

where $D^{B}$ refers to the background set, and $D^{A}$ stands for the anomaly set; $p\in[1,n_S]$ signifies the index of the superpixels, with $n_S$ indicating the preconfigured number of superpixels; $S_p$ refers to the number of pixels belonging to the pth superpixel; $\sum_{t=1}^{S_p}d_t^{bin}$ stands for the sum of the entries in $d^{bin}$ that share the location of the pth superpixel; $x_p^{c}$ denotes the center of the pth superpixel; $i$ and $j$ represent the indices along the height $H$ and width $W$ of the HSI, respectively; and $x_{(i-1)W+j}$ is the pixel at position $(i,j)$ in $X$.
(4)
Construction of Union Dictionary
Once the background set and anomaly set are constructed, the next issue is to construct the background dictionary and anomaly dictionary for each PUT. Unlike the conventional operation that utilizes the sliding concentric dual windows to construct the background dictionary in the local region and employs the RX detector to construct the anomaly dictionary within a global scope, in this study, we developed a nonglobal strategy that separately sifts the $k_B$ and $k_A$ pixels closest to the PUT from the background set and anomaly set to form the background dictionary and anomaly dictionary. To be specific, the distance between the PUT $x_i$ and the pixel $d_B^{j}$ in $D^{B}$ can be computed by the following:

$$dist=\|x_i-d_B^{j}\|_2^{2},\quad i\in[1,N],\quad j\in[1,n_B] \tag{22}$$

where $n_B$ is the number of pixels in $D^{B}$. After sorting the distances computed by Equation (22) in ascending order, the first $k_B$ pixels are collected to act as the background dictionary, $D_i^{B}$. Similarly, the anomaly dictionary, $D_i^{A}$, can be constructed by selecting the $k_A$ closest pixels from the anomaly set. Finally, the union dictionary is constructed by concatenating $D_i^{B}$ and $D_i^{A}$ along the column direction, i.e., $D_i=[D_i^{B},\ D_i^{A}]$.
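The per-pixel selection amounts to a nearest-neighbor lookup into the two sets; the following is our own minimal NumPy sketch (set sizes and the $k_B$, $k_A$ values are arbitrary illustrations, not the paper's settings).

```python
import numpy as np

def build_union_dictionary(x, bg_set, an_set, kB=5, kA=2):
    """Nonglobal selection (Eq. (22)): take the kB / kA pixels from the background /
    anomaly sets that are closest to the PUT x. bg_set: (nB, b); an_set: (nA, b)."""
    idx_b = np.argsort(np.sum((bg_set - x) ** 2, axis=1))[:kB]
    idx_a = np.argsort(np.sum((an_set - x) ** 2, axis=1))[:kA]
    D_B = bg_set[idx_b].T            # (b, kB) background dictionary
    D_A = an_set[idx_a].T            # (b, kA) anomaly dictionary
    return np.hstack([D_B, D_A])     # union dictionary D_i = [D_i^B, D_i^A]

rng = np.random.default_rng(4)
bg_set = rng.random((100, 20))       # 100 background pixels, 20 bands
an_set = rng.random((10, 20)) + 1.0  # 10 anomaly pixels
x = rng.random(20)                   # the PUT
D = build_union_dictionary(x, bg_set, an_set, kB=5, kA=2)
```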

4.2. Improved Saliency Weight

As stated in Section 3.2, the anomalies have a higher saliency weight relative to the background in an HSI. Clearly, the context-aware saliency detection method can be exploited to compute the saliency weight of all pixels in an HSI. Unlike the conventional strategy that employs the surrounding pixels of the PUT to compute the saliency weight [10], here the pixels in the background set that are closest to the PUT are considered. However, the spatial distance between the PUT and the pixels sifted from the background set may be very large, which possibly causes unstable performance. To this end, the spatial distance in Equation (5) is omitted for better and more stable performance, and only the spectral distance is used to compute the saliency weight. Hence, an improved saliency weight related to the background is formulated as follows:

$$d_{sal}^{B}(x_i,D_i^{BS})=\frac{1}{k}\sum_{v=1}^{k}\|x_i-d_{iv}^{bs}\|_2 \tag{23}$$

where $k$ represents the number of pixels closest to the PUT sifted from the background set, which are used to compute the saliency weight; $x_i$ refers to the ith PUT in the HSI; and $D_i^{BS}=[d_{i1}^{bs},d_{i2}^{bs},\ldots,d_{ik}^{bs}]$ is the collection of pixels obtained from the background set. Obviously, the larger the distance, the higher the saliency weight. To illustrate the effect of the spatial distance introduced in Equation (5), the visualization of the saliency weights with and without the spatial distance is shown in the first two rows of Figure 3. It can be seen that the saliency weight without the spatial distance can effectively locate the anomalies compared with the saliency weight with the spatial distance, indicating that the spatial distance is detrimental to identifying the anomalies.
Similarly, inspired by the idea of the union dictionary, an improved saliency weight related to the anomaly is also considered. Clearly, a PUT having a large spectral distance to the pixels sifted from the anomaly set is expected to have a small saliency weight, and vice versa. Therefore, the improved saliency weight related to the anomaly can be described by the following:

$$d_{sal}^{A}(x_i,D_i^{AS})=-\frac{1}{k}\sum_{v=1}^{k}\|x_i-d_{iv}^{as}\|_2 \tag{24}$$

where $D_i^{AS}=[d_{i1}^{as},d_{i2}^{as},\ldots,d_{ik}^{as}]$ is the collection of pixels sifted from the anomaly set.
To comprehensively reveal the saliency weight of the PUT, the above two saliency weights are combined as follows:

$$d_{sal}(x_i,D_i^{BS},D_i^{AS})=d_{sal}^{B}(x_i,D_i^{BS})+d_{sal}^{A}(x_i,D_i^{AS})=\frac{1}{k}\sum_{v=1}^{k}\left(\|x_i-d_{iv}^{bs}\|_2-\|x_i-d_{iv}^{as}\|_2\right) \tag{25}$$

To further illustrate the effect of the saliency weight related to the anomaly, the saliency weights without and with the anomaly-related term are compared, as shown in the last two rows of Figure 3. By comparing them, it is easily found that the anomalies are further strengthened when the anomaly-related saliency weight is considered, which indicates that the saliency weight considering the background and anomaly simultaneously is effective.
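A minimal sketch of Equation (25) with synthetic data (names, set sizes and offsets are our own assumptions) shows the intended behavior: a background-like PUT gets a low (here negative) weight, and an anomaly-like PUT a high one.

```python
import numpy as np

def improved_saliency(x, bg_near, an_near):
    """Eq. (25): mean spectral distance to the k nearest background pixels minus
    mean spectral distance to the k nearest anomaly pixels (no spatial term)."""
    d_b = np.linalg.norm(bg_near - x, axis=1).mean()   # Eq. (23), background term
    d_a = np.linalg.norm(an_near - x, axis=1).mean()   # Eq. (24), anomaly term (negated)
    return d_b - d_a

rng = np.random.default_rng(5)
bg_near = rng.random((6, 25))            # k = 6 nearest background pixels
an_near = rng.random((6, 25)) + 1.5      # k = 6 nearest anomaly pixels
w_bg = improved_saliency(rng.random(25), bg_near, an_near)         # background-like PUT
w_an = improved_saliency(rng.random(25) + 1.5, bg_near, an_near)   # anomaly-like PUT
```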

4.3. Nonlinear Fusion

To better highlight the anomaly and suppress the background, a nonlinear fusion operation is exploited, which is as follows:

$$d_i^{f}=d_{CRUD}(x_i)\left(1-e^{-\rho\,d_{sal}(x_i,D_i^{BS},D_i^{AS})}\right) \tag{26}$$

where $\rho$ denotes the fusion coefficient. In this way, the final detection result, $d^{f}=[d_1^{f},d_2^{f},\ldots,d_N^{f}]$, is obtained by merging the response values of all pixels in the HSI.

5. Experiments and Results

5.1. Datasets and Evaluation Metrics

5.1.1. Datasets

In this section, five hyperspectral datasets, composed of one synthetic dataset and four real datasets, are adopted to verify the effectiveness of the proposed SSUD-ISW detector.
Salinas dataset [21]: A synthetic dataset was obtained by implanting the target into the Salinas dataset, which was captured by the AVIRIS sensor. There are 204 bands in total after removing some corrupted bands for the Salinas dataset. In terms of the pixels in the spatial scope, there are 120 × 120 pixels in each band. Figure 4(I-a,I-b) show the composite image and reference map, respectively.
Texas Coast dataset [43]: The second dataset, which was collected by the AVIRIS sensor, covers the area of the Texas Coast. The size of the Texas Coast is 100 × 100 × 204, and the wavelength spans from 0.45 to 1.35 um. Several buildings are treated as the anomalies. Figure 4(II-a,II-b) give the composite image and reference map, respectively.
Gainesville dataset [6]: The third dataset, which was captured through the AVIRIS sensor, covers the area of Gainesville, FL, USA. Its spatial size is 150 × 150 and the bands number is 102. Some boats floating on the water surface are viewed as the anomalies. Figure 4(III-a,III-b) separately exhibit the composite image and reference map.
San Diego dataset [45]: The fourth dataset, imaged over the San Diego Airport area, was acquired by the AVIRIS sensor. After some noisy bands were removed, 189 bands remain, and the spatial size is 100 × 100. The anomalies are three airplanes. The composite image and reference map are displayed in Figure 4(IV-a,IV-b), respectively.
SpecTIR dataset [52]: The fifth dataset was collected during the SpecTIR hyperspectral airborne Rochester experiment; it contains 180 × 180 pixels and 120 bands. The anomalies are several manmade square fabrics. Figure 4(V-a,V-b) display the composite image and reference map, respectively.

5.1.2. Evaluation Metrics

Four receiver-operating-characteristic (ROC) curves (i.e., a 3D ROC curve and three 2D ROC curves corresponding to (Pd, Pf), (Pd, τ) and (Pf, τ)), eight area-under-the-curve (AUC) values (i.e., AUCdf, AUCdτ, AUCfτ, AUCtd, AUCbs, AUCsnpr, AUCtdbs and AUCodp) and a separability map were adopted to evaluate the effect of the proposed SSUD-ISW detector. Notably, the first three AUC values were computed by means of the above three 2D ROC curves and the remaining five were generated by the first three AUC values:
$$\begin{cases} \mathrm{AUC}_{td} = \mathrm{AUC}_{df} + \mathrm{AUC}_{d\tau} \\ \mathrm{AUC}_{bs} = \mathrm{AUC}_{df} - \mathrm{AUC}_{f\tau} \\ \mathrm{AUC}_{snpr} = \mathrm{AUC}_{d\tau} / \mathrm{AUC}_{f\tau} \\ \mathrm{AUC}_{tdbs} = \mathrm{AUC}_{d\tau} - \mathrm{AUC}_{f\tau} \\ \mathrm{AUC}_{odp} = \mathrm{AUC}_{df} + \mathrm{AUC}_{d\tau} - \mathrm{AUC}_{f\tau} \end{cases}$$
where AUCdf, AUCtdbs and AUCodp are employed to assess the overall performance of a detector; AUCdτ and AUCtd are utilized to assess its detection probability; and the remaining three metrics are used to assess its background-suppression capability. For all metrics except AUCfτ, a higher value indicates a better performance; for AUCfτ, a lower value is better. In addition, for the separability map, a larger distance between the anomaly and background means a better separability effect.
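Since the five derived metrics are simple combinations of the three base AUC values, they can be computed in a few lines. A small sketch (the function and key names are ours, not from the paper):

```python
def derived_auc_metrics(auc_df, auc_dtau, auc_ftau):
    """Derive the five composite AUC metrics from the three base AUC
    values obtained from the 2D ROC curves of (Pd, Pf), (Pd, tau)
    and (Pf, tau)."""
    return {
        "AUC_td":   auc_df + auc_dtau,             # target detectability
        "AUC_bs":   auc_df - auc_ftau,             # background suppression
        "AUC_snpr": auc_dtau / auc_ftau,           # signal-to-noise probability ratio
        "AUC_tdbs": auc_dtau - auc_ftau,
        "AUC_odp":  auc_df + auc_dtau - auc_ftau,  # overall detection probability
    }
```

As a sanity check, these identities reproduce the tabulated results; e.g., an AUCdf of 0.9997 and an AUCfτ of 0.0043 give an AUCbs of 0.9954, matching the SpecTIR values reported in Section 5.5.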

5.2. Experimental Setup

5.2.1. Implementation Details

Our experiments were conducted on a 2.00 GHz machine with an Intel® Core™ i7-9700T CPU and 16 GB of RAM. All comparative detectors were implemented in MATLAB, and the proposed SSUD-ISW detector was implemented in Python. The parameters of all comparative detectors were tuned to achieve their best possible AUCdf values.

5.2.2. Comparative Detectors

Eight comparative detectors were used to prove the superiority of the proposed SSUD-ISW detector. To be concrete, these detectors consist of three classical detectors (i.e., RX [11], CRD [27] and LSMAD [16]) and five state-of-the-art detectors (i.e., CRDBPSW [10], LSDM-MoG [53], RGAE [45], GAED [54] and NJCR [7]).

5.3. Parameter Analysis

There are six parameters to be considered in this article: the superpixel number, ns; the tradeoff parameter, β; the saliency-weight-related parameter, k; the fusion coefficient, ρ; and the union-dictionary-related parameters, kB and kA. Notably, when one parameter is discussed, the others are fixed at their optimal values, as listed in Table 1.
(1) Superpixel number, ns: A group of possible numbers ranging from 100 to 600 at an interval of 100 was configured, considering the computational burden and detection performance, and the corresponding detection results are exhibited in Figure 5a. In Figure 5a, the AUCdf of the proposed SSUD-ISW detector remains high and stable for the SpecTIR dataset as ns varies. In contrast, the AUCdf fluctuates as ns increases for the other datasets, especially for the San Diego dataset. Notably, the proposed SSUD-ISW detector achieves optimal values on most datasets when ns equals 200, except for the Gainesville dataset (for which the optimal ns is 300).
(2) Tradeoff parameter, β: We empirically configured β as {10−6, 10−5, 10−4, 10−3, 10−2, 10−1, 100}, as shown in Figure 5b. As can be seen, the proposed SSUD-ISW detector performs well on both the Texas Coast and SpecTIR datasets as β varies. Note that the AUCdf gradually decreases to varying degrees as β increases for the Salinas and Gainesville datasets, while the opposite occurs for the San Diego dataset. In Figure 5b, the proposed SSUD-ISW detector achieves the best results on the five datasets when β is configured as 10−5, 10−4, 10−4, 10−1 and 10−2, respectively.
(3) Saliency-weight-related parameter, k: Figure 5c shows the parameter analysis regarding k. As can be clearly seen, the AUCdf is very stable as k changes for almost all datasets, indicating that the performance of the proposed SSUD-ISW detector is robust under these possible values. The optimal k value for each dataset is listed in Table 1.
(4) Fusion coefficient, ρ: Several possible settings for ρ were configured, and the variation of the AUCdf on the different datasets is illustrated in Figure 5d. The variation tendency of the curves is relatively obvious for the Salinas and Gainesville datasets, while the others are quite stable. The optimal settings are listed in the fifth row of Table 1.
(5) Union-dictionary-related parameters, kB and kA: Considering that background pixels account for a large proportion of an HSI while anomalies occupy a small one, we configured kB and kA as {5, 10, 15, 20, 25} and {3, 5, 7, 9, 11}, respectively. Figure 6 plots the 3D histograms of the AUCdf of the proposed SSUD-ISW detector over all datasets. In these histograms, the AUCdf fluctuates significantly for most datasets as kB and kA change, indicating that these parameters are vital to the performance of the proposed SSUD-ISW detector. To obtain the optimal results, a large number of experiments were carried out on these datasets, and the optimal parameter values are listed in Table 1.
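The exhaustive search over the (kB, kA) grid described above can be sketched as follows; here `evaluate_auc` is a placeholder of ours standing in for a full SSUD-ISW detection run that returns the AUCdf on a given dataset:

```python
import itertools

def grid_search_kb_ka(evaluate_auc,
                      kB_grid=(5, 10, 15, 20, 25),
                      kA_grid=(3, 5, 7, 9, 11)):
    """Score every (kB, kA) pair and return the best pair with its score.

    evaluate_auc: callable (kB, kA) -> AUC_df; assumed to wrap the whole
    detection pipeline, which is far more expensive in practice.
    """
    scores = {p: evaluate_auc(*p) for p in itertools.product(kB_grid, kA_grid)}
    best = max(scores, key=scores.get)
    return best, scores[best]
```

With 5 × 5 = 25 candidate pairs per dataset, each requiring a full detection run, this search dominates the tuning cost, which is consistent with the "large number of experiments" noted above.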

5.4. Component Analysis

To prove the effectiveness of the proposed components, several experiments were conducted, and the experimental results are discussed in this subsection.

5.4.1. Effectiveness Analysis of Spatial–Spectral-Based Union Dictionary

To validate the effectiveness of the spatial–spectral-based union dictionary, two union dictionary construction strategies customized for CR (i.e., strategy 1, developed in NJCR [7], and strategy 2, proposed in KNUD [8]), which only use spatial or spectral information, are employed for comparison, as illustrated in Figure 7. As can be seen in Figure 7, compared with the two existing strategies (i.e., “NJCR” and “KNUD”), the proposed spatial–spectral-based union dictionary achieves a better performance on all datasets. Although the two existing strategies perform well on the SpecTIR dataset, their results are still slightly below those of the proposed spatial–spectral-based union dictionary. As a whole, the effectiveness of the spatial–spectral-based union dictionary is verified.

5.4.2. Effectiveness Analysis of Improved Saliency Weight

To verify the effectiveness of the improved saliency weight, we compared it with the context-aware saliency weight, as shown in Figure 8. Obviously, the improved saliency weight significantly increases the AUCdf relative to the context-aware saliency weight on the Salinas, Gainesville and San Diego datasets, and yields a slight improvement on the Texas Coast and SpecTIR datasets. In summary, the improved saliency weight improves the detection performance on all datasets, thus proving its effectiveness.

5.5. Detection Performance

Eight comparative detectors are compared with the proposed SSUD-ISW detector according to the evaluation metrics introduced in Section 5.1.2.
(1) Salinas Dataset
Figure 9 displays the detection maps generated by the various detectors. Clearly, the proposed SSUD-ISW detector locates the anomalies completely, with an acceptable false alarm rate. In contrast, almost all comparative detectors, except for the RGAE detector, fail to comprehensively locate the anomalies. To quantitatively evaluate the detection performance of the proposed SSUD-ISW detector, four ROC curves are given in Figure 10; in these curves, the proposed SSUD-ISW detector performs well relative to the comparative detectors. Correspondingly, the AUC values of the various detectors are listed in Table 2. The proposed SSUD-ISW detector obtains the optimal AUCdf value (i.e., 0.9988) among all detectors. Moreover, it shows a competitive performance in terms of the AUCdτ, AUCtd, AUCbs, AUCtdbs and AUCodp values compared against most comparative detectors. Although its AUCfτ and AUCsnpr values are not ideal, they are still acceptable relative to most comparative detectors. Among the comparative detectors, the CRDBPSW and RGAE detectors perform well on several AUC metrics, while their remaining AUC values are rather poor. As a whole, the proposed SSUD-ISW detector delivers a promising performance compared with the other detectors.
(2) Texas Coast Dataset
The detection maps of the various detectors on the Texas Coast dataset are visualized in Figure 11. Clearly, the localization performance of the proposed SSUD-ISW detector is excellent relative to that of the comparative detectors. Figure 12 shows the four ROC curves of the various detectors on the Texas Coast dataset. In terms of these ROC curves, the proposed SSUD-ISW detector performs excellently compared against the comparative detectors, especially for the 2D ROC curve of (Pd, Pf). The AUC values of the various detectors on the Texas Coast dataset are listed in Table 3. Evidently, the proposed SSUD-ISW detector obtains the optimal values for AUCdf, AUCdτ, AUCtd, AUCsnpr, AUCtdbs and AUCodp, which are 0.9986, 0.6233, 1.6219, 32.9678, 0.6044 and 1.6030, respectively. In addition, its AUCfτ and AUCbs are slightly inferior to the optimal values obtained by the CRDBPSW detector. In summary, the performance of the proposed SSUD-ISW detector on the Texas Coast dataset is preeminent compared with the comparative detectors.
(3) Gainesville Dataset
Figure 13 visualizes the detection maps of the various detectors on the Gainesville dataset. Compared with the comparative detectors, the proposed SSUD-ISW detector fully identifies the anomalies with only slight false alarms. The four ROC curves of the various detectors on the Gainesville dataset are plotted in Figure 14. Obviously, the overall performance of the proposed SSUD-ISW detector is superior to that of the comparative detectors to a large extent. Table 4 lists the eight AUC values of the various detectors on the Gainesville dataset. The proposed SSUD-ISW detector achieves the optimal values for seven of them (all except AUCfτ), and these values clearly outperform the second-best ones (i.e., 0.9939 vs. 0.9833 for AUCdf, 0.3866 vs. 0.3190 for AUCdτ, 1.3805 vs. 1.2770 for AUCtd, 0.9765 vs. 0.9604 for AUCbs, 22.2538 vs. 12.0465 for AUCsnpr, 0.3692 vs. 0.2531 for AUCtdbs and 1.3631 vs. 1.2365 for AUCodp). Additionally, the proposed SSUD-ISW detector ranks second among all detectors with respect to the AUCfτ value. To sum up, the performance of the proposed SSUD-ISW detector on the Gainesville dataset is outstanding relative to the comparative detectors.
(4) San Diego Dataset
The detection maps of the various detectors on the San Diego dataset are illustrated in Figure 15. Clearly, the three airplanes are well identified by the proposed SSUD-ISW detector, with a considerable background suppression effect. For the comparative detectors, the background suppression is unsatisfactory, especially for the LSDM-MoG detector. Figure 16 displays the ROC curves of the various detectors on the San Diego dataset; from these curves, it is easily seen that the proposed SSUD-ISW detector is in the lead. Accordingly, the AUC values of the various detectors are listed in Table 5. The optimal values of 0.9945, 0.0053, 0.9892 and 49.9891 are obtained by the proposed SSUD-ISW detector for AUCdf, AUCfτ, AUCbs and AUCsnpr, respectively. Additionally, the proposed SSUD-ISW detector achieves the second-best values for the remaining metrics: 0.2637, 1.2582, 0.2584 and 1.2529 for AUCdτ, AUCtd, AUCtdbs and AUCodp, respectively. Notably, the NJCR detector performs quite well for AUCdτ, AUCtd, AUCtdbs and AUCodp; however, its other AUC values are considerably worse than those of the proposed SSUD-ISW detector. In short, the proposed SSUD-ISW detector achieves a competitive performance on the San Diego dataset relative to the comparative detectors.
(5) SpecTIR Dataset
Figure 17 shows the detection maps of the various detectors on the SpecTIR dataset. Clearly, the proposed SSUD-ISW detector locates the anomalies well and keeps the number of false alarms low relative to the comparative detectors. The ROC curves of the various detectors on the SpecTIR dataset are plotted in Figure 18 and show a satisfactory performance for the proposed SSUD-ISW detector compared against the comparative detectors. Correspondingly, Table 6 lists the AUC values of the various detectors on the SpecTIR dataset. The proposed SSUD-ISW detector obtains the best values for AUCdf, AUCdτ, AUCtd, AUCsnpr, AUCtdbs and AUCodp, which are 0.9997, 0.4797, 1.4795, 110.9198, 0.4754 and 1.4752, respectively. Moreover, these values are evidently higher than the second-best ones among all comparative detectors, which are 0.9991, 0.4003, 1.3993, 91.6075, 0.3666 and 1.3657, respectively. The proposed SSUD-ISW detector also ranks second among all detectors in terms of AUCfτ and AUCbs, which are 0.0043 and 0.9954, respectively. As a whole, for the SpecTIR dataset, the proposed SSUD-ISW detector achieves a competent detection performance with respect to the comparative detectors.
In addition, the separability maps regarding the background and anomaly are exhibited in Figure 19. As shown in Figure 19, the proposed SSUD-ISW detector separates the background and anomaly well compared with the comparative detectors on all datasets. Although the separability effect of the NJCR detector is also decent on all datasets, its separability degree is obviously lower than that of the proposed SSUD-ISW detector. Worse still, most detectors fail to effectively separate the background and anomaly. In summary, among these detectors, the proposed SSUD-ISW detector achieves the best separability effect.
Additionally, Table 7 lists the average running times of the aforementioned detectors. Clearly, the computational cost of the RX detector is the lowest among these detectors. Although the detection time of the proposed SSUD-ISW detector exceeds that of the comparative detectors, its detection performance is excellent in comparison. To sum up, the overall performance of the proposed SSUD-ISW detector is acceptable.

6. Discussion

To better analyze the performance of the proposed SSUD-ISW detector and the comparative detectors, a more in-depth analysis is given as follows.
As stated in Section 5.5, selecting the reasonable parameters analyzed in Section 5.3 enables the proposed SSUD-ISW detector to achieve the desired detection performance. The key to its performance is constructing a representative and pure background set and anomaly set, since both sets supply the pixels for the subsequent union dictionary and improved saliency weight. In other words, the background set should cover all background categories while excluding anomalous pixels as far as possible, and the anomaly set should cover all anomaly categories while rejecting contamination by background pixels. In addition, the other parameters analyzed in Section 5.3 also need to be tuned to a certain degree.
Unlike the proposed SSUD-ISW detector, the comparative detectors could not detect the anomalies and suppress the background well on all experimental datasets. The comparative detectors can be roughly divided into three types: statistical-theory-based detectors (i.e., RX); representation-based detectors, including collaborative representation (CRD, CRDBPSW and NJCR) and low-rank and sparse representation (LSMAD and LSDM-MoG); and deep-learning-based detectors (RGAE and GAED). The statistical-theory-based detector hypothesizes that the background obeys a multivariate Gaussian distribution; thus, it only performs well in simple scenarios, while its performance degrades in complicated ones. For the representation-based detectors, as for the proposed SSUD-ISW detector, constructing a representative and pure dictionary is vital to a better detection performance. Deep-learning-based detectors aim to reconstruct the background well while reconstructing the anomalies poorly; accordingly, their key is to strengthen the reconstruction of the background and suppress the reconstruction of the anomalies.
For the proposed SSUD-ISW detector, a strength is that it identifies the anomalies and suppresses the background well on most datasets relative to the comparative detectors; its weakness is that it is time-consuming, so it cannot be directly used in real scenarios. The main computational burden lies in the pixel-by-pixel processing. Therefore, in future work, processing the hyperspectral image in one pass may be a plausible way to reduce the complexity of the proposed SSUD-ISW detector. If possible, we also hope to deploy the simplified model on a mobile device. In addition, it is worth noting that the proposed SSUD-ISW detector simultaneously uses the spatial and spectral information to improve the union dictionary, which provides a new way of constructing dictionaries in the field of hyperspectral anomaly detection.

7. Conclusions

In this article, a HAD method using a spatial–spectral-based union dictionary and improved saliency weight is proposed. Unlike the existing union dictionary construction strategies customized for CR that only use spatial or spectral information, we propose a spatial–spectral-based union dictionary construction strategy, which simultaneously considers the valuable spatial and spectral information to construct a robust union dictionary, so as to improve the representation effect of CR. In addition, inspired by human visual attention, an improved saliency weight is proposed to further enhance the performance of the detector. To verify the effectiveness and superiority of the proposed SSUD-ISW detector, experiments were conducted on five datasets with different spectral characteristics. The effectiveness of the spatial–spectral-based union dictionary and improved saliency weight was validated by a component analysis using the AUCdf values on all datasets. Similarly, the superiority of the proposed SSUD-ISW detector was proved by comparing it with classical and state-of-the-art detectors using eight metrics (i.e., AUCdf, AUCdτ, AUCfτ, AUCtd, AUCbs, AUCsnpr, AUCtdbs and AUCodp) on all datasets.

Author Contributions

S.L., M.Z. and H.W. provided the ideas; S.L., X.C., S.Z. and L.S. implemented this algorithm; S.L. wrote this article; M.Z. and H.W. revised this article. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China (No. 12003018 and No. 61871302), the Fundamental Research Funds for the Central Universities (No. XJS191305), the Innovation Fund of Xidian University (No. YJS2217) and the Innovation Capability Support Program of Shaanxi (No. 2022TD-37).

Data Availability Statement

The datasets used in this article are available at http://xudongkang.weebly.com/data-sets.html (accessed on 1 June 2023).

Acknowledgments

Thanks to the authors who provided experimental datasets and comparative detectors.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Guo, T.; Luo, F.; Fang, L.; Zhang, B. Meta-Pixel-Driven Embeddable Discriminative Target and Background Dictionary Pair Learning for Hyperspectral Target Detection. Remote Sens. 2022, 14, 481.
  2. Wang, J.; Li, L.; Liu, Y.; Hu, J.; Xiao, X.; Liu, B. AI-TFNet: Active Inference Transfer Convolutional Fusion Network for Hyperspectral Image Classification. Remote Sens. 2023, 15, 1292.
  3. Chang, Y.-L.; Tan, T.-H.; Lee, W.-H.; Chang, L.; Chen, Y.-N.; Fan, K.-C.; Alkhaleefah, M. Consolidated Convolutional Neural Network for Hyperspectral Image Classification. Remote Sens. 2022, 14, 1571.
  4. Lin, S.; Zhang, M.; Cheng, X.; Zhou, K.; Zhao, S.; Wang, H. Dual Collaborative Constraints Regularized Low Rank and Sparse Representation via Robust Dictionaries Construction for Hyperspectral Anomaly Detection. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2022, 16, 2009–2024.
  5. Lin, S.; Zhang, M.; Cheng, X.; Zhou, K.; Zhao, S.; Wang, H. Hyperspectral Anomaly Detection via Sparse Representation and Collaborative Representation. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2022, 16, 946–961.
  6. Cheng, X.; Zhang, M.; Lin, S.; Zhou, K.; Wang, L.; Wang, H. Multiscale Superpixel Guided Discriminative Forest for Hyperspectral Anomaly Detection. Remote Sens. 2022, 14, 4828.
  7. Chang, S.; Ghamisi, P. Nonnegative-Constrained Joint Collaborative Representation with Union Dictionary for Hyperspectral Anomaly Detection. IEEE Trans. Geosci. Remote Sens. 2022, 60, 1–13.
  8. Gao, Y.; Gu, J.; Cheng, T.; Wang, B. Kernel-Based Nonlinear Anomaly Detection via Union Dictionary for Hyperspectral Images. IEEE Trans. Geosci. Remote Sens. 2022, 60, 1–15.
  9. Goferman, S.; Zelnik-Manor, L.; Tal, A. Context-Aware Saliency Detection. IEEE Trans. Pattern Anal. Mach. Intell. 2012, 34, 1915–1926.
  10. Hou, Z.; Li, W.; Tao, R.; Ma, P.; Shi, W. Collaborative Representation with Background Purification and Saliency Weight for Hyperspectral Anomaly Detection. Sci. China Inf. Sci. 2022, 65, 112305.
  11. Reed, I.S.; Yu, X. Adaptive Multiple-Band CFAR Detection of an Optical Pattern with Unknown Spectral Distribution. IEEE Trans. Acoust. Speech Signal Process. 1990, 38, 1760–1770.
  12. Molero, J.M.; Garzon, E.M.; Garcia, I.; Plaza, A. Analysis and Optimizations of Global and Local Versions of the RX Algorithm for Anomaly Detection in Hyperspectral Data. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2013, 6, 801–814.
  13. Du, B.; Zhang, L. Random-Selection-Based Anomaly Detector for Hyperspectral Imagery. IEEE Trans. Geosci. Remote Sens. 2011, 49, 1578–1589.
  14. Tao, R.; Zhao, X.; Li, W.; Li, H.-C.; Du, Q. Hyperspectral Anomaly Detection by Fractional Fourier Entropy. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2019, 12, 4920–4929.
  15. Wang, S.; Wang, X.; Zhang, L.; Zhong, Y. Auto-AD: Autonomous Hyperspectral Anomaly Detection Network Based on Fully Convolutional Autoencoder. IEEE Trans. Geosci. Remote Sens. 2022, 60, 1–14.
  16. Zhang, Y.; Du, B.; Zhang, L.; Wang, S. A Low-Rank and Sparse Matrix Decomposition-Based Mahalanobis Distance Method for Hyperspectral Anomaly Detection. IEEE Trans. Geosci. Remote Sens. 2016, 54, 1376–1389.
  17. Xu, Y.; Wu, Z.; Li, J.; Plaza, A.; Wei, Z. Anomaly Detection in Hyperspectral Images Based on Low-Rank and Sparse Representation. IEEE Trans. Geosci. Remote Sens. 2016, 54, 1990–2000.
  18. Huyan, N.; Zhang, X.; Zhou, H.; Jiao, L. Hyperspectral Anomaly Detection via Background and Potential Anomaly Dictionaries Construction. IEEE Trans. Geosci. Remote Sens. 2019, 57, 2263–2276.
  19. Su, H.; Wu, Z.; Zhu, A.X.; Du, Q. Low Rank and Collaborative Representation for Hyperspectral Anomaly Detection via Robust Dictionary Construction. ISPRS J. Photogramm. Remote Sens. 2020, 169, 195–211.
  20. Lin, S.; Zhang, M.; Cheng, X.; Wang, L.; Xu, M.; Wang, H. Hyperspectral Anomaly Detection via Dual Dictionaries Construction Guided by Two-Stage Complementary Decision. Remote Sens. 2022, 14, 1784.
  21. Feng, R.; Li, H.; Wang, L.; Zhong, Y.; Zhang, L.; Zeng, T. Local Spatial Constraint and Total Variation for Hyperspectral Anomaly Detection. IEEE Trans. Geosci. Remote Sens. 2021, 60, 1–16.
  22. Tan, K.; Hou, Z.; Ma, D.; Chen, Y.; Du, Q. Anomaly Detection in Hyperspectral Imagery Based on Low-Rank Representation Incorporating a Spatial Constraint. Remote Sens. 2019, 11, 1578.
  23. Li, J.; Zhang, H.; Zhang, L.; Ma, L. Hyperspectral Anomaly Detection by the Use of Background Joint Sparse Representation. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2015, 8, 2523–2533.
  24. Ma, D.; Yuan, Y.; Wang, Q. Hyperspectral Anomaly Detection via Discriminative Feature Learning with Multiple-Dictionary Sparse Representation. Remote Sens. 2018, 10, 745.
  25. Zhao, G.; Li, F.; Zhang, X.; Laakso, K.; Chan, J.C.-W. Archetypal Analysis and Structured Sparse Representation for Hyperspectral Anomaly Detection. Remote Sens. 2021, 13, 4102.
  26. Ling, Q.; Guo, Y.; Lin, Z.; An, W. A Constrained Sparse Representation Model for Hyperspectral Anomaly Detection. IEEE Trans. Geosci. Remote Sens. 2019, 57, 2358–2371.
  27. Li, W.; Du, Q. Collaborative Representation for Hyperspectral Anomaly Detection. IEEE Trans. Geosci. Remote Sens. 2015, 53, 1463–1474.
  28. Wu, Z.; Su, H.; Tao, X.; Han, L.; Paoletti, M.E.; Haut, J.M.; Plaza, J.; Plaza, A. Hyperspectral Anomaly Detection with Relaxed Collaborative Representation. IEEE Trans. Geosci. Remote Sens. 2022, 60, 1–17.
  29. Su, H.; Wu, Z.; Du, Q.; Du, P. Hyperspectral Anomaly Detection Using Collaborative Representation with Outlier Removal. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2018, 11, 5029–5038.
  30. Vafadar, M.; Ghassemian, H. Anomaly Detection of Hyperspectral Imagery Using Modified Collaborative Representation. IEEE Geosci. Remote Sens. Lett. 2018, 15, 577–581.
  31. Feng, S.; Tang, S.; Zhao, C.; Cui, Y. A Hyperspectral Anomaly Detection Method Based on Low-Rank and Sparse Decomposition with Density Peak Guided Collaborative Representation. IEEE Trans. Geosci. Remote Sens. 2021, 60, 1–13.
  32. Qian, X.; Cheng, X.; Cheng, G.; Yao, X.; Jiang, L. Two-Stream Encoder GAN with Progressive Training for Co-Saliency Detection. IEEE Signal Process. Lett. 2021, 28, 180–184.
  33. Qian, X.; Zeng, Y.; Wang, W.; Zhang, Q. Co-Saliency Detection Guided by Group Weakly Supervised Learning. IEEE Trans. Multimed. 2022, 25, 1810–1818.
  34. Qian, X.; Lin, S.; Cheng, G.; Yao, X.; Ren, H.; Wang, W. Object Detection in Remote Sensing Images Based on Improved Bounding Box Regression and Multi-Level Features Fusion. Remote Sens. 2020, 12, 143.
  35. Qian, X.; Huo, Y.; Cheng, G.; Gao, C.; Yao, X.; Wang, W. Mining High-Quality Pseudo Instance Soft Labels for Weakly Supervised Object Detection in Remote Sensing Images. IEEE Trans. Geosci. Remote Sens. 2023, 61, 1–15.
  36. Huo, Y.; Qian, X.; Li, C.; Wang, W. Multiple Instance Complementary Detection and Difficulty Evaluation for Weakly Supervised Object Detection in Remote Sensing Images. IEEE Geosci. Remote Sens. Lett. 2023, 20, 1–5.
  37. Qian, X.; Huo, Y.; Cheng, G.; Yao, X.; Li, K.; Ren, H.; Wang, W. Incorporating the Completeness and Difficulty of Proposals into Weakly Supervised Object Detection in Remote Sensing Images. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2022, 15, 1902–1911.
  38. Li, S.; Zhang, K.; Duan, P.; Kang, X. Hyperspectral Anomaly Detection with Kernel Isolation Forest. IEEE Trans. Geosci. Remote Sens. 2020, 58, 319–329.
  39. Li, W.; Wu, G.; Du, Q. Transferred Deep Learning for Anomaly Detection in Hyperspectral Imagery. IEEE Geosci. Remote Sens. Lett. 2017, 14, 597–601.
  40. Song, S.; Zhou, H.; Yang, Y.; Song, J. Hyperspectral Anomaly Detection via Convolutional Neural Network and Low Rank with Density-Based Clustering. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2019, 12, 3637–3649.
  41. Zhang, L.; Cheng, B. A Stacked Autoencoders-Based Adaptive Subspace Model for Hyperspectral Anomaly Detection. Infrared Phys. Technol. 2019, 96, 52–60.
  42. Xie, W.; Lei, J.; Liu, B.; Li, Y.; Jia, X. Spectral Constraint Adversarial Autoencoders Approach to Feature Representation in Hyperspectral Anomaly Detection. Neural Netw. 2019, 119, 222–234.
  43. Jiang, T.; Li, Y.; Xie, W.; Du, Q. Discriminative Reconstruction Constrained Generative Adversarial Network for Hyperspectral Anomaly Detection. IEEE Trans. Geosci. Remote Sens. 2020, 58, 4666–4679.
  44. Lu, X.; Zhang, W.; Huang, J. Exploiting Embedding Manifold of Autoencoders for Hyperspectral Anomaly Detection. IEEE Trans. Geosci. Remote Sens. 2020, 58, 1527–1537.
  45. Fan, G.; Ma, Y.; Mei, X.; Fan, F.; Huang, J.; Ma, J. Hyperspectral Anomaly Detection with Robust Graph Autoencoders. IEEE Trans. Geosci. Remote Sens. 2021, 60, 1–14.
  46. Lei, J.; Xie, W.; Yang, J.; Li, Y.; Chang, C.I. Spectral–Spatial Feature Extraction for Hyperspectral Anomaly Detection. IEEE Trans. Geosci. Remote Sens. 2019, 57, 8131–8143.
  47. Xie, W.; Li, Y.; Lei, J.; Yang, J.; Chang, C.I.; Li, Z. Hyperspectral Band Selection for Spectral–Spatial Anomaly Detection. IEEE Trans. Geosci. Remote Sens. 2020, 58, 3426–3436.
  48. Huyan, N.; Zhang, X.; Quan, D.; Chanussot, J.; Jiao, L. AUD-Net: A Unified Deep Detector for Multiple Hyperspectral Image Anomaly Detection via Relation and Few-Shot Learning. IEEE Trans. Neural Netw. Learn. Syst. 2022.
  49. Liu, Y.; Xie, W.; Li, Y.; Li, Z.; Du, Q. Dual-Frequency Autoencoder for Anomaly Detection in Transformed Hyperspectral Imagery. IEEE Trans. Geosci. Remote Sens. 2022, 60, 1–13.
  50. Tu, B.; Li, N.; Liao, Z.; Ou, X.; Zhang, G. Hyperspectral Anomaly Detection via Spatial Density Background Purification. Remote Sens. 2019, 11, 2618.
  51. Kang, X.; Zhang, X.; Li, S.; Li, K.; Li, J.; Benediktsson, J.A. Hyperspectral Anomaly Detection with Attribute and Edge-Preserving Filters. IEEE Trans. Geosci. Remote Sens. 2017, 55, 5600–5611.
  52. Cheng, X.; Zhang, M.; Lin, S.; Zhou, K.; Zhao, S.; Wang, H. Two-Stream Isolation Forest Based on Deep Features for Hyperspectral Anomaly Detection. IEEE Geosci. Remote Sens. Lett. 2023, 20, 1–5.
  53. Li, L.; Li, W.; Du, Q.; Tao, R. Low-Rank and Sparse Decomposition with Mixture of Gaussian for Hyperspectral Anomaly Detection. IEEE Trans. Cybern. 2020, 51, 4363–4372.
  54. Xiang, P.; Ali, S.; Jung, S.K.; Zhou, H. Hyperspectral Anomaly Detection with Guided Autoencoder. IEEE Trans. Geosci. Remote Sens. 2022, 60, 1–18.
Figure 1. The schematic of the proposed SSUD-ISW detector using the Texas Coast dataset (see Section 5.1.1). Note that x_i represents the ith testing pixel; D_i^B, D_i^A and D_i stand for the background dictionary, anomaly dictionary and union dictionary of the ith testing pixel, respectively; α_i^B, α_i^A and α_i indicate the representation coefficients corresponding to the background dictionary, anomaly dictionary and union dictionary, respectively; and e_i refers to the residual vector of the ith testing pixel.
Figure 2. The schematic of the construction of the background set and anomaly set using the Texas Coast dataset (see Section 5.1.1).
Figure 3. Visualization of the saliency weights under different situations for all datasets (see Section 5.1.1). Rows (I–III) show the saliency weights obtained by Equation (5), Equation (23) and Equation (25), respectively. Columns (a–e) indicate the Salinas, Texas Coast, Gainesville, San Diego and SpecTIR datasets, respectively.
Figure 4. Visualization of the hyperspectral datasets: (a) color composites and (b) reference map. (I) Salinas, (II) Texas Coast, (III) Gainesville, (IV) San Diego and (V) SpecTIR.
Figure 5. Tuning of the parameters over all datasets.
Figure 6. Tuning of the parameters k_B and k_A over all datasets.
Figure 7. Histograms of AUC_df for the detection results obtained using different union dictionary construction strategies over all datasets.
Figure 8. Histograms of AUC_df for the detection results obtained using different saliency weights over all datasets.
Figure 9. Detection maps of various detectors on the Salinas dataset.
Figure 10. ROC curves of various detectors on the Salinas dataset. (a–d) The four ROC curves introduced in Section 5.1.2.
Figure 11. Detection maps of various detectors on the Texas Coast dataset.
Figure 12. ROC curves of various detectors on the Texas Coast dataset. (a–d) The four ROC curves introduced in Section 5.1.2.
Figure 13. Detection maps of various detectors on the Gainesville dataset.
Figure 14. ROC curves of various detectors on the Gainesville dataset. (a–d) The four ROC curves introduced in Section 5.1.2.
Figure 15. Detection maps of various detectors on the San Diego dataset.
Figure 16. ROC curves of various detectors on the San Diego dataset. (a–d) The four ROC curves introduced in Section 5.1.2.
Figure 17. Detection maps of various detectors on the SpecTIR dataset.
Figure 18. ROC curves of various detectors on the SpecTIR dataset. (a–d) The four ROC curves introduced in Section 5.1.2.
Figure 19. Separability maps regarding the background and anomaly for various detectors over all datasets. The letters "a"–"i" on the horizontal axis denote RX, CRD, CRDBPSW, LSMAD, LSDM-MoG, RGAE, GAED, NJCR and the proposed SSUD-ISW detector, respectively.
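The ROC curves and the AUC_df values reported in Tables 2–6 score each detector's response map against the binary reference map. As an illustrative, library-free sketch (not the authors' implementation), AUC_(D,F) over flattened scores and labels equals the Mann–Whitney statistic:

```python
def auc_df(scores, labels):
    """AUC of the (P_D, P_F) ROC curve via the Mann-Whitney statistic:
    the probability that a randomly chosen anomaly pixel receives a
    higher detector response than a randomly chosen background pixel
    (ties count 0.5). O(n^2) pairwise form, fine for small examples."""
    pos = [s for s, l in zip(scores, labels) if l == 1]  # anomaly pixels
    neg = [s for s, l in zip(scores, labels) if l == 0]  # background pixels
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Perfectly separable toy response map -> AUC_df = 1.0
print(auc_df([0.9, 0.8, 0.1, 0.2], [1, 1, 0, 0]))  # 1.0
```

A detector whose responses for anomalies and background overlap completely scores 0.5, which is why the separability maps in Figure 19 and the AUC_df columns move together.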
Table 1. Optimal parameter configuration for all datasets.

Parameter | Salinas | Texas Coast | Gainesville | San Diego | SpecTIR
n_s | 200 | 200 | 300 | 200 | 200
β | 10^−5 | 10^−4 | 10^−4 | 10^−1 | 10^−2
k | 5 | 5 | 5 | 5 | 3
ρ | 15 | 5 | 15 | 1 | 10
k_B | 10 | 20 | 15 | 15 | 15
k_A | 7 | 7 | 7 | 7 | 7
Table 2. Eight AUC values of various detectors on the Salinas dataset.

Detector | AUC_df | AUC_dτ | AUC_fτ | AUC_td | AUC_bs | AUC_snpr | AUC_tdbs | AUC_odp
RX | 0.8073 | 0.2143 | 0.0314 | 1.0216 | 0.7759 | 6.8318 | 0.1829 | 0.9903
CRD | 0.9635 | 0.3012 | 0.0069 | 1.2647 | 0.9566 | 43.4025 | 0.2942 | 1.2577
CRDBPSW | 0.9932 | 0.3118 | 0.0008 | 1.3050 | 0.9925 | 408.3316 | 0.3110 | 1.3042
LSMAD | 0.9375 | 0.3301 | 0.0123 | 1.2675 | 0.9251 | 26.7865 | 0.3178 | 1.2552
LSDM-MoG | 0.9938 | 0.5274 | 0.0168 | 1.5212 | 0.9771 | 31.4562 | 0.5106 | 1.5044
RGAE | 0.9862 | 0.6949 | 0.0447 | 1.6811 | 0.9416 | 15.5514 | 0.6502 | 1.6364
GAED | 0.9528 | 0.2318 | 0.0023 | 1.1846 | 0.9505 | 100.1417 | 0.2295 | 1.1823
NJCR | 0.9892 | 0.5382 | 0.0347 | 1.5275 | 0.9546 | 15.5241 | 0.5036 | 1.4928
SSUD-ISW | 0.9988 | 0.5796 | 0.0089 | 1.5784 | 0.9899 | 65.2213 | 0.5707 | 1.5695
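The five rightmost columns in Tables 2–6 are composites of the first three. A minimal sketch, assuming the standard 3D-ROC definitions (the tables report inputs rounded to four decimals, so recomputed values can differ in the last digit):

```python
def derived_aucs(auc_df, auc_dtau, auc_ftau):
    """Derive the five composite 3D-ROC metrics from the three base AUCs.

    Assumes the standard definitions; illustrative only."""
    return {
        "AUC_td":   auc_df + auc_dtau,              # target detectability
        "AUC_bs":   auc_df - auc_ftau,              # background suppressibility
        "AUC_snpr": auc_dtau / auc_ftau,            # signal-to-noise probability ratio
        "AUC_tdbs": auc_dtau - auc_ftau,            # detectability minus false response
        "AUC_odp":  auc_df + auc_dtau - auc_ftau,   # overall detection probability
    }

# RX on Salinas (Table 2): base AUCs 0.8073, 0.2143, 0.0314
m = derived_aucs(0.8073, 0.2143, 0.0314)
```

With the RX row above, this reproduces AUC_td = 1.0216, AUC_bs = 0.7759 and AUC_tdbs = 0.1829 exactly, and AUC_snpr and AUC_odp up to rounding of the four-decimal inputs.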
Table 3. Eight AUC values of various detectors on the Texas Coast dataset.

Detector | AUC_df | AUC_dτ | AUC_fτ | AUC_td | AUC_bs | AUC_snpr | AUC_tdbs | AUC_odp
RX | 0.9907 | 0.3143 | 0.0556 | 1.3049 | 0.9351 | 5.6570 | 0.2587 | 1.2494
CRD | 0.9796 | 0.3495 | 0.0480 | 1.3291 | 0.9317 | 7.2853 | 0.3015 | 1.2811
CRDBPSW | 0.9956 | 0.2379 | 0.0090 | 1.2335 | 0.9867 | 26.4610 | 0.2289 | 1.2245
LSMAD | 0.9928 | 0.3210 | 0.0311 | 1.3138 | 0.9617 | 10.3150 | 0.2899 | 1.2827
LSDM-MoG | 0.9913 | 0.5335 | 0.1277 | 1.5248 | 0.8636 | 4.1789 | 0.4058 | 1.3971
RGAE | 0.9821 | 0.3757 | 0.0169 | 1.3578 | 0.9653 | 22.2921 | 0.3588 | 1.3410
GAED | 0.9567 | 0.3700 | 0.0546 | 1.3267 | 0.9020 | 6.7725 | 0.3154 | 1.2720
NJCR | 0.9884 | 0.5965 | 0.0606 | 1.5849 | 0.9278 | 9.8510 | 0.5360 | 1.5243
SSUD-ISW | 0.9986 | 0.6233 | 0.0189 | 1.6219 | 0.9797 | 32.9678 | 0.6044 | 1.6030
Table 4. Eight AUC values of various detectors on the Gainesville dataset.

Detector | AUC_df | AUC_dτ | AUC_fτ | AUC_td | AUC_bs | AUC_snpr | AUC_tdbs | AUC_odp
RX | 0.9597 | 0.1665 | 0.0229 | 1.1262 | 0.9367 | 7.2663 | 0.1436 | 1.1033
CRD | 0.9697 | 0.1940 | 0.0287 | 1.1636 | 0.9410 | 6.7610 | 0.1653 | 1.1349
CRDBPSW | 0.9522 | 0.1041 | 0.0149 | 1.0563 | 0.9374 | 7.0052 | 0.0892 | 1.0414
LSMAD | 0.9645 | 0.1312 | 0.0258 | 1.0957 | 0.9386 | 5.0812 | 0.1054 | 1.0699
LSDM-MoG | 0.9438 | 0.1742 | 0.0619 | 1.1180 | 0.8819 | 2.8132 | 0.1123 | 1.0561
RGAE | 0.8177 | 0.1010 | 0.0382 | 0.9187 | 0.7795 | 2.6441 | 0.0628 | 0.8805
GAED | 0.9833 | 0.2760 | 0.0229 | 1.2594 | 0.9604 | 12.0465 | 0.2531 | 1.2365
NJCR | 0.9580 | 0.3190 | 0.0722 | 1.2770 | 0.8858 | 4.4189 | 0.2468 | 1.2048
SSUD-ISW | 0.9939 | 0.3866 | 0.0174 | 1.3805 | 0.9765 | 22.2538 | 0.3692 | 1.3631
Table 5. Eight AUC values of various detectors on the San Diego dataset.

Detector | AUC_df | AUC_dτ | AUC_fτ | AUC_td | AUC_bs | AUC_snpr | AUC_tdbs | AUC_odp
RX | 0.9403 | 0.1778 | 0.0589 | 1.1181 | 0.8814 | 3.0176 | 0.1189 | 1.0592
CRD | 0.9412 | 0.0911 | 0.0214 | 1.0323 | 0.9199 | 4.2658 | 0.0698 | 1.0110
CRDBPSW | 0.9862 | 0.2285 | 0.0191 | 1.2147 | 0.9672 | 11.9951 | 0.2095 | 1.1957
LSMAD | 0.9701 | 0.1608 | 0.0275 | 1.1309 | 0.9426 | 5.8479 | 0.1333 | 1.1034
LSDM-MoG | 0.9339 | 0.2381 | 0.1365 | 1.1720 | 0.7974 | 1.7441 | 0.1016 | 1.0355
RGAE | 0.9919 | 0.1807 | 0.0075 | 1.1727 | 0.9845 | 24.2040 | 0.1733 | 1.1652
GAED | 0.9907 | 0.2337 | 0.0079 | 1.2244 | 0.9828 | 29.5755 | 0.2258 | 1.2165
NJCR | 0.9736 | 0.3807 | 0.0692 | 1.3542 | 0.9044 | 5.5001 | 0.3114 | 1.2850
SSUD-ISW | 0.9945 | 0.2637 | 0.0053 | 1.2582 | 0.9892 | 49.9891 | 0.2584 | 1.2529
Table 6. Eight AUC values of various detectors on the SpecTIR dataset.

Detector | AUC_df | AUC_dτ | AUC_fτ | AUC_td | AUC_bs | AUC_snpr | AUC_tdbs | AUC_odp
RX | 0.9914 | 0.3683 | 0.0249 | 1.3597 | 0.9665 | 14.8183 | 0.3434 | 1.3348
CRD | 0.9920 | 0.1743 | 0.0159 | 1.1662 | 0.9761 | 10.9570 | 0.1584 | 1.1503
CRDBPSW | 0.9991 | 0.2745 | 0.0030 | 1.2736 | 0.9961 | 91.6075 | 0.2715 | 1.2706
LSMAD | 0.9984 | 0.3042 | 0.0085 | 1.3026 | 0.9899 | 35.6655 | 0.2957 | 1.2941
LSDM-MoG | 0.9991 | 0.4003 | 0.0337 | 1.3993 | 0.9654 | 11.8848 | 0.3666 | 1.3657
RGAE | 0.8777 | 0.0889 | 0.0176 | 0.9666 | 0.8600 | 5.0408 | 0.0713 | 0.9490
GAED | 0.9703 | 0.0690 | 0.0084 | 1.0393 | 0.9619 | 8.2345 | 0.0606 | 1.0310
NJCR | 0.9965 | 0.2675 | 0.0187 | 1.2640 | 0.9778 | 14.3415 | 0.2488 | 1.2453
SSUD-ISW | 0.9997 | 0.4797 | 0.0043 | 1.4795 | 0.9954 | 110.9198 | 0.4754 | 1.4752
Table 7. Average detection time (in seconds) of various detectors over all datasets.

Dataset | RX | CRD | CRDBPSW | LSMAD | LSDM-MoG | RGAE | GAED | NJCR | SSUD-ISW
Salinas | 0.55 | 8.08 | 8.46 | 17.65 | 34.33 | 0.31 | 1.06 | 26.37 | 56.78
Texas Coast | 0.36 | 7.24 | 12.60 | 12.76 | 11.84 | 0.25 | 0.76 | 35.35 | 49.79
Gainesville | 0.23 | 4.15 | 12.93 | 11.83 | 18.98 | 0.27 | 0.63 | 26.06 | 53.09
San Diego | 0.33 | 10.15 | 70.63 | 11.61 | 9.21 | 0.24 | 0.49 | 21.55 | 38.83
SpecTIR | 0.35 | 12.33 | 63.43 | 28.77 | 89.44 | 0.95 | 0.75 | 20.55 | 122.44
Average | 0.36 | 8.39 | 33.61 | 16.52 | 32.76 | 0.40 | 0.74 | 25.98 | 64.19
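The Average row in Table 7 is simply the per-detector mean of the five dataset rows, rounded to two decimals. A quick sketch (only two detectors shown):

```python
# Per-detector detection times (seconds) for the five datasets, from Table 7.
times = {
    "RX":       [0.55, 0.36, 0.23, 0.33, 0.35],
    "SSUD-ISW": [56.78, 49.79, 53.09, 38.83, 122.44],
}

# Mean over the five datasets, rounded to two decimals as in the table.
averages = {name: round(sum(t) / len(t), 2) for name, t in times.items()}
print(averages)  # {'RX': 0.36, 'SSUD-ISW': 64.19}
```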


MDPI and ACS Style

Lin, S.; Zhang, M.; Cheng, X.; Zhao, S.; Shi, L.; Wang, H. Hyperspectral Anomaly Detection Using Spatial–Spectral-Based Union Dictionary and Improved Saliency Weight. Remote Sens. 2023, 15, 3609. https://doi.org/10.3390/rs15143609
