Spectral unmixing is a common and crucial step for the CBHIR systems available in the literature. It aims to find the pure spectral signatures of the materials present in an image, the so-called endmembers, and to decompose mixed pixel signatures with respect to these endmembers in order to calculate the abundance of each material at a given pixel. Linear unmixing methods assume that the mixed pixel signatures measured by hyperspectral imaging systems are composed of (i) a combination of pure material signatures (endmembers) in proportion to their abundances in a pixel and (ii) additive noise at each spectral band. On the other hand, since pure endmembers may not exist in a hyperspectral image, due to insufficient spatial resolution of the imaging system or other reasons, specific linear unmixing methods utilize auxiliary endmember signature archives during the unmixing process.
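For reference, the linear mixing model assumed by these methods can be written as
\[
\mathbf{x} \;=\; \mathbf{E}\mathbf{a} + \mathbf{n} \;=\; \sum_{i=1}^{p} a_i\,\mathbf{e}_i + \mathbf{n},
\qquad a_i \ge 0, \quad \sum_{i=1}^{p} a_i = 1,
\]
where $\mathbf{x}$ is the measured pixel spectrum, $\mathbf{E}=[\mathbf{e}_1,\dots,\mathbf{e}_p]$ collects the $p$ endmember signatures, $\mathbf{a}$ is the abundance vector, and $\mathbf{n}$ is the additive noise term (the notation is ours; the non-negativity and sum-to-one constraints on the abundances are commonly, though not universally, enforced).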
CBHIR systems proposed in [7,8,9] model hyperspectral images with endmembers obtained via the Pixel Purity Index (PPI), N-FINDR, and Automatic Pixel Purity Index (A-PPI) linear unmixing algorithms, respectively. In the retrieval phase, all three systems utilize a one-to-one endmember matching-based Spectral Signature Matching Algorithm (SSMA) to assess the similarity between hyperspectral images. Unlike [7,8], the CBHIR system proposed in [9] employs the SSMA with a Spectral Information Divergence-Spectral Angular Distance (SID-SAD)-based hybrid distance.
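For concreteness, the spectral measures underlying these matching schemes can be sketched as below; this is a generic implementation rather than the exact formulation of [7,8,9], and the hybrid measure is shown in the commonly used form of SID weighted by tan(SAD), which may differ from the variant actually used in [9].

```python
import numpy as np

def sad(x, y):
    """Spectral Angular Distance: the angle between two spectra."""
    cos = np.dot(x, y) / (np.linalg.norm(x) * np.linalg.norm(y))
    return float(np.arccos(np.clip(cos, -1.0, 1.0)))

def sid(x, y, eps=1e-12):
    """Spectral Information Divergence: symmetric KL divergence between
    the spectra normalized (approximately) to probability distributions."""
    p = x / (x.sum() + eps) + eps
    q = y / (y.sum() + eps) + eps
    return float(np.sum(p * np.log(p / q)) + np.sum(q * np.log(q / p)))

def sid_sad_hybrid(x, y):
    """A common SID-SAD hybrid: SID weighted by tan(SAD); the exact
    combination used in [9] may differ (e.g. sin instead of tan)."""
    return sid(x, y) * np.tan(sad(x, y))
```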
In [10], an updated version of the CBHIR system proposed in [7] is introduced that implements a distributed hyperspectral imaging repository on a cloud computing platform. In [11], an endmember matching-based distance for content-based hyperspectral image retrieval is proposed. This distance metric mutually maps each endmember of one image to an endmember of the other image by considering the SAD between them. Finally, the sum of the L2 norms of the vectors arising from the minimum SAD between matched endmember pairs gives the Grana Distance between two hyperspectral images. The study evaluates the retrieval performance of the proposed distance with the Endmember Induction Heuristic Algorithm (EIHA) and N-FINDR linear unmixing algorithms. In [12], the same research group introduces an alternative CBHIR system that utilizes both endmembers and their abundances. The proposed system assesses the similarity of two hyperspectral images by calculating the sum of the SAD between each endmember pair arising from the Cartesian product of the two endmember sets.
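Both endmember-set distances admit a compact sketch; the pairing and aggregation details below are our reading of the descriptions above (in particular, the vectors whose L2 norms are summed for the Grana Distance are taken to be the difference vectors of the minimum-SAD matched pairs), so the original formulations in [11,12] may differ.

```python
import numpy as np

def sad(x, y):
    """Spectral Angular Distance between two spectra."""
    cos = np.dot(x, y) / (np.linalg.norm(x) * np.linalg.norm(y))
    return float(np.arccos(np.clip(cos, -1.0, 1.0)))

def grana_like_distance(E1, E2):
    """Match every endmember of E1 to its minimum-SAD counterpart in E2 and
    sum the L2 norms of the difference vectors of the matched pairs
    (our reading of the Grana Distance in [11])."""
    total = 0.0
    for e in E1:                                      # E1, E2: (n_endmembers, bands)
        j = int(np.argmin([sad(e, f) for f in E2]))   # minimum-SAD match
        total += np.linalg.norm(e - E2[j])
    return total

def cartesian_sad_distance(E1, E2):
    """Sum of SAD over every endmember pair in the Cartesian product of
    the two endmember sets, as described for [12]."""
    return sum(sad(e, f) for e in E1 for f in E2)
```

A symmetric variant would average the matching-based distance computed in both directions, which may be closer to the mutual mapping described in [11].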
In [6], yet another CBHIR approach is proposed that copes with the spectral and spatial information redundancy of hyperspectral imagery through a data compression strategy. To this end, each hyperspectral image is converted to a text stream (either pixel-wise or band-wise) and then encoded with the Lempel–Ziv–Welch (LZW) algorithm to obtain a dictionary that models the image. In the retrieval phase, the level of similarity between two hyperspectral images is assessed with dictionary distances that consider the common and independent elements of the corresponding dictionaries.
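The dictionary-based comparison can be illustrated as follows; the construction of the text stream and the exact dictionary distance of [6] are not reproduced here, and the Jaccard-style measure over dictionary entries is only an illustrative assumption.

```python
def lzw_dictionary(stream):
    """Build an LZW phrase dictionary from a sequence of symbols, e.g. a
    pixel-wise or band-wise text stream derived from the image."""
    dictionary = {(s,) for s in stream}        # single-symbol phrases
    phrase = ()
    for symbol in stream:
        candidate = phrase + (symbol,)
        if candidate in dictionary:
            phrase = candidate                 # keep extending the phrase
        else:
            dictionary.add(candidate)          # store the new phrase
            phrase = (symbol,)
    return dictionary

def dictionary_distance(d1, d2):
    """Compare two LZW dictionaries through their common and independent
    elements (here a Jaccard-style distance, an assumption on our part)."""
    common = len(d1 & d2)
    union = len(d1 | d2)
    return 1.0 - common / union if union else 0.0
```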
In [13], a hyperspectral image repository with retrieval functionality is introduced. The repository catalogs the hyperspectral images with endmembers obtained via either the N-FINDR or the Orthogonal Subspace Projection (OSP) linear unmixing algorithm, in conjunction with their abundances. The user interacts with the system by choosing, as a query, one or more spectral signatures from a library already available in the repository. In the retrieval phase, the repository evaluates the level of similarity between the query endmember(s) and the cataloged image endmembers, considering the SAD. The CBHIR system proposed in [14] builds its feature extraction strategy on sparse linear unmixing. This approach, which utilizes the SUnSAL algorithm, aims to obtain image endmembers through spectral signatures already available in a library within the system. However, this CBHIR approach requires a large built-in library that accommodates the spectral signatures of all possible materials for a proper feature extraction phase. In the retrieval phase, the proposed system evaluates the similarity of two images considering the SAD between image endmembers.
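To make the sparse-unmixing step concrete, the per-pixel SUnSAL-type regression can be written as
\[
\hat{\mathbf{a}} \;=\; \arg\min_{\mathbf{a}\ge 0}\; \tfrac{1}{2}\,\lVert \mathbf{x} - \mathbf{L}\mathbf{a} \rVert_2^2 \;+\; \lambda\,\lVert \mathbf{a} \rVert_1,
\]
where $\mathbf{x}$ is the pixel spectrum, $\mathbf{L}$ is the built-in spectral library, $\mathbf{a}$ is the sparse abundance vector whose non-zero entries indicate which library signatures act as image endmembers, and $\lambda$ controls the sparsity of the solution; the exact constraints and regularization used in [14] may differ.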
In [15], hyperspectral images are characterized with two descriptors. The spectral descriptors, corresponding to endmembers, are obtained via the N-FINDR algorithm. In addition, the proposed system uses Gabor filters to compute a texture descriptor that models the image. In the retrieval phase, the system considers the sum of the spectral and texture descriptor distances to assess the similarity between two hyperspectral images. To this end, the distances between the spectral and the textural descriptors of two images are calculated by adopting the Significance Credit Assessment method introduced in [12] and the squared Euclidean distance between Gabor filter vectors, respectively.
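A generic Gabor texture descriptor of this kind can be sketched as below; the filter-bank frequencies and orientations, and the choice of a single band or component as input, are our assumptions rather than the settings of [15].

```python
import numpy as np
from skimage.filters import gabor

def gabor_texture_descriptor(gray, frequencies=(0.1, 0.2, 0.4),
                             orientations=(0, np.pi / 4, np.pi / 2, 3 * np.pi / 4)):
    """Mean and standard deviation of the Gabor magnitude response for each
    (frequency, orientation) pair; `gray` is a single 2-D band or component."""
    features = []
    for f in frequencies:
        for theta in orientations:
            real, imag = gabor(gray, frequency=f, theta=theta)
            magnitude = np.hypot(real, imag)
            features.extend([magnitude.mean(), magnitude.std()])
    return np.asarray(features)

def texture_distance(t1, t2):
    """Squared Euclidean distance between two Gabor descriptors."""
    return float(np.sum((t1 - t2) ** 2))
```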
Similar to [15], the CBHIR system proposed in [16] characterizes hyperspectral images with two descriptors: spatial and spectral. The spatial descriptor is computed with a saliency map that combines four features: the first principal component (PCA), orientation, spectral angle, and visible spectral band opponent. The spectral descriptor, on the other hand, corresponds to a histogram of spectral words obtained by clustering the endmembers extracted from all the images in the archive. In the retrieval phase, the squared Euclidean distance between the feature descriptors is used to assess the similarity between two images. In [17], a CBHIR system is proposed that secures hyperspectral imagery retrieval by encrypting the image descriptors. The system characterizes hyperspectral images with spectral and texture descriptors. To obtain the spectral descriptor, the Scale-Invariant Feature Transform (SIFT) key-point descriptors of the RGB representation of the image and the endmembers extracted by the A-PPI linear unmixing algorithm are clustered with the k-means algorithm; this step defines spectral words that correspond to the cluster centers. The proposed system employs the Gray-Level Co-occurrence Matrix (GLCM) method to compute a texture descriptor consisting of contrast, correlation, energy, and entropy values. In the retrieval phase, these two descriptors are combined to model the images, and the Jaccard distance is used to assess the similarity between two images.
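The GLCM-based texture descriptor can be sketched as follows; the quantization level, offsets, and the band used as input are our assumptions rather than the settings of [17].

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_texture_descriptor(band, levels=32):
    """Contrast, correlation, energy, and entropy computed from a gray-level
    co-occurrence matrix of one quantized band."""
    # Quantize the band to `levels` gray levels, as graycomatrix expects integers.
    edges = np.linspace(band.min(), band.max(), levels + 1)[1:-1]
    q = np.digitize(band, edges).astype(np.uint8)
    glcm = graycomatrix(q, distances=[1], angles=[0, np.pi / 2],
                        levels=levels, symmetric=True, normed=True)
    contrast = graycoprops(glcm, "contrast").mean()
    correlation = graycoprops(glcm, "correlation").mean()
    energy = graycoprops(glcm, "energy").mean()
    # Entropy is computed directly from the normalized co-occurrence
    # probabilities, averaged over the (distance, angle) pairs.
    probs = glcm.reshape(levels * levels, -1).T
    entropy = float(np.mean([-np.sum(p[p > 0] * np.log2(p[p > 0])) for p in probs]))
    return np.array([contrast, correlation, energy, entropy])
```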
Yet another CBHIR system that models the images with spectral and texture descriptors is introduced in [18]. The system obtains the spectral descriptors from endmembers extracted with the A-PPI unmixing algorithm and adopts the GLCM-based method introduced in [17] to obtain the texture descriptors. In the retrieval phase, the proposed system uses a SID-SAM-based distance and the Image Euclidean Distance to evaluate the similarity of the spectral and texture descriptors, respectively. A bag-of-endmembers-based strategy for CBHIR is proposed in [19]. The proposed strategy represents hyperspectral image content as a bag of endmembers over a global spectral vocabulary, which is obtained by clustering all endmembers extracted from the archive.
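A minimal sketch of such a bag-of-endmembers representation is given below; the vocabulary size, the Euclidean clustering criterion of k-means, and the histogram normalization are assumptions that may not match the choices made in [16] or [19].

```python
import numpy as np
from sklearn.cluster import KMeans

def build_spectral_vocabulary(endmember_sets, n_words=50, seed=0):
    """Cluster the endmembers pooled from every archive image into a global
    vocabulary of 'spectral words' (the cluster centers)."""
    pooled = np.vstack(endmember_sets)               # (total_endmembers, bands)
    kmeans = KMeans(n_clusters=n_words, n_init=10, random_state=seed)
    kmeans.fit(pooled)
    return kmeans

def bag_of_endmembers(endmembers, kmeans):
    """Histogram of spectral-word assignments for one image's endmembers."""
    words = kmeans.predict(endmembers)
    hist = np.bincount(words, minlength=kmeans.n_clusters).astype(float)
    return hist / max(hist.sum(), 1.0)               # normalized descriptor
```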
In addition to the methods mentioned above, there is also a method that utilizes artificial neural networks. The method proposed in [20] provides pixel-based retrieval using a Deep Convolutional Generative Adversarial Network (DCGAN). For this purpose, the network is trained with a combination of spectral and spatial vectors obtained from manually selected pure material signatures of hyperspectral images and from neighboring pixel signatures.