Article

Airborne Hyperspectral Imagery for Band Selection Using Moth–Flame Metaheuristic Optimization

1 Department of Electronics and Communication Engineering, Amrita School of Engineering, Amrita Vishwa Vidyapeetham, Coimbatore 641112, India
2 Geosystems Research Institute, Mississippi State University, Starkville, MS 39759, USA
* Authors to whom correspondence should be addressed.
J. Imaging 2022, 8(5), 126; https://doi.org/10.3390/jimaging8050126
Submission received: 4 March 2022 / Revised: 5 April 2022 / Accepted: 13 April 2022 / Published: 27 April 2022
(This article belongs to the Topic Hyperspectral Imaging: Methods and Applications)

Abstract

In this research, we study a new metaheuristic algorithm called Moth–Flame Optimization (MFO) for hyperspectral band selection. With hundreds of highly correlated narrow spectral bands, the number of training samples required to train a statistical classifier is high. Thus, the problem is to select a subset of bands without compromising the classification accuracy. One way to solve this problem is to model an objective function that measures class separability and utilize it to arrive at a subset of bands. In this research, we studied MFO to select optimal spectral bands for classification. MFO is inspired by the navigation method of moths in nature called transverse orientation: a moth travels long distances in a straight line by keeping a constant angle with the Moon, a compelling strategy given that the Moon is extremely far from the moth. Our research tested MFO on three benchmark hyperspectral datasets—Indian Pines, University of Pavia, and Salinas. MFO produced an Overall Accuracy (OA) of 88.98%, 94.85%, and 97.17%, respectively, on the three datasets. Our experimental results indicate that MFO produces better OA and Kappa than state-of-the-art band selection algorithms such as particle swarm optimization, grey wolf, cuckoo search, and genetic algorithms. The analysis results show that the proposed approach effectively addresses the spectral band selection problem and provides high classification accuracy.

1. Introduction

Hyperspectral Imaging (HSI) sensors can acquire detailed reflectance information from narrow spectral bands in the visible, Near-Infrared (NIR), mid-IR, and thermal IR portions of the light spectrum [1]. HSI sensors can collect hundreds of bands with a high spectral resolution, providing near-continuous spectral reflectance information for every pixel. This enables researchers to characterize ground materials in ways that are otherwise impossible with optical or multispectral remote sensing. The primary limitation of HSI is its high dimensionality and the resulting Hughes phenomenon [2]. To alleviate the Hughes phenomenon and reduce the computation time of hyperspectral analysis, the dimensionality of the data needs to be reduced without losing information. Dimensionality reduction can be achieved in two ways: (1) feature extraction, which applies a transform to convert the high-dimensional HSI cube into a lower-dimensional space, in which the original band information is lost [3,4]; (2) feature selection, or band selection, which reduces the dimensionality to optimize the classification accuracy with a limited sample size. Feature selection selects the ideal mix of bands for supervised learning using the underlying optimization techniques [5]. Approximation algorithms that find band subsets are useful sub-optimal solutions, since finding optimal subsets through exhaustive search requires prohibitive processing time and computational complexity. For this reason, band selection is an ideal mechanism to reduce the dimensionality while maintaining statistical confidence in the face of the Hughes phenomenon [6].
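To make the distinction concrete, the following Python sketch contrasts the two strategies. It is a minimal illustration with stand-in data, assuming scikit-learn and NumPy; the variable names and component counts are our own choices, not from the paper:

```python
import numpy as np
from sklearn.decomposition import PCA

X = np.random.rand(1000, 200)         # stand-in for 1000 pixels x 200 bands
selected_bands = [4, 17, 33, 57, 76]  # stand-in for an optimizer's output

# Feature extraction: all 200 bands are mixed into 30 new components,
# so the identity of the original bands is lost.
X_extracted = PCA(n_components=30).fit_transform(X)

# Band selection: a physical subset of the original bands is kept intact.
X_selected = X[:, selected_bands]
```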
The balance between exploration and exploitation is a critical aspect of attaining global optimization for band selection. The goal is to cover the entire search space to avoid missing the genuine optima; nevertheless, to discover the genuine optima, the search must also zero in on a specific good part of the solution space. A variety of "no free lunch" (NFL) theorems have been presented, establishing that any improved performance over one class of tasks is countered by reduced performance over another [7]. These theorems lead to a geometric view of what it means for an algorithm to be well performing and appropriate for an optimization problem [8]. The NFL theorem demonstrates that an optimization strategy that works well for one type of problem may not work well for another: one metaheuristic may perform better than others for one sort of data and worse for another. As a result, it is worthwhile to investigate each suggested metaheuristic strategy for HSI band selection. Many nature-inspired metaheuristics, such as Particle Swarm Optimization (PSO), Genetic Algorithm Optimization (GAO), Cuckoo Search Optimization (CSO), and differential evolution, have already been well studied for HSI band selection.
The contributions of this research are as follows:
  • We propose an MFO-based algorithm for hyperspectral band selection.
  • We implemented and tested MFO-based hyperspectral band selection for three benchmark datasets.
  • We compared the performance of MFO with three state-of-the-art metaheuristic band selection methods.
PSO mimics the navigation and foraging of a flock of birds or a school of fish [9]. PSO analyzes the HSI data models for the optimization of band selection. The approach is built on an abstraction of the selection process and was introduced by [10,11]. PSO has difficulty with respect to designing parameters and struggles when the data points are scattered in a three-dimensional space. It also becomes stuck in local minima, in particular for complex optimization problems. The genetic algorithm addresses this by widening the search toward the global optimum, whereas conventional optimization approaches converge to a local minimum without reaching the global optimum. Genetic algorithms are used to find the optimal set of parameters at a lower cost. GAO requires little knowledge about the optimization problem, but creating an objective function for an optimization application is complex and requires more computational resources (i.e., it is time-consuming). CSO was inspired by the obligate brood parasitism of some cuckoo species, which lay their eggs in the nests of host birds of other species [12,13]. Some host birds can engage in direct conflict with the intruding cuckoos (i.e., if a host bird discovers the eggs are not its own, it will either throw these foreign eggs out or simply abandon its nest and build a new nest elsewhere). Three major operators find the optimum values of the bands. The first is the Levy flight, which generates a new solution (egg) by perturbing the current one [14]; a sketch of this operator is given below. The second operator is the host bird, which can throw the eggs away or abandon the nest (with a probability $p_a \in [0, 1]$) and build an entirely new nest. The third operator selects the optimum bands, which are refined with the help of the second operator. The CSO approach is not ideal because it quickly falls into local optima and has a slow convergence rate [15]. We need a global optimization technique such as MFO to overcome these issues. MFO benefits from being simple to understand and apply, and it provides a high convergence rate [16]. Nonetheless, the literature indicates that the MFO algorithm still has room for development. As a result, several researchers have attempted to develop the algorithm in various ways over the last two years.
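As a rough illustration of the Levy-flight operator described above, here is a minimal Python sketch using Mantegna's algorithm; the function names, the step scale alpha, and beta = 1.5 are our illustrative choices, not values from the cited CSO papers:

```python
import numpy as np
from math import gamma

def levy_step(dim, beta=1.5):
    """Draw one Levy-flight step via Mantegna's algorithm."""
    sigma = (gamma(1 + beta) * np.sin(np.pi * beta / 2)
             / (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = np.random.normal(0.0, sigma, dim)
    v = np.random.normal(0.0, 1.0, dim)
    return u / np.abs(v) ** (1 / beta)

def cuckoo_perturb(nest, best, alpha=0.01):
    """Generate a new candidate solution (egg) by a Levy flight around a nest."""
    return nest + alpha * levy_step(nest.size) * (nest - best)
```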
Metaheuristic algorithms have been used as promising methods for solving many problems over the last decade. Many metaheuristic algorithms, however, may provide unsatisfactory performance due to slow or premature convergence. As a result, determining how to develop algorithms that balance exploration and exploitation while precisely locating the appropriate hyperspectral bands remains a challenge. To solve hyperspectral band selection problems, this paper proposes a new global optimization algorithm based on MFO. The main source of inspiration for this algorithm is the transverse orientation navigation method used by moths in nature. Moths fly at night by maintaining a fixed angle with respect to the Moon, which is a very effective mechanism for traveling long distances in a straight line. These beautiful insects, however, become trapped in a spiral path around artificial lights. We attempted to address the MFO algorithm's slow convergence and low precision. MFO is used to maintain a high level of global exploration and an effective balance between global and local searches. Although evolutionary multi-objective optimization approaches have recently been proposed to simultaneously optimize several criteria, they are unable to manage the trade-off between global exploration and local exploitation in the search space for the hyperspectral feature selection problem. Thus, a discrete multi-objective feature selection strategy for hyperspectral imaging, built on this metaheuristic framework, is proposed in this work. The suggested method creates a novel and effective framework for multi-objective hyperspectral feature selection. The framework models the ratio of the Jeffries–Matusita distance to mutual information to minimize redundancy and maximize the relevance of the selected feature subset. In addition, the variance of the band selection is used to maximize the amount of information.
Evolutionary computation is a natural-evolution-inspired computational intelligence approach. An evolutionary method begins by generating a random population of individuals that represent candidate solutions to the optimization problem. The first population can be generated at random or fed into the algorithm. Individuals are evaluated using a fitness function, and the output of the function indicates how well the individual solves, or approaches solving, the problem. Then, several natural-evolution-inspired operators, such as crossover, mutation, selection, and reproduction, are applied to individuals. A new population is created based on the fitness values of newly evolved individuals. Some individuals are eliminated because the population size must be maintained, just as it is in nature. This process is repeated until the termination requirement is satisfied. The most commonly used stopping condition is reaching a defined number of generations. The best individual with the highest fitness value is chosen as the answer. Every search algorithm must address search space exploration and exploitation. Exploration is the process of visiting completely new regions of a search space, whereas exploitation is the process of visiting regions of a search space close to previously visited places. A search algorithm must achieve a suitable balance between exploration and exploitation in order to be successful. In supervised machine learning, algorithms achieve good performance depending on the number of examples in the training set, the dimensions of the feature space, the correlated features, and how well overfitting is avoided. These are just a few factors on which the selection of the algorithm may depend. Based on this, we used three different supervised machine learning algorithms: Random Forest (RF), K-Nearest Neighbor (KNN), and Support Vector Machine (SVM). The derived spectral and spatial information for an effective classification was then learned using RF, KNN, and SVM. The problem of local optima was addressed with the MFO algorithm, an optimization approach that has been applied efficiently in various fields to find an optimum solution.
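Since the fitness function drives every metaheuristic discussed here, the following is a minimal sketch of one plausible choice, assuming scikit-learn: a candidate band subset is scored by the cross-validated accuracy of a classifier trained on only those bands. The classifier and its settings are our illustrative assumptions, not the paper's exact configuration:

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

def band_subset_fitness(X, y, band_mask):
    """Score a band subset by cross-validated classification accuracy.

    X: (n_pixels, n_bands) spectra; y: class labels;
    band_mask: boolean vector marking the candidate bands.
    """
    if not np.any(band_mask):   # an empty subset is infeasible
        return 0.0
    clf = KNeighborsClassifier(n_neighbors=5)
    return cross_val_score(clf, X[:, band_mask], y, cv=3).mean()
```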

2. Materials and Methods

Moth–Flame Optimization Algorithm

MFO navigates through a process known as transverse orientation. A moth flies by keeping a constant angle with the Moon, which is a compelling strategy for traveling long distances in a straight line, since the Moon is extremely far from the moth [17]. Artificial lights, however, fool moths: the proximity of such a light source, combined with the moth maintaining a relative angle to it, results in a spiral flight path. In the MFO algorithm, moths travel in a logarithmic spiral around flames and then converge on the flame. In our MFO technique, moths are the candidate solutions, and their spatial coordinates are the problem variables. As a result, the moths' position vectors allow them to fly in 1D, 2D, 3D, or hyper-dimensional space. Because the MFO algorithm uses a population-based approach, the collection of moths is represented in the following matrix:
$$M = \begin{bmatrix} m_{1,1} & m_{1,2} & \cdots & m_{1,d} \\ \vdots & \vdots & \ddots & \vdots \\ m_{n,1} & m_{n,2} & \cdots & m_{n,d} \end{bmatrix}$$
The return value of the objective function for each moth is its fitness value. When the fitness function receives a position vector (such as the first row of the matrix M), its output is assigned to the corresponding moth in the array OM. Flames are an important part of MFO. The flame matrix is as follows:
$$F = \begin{bmatrix} F_{1,1} & F_{1,2} & \cdots & F_{1,d} \\ \vdots & \vdots & \ddots & \vdots \\ F_{n,1} & F_{n,2} & \cdots & F_{n,d} \end{bmatrix}$$
where n is the number of moths and d is the number of variables (dimensions). For the moths and flames, we assume arrays that store the corresponding fitness values, as follows.
$$OM = \begin{bmatrix} OM_1 \\ OM_2 \\ \vdots \\ OM_n \end{bmatrix}, \quad OF = \begin{bmatrix} OF_1 \\ OF_2 \\ \vdots \\ OF_n \end{bmatrix}$$
In MFO, local and global searches over the hyperspectral band search space help produce quality classification maps. Machine learning algorithms are then needed to classify the hyperspectral data based on the selected features. The proposed optimized band selection method is depicted in Figure 1. The random parameter t, which varies from −1 to 1, accelerates convergence across generations [16,17]. The flow chart of the MFO algorithm is shown in Figure 2. When dealing with non-linear objective functions, the original PSO, CSO, and GA all suffer from the problem of local optima. Aside from that, because of the randomness, their convergence speed and precision are both very low. We suggest the MFO technique to address all of these disadvantages while increasing both the quality of the solution and the pace of convergence [18]. The original MFO, on the other hand, uses spiral movement to achieve exploitation rather than simply exploring the solution space.
The spiral behavior can be understood as the moths pursuing the flame, treating it as prey, along a spiral path in a transverse orientation [16]. The corresponding model is shown in Equation (5). To be more precise, the MFO bands explore all available classification maps. According to Equation (5), the MFO bands encircle the maps or spiral in a transverse direction depending on the distance between the actual position of the band and the best positions obtained so far. The optimal band selection using MFO is given in Algorithm 1.
Algorithm 1: Optimal band selection from airborne HSI using moth–flame optimization.
Input: Training and test datasets from the hyperspectral cube $H_{cube}$
Output: Band collection derived from the HSI data cube using MFO’s global optimum location.
Procedure MFO Algorithm
  Initialize the moth population randomly within the feasible search space
  Evaluate the fitness of the entire population and sort it
  Set the flames equal to the sorted population
  While iteration < max iteration
  Calculate the flame number using the following equation:
$$\mathrm{FlameNumber} = \operatorname{round}\left(N - l \cdot \frac{N - 1}{T}\right)$$
where l is the current iteration, N is the maximum flame number, and T is the maximum number of iterations.
The distance $D_i$ between the $i$th moth $M_i$ and its corresponding $j$th flame $F_j$ can be obtained from:
$$D_i = \left| F_j - M_i \right|$$
Update the values of the convergence constant $a$ and the random number $t$:
$$a = -1 + l \cdot \left( \frac{-1}{T} \right)$$
$$t = (a - 1) \cdot rand + 1$$
where $t$ is a random number in $[-1, 1]$ and $rand$ is drawn uniformly from $[0, 1]$. Each moth maintains its direction relative to its associated flame $F_j$. We used the spiral function $S$, which simulates the moth's spiral flight around a light source, defined by the following equation.
$$S(M_i, F_j) = D_i \cdot e^{b t} \cdot \cos(2 \pi t) + F_j$$
where b is the spiral shape constant.
Update and sort the fitness for all search agents.
Update the flames.
Iteration = Iteration + 1.
End while
  Return the global best position
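The following Python sketch condenses Algorithm 1 into code. It is a minimal reading of the update equations above, not the authors' implementation: moths are encoded as continuous vectors, fitness is maximized, and decoding a position into a band subset (e.g., by thresholding) is left to the supplied fitness function:

```python
import numpy as np

def mfo(fitness, dim, n_moths=30, max_iter=200, lb=0.0, ub=1.0, b=1.0):
    """Minimal moth-flame optimization loop (maximizes `fitness`)."""
    moths = np.random.uniform(lb, ub, (n_moths, dim))
    flames, flame_fit = None, None
    for l in range(1, max_iter + 1):
        fit = np.array([fitness(m) for m in moths])
        # Merge moths with the previous flames and keep the best n_moths,
        # so the best solutions found so far are never lost.
        if flames is None:
            pool, pool_fit = moths.copy(), fit
        else:
            pool = np.vstack([flames, moths])
            pool_fit = np.concatenate([flame_fit, fit])
        order = np.argsort(-pool_fit)[:n_moths]
        flames, flame_fit = pool[order], pool_fit[order]
        # FlameNumber = round(N - l*(N-1)/T): flames decrease over iterations.
        n_flames = max(1, round(n_moths - l * (n_moths - 1) / max_iter))
        a = -1.0 + l * (-1.0 / max_iter)   # convergence constant in [-2, -1]
        for i in range(n_moths):
            j = min(i, n_flames - 1)       # surplus moths share the last flame
            D = np.abs(flames[j] - moths[i])
            t = (a - 1.0) * np.random.rand(dim) + 1.0
            # Logarithmic spiral update: S(M_i, F_j) = D * e^(bt) * cos(2*pi*t) + F_j
            moths[i] = D * np.exp(b * t) * np.cos(2.0 * np.pi * t) + flames[j]
            moths[i] = np.clip(moths[i], lb, ub)
    return flames[0], flame_fit[0]        # global best position and its fitness
```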
The moths are modified using Algorithm 1, which generates a collection of hyperspectral bands via MFO. After selecting representative bands, the initial hyperspectral cube $H \in \mathbb{R}^{w \times h \times \lambda}$, where $w$ and $h$ are the spatial dimensions of the HSI and $\lambda$ denotes the number of spectral bands, becomes $H_{MFO} \in \mathbb{R}^{w \times h \times r}$, where $r$ denotes the number of bands obtained from MFO and $r < \lambda$. Here, each moth's coordinates represent a solution of the optimization problem, i.e., a combination of bands. The moth population has a predetermined number of moth positions. The flame acts as the target of the objective (fitness) function for optimizing the band selection, and the moth positions are updated based on the number of bands at each iteration. The proposed paradigm is made up of two opposing objective functions: one assesses the amount of information, while the other measures the level of redundancy in the selected bands. The two elements are quantified by this model, allowing them to be optimized concurrently. A multi-objective immune method has been built to accommodate the features of hyperspectral data in order to optimize such a model [19].

3. Classification Methods

Random forest is a classification algorithm that uses many decision tree models built on different sets of bootstrapped features [20]. The algorithm works as follows: the training set is bootstrapped multiple times, and each bootstrap sample is used to build a single tree in the ensemble. At each split in a tree, a random subset of features is evaluated to find the best split variable. The KNN algorithm is a non-parametric lazy learning algorithm [21]. Its purpose is to use a database in which the data points are separated into several classes to predict the class of a new sample point. Support vector machines are now regarded as canonical examples of "kernel methods," one of the critical areas in machine learning. SVM maps an input space into an output space using a nonlinear mapping function ϕ such that the data points become linearly separable in the output space [22]. Once the points are linearly separable, SVM finds the optimal separating hyperplane.
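A minimal sketch of the three classifiers as used here, assuming scikit-learn; the hyperparameters are illustrative, since the paper does not report the exact settings:

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

classifiers = {
    "RF": RandomForestClassifier(n_estimators=100),   # bagged decision trees
    "KNN": KNeighborsClassifier(n_neighbors=5),       # lazy, non-parametric
    "SVM": SVC(kernel="rbf", C=1.0),                  # kernel method
}

def evaluate(X_train, y_train, X_test, y_test):
    """Train each classifier on the selected bands and report test accuracy."""
    return {name: clf.fit(X_train, y_train).score(X_test, y_test)
            for name, clf in classifiers.items()}
```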

Dataset Description

The airborne hyperspectral datasets used in this experiment were the Indian Pines [23], Salinas [24], and University of Pavia images [25]. The ground truth of a sample image from each dataset is shown in Figure 3. The Indian Pines dataset was collected by the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) sensor over the northwest part of Indiana in 1992. The scene has 224 spectral bands with a spatial dimension of 145 × 145 and ground truth data, as shown in Figure 3a. The Salinas dataset was acquired over Salinas Valley, California, in 1998 using the AVIRIS sensor. It has 224 spectral bands with a 512 × 217 spatial dimension, ground truth data, and 16 different land cover classes, as shown in Figure 3b. The University of Pavia scene was collected using the Reflective Optics System Imaging Spectrometer (ROSIS) sensor during a flight campaign over Pavia, Northern Italy. It has 103 spectral bands with a 610 × 340 spatial dimension, ground truth data, and 9 different classes, as shown in Figure 3c [26]. MFO can reduce images with hundreds of bands to very few selected bands; its selections are compared with those of other state-of-the-art methods in Table 1.
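These benchmarks are commonly distributed as MATLAB files; a loading sketch for Indian Pines is shown below, assuming SciPy. The file and key names follow the common public distribution and may differ for other copies:

```python
import numpy as np
from scipy.io import loadmat

# Assumed file/key names from the common public distribution of the benchmark.
cube = loadmat("Indian_pines_corrected.mat")["indian_pines_corrected"]
gt = loadmat("Indian_pines_gt.mat")["indian_pines_gt"]

# Flatten the (rows, cols, bands) cube to a pixel matrix and drop
# unlabeled pixels (class 0 in the ground truth).
X = cube.reshape(-1, cube.shape[-1]).astype(np.float64)
y = gt.ravel()
X, y = X[y > 0], y[y > 0]
```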

4. Experimental Results

This research work mainly focused on finding optimal bands in hyperspectral data [27] and improving the classification accuracy and prediction rate on test samples.
The performance of the MFO algorithm was evaluated by using 20% of the samples for training and the remaining 80% of the samples for testing. The details of the training and testing samples for the three datasets are provided in Table 2.
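Continuing the loading sketch above, the 20/80 split can be reproduced with a stratified holdout so that each class keeps the proportions listed in Table 2 (the random seed is our choice, for reproducibility):

```python
from sklearn.model_selection import train_test_split

X_train, X_test, y_train, y_test = train_test_split(
    X, y, train_size=0.20, stratify=y, random_state=0)
```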

Classification Maps

This research proposed a method for band selection using MFO. The classification accuracy was used as the fitness function in this algorithm, and global bands were divided into sub-optimal bands based on the fitness values. The following discusses the different optimization techniques.

PSO-based hyperspectral band selection was implemented and tested with benchmark hyperspectral data. Good overall accuracy was achieved, but PSO has disadvantages, such as easily falling into local optima [28]. Band selection of HSI using the cuckoo search algorithm was implemented and tested with benchmark hyperspectral datasets. The cuckoo search algorithm achieved good accuracy; however, it needs two objective functions to be implemented, and it converges slowly for multi-dimensional data [29]. Genetic algorithms (GAs) adopt probabilistic methods of search to minimize a given fitness or cost function. The main features of this optimization technology are that: (i) the algorithm does not deal with the parameters themselves, but instead codes them; (ii) the optimal search works with a population of solution points; (iii) the derivatives of the fitness function are unknown; (iv) the algorithm uses probabilistic rather than deterministic transformation rules for hyperspectral band selection [30].

The classification maps of the proposed approach and the other state-of-the-art approaches for sample images from the Indian Pines, University of Pavia, and Salinas datasets are depicted in Figure 4, Figure 5, and Figure 6, respectively. From Figure 4j–l, the proposed MFO band selection method is observed to classify the grasstrees class accurately using the RF, KNN, and SVM classifiers (Anand R, 2017) for the Indian Pines HSI. With the KNN classifier, the grasstrees, grass-pasture-mowed, oats, and hay-windrowed classes were classified accurately. The rate of misclassification in the other classes was low, owing to the intelligent behavior of MFO. Compared with these classifiers, however, SVM played a significant role in this research, especially with MFO, because it provided a perfect classification for several classes, such as alfalfa, grasstrees, grass-pasture-mowed, hay-windrowed, oats, and wheat. From Figure 5j–l, the proposed moth–flame-optimized band selection method is observed to classify the bare soil class accurately using the RF, KNN, and SVM classifiers for the University of Pavia hyperspectral data. Due to the mixed-pixel problem in the hyperspectral cube, a portion of the self-blocking bricks was incorrectly classified as asphalt. Nevertheless, compared with the RF and KNN classifiers, SVM played a major role, especially with MFO, because it provided good classification accuracy for the three classes bare soil, gravel, and trees. From Figure 6j–l, the proposed moth–flame-optimized band selection method is observed to classify the celery class accurately using the RF classifier for the Salinas scene hyperspectral data. The celery and lettuce romaine 7 wk classes were classified accurately using the KNN classifier. However, compared with the RF and KNN classifiers, SVM again played a significant role, especially with MFO, because it classified many classes accurately, such as broccoli green weeds 2, celery, corn senesced green weeds, and lettuce romaine 7 wk. This is because the selected bands were very useful for accurate classification.
It was observed that samples of fallow rough plow, lettuce romaine 6 wk, and soil vineyard develop were misclassified because of the highly correlated pixel values.

5. Comparative Analysis of State-of-the-Art Approaches

The efficiency of the proposed method was compared with PSO [31], GA [32], and CSO [33]. Table 3 summarizes the proposed band selection method's classwise accuracies on Indian Pines. The highest accuracy is denoted in bold. Alfalfa, grasstrees, grass-pasture-mowed, hay-windrowed, oats, and wheat were classified with 100% accuracy. This was accomplished by the intelligent behavior of the MFO band selection algorithm. The corn, grass-pasture, soybean-clean, and buildings-grass-trees-drives classes obtained accuracy values of 82.22%, 96.85%, 88.16%, and 78.21%, respectively, while the accuracy of the other approaches was between 30% and 70%. This accuracy was achieved by extracting only the optimized bands through MFO. Additionally, six further classes exhibited high classification accuracy, while the remaining classes exhibited performance comparable to the other methods.
Table 4 summarizes the proposed band selection method's classwise accuracies on the University of Pavia dataset. The highest accuracy is denoted in bold. Among the nine classes, MFO-based band selection outperformed all the other approaches. Two of the classes, bare soil and trees, achieved a classification accuracy of 100%. Asphalt, meadows, and shadows had high accuracies of 96.69%, 98.67%, and 93.21%, respectively, while the accuracy of the other approaches was in the range of 71.19% to 92.41%. Painted metal sheets were classified similarly by PSO with RF and by MFO with KNN, since this class has a spectral reflectance distinctly different from the other classes.
Table 5 summarizes the proposed band selection method's classwise accuracies on the Salinas dataset. The highest accuracy is denoted in bold. Of the sixteen classes, four achieved a classification accuracy of 100.00% with the MFO-based bands. The accuracies of classes such as stubble, celery, and corn senesced green weeds were 99.94%, 99.72%, and 98.63%, respectively, comparatively outperforming the other approaches. Three classes demonstrated competitive performance with the alternative approaches.

5.1. Comparative Analysis of State-of-the-Art Techniques with Respect to Overall and Average Accuracies

On all datasets, the proposed MFO algorithm was efficient in attaining the actual class labels. As shown in Table 6, the proposed method's overall accuracy on the Indian Pines dataset was 88.98% for the SVM classifier, whereas it varied between 75.00% and 88.90% for the other strategies. The average accuracy was 91.37%, while it varied between 72.46% and 90.11% for the other approaches. Similarly, the overall accuracy on the University of Pavia dataset was 96.94%, compared to 91.84% to 95.48% for the other approaches, and the average accuracy was 94.85%, compared to 89.95% to 93.53% for the other systems. On the Salinas dataset, the overall accuracy of the other approaches was in the range of 91.99% to 95.45%, with the proposed method attaining an overall accuracy of 93.9%. The average accuracy of the proposed method was 97.17%, while that of the other approaches varied between 95.86% and 97.71%, as shown in Table 7 and Table 8, respectively.

To validate each method, MFO used 30 search members and 200 iterations. It should be emphasized that the number of moths (or other solution candidates in other algorithms) should be selected empirically. The more artificial moths are used, the more likely the global optimum is found. However, 30 moths is a reasonable number for addressing optimization issues, and it can be decreased to 20 or 10 for computationally expensive situations.

The entropy of the bands selected by the different optimization techniques from the Indian Pines, Salinas Scene, and University of Pavia hyperspectral data is shown in Figure 7. It can be seen from Figure 8 that the proposed moth–flame optimization technique had an average entropy of 12.28157. The proposed moth–flame-based band selection showed high overall accuracy, average accuracy, and Kappa coefficients. The reason is that the proposed MFO method selected the most suitable bands compared to the other optimization methods, and the selected bands had low entropy. Figure 9a shows the convergence curve for the Indian Pines dataset: MFO-based band selection reached an overall accuracy of 88.98% after the 130th iteration. For the University of Pavia dataset, the global optimum was attained at the 126th iteration with an accuracy of 93.92%, as shown in Figure 9b. For the Salinas dataset, the global optimum was attained at the 132nd iteration with an accuracy of 96.94%, as shown in Figure 9c. This implies that MFO is a highly appropriate solution for the selection of hyperspectral bands.
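For reference, one plausible reading of the band-entropy measure reported in Figures 7 and 8 is the Shannon entropy of each selected band's intensity histogram; the paper does not spell out its exact estimator, so this sketch is an assumption:

```python
import numpy as np

def band_entropy(cube, band, n_bins=256):
    """Shannon entropy (bits) of one band's intensity histogram."""
    hist, _ = np.histogram(cube[:, :, band].ravel(), bins=n_bins)
    p = hist / hist.sum()
    p = p[p > 0]                          # ignore empty bins
    return float(-(p * np.log2(p)).sum())

# Average entropy over an optimizer's selected bands, e.g.:
# np.mean([band_entropy(cube, b) for b in selected_bands])
```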

5.2. Computation Time

The computational complexity of the MFO method depends on several factors, such as the number of moths, the number of variables, the maximum number of iterations, and the technique used to sort the flames in each iteration [34]. Since we employed the Quicksort method, the sorting costs $O(n \log n)$ in the best case and $O(n^2)$ in the worst case. Considering the position-update function as well, the entire computational complexity is given by:
$$O(\mathrm{MFO}) = O\left(t \left(n^2 + n d\right)\right) = O\left(t n^2 + t n d\right)$$
where n is the number of moths, t is the maximum number of iterations, and d is the number of variables. In this paper, n was taken as the maximum value of the combined pixels, 200 iterations were used, and the number of variables was the number of bands present in the hyperspectral data. Table 9 summarizes the processing time needed by each procedure on each of the three datasets. The overall execution time is the time taken to complete the band selection and classification process. As the table indicates, the suggested approach generally took a shorter time than the other methods to compute the classifier's overall accuracy; where it required more execution time, it achieved a higher degree of classification accuracy. Table 8 describes the overall comparison of our proposed work with PSO, GAO, and cuckoo search. The results show that the MFO algorithm achieved higher accuracies with the minimum number of iterations needed to reach the global optimum.
Mirjalili [34] noted that the MFO algorithm can theoretically be more effective in solving optimization problems, and we observed that this algorithm improved the performance of HSI band selection compared with all the other methods. MFO updates positions so as to acquire neighboring bands surrounding the flames, a mechanism that strongly encourages exploitation. Local-optimum avoidance is high because MFO uses a population of moths, and the best bands are never lost because the flame matrix F stores them. Exploration and exploitation are balanced using an appropriate number of flames.

6. Conclusions

In this paper, band selection was performed by optimizing an objective function. The classification accuracy rate and a class separability measure were combined in the design of the objective function; a two-class separability measure based on K-means distances was proposed in this method. A new metaheuristic called the moth–flame optimizer was used to optimize the objective function, providing better results. The proposed approach was tested on three widely used HSI datasets, University of Pavia, Indian Pines, and Salinas Scene, to demonstrate its effectiveness. Effectiveness was measured in terms of overall accuracy, average accuracy, individual class accuracy, and computational time. A comparison with several feature selection methods from the literature was also conducted. The classification accuracy rate of the proposed approach was very satisfactory compared to the others. In this approach, 20% of the data were used to train the algorithm. The proposed technique selects the bands that better separate the classes, thereby increasing the classification rate. In the future, the proposed algorithm can be tested on different hyperspectral datasets, and the quality of the objective function can be improved.
More extensive comparisons incorporating class accuracies and other statistics, as well as tests with more HSI datasets, are required to fully appreciate MFO's benefits. Future research will examine other datasets and compare MFO to other cutting-edge metaheuristic algorithms to further understand its efficacy. Experimenting with newer heuristics such as MFO for HSI band selection is vital, since the no free lunch theorem asserts that different optimization methods will be better for different situations and that there is no single optimal strategy for all challenges.

Author Contributions

Conceptualization, S.S., E.W.; methodology, S.S., E.W., R.A.; software, E.W., R.A.; validation, R.A.; formal analysis, R.A.; investigation, R.A., S.S.; resources, S.S., S.V.; data curation, R.A., M.Z.; writing: R.A., S.S.; writing—review and editing, S.S., S.V., M.Z.; visualization, R.A., S.S.; supervision, S.S., S.V.; project administration, S.S., S.V. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Chang, C.I. Hyperspectral Imaging: Techniques for Spectral Detection and Classification; Springer Science & Business Media: Berlin/Heidelberg, Germany, 2003; Volume 1.
  2. Shahshahani, B.M.; Landgrebe, D.A. The effect of unlabeled samples in reducing the small sample size problem and mitigating the Hughes phenomenon. IEEE Trans. Geosci. Remote Sens. 1994, 32, 1087–1095.
  3. McKeown, D.M.; Cochran, S.D.; Ford, S.J.; McGlone, J.C.; Shufelt, J.A.; Yocum, D.A. Fusion of HYDICE hyperspectral data with panchromatic imagery for cartographic feature extraction. IEEE Trans. Geosci. Remote Sens. 1999, 37, 1261–1277.
  4. Liu, B.; Yu, X.; Zhang, P.; Yu, A.; Fu, Q.; Wei, X. Supervised deep feature extraction for hyperspectral image classification. IEEE Trans. Geosci. Remote Sens. 2017, 56, 1909–1921.
  5. Damodaran, B.B.; Courty, N.; Lefèvre, S. Sparse Hilbert Schmidt independence criterion and surrogate-kernel-based feature selection for hyperspectral image classification. IEEE Trans. Geosci. Remote Sens. 2017, 55, 2385–2398.
  6. Zebari, R.; Abdulazeez, A.; Zeebaree, D.; Zebari, D.; Saeed, J. A Comprehensive Review of Dimensionality Reduction Techniques for Feature Selection and Feature Extraction. J. Appl. Sci. Technol. Trends 2020, 1, 56–70.
  7. Ho, Y.C.; Pepyne, D.L. Simple explanation of the no-free-lunch theorem and its implications. J. Optim. Theory Appl. 2002, 115, 549–570.
  8. Wolpert, D.H.; Macready, W.G. No free lunch theorems for optimization. IEEE Trans. Evol. Comput. 1997, 1, 67–82.
  9. Kennedy, J.; Eberhart, R. Particle swarm optimization. In Proceedings of the ICNN'95—International Conference on Neural Networks, Perth, WA, Australia, 27 November–1 December 1995; Volume 4, pp. 1942–1948.
  10. Chang, Y.L.; Fang, J.P.; Benediktsson, J.A.; Chang, L.; Ren, H.; Chen, K.S. Band selection for hyperspectral images based on parallel particle swarm optimization schemes. In Proceedings of the 2009 IEEE International Geoscience and Remote Sensing Symposium, Cape Town, South Africa, 12–17 July 2009; Volume 5, p. V-84.
  11. Grefenstette, J.J. Genetic algorithms and machine learning. In Proceedings of the Sixth Annual Conference on Computational Learning Theory, Santa Cruz, CA, USA, 26–28 July 1993; pp. 3–4.
  12. Moradi, M.H.; Abedini, M. A combination of genetic algorithm and particle swarm optimization for optimal DG location and sizing in distribution systems. Int. J. Electr. Power Energy Syst. 2012, 34, 66–74.
  13. Yang, X.S.; Deb, S. Engineering optimization by cuckoo search. Int. J. Math. Model. Numer. Optim. 2010, 1, 330–343.
  14. Medjahed, S.A.; Saadi, T.A.; Benyettou, A.; Ouali, M. Binary cuckoo search algorithm for band selection in hyperspectral image classification. IAENG Int. J. Comput. Sci. 2015, 42, 183–191.
  15. Wang, G. A comparative study of cuckoo algorithm and ant colony algorithm in optimal path problems. MATEC Web Conf. 2018, 232, 03003.
  16. Li, Y.; Zhu, X.; Liu, J. An Improved Moth-Flame Optimization Algorithm for Engineering Problems. Symmetry 2020, 12, 1234.
  17. Helmi, A.; Alenany, A. An enhanced Moth-flame optimization algorithm for permutation-based problems. Evol. Intell. 2020, 13, 741–764.
  18. Mohamed, A.A.; Kamel, S.; Hassan, M.H.; Mosaad, M.I.; Aljohani, M. Optimal Power Flow Analysis Based on Hybrid Gradient-Based Optimizer with Moth–Flame Optimization Algorithm Considering Optimal Placement and Sizing of FACTS/Wind Power. Mathematics 2022, 10, 361.
  19. Zhang, M.; Gong, M.; Chan, Y. Hyperspectral band selection based on multi-objective optimization with high information and low redundancy. Appl. Soft Comput. 2018, 70, 604–621.
  20. Breiman, L. Random forests. Mach. Learn. 2001, 45, 5–32.
  21. Zhang, M.L.; Zhou, Z.H. ML-KNN: A lazy learning approach to multi-label learning. Pattern Recognit. 2007, 40, 2038–2048.
  22. Anand, R.; Veni, S.; Aravinth, J. Robust Classification Technique for Hyperspectral Images Based on 3D-Discrete Wavelet Transform. Remote Sens. 2021, 13, 1255.
  23. Gualtieri, J.A.; Cromp, R.F. Support vector machines for hyperspectral remote sensing classification. In Proceedings of the 27th AIPR Workshop: Advances in Computer-Assisted Recognition, Washington, DC, USA, 14–16 October 1998; Volume 3584, pp. 221–232.
  24. Gualtieri, J.A.; Chettri, S.R.; Cromp, R.F.; Johnson, L.F. Support vector machine classifiers as applied to AVIRIS data. In Proceedings of the Eighth JPL Airborne Geoscience Workshop, Pasadena, CA, USA, 9–11 February 1999; pp. 217–227.
  25. Houshmand, B.; Gamba, P. Integration of High-Resolution Multispectral Imagery with Lidar and IFSAR Data for Urban Analysis Applications. Int. Arch. Photogramm. Remote Sens. 1999, 32, 111–117.
  26. Reshma, R.; Sowmya, V.; Soman, K.P. Dimensionality reduction using band selection technique for kernel based hyperspectral image classification. Procedia Comput. Sci. 2016, 93, 396–402.
  27. Haridas, N.; Sowmya, V.; Soman, K.P. Gurls vs libsvm: Performance comparison of kernel methods for hyperspectral image classification. Indian J. Sci. Technol. 2015, 8, 1.
  28. Xu, M.; Shi, J.; Chen, W.; Shen, J.; Gao, H.; Zhao, J. A band selection method for hyperspectral image based on particle swarm optimization algorithm with dynamic sub-swarms. J. Signal Process. Syst. 2018, 90, 1269–1279.
  29. Sawant, S.; Manoharan, P. A hybrid optimization approach for hyperspectral band selection based on wind driven optimization and modified cuckoo search optimization. Multimed. Tools Appl. 2021, 80, 1725–1748.
  30. Nagasubramanian, K.; Jones, S.; Sarkar, S.; Singh, A.K.; Singh, A.; Ganapathysubramanian, B. Hyperspectral band selection using genetic algorithm and support vector machines for early identification of charcoal rot disease in soybean stems. Plant Methods 2018, 14, 1–13.
  31. Su, H.; Du, Q.; Chen, G.; Du, P. Optimised hyperspectral band selection using particle swarm optimization. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2014, 7, 2659–2670.
  32. Wen, G.; Zhang, C.; Lin, Z.; Xu, Y. Band selection based on genetic algorithms for classification of hyperspectral data. In Proceedings of the 2016 9th International Congress on Image and Signal Processing, BioMedical Engineering, and Informatics (CISP-BMEI), Datong, China, 15–17 October 2016; pp. 1173–1177.
  33. Shao, S. An improved cuckoo search-based adaptive band selection for hyperspectral image classification. Eur. J. Remote Sens. 2020, 53, 211–218.
  34. Mirjalili, S. Moth-flame optimization algorithm: A novel nature-inspired heuristic paradigm. Knowl.-Based Syst. 2015, 89, 228–249.
Figure 1. Schematic interpretation of the proposed method.
Figure 2. Proposed band selection system with the MFO algorithm.
Figure 3. Ground truth map of three different benchmark airborne hyperspectral remote sensing scenes. (a) Indian Pines Ground Truth Hyperspectral Data, (b) Salinas Ground Truth Hyperspectral Data, (c) Pavia University Scene Ground Truth Hyperspectral Data.
Figure 4. Classification maps of Indian Pines hyperspectral data: (a) PSO-RF, (b) PSO-KNN, (c) PSO-SVM, (d) GAO-RF, (e) GAO-KNN, (f) GAO-SVM, (g) CSO-RF, (h) CSO-KNN, (i) CSO-SVM, (j) MFO-RF, (k) MFO-KNN, (l) MFO-SVM, (m) spectral reflectance of the original bands for the corn class, and (n) spectral reflectance of the MFO-selected bands for the corn class.
Figure 5. Classification maps of University of Pavia hyperspectral data: (a) PSO-RF, (b) PSO-KNN, (c) PSO-SVM, (d) GAO-RF, (e) GAO-KNN, (f) GAO-SVM, (g) CSO-RF, (h) CSO-KNN, (i) CSO-SVM, (j) MFO-RF, (k) MFO-KNN, and (l) MFO-SVM.
Figure 6. Classification maps of Salinas hyperspectral data: (a) PSO-RF, (b) PSO-KNN, (c) PSO-SVM, (d) GAO-RF, (e) GAO-KNN, (f) GAO-SVM, (g) CSO-RF, (h) CSO-KNN, (i) CSO-SVM, (j) MFO-RF, (k) MFO-KNN, and (l) MFO-SVM.
Figure 7. Quantitative analysis of selected bands’ average entropy for the three datasets with different optimization techniques.
Figure 8. Quantitative analysis of selected bands’ mean spectral divergence for the three datasets with different optimization techniques.
Figure 9. Proposed MFO convergence curve analysis over the number of iterations: (a) Indian Pines, (b) University of Pavia, and (c) Salinas.
Table 1. Band selection using different optimization techniques.

Optimization Technique | Airborne Hyperspectral Dataset | Non-Optimized Bands | Number of Non-Optimized Bands
MFO | Indian Pines | [0, 1, 2, 4, 5, 6, 9, 57, 76, 79, 101, 102, 104, 141, 143, 191] | 16
MFO | Salinas | [0, 1, 2, 3, 4, 5, 103, 104, 105, 106, 108, 146, 147] | 13
MFO | Pavia University | [0, 1, 2, 3, 6, 12, 15, 24, 26, 85, 94] | 11
PSO | Indian Pines | [0, 1, 2, 3, 4, 5, 6, 9, 57, 76, 79, 101, 102, 103, 104, 143, 144, 145, 191] | 19
PSO | Salinas | [0, 1, 2, 3, 4, 16, 60, 79, 80, 105, 146, 155, 152] | 13
PSO | Pavia University | [0, 1, 3, 4, 5, 6, 11, 12, 24, 27, 44, 85, 94, 98, 101] | 15
GAO | Indian Pines | [0, 1, 2, 4, 5, 6, 7, 9, 16, 33, 41, 47, 57, 75, 82, 101, 104, 113, 124, 139, 144, 186, 181, 183] | 24
GAO | Salinas | [0, 1, 2, 3, 4, 5, 6, 8, 31, 56, 102, 103, 106, 108, 144, 145, 157, 159, 161] | 19
GAO | Pavia University | [0, 1, 2, 3, 4, 5, 8, 12, 14, 15, 22, 26, 28, 36, 68, 86, 91, 101] | 18
CSO | Indian Pines | [0, 1, 4, 5, 7, 19, 41, 76, 77, 84, 89, 95, 100, 101, 102, 104, 143, 144, 174, 175, 184, 192] | 22
CSO | Salinas | [0, 1, 2, 3, 4, 5, 8, 9, 21, 34, 46, 82, 86, 103, 104, 105, 107, 145, 146, 147] | 20
CSO | Pavia University | [0, 1, 2, 3, 4, 5, 6, 11, 14, 24, 26, 56, 57, 71, 84, 86, 94] | 17
Table 2. Details of the datasets—Indian Pines, University of Pavia, and Salinas—used in this study.

Indian Pines Hyperspectral Data
Class Name | Training Samples | Test Samples | Total Samples
Alfalfa | 9 | 36 | 45
Corn-notill | 283 | 1130 | 1413
Corn-mintill | 169 | 674.4 | 843
Corn | 45 | 180 | 225
Grass-pasture | 95 | 378.4 | 473
Grasstrees | 143 | 570.4 | 713
Grass-pasture-mowed | 6 | 24 | 30
Hay-windrowed | 91 | 364 | 455
Oats | 3 | 10.4 | 13
Soybean-notill | 206 | 822.4 | 1028
Soybean-mintill | 484 | 1936 | 2420
Soybean-clean | 123 | 490.4 | 613
Wheat | 41 | 164 | 205
Woods | 259 | 1036 | 1295
Buildings Grass Trees Drives | 78 | 312 | 390
Stone Steel Towers | 19 | 74.4 | 93

University of Pavia Hyperspectral Data
Class Name | Training Samples | Test Samples | Total Samples
Asphalt | 1314 | 5256 | 6570
Meadows | 3725 | 14,898.4 | 18,623
Bitumen | 420 | 1680 | 2100
Gravel | 619 | 2476 | 3095
Bare Soil | 286 | 1144 | 1430
Painted metal sheets | 1008 | 4030.4 | 5038
Self-Blocking Bricks | 269 | 1076 | 1345
Shadows | 729 | 2916 | 3645
Trees | 187 | 746.4 | 933

Salinas Scene Hyperspectral Data
Class Name | Training Samples | Test Samples | Total Samples
Broccoli green weeds 1 | 419 | 1676 | 2095
Broccoli green weeds 2 | 737 | 2948 | 3685
Fallow | 390 | 1558 | 1948
Fallow rough plow | 285 | 1140 | 1425
Fallow smooth | 559 | 2236 | 2795
Stubble | 817 | 3268 | 4085
Celery | 703 | 2810 | 3513
Grapes untrained | 2205 | 8820 | 11,025
Soil vineyard develop | 1231 | 4924 | 6155
Corn senesced green weeds | 659 | 2634 | 3293
Lettuce romaine 4 wk | 199 | 794 | 993
Lettuce romaine 5 wk | 383 | 1530 | 1913
Lettuce romaine 6 wk | 176 | 702 | 878
Lettuce romaine 7 wk | 207 | 826 | 1033
Vineyard untrained | 1482 | 5928 | 7410
Vineyard vertical trellis | 378 | 1510 | 1888
Table 3. Comparison of classwise accuracies (in %) of MFO with the state-of-the-art methods on Indian Pines using RF, KNN, and SVM.

Class Name | RF (PSO, GAO, CSO, MFO) | KNN (PSO, GAO, CSO, MFO) | SVM (PSO, GAO, CSO, MFO)
Alfalfa | 61.11, 55.56, 66.67, 61.11 | 55.56, 55.56, 61.11, 72.22 | 94.44, 94.44, 88.89, 100
Corn-notill | 76.81, 77.88, 78.05, 76.46 | 70.09, 68.85, 70.80, 69.91 | 80.35, 80.71, 81.77, 80.18
Corn-mintill | 63.20, 56.97, 61.42, 63.20 | 60.53, 58.46, 59.94, 60.53 | 81.01, 81.90, 82.20, 81.31
Corn | 37.78, 33.33, 34.44, 36.67 | 37.78, 41.11, 50, 37.78 | 82.22, 86.67, 81.11, 82.22
Grass-pasture | 88.36, 95.24, 93.12, 91.53 | 92.06, 92.59, 92.59, 92.06 | 96.30, 96.83, 96.83, 96.85
Grasstrees | 97.89, 98.25, 97.54, 97.54 | 97.19, 96.84, 97.54, 97.19 | 97.89, 97.54, 97.82, 100
Grass-pasture-mowed | 58.33, 75, 50, 66.67 | 75, 66.67, 75, 100 | 83.33, 83.33, 83.33, 100
Hay-windrowed | 99.45, 100, 99.45, 100 | 98.35, 97.80, 98.90, 100.55 | 100, 99.45, 99.45, 100
Oats | 100, 60, 40, 60 | 80, 60, 80, 100 | 100, 100, 100, 100
Soybean-notill | 76.89, 76.64, 77.86, 78.35 | 80.78, 80.78, 82.24, 80.78 | 85.16, 84.43, 81.75, 84.18
Soybean-mintill | 90.81, 89.88, 90.19, 90.70 | 79.96, 79.75, 79.75, 80.06 | 89.67, 90.08, 90.08, 89.77
Soybean-clean | 75.92, 75.10, 74.29, 74.29 | 48.98, 48.98, 49.80, 48.98 | 87.76, 86.12, 85.71, 88.16
Wheat | 97.56, 97.56, 97.56, 98.78 | 95.12, 93.90, 96.34, 95.12 | 98.78, 97.56, 98.78, 100
Woods | 98.26, 97.30, 96.91, 97.68 | 93.44, 94.21, 93.44, 93.44 | 96.72, 97.10, 97.10, 96.72
Buildings Grass Trees Drives | 62.18, 58.97, 61.54, 58.33 | 39.10, 34.62, 38.46, 39.10 | 78.21, 73.72, 76.92, 78.21
Stone Steel Towers | 89.19, 89.19, 89.19, 89.19 | 91.89, 89.19, 91.89, 91.89 | 86.49, 91.89, 86.49, 86.49
Table 4. Comparison of classwise accuracies (in %) of MFO with the state-of-the-art methods on the University of Pavia dataset using RF, KNN, and SVM.

Class Name | RF (PSO, GAO, CSO, MFO) | KNN (PSO, GAO, CSO, MFO) | SVM (PSO, GAO, CSO, MFO)
Asphalt | 95.81, 96.84, 95.81, 96.12 | 91.48, 91.36, 91.40, 95.81 | 95.81, 95.74, 91.48, 96.69
Meadows | 98.13, 98.34, 98.44, 98.46 | 98.35, 98.43, 98.32, 98.43 | 98.43, 98.50, 98.35, 98.67
Bitumen | 78.45, 73.33, 78.69, 71.19 | 75.83, 77.26, 75.60, 78.45 | 78.45, 78.81, 75.83, 76.43
Gravel | 96.93, 94.10, 97.25, 94.18 | 89.10, 89.34, 89.26, 96.93 | 96.93, 97.50, 89.10, 94.10
Bare Soil | 100, 99.83, 100, 100 | 99.65, 99.65, 99.48, 100 | 100, 100, 99.65, 100
Painted metal sheets | 92.31, 88.68, 91.36, 86.55 | 76.77, 77.57, 76.43, 92.31 | 92.31, 91.56, 76.77, 86.90
Self-Blocking Bricks | 86.80, 76.21, 86.62, 77.70 | 88.66, 88.10, 87.36, 86.80 | 86.80, 86.43, 88.66, 86.80
Shadows | 93.07, 93.14, 92.87, 92.59 | 88.82, 90.19, 88.89, 93.07 | 93.07, 93.07, 88.82, 93.21
Trees | 100, 100, 100, 100 | 100, 100, 100, 100 | 100, 100, 100, 100
Table 5. Comparison of classwise accuracies (in %) of MFO with the state-of-the-art methods on the Salinas Scene dataset using RF, KNN, and SVM.

Class Name | RF (PSO, GAO, CSO, MFO) | KNN (PSO, GAO, CSO, MFO) | SVM (PSO, GAO, CSO, MFO)
Broccoli green weeds 1 | 99.16, 99.18, 99.18, 99.18 | 98.69, 98.69, 98.69, 98.69 | 99.14, 98.01, 98.17, 99.52
Broccoli green weeds 2 | 100, 99.93, 99.93, 100 | 99.73, 99.73, 99.73, 99.73 | 100, 100, 100, 100
Fallow | 100, 99.87, 100, 99.87 | 100, 100, 100, 100 | 100, 100, 100, 100
Fallow rough plow | 99.82, 99.82, 99.82, 99.82 | 99.82, 99.82, 99.82, 99.82 | 100, 100, 100, 100
Fallow smooth | 99.73, 99.55, 99.73, 99.55 | 98.57, 98.57, 98.48, 98.48 | 99.46, 99.55, 99.46, 99.55
Stubble | 99.94, 98.86, 98.17, 98.41 | 99.76, 99.76, 99.76, 99.76 | 98.12, 99.87, 99.13, 99.94
Celery | 98.32, 99.71, 99.64, 99.72 | 99.36, 99.29, 99.36, 99.36 | 99.72, 99.72, 99.72, 99.72
Grapes untrained | 93.40, 93.58, 93.56, 93.58 | 85.78, 85.60, 85.56, 85.78 | 90.77, 90.86, 90.77, 91.02
Soil vineyard develop | 99.96, 100, 99.96, 99.96 | 99.59, 99.63, 99.63, 99.63 | 99.92, 99.92, 99.92, 99.92
Corn senesced green weeds | 98.48, 98.86, 98.25, 98.24 | 94.38, 94.31, 94.46, 94.31 | 98.71, 98.48, 98.63, 98.63
Lettuce romaine 4 wk | 98.49, 99.50, 98.74, 98.99 | 97.98, 97.98, 97.98, 97.98 | 99.24, 98.49, 98.74, 98.49
Lettuce romaine 5 wk | 100, 100, 100, 100 | 99.87, 99.87, 100, 99.87 | 100, 100, 100, 100
Lettuce romaine 6 wk | 98.01, 98.01, 98.01, 98.58 | 97.72, 97.72, 97.72, 97.72 | 99.43, 99.43, 99.43, 99.43
Lettuce romaine 7 wk | 98.55, 98.31, 98.31, 97.82 | 95.40, 95.40, 95.40, 95.40 | 98.14, 97.69, 96.13, 98.79
Vineyard untrained | 78.58, 78.37, 77.63, 77.67 | 69.80, 68.93, 69.47, 69.43 | 71.29, 70.72, 70.38, 70.82
Vineyard vertical trellis | 98.94, 98.94, 98.94, 98.94 | 98.54, 98.41, 98.41, 98.41 | 99.07, 98.94, 99.07, 98.94
Table 6. Overall and average accuracy measures and Kappa coefficient measure for all the classifiers with comparative analysis of state-of-the-art approaches for Indian Pines data.

Measure | RF (PSO, GAO, CSO, MFO) | KNN (PSO, GAO, CSO, MFO) | SVM (PSO, GAO, CSO, MFO)
Overall Accuracy | 83.68, 83.02, 83.41, 83.54 | 77.32, 76.80, 77.88, 77.51 | 88.90, 88.93, 88.78, 88.98
Average Accuracy | 79.61, 77.30, 75.51, 77.18 | 74.74, 72.46, 76.11, 77.68 | 89.90, 90.11, 89.27, 91.37
Kappa Coefficient | 0.81, 0.81, 0.81, 0.81 | 0.74, 0.74, 0.75, 0.75 | 0.87, 0.88, 0.87, 0.88
Table 7. Overall and average accuracy measures and Kappa coefficient measure for all the classifiers with comparative analysis of state-of-the-art approaches for University of Pavia data.

Measure | RF (PSO, GAO, CSO, MFO) | KNN (PSO, GAO, CSO, MFO) | SVM (PSO, GAO, CSO, MFO)
Overall Accuracy | 93.98, 94.38, 93.99, 93.98 | 91.94, 92.24, 91.84, 94.62 | 95.48, 95.46, 95.39, 96.94
Average Accuracy | 90.75, 91.16, 90.64, 90.75 | 89.85, 90.21, 89.64, 92.49 | 93.53, 93.51, 93.45, 94.85
Kappa Coefficient | 0.92, 0.93, 0.92, 0.92 | 0.89, 0.90, 0.89, 0.93 | 0.94, 0.94, 0.94, 0.89
Table 8. Overall and average accuracy measures and Kappa coefficient measure for all the classifiers with comparative analysis of state-of-the-art approaches for Salinas Scene data.

Measure | RF (PSO, GAO, CSO, MFO) | KNN (PSO, GAO, CSO, MFO) | SVM (PSO, GAO, CSO, MFO)
Overall Accuracy | 95.45, 95.49, 95.34, 95.39 | 92.16, 91.99, 92.07, 92.10 | 93.95, 93.82, 93.87, 93.92
Average Accuracy | 97.71, 97.77, 97.65, 97.70 | 95.94, 95.86, 95.90, 95.90 | 97.24, 97.15, 97.15, 97.17
Kappa Coefficient | 0.95, 0.95, 0.95, 0.95 | 0.91, 0.91, 0.91, 0.91 | 0.93, 0.93, 0.93, 0.93
Table 9. Runtime (in seconds) needed for the datasets and methods.

Dataset | Optimization Technique | Random Forest | K-Nearest Neighbor | Support Vector Machine
Indian Pines | MFO | 165 | 184 | 206
Indian Pines | CSO | 188 | 172 | 195
Indian Pines | GAO | 348 | 647 | 647
Indian Pines | PSO | 557 | 628 | 722
University of Pavia | MFO | 714 | 768 | 802
University of Pavia | CSO | 644 | 704 | 785
University of Pavia | GAO | 984 | 1084 | 1102
University of Pavia | PSO | 1023 | 1069 | 1248
Salinas Scene | MFO | 897 | 831 | 1046
Salinas Scene | CSO | 974 | 1003 | 1086
Salinas Scene | GAO | 1142 | 1340 | 1424
Salinas Scene | PSO | 1267 | 2018 | 2123
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Share and Cite

Anand, R.; Samiaappan, S.; Veni, S.; Worch, E.; Zhou, M. Airborne Hyperspectral Imagery for Band Selection Using Moth–Flame Metaheuristic Optimization. J. Imaging 2022, 8, 126. https://doi.org/10.3390/jimaging8050126
