An Automated Machine Learning Framework for Adaptive and Optimized Hyperspectral-Based Land Cover and Land-Use Segmentation
Abstract
1. Introduction
2. Materials and Methods
2.1. Framework Overview
2.2. Model Pipeline Configuration
2.2.1. Data Preprocessing and Feature Engineering
- Feature transformation (FT): In this study, we employed two main feature transformation techniques, normalization (min–max scaling) and standardization (standard scaling), owing to their effectiveness in enhancing classification accuracy. Normalization is suitable for datasets with small standard deviations and non-normal feature distributions, while standardization transforms the data toward a normal distribution, improving convergence and classification performance. Other scaling techniques, such as the maximum-absolute and robust scalers, were excluded given the characteristics of the dataset, and the quantile and power transformers were deemed unsuitable for the study’s purposes [35,36,37] (a minimal scaling sketch is given after this list).
- Feature selection (FS): Automatic feature selection approaches typically combine a feature-subset search method with an evaluation technique that ranks features by their correlation with, or importance for, the predictive task. These approaches fall into three categories: Wrappers, Filters, and Embedded methods [38]. Wrappers use a predictor to assess the usefulness of candidate feature subsets and are therefore prone to overfitting; Filters rely on correlation coefficients or mutual information among features, but may fail to find the best feature set when the available sample size is insufficient. Embedded methods integrate the feature-subset generator into model training, which reduces overfitting and increases efficiency (an RFECV-based selection sketch is given after this list).
- Feature extraction (FE): Feature extraction reduces dimensionality by transforming the data from a high-dimensional feature space into a lower-dimensional one. Unlike feature selection, which discards certain features, feature extraction summarizes the information, highlighting important details while suppressing less relevant ones. Although convolutional neural networks excel at feature extraction, their computational complexity and demand for extensive training data pose challenges, in line with our concerns regarding end-to-end pipelines. In this study, we therefore employed alternative feature extraction techniques: Principal Component Analysis (PCA), Kernel Principal Component Analysis (KPCA), Independent Component Analysis (ICA), Linear Discriminant Analysis (LDA), Kernel Fisher’s Discriminant Analysis (KFDA), and Locally Linear Embedding (LLE) (a PCA/KPCA sketch is given after this list).
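To make the FT step concrete, the following is a minimal sketch of the two scalers using scikit-learn; the random pixel-by-band matrix `X` is an illustrative placeholder, not the study’s actual data handling.

```python
import numpy as np
from sklearn.preprocessing import MinMaxScaler, StandardScaler

# Illustrative placeholder: rows are pixels, columns are spectral bands.
rng = np.random.default_rng(0)
X = rng.random((1000, 200))

# Normalization: rescale each band to the [0, 1] range.
X_minmax = MinMaxScaler().fit_transform(X)

# Standardization: zero mean and unit variance per band ("std" in the
# results tables below). In practice, fit the scaler on training pixels
# only and reuse it to transform validation/test pixels.
X_std = StandardScaler().fit_transform(X)
```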
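For FS, the results tables below report RFECV (recursive feature elimination with cross-validation) around different inner models. The following is a hedged sketch of one such configuration, RFECV around an L1-penalized logistic regression (“RFECV-LR-l1” in the tables); the synthetic data and parameter values are placeholders rather than the settings used in our experiments.

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import RFECV
from sklearn.linear_model import LogisticRegression

# Placeholder data standing in for standardized hyperspectral pixels.
X, y = make_classification(n_samples=600, n_features=60,
                           n_informative=12, random_state=0)

# Recursively drop the weakest features according to the inner model's
# coefficients, using cross-validation to pick the subset size.
inner = LogisticRegression(penalty="l1", solver="liblinear", C=1.0)
selector = RFECV(inner, step=1, cv=3).fit(X, y)

print("selected features:", selector.n_features_)
X_reduced = selector.transform(X)
```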
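Likewise, a sketch of the FE step with two representative extractors, PCA and KPCA, chained after standardization; `n_components` is fixed here for illustration, whereas the framework tunes it per pipeline (the n_comp. column in the results tables).

```python
import numpy as np
from sklearn.decomposition import PCA, KernelPCA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Illustrative pixel-by-band matrix.
rng = np.random.default_rng(0)
X = rng.random((1000, 200))

# Standardize, then project onto a lower-dimensional subspace.
pca = make_pipeline(StandardScaler(), PCA(n_components=30))
kpca = make_pipeline(StandardScaler(), KernelPCA(n_components=30, kernel="rbf"))

X_pca = pca.fit_transform(X)    # linear projection
X_kpca = kpca.fit_transform(X)  # nonlinear (RBF kernel) projection
```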
2.2.2. Core Image Segmentation Task
2.3. Optimization Strategies
2.4. Implementation
2.5. Testing Dataset
3. Results
3.1. Part 1: Feature Transformation–Selection
3.2. Part 2: The Whole Framework
4. Discussion
5. Conclusions
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
References
1. De Leeuw, J.; Georgiadou, Y.; Kerle, N.; De Gier, A.; Inoue, Y.; Ferwerda, J.; Smies, M.; Narantuya, D. The function of remote sensing in support of environmental policy. Remote Sens. 2010, 2, 1731–1750.
2. Zhong, Z.; Li, J.; Luo, Z.; Chapman, M. Spectral–spatial residual network for hyperspectral image classification: A 3-D deep learning framework. IEEE Trans. Geosci. Remote Sens. 2017, 56, 847–858.
3. Goetz, A.F. Three decades of hyperspectral remote sensing of the Earth: A personal view. Remote Sens. Environ. 2009, 113, S5–S16.
4. Camps-Valls, G.; Tuia, D.; Bruzzone, L.; Benediktsson, J.A. Advances in hyperspectral image classification: Earth monitoring with statistical learning methods. IEEE Signal Process. Mag. 2013, 31, 45–54.
5. Reichstein, M.; Camps-Valls, G.; Stevens, B.; Jung, M.; Denzler, J.; Carvalhais, N.; Prabhat. Deep learning and process understanding for data-driven Earth system science. Nature 2019, 566, 195–204.
6. Garcia-Garcia, A.; Orts-Escolano, S.; Oprea, S.; Villena-Martinez, V.; Garcia-Rodriguez, J. A review on deep learning techniques applied to semantic segmentation. arXiv 2017, arXiv:1704.06857.
7. Vali, A.; Comai, S.; Matteucci, M. Deep learning for land use and land cover classification based on hyperspectral and multispectral earth observation data: A review. Remote Sens. 2020, 12, 2495.
8. Bellman, R. Dynamic programming. Science 1966, 153, 34–37.
9. Jia, W.; Sun, M.; Lian, J.; Hou, S. Feature dimensionality reduction: A review. Complex Intell. Syst. 2022, 8, 2663–2693.
10. Signoroni, A.; Savardi, M.; Baronio, A.; Benini, S. Deep learning meets hyperspectral image analysis: A multidisciplinary review. J. Imaging 2019, 5, 52.
11. Liu, P.; Choo, K.K.R.; Wang, L.; Huang, F. SVM or deep learning? A comparative study on remote sensing image classification. Soft Comput. 2017, 21, 7053–7065.
12. Yu, X.; Wu, X.; Luo, C.; Ren, P. Deep learning in remote sensing scene classification: A data augmentation enhanced convolutional neural network framework. GIScience Remote Sens. 2017, 54, 741–758.
13. Triguero, I.; García, S.; Herrera, F. Self-labeled techniques for semi-supervised learning: Taxonomy, software and empirical study. Knowl. Inf. Syst. 2015, 42, 245–284.
14. Han, W.; Feng, R.; Wang, L.; Cheng, Y. A semi-supervised generative framework with deep learning features for high-resolution remote sensing image scene classification. ISPRS J. Photogramm. Remote Sens. 2018, 145, 23–43.
15. Marmanis, D.; Datcu, M.; Esch, T.; Stilla, U. Deep learning earth observation classification using ImageNet pretrained networks. IEEE Geosci. Remote Sens. Lett. 2015, 13, 105–109.
16. Chen, Z.; Zhang, T.; Ouyang, C. End-to-end airplane detection using transfer learning in remote sensing images. Remote Sens. 2018, 10, 139.
17. Hong, D.; Yokoya, N.; Xia, G.S.; Chanussot, J.; Zhu, X.X. X-ModalNet: A semi-supervised deep cross-modal network for classification of remote sensing data. ISPRS J. Photogramm. Remote Sens. 2020, 167, 12–23.
18. Jiang, T.; Gradus, J.L.; Rosellini, A.J. Supervised machine learning: A brief primer. Behav. Ther. 2020, 51, 675–687.
19. Zhou, L.; Pan, S.; Wang, J.; Vasilakos, A.V. Machine learning on big data: Opportunities and challenges. Neurocomputing 2017, 237, 350–361.
20. Moore, G.E. Cramming more components onto integrated circuits. Proc. IEEE 1998, 86, 82–85.
21. Theis, T.N.; Wong, H.S.P. The end of Moore’s law: A new beginning for information technology. Comput. Sci. Eng. 2017, 19, 41–50.
22. Gholami, A.; Yao, Z.; Kim, S.; Hooper, C.; Mahoney, M.W.; Keutzer, K. AI and memory wall. arXiv 2024, arXiv:2403.14123.
23. Shalf, J. The future of computing beyond Moore’s Law. Philos. Trans. R. Soc. A 2020, 378, 20190061.
24. Lundstrom, M.S.; Alam, M.A. Moore’s law: The journey ahead. Science 2022, 378, 722–723.
25. Li, S.; Song, W.; Fang, L.; Chen, Y.; Ghamisi, P.; Benediktsson, J.A. Deep learning for hyperspectral image classification: An overview. IEEE Trans. Geosci. Remote Sens. 2019, 57, 6690–6709.
26. Feurer, M.; Klein, A.; Eggensperger, K.; Springenberg, J.; Blum, M.; Hutter, F. Efficient and robust automated machine learning. Adv. Neural Inf. Process. Syst. 2015, 28.
27. Hutter, F.; Kotthoff, L.; Vanschoren, J. Automated Machine Learning: Methods, Systems, Challenges; Springer: Berlin/Heidelberg, Germany, 2019.
28. Baratchi, M.; Wang, C.; Limmer, S.; van Rijn, J.N.; Hoos, H.; Bäck, T.; Olhofer, M. Automated machine learning: Past, present and future. Artif. Intell. Rev. 2024, 57, 1–88.
29. Wolpert, D.H. The lack of a priori distinctions between learning algorithms. Neural Comput. 1996, 8, 1341–1390.
30. Wolpert, D.H.; Macready, W.G. Coevolutionary free lunches. IEEE Trans. Evol. Comput. 2005, 9, 721–735.
31. Zhang, C.; Bengio, S.; Hardt, M.; Recht, B.; Vinyals, O. Understanding deep learning requires rethinking generalization. arXiv 2016, arXiv:1611.03530.
32. Kawaguchi, K.; Kaelbling, L.P.; Bengio, Y. Generalization in deep learning. arXiv 2017, arXiv:1710.05468.
33. Saxe, A.M.; Bansal, Y.; Dapello, J.; Advani, M.; Kolchinsky, A.; Tracey, B.D.; Cox, D.D. On the information bottleneck theory of deep learning. J. Stat. Mech. Theory Exp. 2019, 2019, 124020.
34. Dinh, L.; Pascanu, R.; Bengio, S.; Bengio, Y. Sharp minima can generalize for deep nets. In Proceedings of the 34th International Conference on Machine Learning (ICML), Sydney, Australia, 6–11 August 2017; Volume 70, pp. 1019–1028.
35. Steinbrecher, G.; Shaw, W.T. Quantile mechanics. Eur. J. Appl. Math. 2008, 19, 87–112.
36. Zheng, A.; Casari, A. Feature Engineering for Machine Learning: Principles and Techniques for Data Scientists; O’Reilly Media, Inc.: Sebastopol, CA, USA, 2018.
37. Kuhn, M.; Johnson, K. Feature Engineering and Selection: A Practical Approach for Predictive Models; Chapman and Hall/CRC: Boca Raton, FL, USA, 2019.
38. Venkatesh, B.; Anuradha, J. A review of feature selection and its methods. Cybern. Inf. Technol. 2019, 19, 3–26.
39. Sung, J.; Han, S.; Park, H.; Hwang, S.; Lee, S.J.; Park, J.W.; Youn, I. Classification of stroke severity using clinically relevant symmetric gait features based on recursive feature elimination with cross-validation. IEEE Access 2022, 10, 119437–119447.
40. Misra, P.; Yadav, A.S. Improving the classification accuracy using recursive feature elimination with cross-validation. Int. J. Emerg. Technol. 2020, 11, 659–665.
41. Altmann, A.; Toloşi, L.; Sander, O.; Lengauer, T. Permutation importance: A corrected feature importance measure. Bioinformatics 2010, 26, 1340–1347.
42. Abdi, H.; Williams, L.J. Principal component analysis. Wiley Interdiscip. Rev. Comput. Stat. 2010, 2, 433–459.
43. Schölkopf, B.; Smola, A.; Müller, K.R. Nonlinear component analysis as a kernel eigenvalue problem. Neural Comput. 1998, 10, 1299–1319.
44. Schölkopf, B.; Smola, A.; Müller, K.R. Kernel principal component analysis. In Proceedings of the International Conference on Artificial Neural Networks, Lausanne, Switzerland, 8–10 October 1997; pp. 583–588.
45. Hyvärinen, A. Fast and robust fixed-point algorithms for independent component analysis. IEEE Trans. Neural Netw. 1999, 10, 626–634.
46. Hastie, T.; Tibshirani, R.; Friedman, J.H. The Elements of Statistical Learning: Data Mining, Inference, and Prediction; Springer: Berlin/Heidelberg, Germany, 2017; Volume 2.
47. Ghojogh, B.; Karray, F.; Crowley, M. Fisher and kernel Fisher discriminant analysis: Tutorial. arXiv 2019, arXiv:1906.09436.
48. Roweis, S.T.; Saul, L.K. Nonlinear dimensionality reduction by locally linear embedding. Science 2000, 290, 2323–2326.
49. Jia, S.; Deng, B.; Zhu, J.; Jia, X.; Li, Q. Local binary pattern-based hyperspectral image classification with superpixel guidance. IEEE Trans. Geosci. Remote Sens. 2017, 56, 749–759.
50. He, L.; Li, J.; Plaza, A.; Li, Y. Discriminative low-rank Gabor filtering for spectral–spatial hyperspectral image classification. IEEE Trans. Geosci. Remote Sens. 2016, 55, 1381–1395.
51. Li, Y.; Zhang, H.; Shen, Q. Spectral–spatial classification of hyperspectral imagery with 3D convolutional neural network. Remote Sens. 2017, 9, 67.
52. Baumgardner, M.F.; Biehl, L.L.; Landgrebe, D.A. 220 Band AVIRIS Hyperspectral Image Data Set: June 12, 1992 Indian Pine Test Site 3; Purdue University Research Repository, 2015; doi:10.4231/R7RX991C.
53. Hyperspectral Images. Available online: https://engineering.purdue.edu/~biehl/MultiSpec/hyperspectral.html (accessed on 1 July 2024).
54. Vali, A. Hyperspectral Image Analysis and Advanced Feature Engineering for Optimized Classification and Data Acquisition. Ph.D. Thesis, Politecnico di Milano, Milan, Italy, 2022.
| Feature Extractor (Type) | n_comp. | FT | Best Feature Selector (Inner Model) | n_feat. | Fold | Best C | CV-Score Mean | CV-Score std | Fit Time Mean (ms) | Fit Time std (ms) | Score Time Mean (ms) | Score Time std (ms) | Final Train Acc. | Final Test Acc. | Mean Test Acc. |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| None | - | std | RFECV-LR-l1 | 144 | 1 | 100 | 0.9192 | 0.0013 | 7.4338 | 0.5305 | 8.1113 | 0.1113 | 0.9867 | 0.9309 | 0.9300 |
| None | - | std | RFECV-SVM | 152 | 2 | 100 | 0.9191 | 0.0031 | 6.8188 | 0.2807 | 4.2570 | 0.2871 | 0.9792 | 0.9254 | |
| None | - | std | RFECV-LR-l1 | 144 | 3 | 100 | 0.9195 | 0.0039 | 12.8243 | 1.9853 | 9.6307 | 0.4263 | 0.9772 | 0.9338 | |
| PCA | 140 | std | RFECV-LR-l1 | 144 | 1 | 100 | 0.9210 | 0.0049 | 3.6325 | 1.0745 | 3.3119 | 0.4358 | 0.9862 | 0.9309 | 0.9291 |
| PCA | 150 | std | RFECV-SVM | 152 | 2 | 100 | 0.9199 | 0.0052 | 6.1206 | 0.2424 | 6.0043 | 1.1208 | 0.9791 | 0.9242 | |
| PCA | 82 | std | RFECV-LR-l1 | 144 | 3 | 1000 | 0.9191 | 0.0008 | 4.7658 | 1.0875 | 7.0312 | 4.4078 | 0.9955 | 0.9321 | |
| KPCA (kernel = poly) | 90 | std | RFECV-RF | 91 | 1 | 10,000 | 0.8940 | 0.0045 | 42.4327 | 7.1176 | 4.2325 | 0.9557 | 0.9873 | 0.9183 | 0.9136 |
| KPCA (kernel = poly) | 148 | std | RFECV-SVM | 152 | 2 | 10,000 | 0.8924 | 0.0032 | 40.3297 | 2.7449 | 5.2146 | 0.3618 | 0.9621 | 0.9057 | |
| KPCA (kernel = poly) | 148 | std | RFECV-SVM | 152 | 3 | 10,000 | 0.8882 | 0.0007 | 29.8639 | 1.4377 | 4.0141 | 0.1835 | 0.9753 | 0.9166 | |
| KPCA (kernel = rbf) | 150 | std | RFECV-SVM | 152 | 1 | 10,000 | 0.8858 | 0.0118 | 22.7631 | 1.7372 | 2.6080 | 0.3169 | 0.9985 | 0.9204 | 0.9200 |
| KPCA (kernel = rbf) | 152 | std | RFECV-SVM | 152 | 2 | 10,000 | 0.8872 | 0.0065 | 39.4927 | 2.5375 | 4.0813 | 0.5238 | 0.9975 | 0.9177 | |
| KPCA (kernel = rbf) | 150 | std | RFECV-SVM | 152 | 3 | 10,000 | 0.8812 | 0.0047 | 23.1344 | 2.2375 | 2.4789 | 0.2873 | 0.9980 | 0.9218 | |
| FastICA | 90 | none | RFECV-LR-l1 | 91 | 1 | 10,000 | 0.8378 | 0.0043 | 13.2319 | 4.6881 | 2.0039 | 0.1606 | 0.9775 | 0.8982 | 0.8943 |
| FastICA | 138 | std | RFECV-SVM | 152 | 2 | 10,000 | 0.8102 | 0.0046 | 112.5495 | 76.0340 | 3.7106 | 0.1691 | 0.9498 | 0.8940 | |
| FastICA | 134 | std | RFECV-LR-l1 | 144 | 3 | 10,000 | 0.8016 | 0.0088 | 66.8760 | 25.2353 | 5.1666 | 1.1551 | 0.9530 | 0.8908 | |
| LDA | 14 | std | RFECV-SVM | 152 | 1 | 100 | 0.8617 | 0.0046 | 3.8267 | 0.1692 | 2.7674 | 0.4969 | 0.9312 | 0.8721 | 0.8710 |
| LDA | 10 | none | RFECV-LR-l1 | 144 | 2 | 100 | 0.8608 | 0.0041 | 1.1577 | 0.2938 | 0.6132 | 0.0795 | 0.9098 | 0.8630 | |
| LDA | 12 | none | RFECV-LR-l1 | 144 | 3 | 100 | 0.8576 | 0.0050 | 0.9377 | 0.3076 | 0.6177 | 0.2036 | 0.9230 | 0.8779 | |
| KFDA (kernel = poly) | 18 | std | RFECV-SVM | 152 | 1 | 10 | 0.8708 | 0.0050 | 24.5791 | 3.3779 | 2.8568 | 0.2584 | 0.9997 | 0.8879 | 0.8866 |
| KFDA (kernel = poly) | 24 | std | RFECV-SVM | 152 | 2 | 10 | 0.8541 | 0.0082 | 34.0298 | 6.8903 | 2.5398 | 0.2063 | 0.9956 | 0.8854 | |
| KFDA (kernel = poly) | 20 | std | RFECV-SVM | 152 | 3 | 10 | 0.8488 | 0.0061 | 50.1823 | 8.0026 | 1.9981 | 0.1023 | 0.9961 | 0.8866 | |
| KFDA (kernel = rbf) | 46 | std | RFECV-LR-l1 | 144 | 1 | 10 | 0.9097 | 0.0067 | 75.7540 | 9.0190 | 2.1115 | 0.1897 | 0.9994 | 0.9257 | 0.9232 |
| KFDA (kernel = rbf) | 58 | std | RFECV-LR-l1 | 144 | 2 | 10 | 0.9075 | 0.0058 | 47.7175 | 14.7411 | 1.3764 | 0.0251 | 0.9991 | 0.9236 | |
| KFDA (kernel = rbf) | 50 | std | RFECV-LR-l1 | 144 | 3 | 10 | 0.9032 | 0.0069 | 52.8902 | 10.9802 | 1.7634 | 0.1559 | 0.9991 | 0.9202 | |
| LLE (nn = 3) | 150 | std | RFECV-SVM | 152 | 1 | 10,000 | 0.7381 | 0.0046 | 33.3914 | 0.1052 | 8.0919 | 0.4409 | 0.8378 | 0.7562 | 0.7641 |
| LLE (nn = 3) | 148 | std | RFECV-SVM | 152 | 2 | 10,000 | 0.7334 | 0.0050 | 41.8622 | 0.8712 | 4.8391 | 1.3178 | 0.8196 | 0.7567 | |
| LLE (nn = 3) | 152 | std | RFECV-SVM | 152 | 3 | 10,000 | 0.7295 | 0.0037 | 45.8610 | 4.5508 | 11.6997 | 1.4774 | 0.8481 | 0.7793 | |
| Feature Extractor (Type) | n_comp. | FT | Best Feature Selector (Inner Model) | n_feat. | Fold | Activation | cls. hyppr. | CV-Score Mean | CV-Score std | Fit Time Mean (ms) | Fit Time std (ms) | Score Time Mean (ms) | Score Time std (ms) | Final Train Acc. | Final Test Acc. | Mean Test Acc. |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| None | - | std | RFECV-LR-l1 | 144 | 1 | logistic | 0.001 | 0.9246 | 0.0017 | 105.6994 | 7.9159 | 0.0405 | 0.0175 | 0.9817 | 0.9248 | 0.9327 |
| None | - | std | RFECV-LR-l1 | 144 | 2 | logistic | 0.001 | 0.9273 | 0.0054 | 133.5125 | 20.9237 | 0.0294 | 0.0092 | 0.9864 | 0.9321 | |
| None | - | std | RFECV-LR-l1 | 144 | 3 | logistic | 0.001 | 0.9254 | 0.0041 | 164.0306 | 13.2748 | 0.0790 | 0.0363 | 0.9890 | 0.9412 | |
| PCA | 144 | std | RFECV-SVM | 152 | 1 | logistic | 0.01 | 0.9280 | 0.0026 | 48.3415 | 2.4717 | 0.0247 | 0.0025 | 0.9952 | 0.9277 | 0.9338 |
| PCA | 144 | std | RFECV-LR-l1 | 144 | 2 | logistic | 0.01 | 0.9273 | 0.0034 | 119.4399 | 7.7521 | 0.0264 | 0.0009 | 0.9963 | 0.9309 | |
| PCA | 116 | std | RFECV-LR-l1 | 144 | 3 | tanh | 0.1 | 0.9286 | 0.0039 | 57.4218 | 3.9009 | 0.0187 | 0.0013 | 0.9782 | 0.9429 | |
| KPCA (kernel = poly) | 106 | std | RFECV-SVM | 152 | 1 | relu | 0.01 | 0.9232 | 0.0041 | 66.4434 | 2.8286 | 1.9985 | 0.2336 | 0.9704 | 0.9295 | 0.9328 |
| KPCA (kernel = poly) | 144 | std | RFECV-SVM | 152 | 2 | relu | 0.01 | 0.9207 | 0.0046 | 56.4857 | 1.2998 | 1.3047 | 0.2012 | 0.9871 | 0.9365 | |
| KPCA (kernel = poly) | 148 | std | RFECV-SVM | 152 | 3 | relu | 0.01 | 0.9251 | 0.0050 | 56.2479 | 1.5859 | 1.2445 | 0.0929 | 0.9821 | 0.9324 | |
| KPCA (kernel = rbf) | 124 | std | RFECV-SVM | 152 | 1 | relu | 0.001 | 0.9166 | 0.0072 | 89.9690 | 16.2288 | 0.5233 | 0.0784 | 0.9978 | 0.9303 | 0.9320 |
| KPCA (kernel = rbf) | 148 | std | RFECV-SVM | 152 | 2 | relu | 0.001 | 0.9202 | 0.0038 | 124.0150 | 49.9485 | 1.2152 | 0.0715 | 0.9953 | 0.9309 | |
| KPCA (kernel = rbf) | 150 | std | RFECV-SVM | 152 | 3 | relu | 0.001 | 0.9176 | 0.0023 | 85.7280 | 8.2568 | 1.0124 | 0.0355 | 0.9974 | 0.9347 | |
| FastICA | 32 | std | RFECV-RF | 91 | 1 | relu | 0.001 | 0.8861 | 0.0089 | 839.3872 | 125.4677 | 0.3442 | 0.2158 | 0.9763 | 0.9189 | 0.9161 |
| FastICA | 34 | std | RFECV-SVM | 152 | 2 | relu | 0.001 | 0.8813 | 0.0033 | 459.8168 | 61.6437 | 0.1076 | 0.0085 | 0.9671 | 0.9098 | |
| FastICA | 54 | std | RFECV-RF | 91 | 3 | relu | 0.001 | 0.8793 | 0.0048 | 576.9616 | 35.2544 | 0.2097 | 0.0730 | 0.9786 | 0.9195 | |
| LDA | 14 | std | RFECV-SVM | 152 | 1 | tanh | 0.1 | 0.8605 | 0.0022 | 19.3157 | 2.5674 | 0.0115 | 0.0002 | 0.9403 | 0.8739 | 0.8704 |
| LDA | 12 | none | RFECV-LR-l1 | 144 | 2 | relu | 0.1 | 0.8597 | 0.0061 | 16.4290 | 3.3352 | 0.0068 | 0.0010 | 0.9281 | 0.8621 | |
| LDA | 14 | std | RFECV-LR-l1 | 144 | 3 | tanh | 0.1 | 0.8566 | 0.0047 | 21.7565 | 3.3710 | 0.0096 | 0.0014 | 0.9497 | 0.8753 | |
| KFDA (kernel = poly) | 12 | std | RFECV-RF | 91 | 1 | logistic | 0.001 | 0.7858 | 0.0100 | 43.3090 | 2.1410 | 2.0858 | 0.0229 | 0.9903 | 0.8695 | 0.8640 |
| KFDA (kernel = poly) | 14 | std | RFECV-RF | 91 | 2 | logistic | 0.001 | 0.7521 | 0.0029 | 75.0035 | 22.8304 | 1.9501 | 0.0102 | 0.9897 | 0.8601 | |
| KFDA (kernel = poly) | 14 | std | RFECV-RF | 91 | 3 | logistic | 0.001 | 0.7599 | 0.0083 | 69.9022 | 17.5838 | 1.7962 | 0.0207 | 0.9889 | 0.8625 | |
| KFDA (kernel = rbf) | 54 | std | RFECV-LR-l1 | 144 | 1 | tanh | 0.1 | 0.9071 | 0.0060 | 102.8650 | 10.4852 | 0.3131 | 0.0527 | 0.9999 | 0.9236 | 0.9214 |
| KFDA (kernel = rbf) | 58 | std | RFECV-LR-l1 | 144 | 2 | tanh | 0.1 | 0.9053 | 0.0078 | 99.5602 | 9.6832 | 0.4866 | 0.0434 | 0.9998 | 0.9208 | |
| KFDA (kernel = rbf) | 62 | std | RFECV-LR-l1 | 144 | 3 | tanh | 0.1 | 0.9061 | 0.0072 | 87.9921 | 7.2003 | 0.5612 | 0.4089 | 0.9998 | 0.9199 | |
| LLE (nn = 3) | 150 | std | RFECV-SVM | 152 | 1 | relu | 0.001 | 0.7545 | 0.0049 | 149.4420 | 123.9453 | 1.8298 | 0.9244 | 0.8431 | 0.7735 | 0.7765 |
| LLE (nn = 3) | 152 | std | RFECV-SVM | 152 | 2 | relu | 0.001 | 0.7604 | 0.0053 | 178.8375 | 99.3189 | 1.3647 | 0.1900 | 0.8452 | 0.7728 | |
| LLE (nn = 3) | 150 | std | RFECV-SVM | 152 | 3 | relu | 0.001 | 0.7575 | 0.0016 | 350.9361 | 121.4696 | 1.2645 | 0.1498 | 0.8690 | 0.7831 | |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content. |
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).