Deep Network Architectures as Feature Extractors for Multi-Label Classification of Remote Sensing Images
Abstract
1. Introduction
- We present an experimental analysis of different approaches for multi-label classification (MLC) of remote sensing images (RSI). More precisely, we investigate the performance of several deep learning network architectures, using pre-training and fine-tuning as the main learning strategies for the MLC task.
- We evaluate the effectiveness of the deep models both as end-to-end approaches to MLC and as feature extractors that provide feature representations of RSI as inputs to tree ensemble methods for MLC. Moreover, we investigate which of the network architectures is the most suitable choice in terms of performance.
- We also investigate how the performance of the considered methods is influenced by the number of labeled training examples, by providing the methods with different fractions of the data.
2. Materials and Methods
2.1. Datasets
2.1.1. UC Merced Land Use
2.1.2. AID Multilabel
2.1.3. Ankara HIS Archive
2.1.4. DFC-15 Multilabel
2.1.5. MLRSNet
2.1.6. The BigEarthNet Archive
2.2. Overview of the Learning Methods for Multi-Label Classification
2.2.1. Deep Learning Methods
- VGGs: VGG is a deep CNN architecture developed by the Visual Geometry Group (VGG) team [23]. It is the basis of ground-breaking object recognition models that surpass baselines on many tasks and datasets beyond ImageNet, and it remains one of the most popular image recognition architectures. Two variants of this family of architectures are intensively studied for their performance: VGG-16 and VGG-19. The VGG-16 model can be seen as an upgrade of AlexNet, while VGG-19 is similar to VGG-16 but contains more layers. Both simplify the convolutional design by replacing the large convolution filters of AlexNet with stacks of small 3×3 filters, with padding to maintain the spatial size until a pooling layer down-samples the image.
- ResNets: ResNets are a family of deep CNN architectures that follow the residual learning principle to ease the training of very deep networks [24]. Their design offers an efficient way to address the vanishing-gradient problem. ResNet follows VGG’s full convolutional layer design. The residual block has two convolutional layers with the same number of output channels, each followed by a batch normalization layer and a Rectified Linear Unit (ReLU) activation function. A shortcut (or so-called skip connection) then bypasses these two convolution operations: the input is added directly to their output before the final activation function. This design requires the output of the two convolutional layers to have the same shape as the input, so that the two can be added together. By configuring different numbers of channels and residual blocks in the module, different ResNet models can be created, such as the deeper 152-layer ResNet-152. For the experiments, we use three variants of ResNet: ResNet-34, ResNet-50, and ResNet-152.
- EfficientNets: Unlike conventional deep CNNs, which are often over-parameterized and scale network dimensions such as width, depth, and resolution arbitrarily, EfficientNets uniformly scale all three dimensions with a fixed set of scaling coefficients [25]. These models surpass state-of-the-art accuracy with up to 10 times better efficiency (i.e., they are both smaller and faster than competing models).
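To illustrate the VGG design principle described above, the following sketch compares the parameter count of two stacked 3×3 convolutions (which together cover a 5×5 receptive field) against a single 5×5 convolution; the channel count of 64 is an arbitrary illustrative choice:

```python
# A minimal sketch of the VGG design idea: stacking small 3x3 convolutions
# reaches the same receptive field as one large filter with fewer parameters.
def conv_params(kernel, channels):
    # weight count of one conv layer with `channels` input and output channels
    return kernel * kernel * channels * channels

channels = 64
stacked = 2 * conv_params(3, channels)  # two 3x3 layers: 5x5 receptive field
single = conv_params(5, channels)       # one 5x5 layer
print(stacked, "<", single)             # 73728 < 102400
```

The stacked design also inserts an extra non-linearity between the two small convolutions, which is part of why deeper stacks of small filters work well.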
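The residual-block idea can be sketched with plain NumPy, using fully connected transforms as stand-ins for the two convolutional layers; all weights below are random and purely illustrative:

```python
import numpy as np

def relu(x):
    # Rectified Linear Unit activation
    return np.maximum(0.0, x)

def residual_block(x, w1, w2):
    """Minimal stand-in for a ResNet residual block: two weighted
    transforms, with the input added back via the skip connection
    before the final activation."""
    out = relu(x @ w1)    # first transform + ReLU
    out = out @ w2        # second transform; output shape matches x
    return relu(out + x)  # skip connection: add input, then activate

rng = np.random.default_rng(0)
x = rng.standard_normal((4, 8))
w1 = rng.standard_normal((8, 8)) * 0.1
w2 = rng.standard_normal((8, 8)) * 0.1
y = residual_block(x, w1, w2)
assert y.shape == x.shape  # the addition requires matching shapes
```

The shape assertion mirrors the constraint stated in the text: the transformed output must match the input shape so the two can be added.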
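The compound-scaling rule of EfficientNets can be sketched numerically. The base coefficients α = 1.2, β = 1.1, γ = 1.15 are those reported for EfficientNet-B0 in [25]; the helper function below is illustrative only:

```python
# Compound scaling: depth, width and resolution are scaled together by a
# single coefficient phi, so that total FLOPS grow roughly by
# (alpha * beta^2 * gamma^2)^phi, i.e., approximately 2^phi.
ALPHA, BETA, GAMMA = 1.2, 1.1, 1.15

def compound_scale(phi):
    depth = ALPHA ** phi        # multiplier on the number of layers
    width = BETA ** phi         # multiplier on the number of channels
    resolution = GAMMA ** phi   # multiplier on the input image size
    return depth, width, resolution

for phi in (0, 1, 2):  # roughly corresponding to B0, B1, B2
    d, w, r = compound_scale(phi)
    print(f"phi={phi}: depth x{d:.2f}, width x{w:.2f}, resolution x{r:.2f}")
```

This is the key contrast with arbitrary scaling: a single knob φ moves all three dimensions in a balanced way instead of enlarging one of them in isolation.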
2.2.2. Tree Ensemble Methods
- Random Forest: Random forest (RF) is an ensemble learning method for classification and regression that builds a set of individual decision trees operating as an ensemble. It uses bagging and feature randomness to create diversity among the predictors: at each node of a decision tree, a random subset of attributes is considered, and the best split is selected from this subset. Each tree in the forest provides a prediction, and the predictions are aggregated by averaging (for regression tasks) or by majority or probability-distribution voting (for classification tasks). RFs have been adapted for the task of MLC [33].
- Extremely Randomized Trees: Extremely Randomized Trees, or so-called Extra Trees (ET), is also an ensemble learning method similar to Random Forest, based on an extreme randomization of the tree-construction algorithm. Compared to the Random Forest ensemble, it operates with two key differences: it splits nodes by choosing cut-points fully at random, and it grows each tree on the whole learning sample rather than a bootstrap sample as in RF. ETs have been adapted for the task of MLC [34].
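As a minimal sketch of how both tree ensembles can be applied to multi-label targets, the example below uses scikit-learn, whose `RandomForestClassifier` and `ExtraTreesClassifier` handle a binary label-indicator matrix natively; the random feature matrix is a stand-in for the CNN-extracted features used in the paper, and all sizes are illustrative:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier, ExtraTreesClassifier
from sklearn.metrics import hamming_loss

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 32))            # 200 images, 32-dim features
Y = (rng.random((200, 5)) < 0.3).astype(int)  # 5 binary labels per image

for Model in (RandomForestClassifier, ExtraTreesClassifier):
    # Both ensembles accept a (n_samples, n_labels) indicator matrix as y
    clf = Model(n_estimators=50, random_state=0).fit(X[:150], Y[:150])
    pred = clf.predict(X[150:])               # shape: (50, 5)
    print(Model.__name__, "Hamming loss:", hamming_loss(Y[150:], pred))
```

In the feature-extraction pipeline studied here, `X` would hold the representations produced by a pre-trained or fine-tuned CNN rather than random values.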
3. Experimental Design
- (i) What is the influence of the learning strategy on the performance of end-to-end approaches: Is fine-tuning or pre-training only more suitable for solving the RSI-MLC task?
- (ii) Which network architecture is the best choice for end-to-end MLC of RSI and for use as a feature extractor with further training of tree ensembles for MLC?
- (iii) How do end-to-end learning and feature extraction plus tree ensembles compare on the task of MLC of RSI (assessed using the best-performing architecture from the previous analysis)?
- (iv) How does the number of training examples influence the predictive performance of the methods used?
3.1. Experimental Setup
3.1.1. End-to-End Learning Approaches
3.1.2. CNNs as Feature Extractors and Tree Ensembles
3.2. Evaluation Strategy
3.3. Evaluation Measures and Statistical Analysis
3.4. Implementation Details
4. Results and Discussion
4.1. The Influence of the Learning Strategy
4.2. Comparison of Different Network Architectures
4.3. Comparison of Different Learning Approaches
4.4. Influence of the Number of Available Labeled Images
5. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
Abbreviations
Abbreviation | Meaning
---|---
RSI | Remote Sensing Images
MLC | Multi-Label Classification
DNN | Deep Neural Network
CNN | Convolutional Neural Network
DL | Deep Learning
RNN | Recurrent Neural Network
LSTM | Long Short-Term Memory
GCN | Graph Convolutional Network
CLC | CORINE Land Cover
ReLU | Rectified Linear Unit
RF | Random Forest
ET | Extra Trees
Appendix A. Complete Results from the Experimental Evaluation
Pre-training

Approach | Dataset | VGG-16 | VGG-19 | ResNet-34 | ResNet-50 | ResNet-152 | EfficientNet-B0 | EfficientNet-B1 | EfficientNet-B2
---|---|---|---|---|---|---|---|---|---
End-to-end | Ankara | 0.298 | 0.371 | 0.343 | 0.351 | 0.350 | 0.349 | 0.330 | 0.422
 | UCM | 0.186 | 0.180 | 0.154 | 0.149 | 0.135 | 0.194 | 0.185 | 0.184
 | AID | 0.215 | 0.208 | 0.171 | 0.181 | 0.179 | 0.198 | 0.194 | 0.188
 | DFC-15 | 0.176 | 0.176 | 0.147 | 0.134 | 0.134 | 0.127 | 0.120 | 0.113
 | MLRSNet | 0.347 | 0.360 | 0.306 | 0.240 | 0.229 | 0.318 | 0.300 | 0.326
 | BigEarthNet-19 | 0.557 | 0.550 | 0.461 | 0.399 | 0.391 | 0.460 | 0.476 | 0.478
 | BigEarthNet-43 | 0.578 | 0.546 | 0.480 | 0.431 | 0.410 | 0.468 | 0.481 | 0.480
Random Forest | Ankara | 0.322 | 0.320 | 0.324 | 0.329 | 0.388 | 0.317 | 0.337 | 0.345
 | UCM | 0.381 | 0.398 | 0.469 | 0.420 | 0.539 | 0.495 | 0.508 | 0.468
 | AID | 0.250 | 0.244 | 0.265 | 0.247 | 0.294 | 0.268 | 0.257 | 0.262
 | DFC-15 | 0.297 | 0.337 | 0.235 | 0.201 | 0.342 | 0.259 | 0.222 | 0.242
 | MLRSNet | 0.529 | 0.545 | 0.566 | 0.549 | 0.615 | 0.587 | 0.567 | 0.548
 | BigEarthNet-19 | 0.534 | 0.525 | 0.588 | 0.543 | 0.532 | 0.637 | 0.670 | 0.669
 | BigEarthNet-43 | 0.547 | 0.537 | 0.600 | 0.557 | 0.545 | 0.654 | 0.686 | 0.683
Extra Trees | Ankara | 0.318 | 0.312 | 0.331 | 0.325 | 0.370 | 0.328 | 0.320 | 0.330
 | UCM | 0.390 | 0.405 | 0.457 | 0.417 | 0.552 | 0.483 | 0.513 | 0.462
 | AID | 0.254 | 0.250 | 0.255 | 0.253 | 0.305 | 0.265 | 0.250 | 0.256
 | DFC-15 | 0.297 | 0.338 | 0.235 | 0.204 | 0.351 | 0.235 | 0.218 | 0.229
 | MLRSNet | 0.530 | 0.545 | 0.567 | 0.549 | 0.620 | 0.573 | 0.556 | 0.534
 | BigEarthNet-19 | 0.539 | 0.528 | 0.608 | 0.567 | 0.556 | 0.658 | 0.693 | 0.698
 | BigEarthNet-43 | 0.550 | 0.540 | 0.622 | 0.581 | 0.570 | 0.676 | 0.709 | 0.713

Fine-tuning

Approach | Dataset | VGG-16 | VGG-19 | ResNet-34 | ResNet-50 | ResNet-152 | EfficientNet-B0 | EfficientNet-B1 | EfficientNet-B2
---|---|---|---|---|---|---|---|---|---
End-to-end | Ankara | 0.294 | 0.285 | 0.377 | 0.356 | 0.360 | 0.335 | 0.353 | 0.322
 | UCM | 0.224 | 0.508 | 0.101 | 0.097 | 0.112 | 0.088 | 0.096 | 0.081
 | AID | 0.265 | 0.202 | 0.152 | 0.143 | 0.147 | 0.137 | 0.131 | 0.137
 | DFC-15 | 0.433 | 0.433 | 0.068 | 0.075 | 0.067 | 0.054 | 0.046 | 0.050
 | MLRSNet | 0.180 | 0.223 | 0.093 | 0.091 | 0.088 | 0.082 | 0.084 | 0.084
 | BigEarthNet-19 | 0.276 | 0.282 | 0.235 | 0.236 | 0.210 | 0.207 | 0.203 | 0.202
 | BigEarthNet-43 | 0.271 | 0.276 | 0.243 | 0.232 | 0.199 | 0.206 | 0.195 | 0.194
Random Forest | Ankara | 0.318 | 0.319 | 0.344 | 0.338 | 0.345 | 0.304 | 0.335 | 0.320
 | UCM | 0.182 | 0.323 | 0.103 | 0.098 | 0.103 | 0.106 | 0.104 | 0.106
 | AID | 0.245 | 0.197 | 0.146 | 0.144 | 0.149 | 0.138 | 0.138 | 0.137
 | DFC-15 | 0.433 | 0.433 | 0.050 | 0.047 | 0.050 | 0.046 | 0.041 | 0.044
 | MLRSNet | 0.185 | 0.221 | 0.104 | 0.103 | 0.102 | 0.093 | 0.095 | 0.090
 | BigEarthNet-19 | 0.258 | 0.268 | 0.229 | 0.219 | 0.214 | 0.222 | 0.219 | 0.217
 | BigEarthNet-43 | 0.255 | 0.267 | 0.235 | 0.230 | 0.219 | 0.228 | 0.221 | 0.224
Extra Trees | Ankara | 0.364 | 0.348 | 0.342 | 0.322 | 0.362 | 0.313 | 0.343 | 0.321
 | UCM | 0.177 | 0.334 | 0.103 | 0.102 | 0.102 | 0.100 | 0.098 | 0.097
 | AID | 0.248 | 0.194 | 0.144 | 0.146 | 0.146 | 0.135 | 0.134 | 0.136
 | DFC-15 | 0.433 | 0.433 | 0.049 | 0.050 | 0.046 | 0.046 | 0.041 | 0.043
 | MLRSNet | 0.184 | 0.221 | 0.106 | 0.104 | 0.103 | 0.097 | 0.100 | 0.094
 | BigEarthNet-19 | 0.257 | 0.268 | 0.227 | 0.217 | 0.213 | 0.222 | 0.218 | 0.216
 | BigEarthNet-43 | 0.255 | 0.267 | 0.233 | 0.228 | 0.217 | 0.227 | 0.219 | 0.224
References
- Ibrahim, S.K.; Ziedan, I.E.; Ahmed, A. Study of Climate Change Detection in North-East Africa Using Machine Learning and Satellite Data. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2021, 14, 11080–11094.
- Chen, H.; Qi, Z.; Shi, Z. Remote Sensing Image Change Detection With Transformers. IEEE Trans. Geosci. Remote Sens. 2022, 60, 1–14.
- Ortega Adarme, M.; Queiroz Feitosa, R.; Nigri Happ, P.; Aparecido De Almeida, C.; Rodrigues Gomes, A. Evaluation of Deep Learning Techniques for Deforestation Detection in the Brazilian Amazon and Cerrado Biomes From Remote Sensing Imagery. Remote Sens. 2020, 12, 910.
- Park, M.; Tran, D.Q.; Jung, D.; Park, S. Wildfire-Detection Method Using DenseNet and CycleGAN Data Augmentation-Based Remote Camera Imagery. Remote Sens. 2020, 12, 3715.
- Zhang, Q.; Ge, L.; Zhang, R.; Metternicht, G.I.; Liu, C.; Du, Z. Towards a Deep-Learning-Based Framework of Sentinel-2 Imagery for Automated Active Fire Detection. Remote Sens. 2021, 13, 4790.
- Papoutsis, I.; Bountos, N.I.; Zavras, A.; Michail, D.; Tryfonopoulos, C. Benchmarking and scaling of deep learning models for land cover image classification. ISPRS J. Photogramm. Remote Sens. 2023, 195, 250–268.
- Li, Y.; Chen, R.; Zhang, Y.; Zhang, M.; Chen, L. Multi-Label Remote Sensing Image Scene Classification by Combining a Convolutional Neural Network and a Graph Neural Network. Remote Sens. 2020, 12, 4003.
- Bogatinovski, J.; Todorovski, L.; Džeroski, S.; Kocev, D. Comprehensive comparative study of multi-label classification methods. Expert Syst. Appl. 2022, 203, 117215.
- Dosovitskiy, A.; Beyer, L.; Kolesnikov, A.; Weissenborn, D.; Zhai, X.; Unterthiner, T.; Dehghani, M.; Minderer, M.; Heigold, G.; Gelly, S.; et al. An Image is Worth 16×16 Words: Transformers for Image Recognition at Scale. In Proceedings of the International Conference on Learning Representations (ICLR), Virtual, 3–7 May 2021.
- Deng, J.; Dong, W.; Socher, R.; Li, L.-J.; Li, K.; Fei-Fei, L. ImageNet: A Large-Scale Hierarchical Image Database. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Miami, FL, USA, 20–25 June 2009; pp. 248–255.
- Dimitrovski, I.; Kitanovski, I.; Kocev, D.; Simidjievski, N. Current Trends in Deep Learning for Earth Observation: An Open-source Benchmark Arena for Image Classification. arXiv 2022, arXiv:2207.07189.
- Pires de Lima, R.; Marfurt, K. Convolutional Neural Network for Remote-Sensing Scene Classification: Transfer Learning Analysis. Remote Sens. 2020, 12, 86.
- Khaleghian, S.; Ullah, H.; Kræmer, T.; Hughes, N.; Eltoft, T.; Marinoni, A. Sea Ice Classification of SAR Imagery Based on Convolution Neural Networks. Remote Sens. 2021, 13, 1734.
- Wang, A.X.; Tran, C.; Desai, N.; Lobell, D.; Ermon, S. Deep Transfer Learning for Crop Yield Prediction with Remote Sensing Data. In Proceedings of the 1st ACM SIGCAS Conference on Computing and Sustainable Societies (COMPASS ’18), San Jose, CA, USA, 20–22 June 2018; Association for Computing Machinery: New York, NY, USA, 2018.
- Wang, J.; Yang, Y.; Mao, J.; Huang, Z.; Huang, C.; Xu, W. CNN-RNN: A Unified Framework for Multi-label Image Classification. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 2285–2294.
- Chen, Z.; Wei, X.; Wang, P.; Guo, Y. Multi-Label Image Recognition with Graph Convolutional Networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA, 15–20 June 2019; pp. 5172–5181.
- Sumbul, G.; Charfuelan, M.; Demir, B.; Markl, V. BigEarthNet: A Large-Scale Benchmark Archive for Remote Sensing Image Understanding. In Proceedings of the IEEE International Geoscience and Remote Sensing Symposium (IGARSS), Yokohama, Japan, 28 July–2 August 2019; pp. 5901–5904.
- Yessou, H.; Sumbul, G.; Demir, B. A Comparative Study of Deep Learning Loss Functions for Multi-Label Remote Sensing Image Classification. In Proceedings of the IEEE International Geoscience and Remote Sensing Symposium (IGARSS), Waikoloa, HI, USA, 26 September–2 October 2020.
- Sumbul, G.; Kang, J.; Demir, B. Deep Learning for Image Search and Retrieval in Large Remote Sensing Archives. arXiv 2020, arXiv:2004.01613.
- Hua, Y.; Mou, L.; Zhu, X. Relation Network for Multi-label Aerial Image Classification. IEEE Trans. Geosci. Remote Sens. 2019, 58, 4558–4572.
- Sumbul, G.; Demir, B. A Deep Multi-Attention Driven Approach for Multi-Label Remote Sensing Image Classification. IEEE Access 2020, 8, 95934–95946.
- Wang, X.; Duan, L.; Ning, C. Global Context-Based Multilevel Feature Fusion Networks for Multilabel Remote Sensing Image Scene Classification. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2021, 14, 11179–11196.
- Simonyan, K.; Zisserman, A. Very Deep Convolutional Networks for Large-Scale Image Recognition. In Proceedings of the International Conference on Learning Representations (ICLR), San Diego, CA, USA, 7–9 May 2015.
- He, K.; Zhang, X.; Ren, S.; Sun, J. Deep Residual Learning for Image Recognition. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778.
- Tan, M.; Le, Q. EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks. In Proceedings of the 36th International Conference on Machine Learning, Long Beach, CA, USA, 9–15 June 2019; Chaudhuri, K., Salakhutdinov, R., Eds.; Volume 97, pp. 6105–6114.
- Yang, Y.; Newsam, S. Bag-of-visual-words and spatial extensions for land-use classification. In Proceedings of the 18th ACM SIGSPATIAL International Symposium on Advances in Geographic Information Systems, San Jose, CA, USA, 2–5 November 2010; pp. 270–279.
- Chaudhuri, B.; Demir, B.; Chaudhuri, S.; Bruzzone, L. Multilabel Remote Sensing Image Retrieval Using a Semisupervised Graph-Theoretic Method. IEEE Trans. Geosci. Remote Sens. 2018, 56, 1144–1158.
- Xia, G.S.; Hu, J.; Hu, F.; Shi, B.; Bai, X.; Zhong, Y.; Lu, X.; Zhang, L. AID: A Benchmark Data Set for Performance Evaluation of Aerial Scene Classification. IEEE Trans. Geosci. Remote Sens. 2017, 55, 3965–3981.
- Ömrüuzun, F.; Demir, B.; Bruzzone, L.; Çetin, Y. Content based hyperspectral image retrieval using bag of endmembers image descriptors. In Proceedings of the 2016 8th Workshop on Hyperspectral Image and Signal Processing: Evolution in Remote Sensing (WHISPERS), Los Angeles, CA, USA, 21–24 August 2016; pp. 1–4.
- Hua, Y.; Mou, L.; Zhu, X. Recurrently exploring class-wise attention in a hybrid convolutional and bidirectional LSTM network for multi-label aerial image classification. ISPRS J. Photogramm. Remote Sens. 2019, 149, 188–199.
- Qi, X.; Zhu, P.; Wang, Y.; Zhang, L.; Peng, J.; Wu, M.; Chen, J.; Zhao, X.; Zang, N.; Mathiopoulos, P.T. MLRSNet: A multi-label high spatial resolution remote sensing dataset for semantic scene understanding. ISPRS J. Photogramm. Remote Sens. 2020, 169, 337–350.
- Sumbul, G.; de Wall, A.; Kreuziger, T.; Marcelino, F.; Costa, H.; Benevides, P.; Caetano, M.; Demir, B.; Markl, V. BigEarthNet-MM: A Large Scale Multi-Modal Multi-Label Benchmark Archive for Remote Sensing Image Classification and Retrieval. IEEE Geosci. Remote Sens. Mag. 2021, 9, 174–180.
- Kocev, D.; Vens, C.; Struyf, J.; Džeroski, S. Tree ensembles for predicting structured outputs. Pattern Recognit. 2013, 46, 817–833.
- Kocev, D.; Ceci, M.; Stepišnik, T. Ensembles of extremely randomized predictive clustering trees for predicting structured outputs. Mach. Learn. 2020, 109, 2213–2241.
- Buslaev, A.; Iglovikov, V.I.; Khvedchenya, E.; Parinov, A.; Druzhinin, M.; Kalinin, A.A. Albumentations: Fast and Flexible Image Augmentations. Information 2020, 11, 125.
- Xiao, Q.; Liu, B.; Li, Z.; Ni, W.; Yang, Z.; Li, L. Progressive Data Augmentation Method for Remote Sensing Ship Image Classification Based on Imaging Simulation System and Neural Style Transfer. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2021, 14, 9176–9186.
- Pedregosa, F.; Varoquaux, G.; Gramfort, A.; Michel, V.; Thirion, B.; Grisel, O.; Blondel, M.; Prettenhofer, P.; Weiss, R.; Dubourg, V.; et al. Scikit-learn: Machine Learning in Python. J. Mach. Learn. Res. 2011, 12, 2825–2830.
- Demšar, J. Statistical Comparisons of Classifiers over Multiple Data Sets. J. Mach. Learn. Res. 2006, 7, 1–30.
Dataset | Image Type | Labels | Label Cardinality | – | Total Images | Train | Test
---|---|---|---|---|---|---|---
Ankara | Hyperspectral/Aerial RGB | 29 | 9.120 | 0.536 | 216 | 171 | 45
UC Merced Land Use | Aerial RGB | 17 | 3.334 | 0.476 | 2100 | 1667 | 433
AID Multilabel | Aerial RGB | 17 | 5.152 | 0.468 | 3000 | 2400 | 600
DFC-15 Multilabel | Aerial RGB | 8 | 2.795 | 0.465 | 3341 | 2672 | 669
MLRSNet | Aerial RGB | 60 | 5.770 | 0.144 | 109,151 | 87,325 | 21,826
BigEarthNet | Hyperspectral/Aerial RGB | 19 | 2.900 | 0.263 | 590,326 | 472,245 | 118,081
BigEarthNet | Hyperspectral/Aerial RGB | 43 | 2.965 | 0.247 | 590,326 | 472,245 | 118,081
© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Stoimchev, M.; Kocev, D.; Džeroski, S. Deep Network Architectures as Feature Extractors for Multi-Label Classification of Remote Sensing Images. Remote Sens. 2023, 15, 538. https://doi.org/10.3390/rs15020538