A Lightweight Convolutional Neural Network Based on Hierarchical-Wise Convolution Fusion for Remote-Sensing Scene Image Classification
Abstract
1. Introduction
- (1) A new lightweight dimension-wise convolution is proposed. Convolution is carried out separately along the width, length and channel dimensions, and the features from the three dimensions are then fused. Compared with traditional convolution, dimension-wise convolution significantly reduces the number of parameters and computations while offering stronger feature-extraction ability.
- (2) A hierarchical-wise convolution fusion module is designed. The module first groups the input along the channel dimension and maps the first group of features directly to the next layer. The second group is processed by dimension-wise convolution, and its output is divided into two parts: one part is mapped to the next layer, and the other is concatenated with the next group of features, after which the concatenated features are again processed by dimension-wise convolution. This operation is repeated until all groups have been processed.
- (3) In the classification phase, a combination of global average pooling, a fully connected layer and Softmax converts the input features into per-class probabilities. Applying global average pooling before the fully connected layer preserves the spatial information of the features as much as possible.
- (4) A lightweight convolutional neural network is constructed from the dimension-wise convolution, the hierarchical-wise convolution fusion module and the classifier. A series of experiments demonstrates the superiority of the proposed method.
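The dimension-wise convolution of contribution (1) can be sketched in a few lines of NumPy. The kernel sizes and the exact fusion rule are not restated in this list, so the 3-tap kernels and the additive fusion below are illustrative assumptions; the point of the sketch is that three 1-D passes (along width, length and channel) cost only 3k weights per feature map, versus k × k × C for a standard convolution kernel.

```python
import numpy as np

def conv1d_along(x, kernel, axis):
    """Apply a 1-D 'same' convolution along one axis of a 3-D tensor."""
    return np.apply_along_axis(
        lambda v: np.convolve(v, kernel, mode="same"), axis, x)

def dimension_wise_conv(x, k_w, k_h, k_c):
    """Convolve along width, length and channel, then fuse the three results.

    x: feature map of shape (C, H, W).
    k_w, k_h, k_c: 1-D kernels (additive fusion is an assumption here).
    """
    out_w = conv1d_along(x, k_w, axis=2)   # along width
    out_h = conv1d_along(x, k_h, axis=1)   # along length (height)
    out_c = conv1d_along(x, k_c, axis=0)   # along channel
    return out_w + out_h + out_c

x = np.random.rand(8, 32, 32)              # C=8 channels, 32x32 map
k = np.array([0.25, 0.5, 0.25])            # assumed 3-tap kernel
y = dimension_wise_conv(x, k, k, k)
print(y.shape)                             # shape is preserved: (8, 32, 32)
```

Each pass is channel- and shape-preserving, so the fused output can replace a standard convolution's feature map directly.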
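The grouping-and-carry flow of the hierarchical-wise convolution fusion module in contribution (2) can likewise be sketched. Here `conv` stands in for the dimension-wise convolution and is assumed to preserve the channel count; splitting each convolved group evenly into a forwarded half and a carried half is an assumption where the text only says "two parts".

```python
import numpy as np

def hierarchical_wise_fusion(x, groups, conv):
    """Sketch of the hierarchical-wise convolution fusion flow.

    x: feature map of shape (C, H, W); C must be divisible by `groups`.
    conv: channel-preserving stand-in for dimension-wise convolution.
    """
    parts = np.split(x, groups, axis=0)
    outputs = [parts[0]]                # group 1 maps to the next layer directly
    carry = None
    for g in parts[1:]:
        # concatenate the carried features with the next group, then convolve
        inp = g if carry is None else np.concatenate([carry, g], axis=0)
        feat = conv(inp)
        half = feat.shape[0] // 2
        outputs.append(feat[:half])     # one part maps to the next layer
        carry = feat[half:]             # the other joins the next group
    if carry is not None:
        outputs.append(carry)           # flush the final carry
    return np.concatenate(outputs, axis=0)

x = np.random.rand(8, 4, 4)
y = hierarchical_wise_fusion(x, groups=4, conv=lambda t: t)  # identity stand-in
print(y.shape)                          # channel count is preserved: (8, 4, 4)
```

Because every intermediate feature eventually lands in `outputs`, the module preserves the total channel count whenever `conv` does, so it can be stacked like an ordinary layer.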
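The classifier of contribution (3) is a standard pipeline; a minimal NumPy sketch (the weight shapes below are illustrative, not the paper's):

```python
import numpy as np

def classify(features, weights, bias):
    """Global average pooling -> fully connected layer -> Softmax.

    features: feature map of shape (C, H, W).
    weights: FC weight matrix of shape (num_classes, C); bias: (num_classes,).
    """
    gap = features.mean(axis=(1, 2))        # global average pooling -> (C,)
    logits = weights @ gap + bias           # fully connected layer
    exp = np.exp(logits - logits.max())     # numerically stable Softmax
    return exp / exp.sum()

feat = np.random.rand(16, 7, 7)             # assumed 16-channel final features
W = np.random.rand(10, 16) * 0.1            # assumed 10 scene categories
b = np.zeros(10)
probs = classify(feat, W, b)
print(probs.sum())                          # probabilities sum to 1
```

Averaging over the full spatial extent before the fully connected layer is what lets the classifier use every spatial position of the features rather than a flattened crop.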
2. Related Work
2.1. Convolution Variant Structure
2.2. Group Convolution
3. Methods
3.1. The Overall Structure of the Proposed LCNN-HWCF Method
3.2. Dimension-Wise Convolution
3.3. Hierarchical-Wise Convolution Fusion Module
4. Experiment
4.1. Dataset Settings
4.2. Setting of the Experiments
4.3. Experimental Results
4.3.1. Performance of the Proposed LCNN-HWCF Method
4.3.2. Experimental Results on UCM Dataset
4.3.3. Experimental Results on RSSCN7 Dataset
4.3.4. Experimental Results on AID Dataset
4.3.5. Experimental Results on NWPU Dataset
4.4. Model Complexity Analysis
4.5. Model Running Speed Comparison
4.6. Visual Analysis
5. Discussion
6. Conclusions
Author Contributions
Funding
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
- Gómez-Chova, L.; Tuia, D.; Moser, G.; Camps-Valls, G. Multimodal Classification of Remote Sensing Images: A Review and Future Directions. Proc. IEEE 2015, 103, 1560–1584.
- Longbotham, N.; Chaapel, C.; Bleiler, L.; Padwick, C.; Emery, W.; Pacifici, F. Very High Resolution Multiangle Urban Classification Analysis. IEEE Trans. Geosci. Remote Sens. 2011, 50, 1155–1170.
- Zhang, T.; Huang, X. Monitoring of Urban Impervious Surfaces Using Time Series of High-Resolution Remote Sensing Images in Rapidly Urbanized Areas: A Case Study of Shenzhen. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2018, 11, 2692–2708.
- Cheng, G.; Han, J.; Zhou, P.; Guo, L. Multi-class geospatial object detection and geographic image classification based on collection of part detectors. ISPRS J. Photogramm. Remote Sens. 2014, 98, 119–132.
- Lowe, D.G. Distinctive Image Features from Scale-Invariant Keypoints. Int. J. Comput. Vis. 2004, 60, 91–110.
- Dalal, N.; Triggs, B. Histograms of oriented gradients for human detection. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'05), San Diego, CA, USA, 20–25 June 2005; Volume 1, pp. 886–893.
- Howard, A.G.; Zhu, M.; Chen, B.; Kalenichenko, D.; Wang, W.; Weyand, T.; Andreetto, M.; Adam, H. MobileNets: Efficient convolutional neural networks for mobile vision applications. arXiv 2017, arXiv:1704.04861.
- Simonyan, K.; Zisserman, A. Very deep convolutional networks for large-scale image recognition. arXiv 2014, arXiv:1409.1556.
- He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778.
- Huang, G.; Liu, Z.; Van Der Maaten, L.; Weinberger, K.Q. Densely connected convolutional networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 4700–4708.
- Zeng, D.; Chen, S.; Chen, B.; Li, S. Improving Remote Sensing Scene Classification by Integrating Global-Context and Local-Object Features. Remote Sens. 2018, 10, 734.
- Wang, X.; Duan, L.; Ning, C. Global Context-based Multi-level Feature Fusion Networks for Multi-label Remote Sensing Image Scene Classification. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2021, 14, 11179–11196.
- Shi, C.; Zhao, X.; Wang, L. A Multi-Branch Feature Fusion Strategy Based on an Attention Mechanism for Remote Sensing Image Scene Classification. Remote Sens. 2021, 13, 1950.
- Liu, Y.; Liu, Y.; Ding, L. Scene Classification Based on Two-Stage Deep Feature Fusion. IEEE Geosci. Remote Sens. Lett. 2018, 15, 183–186.
- Singh, P.; Verma, V.K.; Rai, P.; Namboodiri, V.P. HetConv: Heterogeneous Kernel-Based Convolutions for Deep CNNs. arXiv 2019, arXiv:1903.04120.
- Chen, Y.; Dai, X.; Liu, M.; Chen, D.; Yuan, L.; Liu, Z. Dynamic Convolution: Attention Over Convolution Kernels. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 14–19 June 2020; pp. 11030–11039.
- Liu, J.J.; Hou, Q.; Cheng, M.M.; Wang, C.; Feng, J. Improving Convolutional Networks with Self-Calibrated Convolutions. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 14–19 June 2020; pp. 10096–10105.
- Chen, Y.; Fan, H.; Xu, B.; Yan, Z.; Kalantidis, Y.; Rohrbach, M.; Yan, S.; Feng, J. Drop an Octave: Reducing Spatial Redundancy in Convolutional Neural Networks with Octave Convolution. arXiv 2019, arXiv:1904.05049.
- Han, K.; Wang, Y.; Tian, Q.; Guo, J.; Xu, C.; Xu, C. GhostNet: More Features from Cheap Operations. arXiv 2020, arXiv:1911.11907.
- Yang, B.; Bender, G.; Le, Q.V.; Ngiam, J. CondConv: Conditionally Parameterized Convolutions for Efficient Inference. arXiv 2019, arXiv:1904.04971.
- Cao, J.; Li, Y.; Sun, M.; Chen, Y.; Lischinski, D.; Cohen-Or, D.; Chen, B.; Tu, C. Depthwise Over-parameterized Convolution. arXiv 2020, arXiv:2006.12030.
- Krizhevsky, A.; Sutskever, I.; Hinton, G.E. ImageNet classification with deep convolutional neural networks. Commun. ACM 2017, 60, 84–90.
- Xie, S.; Girshick, R.; Dollar, P.; Tu, Z.; He, K. Aggregated Residual Transformations for Deep Neural Networks. arXiv 2017, arXiv:1611.05431.
- Zhang, X.; Zhou, X.; Lin, M.; Sun, J. ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices. arXiv 2017, arXiv:1707.01083.
- Wu, P.; Cui, Z.; Gan, Z.; Liu, F. Residual Group Channel and Space Attention Network for Hyperspectral Image Classification. Remote Sens. 2020, 12, 2035.
- Liu, Y.; Gao, L.; Xiao, C.; Qu, Y.; Zheng, K.; Marinoni, A. Hyperspectral Image Classification Based on a Shuffled Group Convolutional Neural Network with Transfer Learning. Remote Sens. 2020, 12, 1780.
- Shen, J.; Zhang, T.; Wang, Y.; Wang, R.; Wang, Q.; Qi, M. A Dual-Model Architecture with Grouping-Attention-Fusion for Remote Sensing Scene Classification. Remote Sens. 2021, 13, 433.
- Xia, G.S.; Hu, J.; Hu, F.; Shi, B.; Bai, X.; Zhong, Y.; Zhang, L. AID: A benchmark data set for performance evaluation of aerial scene classification. IEEE Trans. Geosci. Remote Sens. 2017, 55, 3965–3981.
- Zou, Q.; Ni, L.; Zhang, T.; Wang, Q. Deep Learning Based Feature Selection for Remote Sensing Scene Classification. IEEE Geosci. Remote Sens. Lett. 2015, 12, 2321–2325.
- Yang, Y.; Newsam, S. Bag-of-visual-words and spatial extensions for land-use classification. In Proceedings of the 18th SIGSPATIAL International Conference on Advances in Geographic Information Systems, San Jose, CA, USA, 3–5 November 2010; p. 270.
- Cheng, G.; Han, J.; Lu, X. Remote Sensing Image Scene Classification: Benchmark and State of the Art. Proc. IEEE 2017, 105, 1865–1883.
- Shi, C.; Wang, T.; Wang, L. Branch Feature Fusion Convolution Network for Remote Sensing Scene Classification. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2020, 13, 5194–5210.
- Xie, J.; He, N.; Fang, L.; Plaza, A. Scale-free convolutional neural network for remote sensing scene classification. IEEE Trans. Geosci. Remote Sens. 2019, 57, 6916–6928.
- Zhang, W.; Tang, P.; Zhao, L. Remote sensing image scene classification using CNN-CapsNet. Remote Sens. 2019, 11, 494.
- He, N.; Fang, L.; Li, S.; Plaza, J.; Plaza, A. Skip-connected covariance network for remote sensing scene classification. IEEE Trans. Neural Netw. Learn. Syst. 2020, 31, 1461–1474.
- Zhao, F.; Mu, X.; Yang, Z.; Yi, Z. A novel two-stage scene classification model based on feature variable significance in high-resolution remote sensing. Geocarto Int. 2020, 35, 1603–1614.
- Liu, B.D.; Meng, J.; Xie, W.Y.; Shao, S.; Li, Y.; Wang, Y. Weighted spatial pyramid matching collaborative representation for remote-sensing-image scene classification. Remote Sens. 2019, 11, 518.
- Li, B.; Su, W.; Wu, H.; Li, R.; Zhang, W.; Qin, W.; Zhang, S. Aggregated deep fisher feature for VHR remote sensing scene classification. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2019, 12, 3508–3523.
- He, N.; Fang, L.; Li, S.; Plaza, A.; Plaza, J. Remote sensing scene classification using multilayer stacked covariance pooling. IEEE Trans. Geosci. Remote Sens. 2018, 56, 6899–6910.
- Sun, H.; Li, S.; Zheng, X.; Lu, X. Remote sensing scene classification by gated bidirectional network. IEEE Trans. Geosci. Remote Sens. 2020, 58, 82–96.
- Lu, X.; Sun, H.; Zheng, X. A feature aggregation convolutional neural network for remote sensing scene classification. IEEE Trans. Geosci. Remote Sens. 2019, 57, 7894–7906.
- Li, J.; Lin, D.; Wang, Y.; Xu, G.; Zhang, Y.; Ding, C.; Zhou, Y. Deep discriminative representation learning with attention map for scene classification. Remote Sens. 2020, 12, 1366.
- Cheng, G.; Yang, C.; Yao, X.; Guo, L.; Han, J. When deep learning meets metric learning: Remote sensing image scene classification via learning discriminative CNNs. IEEE Trans. Geosci. Remote Sens. 2018, 56, 2811–2821.
- Boualleg, Y.; Farah, M.; Farah, I.R. Remote sensing scene classification using convolutional features and deep forest classifier. IEEE Geosci. Remote Sens. Lett. 2019, 16, 1944–1948.
- Yan, P.; He, F.; Yang, Y.; Hu, F. Semi-supervised representation learning for remote sensing image classification based on generative adversarial networks. IEEE Access 2020, 8, 54135–54144.
- Wang, C.; Lin, W.; Tang, P. Multiple resolution block feature for remote-sensing scene classification. Int. J. Remote Sens. 2019, 40, 6884–6904.
- Liu, X.; Zhou, Y.; Zhao, J.; Yao, R.; Liu, B.; Zheng, Y. Siamese convolutional neural networks for remote sensing scene classification. IEEE Geosci. Remote Sens. Lett. 2019, 16, 1200–1204.
- Zhou, Y.; Liu, X.; Zhao, J.; Ma, D.; Yao, R.; Liu, B.; Zheng, Y. Remote sensing scene classification based on rotation-invariant feature learning and joint decision making. EURASIP J. Image Video Process. 2019, 2019, 3.
- Lu, X.; Ji, W.; Li, X.; Zheng, X. Bidirectional adaptive feature fusion for remote sensing scene classification. Neurocomputing 2019, 328, 135–146.
- Liu, Y.; Zhong, Y.; Qin, Q. Scene classification based on multiscale convolutional neural network. IEEE Trans. Geosci. Remote Sens. 2018, 56, 7109–7121.
- Cao, R.; Fang, L.; Lu, T.; He, N. Self-attention-based deep feature fusion for remote sensing scene classification. IEEE Geosci. Remote Sens. Lett. 2020, 18, 43–47.
- Liu, M.; Jiao, L.; Liu, X.; Li, L.; Liu, F.; Yang, S. C-CNN: Contourlet convolutional neural networks. IEEE Trans. Neural Netw. Learn. Syst. 2020, 32, 2636–2649.
- Zhang, B.; Zhang, Y.; Wang, S. A lightweight and discriminative model for remote sensing scene classification with multidilation pooling module. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2019, 12, 2636–2653.
- Li, W.; Wang, Z.; Wang, Y.; Wu, J.; Wang, J.; Jia, Y.; Gui, G. Classification of high-spatial-resolution remote sensing scenes method using transfer learning and deep convolutional neural network. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2020, 13, 1986–1995.
- Shi, C.; Zhang, X.; Sun, J.; Wang, L. A Lightweight Convolutional Neural Network Based on Group-Wise Hybrid Attention for Remote Sensing Scene Classification. Remote Sens. 2022, 14, 161.
- Xu, C.; Zhu, G.; Shu, J. A lightweight intrinsic mean for remote sensing classification with lie group kernel function. IEEE Geosci. Remote Sens. Lett. 2021, 18, 1741–1745.
Datasets | OA (%) | AA (%) | F1 (%) | Kappa (%) |
---|---|---|---|---|
UCM (80%) | 99.53 | 99.55 | 99.53 | 99.50 |
RSSCN7 (50%) | 97.65 | 97.76 | 97.65 | 97.56 |
AID (50%) | 97.43 | 97.35 | 97.16 | 97.05 |
AID (20%) | 95.76 | 95.52 | 95.24 | 95.45 |
NWPU (10%) | 93.10 | 93.12 | 92.98 | 93.02 |
NWPU (20%) | 94.53 | 94.63 | 94.45 | 94.49 |
Network Model | OA (%) | Number of Parameters |
---|---|---|
Variable-Weighted Multi-Fusion [36] | 97.79 | 32 M |
ResNet+WSPM-CRC [37] | 97.95 | 23 M |
ADFF [38] | 98.81 ± 0.51 | 23 M |
LCNN-BFF [32] | 99.29 ± 0.24 | 6.2 M |
VGG16 with MSCP [39] | 98.36 ± 0.58 | 15 M |
Gated Bidirectional+global feature [40] | 98.57 ± 0.48 | 138 M |
Feature Aggregation CNN [41] | 98.81 ± 0.24 | 130 M |
Skip-Connected CNN [42] | 98.04 ± 0.23 | 6 M |
Discriminative CNN [43] | 98.93 ± 0.10 | 130 M |
VGG16-DF [44] | 98.97 | 130 M |
Scale-Free CNN [33] | 99.05 ± 0.27 | 130 M |
Inceptionv3+CapsNet [34] | 99.05 ± 0.24 | 22 M |
DDRL-AM [35] | 99.05 ± 0.08 | 30 M |
Semi-Supervised Representation Learning [45] | 94.05 ± 0.96 | 210 M |
Multiple Resolution Block Feature [46] | 94.19 | 36 M |
Siamese CNN [47] | 94.29 | 62 M |
Siamese ResNet50 with R.D [48] | 94.76 | 20 M |
Bidirectional Adaptive Feature Fusion [49] | 95.48 | 130 M |
Multiscale CNN [50] | 96.66 ± 0.90 | 60 M |
VGG_VD16 with SAFF [51] | 97.02 ± 0.78 | 15 M |
Proposed | 99.53 ± 0.25 | 0.6 M |
Network Model | OA (%) | Number of Parameters |
---|---|---|
VGG16+SVM Method [28] | 87.18 | 130 M |
Variable-Weighted Multi-Fusion Method [36] | 89.1 | 32 M |
TSDFF Method [14] | 92.37 ± 0.72 | 50 M |
ResNet+SPM-CRC Method [37] | 93.86 | 23 M |
ResNet+WSPM-CRC Method [37] | 93.9 | 23 M |
LCNN-BFF Method [32] | 94.64 ± 0.21 | 6.2 M |
ADFF [38] | 95.21 ± 0.50 | 23 M |
Contourlet CNN [52] | 95.54 ± 0.17 | 12.6 M |
SE-MDPMNet [53] | 94.71 ± 0.15 | 5.17 M |
Proposed | 97.65 ± 0.12 | 0.6 M |
Network Model | OA (20/80) (%) | OA (50/50) (%) | Number of Parameters |
---|---|---|---|
VGG16+CapsNet [34] | 91.63 ± 0.19 | 94.74 ± 0.17 | 130 M |
VGG_VD16 with SAFF [51] | 90.25 ± 0.29 | 93.83 ± 0.28 | 15 M |
Discriminative CNN [43] | 90.82 ± 0.16 | 96.89 ± 0.10 | 130 M |
Fine-tuning [28] | 86.59 ± 0.29 | 89.64 ± 0.36 | 130 M |
Skip-Connected CNN [42] | 91.10 ± 0.15 | 93.30 ± 0.13 | 6 M |
LCNN-BFF [32] | 91.66 ± 0.48 | 94.64 ± 0.16 | 6.2 M |
Gated Bidirectional [40] | 90.16 ± 0.24 | 93.72 ± 0.34 | 18 M |
Gated Bidirectional+global feature [40] | 92.20 ± 0.23 | 95.48 ± 0.12 | 138 M |
TSDFF [14] | 93.06 ± 0.20 | 91.8 | 50 M |
AlexNet with MSCP [39] | 88.99 ± 0.38 | 92.36 ± 0.21 | 46.2 M |
VGG16 with MSCP [39] | 91.52 ± 0.21 | 94.42 ± 0.17 | 15 M |
ResNet50 [54] | 92.39 ± 0.15 | 94.69 ± 0.19 | 25.61 M |
LCNN-GWHA [55] | 92.12 ± 0.35 | 95.63 ± 0.54 | 0.3 M |
InceptionV3 [54] | 93.27 ± 0.17 | 95.07 ± 0.22 | 45.37 M |
Proposed | 95.76 ± 0.16 | 97.43 ± 0.28 | 0.6 M |
Network Model | OA (10/90) (%) | OA (20/80) (%) | Number of Parameters |
---|---|---|---|
Siamese ResNet50 with R.D [48] | 85.27 ± 0.31 | 91.03 | 20 M |
AlexNet with MSCP [39] | 81.70 ± 0.23 | 85.58 ± 0.16 | 35 M |
VGG16 with MSCP [39] | 85.33 ± 0.17 | 88.93 ± 0.14 | 60 M |
VGG_VD16 with SAFF [51] | 84.38 ± 0.19 | 87.86 ± 0.14 | 15 M |
Fine-tuning [28] | 87.15 ± 0.45 | 90.36 ± 0.18 | 130 M |
Skip-Connected CNN [42] | 84.33 ± 0.19 | 87.30 ± 0.23 | 6 M |
LCNN-BFF [32] | 86.53 ± 0.15 | 91.73 ± 0.17 | 6.2 M |
VGG16+CapsNet [34] | 85.05 ± 0.13 | 89.18 ± 0.14 | 130 M |
Discriminative with AlexNet [43] | 85.56 ± 0.20 | 87.24 ± 0.12 | 130 M |
ResNet50 [54] | 86.23 ± 0.41 | 88.93 ± 0.12 | 25.61 M |
InceptionV3 [54] | 85.46 ± 0.33 | 87.75 ± 0.43 | 45.37 M |
Contourlet CNN [52] | 85.93 ± 0.51 | 89.57 ± 0.45 | 12.6 M |
LiG with RBF kernel [56] | 90.23 ± 0.13 | 93.25 ± 0.12 | 2.07 M |
Proposed | 93.10 ± 0.12 | 94.53 ± 0.25 | 0.6 M |
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
© 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Shi, C.; Zhang, X.; Wang, T.; Wang, L. A Lightweight Convolutional Neural Network Based on Hierarchical-Wise Convolution Fusion for Remote-Sensing Scene Image Classification. Remote Sens. 2022, 14, 3184. https://doi.org/10.3390/rs14133184