Spatiotemporal Assessment of Satellite Image Time Series for Land Cover Classification Using Deep Learning Techniques: A Case Study of Reunion Island, France
Abstract
1. Introduction
1.1. Traditional Techniques for SITS Classification
1.2. Deep Learning Techniques for SITS Classification
- The performance of several deep learning architectures, together with the proposed framework, is assessed for land cover classification by exploiting the spatiotemporal properties of SITS on a test dataset.
- The classification accuracy of each model is assessed by building a feature extractor that captures spatial and temporal properties; based on these features, various classifiers are trained and evaluated on the land cover classification dataset.
- The proposed approach produced the best land cover maps from the multi-temporal remote sensing images, achieving higher F1 scores than the GRU, TCNN, and attention-based models.
2. Materials and Study Area
- Each pixel is described by rich band information comprising ten features.
- The temporal dimension consists of 23 time points, at each of which the band features are recorded. The spatial information is associated with each pixel in the form of coordinates (see the data-shaping sketch below).
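A minimal sketch of how such a pixel time series can be arranged for the models described below is given here; the array names, random placeholder values, and use of NumPy are illustrative assumptions rather than the dataset's documented format.

```python
import numpy as np

# Hypothetical layout: one row per pixel, with the 10 band features recorded at
# each of the 23 acquisition dates flattened into 23 * 10 = 230 columns, plus
# two pixel-coordinate columns. Random values stand in for the real data.
n_pixels, n_dates, n_bands = 1000, 23, 10
X_flat = np.random.rand(n_pixels, n_dates * n_bands)
coords = np.random.rand(n_pixels, 2)

# Reshape to (pixels, time steps, bands) so recurrent and 1D-convolutional
# models can read the series along the temporal axis.
X_series = X_flat.reshape(n_pixels, n_dates, n_bands)

# Scale the coordinates before passing them to the fully connected layers.
coords_scaled = (coords - coords.min(axis=0)) / (coords.max(axis=0) - coords.min(axis=0) + 1e-9)

print(X_series.shape, coords_scaled.shape)  # (1000, 23, 10) (1000, 2)
```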
| Class ID | Class Name | Training Instances | Test Instances |
|---|---|---|---|
| 1 | Urban areas | 16,000 | 4000 |
| 2 | Other built-up surfaces | 3236 | 647 |
| 3 | Forests | 16,000 | 4000 |
| 4 | Sparse vegetation | 16,000 | 3398 |
| 5 | Rocks and bare soil | 12,942 | 2588 |
| 6 | Grassland | 5681 | 1136 |
| 7 | Sugarcane crops | 7656 | 1531 |
| 8 | Other crops | 1600 | 154 |
| 9 | Water | 2599 | 519 |
- Urban areas: cities on Reunion Island are not as dense as those in Europe. In addition, they contain many green spaces, such as tree rows that provide shade and cooler locations. This green density may influence a pixel’s associated NDVI values.
- Other built-up surfaces: these are possibly related to greenhouses, although this is an assumption. Greenhouses are frequently built in green spaces, which could account for greater intra-class variation than in urban areas.
- Forests: the NDVI values of the forests are stable because of the tropical climate. Thus, the time series that reflects the forests exhibits high but consistent NDVI values.
- Sparse vegetation: pixels that do not fit into any of the other classes are grouped together here. Thus, a range of profiles with intermediate NDVI values is included under this class.
- Rocks and bare soil: this class has low NDVI values (close to or below 0). The NDVI value is predicted to vary only slightly for a given pixel.
- Grassland: this class groups together grasslands that are mowed and grazed. The regrowth rate may vary depending on irrigation, resulting in a variety of time series profiles.
- Sugarcane crops: since the NDVI values decrease following harvest, this time series has the most easily distinguishable characteristics. On Reunion Island, sugarcane fields occupy approximately 60% of the farmed land, and their harvesting season lasts for approximately a month.
- Other crops: orchards and a variety of crops (e.g., pineapples, bananas, lychees, and mangos) make up this class. Due to the variety of crops, this class has considerable intra-class variability.
- Water: the considerable intra-class variability in this class can be attributed to turbidity. Turbidity is a measure of water clarity and can affect NDVI values; the index itself can be derived from the red and near-infrared bands, as sketched below.
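Because several of these class profiles are characterized through NDVI, the index can be recovered directly from the red and near-infrared reflectances. In the sketch below the band positions are assumptions, since the extract does not state which of the ten features correspond to the red and NIR bands.

```python
import numpy as np

def ndvi(series, red_idx=2, nir_idx=3):
    """NDVI = (NIR - Red) / (NIR + Red) for one pixel's (time, bands) series.

    red_idx and nir_idx are hypothetical band positions; adjust them to the
    actual band ordering of the dataset in use.
    """
    red = series[:, red_idx].astype(float)
    nir = series[:, nir_idx].astype(float)
    return (nir - red) / (nir + red + 1e-9)  # epsilon guards against division by zero

# Example with a random (23, 10) series standing in for a real pixel.
pixel_series = np.random.rand(23, 10)
print(ndvi(pixel_series))  # 23 NDVI values, one per acquisition date
```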
3. Models Implemented
- Pre-calculating temporal features (which has little effect and requires low-level feature engineering);
- Using the nearest neighbor algorithm with temporal similarity features (computationally expensive; see the sketch below).
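To make the cost of the second option concrete, a one-nearest-neighbor classifier under dynamic time warping can be sketched as follows; tslearn is used purely as an illustrative library choice, not one named by the paper, and the arrays are random placeholders.

```python
import numpy as np
from tslearn.neighbors import KNeighborsTimeSeriesClassifier

# Toy stand-ins for SITS pixels: (pixels, 23 dates, 10 bands) and class labels 1-9.
X_train = np.random.rand(200, 23, 10)
y_train = np.random.randint(1, 10, size=200)
X_test = np.random.rand(50, 23, 10)

# 1-NN with DTW: every test pixel is aligned against every training pixel, and
# each alignment is itself a dynamic-programming problem, which is why this
# approach scales poorly to the millions of pixels in an operational SITS.
clf = KNeighborsTimeSeriesClassifier(n_neighbors=1, metric="dtw")
clf.fit(X_train, y_train)
print(clf.predict(X_test)[:10])
```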
3.1. Gated Recurrent Unit
3.2. Temporal CNN
3.3. GRU + Temporal CNN
3.4. Attention on Temporal CNN and GRU
3.5. Proposed Framework
- Multivariate model: applies 1D convolutions to the full multivariate time series. It consists of three convolutional layers with a filter size of 3 and a ReLU activation function, with no pooling layers in between.
- Univariate model: this branch consists of 10 univariate sub-models, one per band feature, each applying 1D convolutions to a single feature before the results are concatenated. Each sub-model comprises two convolutional layers, each followed by max-pooling and flatten layers, and the outputs of both levels are concatenated to capture features at different scales.
- Pixel coordinates: this branch passes the preprocessed and scaled pixel coordinates directly to the final set of fully connected layers. The outputs of these three feature extraction branches are then concatenated and classified by the usual fully connected layers, as sketched below.
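A minimal Keras sketch of this three-branch layout is given below. The branch structure follows the description above (three convolutional layers of filter size 3 with ReLU and no pooling for the multivariate branch; two convolution, max-pooling, and flatten stages per univariate branch with both levels concatenated; a coordinate branch feeding the shared fully connected layers), while the filter counts, dense-layer sizes, and optimizer are assumptions not specified in this extract.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

N_DATES, N_BANDS, N_CLASSES = 23, 10, 9

# Multivariate branch: three Conv1D layers (kernel size 3, ReLU, no pooling).
series_in = layers.Input(shape=(N_DATES, N_BANDS), name="series")
x = layers.Conv1D(64, 3, activation="relu")(series_in)  # 64 filters is an assumption
x = layers.Conv1D(64, 3, activation="relu")(x)
x = layers.Conv1D(64, 3, activation="relu")(x)
multivariate_out = layers.Flatten()(x)

# Univariate branches: one small Conv1D stack per band, concatenating both levels.
univariate_outs = []
for b in range(N_BANDS):
    band = layers.Lambda(lambda t, i=b: t[:, :, i:i + 1])(series_in)  # single-band series
    c1 = layers.Conv1D(16, 3, activation="relu")(band)
    p1 = layers.MaxPooling1D(pool_size=2)(c1)
    f1 = layers.Flatten()(p1)
    c2 = layers.Conv1D(16, 3, activation="relu")(p1)
    p2 = layers.MaxPooling1D(pool_size=2)(c2)
    f2 = layers.Flatten()(p2)
    univariate_outs.append(layers.Concatenate()([f1, f2]))  # features from both levels
univariate_out = layers.Concatenate()(univariate_outs)

# Coordinate branch: scaled pixel coordinates go straight to the dense layers.
coords_in = layers.Input(shape=(2,), name="coords")

# Fusion and classification with fully connected layers.
merged = layers.Concatenate()([multivariate_out, univariate_out, coords_in])
d = layers.Dense(256, activation="relu")(merged)  # dense sizes are assumptions
d = layers.Dense(128, activation="relu")(d)
out = layers.Dense(N_CLASSES, activation="softmax")(d)

model = Model(inputs=[series_in, coords_in], outputs=out)
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```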
4. Experiments and Results
4.1. Evaluation Metrics
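The per-class precision (P), recall (R), F1 score, and support (S) reported in the tables that follow, together with the overall accuracy and the macro and weighted averages, can be reproduced with scikit-learn's classification report (used here as an illustration, not necessarily the authors' tooling); the label arrays below are random placeholders for the actual test-set predictions.

```python
import numpy as np
from sklearn.metrics import classification_report

# Placeholder ground truth and predictions for the nine land cover classes (IDs 1-9).
rng = np.random.default_rng(0)
y_true = rng.integers(1, 10, size=17_973)
y_pred = rng.integers(1, 10, size=17_973)

# Prints per-class precision, recall, F1, and support, plus overall accuracy and
# the macro and weighted averages, in the same layout as the tables below.
print(classification_report(y_true, y_pred, digits=2))
```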
4.2. Bidirectional GRU
4.3. Temporal CNN
4.4. GRU + Temporal CNN
4.5. Attention on Temporal CNN
4.6. Attention on Temporal CNN + GRU
4.7. Univariate + Multivariate + Coordinates
4.8. Univariate + Multivariate
4.9. Univariate + Multivariate (LSTM) + Coordinates
4.10. Qualitative Comparisons
5. Discussion
- Constrained model interpretability, owing to the absence of a direct causal relationship between inputs and outputs, i.e., the difficulty of determining which variables are most significant and helpful for the classification task.
- High computational demands, caused by the growth in data volume as the number of features and dimensions the models extract increases, which in turn requires more training time and memory.
- A high demand for reference data, brought on by the intra-class variability (heterogeneity) present in most LCC datasets.
6. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
Appendix A
References
Per-class precision (P), recall (R), F1 score (F1), and support (S) on the test set for, from left to right, the GRU, Temporal CNN, GRU + Temporal CNN, and attention on Temporal CNN models.

| Class ID | P | R | F1 | S | P | R | F1 | S | P | R | F1 | S | P | R | F1 | S |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 1 (UA) | 0.59 | 0.92 | 0.72 | 4000 | 0.90 | 0.92 | 0.91 | 4000 | 0.93 | 0.92 | 0.92 | 4000 | 0.91 | 0.92 | 0.92 | 4000 |
| 2 (OBS) | 0.21 | 0.09 | 0.12 | 647 | 0.70 | 0.64 | 0.67 | 647 | 0.73 | 0.66 | 0.69 | 647 | 0.68 | 0.64 | 0.66 | 647 |
| 3 (Fr) | 0.90 | 0.71 | 0.79 | 4000 | 0.87 | 0.92 | 0.89 | 4000 | 0.88 | 0.92 | 0.90 | 4000 | 0.91 | 0.91 | 0.91 | 4000 |
| 4 (SV) | 0.87 | 0.63 | 0.73 | 3398 | 0.91 | 0.91 | 0.91 | 3398 | 0.92 | 0.93 | 0.92 | 3398 | 0.94 | 0.94 | 0.94 | 3398 |
| 5 (RBS) | 0.73 | 0.80 | 0.76 | 2588 | 0.93 | 0.95 | 0.94 | 2588 | 0.95 | 0.95 | 0.95 | 2588 | 0.95 | 0.96 | 0.95 | 2588 |
| 6 (GL) | 0.39 | 0.64 | 0.48 | 1136 | 0.84 | 0.81 | 0.82 | 1136 | 0.88 | 0.81 | 0.84 | 1136 | 0.84 | 0.84 | 0.84 | 1136 |
| 7 (SC) | 0.88 | 0.48 | 0.62 | 1531 | 0.97 | 0.89 | 0.93 | 1531 | 0.93 | 0.96 | 0.94 | 1531 | 0.94 | 0.94 | 0.94 | 1531 |
| 8 (OC) | 0.00 | 0.00 | 0.00 | 154 | 0.43 | 0.33 | 0.37 | 154 | 0.43 | 0.34 | 0.38 | 154 | 0.56 | 0.40 | 0.47 | 154 |
| 9 (Wr) | 0.70 | 0.37 | 0.48 | 519 | 0.92 | 0.75 | 0.83 | 519 | 0.85 | 0.84 | 0.84 | 519 | 0.87 | 0.85 | 0.86 | 519 |
| Accuracy | | | 0.69 | 17,973 | | | 0.89 | 17,973 | | | 0.91 | 17,973 | | | 0.91 | 17,973 |
| Macro avg | 0.58 | 0.52 | 0.52 | 17,973 | 0.83 | 0.79 | 0.81 | 17,973 | 0.83 | 0.81 | 0.82 | 17,973 | 0.84 | 0.82 | 0.83 | 17,973 |
| Weighted avg | 0.73 | 0.69 | 0.69 | 17,973 | 0.89 | 0.89 | 0.89 | 17,973 | 0.90 | 0.91 | 0.90 | 17,973 | 0.91 | 0.91 | 0.91 | 17,973 |
Per-class precision (P), recall (R), F1 score (F1), and support (S) on the test set for, from left to right, the attention on Temporal CNN + GRU, Univariate + Multivariate + Coordinates, Univariate + Multivariate, and Univariate + Multivariate (LSTM) + Coordinates models.

| Class ID | P | R | F1 | S | P | R | F1 | S | P | R | F1 | S | P | R | F1 | S |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 1 (UA) | 0.91 | 0.91 | 0.91 | 4000 | 0.93 | 0.93 | 0.93 | 4000 | 0.95 | 0.92 | 0.93 | 4000 | 0.93 | 0.92 | 0.93 | 4000 |
| 2 (OBS) | 0.65 | 0.60 | 0.63 | 647 | 0.78 | 0.76 | 0.77 | 647 | 0.78 | 0.82 | 0.80 | 647 | 0.78 | 0.74 | 0.76 | 647 |
| 3 (Fr) | 0.91 | 0.91 | 0.91 | 4000 | 0.94 | 0.92 | 0.93 | 4000 | 0.93 | 0.93 | 0.93 | 4000 | 0.94 | 0.92 | 0.93 | 4000 |
| 4 (SV) | 0.92 | 0.93 | 0.93 | 3398 | 0.96 | 0.96 | 0.96 | 3398 | 0.95 | 0.95 | 0.95 | 3398 | 0.95 | 0.96 | 0.95 | 3398 |
| 5 (RBS) | 0.95 | 0.95 | 0.95 | 2588 | 0.97 | 0.98 | 0.97 | 2588 | 0.96 | 0.98 | 0.97 | 2588 | 0.96 | 0.97 | 0.97 | 2588 |
| 6 (GL) | 0.79 | 0.84 | 0.82 | 1136 | 0.88 | 0.91 | 0.90 | 1136 | 0.90 | 0.91 | 0.90 | 1136 | 0.89 | 0.91 | 0.90 | 1136 |
| 7 (SC) | 0.94 | 0.95 | 0.94 | 1531 | 0.95 | 0.95 | 0.95 | 1531 | 0.96 | 0.95 | 0.96 | 1531 | 0.94 | 0.96 | 0.95 | 1531 |
| 8 (OC) | 0.55 | 0.35 | 0.43 | 154 | 0.52 | 0.63 | 0.57 | 154 | 0.47 | 0.63 | 0.54 | 154 | 0.46 | 0.63 | 0.53 | 154 |
| 9 (Wr) | 0.85 | 0.83 | 0.84 | 519 | 0.87 | 0.89 | 0.88 | 519 | 0.88 | 0.89 | 0.89 | 519 | 0.88 | 0.87 | 0.88 | 519 |
| Accuracy | | | 0.90 | 17,973 | | | 0.93 | 17,973 | | | 0.93 | 17,973 | | | 0.93 | 17,973 |
| Macro avg | 0.83 | 0.81 | 0.82 | 17,973 | 0.87 | 0.88 | 0.87 | 17,973 | 0.87 | 0.89 | 0.87 | 17,973 | 0.86 | 0.88 | 0.87 | 17,973 |
| Weighted avg | 0.90 | 0.90 | 0.90 | 17,973 | 0.93 | 0.93 | 0.93 | 17,973 | 0.93 | 0.93 | 0.93 | 17,973 | 0.93 | 0.93 | 0.93 | 17,973 |