In-Season Crop Type Detection by Combining Sentinel-1A and Sentinel-2 Imagery Based on the CNN Model
Abstract
1. Introduction
2. Materials
2.1. Study Site
2.2. Ground Reference Data
2.3. Sentinel-1A/2 Data and Preprocessing
- (1) Atmospheric correction: We used the Sen2Cor plugin v2.5.5 to process Sentinel-2 reflectance images from Top-of-Atmosphere (TOA) Level 1C to Bottom-of-Atmosphere (BOA) Level 2A, following the Sentinel-2 for Agriculture (Sen2-Agri) protocols [30] (http://www.esa-sen2agri.org/, accessed on 24 November 2020).
- (2) Cloud masking: We used the Function of mask (Fmask) 4.0 algorithm [31] to mask clouds and cloud shadows (the cloud probability threshold was set to 50%). Compared with the cloud confidence layers output by Sen2Cor, the Fmask 4.0 results were more accurate in our study area.
- (3) Resampling: We resampled the RE1, RE2, RE3, NIR2, SWIR1, and SWIR2 bands from step (1) and the cloud masks from step (2) to 10 m.
- (4) Gap filling: Because linear interpolation is usually appropriate only for short gaps [32], we adopted the Savitzky–Golay filter to reconstruct each band value, using a moving window of seven observations and a filter order of 2 [33]. Note that we also used S-2A/B images observed in March and October 2019 to compensate for missing values in early April and late September.
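The masking, resampling, and gap-filling steps above can be sketched per pixel as follows. This is an illustrative sketch, not the authors' code: the helper names and array layouts are assumptions, and nearest-neighbour upsampling stands in for whichever resampling method the authors used.

```python
import numpy as np
from scipy.signal import savgol_filter

def apply_cloud_mask(band, cloud_mask):
    """Set cloudy pixels to NaN so they are treated as gaps later."""
    out = band.astype(float).copy()
    out[cloud_mask] = np.nan
    return out

def resample_to_10m(band_20m):
    """Upsample a 20 m band onto the 10 m grid (nearest neighbour)."""
    return np.repeat(np.repeat(band_20m, 2, axis=0), 2, axis=1)

def fill_gaps(series, window=7, order=2):
    """Fill gaps in a per-pixel time series by linear interpolation,
    then smooth with a Savitzky-Golay filter (window of 7 observations,
    order 2, as in step (4))."""
    t = np.arange(len(series))
    valid = ~np.isnan(series)
    interp = np.interp(t, t[valid], series[valid])
    return savgol_filter(interp, window_length=window, polyorder=order)
```

In practice `fill_gaps` would be applied to each band of each pixel's time series after masking and resampling.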
3. Methodology and Experiments
3.1. CNN for Multivariate Time Series
3.2. The Dual-1DCNN
3.3. Evaluation
3.4. Experimental Design
- (1) For the Dual-1DCNN model, the number of epochs was set to 10,000 with a batch size of 128 and the Adam optimizer [39]. We set an initial learning rate and applied a global adaptation during each epoch: if the training cross-entropy error did not decrease for 100 epochs, the learning rate was reduced by 20% for the next epoch, down to a minimum learning rate. In addition, each training run was monitored through the ModelCheckpoint callback [40], and the model was saved whenever a better model on the training set was found. Apart from differences in the input data, the training process of the Mono-1DCNN was the same as that of the Dual-1DCNN.
- (2) For the SVM, we used the radial basis function (RBF) kernel (RBF-SVM), which requires two hyperparameters, the penalty parameter C and the kernel parameter γ, to be tuned. During the optimization process, both C and γ were selected by grid search over candidate value sets.
- (3) The primary parameters of the RF model were the number of predictors considered at each decision-tree node split (max_features) and the number of decision trees (n_estimators). Because the features in this study were all channels of each input, we set max_features to its default value of √b, where b is the number of channels [41]. Additionally, the grid-search range for n_estimators varied from 100 to 10,000 with an interval of 100 [42].
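The learning-rate adaptation described in step (1) amounts to a reduce-on-plateau schedule. A minimal sketch follows; the numeric values of `init_lr` and `min_lr` are placeholders, since the paper's exact values are not reproduced in this excerpt, while the 20% reduction and 100-epoch patience follow the text.

```python
class ReduceLROnPlateau:
    """Cut the learning rate by 20% (factor 0.8) whenever the training
    loss has not improved for `patience` epochs, down to `min_lr`.
    init_lr and min_lr here are illustrative values only."""

    def __init__(self, init_lr=1e-3, min_lr=1e-6, patience=100, factor=0.8):
        self.lr = init_lr
        self.min_lr = min_lr
        self.patience = patience
        self.factor = factor
        self.best = float("inf")
        self.wait = 0

    def step(self, loss):
        """Call once per epoch with the training loss; returns the lr."""
        if loss < self.best:
            self.best = loss
            self.wait = 0
        else:
            self.wait += 1
            if self.wait >= self.patience:
                self.lr = max(self.lr * self.factor, self.min_lr)
                self.wait = 0
        return self.lr
```

Deep-learning frameworks ship equivalent callbacks (e.g. a reduce-on-plateau scheduler plus a model checkpoint), so in practice this logic would not be hand-rolled.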
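The SVM and RF searches in steps (2) and (3) reduce to grid definitions like the following sketch. The C and γ candidate sets are placeholders (the paper's exact sets are not reproduced in this excerpt); the max_features and n_estimators rules follow the text.

```python
import math

# Placeholder log-spaced candidate sets for the RBF-SVM grid search;
# the paper's actual sets for C and gamma are not reproduced here.
svm_grid = {
    "C": [10.0 ** k for k in range(-2, 5)],
    "gamma": [10.0 ** k for k in range(-4, 2)],
}

def rf_grid(n_channels):
    """RF settings as stated in the text: max_features = sqrt(b) for
    b input channels, and n_estimators searched from 100 to 10,000
    with an interval of 100."""
    return {
        "max_features": round(math.sqrt(n_channels)),
        "n_estimators": list(range(100, 10001, 100)),
    }
```

These dictionaries would then be passed to a grid-search routine (e.g. cross-validated search in a machine-learning library) to select the best setting per classifier.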
4. Results
4.1. Temporal Profiles of S1A and S2 Data
4.2. Overall Assessment of Classification Accuracy
4.3. Early Detection of Crop Types
5. Discussion
5.1. Performance of the Dual-1DCNN Algorithm
5.2. Limitations and Future Work
6. Conclusions
Author Contributions
Funding
Conflicts of Interest
References
- Kolotii, A.; Kussul, N.; Shelestov, A.; Skakun, S.; Yailymov, B.; Basarab, R.; Lavreniuk, M.; Oliinyk, T.; Ostapenko, V. Comparison of biophysical and satellite predictors for wheat yield forecasting in Ukraine. Int. Arch. Photogramm. Remote Sens. Spatial Inf. Sci. 2015, XL-7/W3, 39–44.
- Lobell, D.B.; Thau, D.; Seifert, C.; Engle, E.; Little, B. A scalable satellite-based crop yield mapper. Remote Sens. Environ. 2015, 164, 324–333.
- Skakun, S.; Franch, B.; Vermote, E.; Roger, J.-C.; Becker-Reshef, I.; Justice, C.; Kussul, N. Early season large-area winter crop mapping using MODIS NDVI data, growing degree days information and a Gaussian mixture model. Remote Sens. Environ. 2017, 195, 244–258.
- Homer, C.; Huang, C.; Yang, L.; Wylie, B.K.; Coan, M. Development of a 2001 national land-cover database for the United States. Photogramm. Eng. Remote Sens. 2004, 70, 829–840.
- Khan, A.; Hansen, M.C.; Potapov, P.; Stehman, S.V.; Chatta, A.A. Landsat-based wheat mapping in the heterogeneous cropping system of Punjab, Pakistan. Int. J. Remote Sens. 2016, 37, 1391–1410.
- Pôças, I.; Cunha, M.; Marcal, A.R.; Pereira, L.S. An evaluation of changes in a mountainous rural landscape of Northeast Portugal using remotely sensed data. Landsc. Urban Plan. 2011, 101, 253–261.
- Skriver, H.; Mattia, F.; Satalino, G.; Balenzano, A.; Pauwels, V.R.; Verhoest, N.E.; Davidson, M. Crop classification using short-revisit multitemporal SAR data. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2011, 4, 423–431.
- Wardlow, B.D.; Egbert, S.L.; Kastens, J.H. Analysis of time-series MODIS 250 m vegetation index data for crop classification in the US Central Great Plains. Remote Sens. Environ. 2007, 108, 290–310.
- Lebourgeois, V.; Dupuy, S.; Vintrou, É.; Ameline, M.; Butler, S.; Bégué, A. A combined random forest and OBIA classification scheme for mapping smallholder agriculture at different nomenclature levels using multisource data (simulated Sentinel-2 time series, VHRS and DEM). Remote Sens. 2017, 9, 259.
- McCarty, J.; Neigh, C.; Carroll, M.; Wooten, M. Extracting smallholder cropped area in Tigray, Ethiopia with wall-to-wall sub-meter WorldView and moderate resolution Landsat 8 imagery. Remote Sens. Environ. 2017, 202, 142–151.
- Zhao, H.; Chen, Z.; Jiang, H.; Jing, W.; Sun, L.; Feng, M. Evaluation of three deep learning models for early crop classification using Sentinel-1A imagery time series—A case study in Zhanjiang, China. Remote Sens. 2019, 11, 2673.
- Drusch, M.; Del Bello, U.; Carlier, S.; Colin, O.; Fernandez, V.; Gascon, F.; Hoersch, B.; Isola, C.; Laberinti, P.; Martimort, P. Sentinel-2: ESA’s optical high-resolution mission for GMES operational services. Remote Sens. Environ. 2012, 120, 25–36.
- Dalla Mura, M.; Prasad, S.; Pacifici, F.; Gamba, P.; Chanussot, J.; Benediktsson, J.A. Challenges and opportunities of multimodality and data fusion in remote sensing. Proc. IEEE 2015, 103, 1585–1601.
- McNairn, H.; Kross, A.; Lapen, D.; Caves, R.; Shang, J. Early season monitoring of corn and soybeans with TerraSAR-X and RADARSAT-2. Int. J. Appl. Earth Obs. Geoinf. 2014, 28, 252–259.
- Cai, Y.; Guan, K.; Peng, J.; Wang, S.; Seifert, C.; Wardlow, B.; Li, Z. A high-performance and in-season classification system of field-level crop types using time-series Landsat data and a machine learning approach. Remote Sens. Environ. 2018, 210, 35–47.
- Sonobe, R.; Yamaya, Y.; Tani, H.; Wang, X.; Kobayashi, N.; Mochizuki, K.-i. Crop classification from Sentinel-2-derived vegetation indices using ensemble learning. J. Appl. Remote Sens. 2018, 12, 026019.
- Vreugdenhil, M.; Wagner, W.; Bauer-Marschallinger, B.; Pfeil, I.; Teubner, I.; Rüdiger, C.; Strauss, P. Sensitivity of Sentinel-1 backscatter to vegetation dynamics: An Austrian case study. Remote Sens. 2018, 10, 1396.
- Ndikumana, E.; Ho Tong Minh, D.; Baghdadi, N.; Courault, D.; Hossard, L. Deep recurrent neural network for agricultural classification using multitemporal SAR Sentinel-1 for Camargue, France. Remote Sens. 2018, 10, 1217.
- Zhong, L.; Hu, L.; Zhou, H. Deep learning based multi-temporal crop classification. Remote Sens. Environ. 2019, 221, 430–443.
- Saha, S.; Mou, L.; Qiu, C.; Zhu, X.X.; Bovolo, F.; Bruzzone, L. Unsupervised deep joint segmentation of multitemporal high-resolution images. IEEE Trans. Geosci. Remote Sens. 2020, 58, 8780–8792.
- Yuan, Q.; Shen, H.; Li, T.; Li, Z.; Li, S.; Jiang, Y.; Xu, H.; Tan, W.; Yang, Q.; Wang, J. Deep learning in environmental remote sensing: Achievements and challenges. Remote Sens. Environ. 2020, 241, 111716.
- Zhang, F.; Yao, X.; Tang, H.; Yin, Q.; Hu, Y.; Lei, B. Multiple mode SAR raw data simulation and parallel acceleration for Gaofen-3 mission. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2018, 11, 2115–2126.
- Ismail Fawaz, H.; Forestier, G.; Weber, J.; Idoumghar, L.; Muller, P.-A. Deep learning for time series classification: A review. Data Min. Knowl. Discov. 2019, 33, 917–963.
- Zheng, Y.; Liu, Q.; Chen, E.; Ge, Y.; Zhao, J.L. Time series classification using multi-channels deep convolutional neural networks. In Proceedings of the Web-Age Information Management: 15th International Conference, WAIM 2014, Macau, China, 16–18 June 2014; pp. 298–310.
- Cho, K.; Van Merriënboer, B.; Bahdanau, D.; Bengio, Y. On the properties of neural machine translation: Encoder-decoder approaches. arXiv 2014, arXiv:1409.1259.
- Liao, C.; Wang, J.; Xie, Q.; Baz, A.A.; Huang, X.; Shang, J.; He, Y. Synergistic use of multi-temporal RADARSAT-2 and VENµS data for crop classification based on 1D convolutional neural network. Remote Sens. 2020, 12, 832.
- Zhao, W.; Qu, Y.; Chen, J.; Yuan, Z. Deeply synergistic optical and SAR time series for crop dynamic monitoring. Remote Sens. Environ. 2020, 247, 111952.
- Ienco, D.; Interdonato, R.; Gaetano, R.; Minh, D.H.T. Combining Sentinel-1 and Sentinel-2 Satellite Image Time Series for land cover mapping via a multi-source deep learning architecture. ISPRS J. Photogramm. Remote Sens. 2019, 158, 11–22.
- Sak, H.; Senior, A.; Beaufays, F. Long short-term memory recurrent neural network architectures for large scale acoustic modeling. In Proceedings of the Fifteenth Annual Conference of the International Speech Communication Association, Singapore, 14–18 September 2014.
- Bontemps, S.; Arias, M.; Cara, C.; Dedieu, G.; Guzzonato, E.; Hagolle, O.; Inglada, J.; Matton, N.; Morin, D.; Popescu, R. Building a data set over 12 globally distributed sites to support the development of agriculture monitoring applications with Sentinel-2. Remote Sens. 2015, 7, 16062–16090.
- Qiu, S.; Zhu, Z.; He, B. Fmask 4.0: Improved cloud and cloud shadow detection in Landsats 4–8 and Sentinel-2 imagery. Remote Sens. Environ. 2019, 231, 111205.
- Kandasamy, S.; Baret, F.; Verger, A.; Neveux, P.; Weiss, M. A comparison of methods for smoothing and gap filling time series of remote sensing observations: Application to MODIS LAI products. Biogeosciences 2012, 10, 4055–4071.
- Chen, J.; Jönsson, P.; Tamura, M.; Gu, Z.; Matsushita, B.; Eklundh, L. A simple method for reconstructing a high-quality NDVI time-series data set based on the Savitzky–Golay filter. Remote Sens. Environ. 2004, 91, 332–344.
- LeCun, Y.; Bengio, Y.; Hinton, G. Deep learning. Nature 2015, 521, 436–444.
- Krizhevsky, A.; Sutskever, I.; Hinton, G.E. ImageNet classification with deep convolutional neural networks. Commun. ACM 2017, 60, 84–90.
- Zeiler, M.D.; Fergus, R. Stochastic pooling for regularization of deep convolutional neural networks. arXiv 2013, arXiv:1301.3557.
- Ioffe, S.; Szegedy, C. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In Proceedings of the International Conference on Machine Learning, Lille, France, 6–11 July 2015; pp. 448–456.
- Hu, W.; Huang, Y.; Wei, L.; Zhang, F.; Li, H. Deep convolutional neural networks for hyperspectral image classification. J. Sens. 2015, 2015, 258619.
- Kingma, D.P.; Ba, J. Adam: A method for stochastic optimization. arXiv 2014, arXiv:1412.6980.
- Lu, L.; Meng, X.; Mao, Z.; Karniadakis, G.E. DeepXDE: A deep learning library for solving differential equations. SIAM Rev. 2021, 63, 208–228.
- Goldstein, B.A.; Polley, E.C.; Briggs, F.B. Random forests for genetic association studies. Stat. Appl. Genet. Mol. Biol. 2011, 10, 32.
- Probst, P.; Wright, M.; Boulesteix, A. Hyperparameters and tuning strategies for random forest. WIREs Data Min. Knowl. Discov. 2019, 9, e1301.
- Blickensdörfer, L.; Schwieder, M.; Pflugmacher, D.; Nendel, C.; Erasmi, S.; Hostert, P. Mapping of crop types and crop sequences with combined time series of Sentinel-1, Sentinel-2 and Landsat 8 data for Germany. Remote Sens. Environ. 2022, 269, 112831.
- Hatami, N.; Gavet, Y.; Debayle, J. Classification of time-series images using deep convolutional neural networks. In Proceedings of the Tenth International Conference on Machine Vision (ICMV 2017), Vienna, Austria, 13–15 November 2017; pp. 242–249.
- Srivastava, N.; Hinton, G.; Krizhevsky, A.; Sutskever, I.; Salakhutdinov, R. Dropout: A simple way to prevent neural networks from overfitting. J. Mach. Learn. Res. 2014, 15, 1929–1958.
- Wang, H.; Wang, Y.; Zhang, Q.; Xiang, S.; Pan, C. Gated convolutional neural network for semantic segmentation in high-resolution images. Remote Sens. 2017, 9, 446.
- Kohavi, R. A study of cross-validation and bootstrap for accuracy estimation and model selection. In Proceedings of the International Joint Conference on Artificial Intelligence, Montreal, QC, Canada, 20–25 August 1995; pp. 1137–1145.
- Xie, L.; Zhang, H.; Li, H.; Wang, C. A unified framework for crop classification in southern China using fully polarimetric, dual polarimetric, and compact polarimetric SAR data. Int. J. Remote Sens. 2015, 36, 3798–3818.
- Douzas, G.; Bacao, F.; Fonseca, J.; Khudinyan, M. Imbalanced learning in land cover classification: Improving minority classes’ prediction accuracy using the Geometric SMOTE algorithm. Remote Sens. 2019, 11, 3040.
- Sun, F.; Fang, F.; Wang, R.; Wan, B.; Guo, Q.; Li, H.; Wu, X. An impartial semi-supervised learning strategy for imbalanced classification on VHR images. Sensors 2020, 20, 6699.
Class label | Class type | Number
---|---|---
1 | forest | 289
2 | summer maize | 897
3 | cotton | 385
4 | fruit tree | 286
5 | common yam rhizome | 85
Total | | 1942
DOY | 103 | 106 | 108 | 113 | 118 | 123 | 128 | 130 | 133 | 138 | 142 | 143 | 148 | 153 | 154 | 158
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
L1 | 0 | 1 | 1 | 1 | 2 | 2 | 2 | 3 | 3 | 3 | 4 | 4 | 4 | 4 | 5 | 5
L2 | 1 | 1 | 2 | 3 | 4 | 5 | 6 | 6 | 7 | 8 | 8 | 9 | 10 | 11 | 11 | 12

DOY | 163 | 166 | 168 | 173 | 178 | 183 | 188 | 190 | 193 | 198 | 202 | 203 | 208 | 213 | 214 | 218
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
L1 | 5 | 6 | 6 | 6 | 7 | 7 | 7 | 8 | 8 | 8 | 9 | 9 | 9 | 9 | 10 | 10
L2 | 13 | 13 | 14 | 15 | 16 | 17 | 18 | 18 | 19 | 20 | 20 | 21 | 22 | 23 | 23 | 24

DOY | 223 | 226 | 228 | 233 | 238 | 243 | 248 | 250 | 253 | 258 | 262 | 263 | 268 | 273
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
L1 | 10 | 11 | 11 | 11 | 12 | 12 | 12 | 13 | 13 | 13 | 14 | 14 | 14 | 14
L2 | 25 | 25 | 26 | 27 | 28 | 29 | 30 | 30 | 31 | 32 | 32 | 33 | 34 | 35
DOY | 103 | 106 | 108 | 113 | 118 | 123 | 128 | 130
---|---|---|---|---|---|---|---|---
Dual-1DCNN | 71.00 ± 1.90 | 72.09 ± 2.3 | 76.63 ± 3.41 | 80.02 ± 3.53 | 80.49 ± 3.03 | 81.46 ± 1.41 | 82.08 ± 1.27 | 81.93 ± 1.56
SVM | 77.86 ± 2.83 | 77.96 ± 2.75 | 79.2 ± 2.05 | 81.31 ± 2.85 | 81.00 ± 2.39 | 82.39 ± 1.94 | 82.65 ± 1.84 | 82.85 ± 1.65
RF | 76.67 ± 3.21 | 78.99 ± 3.68 | 79.46 ± 3.35 | 80.33 ± 2.72 | 81.1 ± 2.72 | 81.21 ± 2.46 | 81.51 ± 2.48 | 80.95 ± 2.65
Mono-1DCNN | 71 ± 1.90 | / | 76.85 ± 2.53 | 80.15 ± 3.01 | 80.36 ± 2.74 | 81.82 ± 2.01 | 82.05 ± 1.21 | /

DOY | 133 | 138 | 142 | 143 | 148 | 153 | 154 | 158
---|---|---|---|---|---|---|---|---
Dual-1DCNN | 82.65 ± 2.03 | 83.42 ± 1.83 | 83.27 ± 1.48 | 84.30 ± 1.23 | 85.32 ± 1.77 | 84.86 ± 1.46 | 85.53 ± 1.19 | 85.58 ± 2.14
SVM | 83.37 ± 1.84 | 83.99 ± 1.75 | 83.99 ± 1.24 | 84.24 ± 1.95 | 84.66 ± 1.58 | 84.65 ± 1.72 | 84.91 ± 1.57 | 85.53 ± 2.15
RF | 81.31 ± 3.04 | 83.01 ± 2.31 | 82.55 ± 2.00 | 83.21 ± 1.74 | 83.32 ± 1.02 | 83.27 ± 1.48 | 83.68 ± 1.92 | 84.04 ± 2.13
Mono-1DCNN | 82.54 ± 1.52 | 83.59 ± 1.95 | / | 84.36 ± 1.14 | 84.41 ± 1.32 | 85.13 ± 1.69 | / | 85.64 ± 1.58

DOY | 163 | 166 | 168 | 173 | 178 | 183 | 188 | 190
---|---|---|---|---|---|---|---|---
Dual-1DCNN | 85.69 ± 1.63 | 85.79 ± 1.15 | 86.20 ± 1.55 | 86.77 ± 2.15 | 86.10 ± 1.34 | 86.51 ± 1.95 | 86.15 ± 2.20 | 86.25 ± 1.49
SVM | 85.07 ± 1.80 | 85.43 ± 2.07 | 85.06 ± 1.87 | 85.17 ± 1.82 | 85.63 ± 1.72 | 85.53 ± 1.64 | 85.48 ± 1.61 | 85.43 ± 1.82
RF | 84.45 ± 2.50 | 84.40 ± 2.26 | 84.40 ± 2.38 | 84.55 ± 2.58 | 84.19 ± 2.72 | 84.04 ± 2.56 | 84.86 ± 2.87 | 84.92 ± 2.56
Mono-1DCNN | 85.38 ± 1.26 | / | 85.44 ± 1.75 | 86.41 ± 1.95 | 86.46 ± 2.08 | 86.67 ± 2.14 | 86.31 ± 1.56 | /

DOY | 193 | 198 | 202 | 203 | 208 | 213 | 214 | 218
---|---|---|---|---|---|---|---|---
Dual-1DCNN | 85.89 ± 2.13 | 86.72 ± 1.59 | 86.51 ± 1.64 | 86.05 ± 1.75 | 85.99 ± 0.97 | 86.72 ± 1.23 | 86.36 ± 1.39 | 86.72 ± 1.48
SVM | 85.79 ± 2.02 | 85.27 ± 1.03 | 85.69 ± 2.19 | 85.94 ± 1.19 | 85.89 ± 1.96 | 85.84 ± 2.15 | 86.10 ± 1.54 | 86.25 ± 1.39
RF | 84.35 ± 2.74 | 84.55 ± 2.81 | 84.76 ± 2.06 | 84.66 ± 3.19 | 84.86 ± 2.75 | 84.76 ± 2.30 | 84.66 ± 2.95 | 84.60 ± 2.36
Mono-1DCNN | 86.56 ± 1.73 | 86.3 ± 1.69 | / | 86.61 ± 1.22 | 86.2 ± 1.35 | 86.51 ± 2.42 | / | 86.58 ± 1.54

DOY | 223 | 226 | 228
---|---|---|---
Dual-1DCNN | 87.18 ± 1.14 | 87.23 ± 1.50 | 87.02 ± 1.56
SVM | 86.61 ± 1.34 | 86.56 ± 1.63 | 86.97 ± 0.69
RF | 84.66 ± 2.36 | 84.91 ± 2.34 | 85.74 ± 1.90
Mono-1DCNN | 86.79 ± 0.96 | / | 87.12 ± 0.77
Methods | Indices | Summer Maize | Cotton | Common Yam Rhizome
---|---|---|---|---
Dual-1DCNN | In-season detection DOY | 193 | 143 | 178
Dual-1DCNN | F1 value | 91.48 | 85.45 | 84.38
SVM | In-season detection DOY | 178 | 163 | 208
SVM | F1 value | 91.04 | 85.04 | 84.18
RF | In-season detection DOY | 188 | 188 | 250
RF | F1 value | 90.44 | 85.24 | 84.8
Mono-1DCNN | In-season detection DOY | 193 | 158 | 248
Mono-1DCNN | F1 value | 90.84 | 85.2 | 84.6
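The per-class F1 values reported above combine precision and recall. A minimal computation from per-class confusion counts (true positives, false positives, false negatives) is:

```python
def f1_score(tp, fp, fn):
    """F1 = 2PR/(P+R), with precision P = tp/(tp+fp) and
    recall R = tp/(tp+fn), computed from per-class counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Illustrative counts (not from the paper): 90 true positives,
# 10 false positives, 10 false negatives give F1 = 0.9 (i.e. 90%).
example = f1_score(90, 10, 10)
```

Multiplying by 100 gives the percentage form used in the table.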
© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite

Mao, M.; Zhao, H.; Tang, G.; Ren, J. In-Season Crop Type Detection by Combing Sentinel-1A and Sentinel-2 Imagery Based on the CNN Model. Agronomy 2023, 13, 1723. https://doi.org/10.3390/agronomy13071723