Identification of Crop Type in Crowdsourced Road View Photos with Deep Convolutional Neural Network
Abstract
1. Introduction
2. Materials and Methods
2.1. Dataset
2.1.1. Data Collection
2.1.2. Dataset Cleaning and Labeling
2.1.3. Training and Validation Subsets
2.2. Method
2.2.1. Data Augmentation
2.2.2. Convolutional Neural Networks
2.2.3. Ensemble Classification
3. Results
3.1. Accuracy of Single CNN
3.2. Identification Efficiency of Single CNN
3.3. Fusion Accuracy
4. Discussion
5. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
Appendix A
References
- United Nations. Transforming Our World: The 2030 Agenda for Sustainable Development. Available online: https://sustainabledevelopment.un.org/post2015/transformingourworld (accessed on 19 March 2020).
- United Nations. The Sustainable Development Goals Report 2019. Available online: https://unstats.un.org/sdgs/report/2019/ (accessed on 19 March 2020).
- FAO. Investing in Data for the SDGs: Why Good Numbers Matter. 2019. Available online: http://www.fao.org/partnerships/resource-partners/news/news-article/en/c/1200471/ (accessed on 19 March 2020).
- Rahman, M.; Di, L.; Yu, E.; Zhang, C.; Mohiuddin, H. In-Season Major Crop-Type Identification for US Cropland from Landsat Images Using Crop-Rotation Pattern and Progressive Data Classification. Agriculture 2019, 9, 17.
- Xiong, J.; Thenkabail, P.S.; Gumma, M.K.; Teluguntla, P.; Poehnelt, J.; Congalton, R.G.; Yadav, K.; Thau, D. Automated cropland mapping of continental Africa using Google Earth Engine cloud computing. ISPRS J. Photogramm. Remote Sens. 2017, 126, 225–244.
- Zhang, X.; Wu, B.; Ponce-Campos, G.; Zhang, M.; Chang, S.; Tian, F. Mapping up-to-date paddy rice extent at 10 m resolution in China through the integration of optical and synthetic aperture radar images. Remote Sens. 2018, 10, 1200.
- Fabrizio, R.; Fabrizio, P.; Olivier, A. S2 Prototype LC Map at 20 m of Africa 2016: ESA Users Feedback Compendium. 2018. Available online: https://un-spider.org/links-and-resources/data-sources/cci-land-cover-s2-prototype-land-cover-20m-map-africa (accessed on 10 October 2019).
- Oliphant, A.J.; Thenkabail, P.S.; Teluguntla, P.; Xiong, J.; Gumma, M.K.; Congalton, R.G.; Yadav, K. Mapping cropland extent of Southeast and Northeast Asia using multi-year time-series Landsat 30-m data using a random forest classifier on the Google Earth Engine Cloud. Int. J. Appl. Earth Obs. Geoinf. 2019, 81, 110–124.
- Nabil, M.; Zhang, M.; Bofana, J.; Wu, B.; Stein, A.; Dong, T.; Zeng, H.; Shang, J. Assessing factors impacting the spatial discrepancy of remote sensing based cropland products: A case study in Africa. Int. J. Appl. Earth Obs. Geoinf. 2020, 85, 102010.
- Fritz, S.; See, L.; Perger, C.; McCallum, I.; Schill, C.; Schepaschenko, D.; Duerauer, M.; Karner, M.; Dresel, C.; Laso-Bayas, J.C.; et al. A global dataset of crowdsourced land cover and land use reference data. Sci. Data 2017, 4, 170075.
- Leung, D.; Newsam, S. Exploring geotagged images for land-use classification. In Proceedings of the ACM Multimedia 2012 Workshop on Geotagging and Its Applications in Multimedia (GeoMM’12), Nara, Japan, 29 October–2 November 2012; ACM Press: New York, NY, USA, 2012; pp. 3–8.
- Wu, B.; Li, Q. Crop planting and type proportion method for crop acreage estimation of complex agricultural landscapes. Int. J. Appl. Earth Obs. Geoinf. 2012, 16, 101–112.
- Waldner, F.; Bellemans, N.; Hochman, Z.; Newby, T.; de Abelleyra, D.; Verón, S.R.; Bartalev, S.; Lavreniuk, M.; Kussul, N.; Le Maire, G.; et al. Roadside collection of training data for cropland mapping is viable when environmental and management gradients are surveyed. Int. J. Appl. Earth Obs. Geoinf. 2019, 80, 82–93.
- Wu, B.; Meng, J.; Li, Q. Remote sensing-based global crop monitoring: Experiences with China’s CropWatch system. Int. J. Digit. Earth 2013.
- Wu, B.; Tian, F.; Zhang, M.; Zeng, H.; Zeng, Y. Cloud services with big data provide a solution for monitoring and tracking sustainable development goals. Geogr. Sustain. 2020, 1, 25–32.
- Antoniou, V.; Fonte, C.C.; See, L.; Estima, J.; Arsanjani, J.J.; Lupia, F.; Minghini, M.; Foody, G.; Fritz, S. Investigating the feasibility of geo-tagged photographs as sources of land cover input data. ISPRS Int. J. Geo-Inf. 2016, 5, 64.
- LeCun, Y.; Bengio, Y.; Hinton, G. Deep learning. Nature 2015, 521, 436–444.
- Mohanty, S.P.; Hughes, D.P.; Salathé, M. Using Deep Learning for Image-Based Plant Disease Detection. Front. Plant Sci. 2016, 7, 1419.
- Chebrolu, N.; Lottes, P.; Schaefer, A.; Winterhalter, W.; Burgard, W.; Stachniss, C. Agricultural robot dataset for plant classification, localization and mapping on sugar beet fields. Int. J. Robot. Res. 2017.
- Raja, R.; Nguyen, T.T.; Slaughter, D.C.; Fennimore, S.A. Real-time weed-crop classification and localisation technique for robotic weed control in lettuce. Biosyst. Eng. 2020, 192, 257–274.
- Kamilaris, A.; Prenafeta-Boldu, F.X. Deep learning in agriculture: A survey. Comput. Electron. Agric. 2018, 147, 70–90.
- Deng, J.; Dong, W.; Socher, R.; Li, L.; Li, K.; Li, F. ImageNet: A Large-Scale Hierarchical Image Database. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Miami, FL, USA, 20–25 June 2009.
- iNaturalist.org. iNaturalist Research-Grade Observations. Occurrence Dataset 2019. Available online: https://doi.org/10.15468/ab3s5x (accessed on 10 October 2019).
- Hughes, D.P.; Salathé, M. An open access repository of images on plant health to enable the development of mobile disease diagnostics. arXiv 2015, arXiv:1511.08060.
- Olsen, A.; Konovalov, D.A.; Philippa, B.; Ridd, P.; Wood, J.C.; Johns, J.; Banks, W.; Girgenti, B.; Kenny, O.; Whinney, J.; et al. DeepWeeds: A Multiclass Weed Species Image Dataset for Deep Learning. Sci. Rep. 2019, 9, 2058.
- Zheng, Y.; Kong, J.; Jin, X.; Wang, X.; Su, T.; Zuo, M. CropDeep: The Crop Vision Dataset for Deep-Learning-Based Classification and Detection in Precision Agriculture. Sensors 2019, 19, 1058.
- Mwebaze, E.; Gebru, T.; Frome, A.; Nsumba, S.; Tusubira, J. iCassava 2019 fine-grained visual categorization challenge. arXiv 2019, arXiv:1908.02900.
- Ringland, J.; Bohm, M.; Baek, S. Characterization of food cultivation along roadside transects with Google Street View imagery and deep learning. Comput. Electron. Agric. 2019, 158, 36–50.
- Deus, E.; Silva, J.S.; Catry, F.X.; Rocha, M.; Moreira, F. Google Street View as an alternative method to car surveys in large-scale vegetation assessments. Environ. Monit. Assess. 2016, 188, 560.1–560.14.
- Yan, Y.; Ryu, Y. Exploring Google Street View with deep learning for crop type mapping. ISPRS J. Photogramm. Remote Sens. 2021, 171, 278–296.
- Wu, B.; Gommes, R.; Zhang, M.; Zeng, H.; Yan, N.; Zou, W.; Zheng, Y.; Zhang, N.; Chang, S.; Xing, Q.; et al. Global Crop Monitoring: A Satellite-Based Hierarchical Approach. Remote Sens. 2015, 7, 3907–3933.
- Tian, F.; Wu, B.; Zeng, H.; Zhang, X.; Xu, J. Efficient Identification of Corn Cultivation Area with Multitemporal Synthetic Aperture Radar and Optical Images in the Google Earth Engine Cloud Platform. Remote Sens. 2019, 11, 629.
- Fine, T.L. Feedforward Neural Network Methodology; Springer Science+Business Media: Berlin, Germany, 2006.
- Krizhevsky, A.; Sutskever, I.; Hinton, G. ImageNet Classification with Deep Convolutional Neural Networks. In Advances in Neural Information Processing Systems; Curran Associates Inc.: Red Hook, NY, USA, 2012; Volume 25.
- Sladojevic, S.; Arsenovic, M.; Anderla, A.; Culibrk, D.; Stefanovic, D. Deep neural networks based recognition of plant diseases by leaf image classification. Comput. Intell. Neurosci. 2016, 2016.
- Sørensen, R.A.; Rasmussen, J.; Nielsen, J.; Jørgensen, R. Thistle Detection Using Convolutional Neural Networks. In Proceedings of the EFITA Congress, Montpellier, France, 2–6 July 2017.
- Namin, S.T.; Esmaeilzadeh, M.; Najafi, M.; Brown, T.B.; Borevitz, J.O. Deep Phenotyping: Deep Learning for Temporal Phenotype/Genotype Classification. Plant Methods 2018, 14, 14.
- Chen, S.W.; Shivakumar, S.S.; Dcunha, S.; Das, J.; Okon, E.; Qu, C.; Kumar, V. Counting apples and oranges with deep learning: A data-driven approach. IEEE Rob. Autom. Lett. 2017, 2, 781–788.
- Maeda, H.; Sekimoto, Y.; Seto, T.; Kashiyama, T.; Omata, H. Road damage detection using deep neural networks with images captured through a smartphone. arXiv 2018, arXiv:1801.09454.
- Liu, S.; Tian, G.; Xu, Y. A novel scene classification model combining ResNet based transfer learning and data augmentation with a filter. Neurocomputing 2019, 338, 191–206.
- Xue, D.X.; Zhang, R.; Feng, H.; Wang, Y.L. CNN-SVM for microvascular morphological type recognition with data augmentation. J. Med. Biol. Eng. 2016, 36, 755–764.
- Montserrat, D.M.; Lin, Q.; Allebach, J.; Delp, E. Training object detection and recognition CNN models using data augmentation. Electron. Imaging 2017, 2017, 27–36.
- Ferentinos, K.P. Deep learning models for plant disease detection and diagnosis. Comput. Electron. Agric. 2018, 145, 311–318.
- He, K.; Zhang, X.; Ren, S.; Sun, J. Deep Residual Learning for Image Recognition. arXiv 2015, arXiv:1512.03385.
- Szegedy, C.; Ioffe, S.; Vanhoucke, V.; Alemi, A. Inception-v4, Inception-ResNet and the Impact of Residual Connections on Learning. arXiv 2016, arXiv:1602.07261.
- Huang, G.; Liu, Z.; van der Maaten, L.; Weinberger, K.Q. Densely Connected Convolutional Networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; arXiv:1608.06993.
- Sandler, M.; Howard, A.; Zhu, M.; Zhmoginov, A.; Chen, L. MobileNetV2: Inverted Residuals and Linear Bottlenecks. arXiv 2019, arXiv:1801.04381v4.
- Ma, N.; Zhang, X.; Zheng, H.; Sun, J. ShuffleNet V2: Practical Guidelines for Efficient CNN Architecture Design. arXiv 2018, arXiv:1807.11164.
- PaddlePaddle. Available online: https://github.com/PaddlePaddle (accessed on 19 July 2019).
- Ruder, S. An overview of gradient descent optimization algorithms. arXiv 2016, arXiv:1609.04747.
- Sridar, P.; Kumar, A.; Quinton, A.; Nanan, R.; Kim, J.; Krishnakumar, R. Decision Fusion-Based Fetal Ultrasound Image Plane Classification Using Convolutional Neural Networks. Ultrasound Med. Biol. 2019, 45, 1259–1273.
- Hall, D.; McCool, C.; Dayoub, F.; Sunderhauf, N.; Upcroft, B. Evaluation of features for leaf classification in challenging conditions. In Proceedings of the IEEE Winter Conference on Applications of Computer Vision (WACV), Waikoloa, HI, USA, 2015; pp. 797–804.
- Hajdu, A.; Hajdu, L.; Jonas, A.; Kovacs, L.; Toman, H. Generalizing the majority voting scheme to spatially constrained voting. IEEE Trans. Image Process. 2013, 22, 4182–4194.
- Sun, C.; Shrivastava, A.; Singh, S.; Gupta, A. Revisiting Unreasonable Effectiveness of Data in Deep Learning Era. arXiv 2017, arXiv:1707.02968.
- Hestness, J. Deep Learning Scaling is Predictable, Empirically. arXiv 2017, arXiv:1712.00409.
- Joulin, A. Learning Visual Features from Large Weakly Supervised Data. arXiv 2015, arXiv:1511.02251.
- Lei, S.; Zhang, H.; Wang, K.; Su, Z. How Training Data Affect the Accuracy and Robustness of Neural Networks for Image Classification. In Proceedings of the ICLR Conference, New Orleans, LA, USA, 6–9 May 2019.
Number of road view photos per category in the training and test subsets:

| Categories | Train | Test | Total |
|---|---|---|---|
| Bare land | 1543 | 327 | 1870 |
| Cotton | 249 | 59 | 308 |
| Maize | 4114 | 1044 | 5158 |
| Peanut | 608 | 139 | 747 |
| Rapeseed | 3695 | 871 | 4566 |
| Rice | 4646 | 1204 | 5850 |
| Sorghum | 112 | 37 | 149 |
| Soybean | 4257 | 1080 | 5337 |
| Sunflower | 199 | 44 | 243 |
| Tobacco | 297 | 70 | 367 |
| Vegetable | 3651 | 890 | 4541 |
| Wheat | 4030 | 951 | 4981 |
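The per-category counts above correspond to a roughly 80/20 train/test split within each crop type. A minimal sketch of a stratified split that preserves such proportions; since the paper's exact procedure is not reproduced here, the `(image_path, label)` input format, the 0.2 ratio, and the fixed seed are assumptions:

```python
import random
from collections import defaultdict

def stratified_split(samples, test_ratio=0.2, seed=42):
    """Split (image_path, label) pairs per category so every crop type
    keeps roughly the same train/test proportion, as in the table above."""
    by_label = defaultdict(list)
    for path, label in samples:
        by_label[label].append(path)

    rng = random.Random(seed)
    train, test = [], []
    for label, paths in by_label.items():
        rng.shuffle(paths)  # shuffle within the category only
        n_test = round(len(paths) * test_ratio)
        test += [(p, label) for p in paths[:n_test]]
        train += [(p, label) for p in paths[n_test:]]
    return train, test
```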
Per-category identification accuracy (%) of each single CNN on the test photos:

| Categories | InceptionV4 | DenseNet121 | ResNet50 | MobileNetV2 | ShuffleNetV2 |
|---|---|---|---|---|---|
| Bare land | 76.7 | 78.3 | 83.2 | 79.8 | 81.3 |
| Cotton | 62.7 | 76.2 | 84.7 | 69.5 | 81.4 |
| Maize | 91.1 | 93.1 | 92.1 | 92.9 | 92.0 |
| Peanut | 59.0 | 64.7 | 73.4 | 68.3 | 65.5 |
| Rapeseed | 89.2 | 96.7 | 93.5 | 88.9 | 93.2 |
| Rice | 93.5 | 87.0 | 88.9 | 95.5 | 93.0 |
| Sorghum | 59.5 | 73.0 | 73.0 | 70.3 | 75.7 |
| Soybean | 85.5 | 83.5 | 85.4 | 79.4 | 82.1 |
| Sunflower | 79.5 | 65.9 | 72.7 | 56.8 | 72.7 |
| Tobacco | 72.9 | 65.7 | 71.4 | 81.4 | 82.9 |
| Vegetable | 81.1 | 80.4 | 81.8 | 79.7 | 81.3 |
| Wheat | 88.4 | 84.5 | 91.7 | 96.0 | 89.3 |
| Average accuracy | 86.2 | 86.0 | 87.9 | 87.5 | 87.5 |
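The per-category figures above read as class-wise recall (the share of test photos of each crop type labelled correctly), and the "Average accuracy" row sits well above the unweighted mean of the per-class values, which is consistent with it being the overall fraction of correctly classified photos (the large maize, rice, soybean, and wheat classes dominate). A minimal sketch of both metrics; the function and variable names are illustrative, not from the paper:

```python
from collections import Counter

def per_category_accuracy(y_true, y_pred):
    """Class-wise recall: percent of each true category predicted correctly."""
    totals = Counter(y_true)
    correct = Counter(t for t, p in zip(y_true, y_pred) if t == p)
    return {cat: 100.0 * correct[cat] / n for cat, n in totals.items()}

def overall_accuracy(y_true, y_pred):
    """Percent of all test photos classified correctly."""
    return 100.0 * sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
```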
Per-category identification accuracy (%) of the Voting-3 and Voting-5 ensemble classifiers:

| Categories | Voting-3 | Voting-5 |
|---|---|---|
| Bare land | 85.0 | 85.3 |
| Cotton | 84.7 | 84.7 |
| Maize | 94.7 | 95.3 |
| Peanut | 74.8 | 72.7 |
| Rapeseed | 94.4 | 96.1 |
| Rice | 94.9 | 95.7 |
| Sorghum | 75.7 | 73.0 |
| Soybean | 85.9 | 86.8 |
| Sunflower | 70.5 | 70.5 |
| Tobacco | 84.3 | 77.1 |
| Vegetable | 85.2 | 86.5 |
| Wheat | 94.3 | 92.8 |
| Average accuracy | 90.6 | 91.1 |
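Taking the ensemble names to mean majority voting over the predictions of three and five CNNs respectively (an interpretation of the table, since the fusion rule is not reproduced here), a minimal sketch of the fusion step; the tie-break by highest individual model confidence is an assumption:

```python
from collections import Counter

def majority_vote(predictions, confidences):
    """Fuse one photo's labels from several CNNs (three for Voting-3,
    five for Voting-5) by majority vote."""
    counts = Counter(predictions)
    best = max(counts.values())
    tied = {label for label, n in counts.items() if n == best}
    # Tie-break (assumption): keep the tied label backed by the most
    # confident individual model.
    conf, label = max((c, p) for p, c in zip(predictions, confidences) if p in tied)
    return label

# e.g. majority_vote(["rice", "rice", "maize"], [0.7, 0.8, 0.9]) -> "rice"
```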
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
© 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).