A Review of Deep Learning in Multiscale Agricultural Sensing
Abstract
1. Introduction
- (i). According to learning strategy, we broadly classified the DL frameworks commonly used in PA into three categories: CNN-based supervised learning (CNN-SL), transfer learning (TL), and few-shot learning (FSL).
- (ii). We conducted a comprehensive review of typical studies applying CNN-SL, TL, and FSL at the leaf, canopy, field, and land scales of agricultural sensing.
- (iii). We elaborated on the limitations that impede the application of DL in PA, discussed the current challenges, and proposed our perspectives for future research.
2. CNN-Based Supervised Learning (CNN-SL) in PA
2.1. Leaf Scale: Disease and Pest Identification and Quantification
2.2. Canopy Scale: Plant and Weed Classification, Identification, and Positioning
2.3. Field Scale: Abiotic Stress Assessment, Growth Monitoring, and Yield Prediction
2.4. Land Scale: Land Cover Mapping
3. Transfer Learning (TL) in PA
4. Few-Shot Learning (FSL) in PA
5. Discussion
5.1. Datasets
- (i). Most datasets are built for specific tasks and lack generality.
- (ii). Dataset volumes are generally small; only a small fraction reach tens of thousands of images.
- (iii). Image backgrounds are usually simple, especially in datasets collected in laboratory settings. This leads to insufficient generalization ability of deep learning in unstructured, complex farmland environments.
- (iv). Datasets differ from one another in camera configuration, acquisition platform, collection period, ground sampling resolution, imagery type, and so on. This results in insufficient adaptability of a dataset across different tasks.
- (v). In some datasets, the classes are imbalanced, which can lead to biased accuracy.
- (vi). Most datasets have temporal and spatial limitations to varying degrees: their images capture only short-term information from local regions.
- (vii). Vertically downward (nadir) imaging is the most common acquisition method, so most datasets lack multi-angle views of the same target.
- (viii). Handheld cameras, terrestrial agricultural robots, and UAVs are the most common platforms for collecting farm imagery; however, their combination is rarely used.
- (i). Collection methods. To ensure the generalization ability and robustness of deep learning algorithms in unstructured, complex farmland environments, images should, where possible, be collected in real farmland rather than in controlled laboratory environments. Before acquisition, there should be a detailed plan covering the observation object, acquisition location, acquisition period, time of day, equipment, camera parameters, etc., and these should be kept as consistent as possible throughout the entire dataset construction process. Complex backgrounds, natural lighting, observation angles, perspective conditions, and similar factors should also be taken into account and kept as close to the real application scenario as possible. Combining high-performance cameras with mobile platforms such as handheld devices, field robots, and UAVs to collect high-quality images at multiple spatial and temporal scales is necessary for constructing future agricultural datasets. For example, multiple acquisition platforms can simultaneously image the same observation object at multiple spatial scales and from multiple perspectives. In addition, soil, climate, and environmental information should be accurately recorded during image acquisition, as this information has the potential to further improve model performance.
- (ii). Raw data processing. The selection, classification, preprocessing, and labeling of raw images should be done under the guidance of experienced agricultural scientists or farmers. Notably, the balance between categories deserves special attention when constructing a dataset. Data augmentation can further increase the size of a dataset: a dataset can be augmented by filtering images from existing public datasets that are similar to the target task, and conventional methods such as rotation, mirroring, scaling, and adding noise are also effective (see the augmentation sketch after this list). In addition, pseudo samples generated by artificial intelligence methods are useful in situations where constructing large-scale datasets is difficult. Usually, limited by the neural network model and computing resources, the original image must be cropped or scaled to a specific size before it can be used as network input; during this process, the integrity of image details should be preserved as much as possible.
- (iii). Temporal resolution. Over an entire crop growth cycle, the farmland environment changes significantly, and information from the same growth stage in different planting seasons is not consistent. Therefore, the temporal resolution of a dataset also needs attention. Some datasets have taken into account the changes in the physical morphology and physiological properties of crops and weeds across growth stages, and the acquisition periods of some datasets span several consecutive seasons. Further refining the temporal resolution of the acquisition period is very important for improving the robustness of deep learning models.
- (iv). Spatial and spectral resolution. Image diversity plays a critical role in increasing the spatial and spectral resolution of a dataset. RGB cameras, the most common sensors, yield high-spatial-resolution images in the visible bands. RGB images contain rich color, texture, and profile features that are essential for DL-based classification, detection, segmentation, and tracking. However, the quality of RGB images is very sensitive to lighting, and they only capture the visible spectrum. Recently, multispectral and thermal imagery has been playing an increasing role in agricultural sensing, since the information in the invisible spectrum significantly helps with the detection of early crop deficits. Satellite-borne synthetic aperture radar (SAR) and optical images have also been broadly used for large-scale farmland mapping. RGB, multispectral, thermal, and SAR sensors provide rich images at various spatial and spectral resolutions. We suggest that future datasets be constructed with a combination of multiple sensor types to compensate for the resolution limitations of any single type.
- (v). Sharing and maintenance. We found that datasets built by different teams for the same target task vary widely. To improve the generality and utilization efficiency of datasets, cross-regional multilateral cooperation among research teams is needed. For example, the species of rice, weeds, and diseases vary across countries and regions; if multiple research teams can reach consensus and cooperate on dataset collection and sharing methods, the volume and quality of the corresponding datasets will improve greatly. In addition, the maintenance and supplementation of datasets bears on the long-term sustainability of scientific research. Although some researchers publicly shared their datasets at the time of paper publication, parts of those datasets have since been lost or become obsolete through poor maintenance, which hinders further research.
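To make the conventional augmentation methods in item (ii) concrete, here is a minimal sketch using PyTorch/torchvision; the specific transform parameters, noise level, and 224-pixel input size are illustrative assumptions, not recommendations drawn from the reviewed studies.

```python
import torch
from torchvision import transforms

# Minimal augmentation pipeline illustrating item (ii): rotation,
# mirroring, scaling (random resized crop to the network input size),
# and additive noise. All parameter values are illustrative assumptions.
augment = transforms.Compose([
    transforms.RandomRotation(degrees=15),                 # rotation
    transforms.RandomHorizontalFlip(p=0.5),                # mirroring
    transforms.RandomResizedCrop(224, scale=(0.8, 1.0)),   # scaling + crop
    transforms.ToTensor(),
    # torchvision has no built-in noise transform, so additive Gaussian
    # noise is injected with a Lambda; 0.02 is an assumed noise level.
    transforms.Lambda(lambda x: (x + 0.02 * torch.randn_like(x)).clamp(0.0, 1.0)),
])

# Usage: tensor = augment(pil_image), applied on the fly during training.
```

Applying such transforms on the fly, rather than storing augmented copies, keeps the dataset on disk small while still exposing the model to greater variability at each epoch.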
5.2. Deep Learning Algorithms and Neural Networks
- (i). We found that most DL-based methods applied in PA use only simple algorithms and network structures. The main reason is that the combination of deep learning and precision agriculture is still in its infancy; another is the siloing of interdisciplinary knowledge between agricultural scientists and computer scientists.
- (ii). Although many of the deep learning algorithms listed in Table 1 achieved accuracies above 90%, it should be made clear that these results are limited to specific datasets. When the trained models are deployed on other datasets or in a real farmland environment, their accuracy and speed usually fall below the benchmarks, mainly because agricultural datasets still differ considerably from actual farmland environments in volume, quality, and complexity.
- (iii). Some novel strategies, such as transfer learning [104,105,106,107,108,109], few-shot learning [31,32,114,115], graph convolutional networks (GCN) [66], generative adversarial networks (GAN) [127], and semi-supervised learning [128], have been proposed to reduce the dependency of DL models on agricultural datasets. However, their potential has not been fully realized (a minimal transfer learning sketch follows this list).
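As an illustration of the transfer learning strategy mentioned in item (iii), the following minimal sketch fine-tunes an ImageNet-pretrained ResNet-50 for a hypothetical small agricultural classification task; the class count and layer-freezing choices are assumptions for illustration, not the setup of any cited study.

```python
import torch.nn as nn
from torchvision import models

# Transfer learning sketch (item iii): reuse an ImageNet-pretrained
# backbone and retrain only a new classification head on a small
# agricultural dataset. num_classes = 10 is an illustrative assumption.
num_classes = 10
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)

for param in model.parameters():          # freeze the pretrained backbone
    param.requires_grad = False

model.fc = nn.Linear(model.fc.in_features, num_classes)  # new trainable head

# Optionally unfreeze the last residual stage for deeper fine-tuning
# when somewhat more labeled data is available:
for param in model.layer4.parameters():
    param.requires_grad = True
```

Freezing the backbone reuses generic low-level features learned from ImageNet, which is what allows such models to train on far fewer labeled farm images than training from scratch would require.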
- (i). More extensive optimization of deep learning algorithms and neural networks based on agricultural characteristics is needed. Research has demonstrated the effectiveness in image processing of some biomimetic mechanisms, such as the visual attention mechanism and the long short-term memory (LSTM) mechanism; they should therefore also be introduced into model optimization for agricultural applications (see the attention sketch after this list).
- (ii). As the spatial, temporal, and spectral resolution of datasets increases, there is a strong need for neural networks that can take data cubes as input (a 3D convolution sketch follows this list).
- (iii). Efforts in this area should not be limited to data-driven deep learning alone. Meta-learning, i.e., learning-to-learn, is also a promising research direction. Transfer learning, few-shot learning, and semi-supervised learning have proven potential to relieve the dependency on datasets, especially where datasets are difficult to obtain; however, they inevitably cost some accuracy, which is acceptable in some cases and not in others. More effort should therefore go into resolving this contradiction or finding a trade-off between dataset size and performance.
- (iv). More attention should be given to semi-supervised learning. For example, the recently emerged GCN is a successful semi-supervised approach in which a large amount of unlabeled data is leveraged alongside a typically small amount of labeled data (see the GCN sketch after this list).
- (v). Interdisciplinary collaboration is necessary to break down the knowledge segregation between agricultural scientists and computer scientists. Close communication and cooperation between the two communities is of great significance for deepening the application of deep learning in precision agriculture.
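To illustrate the visual attention mechanism suggested in item (i), below is a minimal sketch of a squeeze-and-excitation style channel attention block that could be inserted into a CNN backbone; it is a generic example, not the design of any specific study reviewed here.

```python
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Squeeze-and-excitation style channel attention (illustrative sketch)."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)      # squeeze: global spatial average
        self.fc = nn.Sequential(                 # excitation: per-channel weights
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w                              # reweight feature channels
```

Such a block lets the network emphasize channels that respond to, e.g., lesion color or texture, at negligible parameter cost.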
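For the data-cube networks called for in item (ii), here is a minimal sketch of a 3D convolutional stem that consumes a spectral-spatial cube, in the spirit of the 3D CNNs used for hyperspectral imagery [48,49]; the band count and layer widths are assumed purely for illustration.

```python
import torch
import torch.nn as nn

# 3D-CNN stem for data cubes (item ii): input shape is
# (batch, 1, bands, height, width), e.g., a hyperspectral patch.
stem = nn.Sequential(
    nn.Conv3d(1, 8, kernel_size=(7, 3, 3), padding=(3, 1, 1)),  # joint spectral-spatial filters
    nn.BatchNorm3d(8),
    nn.ReLU(inplace=True),
    nn.MaxPool3d(kernel_size=(2, 1, 1)),   # downsample along the spectral axis only
)

cube = torch.randn(4, 1, 64, 32, 32)       # 4 patches, 64 bands, 32x32 pixels (assumed sizes)
features = stem(cube)                      # -> shape (4, 8, 32, 32, 32)
```

Treating the spectral dimension as a convolution axis, rather than flattening bands into channels, preserves local spectral correlations that 2D networks discard.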
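And for the semi-supervised GCN highlighted in item (iv), a minimal sketch of a single graph convolutional layer follows; constructing the graph from CNN features, as done in [66], is task-specific and omitted here.

```python
import torch
import torch.nn as nn

class GCNLayer(nn.Module):
    """One GCN layer: H' = ReLU(A_hat @ H @ W), where A_hat is the
    symmetrically normalized adjacency matrix (illustrative sketch)."""
    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim, bias=False)

    def forward(self, h, a_hat):
        # h: (num_nodes, in_dim) node features; a_hat: (num_nodes, num_nodes)
        return torch.relu(a_hat @ self.linear(h))

# In semi-supervised training, the loss is computed only on the few
# labeled nodes, while message passing over a_hat lets the many
# unlabeled nodes shape the learned representations.
```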
5.3. Computational Capacity
6. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
- World Bank. World Development Report 2008: Agriculture for Development; The World Bank: Washington, DC, USA, 2007.
- Food and Agriculture Organization of the United Nations. Agriculture and Climate Change: Challenges and Opportunities at the Global and Local Level: Collaboration on Climate-Smart Agriculture; FAO: Rome, Italy, 2019.
- Fedoroff, N.V.; Battisti, D.S.; Beachy, R.N.; Cooper, P.J.M.; Fischhoff, D.A.; Hodges, C.N.; Knauf, V.C.; Lobell, D.; Mazur, B.J.; Molden, D.; et al. Radically rethinking agriculture for the 21st century. Science 2010, 327, 833–834.
- Laborde, D.; Martin, W.; Swinnen, J.; Vos, R. COVID-19 risks to global food security. Science 2020, 369, 500–502.
- King, A. Technology: The future of agriculture. Nature 2017, 544, S21–S23.
- Asseng, S.; Asche, F. Future farms without farmers. Sci. Robot. 2019, 4, eaaw1875.
- Gebbers, R.; Adamchuk, V.I. Precision agriculture and food security. Science 2010, 327, 828–831.
- Penuelas, J.; Filella, I. Visible and near-infrared reflectance techniques for diagnosing plant physiological status. Trends Plant Sci. 1998, 3, 151–156.
- Burke, M.; Driscoll, A.; Lobell, D.B.; Ermon, S. Using satellite imagery to understand and promote sustainable development. Science 2021, 371, eabe8628.
- Maes, W.H.; Steppe, K. Perspectives for remote sensing with unmanned aerial vehicles in precision agriculture. Trends Plant Sci. 2019, 24, 152–164.
- Vougioukas, S.G. Agricultural robotics. Annu. Rev. Control Robot. Auton. Syst. 2019, 2, 365–392.
- Singh, A.; Ganapathysubramanian, B.; Singh, A.K.; Sarkar, S. Machine learning for high-throughput stress phenotyping in plants. Trends Plant Sci. 2016, 21, 110–124.
- Ma, C.; Zhang, H.H.; Wang, X.F. Machine learning for big data analytics in plants. Trends Plant Sci. 2014, 19, 798–808.
- Li, D.L.; Li, C.; Yao, Y.; Li, M.; Liu, L. Modern imaging techniques in plant nutrition analysis: A review. Comput. Electron. Agric. 2020, 174, 105459.
- Patrício, D.I.; Rieder, R. Computer vision and artificial intelligence in precision agriculture for grain crops: A systematic review. Comput. Electron. Agric. 2018, 153, 69–81.
- Xu, S.; Liu, J.; Yang, C.; Wu, X.; Xu, T. A learning-based stable servo control strategy using broad learning system applied for microrobotic control. IEEE Trans. Cybern. 2021, 1–11.
- LeCun, Y.; Bengio, Y.; Hinton, G. Deep learning. Nature 2015, 521, 436–444.
- Kamilaris, A.; Prenafeta-Boldú, F.X. Deep learning in agriculture: A survey. Comput. Electron. Agric. 2018, 147, 70–90.
- Ma, L.; Liu, Y.; Zhang, X.L.; Ye, Y.X.; Yin, G.F.; Johnson, B.A. Deep learning in remote sensing applications: A meta-analysis and review. ISPRS J. Photogramm. 2019, 152, 166–177.
- Zhong, L.H.; Hu, L.; Zhou, H. Deep learning based multi-temporal crop classification. Remote Sens. Environ. 2019, 221, 430–443.
- Singh, A.K.; Ganapathysubramanian, B.; Sarkar, S.; Singh, A. Deep learning for plant stress phenotyping: Trends and future perspectives. Trends Plant Sci. 2018, 23, 883–898.
- Singh, A.; Jones, S.; Ganapathysubramanian, B.; Sarkar, S.; Mueller, D.; Sandhu, K.; Nagasubramanian, K. Challenges and opportunities in machine-augmented plant stress phenotyping. Trends Plant Sci. 2021, 26, 53–69.
- Wang, D.; Li, W.; Liu, X.; Li, N.; Zhang, C. UAV environmental perception and autonomous obstacle avoidance: A deep learning and depth camera combined solution. Comput. Electron. Agric. 2020, 175, 105523.
- Hasan, A.S.M.M.; Sohel, F.; Diepeveen, D.; Laga, H.; Jones, M.G. A survey of deep learning techniques for weed detection from images. Comput. Electron. Agric. 2021, 184, 106067.
- Lu, Y.Z.; Young, S. A survey of public datasets for computer vision tasks in precision agriculture. Comput. Electron. Agric. 2020, 178, 105760.
- Pan, S.J.; Yang, Q. A survey on transfer learning. IEEE Trans. Knowl. Data Eng. 2009, 22, 1345–1359.
- Snell, J.; Swersky, K.; Zemel, R.S. Prototypical networks for few-shot learning. arXiv 2017, arXiv:1703.05175.
- Espejo-Garcia, B.; Mylonas, N.; Athanasakos, L.; Fountas, S.; Vasilakoglou, I. Towards weeds identification assistance through transfer learning. Comput. Electron. Agric. 2020, 171, 105306.
- Chen, J.; Chen, J.X.; Zhang, D.F.; Sun, Y.D.; Nanehkaran, Y.A. Using deep transfer learning for image-based plant disease identification. Comput. Electron. Agric. 2020, 173, 105393.
- Barbedo, J.G.A. Impact of dataset size and variety on the effectiveness of deep learning and transfer learning for plant disease classification. Comput. Electron. Agric. 2018, 153, 46–53.
- Argüeso, D.; Picon, A.; Irusta, U.; Medela, A.; San-Emeterio, M.G.; Bereciartua, A.; Alvarez-Gila, A. Few-shot learning approach for plant disease classification using images taken in the field. Comput. Electron. Agric. 2020, 175, 105542.
- Zhong, F.M.; Chen, Z.K.; Zhang, Y.C.; Xia, F. Zero- and few-shot learning for diseases recognition of Citrus aurantium L. using conditional adversarial autoencoders. Comput. Electron. Agric. 2020, 179, 105828.
- Li, Y.; Yang, J. Meta-learning baselines and database for few-shot classification in agriculture. Comput. Electron. Agric. 2021, 182, 106055.
- Lee, S.H.; Chan, C.S.; Mayo, S.J.; Remagnino, P. How deep learning extracts and learns leaf features for plant classification. Pattern Recognit. 2017, 71, 1–13.
- Grinblat, G.L.; Uzal, L.C.; Larese, M.G.; Granitto, P.M. Deep learning for plant identification using vein morphological patterns. Comput. Electron. Agric. 2016, 127, 418–424.
- Barbedo, J.G. Factors influencing the use of deep learning for plant disease recognition. Biosyst. Eng. 2018, 172, 84–91.
- Strange, R.N.; Scott, P.R. Plant disease: A threat to global food security. Annu. Rev. Phytopathol. 2005, 43, 83–116.
- Noon, S.K.; Amjad, M.; Qureshi, M.A.; Mannan, A. Use of deep learning techniques for identification of plant leaf stresses: A review. Sustain. Comput. Inform. Syst. 2020, 28, 100443.
- Zeng, W.H.; Li, M. Crop leaf disease recognition based on Self-Attention convolutional neural network. Comput. Electron. Agric. 2020, 172, 105341.
- Too, E.C.; Yujian, L.; Njuki, S.; Yingchun, L. A comparative study of fine-tuning deep learning models for plant disease identification. Comput. Electron. Agric. 2019, 161, 272–279.
- Hughes, D.P.; Salathé, M. An open access repository of images on plant health to enable the development of mobile disease diagnostics. arXiv 2015, arXiv:1511.08060.
- Mohanty, S.P.; Hughes, D.P.; Salathé, M. Using deep learning for image-based plant disease detection. Front. Plant Sci. 2016, 7, 1419.
- Ferentinos, K.P. Deep learning models for plant disease detection and diagnosis. Comput. Electron. Agric. 2018, 145, 311–318.
- Lee, S.H.; Goëau, H.; Bonnet, P.; Joly, A. New perspectives on plant disease characterization based on deep learning. Comput. Electron. Agric. 2020, 170, 105220.
- Picon, A.; Seitz, M.; Alvarez-Gila, A.; Mohnke, P.; Ortiz-Barredo, A.; Echazarra, J. Crop conditional convolutional neural networks for massive multi-crop plant disease classification over cell phone acquired images taken on real field conditions. Comput. Electron. Agric. 2019, 167, 105093.
- Picon, A.; Alvarez-Gila, A.; Seitz, M.; Ortiz-Barredo, A.; Echazarra, J.; Johannes, A. Deep convolutional neural networks for mobile capture device-based crop disease classification in the wild. Comput. Electron. Agric. 2019, 161, 280–290.
- Chen, Y.; Jiang, H.; Li, C.; Jia, X.; Ghamisi, P. Deep feature extraction and classification of hyperspectral images based on convolutional neural networks. IEEE Trans. Geosci. Remote Sens. 2016, 54, 6232–6251.
- Li, Y.; Zhang, H.K.; Shen, Q. Spectral-spatial classification of hyperspectral imagery with 3D convolutional neural network. Remote Sens. 2017, 9, 67.
- Nagasubramanian, K.; Jones, S.; Singh, A.K.; Sarkar, S.; Singh, A.; Ganapathysubramanian, B. Plant disease identification using explainable 3D deep learning on hyperspectral images. Plant Methods 2019, 15, 98.
- Ozguven, M.M.; Adem, K. Automatic detection and classification of leaf spot disease in sugar beet using deep learning algorithms. Phys. A Stat. Mech. Its Appl. 2019, 535, 122537.
- Lin, K.; Gong, L.; Huang, Y.X.; Liu, C.L.; Pan, J.H. Deep learning-based segmentation and quantification of cucumber powdery mildew using convolutional neural network. Front. Plant Sci. 2019, 10, 155.
- Garg, K.; Bhugra, S.; Lall, B. Automatic quantification of plant disease from field image data using deep learning. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, Waikoloa, HI, USA, 3–8 January 2021; pp. 1965–1972.
- Li, Y.F.; Wang, H.X.; Dang, L.M.; Sadeghi-Niaraki, A.; Moon, H. Crop pest recognition in natural scenes using convolutional neural networks. Comput. Electron. Agric. 2020, 169, 105174.
- Cheng, X.; Zhang, Y.H.; Chen, Y.Q.; Wu, Y.Z.; Yue, Y. Pest identification via deep residual learning in complex background. Comput. Electron. Agric. 2017, 141, 351–356.
- Wang, F.Y.; Wang, R.J.; Xie, C.J.; Yang, P.; Liu, L. Fusing multi-scale context-aware information representation for automatic in-field pest detection and recognition. Comput. Electron. Agric. 2020, 169, 105222.
- Wu, X.L.; Aravecchia, S.; Lottes, P.; Stachniss, C.; Pradalier, C. Robotic weed control using automated weed and crop classification. J. Field Robot. 2020, 37, 322–340.
- Gai, J.Y.; Tang, L.; Steward, B.L. Automated crop plant detection based on the fusion of color and depth images for robotic weed control. J. Field Robot. 2020, 37, 35–52.
- Dyrmann, M.; Christiansen, P.; Midtiby, H.S. Estimation of plant species by classifying plants and leaves in combination. J. Field Robot. 2018, 35, 202–212.
- Slaughter, D.C.; Giles, D.K.; Downey, D. Autonomous robotic weed control systems: A review. Comput. Electron. Agric. 2008, 61, 63–78.
- Knoll, F.J.; Czymmek, V.; Poczihoski, S.; Holtorf, T.; Hussmann, S. Improving efficiency of organic farming by using a deep learning classification approach. Comput. Electron. Agric. 2018, 153, 347–356.
- Dyrmann, M.; Karstoft, H.; Midtiby, H.S. Plant species classification using deep convolutional neural network. Biosyst. Eng. 2016, 151, 72–80.
- Lottes, P.; Hörferlin, M.; Sander, S.; Stachniss, C. Effective vision-based classification for separating sugar beets and weeds for precision farming. J. Field Robot. 2017, 34, 1160–1178.
- Lottes, P.; Behley, J.; Chebrolu, N.; Milioto, A.; Stachniss, C. Robust joint stem detection and crop-weed classification using image sequences for plant-specific treatment in precision farming. J. Field Robot. 2020, 37, 20–34.
- Lottes, P.; Behley, J.; Milioto, A.; Stachniss, C. Fully convolutional networks with sequential information for robust crop and weed detection in precision farming. IEEE Robot. Autom. Lett. 2018, 3, 2870–2877.
- Chavan, T.R.; Nandedkar, A.V. AgroAVNET for crops and weeds classification: A step forward in automatic farming. Comput. Electron. Agric. 2018, 154, 361–372.
- Jiang, H.H.; Zhang, C.Y.; Qiao, Y.L.; Zhang, Z.; Zhang, W.J.; Song, C.Q. CNN feature based graph convolutional network for weed and crop recognition in smart farming. Comput. Electron. Agric. 2020, 174, 105450.
- Wang, A.; Zhang, W.; Wei, X.H. A review on weed detection using ground-based machine vision and image processing techniques. Comput. Electron. Agric. 2019, 158, 226–240.
- Olsen, A.; Konovalov, D.A.; Philippa, B.; Ridd, P.; Wood, J.C.; Johns, J.; Banks, W.; Girgenti, B.; Kenny, O.; Whinney, J.; et al. DeepWeeds: A multiclass weed species image dataset for deep learning. Sci. Rep. 2019, 9, 2058.
- Hu, K.; Coleman, G.; Zeng, S.; Wang, Z.; Walsh, M. Graph weeds net: A graph-based deep learning method for weed recognition. Comput. Electron. Agric. 2020, 174, 105520.
- dos Santos Ferreira, A.; Freitas, D.M.; da Silva, G.G.; Pistori, H.; Folhes, M.T. Weed detection in soybean crops using ConvNets. Comput. Electron. Agric. 2017, 143, 314–324.
- dos Santos Ferreira, A.; Freitas, D.M.; da Silva, G.G.; Pistori, H.; Folhes, M.T. Unsupervised deep learning and semi-automatic data labeling in weed discrimination. Comput. Electron. Agric. 2019, 165, 104963.
- Mittler, R. Abiotic stress, the field environment and stress combination. Trends Plant Sci. 2006, 11, 15–19.
- Virnodkar, S.S.; Pachghare, V.K.; Patil, V.C.; Jha, S.K. Remote sensing and machine learning for crop water stress determination in various crops: A critical review. Precis. Agric. 2020, 21, 1121–1155.
- Chandel, N.S.; Chakraborty, S.K.; Rajwade, Y.A.; Dubey, K.; Tiwari, M.K.; Jat, D. Identifying crop water stress using deep learning models. Neural Comput. Appl. 2021, 33, 5353–5367.
- Feng, X.; Zhan, Y.; Wang, Q.; Yang, X.; Yu, C.; Wang, H.; Tang, Z.; Jiang, D.; Peng, C.; He, Y. Hyperspectral imaging combined with machine learning as a tool to obtain high-throughput plant salt-stress phenotyping. Plant J. 2020, 101, 1448–1461.
- Lee, K.J.; Lee, B.W. Estimation of rice growth and nitrogen nutrition status using color digital camera image analysis. Eur. J. Agron. 2013, 48, 57–65.
- Velumani, K.; Madec, S.; de Solan, B.; Lopez-Lozano, R.; Gillet, J.; Labrosse, J.; Jezequel, S.; Comar, A.; Baret, F. An automatic method based on daily in situ images and deep learning to date wheat heading stage. Field Crops Res. 2020, 252, 107793.
- Barbedo, J.G.A. Detection of nutrition deficiencies in plants using proximal images and machine learning: A review. Comput. Electron. Agric. 2019, 162, 482–492.
- Rasti, S.; Bleakley, C.J.; Silvestre, G.C.; Holden, N.M.; Langton, D.; O’Hare, G.M. Crop growth stage estimation prior to canopy closure using deep learning algorithms. Neural Comput. Appl. 2021, 33, 1733–1743.
- Abdalla, A.; Cen, H.Y.; Wan, L.; Mehmood, K.; He, Y. Nutrient status diagnosis of infield oilseed rape via deep learning-enabled dynamic model. IEEE Trans. Ind. Inform. 2020, 17, 4379–4389.
- Zhang, D.Y.; Ding, Y.; Chen, P.F.; Zhang, X.Q.; Pan, Z.G.; Liang, D. Automatic extraction of wheat lodging area based on transfer learning method and deeplabv3+ network. Comput. Electron. Agric. 2020, 179, 105845.
- Chlingaryan, A.; Sukkarieh, S.; Whelan, B. Machine learning approaches for crop yield prediction and nitrogen status estimation in precision agriculture: A review. Comput. Electron. Agric. 2018, 151, 61–69.
- Van Klompenburg, T.; Kassahun, A.; Catal, C. Crop yield prediction using machine learning: A systematic literature review. Comput. Electron. Agric. 2020, 177, 105709.
- Barbosa, A.; Trevisan, R.; Hovakimyan, N.; Martin, N.F. Modeling yield response to crop management using convolutional neural networks. Comput. Electron. Agric. 2020, 170, 105197.
- Tedesco-Oliveira, D.; da Silva, R.P.; Maldonado, W., Jr.; Zerbato, C. Convolutional neural networks in predicting cotton yield from images of commercial fields. Comput. Electron. Agric. 2020, 171, 105307.
- Nevavuori, P.; Narra, N.; Lipping, T. Crop yield prediction with deep convolutional neural networks. Comput. Electron. Agric. 2019, 163, 104859.
- Chu, Z.; Yu, J. An end-to-end model for rice yield prediction using deep learning fusion. Comput. Electron. Agric. 2020, 174, 105471.
- Nguyen, T.T.; Hoang, T.D.; Pham, M.T.; Vu, T.T.; Nguyen, T.H.; Huynh, Q.T.; Jo, J. Monitoring agriculture areas with satellite images and deep learning. Appl. Soft Comput. 2020, 95, 106565.
- Waldner, F.; Diakogiannis, F.I. Deep learning on edge: Extracting field boundaries from satellite images with a convolutional neural network. Remote Sens. Environ. 2020, 245, 111741.
- Wei, S.; Zhang, H.; Wang, C.; Wang, Y.; Xu, L. Multi-temporal SAR data large-scale crop mapping based on U-Net model. Remote Sens. 2019, 11, 68.
- Papadomanolaki, M.; Vakalopoulou, M.; Zagoruyko, S.; Karantzalos, K. Benchmarking deep learning frameworks for the classification of very high resolution satellite multispectral data. ISPRS Ann. Photogramm. Remote Sens. Spat. Inf. Sci. 2016, 3, 83–88.
- Sagan, V.; Maimaitijiang, M.; Bhadra, S.; Maimaitiyiming, M.; Brown, D.R.; Sidike, P.; Fritschi, F.B. Field-scale crop yield prediction using multi-temporal WorldView-3 and PlanetScope satellite data and deep learning. ISPRS J. Photogramm. 2021, 174, 265–281.
- Meng, S.Y.; Wang, X.Y.; Hu, X.; Luo, C.; Zhong, Y.F. Deep learning-based crop mapping in the cloudy season using one-shot hyperspectral satellite imagery. Comput. Electron. Agric. 2021, 186, 106188.
- Xu, L.; Zhang, H.; Wang, C.; Zhang, B.; Liu, M. Crop classification based on temporal information using sentinel-1 SAR time-series data. Remote Sens. 2019, 11, 53.
- Gella, G.W.; Bijker, W.; Belgiu, M. Mapping crop types in complex farming areas using SAR imagery with dynamic time warping. ISPRS J. Photogramm. 2021, 175, 171–183.
- Huang, Z.L.; Datcu, M.; Pan, Z.X.; Lei, B. Deep SAR-Net: Learning objects from signals. ISPRS J. Photogramm. 2020, 161, 179–193.
- Zhang, T.W.; Zhang, X.L.; Shi, J.; Wei, S.J. HyperLi-net: A hyper-light deep learning network for high-accurate and high-speed ship detection from synthetic aperture radar imagery. ISPRS J. Photogramm. 2020, 167, 123–153.
- Zheng, Z.; Ma, A.; Zhang, L.P.; Zhong, Y.F. Deep multisensor learning for missing-modality all-weather mapping. ISPRS J. Photogramm. 2021, 174, 254–264.
- Ienco, D.; Interdonato, R.; Gaetano, R.; Minh, D.H.T. Combining sentinel-1 and sentinel-2 satellite image time series for land cover mapping via a multi-source deep learning architecture. ISPRS J. Photogramm. 2019, 158, 11–22.
- Adrian, J.; Sagan, V.; Maimaitijiang, M. Sentinel SAR-optical fusion for crop type mapping using deep learning and google earth engine. ISPRS J. Photogramm. 2021, 175, 215–235.
- Zhao, W.Z.; Qu, Y.; Chen, J.; Yuan, Z.L. Deeply synergistic optical and SAR time series for crop dynamic monitoring. Remote Sens. Environ. 2020, 247, 111952.
- Mahdianpari, M.; Salehi, B.; Rezaee, M.; Mohammadimanesh, F.; Zhang, Y. Very deep convolutional neural networks for complex land cover mapping using multispectral remote sensing imagery. Remote Sens. 2018, 10, 1119.
- Vali, A.; Comai, S.; Matteucci, M. Deep learning for land use and land cover classification based on hyperspectral and multispectral earth observation data: A review. Remote Sens. 2020, 12, 2495.
- Kaya, A.; Keceli, A.S.; Catal, C.; Yalic, H.Y.; Temucin, H.; Tekinerdogan, B. Analysis of transfer learning for deep neural network based plant classification models. Comput. Electron. Agric. 2019, 158, 20–29.
- Cai, E.; Baireddy, S.; Yang, C.; Crawford, M.; Delp, E.J. Deep transfer learning for plant center localization. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, Seattle, WA, USA, 14–19 June 2020; pp. 62–63.
- Espejo-Garcia, B.; Mylonas, N.; Athanasakos, L.; Fountas, S. Improving weeds identification with a repository of agricultural pre-trained deep neural networks. Comput. Electron. Agric. 2020, 175, 105593.
- Bosilj, P.; Aptoula, E.; Duckett, T.; Cielniak, G. Transfer learning between crop types for semantic segmentation of crops versus weeds in precision agriculture. J. Field Robot. 2020, 37, 7–19.
- Abdalla, A.; Cen, H.; Wan, L.; Rashid, R.; Weng, H.Y.; Zhou, W.J.; He, Y. Fine-tuning convolutional neural network with transfer learning for semantic segmentation of ground-level oilseed rape images in a field with high weed pressure. Comput. Electron. Agric. 2019, 167, 105091.
- Suh, H.K.; Ijsselmuiden, J.; Hofstee, J.W.; van Henten, E.J. Transfer learning for the classification of sugar beet and volunteer potato under field conditions. Biosyst. Eng. 2018, 174, 50–65.
- Fei-Fei, L.; Fergus, R.; Perona, P. One-shot learning of object categories. IEEE Trans. Pattern Anal. Mach. Intell. 2006, 28, 594–611.
- Lake, B.M.; Salakhutdinov, R.; Tenenbaum, J.B. Human-level concept learning through probabilistic program induction. Science 2015, 350, 1332–1338.
- Chen, W.Y.; Liu, Y.C.; Kira, Z.; Wang, Y.C.F.; Huang, J.B. A closer look at few-shot classification. arXiv 2019, arXiv:1904.04232.
- Koch, G.; Zemel, R.; Salakhutdinov, R. Siamese neural networks for one-shot image recognition. In Proceedings of the International Conference on Machine Learning Deep Learning Workshop, Lille, France, 10–11 July 2015; Volume 2.
- Li, Y.; Yang, J.C. Few-shot cotton pest recognition and terminal realization. Comput. Electron. Agric. 2020, 169, 105240.
- Hu, G.S.; Wu, H.Y.; Zhang, Y.; Wan, M.Z. A low shot learning method for tea leaf’s disease identification. Comput. Electron. Agric. 2019, 163, 104852.
- Liu, B.; Yu, X.C.; Yu, A.Z.; Zhang, P.Q.; Wan, G.; Wang, R.R. Deep few-shot learning for hyperspectral image classification. IEEE Trans. Geosci. Remote Sens. 2018, 57, 2290–2304.
- Sung, F.; Yang, Y.X.; Zhang, L.; Xiang, T.; Torr, P.H.; Hospedales, T.M. Learning to compare: Relation network for few-shot learning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 1199–1208.
- Andrychowicz, M.; Denil, M.; Gomez, S.; Hoffman, M.W.; Pfau, D.; Schaul, T.; De Freitas, N. Learning to learn by gradient descent by gradient descent. In Proceedings of the Advances in Neural Information Processing Systems, Barcelona, Spain, 5–10 December 2016; pp. 3981–3989.
- Gao, K.L.; Liu, B.; Yu, X.C.; Qin, J.C.; Zhang, P.Q.; Tan, X. Deep relation network for hyperspectral image few-shot classification. Remote Sens. 2020, 12, 923.
- Haug, S.; Ostermann, J. A crop/weed field image dataset for the evaluation of computer vision based precision agriculture tasks. In Proceedings of the European Conference on Computer Vision, Zurich, Switzerland, 6–12 September 2014; Springer: Cham, Switzerland, 2014; pp. 105–116.
- Lameski, P.; Zdravevski, E.; Trajkovik, V.; Kulakov, A. Weed detection dataset with RGB images taken under variable light conditions. In Proceedings of the International Conference on ICT Innovations, Skopje, Macedonia, 18–23 September 2017; Springer: Cham, Switzerland, 2017; pp. 112–119.
- Wiesner-Hanks, T.; Stewart, E.L.; Kaczmar, N.; DeChant, C.; Wu, H.; Nelson, R.J.; Lipson, H.; Gore, M.A. Image set for deep learning: Field images of maize annotated with disease symptoms. BMC Res. Notes 2018, 11, 440.
- Wiesner-Hanks, T.; Wu, H.; Stewart, E.; DeChant, C.; Kaczmar, N.; Lipson, H.; Gore, M.A.; Nelson, R.J. Millimeter-level plant disease detection from aerial photographs via deep learning and crowdsourced data. Front. Plant Sci. 2019, 10, 1550.
- Liu, X.; Min, W.; Mei, S.; Wang, L.; Jiang, S. Plant disease recognition: A large-scale benchmark dataset and a visual region and loss reweighting approach. IEEE Trans. Image Process. 2021, 30, 2003–2015.
- Chiu, M.T.; Xu, X.; Wei, Y.; Huang, Z.; Schwing, A.G.; Brunner, R.; Khachatrian, H.; Karapetyan, H.; Dozier, I.; Rose, G.; et al. Agriculture-vision: A large aerial image database for agricultural pattern analysis. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020; pp. 2828–2838.
- Su, J.; Yi, D.; Su, B.; Mi, Z.; Liu, C.; Hu, X.; Xu, X.; Guo, L.; Chen, W.H. Aerial visual perception in smart farming: Field study of wheat yellow rust monitoring. IEEE Trans. Ind. Inform. 2021, 17, 2242–2249.
- Abbas, A.; Jain, S.; Gour, M.; Vankudothu, S. Tomato plant disease detection using transfer learning with C-GAN synthetic images. Comput. Electron. Agric. 2021, 187, 106279.
- Khaki, S.; Pham, H.; Han, Y.; Kuhl, A.; Kent, W.; Wang, L. Deepcorn: A semi-supervised deep learning method for high-throughput image-based corn kernel counting and yield estimation. Knowl. Based Syst. 2021, 218, 106874.
- Maimaitijiang, M.; Sagan, V.; Sidike, P.; Hartling, S.; Esposito, F.; Fritschi, F.B. Soybean yield prediction from UAV using multimodal data fusion and deep learning. Remote Sens. Environ. 2020, 237, 111599.
- Sa, I.; Chen, Z.; Popović, M.; Khanna, R.; Liebisch, F.; Nieto, J.; Siegwart, R. WeedNet: Dense semantic weed classification using multispectral images and MAV for smart farming. IEEE Robot. Autom. Lett. 2017, 3, 588–595.
- Sa, I.; Popović, M.; Khanna, R.; Chen, Z.; Lottes, P.; Liebisch, F.; Nieto, J.; Stachniss, C.; Walter, A.; Siegwart, R. WeedMap: A large-scale semantic weed mapping framework using aerial multispectral imaging and deep neural network for precision farming. Remote Sens. 2018, 10, 1423.
- Jiao, L.; Dong, S.; Zhang, S.; Xie, C.; Wang, H. AF-RCNN: An anchor-free convolutional neural network for multi-categories agricultural pest detection. Comput. Electron. Agric. 2020, 174, 105522.
Table 1. Typical applications of deep learning (CNN-SL, TL, and FSL) at the leaf, canopy, field, and land scales.

Categories | Scale | Application | Dataset | Network Architecture | Performance | Year | Refs
---|---|---|---|---|---|---|---
CNN-SL | Leaf | Disease detection | PlantVillage | AlexNet, GoogLeNet | CA, 99.35% | 2016 | [42] |
CNN-SL | Canopy | Plant species classification | 22 Weed and crop species | Customized | CA, 33–98% | 2016 | [61] |
CNN-SL | Canopy | Weed detection | Grass-Broadleaf | AlexNet | CA, 98% | 2017 | [70] |
CNN-SL | Leaf | Disease detection | PlantVillage | AlexNet, AlexNetOWTBn, GoogLeNet, Overfeat, VGG | SR, 99.53% | 2018 | [43] |
CNN-SL | Canopy | Joint stem detection and crop/weed classification | Sugar beet and dicot weeds | FCN architecture Stem-seg-S | mAP, 85.4% (stem detection); 69.7% (segmentation) | 2018, 2020 | [63,64]
CNN-SL | Canopy | Crop/weed classification | Plant seedlings dataset (3 crops and 9 weeds) | AgroAVNET | CA, 98.23% | 2018 | [65] |
TL | Canopy | Crop/weed classification | Sugar beet and volunteer potato | AlexNet, VGG-19, GoogLeNet, ResNet-50, ResNet-101, Inception-v3 | Highest CA, 96% (VGG-19) | 2018 | [109]
CNN-SL | Leaf | Disease classification | 17 Diseases on 5 crops | ResNet50 variants | BA, 98% | 2019 | [45] |
CNN-SL | Leaf | Disease identification | Charcoal rot disease | 3D DCNN | CA, 95.73% | 2019 | [49] |
CNN-SL | Leaf | Disease severity detection | Sugar beet leaf images | Updated Faster R-CNN | CA, 95.48% | 2019 | [50] |
CNN-SL | Canopy | Weed species classification | DeepWeeds | Inception-v3 and ResNet-50 | Highest CA, 95.7% | 2019 | [68]
CNN-SL | Field | Wheat and barley yield prediction | RGB and NDVI images | Customized | Lowest MAPE, 8.8 | 2019 | [86] |
TL | Leaf | Plant classification | Flavia, Swedish Leaf, UCI Leaf, PlantVillage | AlexNet/VGG-16 with LSTM | Highest CA, 99.11% (Swedish Leaf) | 2019 | [104] |
TL | Canopy | Crop/weed semantic segmentation | Oilseed rape images | VGG-based encoder | Highest CA, 96% | 2019 | [108] |
FSL | Leaf | Disease identification | Tea leaf’s disease images | C-DCGAN-VGG16 | Average identification accuracy, 90% | 2019 | [116] |
CNN-SL | Leaf | Pest recognition | 10 Common species of crop pest | VGG-16, VGG-19, ResNet50, ResNet152, GoogLeNet | CA, 96.67% | 2020 | [53] |
CNN-SL | Leaf | Pest detection and recognition | In-field Pest in Food Crop (IPFC) | DeepPest | mAP, 74.3% | 2020 | [55] |
CNN-SL | Canopy | Weed and crop recognition | Four different weed datasets | GCN-ResNet-101 | Highest CA, 99.37% | 2020 | [66] |
CNN-SL | Canopy | Weed recognition | DeepWeeds | Graph Weeds Net (GWN) | Top-1 accuracy: 98.1% | 2020 | [69] |
CNN-SL | Field | Water stress identification | Maize, okra, and soybean (1200) | AlexNet, GoogLeNet, Inception V3 | CA (GoogLeNet), 98.30, 97.50, 94.16% | 2020 | [74] |
CNN-SL | Field | Salt-stress phenotyping | Okra hyperspectral images | Encoder-Decoder | IoU, 0.94 (plant segmentation); BD, 85.4 (leaf segmentation) | 2020 | [75]
CNN-SL | Field | Growth stage estimation | Wheat and barley images | 5-layer ConvNet, VGG19 | Accuracy, 91.1–94.2% (ConvNet), 99.7–100% (VGG19) | 2020 | [79] |
CNN-SL | Field | Nutrient Status Diagnosis | Oilseed rape RGB images | CNN-LSTM | Highest CA, 95% (Inceptionv3-LSTM) | 2020 | [80] |
CNN-SL | Field | Wheat lodging extraction | RGB, multispectral images | TL and Deeplab v3+ | Highest DC, 0.90 (early maturity with RGB) and 0.93 (early maturity with multispectral) | 2020 | [81] |
CNN-SL | Field | Cotton yield prediction | RGB images | Faster R-CNN, SSD, SSDLITE | Mean error, 17.86% | 2020 | [85]
CNN-SL | Land | Crop monitoring | Satellite SAR and optical images | MCNN-Seq | Highest R2, 0.9824 | 2020 | [101] |
TL | Canopy | Plant Center Localization | Maize orthomosaic images | ResNet encoder modified U-Net | Precision, 84.3%; Recall, 98.8%; F1 Score, 91.0% | 2020 | [105] |
TL | Canopy | Crop/weed identification | Two crops and two weeds | Xception, Inception-ResNet, VGGNets, MobileNet, DenseNet | F1 Score, 99.29% | 2020 | [106]
TL | Canopy | Crop/weed semantic segmentation | Sugar Beets 2016, Carrots 2017, Onions 2017 | SegNet | Training times reduced 80% | 2020 | [107] |
TL | Leaf | Plant disease identification | Rice/maize plant images | INC-VGGN | Average accuracy, 92% | 2020 | [29] |
FSL | Leaf | Plant disease classification | PlantVillage | Inception V3-Siamese network | Accuracy above 90% and 89.1% reduction in training data | 2020 | [31] |
FSL | Leaf | Pest recognition | Two cotton datasets | CNN-prototypical network | Testing accuracy, 95.4% and 96.2% | 2020 | [114]
FSL | Leaf | Disease recognition | Citrus aurantium L. diseases images | CAAE | Harmonic mean accuracy of 53.4% | 2020 | [32] |
FSL | Leaf | Pest classification | PlantVillage | Shallow CNN | Varies under different conditions | 2021 | [33] |
CNN-SL | Leaf | Disease severity quantification | NLB-infected maize images | Cascaded MRCNN | Correlation, 73%; speed, 5 fps | 2021 | [52]
CNN-SL | Land | Yield prediction | Satellite multispectral images | 2D and 3D ResNet | Nearly 90% | 2021 | [92]
CNN-SL | Land | Crop mapping | Satellite hyperspectral images | 1D/2D/3D CNN | More than 94% (3D CNN) | 2021 | [93] |
CNN-SL | Land | Land mapping | SAR images | Deep SAR-Net | Accuracy, 92.94 ± 1.05 | 2021 | [98] |
CNN-SL | Land | Crop mapping | SAR and optical images | 3D U-Net | Overall accuracy, 0.941 | 2021 | [100] |