Augmenting Crop Detection for Precision Agriculture with Deep Visual Transfer Learning—A Case Study of Bale Detection
Abstract
1. Introduction
- We built a YOLOv3 model for bale detection under diverse illumination conditions. The associated training dataset, with labels as ground truths, will be released with the current work to fill the voids in existing bale training datasets.
- We constructed an innovative object detection pipeline that combines YOLOv3 with domain adaptation (DA), improving bale detection capability.
- We augmented the labeled training data with additional scenarios using domain adaptation (a minimal sketch of this idea follows the list). Combined with our manually labeled data, this provides a valuable training dataset of over 1000 bale images, which is publicly available with this publication.
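The sketch below illustrates the augmentation idea under stated assumptions: it presumes PyTorch, one pretrained CycleGAN generator per target domain, and YOLO-format label files; all paths and the `augment_dataset` helper are illustrative, not the authors' released code. Because image-to-image translation changes appearance (illumination, season, haze, snow) but not object geometry, the original bounding-box labels can be copied over unchanged.

```python
# Minimal sketch: translate each labeled source-domain image into each
# target domain with a trained CycleGAN generator and reuse its labels.
from pathlib import Path
import shutil

import torch
from PIL import Image
from torchvision import transforms

to_tensor = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)),  # CycleGAN convention: [-1, 1]
])
to_image = transforms.ToPILImage()

def translate(generator: torch.nn.Module, img: Image.Image) -> Image.Image:
    """Run one source-domain image through a trained CycleGAN generator."""
    with torch.no_grad():
        x = to_tensor(img).unsqueeze(0)           # 1 x 3 x H x W, in [-1, 1]
        y = generator(x).squeeze(0)               # translated image, in [-1, 1]
    return to_image((y * 0.5 + 0.5).clamp(0, 1))  # back to [0, 1] for saving

def augment_dataset(src_dir: str, out_dir: str,
                    generators: dict[str, torch.nn.Module]) -> None:
    """Translate every labeled image into each target domain, copying labels."""
    src, out = Path(src_dir), Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    for img_path in src.glob("*.jpg"):
        label = img_path.with_suffix(".txt")      # YOLO-format box file
        for domain, G in generators.items():
            fake = translate(G, Image.open(img_path).convert("RGB"))
            fake.save(out / f"{img_path.stem}_{domain}.jpg")
            shutil.copy(label, out / f"{img_path.stem}_{domain}.txt")
```

Since the translated images keep the bales in their original positions, the copied label files remain valid ground truths; the extended image/label set is then used to retrain YOLOv3.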
2. Related Work
2.1. Computer Vision in Precision Agriculture
2.2. Transfer Learning and Domain Adaptation
3. Methodology
4. Experiment Design and Data Association
4.1. Experiment Equipment
4.2. Bales Data Collection and Description
5. Result and Discussion
5.1. Primary Bale Detection with YOLOv3 Corresponding to Step 1
5.2. Augmenting the Training Data with CycleGAN Corresponding to Step 2
5.3. Optimized YOLOv3 Model with Extended Datasets Corresponding to Step 3
5.4. Comparison and Advantages
6. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
Environment Diversity | Conditions | Agriculture Information | Technical Challenges for Bale Detection
---|---|---|---
Illumination Diversity (Target Domain 1) | Lighting condition | To improve agricultural efficiency, different processing routines are applied to crops in the morning, afternoon, and at night. | Varying lighting increases the difficulty of building an accurate classification model on a deep learning architecture.
 | Shadow | Shadows are commonly seen during the daytime, especially in the rainy season; the images taken by UAV include shadows in certain months. | Shadows crossing the objects decrease classification accuracy for those objects; the relative scale of the background and the bale size makes this worse.
Seasonal Change (Target Domain 2) | Hue change | Farms with different plants have various harvest seasons; as a result, the bales and backgrounds vary from season to season. | Inconsistent seasonal changes in the background and the bales degrade bale detection performance.
Adverse Weather Conditions (Target Domain 3) | Haze | Hazy weather sometimes occurs with a temperature drop or precipitation change, which may cause grain lifecycle adjustments that need to be monitored. | Sustaining high performance of supervised and semi-supervised learning (object detection) in hazy weather is always a challenge.
 | Snow covered | Tracking bales in winter and in snowy environments is also important for continuously feeding livestock. | Restoration-based algorithms may mislead detection or overfit the object relative to the original; snow reduces the visible features of objects in the images.
Environment | Condition | Training | Validation | Testing
---|---|---|---|---
Initial condition | Good illumination, fall, w/o shadow | 243 | 27 | 30
Diverse illumination (Target Domain 1) | w/Lighting condition change | 160 | 20 | 20
 | w/Shadow | 158 | 20 | 20
Seasonal change (Target Domain 2) | Hue change (summer) | 185 | 19 | 19
 | Hue change (early winter) | 187 | 12 | 12
Adverse weather conditions (Target Domain 3) | w/Haze | 159 | 20 | 20
 | w/Snow covered | 150 | 19 | 19
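The counts above correspond to an approximately 80/10/10 training/validation/testing split within each condition. A hedged sketch of such a per-condition split follows; the directory path and `split_condition` helper are illustrative, and the authors' exact splitting procedure is not reproduced here.

```python
# Hypothetical per-condition ~80/10/10 split consistent with the counts above.
import random
from pathlib import Path

def split_condition(image_dir: str, seed: int = 0,
                    ratios: tuple[float, float, float] = (0.8, 0.1, 0.1)):
    """Shuffle one condition's images and cut them into train/val/test."""
    images = sorted(Path(image_dir).glob("*.jpg"))
    random.Random(seed).shuffle(images)           # fixed seed => reproducible
    n = len(images)
    n_train = round(n * ratios[0])
    n_val = round(n * ratios[1])
    return (images[:n_train],                     # training
            images[n_train:n_train + n_val],      # validation
            images[n_train + n_val:])             # testing

# e.g., the 199 haze images would split 159/20/20, matching the table.
train, val, test = split_condition("bales/haze")
```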
Data | Images ¹ | Precision | Recall | mAP@0.5 | F1
---|---|---|---|---|---
All conditions | 148 | 0.859 | 0.599 | 0.780 | 0.746
Initial condition | 30 | 0.929 | 0.993 | 0.987 | 0.960
Illumination | 20 | 0.881 | 0.587 | 0.848 | 0.735
Shadow | 20 | 0.675 | 0.456 | 0.622 | 0.621
Hue change (summer) | 19 | 0.917 | 0.644 | 0.853 | 0.783
Hue change (early winter) | 19 | 0.929 | 0.605 | 0.751 | 0.852
Haze | 20 | 0.910 | 0.871 | 0.975 | 0.931
Snow | 19 | 0.874 | 0.456 | 0.584 | 0.621
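For reference, the F1 column is the harmonic mean of precision and recall, F1 = 2PR/(P + R). A one-line sanity check against the initial-condition row above (our own illustration, not the authors' evaluation code):

```python
# F1 is the harmonic mean of precision and recall.
def f1_score(precision: float, recall: float) -> float:
    return 2 * precision * recall / (precision + recall)

# Initial-condition row: P = 0.929, R = 0.993.
print(f"{f1_score(0.929, 0.993):.3f}")  # 0.960
```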
Data ¹ | Images ² | Precision | Recall | mAP@0.5 | F1
---|---|---|---|---|---
All conditions | 148 | 0.869 | 0.927 | 0.941 | 0.892
Initial condition | 30 | 0.913 | 0.980 | 0.990 | 0.945
Illumination | 20 | 0.847 | 0.926 | 0.959 | 0.885
Shadow | 20 | 0.847 | 0.933 | 0.854 | 0.888
Hue change (summer) | 19 | 0.836 | 0.933 | 0.954 | 0.882
Hue change (early winter) | 19 | 0.905 | 0.893 | 0.969 | 0.831
Haze | 20 | 0.831 | 0.867 | 0.895 | 0.848
Snow | 19 | 0.926 | 0.878 | 0.941 | 0.901
Training Approach | Time Cost (Hours)
---|---
w/Initial condition images | 90
w/Domain adaptation images | 90
w/Labeled all-conditions images ¹ | 350