Multi-Barley Seed Detection Using iPhone Images and YOLOv5 Model
Abstract
1. Introduction
2. Object Detection Methods
3. Objectives of the Study
4. Materials and Methods
4.1. Barley Material
4.2. Dataset
4.3. Image Pre-Processing
4.4. Object Detection Models
4.5. Loss Function
4.6. Assessment Method
- (1) Precision: the proportion of predicted positive detections that are actually correct;
- (2) Recall: the proportion of ground-truth instances in the relevant class that are correctly detected;
- (3) mAP: the mean of the per-category average precision (AP) values (illustrated in the sketch below).
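As a hedged illustration of these definitions (not the authors' evaluation code), the following minimal Python sketch computes precision and recall from true-positive/false-positive/false-negative counts and mAP as the mean of per-class AP values; the counts and variety names are hypothetical placeholders.

```python
# Minimal sketch of the evaluation metrics; counts and class names below are hypothetical.

def precision(tp: int, fp: int) -> float:
    """Correct positive predictions divided by all positive predictions."""
    return tp / (tp + fp) if (tp + fp) else 0.0

def recall(tp: int, fn: int) -> float:
    """Correct positive predictions divided by all ground-truth instances."""
    return tp / (tp + fn) if (tp + fn) else 0.0

def mean_average_precision(ap_per_class: dict[str, float]) -> float:
    """mAP: mean of the average precision (AP) over all categories."""
    return sum(ap_per_class.values()) / len(ap_per_class)

if __name__ == "__main__":
    # Hypothetical counts for one barley variety.
    print(f"precision = {precision(tp=94, fp=6):.3f}")
    print(f"recall    = {recall(tp=94, fn=5):.3f}")
    # Hypothetical per-variety AP values.
    print(f"mAP       = {mean_average_precision({'variety_A': 0.97, 'variety_B': 0.95, 'variety_C': 0.96}):.3f}")
```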
5. Experiment and Discussion
5.1. Model Training
5.2. Result Analysis
6. Conclusions
Author Contributions
Funding
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
| Item | Model |
| --- | --- |
| CPU + Motherboard | Ryzen 9 CPU; X570 AORUS Ultra motherboard |
| CPU Cooler | Noctua NH-U14S TR4-SP3 82.52 CFM CPU Cooler |
| Memory | Corsair Vengeance 64 GB |
| Storage | Samsung 1 TB SSD |
| Video Card | RTX 3090 |
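As a small, hedged sketch (not part of the published setup), one way to confirm before training that the GPU listed above is visible to PyTorch:

```python
import torch

# Report whether CUDA is available and which GPU PyTorch will use;
# on the machine described above this should name the RTX 3090.
if torch.cuda.is_available():
    print(f"CUDA device: {torch.cuda.get_device_name(0)}")
    print(f"Memory: {torch.cuda.get_device_properties(0).total_memory / 1e9:.1f} GB")
else:
    print("No CUDA device found; training would fall back to CPU.")
```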
| Model Category | Precision | Recall | mAP | Training Time |
| --- | --- | --- | --- | --- |
| YOLOv5s | 84.6% | 78.4% | 87.3% | 6.68 h |
| YOLOv5m | 83.7% | 85.7% | 90.5% | 7.89 h |
| YOLOv5l | 94.3% | 94.8% | 96.2% | 10.72 h |
| YOLOv5x6 | 98.4% | 98.1% | 97.5% | 19.16 h |
| YOLOv5x6 on multi-barley seed dataset | 58.9% | 71.0% | 57.4% | 7.68 h |
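As a hedged sketch only, the four YOLOv5 variants compared in the table above can be loaded from the public Ultralytics hub and run on a single image; note these are the generic COCO-pretrained weights, not the fine-tuned barley models from this study, and "barley_tray.jpg" is a hypothetical file name rather than an image from the paper's dataset.

```python
import torch

# Load each YOLOv5 variant from the Ultralytics hub and run inference on one image.
for variant in ["yolov5s", "yolov5m", "yolov5l", "yolov5x6"]:
    model = torch.hub.load("ultralytics/yolov5", variant, pretrained=True)
    results = model("barley_tray.jpg")   # inference on a single image path
    results.print()                      # per-image detection summary
```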