Assessment of Different Object Detectors for the Maturity Level Classification of Broccoli Crops Using UAV Imagery
Abstract
1. Introduction
2. Materials and Methods
2.1. Experimental Location
2.2. Data Acquisition
2.3. Data Pre-Processing
2.4. Object Detection Pipelines
2.5. Evaluation Metrics
3. Results
3.1. Three-Class Approach
3.2. Two-Class Approach
4. Discussion
5. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
| Object Detector | Backbone | Input Size | FPN | COCO mAP |
|---|---|---|---|---|
| Faster R-CNN | ResNet-152 | 640 × 640 | No | 32.4 |
| SSD | MobileNet-V2 | 640 × 640 | Yes | 28.2 |
| RetinaNet | ResNet-152 | 640 × 640 | Yes | 35.4 |
| EfficientDet-D1 | EfficientNet-B1 | 640 × 640 | Yes | 38.4 |
| CenterNet | HG-104 | 512 × 512 | No | 41.9 |
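As an illustration of how detectors of this kind can be obtained and run, the sketch below loads a pre-trained model from TensorFlow Hub and applies it to a single image tile. The hub handle, the tile file name and the 0.5 confidence cut-off are assumptions for illustration only, not details taken from the paper.

```python
import tensorflow as tf
import tensorflow_hub as hub

# Hub handle is an assumption: TF2 Detection Zoo models (Faster R-CNN, SSD,
# RetinaNet, EfficientDet, CenterNet) expose the same serving signature.
DETECTOR_HANDLE = "https://tfhub.dev/tensorflow/faster_rcnn/resnet152_v1_640x640/1"

detector = hub.load(DETECTOR_HANDLE)

# "tile_0001.jpg" is a hypothetical UAV image tile.
image = tf.io.decode_jpeg(tf.io.read_file("tile_0001.jpg"), channels=3)
batch = tf.expand_dims(image, axis=0)             # uint8 tensor, shape [1, H, W, 3]

outputs = detector(batch)
boxes = outputs["detection_boxes"][0].numpy()     # [N, 4], normalised (ymin, xmin, ymax, xmax)
scores = outputs["detection_scores"][0].numpy()   # [N] confidence scores
classes = outputs["detection_classes"][0].numpy() # [N] predicted class indices

keep = scores >= 0.5                              # illustrative confidence cut-off
for box, score, cls in zip(boxes[keep], scores[keep], classes[keep]):
    print(f"class {int(cls)} score {score:.2f} box {box}")
```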
| Hyper-Parameter | Values |
|---|---|
| Optimizer | {Adam, SGD} |
| Learning Rate (LR) | {0.05, 0.01, 0.001} |
| Batch Size | {4, 8} |
| Warmup Steps | {100, 500, 1000} |
| IoU Threshold | {0.1, 0.25, 0.5} |
| Max. Detections | {10, 50, 100} |
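The search space above contains 2 × 3 × 2 × 3 × 3 × 3 = 324 combinations per detector. A minimal sketch of enumerating it with a plain grid search is given below; `train_and_evaluate()` is a hypothetical stand-in for retraining one detector with a given configuration and returning its validation mAP@50.

```python
import itertools

# Search space from the table above (324 combinations per detector).
SEARCH_SPACE = {
    "optimizer": ["Adam", "SGD"],
    "learning_rate": [0.05, 0.01, 0.001],
    "batch_size": [4, 8],
    "warmup_steps": [100, 500, 1000],
    "iou_threshold": [0.1, 0.25, 0.5],   # NMS IoU threshold at inference time
    "max_detections": [10, 50, 100],     # maximum detections kept per image
}

def configurations(space):
    """Yield every combination of the search space as a flat dict."""
    keys = list(space)
    for values in itertools.product(*(space[key] for key in keys)):
        yield dict(zip(keys, values))

# train_and_evaluate() is a hypothetical stand-in that retrains one detector
# with a given configuration and returns its validation mAP@50.
# best_config = max(configurations(SEARCH_SPACE), key=train_and_evaluate)
print(sum(1 for _ in configurations(SEARCH_SPACE)))  # 324
```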
| Object Detector | Optimizer | LR | Batch Size | Warmup Steps | IoU Threshold | Max. Detections |
|---|---|---|---|---|---|---|
| Faster R-CNN | SGD | 0.05 | 4 | 500 | 0.5 | 100 |
| SSD | SGD | 0.01 | 8 | 1000 | 0.25 | 100 |
| RetinaNet | SGD | 0.01 | 8 | 100 | 0.25 | 50 |
| EfficientDet-D1 | SGD | 0.04 | 4 | 500 | 0.5 | 50 |
| CenterNet | Adam | 0.001 | 8 | 1000 | - | - |
| Object Detector | mAP@50 | mAP@75 | mAP@small |
|---|---|---|---|
| Faster R-CNN (ResNet) | 82.6 ± 4.13 (86.3) | 71.1 ± 4.01 | 54.66 ± 9.16 |
| CenterNet | 80.43 ± 2.46 (79.15) | 73.24 ± 2.81 | 57.05 ± 8.66 |
| SSD (MobileNet) | 78.89 ± 3.69 (82.13) | 69.1 ± 3.84 | 44.85 ± 8.75 |
| RetinaNet | 78.36 ± 1.84 (81.48) | 62.24 ± 3.3 | 48.09 ± 6.01 |
| EfficientDet-D1 | 77.68 ± 3.48 (79.16) | 65.94 ± 4.22 | 48.85 ± 7.9 |
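The mAP@50, mAP@75 and mAP@small columns follow the COCO evaluation protocol. A minimal sketch of reproducing such metrics with pycocotools is given below, assuming ground truth and detections have been exported to COCO-format JSON files (the file names are placeholders).

```python
from pycocotools.coco import COCO
from pycocotools.cocoeval import COCOeval

# File names are placeholders: ground-truth annotations and detector outputs
# exported in COCO JSON format.
coco_gt = COCO("broccoli_test_annotations.json")
coco_dt = coco_gt.loadRes("broccoli_detections.json")

evaluator = COCOeval(coco_gt, coco_dt, iouType="bbox")
evaluator.evaluate()
evaluator.accumulate()
evaluator.summarize()

# After summarize(), stats[1] = AP at IoU 0.50, stats[2] = AP at IoU 0.75,
# stats[3] = AP for small objects (area < 32^2 px); values lie in [0, 1],
# so they are scaled by 100 here for comparison with percentage figures.
map50, map75, map_small = evaluator.stats[1], evaluator.stats[2], evaluator.stats[3]
print(f"mAP@50 = {100 * map50:.2f}, mAP@75 = {100 * map75:.2f}, mAP@small = {100 * map_small:.2f}")
```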
| Object Detector | mAP@50 | mAP@75 | mAP@small |
|---|---|---|---|
| Faster R-CNN (ResNet) | 84.19 ± 2.96 (83.12) | 73.6 ± 3.17 | 60.09 ± 7.78 |
| CenterNet | 83.68 ± 1.17 (85.18) | 75.59 ± 1.47 | 59.39 ± 4.31 |
| SSD (MobileNet) | 82.81 ± 2.2 (81.36) | 73.92 ± 2.1 | 59.17 ± 7.87 |
| EfficientDet-D1 | 82.76 ± 2.56 (82.45) | 71.91 ± 3.2 | 57.58 ± 7.89 |
| RetinaNet | 79.53 ± 3.21 (78.95) | 65.44 ± 6.59 | 55.75 ± 5.8 |
| Object Detector | mAP@50 | mAP@75 | mAP@small |
|---|---|---|---|
| Faster R-CNN (ResNet) | 83.17 ± 2.72 (84.16) | 73.52 ± 2.09 | 58.42 ± 6 |
| CenterNet | 80.82 ± 2.7 (81.33) | 77.17 ± 2.82 | 55.75 ± 6.21 |
| EfficientDet-D1 | 79 ± 2.01 (80.47) | 65.71 ± 2.97 | 52.44 ± 7.27 |
| RetinaNet | 77.45 ± 2.52 (76.74) | 63.24 ± 3.91 | 46.18 ± 12.16 |
| SSD (MobileNet) | 77.38 ± 2.7 (76.49) | 68.44 ± 3.2 | 45.4 ± 5.66 |
| Object Detector | mAP@50 | mAP@75 | mAP@small |
|---|---|---|---|
| CenterNet | 83.52 ± 1.57 (85.98) | 79.53 ± 1.54 | 59.73 ± 6.92 |
| RetinaNet | 83.28 ± 1.83 (84.17) | 71.6 ± 2.15 | 61.22 ± 5.21 |
| EfficientDet-D1 | 83.02 ± 1.44 (82.63) | 72.73 ± 3.22 | 58.24 ± 5.29 |
| SSD (MobileNet) | 82.56 ± 3.18 (81.92) | 72.14 ± 3.27 | 59.3 ± 7.51 |
| Faster R-CNN (ResNet) | 82.52 ± 4.32 (85.17) | 70.96 ± 5.92 | 56.04 ± 8.17 |
| Pipeline | mAP@50 | mAP@75 | mAP@small |
|---|---|---|---|
| Geometrical + CenterNet | 83.52 ± 1.57 (84.79) | 79.53 ± 1.54 | 59.73 ± 6.92 |
| Geometrical + Faster R-CNN (ResNet) | 83.28 ± 1.83 (82.85) | 71.6 ± 2.15 | 61.22 ± 5.21 |
| Colour + SSD (MobileNet) | 83.02 ± 1.44 (86.19) | 72.73 ± 3.22 | 58.24 ± 5.29 |
| Colour + RetinaNet | 82.56 ± 3.18 (82.50) | 72.14 ± 3.27 | 59.3 ± 7.51 |
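The "Colour" and "Geometrical" prefixes denote the pre-processing pipeline paired with each detector. Purely as an illustration of what a colour-based background-removal step might look like (an assumption, not the authors' exact procedure), the sketch below masks out non-vegetation pixels with an HSV threshold in OpenCV.

```python
import cv2
import numpy as np

def remove_background_by_colour(bgr_image: np.ndarray) -> np.ndarray:
    """Keep green vegetation by thresholding the HSV hue channel; everything
    else is masked to black. The hue/saturation/value bounds are illustrative
    and would need tuning for real UAV orthomosaic tiles."""
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, (35, 40, 40), (85, 255, 255))
    return cv2.bitwise_and(bgr_image, bgr_image, mask=mask)

# Hypothetical usage: the masked tile would then be passed to the detector.
# tile = cv2.imread("tile_0001.jpg")
# masked_tile = remove_background_by_colour(tile)
```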
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
© 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Psiroukis, V.; Espejo-Garcia, B.; Chitos, A.; Dedousis, A.; Karantzalos, K.; Fountas, S. Assessment of Different Object Detectors for the Maturity Level Classification of Broccoli Crops Using UAV Imagery. Remote Sens. 2022, 14, 731. https://doi.org/10.3390/rs14030731
Psiroukis V, Espejo-Garcia B, Chitos A, Dedousis A, Karantzalos K, Fountas S. Assessment of Different Object Detectors for the Maturity Level Classification of Broccoli Crops Using UAV Imagery. Remote Sensing. 2022; 14(3):731. https://doi.org/10.3390/rs14030731
Chicago/Turabian Style: Psiroukis, Vasilis, Borja Espejo-Garcia, Andreas Chitos, Athanasios Dedousis, Konstantinos Karantzalos, and Spyros Fountas. 2022. "Assessment of Different Object Detectors for the Maturity Level Classification of Broccoli Crops Using UAV Imagery" Remote Sensing 14, no. 3: 731. https://doi.org/10.3390/rs14030731
APA Style: Psiroukis, V., Espejo-Garcia, B., Chitos, A., Dedousis, A., Karantzalos, K., & Fountas, S. (2022). Assessment of Different Object Detectors for the Maturity Level Classification of Broccoli Crops Using UAV Imagery. Remote Sensing, 14(3), 731. https://doi.org/10.3390/rs14030731