A Two-Step Phenotypic Parameter Measurement Strategy for Overlapped Grapes under Different Light Conditions
Abstract
1. Introduction
2. Methodology
2.1. Step One: Contour Detection
2.2. Step Two: Contour Fitting
2.2.1. Candidate Region Generation
2.2.2. Iterative Least Squares Ellipse Fitting
- (1) For the input edge contour map, record the edge contour pixels as E;
- (2) Determine the number of iterations K;
- (3) Determine the evaluation criterion of model fitness;
- (4) Randomly select 5 pixels from E and perform least squares ellipse fitting; calculate the fit degree of the resulting ellipse, denoted as f. If the fit degree of the current model is higher than that of every model from previous iterations, record the current model as the best so far;
- (5) Repeat step (4) until the loop has run K times, then output the best model as the final ellipse equation.
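Steps (1)–(5) amount to a RANSAC-style loop. The following Python sketch is a minimal illustration under our own assumptions, not the paper's implementation: it uses numpy, takes the fit degree f to be the inlier fraction under an algebraic-distance tolerance, and the names `fit_conic`, `ransac_ellipse`, and `tol` are illustrative.

```python
import numpy as np

def fit_conic(pts):
    # Least squares conic a*x^2 + b*x*y + c*y^2 + d*x + e*y + f = 0,
    # taken as the null-space vector of the design matrix via SVD.
    x, y = pts[:, 0], pts[:, 1]
    D = np.column_stack([x**2, x * y, y**2, x, y, np.ones_like(x)])
    _, _, Vt = np.linalg.svd(D)
    return Vt[-1]  # unit-norm coefficient vector of the best-fitting conic

def algebraic_residual(coef, pts):
    # |a*x^2 + b*x*y + c*y^2 + d*x + e*y + f| for each point
    x, y = pts[:, 0], pts[:, 1]
    a, b, c, d, e, f = coef
    return np.abs(a * x**2 + b * x * y + c * y**2 + d * x + e * y + f)

def ransac_ellipse(E, K=500, tol=1e-2, rng=None):
    """RANSAC-style loop over edge pixels E (N x 2 array): repeatedly fit a
    conic to 5 random pixels and keep the model whose inlier fraction
    (the 'fit degree' f) is the highest seen so far."""
    rng = np.random.default_rng(rng)
    best_f, best_coef = -1.0, None
    for _ in range(K):
        sample = E[rng.choice(len(E), size=5, replace=False)]
        coef = fit_conic(sample)
        a, b, c = coef[0], coef[1], coef[2]
        if b * b - 4 * a * c >= 0:
            continue  # discard non-elliptic conics
        f = np.mean(algebraic_residual(coef, E) < tol)  # inlier fraction
        if f > best_f:
            best_f, best_coef = f, coef
    return best_coef, best_f
```

Because 5 points in general position determine a conic uniquely, each noise-free sample drawn from a true ellipse recovers that ellipse exactly; on real edge maps the inlier test is what rejects samples contaminated by neighboring berries.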
3. Experiments
3.1. Grape Collection and Labeling
3.2. Data Preparation for Object Detection
4. Results and Discussion
4.1. Contour Detection Results
4.1.1. Model Performance Comparison
4.1.2. Comparison Test of Different Illumination Angles
4.2. Candidate Region Generation Results
4.3. Contour-Fitting Results
4.3.1. Recall Rate of Grape Contour Fitting
4.3.2. Contour-Fitting Accuracy Test
4.4. Continuous Monitoring of the Projected Area of Grapes
5. Summary and Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
- Cifre, J.; Bota, J.; Escalona, J.M.; Medrano, H.; Flexas, J. Physiological tools for irrigation scheduling in grapevine (Vitis vinifera L.): An open gate to improve water-use efficiency? Agric. Ecosyst. Environ. 2005, 106, 159–170.
- Gurovich, L.A.; Ton, Y.; Vergara, L.M. Irrigation scheduling of avocado using phytomonitoring techniques. Cienc. Investig. Agrar. 2006, 33, 117–124.
- Jones, H.G. Irrigation scheduling: Advantages and pitfalls of plant-based methods. J. Exp. Bot. 2004, 55, 2427–2436.
- Luo, L.; Tang, Y.; Lu, Q.; Chen, X.; Zhang, P.; Zou, X. A vision methodology for harvesting robot to detect cutting points. Comput. Ind. 2018, 99, 130–139.
- Tang, Y.C.; Wang, C.; Luo, L.; Zou, X. Recognition and Localization Methods for Vision-Based Fruit Picking Robots: A Review. Front. Plant Sci. 2020, 11, 510.
- Jin, Y. Study on the South Grape Expert System Using Artificial Neural Network and Machine Vision. Ph.D. Thesis, Hunan Agricultural University, Changsha, China, 2009. (In Chinese).
- Arefi, A.; Motlagh, A.M.; Mollazade, K.; Teimourlou, R.F. Recognition and localization of ripen tomato based on machine vision. Aust. J. Crop Sci. 2011, 5, 1144–1149.
- Xiang, R.; Ying, Y.B.; Jiang, H.Y.; Rao, X.; Peng, Y. Recognition of overlapping tomatoes based on edge curvature analysis. Trans. Chin. Soc. Agric. Mach. 2012, 43, 157–162. (In Chinese with English abstract).
- Wang, Q.; Ding, Y.; Luo, J.; Xu, K.; Li, M. Automatic Grading Device for Red Globe Grapes Based on Machine Vision and Method Thereof. Patent CN102680414A, 19 September 2012. (In Chinese).
- Yan, L.; Park, C.W.; Lee, S.R.; Lee, C.Y. New separation algorithm for touching grain kernels based on contour segments and ellipse fitting. J. Zhejiang Univ. Sci. C 2011, 12, 54–61.
- Wang, C.; Tang, Y.; Zou, X.; SiTu, W.; Feng, W. A robust fruit image segmentation algorithm against varying illumination for vision system of fruit harvesting robot. Optik 2017, 131, 626–631.
- Arbelaez, P.; Maire, M.; Fowlkes, C.; Malik, J. Contour detection and hierarchical image segmentation. IEEE Trans. Pattern Anal. Mach. Intell. 2010, 33, 898–916.
- Dollár, P.; Zitnick, C.L. Fast edge detection using structured forests. IEEE Trans. Pattern Anal. Mach. Intell. 2014, 37, 1558–1570.
- Ganin, Y.; Lempitsky, V. N4-fields: Neural network nearest neighbor fields for image transforms. In Proceedings of the Asian Conference on Computer Vision (ACCV), Singapore, 1–5 November 2014; Springer: Cham, Switzerland, 2014.
- Bertasius, G.; Shi, J.; Torresani, L. DeepEdge: A multi-scale bifurcated deep network for top-down contour detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Boston, MA, USA, 7–12 June 2015.
- Shen, W.; Wang, X.; Wang, Y.; Bai, X.; Zhang, Z. DeepContour: A deep convolutional feature learned by positive-sharing loss for contour detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Boston, MA, USA, 7–12 June 2015.
- Xie, S.; Tu, Z. Holistically-nested edge detection. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), Santiago, Chile, 7–13 December 2015.
- Liu, Y.; Cheng, M.M.; Hu, X.; Wang, K.; Bai, X. Richer convolutional features for edge detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017.
- Yang, J.; Price, B.; Cohen, S.; Lee, H.; Yang, M.H. Object contour detection with a fully convolutional encoder-decoder network. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016.
- Simonyan, K.; Zisserman, A. Very deep convolutional networks for large-scale image recognition. In Proceedings of the International Conference on Learning Representations (ICLR), San Diego, CA, USA, 7–9 May 2015.
- Dice, L.R. Measures of the amount of ecologic association between species. Ecology 1945, 26, 297–302.
- Deng, R.; Shen, C.; Liu, S.; Wang, H.; Liu, X. Learning to predict crisp boundaries. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018.
- Ren, S.; He, K.; Girshick, R.; Sun, J. Faster R-CNN: Towards real-time object detection with region proposal networks. arXiv 2015, arXiv:1506.01497.
- Redmon, J.; Farhadi, A. YOLOv3: An incremental improvement. arXiv 2018, arXiv:1804.02767.
- Russell, B.C.; Torralba, A.; Murphy, K.P.; Freeman, W.T. LabelMe: A database and web-based tool for image annotation. Int. J. Comput. Vis. 2008, 77, 157–173.
- Zitnick, C.L.; Dollár, P. Edge boxes: Locating object proposals from edges. In Proceedings of the European Conference on Computer Vision (ECCV), Zurich, Switzerland, 6–12 September 2014.
- Meng, C.; Li, Z.; Bai, X.; Zhou, F. Arc Adjacency Matrix-Based Fast Ellipse Detection. IEEE Trans. Image Process. 2020, 29, 4406–4420.
- Pătrăucean, V.; Gurdjos, P.; Von Gioi, R.G. A parameterless line segment and elliptical arc detector with enhanced ellipse fitting. In Proceedings of the European Conference on Computer Vision (ECCV), Florence, Italy, 7–13 October 2012.
- Lou, Y.; Miao, Y.; Wang, Z.; Wang, L.; Li, J.; Zhang, C.; Xu, W.; Inoue, M.; Wang, S. Establishment of the soil water potential threshold to trigger irrigation of Kyoho grapevines based on berry expansion, photosynthetic rate and photosynthetic product allocation. Aust. J. Grape Wine Res. 2016, 22, 316–323.
- Chen, Y.J.; Chen, L.; Lou, Y.S.; Qin, Z.G.; Dong, X.; Ma, C.; Miao, Y.B.; Zhang, C.X.; Xu, W.P.; Wang, S.P. Determination of thresholds to trigger irrigation of ‘Kyoho’ grapevine during berry development periods based on variations of shoot diameter and berry projected area. J. Fruit Sci. 2019, 36, 612–620.
Network Structure | ODS | OIS | FPS |
---|---|---|---|
DeepEdge | 0.527 | 0.571 | 8 |
HED-MS | 0.841 | 0.853 | 32 |
RCF-MS | 0.861 | 0.867 | 33 |
Ours | 0.857 | 0.868 | 38 |
Light Conditions | Algorithm | D | R | A
---|---|---|---|---
Side light | Ours | 0.33 | 0.69 | 35.7%
 | HED-MS | 0.24 | 0.62 | 48.3%
 | RCF | 0.30 | 0.67 | 33.9%
 | Canny | 0.27 | 0.67 | 46.5%
 | DeepEdge | 0.20 | 0.42 | 41.6%
Back light | Ours | 0.27 | 0.59 | 33.1%
 | HED-MS | 0.19 | 0.57 | 47.5%
 | RCF-MS | 0.23 | 0.54 | 31.5%
 | Canny | 0.21 | 0.49 | 43.2%
 | DeepEdge | 0.20 | 0.38 | 35.1%
Front light | Ours | 0.31 | 0.63 | 32.3%
 | HED-MS | 0.22 | 0.59 | 43.3%
 | RCF-MS | 0.28 | 0.59 | 31.0%
 | Canny | 0.18 | 0.51 | 51.9%
 | DeepEdge | 0.24 | 0.47 | 34.2%
Index | F1/% | Precision/% | Recall/% |
---|---|---|---|
Score | 95.44 | 94.65 | 96.24 |
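As a quick consistency check, the F1 score in the table is the harmonic mean of precision and recall; a minimal sketch (the function name is ours):

```python
def f1_score(precision, recall):
    # F1 is the harmonic mean of precision and recall (here all in %)
    return 2 * precision * recall / (precision + recall)

print(round(f1_score(94.65, 96.24), 2))  # -> 95.44, matching the table
```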
Fitting Method | Recall/%
---|---
Ours | 96.34
AAMED | 88.7
ELSD | 81.5
Serial Number | Least Squares Method/Pixel | AAMED/Pixel | ELSD/Pixel | Ours/Pixel | Ground Truth/Pixel |
---|---|---|---|---|---|
1 | 530 | 537 | 560 | 542 | 545 |
2 | 545 | 540 | 540 | 548 | 550 |
3 | 589 | 595 | 595 | 599 | 603 |
4 | 600 | 587 | 590 | 591 | 593 |
5 | 570 | 570 | 571 | 576 | 577 |
6 | 564 | 561 | 561 | 563 | 569 |
7 | 553 | 573 | 569 | 575 | 573 |
8 | 575 | 587 | 584 | 578 | 581 |
9 | 573 | 579 | 570 | 573 | 573 |
10 | 555 | 562 | 560 | 570 | 569 |
AARD | 1.62% | 1.16% | 1.21% | 0.62% |
Serial Number | Least Squares Method/Pixel | AAMED/Pixel | ELSD/Pixel | Ours/Pixel | Ground Truth/Pixel |
---|---|---|---|---|---|
1 | 304,285 | 315,765 | 305,894 | 315,951 | 318,959 |
2 | 342,628 | 347,598 | 342,146 | 347,561 | 345,793 |
3 | 274,105 | 289,075 | 274,589 | 289,014 | 288,606 |
4 | 220,553 | 216,800 | 201,454 | 216,859 | 219,015 |
5 | 238,936 | 242,457 | 245,896 | 241,367 | 242,047 |
6 | 199,784 | 204,784 | 203,654 | 205,657 | 203,955 |
7 | 55,334 | 55,697 | 54,356 | 53,738 | 54,728 |
8 | 39,128 | 42,475 | 40,289 | 39,418 | 40,242 |
9 | 54,456 | 53,457 | 53,006 | 53,982 | 53,287 |
10 | 57,055 | 57,689 | 56,126 | 56,529 | 56,224 |
AARD | 2.21% | 1.35% | 2.12% | 0.88% |
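AARD in the two accuracy tables is consistent with the average absolute relative deviation from ground truth: applying the definition below to the Least Squares column of the first table reproduces its 1.62%. This is a sketch under that assumed definition; the function name is ours.

```python
def aard(estimates, ground_truth):
    # average absolute relative deviation with respect to ground truth
    devs = [abs(e - g) / g for e, g in zip(estimates, ground_truth)]
    return sum(devs) / len(devs)

# Least Squares column vs. ground truth from the first accuracy table
ls = [530, 545, 589, 600, 570, 564, 553, 575, 573, 555]
gt = [545, 550, 603, 593, 577, 569, 573, 581, 573, 569]
print(round(aard(ls, gt) * 100, 2))  # -> 1.62, matching the table's AARD row
```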
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
© 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Miao, Y.; Huang, L.; Zhang, S. A Two-Step Phenotypic Parameter Measurement Strategy for Overlapped Grapes under Different Light Conditions. Sensors 2021, 21, 4532. https://doi.org/10.3390/s21134532