Green Grape Detection and Picking-Point Calculation in a Night-Time Natural Environment Using a Charge-Coupled Device (CCD) Vision Sensor with Artificial Illumination
Abstract
1. Introduction
2. Materials and Methods
2.1. Image Acquisition
2.2. Color Feature Analysis of Images
3. Grape Detection and Picking-Point Calculation
3.1. Algorithm Flow Diagram
3.2. Image Pre-Processing
3.3. Fruit Area Extraction
3.3.1. Improved Chan–Vese (C–V) Level-Set Model
3.3.2. Improved Algorithm Speed
3.3.3. Fruit Detection
Algorithm 1. Pseudocode of image segmentation and target extraction

Input: night-time grape image

- Step 1. Extract the R-component histogram of the night-time image, then count the number a of pixels with values greater than 140 and the total number b of pixels in the image. If Formula (5) is satisfied, go to Step 2; otherwise, stop the picking operation.
- Step 2. Segment the grape image using the improved C–V level-set method, then apply morphological processing to remove minor noise.
- Step 3. Count the connected regions. If there is more than one, retain the largest; if there is exactly one, regard the image as containing a single grape cluster.
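Algorithm 1 can be sketched in code. The following is a minimal illustration, not the authors' implementation: `MIN_RATIO` is a hypothetical placeholder for the Formula (5) criterion, whose exact form is defined elsewhere in the paper, and the improved C–V level-set segmentation of Step 2 is omitted.

```python
import numpy as np

# Threshold on the R component, from Step 1 of Algorithm 1.
R_THRESHOLD = 140
# Assumed placeholder for the Formula (5) criterion (not the real formula).
MIN_RATIO = 0.01

def grapes_present(rgb_image, min_ratio=MIN_RATIO):
    """Step 1: decide whether the image plausibly contains grapes.

    Counts the R-channel pixels brighter than 140 (a) against the total
    pixel count (b); picking proceeds only if the ratio a/b passes the
    test (approximated here as a simple threshold on a/b).
    """
    r = rgb_image[..., 0]
    a = np.count_nonzero(r > R_THRESHOLD)
    b = r.size
    return a / b >= min_ratio

def largest_region(label_image):
    """Step 3: keep only the largest connected region of a labelled mask."""
    labels, counts = np.unique(label_image[label_image > 0],
                               return_counts=True)
    if labels.size == 0:
        return np.zeros_like(label_image, dtype=bool)
    return label_image == labels[np.argmax(counts)]
```

If exactly one connected region remains after Step 3, the mask is treated as a single grape cluster and passed on to the picking-point calculation.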
3.4. Picking-Point Calculation
- Step 1. Determine the minimum enclosing rectangle of the grape cluster and compute its centroid.
- Step 2. Draw a vertical line through the centroid, set the area above the rectangle as the region of interest (the red box in Figure 9e), detect straight lines in this region using the Hough line-detection method, and discard any detected line whose angle to the vertical exceeds 15°, as shown in Figure 9a,b.
- Step 3. For each remaining line, calculate its angle to the vertical; take the fitted line with the smallest such angle as the line containing the picking point, and take the midpoint of the fitted segment as the picking point, as shown in Figure 9c.
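Steps 2 and 3 above can be sketched as a short filtering-and-selection routine. The function name and the segment format are illustrative assumptions; the segments themselves would come from a Hough line detector run on the region of interest.

```python
import math

def picking_point(lines, max_angle_deg=15.0):
    """Select the picking point from candidate stem-line segments.

    `lines` is a list of ((x1, y1), (x2, y2)) segments detected in the
    region above the cluster. Segments deviating from vertical by more
    than `max_angle_deg` are discarded (Step 2); of the rest, the most
    nearly vertical one is taken as the stem, and its midpoint is
    returned as the picking point (Step 3).
    """
    best = None
    best_angle = max_angle_deg
    for (x1, y1), (x2, y2) in lines:
        # Angle between the segment and the vertical direction, in degrees.
        angle = math.degrees(math.atan2(abs(x2 - x1), abs(y2 - y1)))
        if angle <= best_angle:
            best_angle = angle
            best = ((x1 + x2) / 2.0, (y1 + y2) / 2.0)
    return best  # None if no segment is within 15 degrees of vertical
```

A perfectly vertical segment has angle 0° and always wins; a horizontal one has angle 90° and is always rejected.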
4. Results and Discussion
4.1. Grape Fruit Detection Experiment
4.2. Picking-Point Calculation Experiment
4.3. Algorithm Running Time Experiment
5. Conclusions and Future Work
- (1) The color models of the night-time grape images were analyzed using an exploratory analysis method; the results show that the R component of the RGB image is the most suitable for night-time detection of green grape clusters.
- (2) Based on the R component of the night-time grape image, the background was removed by the improved C–V level-set model combined with morphological processing. The visual-detection experiments show that the accuracy of grape-fruit detection was at least 91.67%, and the average running time of the proposed algorithm was 0.46 s.
- (3) Exploiting the growth characteristics of grapes, the Hough line-detection method was used to fit the fruit stem above the fruit, and the picking point on the stem was determined. The picking-point experiments on night-time grape clusters showed that the accuracy was highest (92.5%) at a depth of 500 mm and lowest (80.0%) at 1000 mm. From image input to result output (the picking-point coordinates or a stop command), the average running time per night-time image was 0.58 s. This study provides technical support for grape-picking robots operating in a natural environment.
Acknowledgments
Author Contributions
Conflicts of Interest
| Reality | Classified as a: Amount | Classified as a: Ratio | Classified as b: Amount | Classified as b: Ratio |
|---|---|---|---|---|
| a ¹ | 193 | 96.5% | 7 | 3.5% |
| b ² | 9 | 4.5% | 191 | 95.5% |
| | Total Images | Correct Area >90% | Correct Area 70–90% | Correct Area <70% | False Area <5% | False Area 5–15% | False Area >15% |
|---|---|---|---|---|---|---|---|
| Amount | 300 | 275 | 12 | 13 | 271 | 16 | 13 |
| Ratio | 100% | 91.67% | 4.00% | 4.33% | 90.33% | 5.33% | 4.33% |
| Reality | Segmented as Fruit: Amount | Segmented as Fruit: Ratio | Segmented as Background: Amount | Segmented as Background: Ratio |
|---|---|---|---|---|
| Fruit pixels | 34,545,372 | 91.70% | 3,125,679 | 8.30% |
| Background pixels | 179,314 | 6.34% | 2,649,635 | 93.66% |
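The pixel-level confusion matrix above directly yields the standard segmentation metrics; a short check, using the table's own counts:

```python
# Pixel-level confusion matrix from the segmentation experiment.
tp = 34_545_372   # fruit pixels segmented as fruit
fn = 3_125_679    # fruit pixels segmented as background
fp = 179_314      # background pixels segmented as fruit
tn = 2_649_635    # background pixels segmented as background

recall = tp / (tp + fn)     # fraction of true fruit pixels recovered
precision = tp / (tp + fp)  # fraction of fruit-labelled pixels that are fruit

print(f"recall    = {recall:.2%}")   # matches the 91.70% in the table
print(f"precision = {precision:.2%}")
```

The 91.70% figure in the table is thus the pixel-level recall of the fruit class; the corresponding precision is above 99%, since very few background pixels are mislabelled as fruit.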
| Depth (mm) | Images with Visible Grapes | Row Error: 0 px | Row Error: 1–5 px | Row Error: >5 px | Accuracy Rate (%) ¹ |
|---|---|---|---|---|---|
| 500 | 40 | 26 | 11 | 3 | 92.5 |
| 600 | 40 | 24 | 12 | 4 | 90.0 |
| 700 | 40 | 23 | 13 | 4 | 90.0 |
| 800 | 40 | 18 | 16 | 5 | 85.0 |
| 900 | 40 | 15 | 19 | 6 | 85.0 |
| 1000 | 40 | 10 | 22 | 8 | 80.0 |
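The accuracy-rate column is consistent with counting a picking-point calculation as correct when its row error is at most five pixels, which can be verified from the counts themselves:

```python
# (depth_mm, n_images, err_0px, err_1to5px, err_gt5px, reported_accuracy_%)
rows = [
    (500, 40, 26, 11, 3, 92.5),
    (600, 40, 24, 12, 4, 90.0),
    (700, 40, 23, 13, 4, 90.0),
    (800, 40, 18, 16, 5, 85.0),
    (900, 40, 15, 19, 6, 85.0),
    (1000, 40, 10, 22, 8, 80.0),
]

for depth, n, e0, e15, egt5, reported in rows:
    # Errors of at most 5 pixels count as correct.
    accuracy = 100.0 * (e0 + e15) / n
    assert abs(accuracy - reported) < 1e-9, depth
```

Note that accuracy falls monotonically with depth, from 92.5% at 500 mm to 80.0% at 1000 mm, as the stem occupies fewer pixels in the image.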
© 2018 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
Share and Cite
Xiong, J.; Liu, Z.; Lin, R.; Bu, R.; He, Z.; Yang, Z.; Liang, C. Green Grape Detection and Picking-Point Calculation in a Night-Time Natural Environment Using a Charge-Coupled Device (CCD) Vision Sensor with Artificial Illumination. Sensors 2018, 18, 969. https://doi.org/10.3390/s18040969