Controlled Lighting and Illumination-Independent Target Detection for Real-Time Cost-Efficient Applications. The Case Study of Sweet Pepper Robotic Harvesting
Abstract
1. Introduction
2. Algorithms
2.1. FNF Algorithm
- Detect Flash/No-Flash Illumination. While the camera was configured to alternate triggering of the LED array between frames, various factors could interrupt this timing, such as the camera’s variable frame rate, “dropped” frames, and communication latency between system components (camera/PC, camera/LED controller). This necessitated constant evaluation of the incoming image stream to determine which images were taken under flash illumination. To accomplish this, the average brightness of consecutive images was compared; if the difference exceeded a manually defined threshold, the images were considered a valid FNF pair. The system’s FNF threshold was determined once via field testing and provided stable performance throughout the database acquisition process.
- Subtract Latest Flash/No-Flash Image Pair. Once a valid FNF image pair was acquired, the “no-Flash” image was subtracted from the “Flash” image on a per-pixel basis. Color artifacts were avoided by excluding overexposed or “saturated” pixels in the “Flash” image from this subtraction. Similarly, pixels that contained negative values after subtraction were set to 0 in order to produce a valid RGB image. The basic process of FNF image acquisition and its results are demonstrated in Figure 2; a minimal sketch of both steps follows this list.
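The following Python/NumPy sketch illustrates the two steps above. It is not the authors’ implementation: the brightness threshold value, the saturation cutoff, and the choice to leave saturated pixels unmodified are illustrative assumptions.

```python
import numpy as np

def pair_up_fnf(frame_a, frame_b, brightness_threshold=20.0):
    """Check whether two consecutive frames form a valid Flash/No-Flash pair.

    The mean brightness of the frames is compared; if the difference exceeds a
    manually tuned threshold (placeholder value here), the brighter frame is
    taken to be the "Flash" image. Returns (flash, no_flash) or None.
    """
    mean_a, mean_b = float(frame_a.mean()), float(frame_b.mean())
    if abs(mean_a - mean_b) < brightness_threshold:
        return None  # timing glitch, dropped frame, etc. -- not a valid pair
    return (frame_a, frame_b) if mean_a > mean_b else (frame_b, frame_a)

def subtract_fnf_pair(flash, no_flash, saturation_level=250):
    """Per-pixel subtraction of the no-Flash image from the Flash image.

    Overexposed ("saturated") Flash pixels are excluded from the subtraction
    (kept as-is here, one possible reading of the text), and negative results
    are clipped to 0 so the output remains a valid RGB image.
    """
    flash_i = flash.astype(np.int16)
    diff = np.clip(flash_i - no_flash.astype(np.int16), 0, 255)
    saturated = flash_i >= saturation_level
    diff[saturated] = flash_i[saturated]
    return diff.astype(np.uint8)
```

In such a pipeline, `pair_up_fnf` would be applied to every pair of consecutive frames, and only pairs passing the brightness test would be forwarded to `subtract_fnf_pair`.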
2.2. Color-Based Detection Algorithm
- Hue level: 20/360–50/360
- Saturation level: 90/255–255/255
- Minimum object size: 400 px (image resolution: 320 × 240); a minimal thresholding sketch using these parameters follows this list.
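A minimal sketch of such a color-based detector, assuming an OpenCV pipeline rather than the paper’s actual implementation: the hue range above is converted to OpenCV’s 0–180 H scale, no constraint is placed on the V channel since none is listed, and detections smaller than the minimum object size are discarded.

```python
import cv2
import numpy as np

# Thresholds from Section 2.2, converted to OpenCV's HSV conventions
# (H in [0, 180], S in [0, 255]); a 320x240 input image is assumed.
HUE_RANGE = (10, 25)      # 20/360-50/360 degrees -> 10-25 on OpenCV's 0-180 scale
SAT_RANGE = (90, 255)     # saturation 90/255-255/255
MIN_OBJECT_SIZE = 400     # minimum blob area in pixels

def detect_mature_peppers(bgr_image):
    """Color-based detection: HSV thresholding followed by blob-size filtering."""
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    lower = np.array([HUE_RANGE[0], SAT_RANGE[0], 0], dtype=np.uint8)
    upper = np.array([HUE_RANGE[1], SAT_RANGE[1], 255], dtype=np.uint8)
    mask = cv2.inRange(hsv, lower, upper)

    # Keep only connected components at least as large as the minimum object size.
    num_labels, labels, stats, _ = cv2.connectedComponentsWithStats(mask)
    boxes = []
    for i in range(1, num_labels):  # label 0 is the background
        if stats[i, cv2.CC_STAT_AREA] >= MIN_OBJECT_SIZE:
            x = stats[i, cv2.CC_STAT_LEFT]
            y = stats[i, cv2.CC_STAT_TOP]
            w = stats[i, cv2.CC_STAT_WIDTH]
            h = stats[i, cv2.CC_STAT_HEIGHT]
            boxes.append((x, y, w, h))
    return boxes
```

At the stated 320 × 240 resolution, the 400 px minimum corresponds to roughly 0.5% of the image area.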
2.3. Deep-Learning Based Algorithm
3. Methods
3.1. Data
3.2. Data Acquisition
3.3. Data Processing and Labelling
3.4. Performance Measures
- FNF images vs. Flash-only images. To evaluate the impact of the FNF acquisition methodology on the appearance of the processed images, we first computed the hue and saturation distributions of images acquired with the FNF protocol and compared them to the same measures for the Flash-only images.
- Detection accuracy measures. To evaluate detection performance, we computed the precision and recall (Equations (1) and (2), restated after this list) of both algorithms on both the Flash-only and FNF data.
- Time measures. To evaluate the resources required by the color-based detection algorithm compared with the deep learning algorithm, training and operation times were logged on several hardware configurations.
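Equations (1) and (2) themselves are not reproduced in this extract. Assuming they are the standard definitions, in terms of the true positive (TP), false positive (FP), and false negative (FN) counts enumerated in Section 4.2, they read:

```latex
\mathrm{Precision} = \frac{TP}{TP + FP} \quad (1)
\qquad
\mathrm{Recall} = \frac{TP}{TP + FN} \quad (2)
```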
3.5. Sensitivity Analysis
- “Strict”—Detection of partially matured peppers is considered a false positive.
- “Flexible”—Detection of partially matured peppers is considered a true positive.
4. Results
4.1. FNF Images vs. Flash-Only Images
4.2. Color-Based Detection Results
- TP—Correct detection of a fruit.
- FP2—Partially-mature fruit detected as mature.
- FP1—Non-mature fruit detected as mature.
- DC—Distant, out-of-range fruit detected (ignored).
- FP—False detection (no fruit at detected location).
- FN—False misdetection (fruit present but not detected). A sketch of how these categories map to the strict and flexible measures follows.
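The sketch below is a hypothetical helper, not the authors’ evaluation code. It shows one way the per-image counts of these categories could be combined into the strict and flexible precision and recall of Sections 3.5 and 4.2: FP2 detections count as false positives under the strict criterion and as true positives under the flexible criterion, DC detections are ignored in both, and the exact accounting of partially mature fruit in the recall denominator is an assumption.

```python
from dataclasses import dataclass

@dataclass
class Counts:
    """Per-image detection outcome counts, following the categories in Section 4.2."""
    tp: int = 0    # correct detection of a mature fruit
    fp2: int = 0   # partially mature fruit detected as mature
    fp1: int = 0   # non-mature fruit detected as mature
    dc: int = 0    # distant, out-of-range fruit detected (ignored)
    fp: int = 0    # detection with no fruit at that location
    fn: int = 0    # mature fruit present but not detected

def precision_recall(c: Counts, flexible: bool):
    """Strict vs. flexible scoring (assumed accounting, for illustration only)."""
    tp = c.tp + (c.fp2 if flexible else 0)
    fp = c.fp1 + c.fp + (0 if flexible else c.fp2)
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + c.fn) if (tp + c.fn) else 0.0
    return precision, recall
```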
4.3. Deep Learning Results
5. Conclusions
Author Contributions
Funding
Conflicts of Interest
References
- Bac, C.W.; Henten, E.J.; Hemming, J.; Edan, Y. Harvesting Robots for High-value Crops: State-of-the-art Review and Challenges Ahead. J. Field Robot. 2014, 31, 888–911.
- Kapach, K.; Barnea, E.; Mairon, R.; Edan, Y.; Ben-Shahar, O. Computer vision for fruit harvesting robots–state of the art and challenges ahead. Int. J. Comput. Vis. Robot. 2012, 3, 4–34.
- Gongal, A.; Amatya, S.; Karkee, M.; Zhang, Q.; Lewis, K. Sensors and systems for fruit detection and localization: A review. Comput. Electron. Agric. 2015, 116, 8–19.
- Sa, I.; Ge, Z.; Dayoub, F.; Upcroft, B.; Perez, T.; McCool, C. Deepfruits: A fruit detection system using deep neural networks. Sensors 2016, 16, 1222.
- McCool, C.; Sa, I.; Dayoub, F.; Lehnert, C.; Perez, T.; Upcroft, B. Visual detection of occluded crop: For automated harvesting. In Proceedings of the 2016 IEEE International Conference on Robotics and Automation (ICRA), Stockholm, Sweden, 16–21 May 2016; pp. 2506–2512.
- Kamilaris, A.; Prenafeta-Boldú, F.X. Deep learning in agriculture: A survey. Comput. Electron. Agric. 2018, 147, 70–90.
- Ostovar, A.; Ringdahl, O.; Hellström, T. Adaptive Image Thresholding of Yellow Peppers for a Harvesting Robot. Robotics 2018, 7, 11.
- Chen, S.W.; Shivakumar, S.S.; Dcunha, S.; Das, J.; Okon, E.; Qu, C.; Taylor, C.J.; Kumar, V. Counting apples and oranges with deep learning: A data-driven approach. IEEE Robot. Autom. Lett. 2017, 2, 781–788.
- McCool, C.; Perez, T.; Upcroft, B. Mixtures of lightweight deep convolutional neural networks: Applied to agricultural robotics. IEEE Robot. Autom. Lett. 2017, 2, 1344–1351.
- Milioto, A.; Lottes, P.; Stachniss, C. Real-time blob-wise sugar beets vs weeds classification for monitoring fields using convolutional neural networks. ISPRS Ann. Photogramm. Remote Sens. Spat. Inf. Sci. 2017, 4, 41.
- Vitzrabin, E.; Edan, Y. Adaptive thresholding with fusion using a RGBD sensor for red sweet-pepper detection. Biosyst. Eng. 2016, 146, 45–56.
- Zheng, L.; Zhang, J.; Wang, Q. Mean-shift-based color segmentation of images containing green vegetation. Comput. Electron. Agric. 2009, 65, 93–98.
- Kurtser, P.; Edan, Y. Statistical models for fruit detectability: Spatial and temporal analyses of sweet peppers. Biosyst. Eng. 2018, 171, 272–289.
- Barth, R.; IJsselmuiden, J.; Hemming, J.; Van Henten, E.J. Data synthesis methods for semantic segmentation in agriculture: A Capsicum annuum dataset. Comput. Electron. Agric. 2018, 144, 284–296.
- Nguyen, B.P.; Heemskerk, H.; So, P.T.C.; Tucker-Kellogg, L. Superpixel-based segmentation of muscle fibers in multi-channel microscopy. BMC Syst. Biol. 2016, 10, 124.
- Chen, X.; Nguyen, B.P.; Chui, C.K.; Ong, S.H. Automated brain tumor segmentation using kernel dictionary learning and superpixel-level features. In Proceedings of the 2016 IEEE International Conference on Systems, Man, and Cybernetics (SMC), Budapest, Hungary, 9–12 October 2016; pp. 002547–002552.
- Hernandez-Lopez, J.J.; Quintanilla-Olvera, A.L.; López-Ramírez, J.L.; Rangel-Butanda, F.J.; Ibarra-Manzano, M.A.; Almanza-Ojeda, D.L. Detecting objects using color and depth segmentation with Kinect sensor. Procedia Technol. 2012, 3, 196–204.
- Li, Y.; Birchfield, S.T. Image-based segmentation of indoor corridor floors for a mobile robot. In Proceedings of the 2010 IEEE/RSJ International Conference on Intelligent Robots and Systems, Taipei, Taiwan, 18–22 October 2010; pp. 837–843.
- Śluzek, A. Novel machine vision methods for outdoor and built environments. Autom. Constr. 2010, 19, 291–301.
- Wu, X.; Pradalier, C. Illumination Robust Monocular Direct Visual Odometry for Outdoor Environment Mapping. HAL 2018, hal-01876700.
- Son, J.; Kim, S.; Sohn, K. A multi-vision sensor-based fast localization system with image matching for challenging outdoor environments. Expert Syst. Appl. 2015, 42, 8830–8839.
- He, S.; Lau, R.W.H. Saliency Detection with Flash and No-flash Image Pairs. In Computer Vision—ECCV 2014: 13th European Conference, Zurich, Switzerland, September 6–12, 2014, Proceedings, Part III; Springer International Publishing: Cham, Switzerland, 2014; pp. 110–124.
- Bargoti, S.; Underwood, J.P. Image segmentation for fruit detection and yield estimation in apple orchards. J. Field Robot. 2017, 34, 1039–1060.
- Ren, S.; He, K.; Girshick, R.; Sun, J. Faster R-CNN: Towards real-time object detection with region proposal networks. In Proceedings of the 28th International Conference on Neural Information Processing Systems, Montreal, QC, Canada, 7–12 December 2015; pp. 91–99.
- Liu, W.; Anguelov, D.; Erhan, D.; Szegedy, C.; Reed, S.; Fu, C.Y.; Berg, A.C. SSD: Single shot multibox detector. In Computer Vision—ECCV 2016: 14th European Conference, Amsterdam, The Netherlands, October 11–14, 2016, Proceedings, Part I; Springer: Cham, Switzerland, 2016; pp. 21–37.
- Huang, J.; Rathod, V.; Sun, C.; Zhu, M.; Korattikara, A.; Fathi, A.; Fischer, I.; Wojna, Z.; Song, Y.; Guadarrama, S.; et al. Speed/accuracy trade-offs for modern convolutional object detectors. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; Volume 4.
- Szeliski, R. Computer Vision: Algorithms and Applications; Springer Science & Business Media: Berlin/Heidelberg, Germany, 2010.
- Koirala, A.; Walsh, K.B.; Wang, Z.; McCarthy, C. Deep learning for real-time fruit detection and orchard fruit load estimation: Benchmarking of ‘MangoYOLO’. Precis. Agric. 2019.
- Arad, B.; Efrat, T.; Kurtser, P.; Ringdahl, O.; Hohnloser, P.; Hellstrom, T.; Edan, Y.; Ben-Shachar, O. SWEEPER Project Deliverable 5.2: Basic Software for Fruit Detection, Localization and Maturity; Wageningen UR Greenhouse Horticulture: Wageningen, The Netherlands, 2016.
- Barth, R.; Hemming, J.; van Henten, E.J. Design of an eye-in-hand sensing and servo control framework for harvesting robotics in dense vegetation. Biosyst. Eng. 2016, 146, 71–84.
- Ringdahl, O.; Kurtser, P.; Edan, Y. Evaluation of approach strategies for harvesting robots: Case study of sweet pepper harvesting. J. Intell. Robot. Syst. 2018, 1–16.
- Vitzrabin, E.; Edan, Y. Changing task objectives for improved sweet pepper detection for robotic harvesting. IEEE Robot. Autom. Lett. 2016, 1, 578–584.
Paper | Crop | Dataset | Algo. | FPR | TPR | F | A | P | R
---|---|---|---|---|---|---|---|---|---
Ostovar et al., 2018 [7] | Sweet peppers | 170 img | AD | – | – | – | 91.5% | – | –
Chen et al., 2017 [8] | Apples | 1749 (21 img) | DL | 5.1% | 95.7% | – | – | – | –
Chen et al., 2017 [8] | Oranges | 7200 (71 img) | DL | 3.3% | 96.1% | – | – | – | –
McCool et al., 2017 [9] | Weed | Pre-train: img; tune & test: 60 img | D-CNN | – | – | – | 93.9% | – | –
Milioto et al., 2017 [10] | Weed | 5696 (867 img) | CNN | – | – | – | 96.8% | 97.3% | 98.1%
Milioto et al., 2017 [10] | Weed | 26,163 (1102 img) | CNN | – | – | – | 99.7% | 96.1% | 96.3%
Sa et al., 2016 [4] | Sweet pepper | 122 img | DL | – | – | 82.8% | – | – | –
Sa et al., 2016 [4] | Rock melon | 135 img | DL | – | – | 84.8% | – | – | –
Sa et al., 2016 [4] | Apple | 64 img | DL | – | – | 93.8% | – | – | –
Sa et al., 2016 [4] | Avocado | 54 img | DL | – | – | 93.2% | – | – | –
Sa et al., 2016 [4] | Mango | 170 img | DL | – | – | 94.2% | – | – | –
Sa et al., 2016 [4] | Orange | 57 img | DL | – | – | 91.5% | – | – | –
Vitzrabin et al., 2016 [11] | Sweet pepper | 479 (221 img) | AD | 4.6% | 90.0% | – | – | – | –
Zheng et al., 2009 [12] | Vegetation | 20 img | Mean-Shift | – | – | – | 95.4% | – | –
Zheng et al., 2009 [12] | Vegetation | 80 img | Mean-Shift | – | – | – | 95.9% | – | –
Our Results (FNF strict/flexible) | Sweet pepper | 156 img | AD | – | – | – | – | 65%/95% | 94%/95%
Our Results (SSD) | Sweet pepper | 156 img | DL | – | – | – | – | 84% | –
View Point | Distance to Stem (mm) | Tilt (Degrees) | Azimuth (Degrees) |
---|---|---|---|
1 | 190 | 10 | −50 |
2 | 190 | 20 | 20 |
3 | 170 | 0 | 0 |
Image Type | Measure | Strict | Flexible
---|---|---|---
FNF | Recall | 75% | 80%
FNF | Precision | 60% | 82%
Flash-only | Recall | 60% | 64%
Flash-only | Precision | 81% | 98%
Image Type | Measure | Strict | Flexible
---|---|---|---
FNF | Recall | 94% | 95%
FNF | Precision | 65% | 95%
Flash-only | Recall | 65% | 69%
Flash-only | Precision | 82% | 99%
Split | Train | Test
---|---|---
Split 1 | 128 | 40
Split 2 | 138 | 30
Split 3 | 129 | 39
Split 4 | 119 | 49
CPU | GPU | Approximate System Cost | Deep Learning Performance | Color-Based Performance
---|---|---|---|---
2 × Intel® Xeon® E5-2637v4 3.5 GHz | Nvidia Titan X | $9200 | 30 fps | 44 fps
2 × Intel® Xeon® E5-2637v4 3.5 GHz | none | $7800 | 0.28 fps | 44 fps
8-core ARM v8.2 64-bit CPU | 512-core Volta GPU | $1400 | 33 fps | 56 fps
Intel® Core™ i7-4700MQ 2.4 GHz | none | $800 | 0.19 fps | 30 fps
Cortex-A53 64-bit SoC 1.4 GHz (Raspberry Pi 3 B+) | none | $35 | 0.22 fps | 35 fps
© 2019 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).