Article

Fast Location of Table Grapes Picking Point Based on Infrared Tube

1 Shandong Agricultural Equipment Intelligent Engineering Laboratory, Shandong Provincial Key Laboratory of Horticultural Machinery and Equipment, College of Mechanical and Electronic Engineering, Shandong Agricultural University, Tai’an 271018, China
2 State Key Laboratory of Crop Biology, College of Life Sciences, Shandong Agricultural University, Tai’an 271018, China
* Authors to whom correspondence should be addressed.
Inventions 2022, 7(1), 27; https://doi.org/10.3390/inventions7010027
Submission received: 27 January 2022 / Revised: 14 February 2022 / Accepted: 15 February 2022 / Published: 24 February 2022
(This article belongs to the Special Issue Low-Cost Inventions and Patents: Series II)

Abstract

This study addresses the low level of mechanization in grape picking and the difficulty of locating grape picking points in three-dimensional space. A method for rapidly locating the picking points of table grapes based on an infrared tube is proposed. First, the Otsu algorithm and the maximum connected domain were used to obtain the image of the target grape, realizing fast recognition and segmentation of the target grape in two-dimensional space. Second, a location device for grape picking points based on an infrared tube was designed, which resolves the difficulty of locating picking points in three-dimensional space and enables accurate positioning of picking points for table grapes. Finally, the experimental results show that the proposed method can quickly and accurately locate the picking points of table grapes in three-dimensional space: the average running time of the algorithm is 0.61 s and the location success rate is 90.0%. The method provides a feasible scheme for the mechanized picking of table grapes.

1. Introduction

China is the world’s largest producer and consumer of fruits and vegetables, yet fruit and vegetable picking there remains mainly manual. With the development of intelligent technology and machine vision, replacing manual picking with machine picking has become a trend [1,2,3,4], and accurately locating picking points plays an important role in mechanical picking. China is also among the countries with the largest grape outputs in the world. Rapidly and accurately locating grape picking points is therefore the key to automatic mechanical picking of table grapes, and is of great significance for ensuring picking quality, improving competitiveness and increasing farmers’ incomes [5,6,7,8].
Scarfe et al. used a fixed-threshold method to remove the background and template matching to identify kiwifruit [9]; however, a fixed threshold cannot adapt to changing light. Peng Hongxing et al. successfully isolated citrus fruits using the I2 component of the I1I2I3 color space [10]. These studies, however, only recognized and segmented stone fruits (e.g., peach, apricot) and pome fruits (e.g., apple, pear) in two-dimensional space. The shape of berry fruits (e.g., grapes, cherry tomatoes) varies from cluster to cluster, so picking-location methods developed for stone and pome fruits are not suitable for berries.
In recent years, Liu Ping et al. used a K-means clustering algorithm and the Chan–Vese model to study overlapping grapes in two-dimensional space [11]. Luo Lufeng et al. completed the identification and segmentation of overlapping grapes based on contour analysis [12]. These studies, however, only identified and segmented grapes and did not locate grape picking points. Xiong Juntao et al. recognized grape stems by Otsu thresholding and located picking points through Hough line fitting [13], but the localization accuracy was low. Lei Wangxiong et al. detected and located grape picking points based on watershed and angle-constraint methods [14]; the success rate was 89.2% and the average positioning time was 0.65 s. Deep learning has also been applied increasingly widely to fruit and vegetable recognition and segmentation. Ning Zhengtong et al. detected grape picking points with Mask R-CNN [15], and Liang et al. detected litchi fruit stems at night based on an improved YOLOv3 [16]. However, deep-learning approaches require the collection and one-by-one annotation of a large number of images, which is labor-intensive, and sample training places high demands on the equipment. In conclusion, both classical and deep-learning methods operate on a single side view of berry fruits in two-dimensional space. Because the appearance of berry fruits varies, a single side image in two-dimensional space cannot adequately represent the whole fruit in three-dimensional space, which reduces the success rate of picking-point location.
In view of these problems, many scholars have conducted relevant studies. Luo Lufeng et al. proposed a grape-locating method based on binocular stereo vision [17]. To accurately locate tomatoes, Zhejiang University built a stereo-detection system from two CCD color cameras [18]. Recently, Cao Jingjun built an image acquisition system based on a depth camera and completed the precise positioning of Agaricus bisporus in three-dimensional space [19]. However, a depth camera is sensitive to external interference, which greatly affects positioning accuracy, and its price is higher than that of an ordinary camera, which increases cost. It is therefore particularly important to study an efficient and fast location method for grape picking points in three-dimensional space.
All of the above papers, however, studied only the picking-point positioning system or positioning method and paid little attention to the end-effector. By the end of the 20th century, Japan had designed and developed a tomato-picking robot [20]. It used a color camera to identify tomatoes, held the fruit by suction, and cut the stem with a manipulator; it harvested about one fruit every 15 s with a success rate of about 70%. In 2019, Duan Hongyan et al. designed an end-effector adapted to picking tomato trusses [21]. The next year, Lu Jie et al. developed a tomato-picking end-effector able to clamp and cut [22]. Recently, Wei Bo et al. designed an underactuated end-effector using the combined control of a three-finger grip and deflection, with a success rate of 98.3% [23]. Zhang Xiaowei et al. then analyzed how people pick safflower and proposed a three-finger pull-type safflower end-effector with a picking recovery rate of 92.71% [24]. All of these end-effectors, however, were designed from physical characteristics or software simulation, and most existing end-effectors act only after the controller completes the picking-point positioning. This step-by-step workflow leads to low harvesting efficiency. It is therefore particularly important to design an end-effector that combines picking-point positioning with harvesting.
This paper therefore proposes a fast and accurate location method for table-grape picking points based on infrared tubes. The method provides a feasible scheme for the fast and nondestructive picking of table grapes.

2. Materials and Methods

2.1. Image Collection

Planar images of table grapes were collected at Mountain Jinniu of the Shandong Fruit Research Institute between 09:00 and 12:00 on 15 July 2019. A total of 203 images of the Summer Black variety were taken, with the rear camera of a smartphone held parallel to the ground. Since each captured image was about 12 megapixels (3024 × 4032), bilinear interpolation was used to scale the images down to 1.08 megapixels (900 × 1200). Some of the grape images are shown in Figure 1.
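For reference, the scaling step can be reproduced in a few lines of Python. This is a minimal sketch assuming OpenCV; the file names are hypothetical, not from the paper.

```python
import cv2

# Hypothetical file name for one of the captured images (3024 x 4032 pixels).
img = cv2.imread("grape_001.jpg")

# Scale to 900 x 1200 with bilinear interpolation, as in Section 2.1;
# cv2.resize expects the target size as (width, height).
small = cv2.resize(img, (900, 1200), interpolation=cv2.INTER_LINEAR)
cv2.imwrite("grape_001_small.jpg", small)
```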

2.2. Recognition and Segmentation of Table Grapes in Two-Dimensional Space

In the natural environment, illumination is the most important factor affecting the recognition and segmentation of table grapes. To reduce its influence, this paper proposes an Otsu method based on the S component of the HSI color space (Figure 2) to identify and segment multiple clusters of grapes [25]. Using the maximum connected domain as the criterion for the optimal target grape, the method can rapidly and accurately identify and segment table grapes.

2.3. Image Preprocessing

By using the nonlinear transformation of Equation (1), the RGB color space is converted into the HSI color space:

$$\begin{cases} H = \begin{cases} \theta, & B \le G \\ 360^{\circ} - \theta, & B > G \end{cases} \\ S = 1 - \dfrac{3}{R+G+B}\,\min(R,G,B) \\ I = \dfrac{R+G+B}{3} \end{cases} \quad (1)$$

where θ in Equation (1) is

$$\theta = \cos^{-1}\left\{ \frac{\frac{1}{2}\left[(R-G)+(R-B)\right]}{\left[(R-G)^{2}+(R-B)(G-B)\right]^{1/2}} \right\} \quad (2)$$
If the RGB color space is normalized, the saturation (S) and intensity (I) components of the HSI color space are normalized accordingly; the hue component is normalized by dividing the initial H by 360°.
By comparing the H, S and I component histograms of all table-grape images, an obvious boundary between the grapes and the background appears in the S component, as shown in Figure 3c; the S histogram shows well-separated peaks and troughs. Therefore, the S component was used as the basis for segmenting table grapes.
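A minimal NumPy sketch of the conversion in Equations (1) and (2) is given below, for illustration only (the paper's processing was done in MATLAB). The small epsilon guard against division by zero is an added assumption.

```python
import numpy as np

def rgb_to_hsi(img_rgb):
    """Convert a normalized RGB image (floats in [0, 1]) to HSI,
    following Equations (1) and (2)."""
    R, G, B = img_rgb[..., 0], img_rgb[..., 1], img_rgb[..., 2]
    eps = 1e-8  # assumption: guards against division by zero

    num = 0.5 * ((R - G) + (R - B))
    den = np.sqrt((R - G) ** 2 + (R - B) * (G - B)) + eps
    theta = np.degrees(np.arccos(np.clip(num / den, -1.0, 1.0)))

    H = np.where(B <= G, theta, 360.0 - theta) / 360.0      # normalized hue
    S = 1.0 - 3.0 * np.minimum(np.minimum(R, G), B) / (R + G + B + eps)
    I = (R + G + B) / 3.0
    return H, S, I
```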

2.4. Recognition and Segmentation of Table Grapes Based on Otsu

Because the grapes have a distinct boundary with the background in the S component, rapid and effective separation of the grapes from the background can be achieved with Otsu's method. First, a candidate threshold $T(k) = k$, $0 < k < 1$, divides the pixels of the grape image into two classes, $C_1 \in [0, k]$ and $C_2 \in (k, 1]$. The optimal threshold is the $k$ that maximizes the between-class variance $\sigma_B^2(k)$ of the two classes, given by Equation (3). The table grapes can then be rapidly segmented; the results of recognition and segmentation are shown in Figure 4a.
$$\sigma_B^2(k) = P_1(k)\left[m_1(k) - m_G\right]^2 + P_2(k)\left[m_2(k) - m_G\right]^2 \quad (3)$$

$$P_1(k) = \sum_{i=0}^{k} p_i, \qquad P_2(k) = 1 - P_1(k) \quad (4)$$
where $m_1(k)$ and $m_2(k)$ are the mean pixel values of $C_1$ and $C_2$, respectively, $m_G$ is the global mean over all $n$ pixels, and $P_1(k)$ and $P_2(k)$ are the probabilities of $C_1$ and $C_2$.
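For illustration, an Otsu search corresponding to Equations (3) and (4) might look like the following sketch. It uses the standard compact form of the between-class variance, which is algebraically equivalent to Equation (3); the 256-bin histogram is an assumption.

```python
import numpy as np

def otsu_threshold(s_channel, bins=256):
    """Return the threshold k that maximizes the between-class variance
    sigma_B^2(k) of Equation (3), computed on the S component in [0, 1]."""
    hist, edges = np.histogram(s_channel.ravel(), bins=bins, range=(0.0, 1.0))
    p = hist / hist.sum()                  # p_i: gray-level probabilities
    levels = (edges[:-1] + edges[1:]) / 2.0

    P1 = np.cumsum(p)                      # P1(k), Equation (4)
    m = np.cumsum(p * levels)              # cumulative mean up to level k
    mG = m[-1]                             # global mean

    # Compact, equivalent form of Equation (3):
    # sigma_B^2(k) = [mG * P1(k) - m(k)]^2 / (P1(k) * (1 - P1(k)))
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b2 = (mG * P1 - m) ** 2 / (P1 * (1.0 - P1))
    return levels[np.nanargmax(sigma_b2)]
```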
As Figure 4b shows, the image contains many small noise regions, so mathematical morphology was adopted to remove the noise quickly and effectively. At the same time, to avoid the influence of interfering grapes (grapes other than the optimal target grape), the area-threshold method was used to obtain the maximum connected domain within the visual range, so that the target grape can be rapidly and accurately segmented, as shown in Figure 4c. The Sobel operator was then used to extract the boundary contour of the target grape. The recognition result is represented as a bounding rectangle, and the center point of the rectangle was labeled to obtain the planar coordinates of the target grape. The recognition and labeling results are shown in Figure 4d.
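An OpenCV sketch of this post-processing chain (morphological denoising, maximum connected domain, bounding rectangle, center point) is shown below. It builds on `S` and `k` from the previous sketches, and the 5 × 5 kernel size is an assumption, as the paper does not state one.

```python
import cv2
import numpy as np

# Binarize the S component with the Otsu threshold from the previous sketch.
binary = ((S > k) * 255).astype(np.uint8)

# Morphological opening removes small noise regions (5x5 kernel assumed).
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
clean = cv2.morphologyEx(binary, cv2.MORPH_OPEN, kernel)

# Keep only the largest connected domain as the optimal target grape.
n, labels, stats, _ = cv2.connectedComponentsWithStats(clean, connectivity=8)
largest = 1 + np.argmax(stats[1:, cv2.CC_STAT_AREA])   # label 0 is background
target = np.where(labels == largest, 255, 0).astype(np.uint8)

# Bounding rectangle of the target grape and its center point,
# giving the planar coordinates used later by the picking device.
x, y, w, h = cv2.boundingRect(target)
cx, cy = x + w // 2, y + h // 2
```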
Overall, 100 of the 203 table-grape images were selected as samples, and the segmentation success rate was 87%. Thick branches similar in color to the grapes caused nine failures, illumination caused three failures, and overlapping grapes caused one more. Part of the experimental results is shown in Figure 5.
The experimental results show that the identification method proposed in this paper can identify grapes very quickly. Other methods were also compared, as shown in Table 1. Although the chromatic aberration method (1.1 × G − B) can also segment table grapes, it is not suitable for all images because each image has its own color characteristics, which leads to a low recognition success rate. The K-means clustering algorithm achieved an overall recognition success rate of 90%, but its iterative computation makes the recognition time comparatively long. The method proposed in this paper therefore offers the best overall performance. In the future, as more data become available, deep learning may replace this method, and we will continue to study this problem.
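For comparison, the chromatic aberration baseline in Table 1 can be sketched as follows. This is an illustrative reconstruction under stated assumptions, not the code used in the paper; the file name is hypothetical.

```python
import cv2
import numpy as np

# Hypothetical input file; note OpenCV loads images in BGR channel order.
img = cv2.imread("grape_001_small.jpg").astype(np.float32)
B, G, R = cv2.split(img)

# Chromatic aberration feature 1.1*G - B, clipped back to 8 bits,
# then binarized with Otsu's threshold.
feat = np.clip(1.1 * G - B, 0, 255).astype(np.uint8)
_, mask = cv2.threshold(feat, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
```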

3. Results

3.1. Design of Detecting Device for Table Grape Picking Point

The bounding rectangle of the target grape was obtained with the recognition and segmentation method of Section 2. An infrared detection device for the picking points of table grapes was then designed, based on infrared tubes, to locate the picking points.

3.2. Selection of Infrared Tube

An infrared detection device was designed to locate grape picking points, meeting both the requirements of the proposed method and the growth characteristics of the grapes. A grape cluster is conical, 160–230 mm long and 135–160 mm wide. Because the camera was held parallel to the ground when the images were taken, complete information on cluster width is available. Moreover, owing to the limited opening and closing space of the manipulator, the selected sensor should be as small as possible while still completing the detection task. The QT30CM infrared tube was therefore selected. Its external size is 21 × 11 × 6 mm; its detection distance is 20–300 mm, with a transmission angle of <5° and a receiving angle of <10°; its electro-optical conversion efficiency is high and its power consumption low [26]. This sensor therefore meets the requirements of grape-picking location.

3.3. Structural Design of Picking Point Detection Device

According to the characteristics of grape clusters, six groups of infrared tubes were arranged in pairs so that the tubes cover the whole width of a cluster. All the infrared tubes were fixed on custom acrylic plates, and the detection device was mounted under the manipulator so that the detection device and the picking device move synchronously (Figure 6).
At the same time, to avoid mutual interference between adjacent infrared tubes, a geometric analysis of the receiving tubes was conducted. The distance between points E and F was calculated from Equation (5), which not only ensures the positioning success rate but also improves positioning accuracy. The analysis diagram is shown in Figure 7.
$$\begin{cases} L_{DF} = L_{AC}\tan\phi \\ L_{EF} = L_{DF} - L_{DE} \end{cases} \quad (5)$$
where $L_{DF}$ is the distance between points D and F, and $L$ is the distance between the infrared transmitting tube and the infrared receiving tube. Because the cluster width of the Summer Black grape is 135–160 mm, $L_{AC} = L/2 = 80$ mm was used. From the QT30CM technical parameters, $L_{DE} = 5.5$ mm. To ensure the best working state of the receiving tube, a receiving angle of $\phi = 7°$ was adopted, which gives $L_{EF} = 4.3$ mm. The parameter settings are listed in Table 2. The spacing between two adjacent receiving tubes is therefore 8.6 mm, and since each transmitting tube corresponds one-to-one to a receiving tube, the spacing between the transmitting tubes is also 8.6 mm.
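These values can be checked with a short computation, a sketch of Equation (5) using the Table 2 parameters:

```python
import math

L = 160.0                      # mm, emitter-receiver spacing (Table 2)
L_AC = L / 2                   # 80 mm
L_DE = 5.5                     # mm, from the QT30CM technical parameters
phi = math.radians(7.0)        # receiving angle used in the design

L_DF = L_AC * math.tan(phi)    # ~9.8 mm, Equation (5)
L_EF = L_DF - L_DE             # ~4.3 mm
print(round(L_DF, 1), round(L_EF, 1), round(2 * L_EF, 1))  # 9.8 4.3 8.6
```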

3.4. Location Strategy of Table Grape Picking Point

The location strategy of table-grape picking points proposed in this paper is shown in Figure 8, and the method flow chart of picking-point positioning is shown in Figure 9.
After recognition and segmentation, the controller takes the length of the short edge of the bounding rectangle as the opening stroke of the detection device. According to the planar coordinates, the combined detection and picking device then moves to a position vertically below the target grape and begins to rise while the infrared tubes detect. Near the bottom tip of the grape, only one pair of infrared tubes is interrupted. As the device rises toward the widest part of the cluster, the number of interrupted tube pairs reaches its maximum; as the device continues to rise, the number decreases again. Once the number of interrupted pairs has dropped to its minimum and remains unchanged over 30 mm of travel, the device has reached the grape stem, i.e., the picking point. The controller then issues the picking instruction to harvest the target grape.
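A hypothetical Python sketch of this trigger-count strategy is given below, for illustration only: the function itself, the 1 mm sampling step, and the assumption that exactly one tube pair stays interrupted at the stem are illustrative choices, not the authors' controller logic.

```python
def find_picking_point(trigger_counts, step_mm=1.0, stable_mm=30.0):
    """Return the travel height (mm) at which the stem is reached: the
    number of interrupted tube pairs has passed its maximum, fallen back
    to a low value, and stayed constant for stable_mm of travel."""
    max_seen = 0
    stable_run = 0.0
    for i, n in enumerate(trigger_counts):
        max_seen = max(max_seen, n)
        # After the widest part of the cluster, watch for a low, stable
        # count (assumed: a single interrupted pair at the stem).
        if 0 < n <= 1 < max_seen and i > 0 and n == trigger_counts[i - 1]:
            stable_run += step_mm
            if stable_run >= stable_mm:
                return i * step_mm
        else:
            stable_run = 0.0
    return None

# Example: counts rise to 6 at the widest part, then fall to 1 at the stem.
counts = [1, 2, 4, 6, 6, 5, 3, 1] + [1] * 40
print(find_picking_point(counts))   # a height in mm, or None if not found
```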

4. Discussion

To verify the feasibility of the proposed method, a simulation test platform (Figure 10) was built. Images of Summer Black grapes were collected with a RealSense R200 camera and processed in MATLAB 2020b. The processing platform was a notebook computer with an Intel(R) Core(TM) i7 processor (2.60 GHz), 16 GB of RAM, and a 500 GB hard disk.
To verify the effectiveness of the proposed method, 100 location experiments were carried out on multiple clusters of table grapes of different shapes using the simulation test platform. The average running time of the algorithm was 0.61 s and the localization success rate was 90.0%; location failed 10 times. Occlusion by leaves made the grape boundary contour inaccurate in six cases, leading to calibration errors in the bounding rectangle. Lighting at the test site caused three failures of identification and segmentation. In the remaining case, the berries of the cluster were sparse and a berry-free area was mistaken for the fruit stem, so the picking point was located incorrectly.
Data from 10 randomly selected experiments are shown in Table 3. Illumination was the main reason recognition failed for sample 9, which prevented labeling of the bounding rectangle and ultimately caused the picking-point location to fail. Compared with reference [14], the method proposed in this paper not only reduces the positioning time of the table-grape picking point but also achieves a positioning success rate of 90.0%, marking an increase of 7.1%.

5. Conclusions

The method proposed in this paper achieves fast and accurate location of grape picking points in three-dimensional space. By analyzing the component images of the HSI color space, the S component was adopted as the segmentation basis, Otsu was selected as the segmentation method, and the maximum connected domain was used as the criterion for the optimal target grape, achieving fast and effective recognition and segmentation of table grapes in the natural environment. Compared with other methods, although the proposed method is strongly affected by the environment (e.g., lighting, fruit overlap), it is simpler and faster for target detection and recognition; more efficient and accurate methods may replace it in the future.
Furthermore, a picking-point detection device based on six groups of infrared tubes was designed, and the picking points of table grapes in three-dimensional space were successfully located from planar images. The results show an average running time of 0.61 s and a positioning success rate of 90.0%. Even if the end-effector does not fully prevent mechanical damage to the fruit, the bottom-up approach is still a good choice. In addition, the device synchronizes detection and picking, which greatly improves picking efficiency, and it is suitable for picking a variety of fruits. The proposed method therefore has broad application prospects and provides a reliable theoretical basis for picking table grapes quickly and without damage.

Author Contributions

Conceptualization, P.L. and X.L.; Validation, L.L. and Y.Z.; Writing—original draft, Y.Z. and T.Z.; Writing—review and editing, P.L. and X.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by “NSFC (Nos. 31871543, 31700644)”, “Natural Science Foundation of Shandong (No. ZR2020KF002)”, and the “project of Shandong provincial key laboratory of horticultural machinery and equipment (No. YYJX201905)”.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Wang, Y.-H.; Zhao, P.-K.; Bian, D.-L. Study on the application of fruit-picking system based on computer vision. J. Agric. Mech. Res. 2018, 40, 200–203. [Google Scholar]
  2. Zhou, H.; Du, Z.-L.; Wu, Z.-Y.; Song, C.; Guo, N.; Lin, Y.-N. Application progress of machine vision technology in the field of modern agricultural equipment. J. Chin. Agric. Mech. 2017, 38, 86–92. [Google Scholar]
  3. Federica, C.; Susanna, S.; Dennis, J.-M.; Eugenio, C. Comprehension rates of safety pictorials affixed to agricultural machinery among Pennsylvania rural population. Saf. Sci. 2018, 103, 162–171. [Google Scholar]
  4. Luo, X.-W.; Liao, J.; Zou, X.-J.; Zhang, Z.-G.; Zhou, Z.-Y.; Zang, Y.; Hu, L. Enhancing agricultural mechanization level through information technology. Trans. Chin. Soc. Agric. Eng. 2016, 32, 1–14. [Google Scholar]
  5. David, T.; Kenneth, G.C.; Pamela, A.M.; Rosamond, N.; Stephen, P. Agricultural sustainability and intensive production practices. Nature 2002, 418, 671–677. [Google Scholar]
  6. Gongal, A.; Amatya, S.; Karkee, M.; Zhang, Q.; Lewis, K. Sensors and systems for fruit detection and localization: A review. Comput. Electron. Agric. 2015, 116, 8–19. [Google Scholar] [CrossRef]
  7. Wang, N.; Zhang, N.-Q.; Wang, M.-H. Wireless sensors in agriculture and food industry-Recent development and future perspective. Comput. Electron. Agric. 2005, 50, 1–14. [Google Scholar] [CrossRef]
  8. Zhu, F.-W.; Yu, F.-H.; Zou, L.-N.; Yue, S.-D. Research status quo and future perspective of agricultural robots. Agric. Eng. 2013, 3, 10–13. [Google Scholar]
  9. Scarfe, A.-J. Development of an Autonomous Kiwifruit Harvester. Ph.D. Thesis, Massey University, Manawatu, New Zealand, 2012. [Google Scholar]
  10. Peng, H.-X.; Zou, X.-J.; Guo, A.-X.; Xiong, J.-T.; Chen, Y. Color model analysis and recognition for parts of citrus based on Exploratory Data Analysis. Trans. Chin. Soc. Agric. Mach. 2013, 44, 253–259. [Google Scholar]
  11. Liu, P.; Zhu, Y.-J.; Zhang, T.-X.; Hou, J.-L. Algorithm for recognition and image segmentation of overlapping grape cluster in natural environment. Trans. Chin. Soc. Agric. Eng. 2020, 36, 161–169. [Google Scholar]
  12. Luo, L.-F.; Zou, X.-J.; Wang, C.-L.; Chen, X.; Yang, Z.-S.; Situ, M.-W. Recognition method for two overlapping and adjacent grape clusters based on image contour analysis. Trans. Chin. Soc. Agric. Mach. 2017, 48, 15–22. [Google Scholar]
  13. Xiong, J.-T.; He, Z.-L.; Tang, L.-Y.; Lin, R.; Liu, Z. Visual localization of disturbed grape picking point in non-structural environment. Trans. Chin. Soc. Agric. Mach. 2017, 48, 29–33. [Google Scholar]
  14. Lei, W.-X.; Lu, J. Visual positioning method for picking point of grape picking robot. J. Jiangsu Agric. Sci. 2020, 36, 1015–1021. [Google Scholar]
  15. Ning, Z.-T.; Luo, L.-F.; Liao, J.-X.; Wen, H.-J.; Wei, H.-L.; Lu, Q.-H. Recognition and the optimal picking point location of grape stems based on deep learning. Trans. Chin. Soc. Agric. Eng. 2021, 37, 222–229. [Google Scholar]
  16. Liang, C.-X.; Xiong, J.-T.; Zheng, Z.-H.; Zhuo, Z.; Li, Z.-H.; Chen, S.-M.; Yang, Z.-G. A visual detection method for nighttime litchi fruits and fruiting stems. Comput. Electron. Agric. 2020, 169, 105192. [Google Scholar] [CrossRef]
  17. Luo, L.-F.; Zou, X.-J.; Ye, M.; Yang, Z.-S.; Zhang, C.; Zhu, N.; Wang, C.-L. Calculation and location of bounding volume of grape for undamaged fruit picking based on binocular stereo vision. Trans. Chin. Soc. Agric. Eng. 2016, 32, 41–47. [Google Scholar]
  18. Xiang, R. Recognition and Localization for Tomatoes under Open Environments Based on Binocular Stereo Vision. Ph.D. Thesis, Zhejiang University, Hangzhou, China, 2013. [Google Scholar]
  19. Cao, J.-J. Research on Computer Vision System of Agaricus Bisporus Harvesting Robot Based on Deep Learning. Ph.D. Thesis, Zhejiang University, Hangzhou, China, 2021. [Google Scholar]
  20. Monta, M.; Kondo, N.; Ting, K.-C. End-effectors for tomato harvesting robot. Artif. Intell. Rev. 1998, 12, 11–25. [Google Scholar] [CrossRef]
  21. Duan, H.-Y.; Li, S.-J.; Yang, H. Design and Clamping Experiment of End Effector for Picking String Tomato. J. Agric. Mech. Res. 2021, 43, 5. [Google Scholar]
  22. Lu, J.; Liang, X.-F. Design and Motion Simulation of Tomato Fruit Picking End Actuator. J. Agric. Mech. Res. 2020, 42, 88–93. [Google Scholar]
  23. Wei, B.; He, J.-Y.; Shi, Y.; Jiang, G.-L.; Zang, X.-Y.; Ma, Y. Design and Experiment of Underactuated End-effector for Citrus Picking. Trans. Chin. Soc. Agric. Mach. 2021, 52, 120–128. [Google Scholar]
  24. Zhang, X.-W.; Ge, Y.; Chen, F.; Yu, P.-F. Design of Three-Finger Pull-Out Safflower Picking End Effector. Mach. Des. Manuf. 2022, 371, 145–149. [Google Scholar]
  25. Gao, W.; Wang, Z.-H.; Zhao, X.-B.; Sun, F.-M. Robust and efficient cotton contamination detection method based on HSI space. Acta Autom. Sin. 2008, 34, 729–735. [Google Scholar] [CrossRef]
  26. Zhou, K.; Xie, S.-Y.; Liu, J. Design and implementation of seeder based on infrared emitting diode. J. Agric. Mech. Res. 2014, 36, 151–154. [Google Scholar]
Figure 1. Images of grapes. (a) Grape image 1. (b) Grape image 2.
Figure 2. HSI space model (H is the hue. S is the saturation).
Figure 3. Original image and HSI space components. (a) Grayscale image. (b) Hue H. (c) Saturation S. (d) Intensity I. (e) Histogram of gray image. (f) Histogram of hue. (g) Histogram of saturation. (h) Histogram of intensity.
Figure 4. Recognition results. (a) Threshold segmentation. (b) Binary image. (c) Maximum connected domain. (d) Recognition results. (The red box is the outer rectangle of the target grape, and the red point is the center point of the outer rectangle).
Figure 5. Image of partial test result. (a) Test 1. (b) Test 2. (c) Test 3. (d) Test 4.
Figure 6. Detection and picking device structure drawing. (a) Structure image. (b) Physical image.
Figure 7. Diagram of the distance between infrared tubes ( L is the distance between the infrared transmitting tube and infrared receiving tube). (a) Schematic diagram of detection device spacing. (b) Schematic diagram of receiving pipe spacing (A is the midpoint of L, D is the midpoint of the infrared tube, E is the inner foot point of the infrared tube, and F is the vertical point of A on the acrylic plate).
Figure 8. Diagram of picking location strategy.
Figure 9. Flow chart of picking point location algorithm.
Figure 10. The test platform of the visual system. (a) Front view. (b) Identification results 1. (c) Front view. (d) Identification results 2.
Table 1. Comparison of multiple methods.

Algorithm                                     Mean Running Time (s)   Recognition Success Rate
The method proposed in this paper             0.59                    87%
Chromatic aberration method (1.1 × G − B)     0.60                    76%
K-means                                       3.20                    90%
Table 2. Parameter settings.

Name                         Length
L                            160 mm
L_AC = L/2                   80 mm
L_DE                         5.5 mm
L_DF = L_AC · tan φ          9.8 mm
L_EF = L_DF − L_DE           4.3 mm
Table 3. Partial test data.

No.   Running Time (s)   Locating Result
1     0.61               Succeeded
2     0.53               Succeeded
3     0.63               Succeeded
4     0.65               Succeeded
5     0.51               Succeeded
6     0.62               Succeeded
7     0.56               Succeeded
8     0.59               Succeeded
9     0.35               Failed
10    0.61               Succeeded