A Deep-Learning Extraction Method for Orchard Visual Navigation Lines
Abstract
1. Introduction
2. Materials and Methods
2.1. Detection of Fruit Tree and Trunk
2.1.1. Network Structure of YOLO V3
2.1.2. Image Datasets
2.1.3. Model Training
2.2. Path Extraction of Orchard Machinery Navigation
2.2.1. Reference Point Generation
Algorithm 1 Obtain available coordinate points
Input: acquired raw image img and the detection result file txt
[r, c] = size(img); imghalfwidth = c/3/2
A = importdata(txt); [m, n] = size(A.data)
1: for i = 1:m
2:   if the text data of row i includes "trunk"
3:     if the second data value < imghalfwidth (trunk in the left tree row)
4:       y = the fifth data value in A
5:       x = 0.5 × (fourth data value - second data value) + second data value
6:     else (trunk in the right tree row)
7:       y = the fifth data value in A
8:       x = 0.5 × (fourth data value - second data value) + second data value
9:     end
10:  end
11: end
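For concreteness, a minimal Python sketch of Algorithm 1 is given below. The detection-file layout (one line per box: label, x_min, y_min, x_max, y_max) is an assumption inferred from the pseudocode's use of the second, fourth, and fifth values, and the half-width is taken as c/2 (the extracted pseudocode writes c/3/2, which may be a typo); neither detail is confirmed by the paper.

```python
# Sketch of Algorithm 1: generate reference points from trunk bounding boxes.
# Assumed file layout per line: "<label> x_min y_min x_max y_max".

def reference_points(image_width, detection_file):
    """Return (left_points, right_points) built from trunk boxes."""
    half_width = image_width / 2          # assumption; pseudocode writes c/3/2
    left, right = [], []
    with open(detection_file) as f:
        for line in f:
            fields = line.split()
            if "trunk" not in fields[0]:  # only trunk boxes yield points
                continue
            x_min, x_max = float(fields[1]), float(fields[3])
            y_max = float(fields[4])      # 5th value: bottom edge of the box
            x = 0.5 * (x_max - x_min) + x_min  # horizontal centre of the box
            # The pseudocode branches on the 2nd value (x_min) vs. half-width.
            if x_min < half_width:
                left.append((x, y_max))   # trunk in the left tree row
            else:
                right.append((x, y_max))  # trunk in the right tree row
    return left, right
```

The reference point is thus the bottom-centre of each trunk bounding box, which matches the coordinate arithmetic in the pseudocode.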
2.2.2. Line Fitting of the Tree Rows
Algorithm 2 Obtain the reference lines
Input: the reference-point coordinates of the left- and right-side fruit trees, sorted separately
1: if the number of points on the left is equal to or greater than 3
2:   fit a straight line using the least-square method
3: else if there are fewer than 3 points on the left
4:   connect the two points: k = (ycord(1) - ycord(2))/(xcord(1) - xcord(2)), b = ycord(1) - k × xcord(1)
5: end
6: the right reference line is obtained in the same way:
7: if the number of points on the right is equal to or greater than 3
8:   fit a straight line using the least-square method
9: else if there are fewer than 3 points on the right
10:  connect the two points
11: end
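The line-fitting step can be sketched as follows; the helper name fit_row_line and the use of numpy.polyfit for the least-squares fit are illustrative choices, not the authors' implementation.

```python
import numpy as np

def fit_row_line(points):
    """Fit one tree-row reference line y = k*x + b (Algorithm 2).

    points: list of (x, y) reference points for one side, already sorted.
    Returns (k, b), or None if fewer than two points are available.
    """
    if len(points) < 2:
        return None                            # a line needs two points
    xs = np.array([p[0] for p in points], dtype=float)
    ys = np.array([p[1] for p in points], dtype=float)
    if len(points) >= 3:
        k, b = np.polyfit(xs, ys, 1)           # least-squares straight line
    else:
        k = (ys[0] - ys[1]) / (xs[0] - xs[1])  # two points: connect directly
        b = ys[0] - k * xs[0]                  # intercept from point-slope form
    return k, b

# The left and right row lines use the same routine:
# left_line = fit_row_line(left_points); right_line = fit_row_line(right_points)
```

Note that fitting y as a function of x cannot represent a perfectly vertical row line; the paper does not say how that case is handled.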
2.2.3. Obtaining the Centerline
Algorithm 3 Obtain the centerline
Input: the left and right reference lines
1: sort the reference-point coordinates of the left-row trunk labels
2: find the nearest left point Pl2(xl2, yl2) and its corresponding right point Pr2(xr2, yr2)
3: find the furthest left point Pl1(xl1, yl1) and its corresponding right point Pr1(xr1, yr1)
4: calculate the coordinates of midpoint Pm1(xm1, ym1) from points Pl1 and Pr1
5: calculate the coordinates of midpoint Pm2(xm2, ym2) from points Pl2 and Pr2
6: connect points Pm1 and Pm2 to obtain the centerline
7: end
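A sketch of the centerline step is given below, working directly on the reference points as the pseudocode does. The extracted pseudocode does not specify how the "corresponding" right-side point is chosen, so this sketch pairs each left point with the right point at the closest image y value; that pairing rule is an assumption.

```python
def centerline(left_points, right_points):
    """Compute the navigation centerline endpoints (Algorithm 3).

    Returns the far and near midpoints (Pm1, Pm2); the centerline is the
    segment connecting them.
    """
    # Sort by image y: larger y is lower in the image, i.e. nearer the camera.
    left = sorted(left_points, key=lambda p: p[1])
    pl1, pl2 = left[0], left[-1]       # furthest and nearest left points
    # Assumed pairing rule: the right point with the closest y (similar depth).
    pr1 = min(right_points, key=lambda p: abs(p[1] - pl1[1]))
    pr2 = min(right_points, key=lambda p: abs(p[1] - pl2[1]))
    pm1 = ((pl1[0] + pr1[0]) / 2, (pl1[1] + pr1[1]) / 2)  # far midpoint
    pm2 = ((pl2[0] + pr2[0]) / 2, (pl2[1] + pr2[1]) / 2)  # near midpoint
    return pm1, pm2
```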
3. Results and Discussion
3.1. Tree and Trunk Detection Results
3.2. Results of Reference Point Generation
3.3. Results of Tree-Row Line Fitting
3.4. Results of Centerline Extraction
3.5. Discussion
4. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
1. Radcliffe, J.; Cox, J.; Bulanon, D.M. Machine vision for orchard navigation. Comput. Ind. 2018, 98, 165–171.
2. Guo, S.Q. Research on Identifying and Locating Apple in Orchard Based on Neural Network and 3D Vision. Master's Thesis, Beijing Jiaotong University, Beijing, China, 2019.
3. Blok, P.M.; van Boheemen, K.; van Evert, F.K.; IJsselmuiden, J.; Kim, G.H. Robot navigation in orchards with localization based on Particle filter and Kalman filter. Comput. Electron. Agric. 2019, 157, 261–269.
4. Feng, J.; Liu, G.; Si, Y.S.; Wang, S.; He, B.; Ren, W. Algorithm based on image processing technology to generate navigation directrix in orchard. Trans. Chin. Soc. Agric. Mach. 2012, 43, 184–189.
5. He, B.; Liu, G.; Ji, Y.; Si, Y.S.; Gao, R. Auto recognition of navigation path for harvest robot based on machine vision. In Proceedings of the International Conference on Computer and Computing Technologies in Agriculture, Nanchang, China, 23–24 October 2010.
6. Li, W.Y. Research on the Method of Generating Visual Navigation Path of Kiwi Picking Robot. Master's Thesis, North West Agriculture and Forestry University, Yangling, China, 2017.
7. Ali, W.; Georgsson, F.; Hellstrom, T. Visual tree detection for autonomous navigation in forest environment. In Proceedings of the IEEE Intelligent Vehicles Symposium, Eindhoven, The Netherlands, 4–6 June 2008.
8. Lyu, H.K.; Park, C.H.; Han, D.H.; Kwak, S.W.; Choi, B. Orchard free space and center line estimation using naive Bayesian classifier for unmanned ground self-driving vehicle. Symmetry 2018, 10, 355.
9. Zhou, J.J.; Hu, C. Inter-row localization method for agricultural robot working in close planting orchard. Trans. Chin. Soc. Agric. Mach. 2015, 46, 22–28.
10. Shalal, N.; Low, T.; McCarthy, C.; Hancock, N. Orchard mapping and mobile robot localisation using on-board camera and laser scanner data fusion–Part A: Tree detection. Comput. Electron. Agric. 2015, 119, 254–266.
11. Shalal, N.; Low, T.; McCarthy, C.; Hancock, N. Orchard mapping and mobile robot localisation using on-board camera and laser scanner data fusion–Part B: Mapping and localisation. Comput. Electron. Agric. 2015, 119, 267–278.
12. Zhang, J.; Karkee, M.; Zhang, Q.; Zhang, X.; Yaqoob, M.; Fu, L.S.; Wang, S.M. Multi-class object detection using faster R-CNN and estimation of shaking locations for automated shake-and-catch apple harvesting. Comput. Electron. Agric. 2020, 173, 105384.
13. Redmon, J.; Divvala, S.; Girshick, R.; Farhadi, A. You Only Look Once: Unified, real-time object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016.
14. Liu, J.; Wang, X.W. Early recognition of tomato gray leaf spot disease based on MobileNetv2-YOLOv3 model. Plant Methods 2020, 16, 83.
15. Cenggoro, T.W.; Aslamiah, A.H.; Yunanto, A. Feature pyramid networks for crowd counting. Procedia Comput. Sci. 2019, 157, 175–182.
16. Luo, Z.; Yu, H.; Zhang, Y. Pine cone detection using boundary equilibrium generative adversarial networks and improved YOLOv3 model. Sensors 2020, 20, 4430.
17. He, K.M.; Zhang, X.Y.; Ren, S.Q.; Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016.
18. Redmon, J.; Farhadi, A. YOLOv3: An incremental improvement. arXiv 2018, arXiv:1804.02767.
19. Wu, D.H.; Wu, Q.; Yin, X.Q.; Jiang, B.; Wang, H.; He, D.J.; Song, H.B. Lameness detection of dairy cows based on the YOLOv3 deep learning algorithm and a relative step size characteristic vector. Biosyst. Eng. 2020, 189, 150–163.
20. Tian, Y.N.; Yang, G.D.; Wang, Z.; Wang, H.; Li, E.; Liang, Z.Z. Apple detection during different growth stages in orchards using the improved YOLO-V3 model. Comput. Electron. Agric. 2019, 157, 417–426.
21. Liu, G.; Nouaze, J.C.; Mbouembe, P.L.T.; Kim, J.H. YOLO-Tomato: A robust algorithm for tomato detection based on YOLOv3. Sensors 2020, 20, 2145.
22. Han, Z.H.; Li, J.; Yuan, Y.W.; Fang, X.F.; Zhao, B.; Zhu, L.C. Path recognition of orchard visual navigation based on U-Net. Trans. Chin. Soc. Agric. Mach. 2021, 52, 30–39.
Type | AP/%
---|---
Tree | 92.70
Trunk | 91.51
Sub-Figure | Detection Box Coordinates (top-left, bottom-right) | Reference Point Coordinates | Manual Marking Coordinates | Error (Pixels)
---|---|---|---|---
(a) | (232, 425), (308, 512) | (270, 512) | (268, 512) | 2.00
(b) | (180, 439), (345, 632) | (262.5, 632) | (260, 631) | 2.69
(b) | (171, 532), (302, 685) | (236.5, 685) | (235, 685) | 1.50
(c) | (29, 328), (53, 373) | (41, 373) | (40, 372) | 1.41
(c) | (534, 352), (581, 411) | (557.5, 411) | (558, 413) | 2.06
| | Weed Environment | | | Weak Sunlight | | | Strong Sunlight | | | Normal Sunlight | | |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| | To the left | To the right | Correct | To the left | To the right | Correct | To the left | To the right | Correct | To the left | To the right | Correct |
| Left row line | 1 | 0 | 6 | 1 | 0 | 6 | 0 | 0 | 7 | 1 | 0 | 8 |
| Right row line | 0 | 0 | 7 | 0 | 0 | 7 | 0 | 1 | 6 | 1 | 0 | 8 |
| Total | 1 | 0 | 13 | 1 | 0 | 13 | 0 | 1 | 13 | 2 | 0 | 16 |
| | Weed Environment | | Weak Sunlight | | Strong Sunlight | | Normal Sunlight | |
|---|---|---|---|---|---|---|---|---|
| Type | Little deviation | Correct | Little deviation | Correct | Little deviation | Correct | Little deviation | Correct |
| Amount | 1 | 6 | 1 | 6 | 0 | 7 | 1 | 8 |
| Method | Weak Sunlight | | Normal Sunlight | | Strong Sunlight | |
|---|---|---|---|---|---|---|
| | Maximum | Mean Value | Maximum | Mean Value | Maximum | Mean Value |
| U-Net [22] | 19 | 11.8 | 10 | 6.5 | 7 | 2.9 |
| DL_LS | 8 | 5.2 | 5 | 3.4 | 4 | 2.1 |
Publisher's Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
© 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).