A Diameter Measurement Method of Red Jujubes Trunk Based on Improved PSPNet
Abstract
1. Introduction
- (1) The MobileNetV2 network was used as the backbone of the segmentation network to reduce the number of parameters and the model size.
- (2) A convolutional block attention module (CBAM) was introduced into the backbone network to improve its feature extraction ability.
- (3) A refinement residual block (RRB) was introduced into both the main branch and the side branch to recover more image detail and realize accurate, efficient jujube tree segmentation (see the sketch after this list).
- (4) A method for accurately measuring trunk diameter was proposed.
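To make the architecture concrete, the sketch below assembles the three modifications in PyTorch: a MobileNetV2 [28] feature extractor, CBAM [31] on the deep features, a pyramid pooling module as in PSPNet [26], and RRBs [32] on the main and side branches. This is a minimal illustration with assumed layer widths, branch tap point, and pooling sizes, not the authors' exact configuration.

```python
# Minimal sketch of the improved PSPNet (assumed wiring, not the authors' code):
# MobileNetV2 backbone + CBAM attention + pyramid pooling + RRBs on both branches.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.models import mobilenet_v2

class CBAM(nn.Module):
    """Convolutional block attention module (Woo et al. [31]): channel then spatial attention."""
    def __init__(self, c, r=16):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(c, c // r), nn.ReLU(), nn.Linear(c // r, c))
        self.spatial = nn.Conv2d(2, 1, kernel_size=7, padding=3)

    def forward(self, x):
        b, c, _, _ = x.shape
        avg = self.mlp(x.mean(dim=(2, 3)))
        mx = self.mlp(x.amax(dim=(2, 3)))
        x = x * torch.sigmoid(avg + mx).view(b, c, 1, 1)           # channel attention
        s = torch.cat([x.mean(1, keepdim=True), x.amax(1, keepdim=True)], dim=1)
        return x * torch.sigmoid(self.spatial(s))                  # spatial attention

class RRB(nn.Module):
    """Refinement residual block (Yu et al. [32]): 1x1 conv, then a residual 3x3 unit."""
    def __init__(self, c_in, c_out):
        super().__init__()
        self.reduce = nn.Conv2d(c_in, c_out, 1)
        self.body = nn.Sequential(
            nn.Conv2d(c_out, c_out, 3, padding=1), nn.BatchNorm2d(c_out), nn.ReLU(),
            nn.Conv2d(c_out, c_out, 3, padding=1))

    def forward(self, x):
        x = self.reduce(x)
        return F.relu(x + self.body(x))

class ImprovedPSPNet(nn.Module):
    def __init__(self, n_classes=2):
        super().__init__()
        feats = mobilenet_v2().features
        self.stem, self.high = feats[:7], feats[7:]    # side branch taps an intermediate stage
        self.cbam = CBAM(1280)
        self.pools = nn.ModuleList(
            [nn.AdaptiveAvgPool2d(s) for s in (1, 2, 3, 6)])       # pyramid pooling module
        self.pool_convs = nn.ModuleList([nn.Conv2d(1280, 320, 1) for _ in range(4)])
        self.main_rrb = RRB(1280 + 4 * 320, 256)
        self.side_rrb = RRB(32, 256)
        self.head = nn.Conv2d(256, n_classes, 1)

    def forward(self, x):
        size = x.shape[2:]
        low = self.stem(x)                     # higher-resolution side-branch features
        deep = self.cbam(self.high(low))       # attention-refined deep features
        pyramids = [deep] + [
            F.interpolate(conv(pool(deep)), deep.shape[2:], mode="bilinear", align_corners=False)
            for pool, conv in zip(self.pools, self.pool_convs)]
        main = self.main_rrb(torch.cat(pyramids, dim=1))
        main = F.interpolate(main, low.shape[2:], mode="bilinear", align_corners=False)
        out = main + self.side_rrb(low)        # fuse main and side branches
        return F.interpolate(self.head(out), size, mode="bilinear", align_corners=False)
```

The side branch reuses an early, higher-resolution stage of the backbone so that its RRB can restore detail lost along the deep, attention-refined main branch.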
2. Materials and Methods
2.1. Image Data Acquisition
2.2. Data Annotation and Dataset Division
2.3. Improvement of PSPNet Segmentation Model
2.3.1. Baseline PSPNet Model
2.3.2. Backbone Network Based on MobileNetV2
2.3.3. Backbone Feature Extraction Network Embedding CBAM
2.3.4. Improved PSPNet Model Embedding RRB
2.3.5. Improved PSPNet Model
2.3.6. Measurement Method of Jujube Tree Diameter Based on Centerline
3. Results and Discussion
3.1. Experimental Platform and Model Training
3.2. Metrics for Model Performance Evaluation
3.3. Experimental Results and Analysis of Trunk Segmentation
3.4. Different Model Segmentation Results and Analysis
3.5. Diameter Detection Results Based on Improved PSPNet
4. Conclusions
- (1) Expand the dataset and increase the robustness of the model. The dataset used in this research contains only two kinds of jujube trees, so more kinds of jujube trunk data should be added to enhance the robustness of the model.
- (2) Enhance the segmentation ability of the model for small objects. The background of jujube trees is complex and shares many features with the trees themselves, which easily leads to false and missed segmentation. The feature fusion ability should therefore be strengthened in follow-up work to reduce information loss and improve detection accuracy.
- (3) To further serve intelligent agriculture, this method can be applied to robotic picking operations, guiding a robot to pick fruit accurately.
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
1. Peng, J.; Xie, H.; Feng, Y.; Fu, L.; Sun, S.; Cui, Y. Simulation study of vibratory harvesting of Chinese winter jujube (Zizyphus jujuba Mill. cv. Dongzao). Comput. Electron. Agric. 2017, 143, 57–65.
2. Ren, G.; Lin, T.; Ying, Y.; Chowdhary, G.; Ting, K. Agricultural robotics research applicable to poultry production: A review. Comput. Electron. Agric. 2020, 169, 105216.
3. Utstumo, T.; Urdal, F.; Brevik, A.; Dørum, J.; Netland, J.; Overskeid, Ø.; Berge, T.W.; Gravdahl, J.T. Robotic in-row weed control in vegetables. Comput. Electron. Agric. 2018, 154, 36–45.
4. Wan, H.; Fan, Z.; Yu, X.; Kang, M.; Wang, P.; Zeng, X. A real-time branch detection and reconstruction mechanism for harvesting robot via convolutional neural network and image segmentation. Comput. Electron. Agric. 2022, 192, 106609.
5. Shalal, N.; Low, T.; McCarthy, C.; Hancock, N. Orchard mapping and mobile robot localisation using on-board camera and laser scanner data fusion–Part B: Mapping and localisation. Comput. Electron. Agric. 2015, 119, 267–278.
6. Jiang, A.; Noguchi, R.; Ahamed, T. Tree Trunk Recognition in Orchard Autonomous Operations under Different Light Conditions Using a Thermal Camera and Faster R-CNN. Sensors 2022, 22, 2065.
7. Reder, S.; Mund, J.-P.; Albert, N.; Waßermann, L.; Miranda, L. Detection of Windthrown Tree Stems on UAV-Orthomosaics Using U-Net Convolutional Networks. Remote Sens. 2021, 14, 75.
8. Kamilaris, A.; Prenafeta-Boldú, F.X. Deep learning in agriculture: A survey. Comput. Electron. Agric. 2018, 147, 70–90.
9. Zhang, C.; Yang, G.; Jiang, Y.; Xu, B.; Li, X.; Zhu, Y.; Lei, L.; Chen, R.; Dong, Z.; Yang, H. Apple tree branch information extraction from terrestrial laser scanning and backpack-lidar. Remote Sens. 2020, 12, 3592.
10. Westling, F.; Underwood, J.; Bryson, M. Graph-based methods for analyzing orchard tree structure using noisy point cloud data. Comput. Electron. Agric. 2021, 187, 106270.
11. Hackenberg, J.; Spiecker, H.; Calders, K.; Disney, M.; Raumonen, P. SimpleTree—An efficient open source tool to build tree models from TLS clouds. Forests 2015, 6, 4245–4294.
12. Bargoti, S.; Underwood, J.P.; Nieto, J.I.; Sukkarieh, S. A pipeline for trunk localisation using LiDAR in trellis structured orchards. In Field and Service Robotics; Springer: Berlin/Heidelberg, Germany, 2015; pp. 455–468.
13. Shalal, N.; Low, T.; McCarthy, C.; Hancock, N. Orchard mapping and mobile robot localisation using on-board camera and laser scanner data fusion–Part A: Tree detection. Comput. Electron. Agric. 2015, 119, 254–266.
14. Chen, X.; Zhang, B.; Luo, L. Multi-feature fusion tree trunk detection and orchard mobile robot localization using camera/ultrasonic sensors. Comput. Electron. Agric. 2018, 147, 91–108.
15. Shen, Y.; Zhuang, Z.; Liu, H.; Jiang, J.; Ou, M. Fast recognition method of multi-feature trunk based on realsense depth camera. Trans. Chin. Soc. Agric. Mach. 2022, 53, 304–312.
16. Liu, H.; Zhu, S.; Shen, Y.; Tang, J. Fast segmentation algorithm of tree trunks based on multi-feature fusion. Trans. Chin. Soc. Agric. Mach. 2020, 51, 221–229.
17. Zhao, M.; Liu, Q.; Jha, A.; Deng, R.; Yao, T.; Mahadevan-Jansen, A.; Tyska, M.J.; Millis, B.A.; Huo, Y. VoxelEmbed: 3D instance segmentation and tracking with voxel embedding based deep learning. In Proceedings of the International Workshop on Machine Learning in Medical Imaging, Strasbourg, France, 27 September 2021; pp. 437–446.
18. Zhao, M.; Jha, A.; Liu, Q.; Millis, B.A.; Mahadevan-Jansen, A.; Lu, L.; Landman, B.A.; Tyskac, M.J.; Huo, Y. Faster mean-shift: GPU-accelerated embedding-clustering for cell segmentation and tracking. arXiv 2020, arXiv:2007.14283.
19. You, L.; Jiang, H.; Hu, J.; Chang, C.; Chen, L.; Cui, X.; Zhao, M. GPU-accelerated Faster Mean Shift with Euclidean distance metrics. arXiv 2021, arXiv:2112.13891.
20. Li, X.; Pan, J.; Xie, F.; Zeng, J.; Li, Q.; Huang, X.; Liu, D.; Wang, X. Fast and accurate green pepper detection in complex backgrounds via an improved Yolov4-tiny model. Comput. Electron. Agric. 2021, 191, 106503.
21. Zeng, L.; Feng, J.; He, L. Semantic segmentation of sparse 3D point cloud based on geometrical features for trellis-structured apple orchard. Biosyst. Eng. 2020, 196, 46–55.
22. Majeed, Y.; Zhang, J.; Zhang, X.; Fu, L.; Karkee, M.; Zhang, Q.; Whiting, M.D. Apple tree trunk and branch segmentation for automatic trellis training using convolutional neural network based semantic segmentation. IFAC-PapersOnLine 2018, 51, 75–80.
23. Zhang, J.; He, L.; Karkee, M.; Zhang, Q.; Zhang, X.; Gao, Z. Branch detection for apple trees trained in fruiting wall architecture using depth features and Regions-Convolutional Neural Network (R-CNN). Comput. Electron. Agric. 2018, 155, 386–393.
24. Majeed, Y.; Zhang, J.; Zhang, X.; Fu, L.; Karkee, M.; Zhang, Q.; Whiting, M.D. Deep learning based segmentation for automated training of apple trees on trellis wires. Comput. Electron. Agric. 2020, 170, 105277.
25. Gao, F.; Fu, L.; Zhang, X.; Majeed, Y.; Li, R.; Karkee, M.; Zhang, Q. Multi-class fruit-on-plant detection for apple in SNAP system using Faster R-CNN. Comput. Electron. Agric. 2020, 176, 105634.
26. Zhao, H.; Shi, J.; Qi, X.; Wang, X.; Jia, J. Pyramid scene parsing network. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017.
27. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778.
28. Sandler, M.; Howard, A.; Zhu, M.; Zhmoginov, A.; Chen, L.-C. MobileNetV2: Inverted residuals and linear bottlenecks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 4510–4520.
29. Howard, A.G.; Zhu, M.; Chen, B.; Kalenichenko, D.; Wang, W.; Weyand, T.; Andreetto, M.; Adam, H. MobileNets: Efficient convolutional neural networks for mobile vision applications. arXiv 2017, arXiv:1704.04861.
30. Chen, S.; Song, Y.; Su, J.; Fang, Y.; Shen, L.; Mi, Z.; Su, B. Segmentation of field grape bunches via an improved pyramid scene parsing network. Int. J. Agric. Biol. Eng. 2021, 14, 185–194.
31. Woo, S.; Park, J.; Lee, J.-Y.; Kweon, I.S. CBAM: Convolutional block attention module. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018; pp. 3–19.
32. Yu, C.; Wang, J.; Peng, C.; Gao, C.; Yu, G.; Sang, N. Learning a discriminative feature network for semantic segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 1857–1866.
33. Ma, B.; Du, J.; Wang, L.; Jiang, H.; Zhou, M. Automatic branch detection of jujube trees based on 3D reconstruction for dormant pruning using the deep learning-based method. Comput. Electron. Agric. 2021, 190, 106484.
Reference | Contribution | Data Type | Method | Result |
---|---|---|---|---|
[11] | Hackenberg et al. established a high-precision tree segmentation method. | Terrestrial laser scanning point clouds | A statistical method based on cylinder radii fitted to the point cloud data was presented. | The total relative error was 8%. |
[12] | Bargoti et al. presented an identification method for individual apple trees. | LiDAR point cloud data | A Hidden Semi-Markov Model and the Hough Transform were used to detect trunks in LiDAR data. | The accuracy of tree segmentation was 89%. |
[13] | Shalal et al. presented a tree segmentation algorithm to discriminate between trees and non-tree objects. | Laser point clouds and camera images | A data fusion method combining camera and laser scanner data was proposed to detect trunks. | The detection accuracy was 96.64%. |
[14] | Chen et al. presented a trunk detection algorithm based on multi-sensor integration. | RGB images | HOG features and an SVM were used to train a classifier, gray histograms were used to optimize the classifier, and the Roberts cross edge detector was used to improve accuracy. | The recall and accuracy of citrus trunk recognition experiments were 92.14% and 95.49%, respectively. |
[15] | Shen et al. proposed a fast tree trunk recognition method based on trunk features. | RGB-D images | Superpixel segmentation was used for trunk segmentation, and parallel edge feature detection was used to detect trunk edges. | The recognition accuracy of trunks under normal illumination was 91.35%. |
[16] | Liu et al. proposed a fast trunk segmentation algorithm. | RGB-D images | A superpixel algorithm was used to segment trunks, and color matching of superpixel blocks was used to distinguish trunk from non-trunk regions. | The detection accuracy was 95.0%. |
Input Size | Operator | Expansion Factor t | Output Channels | Stride |
---|---|---|---|---|
3 × 480 × 480 | Conv2d | - | 32 | 2 |
32 × 240 × 240 | Bottleneck × 1 | 1 | 16 | 1 |
16 × 240 × 240 | Bottleneck × 2 | 6 | 24 | 1 |
24 × 240 × 240 | Bottleneck × 3 | 6 | 32 | 2 |
32 × 120 × 120 | Bottleneck × 4 | 6 | 64 | 2 |
64 × 60 × 60 | Bottleneck × 3 | 6 | 96 | 1 |
96 × 60 × 60 | Bottleneck × 3 | 6 | 160 | 2 |
160 × 30 × 30 | Bottleneck × 1 | 6 | 320 | 1 |
320 × 30 × 30 | Conv2d | - | 1280 | 1 |
1280 × 30 × 30 | Avgpool | - | - | - |
1280 × 1 × 1 | Conv2d | - | 2 | 1 |
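The Bottleneck rows above are MobileNetV2 inverted residual blocks [28]. A minimal PyTorch sketch of one block follows: a 1 × 1 pointwise expansion by factor t, a 3 × 3 depthwise convolution that carries the stride, and a linear 1 × 1 projection; the residual skip applies only when the stride is 1 and the input and output channels match.

```python
import torch.nn as nn

class Bottleneck(nn.Module):
    """One inverted residual block: expand (1x1) -> depthwise (3x3) -> project (1x1, linear)."""
    def __init__(self, c_in, c_out, t, stride):
        super().__init__()
        hidden = c_in * t
        layers = []
        if t != 1:
            # 1x1 pointwise expansion by factor t
            layers += [nn.Conv2d(c_in, hidden, 1, bias=False), nn.BatchNorm2d(hidden), nn.ReLU6()]
        layers += [
            # 3x3 depthwise convolution; carries the block's stride
            nn.Conv2d(hidden, hidden, 3, stride, 1, groups=hidden, bias=False),
            nn.BatchNorm2d(hidden), nn.ReLU6(),
            # linear 1x1 projection back down (no activation, per the paper)
            nn.Conv2d(hidden, c_out, 1, bias=False), nn.BatchNorm2d(c_out)]
        self.block = nn.Sequential(*layers)
        self.use_skip = stride == 1 and c_in == c_out  # residual only when shapes match

    def forward(self, x):
        return x + self.block(x) if self.use_skip else self.block(x)
```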
Item | Specification |
---|---|
Depth field of view (FOV) (H × V × D) | 85.2° × 58° × 94° |
Maximum output resolution (pixels) | 1280 × 720 |
Minimum depth distance (m) | 0.1 |
RGB sensor FOV before calibration (H × V × D) | 69.4° × 42.5° × 77° |
RGB sensor FOV after calibration (H × V) | 53.4° × 42.5° |
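These specifications are consistent with an Intel RealSense depth camera. As a hedged illustration (the exact stream settings used in the study are not given), the snippet below configures depth and RGB streams at the table's maximum resolution with the pyrealsense2 SDK and reads one depth value:

```python
import pyrealsense2 as rs

# Configure depth and color streams at 1280x720 (the table's maximum resolution), 30 fps.
pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.depth, 1280, 720, rs.format.z16, 30)
config.enable_stream(rs.stream.color, 1280, 720, rs.format.bgr8, 30)
pipeline.start(config)

frames = pipeline.wait_for_frames()
depth = frames.get_depth_frame()
# Distance (m) at the image centre; depths below ~0.1 m are invalid per the table.
print(depth.get_distance(640, 360))
pipeline.stop()
```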
Configuration | Parameter |
---|---|
CPU | Intel(R) Core(TM) i7-10700K |
GPU | NVIDIA GeForce RTX 3070 |
Accelerated environment | CUDA 11.1, cuDNN 8.2.1 |
Development environment | PyCharm 2021.3.2 |
Operating system | Ubuntu 18.04 |
Model | IoU/% | PA/% | FPS | Parameters |
---|---|---|---|---|
ResNet50 | 81.21 | 89.44 | 49.77 | 4.91 × 10⁷ |
ResNet50 + CBAM | 80.82 | 89.44 | 48.57 | 5.04 × 10⁷ |
ResNet50 + RRB | 81.41 | 90.86 | 51.02 | 4.91 × 10⁷ |
ResNet50 + CBAM + RRB | 81.80 | 90.56 | 48.76 | 5.04 × 10⁷ |
MobileNetV2 | 81.36 | 89.97 | 54.02 | 2.45 × 10⁶ |
MobileNetV2 + CBAM | 81.37 | 89.13 | 51.76 | 2.48 × 10⁶ |
MobileNetV2 + RRB | 81.82 | 90.39 | 54.84 | 2.45 × 10⁶ |
Our model (MobileNetV2 + CBAM + RRB) | 81.88 | 91.39 | 50.90 | 2.48 × 10⁶ |
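For reference, the two reported metrics can be computed from integer label maps as in the minimal sketch below: IoU for the trunk class and overall pixel accuracy (PA) over all pixels; both are assumed here to be reported as percentages, matching the tables.

```python
import numpy as np

def iou_and_pa(pred: np.ndarray, gt: np.ndarray, trunk: int = 1):
    """IoU of the trunk class and overall pixel accuracy for integer label maps."""
    inter = np.logical_and(pred == trunk, gt == trunk).sum()
    union = np.logical_or(pred == trunk, gt == trunk).sum()
    iou = inter / union if union else 1.0
    pa = (pred == gt).mean()  # fraction of correctly labelled pixels
    return 100.0 * iou, 100.0 * pa
```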
Model | IoU/% | PA/% | FPS | Parameters |
---|---|---|---|---|
BiSeNet | 74.86 | 80.74 | 122.58 | 2.31 × 10⁷ |
DeepLab v3+ | 71.22 | 77.87 | 37.96 | 5.46 × 10⁷ |
FCN | 79.31 | 88.15 | 55.92 | 2.01 × 10⁷ |
U-Net | 78.54 | 86.91 | 56.44 | 7.77 × 10⁶ |
U-Net++ | 78.99 | 87.13 | 21.29 | 9.16 × 10⁶ |
Our model | 81.88 | 91.39 | 50.90 | 2.48 × 10⁶ |
The columns are: m_dis = measurement distance from camera to trunk (mm); t_dia = true trunk diameter (mm); and, for each model, p_dis = trunk diameter in the image (pixels), m_dia = measured diameter (mm), a_err = absolute error |t_dia − m_dia| (mm), r_err = relative error a_err/t_dia (%), and m_acc = measurement accuracy, 100% − r_err.

m_dis/mm | t_dia/mm | PSPNet p_dis/pixel | PSPNet m_dia/mm | PSPNet a_err/mm | PSPNet r_err/% | PSPNet m_acc/% | Improved PSPNet p_dis/pixel | Improved PSPNet m_dia/mm | Improved PSPNet a_err/mm | Improved PSPNet r_err/% | Improved PSPNet m_acc/% |
---|---|---|---|---|---|---|---|---|---|---|---|
450.32 | 32.86 | 44.08 | 29.86 | 3.00 | 9.13 | 90.87 | 51.52 | 35.09 | 2.23 | 6.80 | 93.20 |
480.02 | 33.23 | 43.05 | 31.06 | 2.17 | 6.52 | 93.48 | 47.50 | 34.39 | 1.16 | 3.48 | 96.52 |
480.36 | 33.23 | 38.23 | 27.50 | 5.73 | 17.24 | 82.76 | 47.43 | 34.36 | 1.13 | 3.40 | 96.60 |
500.52 | 35.32 | 39.54 | 29.67 | 5.65 | 15.99 | 84.01 | 46.58 | 35.14 | 0.18 | 0.52 | 99.48 |
500.65 | 23.09 | 26.16 | 19.44 | 3.65 | 15.80 | 84.20 | 32.26 | 24.08 | 0.99 | 4.30 | 95.70 |
510.36 | 38.98 | 38.77 | 29.65 | 9.33 | 23.95 | 76.05 | 48.16 | 37.09 | 1.89 | 4.86 | 95.14 |
511.32 | 42.23 | 47.43 | 36.58 | 5.65 | 13.39 | 86.61 | 52.00 | 40.23 | 2.00 | 4.73 | 95.27 |
514.56 | 31.25 | 37.94 | 29.24 | 2.01 | 6.44 | 93.56 | 39.20 | 30.23 | 1.02 | 3.26 | 96.74 |
520.23 | 32.13 | 37.63 | 29.31 | 2.82 | 8.78 | 91.22 | 42.64 | 33.33 | 1.20 | 3.74 | 96.26 |
520.53 | 54.23 | 55.81 | 44.08 | 10.15 | 18.71 | 81.29 | 66.96 | 53.34 | 0.89 | 1.65 | 98.35 |
Average | - | - | - | 5.02 | 13.59 | 86.41 | - | - | 1.27 | 3.67 | 96.33 |
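The table's columns are mutually consistent with a simple pinhole conversion from pixel diameter to millimetres, followed by elementary error metrics. The sketch below illustrates this relationship; the focal length of ≈660 px is inferred from the table for illustration only and is not a calibrated value from the paper.

```python
def measure_diameter(p_dis_px: float, m_dis_mm: float, focal_px: float = 660.0) -> float:
    """Pinhole conversion of a pixel-space trunk diameter to millimetres.
    focal_px is an illustrative value inferred from the table, not a calibrated one."""
    return p_dis_px * m_dis_mm / focal_px

def errors(m_dia_mm: float, t_dia_mm: float):
    """Absolute error (mm), relative error (%), and measurement accuracy (%)."""
    a_err = abs(t_dia_mm - m_dia_mm)
    r_err = 100.0 * a_err / t_dia_mm
    return a_err, r_err, 100.0 - r_err

# First table row (improved PSPNet): 51.52 px at 450.32 mm gives ~35.1 mm, and
# errors(35.09, 32.86) reproduces a_err = 2.23 mm, r_err ≈ 6.8%, m_acc ≈ 93.2%.
```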
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations. |
© 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).