Cucumber Leaf Segmentation Based on Bilayer Convolutional Network
Abstract
1. Introduction
- (1) Multi-Scale Strategy and Dilated Convolutions: Because plant leaves change markedly in shape as they grow, we employ a deep-learning multi-scale strategy that divides the original images into blocks and then fuses them with dilated convolutions, allowing features of leaves with varying shapes to be extracted more effectively (a minimal sketch of such a fusion block follows this list).
- (2) Attention Mechanism for Edge Features: The model’s ability to represent the edges of overlapping plant leaves is often insufficient. By adding an attention mechanism after the feature maps, the model can more accurately capture the critical edge features of each leaf, improving leaf segmentation.
- (3) Varifocal Loss for Dense Regions: In images with densely distributed plant leaves, CNNs often overlook many target features. We therefore replace the Focal Loss in BCNet with the Varifocal Loss from VarifocalNet, which is more effective for detecting dense targets.
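As a concrete illustration of contribution (1), the sketch below shows one way such a multi-scale fusion block can be built in PyTorch: parallel dilated convolutions whose outputs are concatenated and fused by a 1 × 1 convolution. The dilation rates and channel widths are illustrative assumptions and not the exact configuration used in this work.

```python
# Minimal sketch of multi-scale feature fusion with dilated convolutions (PyTorch).
# Dilation rates and channel widths are assumptions for illustration only.
import torch
import torch.nn as nn

class DilatedFusion(nn.Module):
    def __init__(self, in_ch=256, out_ch=256, dilations=(1, 2, 4)):
        super().__init__()
        # One 3x3 branch per dilation rate; padding = dilation keeps the spatial size.
        self.branches = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(in_ch, out_ch, 3, padding=d, dilation=d, bias=False),
                nn.BatchNorm2d(out_ch),
                nn.ReLU(inplace=True),
            )
            for d in dilations
        ])
        # A 1x1 convolution fuses the concatenated multi-scale responses.
        self.fuse = nn.Conv2d(out_ch * len(dilations), out_ch, 1)

    def forward(self, x):
        return self.fuse(torch.cat([b(x) for b in self.branches], dim=1))

feats = torch.randn(1, 256, 64, 64)   # e.g. the feature map of one image block
fused = DilatedFusion()(feats)        # -> torch.Size([1, 256, 64, 64])
```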
2. Materials and Methods
2.1. BCNet
2.2. Algorithm Pipeline
- (1) Object detection: candidate leaf targets are first located in the image, and a detection box is produced for each target.
- (2) RoI feature extraction: according to the position of each detection box, the RoI Align [30] algorithm extracts the corresponding RoI sub-region of the feature map, and this sub-region is used as the input of BCNet.
- (3) Instance segmentation through BCNet: First, the RoI features are fed into the top graph convolution network (GCN) layer, which models the appearance of the occluding (upper) object and outputs its mask and boundary within the region of interest. Second, the occluder features extracted by the top GCN layer are added to the RoI features obtained with RoI Align, and the result is fed into the bottom GCN layer to obtain new features for the occluded object. Finally, using the features of the occluder and the occludee within the region, the instance segmentation of the target is completed (a schematic sketch of this data flow is given after this list).
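The following sketch illustrates the data flow described in step (3): RoI Align extracts the sub-region features, a top (occluder) branch predicts the mask and boundary of the upper leaf, and its features are combined with the RoI features before the bottom (occludee) branch predicts the mask of the occluded leaf. Plain convolutions stand in for BCNet's graph convolution layers, and all shapes and layer widths are assumptions for illustration only.

```python
# Schematic sketch of the bilayer mask-head data flow (occluder branch feeding the
# occludee branch). Plain convolutions stand in for BCNet's graph convolution layers;
# shapes and layer widths are assumptions for illustration.
import torch
import torch.nn as nn
from torchvision.ops import roi_align

class BilayerMaskHead(nn.Module):
    def __init__(self, ch=256):
        super().__init__()
        self.top = nn.Sequential(nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU())     # occluder branch
        self.top_mask = nn.Conv2d(ch, 1, 1)      # occluder mask logits
        self.top_edge = nn.Conv2d(ch, 1, 1)      # occluder boundary logits
        self.bottom = nn.Sequential(nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU())  # occludee branch
        self.bottom_mask = nn.Conv2d(ch, 1, 1)   # occludee (target leaf) mask logits

    def forward(self, roi_feat):
        top_feat = self.top(roi_feat)
        occluder_mask = self.top_mask(top_feat)
        occluder_edge = self.top_edge(top_feat)
        # The occludee branch sees the RoI features combined with the occluder features.
        bottom_feat = self.bottom(roi_feat + top_feat)
        occludee_mask = self.bottom_mask(bottom_feat)
        return occluder_mask, occluder_edge, occludee_mask

fpn_level = torch.randn(1, 256, 100, 100)               # one FPN feature map
boxes = torch.tensor([[0, 10., 10., 60., 60.]])         # (batch_idx, x1, y1, x2, y2)
roi = roi_align(fpn_level, boxes, output_size=(14, 14), spatial_scale=1.0, aligned=True)
occluder_m, occluder_e, occludee_m = BilayerMaskHead()(roi)
```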
2.3. Improved Scheme
- (1) To accurately extract edge information in the overlapping parts of plant leaves, this study adds an attention module after the last convolution layer of the ResNet-50 backbone, so that the model pays more attention to the key edge features of each leaf and achieves accurate segmentation of plant leaves (a sketch of the module is given in Section 2.3.1).
- (2) Because the images contain many densely packed leaf regions, CNNs often miss a large amount of target feature information. To address this, we replace the classification loss of FCOS, Focal Loss, with the Varifocal Loss of VarifocalNet, which is better suited to detecting dense targets (see the sketch in Section 2.3.2).
2.3.1. CBAM
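A minimal PyTorch implementation of CBAM (channel attention followed by spatial attention), in the form described by Woo et al., is sketched below. The reduction ratio and spatial kernel size are the commonly used defaults and are not necessarily those adopted in this study.

```python
# Minimal sketch of CBAM: channel attention followed by spatial attention.
# reduction=16 and a 7x7 spatial kernel are common defaults, assumed here.
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    def __init__(self, ch, reduction=16):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(ch, ch // reduction), nn.ReLU(),
                                 nn.Linear(ch // reduction, ch))

    def forward(self, x):
        b, c, _, _ = x.shape
        avg = self.mlp(x.mean(dim=(2, 3)))     # global average pooling branch
        mx = self.mlp(x.amax(dim=(2, 3)))      # global max pooling branch
        return torch.sigmoid(avg + mx).view(b, c, 1, 1) * x

class SpatialAttention(nn.Module):
    def __init__(self, kernel=7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel, padding=kernel // 2)

    def forward(self, x):
        avg = x.mean(dim=1, keepdim=True)      # channel-wise average map
        mx = x.amax(dim=1, keepdim=True)       # channel-wise max map
        return torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1))) * x

class CBAM(nn.Module):
    def __init__(self, ch):
        super().__init__()
        self.ca, self.sa = ChannelAttention(ch), SpatialAttention()

    def forward(self, x):
        return self.sa(self.ca(x))

out = CBAM(2048)(torch.randn(1, 2048, 7, 7))   # e.g. after the last ResNet-50 stage
```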
2.3.2. Loss Function
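The Varifocal Loss that replaces Focal Loss in the classification branch can be sketched as follows. Positive samples are weighted by their IoU-aware target score q, while negatives are down-weighted as in Focal Loss; the alpha and gamma values below follow the defaults reported in the VarifocalNet paper and are not necessarily the values used in this study.

```python
# Sketch of the Varifocal Loss from VarifocalNet. alpha=0.75 and gamma=2.0 are the
# defaults reported in the VarifocalNet paper, assumed here for illustration.
import torch
import torch.nn.functional as F

def varifocal_loss(pred_logits, target_score, alpha=0.75, gamma=2.0):
    """pred_logits: raw classification logits.
    target_score: IoU-aware target q, with q > 0 (the IoU with the ground truth)
    for positive samples and q = 0 for negatives."""
    p = torch.sigmoid(pred_logits)
    # Positives are weighted by q itself, so high-quality (high-IoU) examples
    # dominate training; negatives are down-weighted as in Focal Loss.
    weight = torch.where(target_score > 0,
                         target_score,
                         alpha * p.detach().pow(gamma))
    bce = F.binary_cross_entropy_with_logits(pred_logits, target_score, reduction="none")
    return (weight * bce).sum()

loss = varifocal_loss(torch.randn(8, 80), torch.zeros(8, 80))
```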
2.4. Data Acquisition and Model Training
2.4.1. Image Acquisition
2.4.2. Image Annotation
- (1) If less than 20% of the leaf area is occluded, the leaf is labeled Upper Level.
- (2) If the occluded area accounts for roughly 20–60% of the whole leaf area, the leaf is labeled Middle Level.
- (3) If more than 60% of the leaf area is occluded but the characteristic shape of a cucumber leaf is still recognizable, the leaf is labeled Incomplete.
- (4) Targets with more than 60% of their area occluded and no recognizable cucumber-leaf shape are not annotated. (An illustrative helper that applies these rules is sketched after this list.)
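For illustration only, the small helper below restates the four labeling rules as code; the function name and interface are hypothetical.

```python
# Hypothetical helper that applies the annotation rules above; thresholds follow the text.
def annotation_level(occluded_fraction, has_leaf_shape=True):
    """Map the occluded fraction of a leaf (0-1) to its annotation category."""
    if occluded_fraction < 0.2:
        return "Upper Level"
    if occluded_fraction <= 0.6:
        return "Middle Level"
    # More than 60% occluded: keep only leaves whose cucumber-leaf shape is still visible.
    return "Incomplete" if has_leaf_shape else None   # None -> not annotated

print(annotation_level(0.35))   # Middle Level
```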
2.4.3. Image Augmentation
2.4.4. Model Training
2.5. Evaluation Indicators
3. Results
3.1. Results and Analysis of Glass Greenhouse Image Segmentation
3.2. Results and Analysis of Plastic Greenhouse Image Segmentation
3.3. Comparison and Analysis of Different Models
4. Discussion
4.1. Model Performance Evaluation
4.2. Further Application of Leaf Segmentation in Phenotypic Data Collection
5. Conclusions
Author Contributions
Funding
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
- Sudhesh, K.M.; Sowmya, V.; Kurian, S.; Sikha, O.K. AI based rice leaf disease identification enhanced by Dynamic Mode Decomposition. Eng. Appl. Artif. Intell. 2023, 120, 105836. [Google Scholar]
- Zhang, H.; Wang, L.; Jin, X.; Bian, L.; Ge, Y. High-throughput phenotyping of plant leaf morphological, physiological, and biochemical traits on multiple scales using optical sensing. Crop J. 2023, 11, 1303–1318. [Google Scholar] [CrossRef]
- Pieruschka, R.; Schurr, U. Plant phenotyping: Past, present, and future. Plant Phenomics 2019, 2019, 7507131. [Google Scholar] [CrossRef] [PubMed]
- Liu, X.; Hu, C.; Li, P. Automatic segmentation of overlapped poplar seedling leaves combining Mask R-CNN and DBSCAN. Comput. Electron. Agric. 2020, 178, 105753. [Google Scholar] [CrossRef]
- Agustini, E.P.; Gernowo, R.; Wibowo, A.; Warsito, B. Bibliometric Analysis: Research Trends in Leaf Image Segmentation and Classification. In Proceedings of the 2024 IEEE International Conference on Artificial Intelligence and Mechatronics Systems (AIMS), Virtual, 22–23 February 2024; pp. 1–5. [Google Scholar]
- Guerra Ibarra, J.P.; Cuevas de la Rosa, F.J.; Arellano Arzola, O. Segmentation of Leaves and Fruits of Tomato Plants by Color Dominance. AgriEngineering 2023, 5, 1846–1864. [Google Scholar] [CrossRef]
- Yang, R.; Wu, Z.; Fang, W.; Zhang, H.; Wang, W.; Fu, L.; Majeed, Y.; Li, R.; Cui, Y. Detection of abnormal hydroponic lettuce leaves based on image processing and machine learning. Inf. Process. Agric. 2023, 10, 1–10. [Google Scholar] [CrossRef]
- Abdul-Nasir, A.S.; Mashor, M.Y.; Mohamed, Z. Colour image segmentation approach for detection of malaria parasites using various colour models and k-means clustering. WSEAS Trans. Biol. Biomed. 2013, 10, 41–55. [Google Scholar]
- Zhang, X.; Li, M.; Liu, H. Overlap functions-based fuzzy mathematical morphological operators and their applications in image edge extraction. Fractal Fract. 2023, 7, 465. [Google Scholar] [CrossRef]
- Nikbakhsh, N.; Baleghi, Y.; Agahi, H. A novel approach for unsupervised image segmentation fusion of plant leaves based on G-mutual information. Mach. Vis. Appl. 2021, 32, 5. [Google Scholar] [CrossRef]
- Rauf, H.T.; Saleem, B.A.; Lali, M.I.; Khan, M.A.; Sharif, M.; Bukhari, S.A. A citrus fruits and leaves dataset for detection and classification of citrus diseases through machine learning. Data Brief 2019, 26, 104340. [Google Scholar] [CrossRef]
- Omrani, E.; Khoshnevisan, B.; Shamshirband, S.; Saboohi, H.; Anuar, N.B.; Nasir, M.H.N.M. Potential of radial basis function-based support vector regression for apple disease detection. Measurement 2014, 55, 512–519. [Google Scholar] [CrossRef]
- Hu, B.; Mao, H.; Zhang, Y. Weed image segmentation algorithm based on two-dimensional histogram. Trans. Chin. Soc. Agric. Mach. 2007, 38, 199–202. [Google Scholar]
- Williams, D.; Macfarlane, F.; Britten, A. Leaf only SAM: A segment anything pipeline for zero-shot automated leaf segmentation. Smart Agric. Technol. 2024, 8, 100515. [Google Scholar] [CrossRef]
- Fang, J.; Jiang, H.; Zhang, S.; Sun, L.; Hu, X.; Liu, J.; Gong, M.; Liu, H.; Fu, Y. BAF-Net: Bidirectional attention fusion network via CNN and transformers for the pepper leaf segmentation. Front. Plant Sci. 2023, 14, 1123410. [Google Scholar] [CrossRef]
- Ferro, M.V.; Sørensen, C.G.; Catania, P. Comparison of different computer vision methods for vineyard canopy detection using UAV multispectral images. Comput. Electron. Agric. 2024, 225, 109277. [Google Scholar] [CrossRef]
- Vayssade, J.A.; Jones, G.; Gée, C.; Paoli, J.N. Pixelwise instance segmentation of leaves in dense foliage. Comput. Electron. Agric. 2022, 195, 106797. [Google Scholar] [CrossRef]
- Xu, L.; Li, Y.; Sun, Y.; Song, L.; Jin, S. Leaf instance segmentation and counting based on deep object detection and segmentation networks. In Proceedings of the Joint 10th International Conference on Soft Computing and Intelligent Systems (SCIS) and 19th International Symposium on Advanced Intelligent Systems (ISIS), Toyama, Japan, 5–8 December 2018; IEEE: Piscataway, NJ, USA, 2018. [Google Scholar]
- Kuznichov, D.; Zvirin, A.; Honen, Y.; Kimmel, R. Data augmentation for leaf segmentation and counting tasks in rosette plants. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, Long Beach, CA, USA, 16–17 June 2019. [Google Scholar]
- Kumar, J.; Domnic, S. Rosette plant segmentation with leaf count using orthogonal transform and deep convolutional neural network. Mach. Vis. Appl. 2020, 31, 6. [Google Scholar]
- Jin, S.; Su, Y.; Gao, S.; Wu, F.; Ma, Q.; Xu, K.; Hu, T.; Liu, J.; Pang, S.; Guan, H.; et al. Separating the structural components of maize for field phenotyping using terrestrial LiDAR data and deep convolutional neural networks. IEEE Trans. Geosci. Remote Sens. 2019, 58, 2644–2658. [Google Scholar] [CrossRef]
- Lai, Y.; Lu, S.; Qian, T.; Chen, M.; Zhen, S.; Guo, L. Segmentation of Plant Point Cloud based on Deep Learning Method. Comput.-Aided Des. Appl. 2022, 42, 161–168. [Google Scholar] [CrossRef]
- Lu, S.L.; Song, Z.; Chen, W.K.; Qian, T.T.; Zhang, Y.Y.; Chen, M.; Li, G. Counting Dense Leaves under Natural Environments via an Improved Deep-Learning-Based Object Detection Algorithm. Agriculture 2021, 11, 1003. [Google Scholar] [CrossRef]
- Lou, L. Cost-Effective Accurate 3-D Reconstruction Based on Multi-View Images for Plant Phenotyping. Ph.D. Thesis, Aberystwyth University, Aberystwyth, UK, 2016. [Google Scholar]
- Huang, W.; Gong, H.; Zhang, H.; Wang, Y.; Wan, X.; Li, G.; Li, H.; Shen, H. BCNet: Bronchus Classification via Structure Guided Representation Learning. IEEE Trans. Med. Imaging 2024, 1. [Google Scholar] [CrossRef] [PubMed]
- Ke, L.; Tai, Y.W.; Tang, C.K. Deep occlusion-aware instance segmentation with overlapping bilayers. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Virtual, 19–25 June 2021; pp. 4019–4028. [Google Scholar]
- Tian, Z.; Shen, C.; Chen, H.; He, T. Fcos: Fully convolutional one-stage object detection. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Seoul, Republic of Korea, 27 October–2 November 2019. [Google Scholar]
- He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016. [Google Scholar]
- Lin, T.Y.; Dollár, P.; Girshick, R.; He, K.; Hariharan, B.; Belongie, S. Feature pyramid networks for object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017. [Google Scholar]
- He, K.; Gkioxari, G.; Dollár, P.; Girshick, R. Mask r-cnn. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017. [Google Scholar]
- Woo, S.; Park, J.; Lee, J.; Kweon, I. Cbam: Convolutional block attention module. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018. [Google Scholar]
- Zhang, H.; Wang, Y.; Dayoub, F.; Sünderhauf, N. Varifocalnet: An iou-aware dense object detector. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA, 20–25 June 2021. [Google Scholar]
- Ren, S.; He, K.; Girshick, R.; Sun, J. Faster r-cnn: Towards real-time object detection with region proposal networks. IEEE Trans. Pattern Anal. Mach. Intell. 2016, 39, 1137–1149. [Google Scholar] [CrossRef] [PubMed]
- Bochkovskiy, A.; Wang, C.Y.; Liao, H.Y.M. Yolov4: Optimal speed and accuracy of object detection. arXiv 2020, arXiv:2004.10934. [Google Scholar]
- Mehta, S.S.; Ton, C.; Asundi, S.; Burks, T.F. Multiple camera fruit localization using a particle filter. Comput. Electron. Agric. 2017, 142, 139–154. [Google Scholar] [CrossRef]
- Wang, K.; Liew, J.H.; Zou, Y.; Zhou, D.; Feng, J. Panet: Few-shot image semantic segmentation with prototype alignment. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Seoul, Republic of Korea, 27 October–2 November 2019. [Google Scholar]
- Jiang, B.; Zhang, J.; Hong, Y.; Luo, J.; Liu, L.; Bao, H. Bcnet: Learning body and cloth shape from a single image. In Proceedings of the Computer Vision–ECCV 2020: 16th European Conference, Glasgow, UK, 23–28 August 2020. [Google Scholar]
- Huang, X.Y.; He, R.J.; Dai, Y.C.; He, M.Y. Semantic Segmentation of Remote Sensing Images with Multi-scale Features and Attention Mechanism. In Proceedings of the 2023 IEEE 18th Conference on Industrial Electronics and Applications (ICIEA), Ningbo, China, 18–22 August 2023. [Google Scholar]
- Li, Z.; Lin, Y.; Fang, Z.; Li, S.; Li, X. AV-GAN: Attention-Based Varifocal Generative Adversarial Network for Uneven Medical Image Translation. arXiv 2024, arXiv:2404.10714. [Google Scholar]
- Ngugi, L.C.; Abdelwahab, M.; Abo-Zahhad, M. Tomato leaf segmentation algorithms for mobile phone applications using deep learning. Comput. Electron. Agric. 2020, 178, 105788. [Google Scholar] [CrossRef]
- Talasila, S.; Rawal, K.; Sethi, G. PLRSNet: A semantic segmentation network for segmenting plant leaf region under complex background. Int. J. Intell. Unmanned Syst. 2023, 11, 132–150. [Google Scholar] [CrossRef]
- Weyler, J.; Magistri, F.; Seitz, P.; Behley, J.; Stachniss, C. In-field phenotyping based on crop leaf and plant instance segmentation. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, Waikoloa, HI, USA, 3–8 January 2022. [Google Scholar]
- Amean, Z.M.; Low, T.; Hancock, N. Automatic leaf segmentation and overlapping leaf separation using stereo vision. Array 2021, 12, 100099. [Google Scholar] [CrossRef]
Picture Category | Images in Glass Greenhouse | Images in Plastic Greenhouse |
---|---|---|
Early Growth Stage, Sunny | 38 | 30 |
Early Growth Stage, Cloudy | 34 | 36 |
Middle Growth Stage, Sunny | 50 | -
Middle Growth Stage, Cloudy | 61 | -
Terminal Growth Stage, Sunny | 41 | 23 |
Terminal Growth Stage, Cloudy | 66 | 21 |
Total | 290 | 110 |
Project | Configuration |
---|---|
Operating System | Ubuntu 18.04 |
CPU | Intel(R) Xeon(R) Gold 6230 |
GPU | Tesla V100S-PCIE-32GB |
Video Memory | 32 GB
Memory | 72 GB
Programming Language | Python 3.7
Picture Category | ||||||
---|---|---|---|---|---|---|
Early Growth Stage, Sunny | 0.9900 | 0.9900 | 0.9630 | 0.8754 | 0.9304 | 0.9304 |
Early Growth Stage, Cloudy | 0.9874 | 0.9871 | 0.9627 | 0.8730 | 0.9307 | 0.9304 |
Middle Growth Stage, Sunny | 0.9857 | 0.9894 | 0.9795 | 0.8653 | 0.9295 | 0.9287
Middle Growth Stage, Cloudy | 0.9831 | 0.9872 | 0.9795 | 0.8579 | 0.9202 | 0.9198
Terminal Growth Stage, Sunny | 0.9370 | 0.9208 | 0.9107 | 0.8235 | 0.8908 | 0.8857 |
Terminal Growth Stage, Cloudy | 0.9266 | 0.9117 | 0.9028 | 0.8129 | 0.8907 | 0.8759 |
Average | 0.9657 | 0.9504 | 0.9396 | 0.8431 | 0.9723 | 0.9078 |
Picture Category | ||||||
---|---|---|---|---|---|---|
Early Growth Stage, Sunny | 0.9828 | 0.9810 | 0.9550 | 0.8675 | 0.9274 | 0.9170 |
Early Growth Stage, Cloudy | 0.9833 | 0.9804 | 0.9527 | 0.8628 | 0.9279 | 0.9185 |
Terminal Growth Stage, Sunny | 0.9616 | 0.9499 | 0.9398 | 0.8123 | 0.8793 | 0.8728 |
Terminal Growth Stage, Cloudy | 0.9608 | 0.9423 | 0.9418 | 0.8101 | 0.8760 | 0.8721 |
Average | 0.9686 | 0.9520 | 0.9423 | 0.8327 | 0.8710 | 0.8708 |
Model | Detection Precision | Detection Recall | Detection AP | Segmentation Precision | Segmentation Recall | Segmentation AP |
---|---|---|---|---|---|---|
Faster R-CNN | 0.84 | 0.81 | 0.8200 | - | - | - |
Mask R-CNN | 0.87 | 0.82 | 0.8467 | 0.68 | 0.64 | 0.6614 |
YOLOv4 | 0.93 | 0.88 | 0.9100 | - | - | - |
PANet | 0.88 | 0.86 | 0.8770 | 0.76 | 0.73 | 0.7467 |
BCNet | 0.96 | 0.92 | 0.9513 | 0.87 | 0.82 | 0.8389 |
Improved BCNet | 0.97 | 0.94 | 0.9657 | 0.87 | 0.83 | 0.8471 |
Model | FLOPs | Detection Time (s) |
---|---|---|
Faster R-CNN | 181 M | 3.52 |
Mask R-CNN | 286 M | 3.13 |
YOLOv4 | 90 M | 0.46 |
BCNet | 207 M | 3.20 |
Improved BCNet | 155 M | 1.72 |