Semantic Segmentation of Cucumber Leaf Disease Spots Based on ECA-SegFormer
Abstract
1. Introduction
- A new dataset, including four cucumber leaf diseases, is collected under natural conditions to demonstrate the effectiveness of the proposed method.
- The Efficient Channel Attention (ECA) module is added to the decoder of SegFormer. ECA can improve the representational power of the model by extracting attention from the channel dimensions of the feature map and focusing on the most salient components of the information.
- The FPN module is used in the decoder of SegFormer to represent the output of picture information by fusing features from different layers and using multi-scale feature maps for prediction.
- The segmentation results for different types of disease, different numbers of disease types, different clarity of spots, different levels of shading, and different levels of sparseness are visualized.
2. Materials and Methods
2.1. Dataset
2.1.1. Image Data Acquisition
2.1.2. Dataset Preprocessing
- Before training, the 1558 annotated diseased cucumber leaf images were split in a ratio of roughly 8:2: 1245 images were used for training and validation, and 313 were reserved for testing. During training, the 1245 images were further divided into training and validation sets at a ratio of 8:2. The training set was used to learn the model's weight parameters, the validation set was used to tune the model's structure while limiting its complexity, and the testing set was used to evaluate the final model.
- First, each image was resized to 2048 × 512 pixels with a size scaling-ratio range of 0.5–2.0, and blank areas were filled with black pixels. The image was then cropped to 512 × 512 pixels, with the crop constrained so that no single disease-spot category occupied more than 0.75 of the cropped region.
- To increase the diversity of the dataset and the robustness of the model, data augmentation was applied to both the training and validation sets. Each augmentation operation was applied to each image with a probability of 0.5; the operations and their parameter values are listed in Table 1.
- Before entering the model, images were normalized per channel using the means (123.675, 116.28, 103.53) and standard deviations (58.395, 57.12, 57.375) to speed up model convergence.
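The per-channel normalization described above can be sketched as follows (the means and standard deviations are the values quoted in the text, which are the common ImageNet RGB statistics; the function name is illustrative, not the authors' code):

```python
import numpy as np

# Per-channel statistics quoted in the text (note: 58.395, 57.12, 57.375
# are standard deviations, not variances).
MEAN = np.array([123.675, 116.28, 103.53], dtype=np.float32)
STD = np.array([58.395, 57.12, 57.375], dtype=np.float32)

def normalize(img: np.ndarray) -> np.ndarray:
    """Normalize an H x W x 3 uint8 RGB image channel-wise before inference."""
    return (img.astype(np.float32) - MEAN) / STD
```

Broadcasting over the trailing channel axis applies each mean/std pair to its colour channel.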
2.2. Semantic Segmentation Based on ECA-SegFormer
2.2.1. SegFormer
2.2.2. ECA-SegFormer Network Structure
2.2.3. Efficient Channel Attention Module
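The ECA module adds channel attention at negligible parameter cost: global average pooling produces a per-channel descriptor, a 1-D convolution (kernel size 3, consistent with the hyperparameter study below) models local cross-channel interaction without dimensionality reduction, and a sigmoid gate rescales the feature map. A minimal NumPy sketch following the standard ECA-Net formulation, not the authors' implementation:

```python
import numpy as np

def eca(x: np.ndarray, weights: np.ndarray) -> np.ndarray:
    """Efficient Channel Attention on a C x H x W feature map.

    1. Global average pooling -> per-channel descriptor z (length C).
    2. 1-D convolution of kernel size 3 over z: local cross-channel
       interaction with no dimensionality reduction (edge padding is
       used here for simplicity).
    3. Sigmoid -> channel weights that rescale the input map.
    """
    c, _, _ = x.shape
    z = x.mean(axis=(1, 2))                        # (C,) channel descriptor
    zp = np.pad(z, 1, mode="edge")                 # same-length conv, k = 3
    conv = np.array([zp[i:i + 3] @ weights for i in range(c)])
    attn = 1.0 / (1.0 + np.exp(-conv))             # sigmoid gate per channel
    return x * attn[:, None, None]                 # rescale the feature map
```

Because the 1-D kernel is shared across channels, the module adds only k = 3 learnable weights per attention block.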
2.2.4. Feature Pyramid Networks Module
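The FPN decoder fuses features across scales through a top-down pathway: each encoder level is projected by a 1 × 1 lateral convolution, and the next coarser level is upsampled and added, so every output level carries both fine detail and high-level semantics. A simplified sketch (shared lateral weights and nearest-neighbour upsampling; illustrative, not the authors' implementation):

```python
import numpy as np

def fpn_topdown(features, lateral_w):
    """Top-down FPN fusion over a list of C x H x W maps (fine -> coarse).

    lateral_w (C_out x C_in) plays the role of a 1x1 lateral convolution,
    i.e. a per-pixel channel mix applied to every level.
    """
    def lateral(f):                       # 1x1 conv == per-pixel channel mix
        return np.einsum("oc,chw->ohw", lateral_w, f)

    def upsample2x(f):                    # nearest-neighbour 2x upsampling
        return f.repeat(2, axis=1).repeat(2, axis=2)

    outs = [lateral(features[-1])]        # start from the coarsest level
    for f in reversed(features[:-1]):     # walk back toward the finest level
        outs.append(lateral(f) + upsample2x(outs[-1]))
    return outs[::-1]                     # return in fine -> coarse order
```

Each fused map can then feed a prediction head, which is how multi-scale feature maps are used for prediction.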
3. Experiments and Results
3.1. Implementation Details and Evaluation Metrics
3.1.1. Implementation Details
3.1.2. Evaluation Metrics
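MIoU and MPA are the headline metrics in the result tables; assuming the standard definitions (per-class intersection-over-union and per-class pixel accuracy, each averaged over classes), they can be computed from a confusion matrix as follows:

```python
import numpy as np

def miou_mpa(pred, gt, num_classes):
    """Mean IoU and mean pixel accuracy from flat integer label arrays.

    Per class c: IoU_c = TP_c / (TP_c + FP_c + FN_c),
                 PA_c  = TP_c / (TP_c + FN_c),
    averaged over classes. This is the common formulation; the paper's
    exact metric definitions are assumed to follow it.
    """
    # Confusion matrix: rows = ground truth, columns = prediction.
    cm = np.bincount(num_classes * gt + pred,
                     minlength=num_classes ** 2).reshape(num_classes, -1)
    tp = np.diag(cm).astype(np.float64)
    fn = cm.sum(axis=1) - tp                  # missed pixels of each class
    fp = cm.sum(axis=0) - tp                  # false alarms for each class
    iou = tp / np.maximum(tp + fp + fn, 1)    # guard against empty classes
    pa = tp / np.maximum(tp + fn, 1)
    return iou.mean(), pa.mean()
```

The per-class pixel accuracies (DPA, PPA, TPA, APA in the tables) are the individual `pa` entries before averaging.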
3.2. Experimental Results and Analysis
3.2.1. Comparison of Different Pyramid Modules
3.2.2. Comparison of Different Attention Modules
3.2.3. Comparison of Different Positions of ECA
3.2.4. Comparison of Different Stages of Using FPN
3.2.5. Comparison of ECA-SegFormer Using Different Hyperparameters
3.2.6. Comparison with Other Segmentation Models
3.2.7. Visualization of Segmentation for Different Scenarios
- ECA-SegFormer improves disease-spot segmentation on cucumber leaves for all four disease types in every scenario.
- For cucumber leaves with different numbers of disease types, the ECA-SegFormer can correctly segment and identify disease spots.
- In particular, for the dense scenario (the last row in Figure 7c), SegFormer cannot accurately segment cucumber leaf disease spots because of their large number and dense adhesion. ECA-SegFormer segments densely connected disease spots with higher accuracy and robustness, since it focuses on the most salient components of the information and fuses multi-scale features.
- Furthermore, Figure 7 shows that while both models segment spots at the same locations, ECA-SegFormer is more accurate, and its segmented spots overlap more with the actual spots.
- ECA-SegFormer correctly segments and identifies some disease spots in the original image that are not manually labeled, demonstrating that ECA-SegFormer can reduce the subjective errors caused by manual labeling.
4. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
Operation | Value |
---|---|
flip | horizontal flip |
brightness | [−32, 32] |
contrast | [0.5, 1.5] |
saturation | [0.5, 1.5] |
hue | [−18, 18] |
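A minimal sketch of how the Table 1 operations might be applied, each with probability 0.5 as described in Section 2.1.2. This is an illustrative pipeline, not the authors' implementation: the saturation and hue adjustments are omitted (they require an HSV conversion), and the contrast factor is assumed to be a multiplier in [0.5, 1.5].

```python
import random
import numpy as np

def augment(img: np.ndarray, rng: random.Random) -> np.ndarray:
    """Apply each augmentation independently with probability 0.5.

    Operates on an H x W x 3 RGB image with values in [0, 255];
    value ranges follow Table 1.
    """
    img = img.astype(np.float32)
    if rng.random() < 0.5:                     # horizontal flip
        img = img[:, ::-1, :]
    if rng.random() < 0.5:                     # brightness: additive delta
        img = img + rng.uniform(-32, 32)
    if rng.random() < 0.5:                     # contrast: multiplicative factor
        img = img * rng.uniform(0.5, 1.5)
    # Saturation and hue adjustments omitted for brevity (HSV-space ops).
    return np.clip(img, 0, 255)
```

When a flip is applied to an image, the same flip must also be applied to its segmentation mask so labels stay aligned.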
Parameter | Value |
---|---|
Channel number | [32, 64, 160, 256] |
Num layer | [2, 2, 2, 2] |
Num head | [1, 2, 5, 8] |
Patch size | [7, 3, 3, 3] |
Stride | [4, 2, 2, 2] |
Sr ratio | [8, 4, 2, 1] |
Expansion ratio | [8, 8, 4, 4] |
Environment | Item | Value |
---|---|---|
Hardware environment | CPU | i5-9300H |
 | GPU | NVIDIA GeForce GTX 1650 |
 | Video memory | 4 GB |
Software environment | OS | Windows 10 |
 | Code base | mmsegmentation |
 | Python | 3.8 |
 | PyTorch | 1.8.1 |
 | CUDA | 10.2 |
Item | Value |
---|---|
Optimizer | AdamW |
Initial learning rate | 0.00006 |
Minimum learning rate | 0.0 |
Weight decay | 0.01 |
Beta1 | 0.9 |
Beta2 | 0.999 |
Learning strategy | poly |
Dropout ratio | 0.1 |
Kernel | 3 |
Module | MIoU | MPA | DPA | PPA | TPA | APA |
---|---|---|---|---|---|---|
NONE | 36.56 | 46.31 | 68.72 | 21.28 | 47.95 | 47.32 |
SPP | 23.52 | 25.99 | 56.08 | 17.70 | 10.52 | 19.67 |
FPN | 35.75 | 48.98 | 73.54 | 27.07 | 33.71 | 61.63 |
Attention | MIoU | MPA | DPA | PPA | TPA | APA |
---|---|---|---|---|---|---|
NONE | 36.56 | 46.31 | 68.72 | 21.28 | 47.95 | 47.32 |
CBAM | 35.97 | 46.19 | 75.63 | 30.12 | 38.16 | 40.87 |
CoT | 35.03 | 44.92 | 72.32 | 23.57 | 29.41 | 54.40 |
ECA | 38.03 | 60.86 | 79.08 | 36.77 | 69.75 | 57.84 |
ParNet | 29.81 | 36.61 | 71.31 | 26.20 | 14.99 | 33.97 |
SE | 37.82 | 57.84 | 75.79 | 39.69 | 55.42 | 60.48 |
PSA | 35.31 | 49.44 | 65.10 | 33.50 | 67.75 | 31.44 |
SGE | 36.73 | 48.36 | 75.02 | 30.49 | 43.94 | 44.01 |
SA | 36.37 | 50.37 | 77.19 | 29.34 | 43.38 | 51.57 |
SimAM | 37.50 | 55.64 | 79.57 | 30.19 | 61.83 | 50.97 |
SK | 34.64 | 41.66 | 69.03 | 23.03 | 29.87 | 44.74 |
TripA | 37.55 | 51.64 | 72.09 | 29.32 | 56.01 | 49.14 |
Position | MIoU | MPA | DPA | PPA | TPA | APA |
---|---|---|---|---|---|---|
NONE | 36.56 | 46.31 | 68.72 | 21.28 | 47.95 | 47.32 |
a | 38.03 | 60.86 | 79.08 | 36.77 | 69.75 | 57.84 |
b | 35.42 | 48.08 | 76.88 | 28.76 | 48.67 | 38.03 |
c | 36.87 | 52.73 | 76.43 | 30.64 | 56.98 | 46.89 |
Stage | MIoU | MPA | DPA | PPA | TPA | APA |
---|---|---|---|---|---|---|
NONE | 36.56 | 46.31 | 68.72 | 21.28 | 47.95 | 47.32 |
3-4 | 36.15 | 44.49 | 64.86 | 25.09 | 47.60 | 40.42 |
2-3 | 35.63 | 44.15 | 68.29 | 24.36 | 46.35 | 37.60 |
1-2 | 37.35 | 47.16 | 69.90 | 27.49 | 44.99 | 46.29 |
3-4+2-3 | 36.24 | 53.61 | 73.46 | 28.08 | 51.98 | 60.95 |
2-3+1-2 | 36.13 | 48.75 | 67.36 | 24.00 | 64.60 | 39.05 |
3-4+1-2 | 28.24 | 34.99 | 68.20 | 22.70 | 14.98 | 34.11 |
1-2+2-3+3-4 | 38.03 | 60.86 | 79.08 | 36.77 | 69.75 | 57.84 |
Item | Value | mIoU | MPA |
---|---|---|---|
Initial learning rate | 0.00003 | 35.80 | 48.73 |
 | 0.00006 | 38.03 | 60.86 |
 | 0.00009 | 36.97 | 52.47 |
 | 0.000006 | 20.16 | 25.58 |
 | 0.0006 | 17.52 | 20.40 |
Dropout ratio | 0.1 | 38.03 | 60.86 |
 | 0.3 | 37.43 | 55.39 |
 | 0.5 | 35.57 | 48.87 |
 | 0.7 | 34.21 | 48.70 |
Kernel | 1 | 35.96 | 47.63 |
 | 3 | 38.03 | 60.86 |
 | 5 | 36.10 | 46.84 |
 | 7 | 36.26 | 53.07 |
Model | Backbone | mIoU (%) | MPA (%) | Params (M) | FLOPs (G) |
---|---|---|---|---|---|
DeepLabV3+ | Xception | 32.10 | 51.25 | 54.71 | 166.87 |
U-Net | VGG16 | 37.50 | 46.04 | 24.89 | 451.77 |
PSPNet | ResNet50 | 28.51 | 42.03 | 46.71 | 118.43 |
HRNet | - | 31.02 | 53.75 | 29.54 | 79.96 |
SETR | - | 21.40 | 23.78 | 96.99 | 123.41 |
SegFormer | - | 36.56 | 46.31 | 1.22 | 3.72 |
ECA-SegFormer | - | 38.03 | 60.86 | 4.04 | 10.64 |
© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Yang, R.; Guo, Y.; Hu, Z.; Gao, R.; Yang, H. Semantic Segmentation of Cucumber Leaf Disease Spots Based on ECA-SegFormer. Agriculture 2023, 13, 1513. https://doi.org/10.3390/agriculture13081513