Critical Information Mining Network: Identifying Crop Diseases in Noisy Environments
Abstract
1. Introduction
- In this study, we propose a model called CIMNet, which can accurately identify crop diseases in noisy environments.
- We propose the MSCM, which fuses shallow key features with deep features, helping the model focus on multi-scale key features and reducing noise interference in image recognition.
- We conducted experiments on two common plant disease datasets under both noisy and stable conditions and compared CIMNet with existing methods. The results show that CIMNet effectively handles the problems caused by noise.
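The comparative tables later in the paper report accuracy at 10%, 20%, and 30% noise, which suggests a controlled pixel-corruption ratio. As an illustration only (this outline does not state which noise model the authors used), salt-and-pepper corruption at a given ratio can be generated like this:

```python
import numpy as np

def salt_and_pepper(img: np.ndarray, ratio: float, seed: int = 0) -> np.ndarray:
    """Corrupt a fraction `ratio` of pixels with salt (255) or pepper (0) noise."""
    rng = np.random.default_rng(seed)
    noisy = img.copy()
    mask = rng.random(img.shape[:2]) < ratio   # which pixels to corrupt
    salt = rng.random(img.shape[:2]) < 0.5     # split corrupted pixels half/half
    noisy[mask & salt] = 255                   # salt
    noisy[mask & ~salt] = 0                    # pepper
    return noisy

# toy 8x8 grayscale image at mid-gray, corrupted at the 30% level
img = np.full((8, 8), 128, dtype=np.uint8)
noisy = salt_and_pepper(img, ratio=0.3)
```

The function name and the salt/pepper split are hypothetical simplifications; real evaluations might instead use Gaussian noise or compression artifacts.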
2. Related Work
3. Materials and Methods
3.1. Datasets
3.1.1. Potato Datasets
3.1.2. Tomato Datasets
3.2. Image Preprocessing
3.2.1. Image Resizing
3.2.2. Data Augmentation
3.3. Method
3.3.1. Non-Local Attention Module (Non-Local)
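The non-local operation (Wang et al., 2018, cited in the references) lets every spatial position aggregate information from all other positions, weighted by pairwise similarity. A minimal NumPy sketch of the idea, omitting the learned 1×1-convolution embeddings of the full block:

```python
import numpy as np

def non_local(x: np.ndarray) -> np.ndarray:
    """Simplified non-local operation: each position attends to every other
    position via a softmax over pairwise dot-product similarity.
    Learned embeddings (theta/phi/g 1x1 convs) are omitted for clarity."""
    c, h, w = x.shape
    flat = x.reshape(c, h * w)                      # (C, N) with N = H*W
    sim = flat.T @ flat                             # (N, N) pairwise similarity
    sim = sim - sim.max(axis=1, keepdims=True)      # numerical stability
    attn = np.exp(sim) / np.exp(sim).sum(axis=1, keepdims=True)
    out = (attn @ flat.T).T.reshape(c, h, w)        # aggregate features globally
    return x + out                                  # residual connection

x = np.random.default_rng(0).standard_normal((4, 7, 7))
y = non_local(x)
```

This is a sketch of the published operation, not CIMNet's exact module; the paper's version also includes learnable projections and is inserted at two stages of the backbone (see the architecture table in Section 4).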
3.3.2. Multi-Scale Critical Information Fusion Module (MSCM)
- The Key Information Extraction Module (KIB) mines the effective parts of shallow features, improving the model's recognition ability in complex scenes.
- Shallow and deep information are then integrated to enrich the feature representation of the deep network.
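As a conceptual sketch only (not the authors' implementation), the fusion described above — bring shallow features down to the deep resolution, keep only their salient parts, and merge them into the deep features — could look like this; the function name and the thresholding gate are hypothetical simplifications:

```python
import numpy as np

def fuse_shallow_deep(shallow: np.ndarray, deep: np.ndarray) -> np.ndarray:
    """Conceptual MSCM-style fusion (hypothetical): average-pool the shallow
    map to the deep spatial size, gate it by a crude saliency mask, and add."""
    c, h, w = deep.shape
    sc, sh, sw = shallow.shape
    k = sh // h
    # average-pool shallow features down to the deep resolution
    pooled = shallow.reshape(sc, h, k, w, k).mean(axis=(2, 4))
    # crude "key information" gate: keep only above-average activations
    gate = (pooled > pooled.mean()).astype(pooled.dtype)
    # channel counts are assumed equal here for simplicity
    return deep + gate * pooled

shallow = np.arange(2 * 8 * 8, dtype=float).reshape(2, 8, 8)  # toy shallow map
deep = np.zeros((2, 4, 4))                                    # toy deep map
fused = fuse_shallow_deep(shallow, deep)
```

The real KIB presumably learns its gating rather than thresholding at the mean; this sketch only illustrates the data flow.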
3.3.3. Loss Function
4. Experimental Results and Discussion
4.1. Experimental Environment
4.2. Experimental Settings
Model Performance Evaluation
4.3. Comparative Experiments
4.3.1. Noise Environment Experiment
4.3.2. Stable Environment Experiment
4.4. Ablation Experiment
4.5. Model Performance Analysis
5. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Data Availability Statement
Conflicts of Interest
References
| Class | Name | Number |
|---|---|---|
| **Potato disease leaf dataset** | | |
| 00 | Early Blight | 1628 |
| 01 | Late Blight | 1414 |
| 02 | Healthy | 1020 |
| **Tomato subset of Plant Village** | | |
| 00 | Bacterial Spot | 391 |
| 01 | Early Blight | 392 |
| 02 | Late Blight | 423 |
| 03 | Leaf Mold | 403 |
| 04 | Septoria Leaf Spot | 377 |
| 05 | Spider Mites | 415 |
| 06 | Target Spot | 413 |
| 07 | Yellow Leaf Curl Virus | 393 |
| 08 | Mosaic Virus | 393 |
| 09 | Healthy | 421 |
| Network Layer | Kernel Size | Number of Steps | Output Dimension |
|---|---|---|---|
| Input | - | - | (b, 3, 224, 224) |
| Conv2d | 7 × 7 | 1 | (b, 64, 112, 112) |
| Max pool | 3 × 3 | 1 | (b, 64, 56, 56) |
| Conv2d | 3 × 3 | 2 | (b, 64, 56, 56) |
| Conv2d | 3 × 3 | 2 | (b, 128, 28, 28) |
| Non-Local | - | 1 | (b, 128, 28, 28) |
| Conv2d | 3 × 3 | 2 | (b, 256, 14, 14) |
| Non-Local | - | 1 | (b, 256, 14, 14) |
| KIB | - | 1 | (b, 256, 14, 14) |
| KIB | - | 1 | (b, 256, 14, 14) |
| Conv2d | 3 × 3 | 2 | (b, 512, 7, 7) |
| Average Pool | 7 × 7 | 1 | (b, 512, 1, 1) |
| FC | - | - | - |
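The spatial sizes in the table follow the standard convolution/pooling output-size formula, out = ⌊(in + 2p − k)/s⌋ + 1. The strides and paddings below are the usual ResNet-stem values, assumed here because the table does not list them:

```python
def conv_out(size: int, kernel: int, stride: int, padding: int) -> int:
    """Standard convolution/pooling output-size formula."""
    return (size + 2 * padding - kernel) // stride + 1

# ResNet-style stem (stride and padding values assumed, not given in the table)
s = conv_out(224, kernel=7, stride=2, padding=3)   # 7x7 conv: 224 -> 112
s = conv_out(s, kernel=3, stride=2, padding=1)     # 3x3 max pool: 112 -> 56
```

The same formula reproduces each subsequent halving (56 → 28 → 14 → 7) with a 3×3 kernel, stride 2, padding 1.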
| Parameter | Value | Description |
|---|---|---|
| Epochs | 100 | Number of complete passes through the training set |
| Batch size | 32 | Number of images per batch |
| Optimizer | SGD | Optimizer used to minimize the loss function |
| Scheduler | MultiStepLR | Adjusts the optimizer's learning rate during training |
| Learning rate | 0.0001 | Initial learning rate of the optimizer |
| Loss function | CrossEntropyLoss | Measures the difference between predicted and true labels |
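MultiStepLR multiplies the learning rate by a decay factor gamma each time a milestone epoch is passed. The milestones and gamma below are illustrative, since the table only specifies the initial rate:

```python
def multistep_lr(base_lr: float, epoch: int, milestones, gamma: float = 0.1) -> float:
    """MultiStepLR rule: multiply the base rate by `gamma` once for every
    milestone epoch that has already been reached."""
    passed = sum(1 for m in milestones if epoch >= m)
    return base_lr * (gamma ** passed)

# milestones [50, 80] and gamma 0.1 are assumed values for illustration
lrs = [multistep_lr(1e-4, e, milestones=[50, 80]) for e in (0, 50, 80)]
```

With these assumed settings the rate steps from 1e-4 to 1e-5 at epoch 50 and to 1e-6 at epoch 80, matching PyTorch's `torch.optim.lr_scheduler.MultiStepLR` behavior.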
| Models | Potato Top1-Acc (10%) | Potato Top1-Acc (20%) | Potato Top1-Acc (30%) | Tomato Top1-Acc (10%) | Tomato Top1-Acc (20%) | Tomato Top1-Acc (30%) |
|---|---|---|---|---|---|---|
| **CNNs** | | | | | | |
| AlexNet | 0.904 | 0.835 | 0.775 | 0.695 | 0.534 | 0.380 |
| SqueezeNet1_0 | 0.667 | 0.491 | 0.430 | 0.411 | 0.231 | 0.138 |
| SqueezeNet1_1 | 0.506 | 0.494 | 0.491 | 0.298 | 0.211 | 0.167 |
| MobileNetV2 | 0.383 | 0.257 | 0.252 | 0.254 | 0.157 | 0.118 |
| MobileNetV3_small | 0.412 | 0.400 | 0.395 | 0.331 | 0.229 | 0.175 |
| **Transformer-based models** | | | | | | |
| PiT-s | 0.889 | 0.844 | 0.793 | 0.743 | 0.545 | 0.372 |
| PiT-b | 0.886 | 0.857 | 0.812 | 0.610 | 0.398 | 0.312 |
| LeViT-128 | 0.640 | 0.447 | 0.435 | 0.223 | 0.160 | 0.145 |
| LeViT-256 | 0.432 | 0.407 | 0.400 | 0.374 | 0.244 | 0.131 |
| ViT | 0.768 | 0.647 | 0.509 | 0.635 | 0.441 | 0.325 |
| CF-ViT ( = 1.0) | 0.874 | 0.811 | 0.761 | 0.687 | 0.496 | 0.330 |
| TNT | 0.894 | 0.830 | 0.778 | 0.677 | 0.511 | 0.365 |
| CoAtNet | 0.862 | 0.721 | 0.578 | 0.745 | 0.559 | 0.394 |
| Swin Transformer-small | 0.810 | 0.588 | 0.477 | 0.534 | 0.417 | 0.355 |
| Swin Transformer-base | 0.869 | 0.768 | 0.654 | 0.622 | 0.405 | 0.253 |
| **CNN + Transformer** | | | | | | |
| CvT | 0.796 | 0.631 | 0.519 | 0.633 | 0.431 | 0.249 |
| Mobile-Former | 0.895 | 0.831 | 0.798 | 0.712 | 0.511 | 0.381 |
| **Our model** | | | | | | |
| CIMNet | 0.965 | 0.923 | 0.866 | 0.754 | 0.643 | 0.436 |
| Models | Complexity (GMac) | Parameters (M) | Potato Top1-Acc | Potato F1 | Tomato Top1-Acc | Tomato F1 |
|---|---|---|---|---|---|---|
| **CNNs** | | | | | | |
| AlexNet | 0.71 | 57.16 | 0.958 | 0.957 | 0.939 | 0.938 |
| SqueezeNet1_0 | 1.47 | 1.25 | 0.965 | 0.966 | 0.930 | 0.930 |
| SqueezeNet1_1 | 0.27 | 1.24 | 0.973 | 0.972 | 0.939 | 0.939 |
| MobileNetV2 | 0.32 | 3.4 | 0.978 | 0.977 | 0.971 | 0.971 |
| MobileNetV3_small | 0.16 | 1.77 | 0.973 | 0.972 | 0.977 | 0.977 |
| **Transformer-based models** | | | | | | |
| PiT-s | 2.42 | 23.46 | 0.916 | 0.915 | 0.891 | 0.891 |
| PiT-b | 10.54 | 73.76 | 0.894 | 0.894 | 0.754 | 0.753 |
| LeViT-128 | 0.37 | 8.46 | 0.911 | 0.910 | 0.978 | 0.978 |
| LeViT-256 | 1.05 | 17.89 | 0.926 | 0.925 | 0.970 | 0.970 |
| ViT | 1.36 | 1.36 | 0.852 | 0.851 | 0.766 | 0.766 |
| CF-ViT ( = 1.0) | 4.00 | 29.8 | 0.971 | 0.971 | 0.914 | 0.914 |
| TNT | 13.4 | 65.24 | 0.926 | 0.925 | 0.758 | 0.758 |
| CoAtNet | 11.51 | 10.5 | 0.970 | 0.969 | 0.916 | 0.916 |
| Swin Transformer-small | 8.51 | 48.75 | 0.924 | 0.921 | 0.795 | 0.795 |
| Swin Transformer-base | 15.13 | 86.62 | 0.867 | 0.869 | 0.850 | 0.850 |
| **CNN + Transformer** | | | | | | |
| CvT | 4.53 | 19.98 | 0.932 | 0.931 | 0.855 | 0.855 |
| Mobile-Former | 0.34 | 11.42 | 0.974 | 0.974 | 0.969 | 0.969 |
| **Our model** | | | | | | |
| CIMNet | 2.18 | 12.64 | 0.986 | 0.986 | 0.980 | 0.980 |
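Top-1 accuracy and F1 score, the metrics reported above, can be computed as follows. Macro averaging over classes is assumed here, since the outline does not state the averaging mode used:

```python
def top1_accuracy(preds, labels):
    """Fraction of samples whose top prediction matches the label."""
    return sum(p == y for p, y in zip(preds, labels)) / len(labels)

def macro_f1(preds, labels):
    """Macro-averaged F1: compute per-class F1, then average over classes."""
    classes = sorted(set(labels) | set(preds))
    f1s = []
    for c in classes:
        tp = sum(p == c and y == c for p, y in zip(preds, labels))
        fp = sum(p == c and y != c for p, y in zip(preds, labels))
        fn = sum(p != c and y == c for p, y in zip(preds, labels))
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1s.append(2 * prec * rec / (prec + rec) if prec + rec else 0.0)
    return sum(f1s) / len(f1s)

# toy 3-class example
preds  = [0, 0, 1, 1, 2, 2]
labels = [0, 1, 1, 1, 2, 0]
```

On this toy example, accuracy is 4/6 while macro F1 weights each class equally regardless of its sample count, which is why the two metrics can diverge on imbalanced datasets.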
| Model | Potato (Noisy) | Tomato (Noisy) | Potato (Stable) | Tomato (Stable) |
|---|---|---|---|---|
| CIMNet | 0.923 | 0.643 | 0.986 | 0.980 |
| w/o Non-Local | 0.893 | 0.481 | 0.977 | 0.973 |
| w/o MSCM | 0.835 | 0.358 | 0.981 | 0.977 |
| w/o Non-Local + MSCM | 0.793 | 0.275 | 0.975 | 0.972 |
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Shao, Y.; Yang, W.; Lu, Z.; Geng, H.; Chen, D. Critical Information Mining Network: Identifying Crop Diseases in Noisy Environments. Symmetry 2024, 16, 652. https://doi.org/10.3390/sym16060652