Harnessing the Power of Transfer Learning in Sunflower Disease Detection: A Comparative Study
Abstract
1. Introduction
- Improved Disease Detection: The study highlights the use of deep learning models, such as AlexNet, VGG16, InceptionV3, MobileNetV3, and EfficientNet, for the classification of sunflower diseases. These models have demonstrated high precision, recall, F1-score, and accuracy in detecting and classifying various sunflower diseases. This contribution improves the early detection of diseases, allowing farmers to implement timely management strategies and minimize crop yield and quality losses.
- Comparative Analysis of Deep Learning Models: The study provides a comparative analysis of different deep learning models, allowing researchers and practitioners to assess their performance in sunflower disease classification. By evaluating the precision, recall, F1-score, and accuracy of each model, the study offers valuable insights into their effectiveness and helps in selecting the most suitable model for sunflower disease detection tasks.
- Potential Benefits for Farmers and Agronomists: The study’s results emphasize the potential of deep learning models in early disease detection and classification, offering significant benefits to farmers and agronomists. By utilizing these models, farmers can quickly identify and categorize sunflower diseases, enabling them to implement timely and appropriate disease management strategies. This contribution has practical implications in enhancing crop productivity and minimizing economic losses in sunflower farming.
- General Applicability to Other Crops: While this study specifically focuses on sunflower disease detection, the findings have broader implications for other crops as well. Deep learning models trained on image data can be adapted and applied to different plant species, aiding in disease identification and classification across various agricultural contexts. The study’s comparative analysis provides valuable insights that can be utilized in similar studies on different crops, benefiting farmers and researchers in multiple agricultural domains.
- Reduced Training Epochs: The study suggests that models like MobileNetV3 and EfficientNetB3 offer high performance while requiring relatively fewer training epochs. This finding is beneficial in terms of computational efficiency and time-saving during the training process. By reducing the required training time, farmers and researchers can expedite the development and deployment of disease detection models, facilitating faster decision-making and implementation of appropriate management strategies.
2. Material and Methods
2.1. Dataset
2.2. State-of-the-Art Models
2.2.1. AlexNet
2.2.2. VGG16
2.2.3. InceptionV3
2.2.4. MobileNetV3
2.2.5. EfficientNet
2.3. Model Tuning
2.4. K-Fold Validation
1. Shuffle the entire dataset and split it into training and test sets at a ratio of 80:20.
2. Split the training set into 4 subsets and create a loop to train the model 4 times.
3. In the first loop, the first subset is used for validation and the remaining 3 subsets are used for training. Train and test the model.
4. In the second loop, the second subset is used for validation and the remaining 3 subsets are used for training. Train and test the model.
5. In the third loop, the third subset is used for validation and the remaining 3 subsets are used for training. Train and test the model.
6. In the last loop, the last subset is used for validation and the first 3 subsets are used for training. Train and test the model.
7. Summarize the overall performance of all models across all trained K folds.
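The procedure above can be sketched in a few lines of Python. This is a minimal illustration using NumPy index splitting: the indices stand in for the 1892 images in the dataset, and the actual model training and evaluation calls are placeholders, since the study's training code is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(42)

# Placeholder indices standing in for the 1892 images in the dataset.
n_images = 1892
indices = rng.permutation(n_images)          # step 1: shuffle

split = int(n_images * 0.80)                 # step 1: 80:20 split
train_idx, test_idx = indices[:split], indices[split:]

folds = np.array_split(train_idx, 4)         # step 2: 4 subsets

fold_val_sizes = []
for k in range(4):                           # steps 3-6: rotate the validation fold
    val_fold = folds[k]
    train_folds = np.concatenate([folds[i] for i in range(4) if i != k])
    # Model training on train_folds and evaluation on val_fold would
    # happen here; this sketch only records the fold sizes.
    fold_val_sizes.append(len(val_fold))

# Step 7: summarize across folds (here: fold count and total validation size).
print(len(fold_val_sizes), sum(fold_val_sizes), len(test_idx))
```

Every training-set index serves exactly once as validation data, which is what makes the averaged accuracy in the K-fold results table a less optimistic estimate than a single hold-out split.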
2.5. Experimental Environment Settings and Performance Evaluation Metrics
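As a rough illustration of the per-class metrics reported in the Results tables (precision, recall, and F1-score), the following self-contained sketch computes them from toy labels. The class names match the dataset, but the labels themselves are invented for the example.

```python
def per_class_metrics(y_true, y_pred, label):
    """Precision, recall, and F1 for one class in a multiclass problem."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == p == label)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != label and p == label)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == label and p != label)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# Toy labels using two of the four dataset classes.
y_true = ["Downy Mildew"] * 3 + ["Fresh Leaf"] * 3
y_pred = ["Downy Mildew", "Downy Mildew", "Fresh Leaf",
          "Fresh Leaf", "Fresh Leaf", "Downy Mildew"]

p, r, f1 = per_class_metrics(y_true, y_pred, "Downy Mildew")
print(round(p, 3), round(r, 3), round(f1, 3))
```

Accuracy, the fourth reported metric, is simply the fraction of all predictions that are correct across classes.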
3. Results
4. Discussion and Conclusions
- Compound Scaling: EfficientNetB3 includes a technique known as compound scaling, which systematically grows the model’s depth, width, and resolution all at once. This method enables EfficientNetB3 to perform better across a wide variety of computational resources and dataset sizes. EfficientNetB3 achieves improved accuracy while keeping efficiency by scaling the model in a balanced manner.
- Depth and Width: When compared to MobileNetV3 and InceptionV3, EfficientNetB3 is deeper and wider. Because of the enhanced depth, it can capture more complicated features and hierarchies in the data. The increased width, which refers to the number of channels in each layer, improves the model’s representational capacity and allows for more fine-grained detail to be captured.
- Resolution: Compared to AlexNet, MobileNetV3, InceptionV3, and VGG16, EfficientNetB3 accepts higher-resolution input images. The higher the input resolution, the more visual information the model can learn from, which is particularly useful when working with detailed or high-quality images.
- Efficient Architecture: Despite its depth and width, EfficientNetB3 retains computational efficiency. It accomplishes this by employing efficient network design ideas such as inverted residual blocks and squeeze-and-excitation modules. Compared to VGG16 and InceptionV3, these design choices lower the number of parameters and operations, resulting in faster training and inference times.
- State-of-the-Art Performance: EfficientNetB3 has consistently achieved top performance in a variety of computer vision tasks, including image classification and object recognition, on many benchmark datasets, including ImageNet. Its balanced scaling and efficient architecture contribute to this high performance, making it a suitable solution for many real-world applications.
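The compound-scaling idea in the first bullet can be made concrete with the base coefficients reported by Tan and Le (α = 1.2, β = 1.1, γ = 1.15). Note that mapping φ = 3 to the B3 variant is a simplification for illustration; the released B-variants do not follow the formula exactly.

```python
# Compound scaling (Tan & Le, 2019): depth, width, and input resolution
# grow together through a single coefficient phi.
ALPHA, BETA, GAMMA = 1.2, 1.1, 1.15  # grid-searched base coefficients

def compound_scale(phi):
    """Return (depth, width, resolution) multipliers for a given phi."""
    return ALPHA ** phi, BETA ** phi, GAMMA ** phi

# Each unit increase of phi roughly doubles FLOPS, because the
# coefficients were chosen so that alpha * beta^2 * gamma^2 ≈ 2.
flops_factor = ALPHA * BETA**2 * GAMMA**2

d, w, r = compound_scale(3)  # phi = 3, loosely corresponding to B3
print(round(flops_factor, 3), round(d, 3), round(w, 3), round(r, 3))
```

The balanced growth of all three dimensions, rather than deepening alone, is what lets EfficientNetB3 outperform the similarly sized but unscaled architectures compared in this study.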
Author Contributions
Funding
Institutional Review Board Statement
Data Availability Statement
Conflicts of Interest
Class Name | Training Set | Validation Set | Testing Set | Number of Images Per Class |
---|---|---|---|---|
Downy Mildew | 329 | 70 | 71 | 470 |
Fresh Leaf | 360 | 77 | 78 | 515 |
Gray Mold | 278 | 60 | 60 | 398 |
Leaf Scars | 356 | 76 | 77 | 509 |
Total | 1323 | 283 | 286 | 1892 |
Model | Class | Precision | Recall | F1-Score | Accuracy |
---|---|---|---|---|---|
AlexNet | Downy Mildew | 0.824 | 0.662 | 0.734 | 0.662 |
 | Fresh Leaf | 0.950 | 0.974 | 0.962 | 0.974 |
 | Gray Mold | 0.952 | 0.983 | 0.967 | 0.983 |
 | Leaf Scars | 0.747 | 0.844 | 0.793 | 0.844 |
VGG16 | Downy Mildew | 0.942 | 0.915 | 0.929 | 0.915 |
 | Fresh Leaf | 1.000 | 1.000 | 1.000 | 1.000 |
 | Gray Mold | 1.000 | 1.000 | 1.000 | 1.000 |
 | Leaf Scars | 0.924 | 0.948 | 0.936 | 0.948 |
InceptionV3 | Downy Mildew | 0.914 | 0.901 | 0.908 | 0.901 |
 | Fresh Leaf | 1.000 | 1.000 | 1.000 | 1.000 |
 | Gray Mold | 1.000 | 1.000 | 1.000 | 1.000 |
 | Leaf Scars | 0.910 | 0.922 | 0.916 | 0.922 |
MobileNetV3 | Downy Mildew | 0.956 | 0.915 | 0.935 | 0.915 |
 | Fresh Leaf | 1.000 | 1.000 | 1.000 | 1.000 |
 | Gray Mold | 1.000 | 1.000 | 1.000 | 1.000 |
 | Leaf Scars | 0.925 | 0.961 | 0.943 | 0.961 |
EfficientNetB3 | Downy Mildew | 0.957 | 0.944 | 0.950 | 0.944 |
 | Fresh Leaf | 1.000 | 1.000 | 1.000 | 1.000 |
 | Gray Mold | 1.000 | 1.000 | 1.000 | 1.000 |
 | Leaf Scars | 0.949 | 0.961 | 0.955 | 0.961 |
Stage | Model | Precision | Recall | F1-Score | Accuracy | Epochs |
---|---|---|---|---|---|---|
Before Transfer Learning | AlexNet | 0.835 | 0.831 | 0.832 | 0.835 | 300 |
 | VGG16 | 0.934 | 0.936 | 0.935 | 0.934 | 300 |
 | InceptionV3 | 0.932 | 0.933 | 0.932 | 0.932 | 300 |
 | MobileNetV3 | 0.832 | 0.835 | 0.838 | 0.832 | 300 |
 | EfficientNetB3 | 0.902 | 0.890 | 0.889 | 0.885 | 300 |
After Transfer Learning | AlexNet | 0.865 | 0.866 | 0.861 | 0.864 | 117 |
 | VGG16 | 0.965 | 0.965 | 0.965 | 0.965 | 76 |
 | InceptionV3 | 0.954 | 0.954 | 0.954 | 0.954 | 201 |
 | MobileNetV3 | 0.969 | 0.969 | 0.969 | 0.969 | 126 |
 | EfficientNetB3 | 0.976 | 0.976 | 0.976 | 0.976 | 89 |
Models | Loop1 Acc | Loop2 Acc | Loop3 Acc | Loop4 Acc | Mean Acc |
---|---|---|---|---|---|
AlexNet | 0.846 | 0.857 | 0.854 | 0.846 | 0.851 |
VGG16 | 0.960 | 0.960 | 0.981 | 0.957 | 0.965 |
InceptionV3 | 0.936 | 0.955 | 0.955 | 0.981 | 0.957 |
MobileNetV3 | 0.959 | 0.955 | 0.960 | 0.955 | 0.957 |
EfficientNetB3 | 0.984 | 0.970 | 0.976 | 0.989 | 0.980 |
Model Name | Batch Size = 1 (ms) | Batch Size = 16 (ms) | Batch Size = 32 (ms) |
---|---|---|---|
AlexNet | 4.4 | 1.8 | 1.7 |
VGG16 | 9.9 | 4.1 | 4.0 |
InceptionV3 | 18.3 | 3.8 | 3.3 |
MobileNetV3 | 11.1 | 2.2 | 1.9 |
EfficientNetB3 | 25.7 | 9.4 | 5.7 |
Share and Cite
Gulzar, Y.; Ünal, Z.; Aktaş, H.; Mir, M.S. Harnessing the Power of Transfer Learning in Sunflower Disease Detection: A Comparative Study. Agriculture 2023, 13, 1479. https://doi.org/10.3390/agriculture13081479