1. Introduction
Agriculture holds a crucial role on the global stage, with numerous nations, including prominent ones like China, Malaysia, and India, actively cultivating a diverse range of crops such as pulses, fruits, rice, condiments, spices, and wheat [1]. The financial well-being of farmers in these regions hinges significantly on the caliber of their harvests, the health of the plants they nurture, and the resulting yield. Consequently, the accurate identification and assessment of plant health and diseases constitute pivotal aspects within the agricultural domain. Plants are susceptible to diseases that disrupt their growth, subsequently impacting the agricultural ecosystem and farmers’ livelihoods [2].
Accurately identifying and assessing plant health and diseases are of utmost importance for the well-being of humans and animals. Not only do agricultural plants provide nutritious crops for human consumption, but they also serve as a crucial source of livestock feed. The health of livestock populations directly affects the livelihoods of farmers who rely on them for economic sustenance and as a source of high-quality proteins for human diets. Therefore, disease-free cultivation of crops like rice is essential in maintaining the productivity of both plant and animal agriculture.
In today’s globalized trade networks, timely detection of crop pathogenic infections is more critical than ever. The unchecked spread of plant diseases across borders through the transportation of infected products can destabilize international food supply chains and compromise food security on a global scale. Containing such transboundary outbreaks requires coordinated surveillance efforts among trading partners.
The imperative to detect diseases in plant leaves is rooted in its profound impact on human and animal welfare. Essential nutrients and a diverse array of supplements derived from crops are fundamental requirements for the survival of all living organisms. This fundamental need underscores the critical significance of ensuring an abundant food supply for individual nations and the global community through cross-border trade. Such efforts are instrumental in guaranteeing adequate sustenance, thereby playing a pivotal role in alleviating hunger and poverty, particularly among disadvantaged populations. When a significant portion of crops suffers damage due to undetected diseases, supply is reduced, which, coupled with constant demand, invariably leads to the inflation of prices for various crop plants [2].
Wealthy individuals typically do not encounter significant obstacles when purchasing food, even with high prices. However, the same cannot be said for those living in poverty. This stark contrast creates a significant predicament for economically disadvantaged individuals, as they struggle to afford essential food items, leading to a considerable financial burden on their shoulders. Thus, it becomes crucial to cultivate robust and disease-free plants to fulfill the nutritional needs of all members of society.
The vast expanse of agricultural land worldwide offers a valuable opportunity to cultivate healthy plants through prudent strategies such as providing appropriate manure and fertilizers. It is essential to focus on preventing plant infections in the first place, rather than solely relying on reactive measures once an infection has been confirmed. By concentrating on proactive disease prevention, we can ensure a consistent and reliable supply of nutritious crops, mitigating the adverse impact on the impoverished and the broader population. An automated diagnostic technique is advantageous for the early detection of plant diseases. Symptoms of these diseases manifest in various parts of the plant, making timely identification crucial. Virtually every crop is susceptible to specific diseases when its health is compromised. Rice, for instance, faces a multitude of diseases and pests that can significantly impair its yield [3,4,5].
Furthermore, plant disease outbreaks that reduce crop yields disproportionately impact food affordability for economically disadvantaged populations. While price increases may be absorbable for affluent consumers, the same is not true for impoverished communities striving to meet even basic nutritional needs. Cultivating robust, disease-resistant varieties of crops is necessary to ensure steady production volumes and avoid inflationary pressures on staple commodity prices. The role of resilient farming practices in addressing food access and affordability underscores the humanitarian significance of innovations that enable early disease detection.
The Convolutional Neural Network (CNN) model offers several compelling advantages [6]: it boasts a quick execution time, cost-effectiveness, and a highly efficient means of interpreting diseases from the surface of rice plant leaves. This technology holds the potential to catalyze the transition to a digital agricultural system in rice-growing countries [7]. This research aims to develop a CNN-based model that excels in predicting diseases accurately and without errors. Deep learning [8], a fundamental aspect of this approach, is a powerful tool in tackling challenges within agricultural production, ultimately ensuring food safety and quality [9]. Numerous diseases affecting rice plant leaves significantly threaten crop yield, resulting in substantial production losses. These diseases diminish the overall quality and quantity of the harvest and contribute to increased production costs [3,10,11].
Rice farmers face numerous challenges stemming from these diseases, which can be mitigated through early disease screening [12]. Manual assessment by farmers is incredibly time-consuming, complex, and expensive. Conversely, automated systems provide a more cost-effective and practical solution. Modern Machine Learning (ML) systems are extensively employed to streamline and automate such processes [13]. To gauge the effectiveness of the research, various researchers have utilized a range of performance metrics, including accuracy, recall, precision, and F1-score [14].
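For illustration, the metrics just named can be computed directly from predicted and true labels. The sketch below uses hypothetical labels rather than any dataset from this study, and scores one class (blast) in a one-vs-rest fashion:

```python
def classification_metrics(y_true, y_pred, positive):
    # Accuracy over all classes; precision/recall/F1 one-vs-rest
    # for the class named `positive`.
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    accuracy = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return accuracy, precision, recall, f1

# Hypothetical labels for six leaf images (not the paper's data)
y_true = ["blast", "blight", "blast", "tungro", "blast", "blight"]
y_pred = ["blast", "blast", "blast", "tungro", "blight", "blight"]
acc, prec, rec, f1 = classification_metrics(y_true, y_pred, positive="blast")
```

For multi-class problems such as rice leaf disease classification, these per-class scores are typically macro- or weighted-averaged across all classes.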
This paper introduces a custom CNN model to enhance the accuracy of rice leaf disease classification. The model is designed to capture unique features of rice leaf images, yielding competitive accuracy while preventing overfitting. It outperforms transfer learning models in accuracy and performance [15]. The methodology involves dataset preparation, a tailored CNN architecture, model training, performance evaluation, and comparison with other methods. The model’s advantages include hierarchical feature extraction, customized architecture, regularization techniques, and comprehensive performance metrics. The study emphasizes its potential for precision agriculture and broader applications in plant disease diagnostics.
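The hierarchical feature extraction mentioned above rests on the convolution, activation, and pooling stages that a CNN stacks. As a minimal illustration, one conv → ReLU → max-pool stage can be written in plain NumPy; the patch and kernel values below are made up for the example and are not part of the paper’s architecture:

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2-D cross-correlation of a single-channel image."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def relu(x):
    return np.maximum(x, 0.0)

def max_pool(x, size=2):
    h, w = x.shape
    h, w = h - h % size, w - w % size
    return x[:h, :w].reshape(h // size, size, w // size, size).max(axis=(1, 3))

# Toy 6x6 "leaf patch" and a hand-made vertical-edge kernel
patch = np.arange(36, dtype=float).reshape(6, 6)
edge = np.array([[-1.0, 0.0, 1.0]] * 3)
features = max_pool(relu(conv2d(patch, edge)))  # one conv -> ReLU -> pool stage
```

Stacking several such stages lets later layers operate on pooled feature maps of earlier ones, which is what hierarchical feature extraction refers to.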
The following summarizes the key contributions of this paper:
Custom CNN model for rice leaf disease classification: This paper presents a unique CNN model designed to accurately classify diseases in rice leaves. Unlike transfer learning models, this custom model captures distinctive features of rice leaf images, resulting in superior performance. This contribution addresses the crucial need for precise and efficient automated disease detection in rice plants;
Promotion of precision agriculture: The research emphasizes the broader significance of its findings by highlighting the potential of the custom CNN model for precision agriculture. This technology can significantly improve crop yields, reduce production costs, and enhance food security by enabling early disease detection and intervention. The study positions itself as a valuable contribution to modernizing agricultural practices and safeguarding global food supplies.
The paper is structured as follows: Section 2 reviews related work, Section 3 outlines materials and methods, Section 4 presents results and discussion, and Section 5 summarizes our contributions and suggests future research directions.
2. Related Work
The rice leaf disease classification problem revolves around identifying and categorizing various diseases that affect rice plants through the analysis of their leaf images. Rice is a staple food for much of the world’s population, and diseases can significantly impact its yield and quality. Early detection and accurate classification of these diseases are crucial for effective disease management and food security.
This problem has gained prominence due to its practical implications in agriculture. Timely identification of diseases can help farmers take appropriate actions such as applying targeted treatments, adjusting irrigation or fertilization, and preventing the spread of diseases, thus contributing to higher crop yields and reduced economic losses. Accordingly, a substantial body of research has been proposed to address this problem.
The authors of [2] addressed the rice leaf disease classification problem by employing a CNN on a new Indian dataset collected from rice fields and the internet. A transfer learning function was used in developing the proposed model. The proposed model proved its efficiency in addressing the problem by reaching 92.46% accuracy.
In [3], the authors introduced an effective system for predicting rice leaf disease using various deep learning techniques. They collected and processed images of rice leaf diseases to meet the algorithm’s requirements. The authors extracted features using 32 pre-trained models and subsequently employed multiple ML and ensemble learning classifiers to classify the images depicting diseases. The comparative analysis demonstrated that their proposed method outperformed existing approaches in achieving the highest accuracy and excelling in performance metrics.
In [4], the application of computer vision in traditional agriculture was leveraged to accurately identify rice leaf diseases within intricate backgrounds. The authors proposed RiceDRA-Net, a deep residual network model tailored for this purpose, enabling the identification of four distinct rice leaf diseases. They introduced two sets for testing: the CBG-Dataset, comprising rice leaf disease images against complex backgrounds, and the SBG-Dataset, a new collection featuring single-background images derived from the original dataset. The Res-Attention module, characterized by 3 × 3 convolutional kernels and denser connections compared to other attention mechanisms, was employed to mitigate information loss. The experimental findings indicated that RiceDRA-Net exhibited the highest accuracy on both the SBG-Dataset and the CBG-Dataset, strongly underscoring the model’s competence in recognizing rice leaf diseases amidst intricate backgrounds.
In [5], the authors presented a method known as RiceNet, which employs a two-stage approach to effectively recognize four significant rice diseases: rice panicle neck blast, rice false smut, rice leaf blast, and rice stem blast. Initially, the YoloX model detected the afflicted regions within rice images. These detected areas were then extracted to form a novel dataset of rice disease patches. A Siamese Network was employed in the subsequent stage to identify the rice disease patches obtained earlier. The comparative analysis demonstrated that the proposed model outperformed other detection models. Moving to the identification stage, the Siamese Network showcased exceptional accuracy, surpassing the performance of alternative models. The experimental results clearly showed that the proposed RiceNet model outperformed existing methods and boasted advantages such as rapid detection speed and minimal weight size for identifying rice diseases.
The authors of [16] proposed three crucial stages: keypoint detection, hypercolumn deep feature extraction from CNN layers, and classification. A hypercolumn vector contains activations from all CNN layers for a given pixel. Keypoints denote salient image points that highlight distinctive features. The model’s initial phase involves identifying keypoints in the image and extracting hypercolumn features based on these points of interest. In the next stage, ML experiments involve classifier algorithms applied to the extracted features. The assessment results underscore the proposed method’s capability in detecting rice leaf diseases. In the evaluation stage, the Random Forest classifier displayed exceptional performance when utilizing hypercolumn deep features.
The authors of [9] investigated two transfer learning methods for diagnosing rice leaf diseases. The first method involves utilizing the output of a pre-trained CNN-based model, with the addition of a suitable classifier. The second method focuses on freezing the pre-trained network’s lower layers, fine-tuning its upper layers’ weights, and integrating an appropriate classifier. The study evaluates seven distinct CNN models under these methodologies. The simulation outcomes highlight the remarkable performance of four specific networks, achieving 100% accuracy and an F1-score of 1. Furthermore, the proposed approach demonstrated superior accuracy and reduced training time compared to the other evaluated models.
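The second strategy, freezing lower layers while fine-tuning the upper ones, can be sketched with a toy stand-in model. This is illustrative only (each "layer" here is a single scalar weight, and the gradients are dummy values), but the control flow mirrors how frameworks skip frozen parameters during updates:

```python
class Layer:
    """Minimal stand-in for a network layer with a single scalar weight."""
    def __init__(self, name, weight, trainable=True):
        self.name, self.weight, self.trainable = name, weight, trainable

def freeze_lower_layers(model, n_frozen):
    """Freeze the first n layers; upper layers and the new head stay trainable."""
    for layer in model[:n_frozen]:
        layer.trainable = False
    return model

def sgd_step(model, grads, lr=0.1):
    """Apply one gradient step, skipping frozen layers."""
    for layer, g in zip(model, grads):
        if layer.trainable:
            layer.weight -= lr * g
    return model

# Hypothetical 4-layer "pre-trained" backbone plus a fresh classifier head
model = [Layer("conv1", 1.0), Layer("conv2", 1.0),
         Layer("conv3", 1.0), Layer("classifier", 0.5)]
freeze_lower_layers(model, n_frozen=2)
sgd_step(model, grads=[1.0, 1.0, 1.0, 1.0])
# conv1/conv2 keep their pre-trained weights; conv3 and the head move
```

In Keras or PyTorch the same effect is achieved by setting the corresponding layers’ `trainable` attribute (or their parameters’ `requires_grad` flag) to false before the fine-tuning run.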
In [17], the authors introduced an innovative approach that involves intelligent segmentation and a hybrid ML-based classification to predict rice diseases effectively. The methodology encompasses several stages, including pre-processing, segmentation, feature extraction/selection, and classification. The process begins with data normalization using Synthetic Minority Oversampling Technique-based preprocessing. Next, efficient segmentation is achieved using Modified Feature Weighted Fuzzy Clustering. Feature extraction is performed via Principal Component Analysis to enhance classifier performance, while Linear Discriminant Analysis handles feature selection. The final step integrates an Enhanced Recurrent Neural Network with a Support Vector Machine (SVM), forming a hybrid classification model designed to enhance predictive capabilities. To assess the method’s effectiveness, metrics such as accuracy, recall, precision, timing, and F-measure are employed. Simulation results indicate that the proposed approach outperformed existing classifiers in terms of overall performance, highlighting its potential for improved disease prediction in rice crops.
The authors of [18] developed a CNN model to recognize and categorize images of rice diseases. Specifically, this model is tailored for classifying rice images sourced from the Punjab province of India. The methodology aids in effectively categorizing these images based on the severity levels of five different illnesses. As a result of the successful implementation of the collected image dataset, the model showcases impressive binary and multi-classification accuracy.
4. Results and Discussion
In this section, we present the results of our experiments for enhancing rice leaf disease classification using a customized CNN approach with transfer learning. We evaluate the performance of three models: Transfer Learning Inception-v3, Transfer Learning EfficientNet-B2, and our proposed custom model. These models were chosen based on their effectiveness, generalization capabilities, and resource efficiency in various computer vision tasks. The evaluation metrics used are accuracy and loss. A thorough comparative analysis was conducted using these models, contributing to the improved reliability of our findings. The Results section confirms that our proposed method outperformed established benchmarks, highlighting the efficacy of our approach in classifying rice leaf disease.
The training and validation accuracy and loss for the Transfer Learning Inception-v3 model are shown in Figure 2 and Figure 3. The model achieved an initial training accuracy of 0.586 and a validation accuracy of 0.766 in the first epoch. The accuracy gradually increased over the epochs and reached 0.957 in the final epoch. Similarly, the loss decreased from 1.297 to 0.166 during the training process. However, a noteworthy observation emerges from the validation loss, which rose from 0.677 to 0.818. This upward trend implies a persistent gap between the model’s performance on training data and unseen validation data. This discrepancy may be attributed to potential overfitting, necessitating prudent strategies to enhance generalization. Overall, the Transfer Learning Inception-v3 model demonstrated good performance with high accuracy and low loss.
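One standard remedy for a rising validation loss of this kind is early stopping: halt training once the validation loss has stopped improving for a set number of epochs. A minimal sketch follows; the loss values are illustrative only and are not the experiment’s actual curve:

```python
def early_stopping_epoch(val_losses, patience=3):
    """Return the epoch index at which to stop: the first epoch whose
    validation loss has not improved on the best value for `patience`
    consecutive epochs. Returns None if training should continue."""
    best, best_epoch = float("inf"), -1
    for epoch, loss in enumerate(val_losses):
        if loss < best:
            best, best_epoch = loss, epoch
        elif epoch - best_epoch >= patience:
            return epoch
    return None

# Hypothetical validation-loss curve that bottoms out and then rises,
# qualitatively like the trend described for the Inception-v3 run
val_losses = [0.677, 0.60, 0.58, 0.61, 0.65, 0.70, 0.818]
stop = early_stopping_epoch(val_losses, patience=3)
```

In practice, the weights from the best-validation epoch would also be restored, as done by the `EarlyStopping` callbacks in common deep learning frameworks.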
The training and validation accuracy and loss for the Transfer Learning EfficientNet-B2 model are depicted in Figure 4 and Figure 5. Unlike the Inception-v3 model, the EfficientNet-B2 model showed lower accuracy and higher loss throughout the training process. The initial training accuracy was 0.223, and the validation accuracy was 0.25 in the first epoch. However, the accuracy did not significantly improve over subsequent epochs, reaching only 0.234 in the final epoch. The loss values fluctuated, and the model did not converge to a low loss value. The results indicate that the Transfer Learning EfficientNet-B2 model did not perform well in classifying rice leaf diseases.
We also evaluated and compared our proposed custom model’s performance with the transfer learning models. The training and validation accuracy and loss for the custom model are presented in Figure 6 and Figure 7. The custom model achieved an initial training accuracy of 0.531 and a validation accuracy of 0.609 in the first epoch. Over the epochs, the accuracy steadily increased and reached 0.914 in the final epoch. The loss values decreased, starting from 1.643 and reaching 0.215. The validation loss also decreased from 1.470 to 0.523. These results indicate that the custom model performed well in classifying rice leaf diseases.
To compare the performance of the models, we consider the accuracy and loss values achieved in the final epoch. The Transfer Learning Inception-v3 model achieved the highest accuracy of 0.957, followed by the custom model with an accuracy of 0.914. The Transfer Learning EfficientNet-B2 model had the lowest accuracy of 0.234. In terms of loss, the custom model achieved the lowest value of 0.155, followed by the Transfer Learning Inception-v3 model with a loss of 0.166. The Transfer Learning EfficientNet-B2 model had the highest loss value of 2.439.
These results indicate that the Transfer Learning Inception-v3 and custom models outperformed the Transfer Learning EfficientNet-B2 models regarding accuracy and loss. Being a well-established architecture, the Transfer Learning Inception-v3 model demonstrated excellent performance with high accuracy and low loss. However, our proposed custom model, although relatively new, showed competitive performance and achieved high accuracy and low loss values.
Figure 8 comprehensively visualizes the training and validation accuracy trends across 100 epochs. The x-axis signifies the number of training epochs, while the y-axis denotes the corresponding accuracy values. The plot showcases the dynamic interplay between the model’s training accuracy (represented by solid lines) and validation accuracy (depicted by dashed lines) as the training progresses. The initial epochs depict modest accuracy levels, with the training accuracy commencing at around 24.6% and the validation accuracy at approximately 32.8%. As training advances, both accuracies exhibit a consistent upward trajectory, indicative of the model’s capacity to learn from the training data and generalize to unseen validation data. However, periodic fluctuations are observable, potentially signaling the presence of minor overfitting tendencies.
The plot facilitates understanding the model’s convergence toward higher accuracy values and provides insights into its ability to adapt and improve over time. It also raises questions regarding potential overfitting concerns, warranting further analysis and optimization strategies to fine-tune the model’s performance.
Figure 9 visualizes the training and validation loss trends throughout training. As in Figure 8, the x-axis represents the number of training epochs, while the y-axis denotes the corresponding loss values. This plot illustrates the learning dynamics of the model by contrasting the training loss (solid lines) with the validation loss (dashed lines).
At the outset, the training loss starts at 1.5418, and the validation loss begins at around 1.3770. These initial values encapsulate the model’s starting point in the learning process. As the training progresses, both loss values consistently decline, implying the model’s ability to minimize errors and improve its predictions. Notably, the training and validation losses demonstrate a synchronous descent, indicating the absence of severe overfitting.
The plot provides insights into the model’s convergence toward lower loss values, a pivotal objective in training deep neural networks. The relative alignment between training and validation losses indicates the model’s potential to effectively generalize its learning to unseen data. However, a meticulous assessment of the plot’s trends can uncover nuances that might necessitate strategic interventions to enhance the model’s performance further.
In addition to the results presented, there are several factors that can contribute to the performance differences observed among the models. These factors include the architecture of the models, the amount and quality of the training data, and the transfer learning approach employed.
The Transfer Learning Inception-v3 model is known for its depth and complex architecture, which allows it to capture intricate features from the input images. It has been pre-trained on a large dataset, such as ImageNet, which consists of millions of images from various categories. This pre-training enables the model to learn general features that can be applied to different image classification tasks. By fine-tuning the model on the rice leaf disease dataset, it can leverage its learned representations and adapt them to the specific task, resulting in higher accuracy.
On the other hand, although a powerful architecture, the Transfer Learning EfficientNet-B2 model may not have been as well-suited for the rice leaf disease classification task. EfficientNet models balance accuracy and efficiency by scaling the model’s depth, width, and resolution. While this design choice can benefit general-purpose image classification, it might not capture the intricate details necessary to distinguish between different rice leaf disease types. The lower accuracy and higher loss observed with this model could be attributed to the limitations of the learned representations in this specific context.
In contrast to the transfer learning models, the custom model was specifically designed for the rice leaf disease classification task. Although it may not have the depth and complexity of the transfer learning models, it was trained from scratch on the rice leaf disease dataset. This approach allows the model to learn task-specific features directly from the data. The custom model showed competitive performance with high accuracy and low loss, indicating its ability to capture the discriminative characteristics of the rice leaf diseases effectively. However, it is worth noting that training a model from scratch typically requires a larger dataset to avoid overfitting, and further exploration with more diverse and extensive data could potentially improve its performance even more.
It is important to consider the quality and diversity of the training data when evaluating the models’ performance. The success of deep learning models relies heavily on having a diverse and representative dataset that covers different variations and manifestations of the target classes. A limited or imbalanced dataset can hinder the model’s generalization to unseen samples. Therefore, future work should focus on collecting more diverse and balanced rice leaf disease datasets to assess the models’ performance further.
The results highlight the effectiveness of transfer learning, particularly with the Transfer Learning Inception-v3 model, for enhancing rice leaf disease classification. The pre-trained models leverage their learned representations to capture meaningful features from the input images, resulting in higher accuracy and lower loss. However, the custom model, specifically designed for the task, shows promising performance and demonstrates the potential for tailored architectures in addressing specific domain challenges. Further research and experimentation are needed to explore the strengths and limitations of different models, improve dataset quality, and refine the classification process for more accurate and reliable rice leaf disease identification.
Additionally, it is worth discussing the computational considerations and deployment implications of the different models. The Transfer Learning Inception-v3 and Transfer Learning EfficientNet-B2 models, pre-trained on large datasets like ImageNet, require significant computational resources during the training and inference phases. These models typically have more parameters and may require more memory and processing power to operate efficiently. This could be a limitation in scenarios where computational resources are constrained, such as deploying the model on resource-limited devices or in real-time applications.
In contrast, the custom model, being trained from scratch on the rice leaf disease dataset, has the advantage of potentially being more lightweight and computationally efficient. Since it is designed specifically for the target task, it can be optimized to meet the performance requirements of the deployment environment. This can be particularly advantageous when deploying the model in edge devices or in situations where real-time or near real-time inference is crucial.
However, it is essential to balance model complexity and computational efficiency. While the custom model may offer advantages in terms of efficiency, there is a trade-off with accuracy. More complex architectures like Inception-v3 and EfficientNet-B2, with their deep layers and intricate feature representations, have the potential to capture more nuanced patterns and improve classification accuracy. Therefore, the choice of model architecture should be carefully considered based on the specific application requirements, available computational resources, and the desired trade-off between accuracy and efficiency.
Moreover, evaluating the models in real-world scenarios is important. The evaluation conducted in this study is based on a specific rice leaf disease dataset, and the models’ performance may vary when faced with different datasets or unseen examples. To ensure the models’ effectiveness in practical applications, it is crucial to test them on diverse datasets that encompass various environmental conditions, lighting variations, and disease severities. This will help identify any potential biases or limitations and ensure that the models perform reliably across different scenarios.
The classification model’s performance was also evaluated through a confusion matrix, a vital tool for understanding the model’s classification outcomes comprehensively. This matrix encapsulates the tally of the model’s correct and erroneous predictions across different classes. In Figure 10, the confusion matrix derived from the application of the classification model to the crop disease image dataset is illustrated. In this matrix, the rows correspond to the true classes, while the columns represent the predicted classes. The values within the matrix provide a count of images that were classified correctly or incorrectly.
Upon scrutinizing the confusion matrix, it is evident that the model excelled in classifying the blast and blight categories, boasting 74 correct predictions for each. Nonetheless, there were a few instances of misclassification, with two images from the blast class erroneously categorized as blight and vice versa. Similarly, the model exhibited a single incorrect prediction for the brown spot and leaf smut classes. Additionally, the confusion matrix unveils the model’s exceptional performance in classifying the tungro class, with an impressive 75 out of 80 images being accurately identified. Only two images from the tungro class were misclassified as blast, and two were misclassified as blight. These findings underscore the effectiveness of the classification model in distinguishing between the blast, blight, brown spot, leaf smut, and tungro classes. Nonetheless, the observed misclassifications suggest the existence of similarities or ambiguities in the image features of certain classes, leading to sporadic classification errors. To enhance classification accuracy further, future research endeavors could concentrate on bolstering the discriminative features employed by the model or exploring advanced ML techniques. Moreover, augmenting the size and diversity of the dataset, especially for classes with a limited number of images, can alleviate the impact of data imbalance and elevate the overall classification performance.
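For reference, a confusion matrix of the kind discussed above can be built directly from (true, predicted) label pairs. The sketch below uses a handful of hypothetical predictions, not the dataset behind Figure 10:

```python
def confusion_matrix(y_true, y_pred, classes):
    """Rows are true classes, columns are predicted classes."""
    index = {c: i for i, c in enumerate(classes)}
    matrix = [[0] * len(classes) for _ in classes]
    for t, p in zip(y_true, y_pred):
        matrix[index[t]][index[p]] += 1
    return matrix

classes = ["blast", "blight", "brown spot", "leaf smut", "tungro"]
# Hypothetical predictions for five images (illustrative only)
y_true = ["blast", "blast", "blight", "tungro", "tungro"]
y_pred = ["blast", "blight", "blight", "tungro", "blast"]
matrix = confusion_matrix(y_true, y_pred, classes)
```

Diagonal entries count correct predictions for each class, while off-diagonal entries reveal which class pairs the model confuses, as in the blast/blight confusions noted above.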
In conclusion, while the choice of model architecture depends on several factors, such as computational resources, accuracy requirements, and deployment constraints, transfer learning and custom models each have their advantages and considerations. Transfer learning models leverage pre-trained representations to achieve high accuracy but may require more computational resources. On the other hand, custom models can be more lightweight and tailored to specific tasks, offering potential advantages in efficiency and deployment scenarios. Further research and experimentation are necessary to explore the full potential of these models, consider additional factors, and refine their performance in real-world applications.
Comparing the models’ performance at the final epoch, it becomes evident that the custom model outperforms both Transfer Learning Inception-v3 and Transfer Learning EfficientNet-B2 in terms of overfitting and overall effectiveness.
4.1. Advantages and Strengths
Optimal balance of accuracy and overfitting: The custom model strikes a balance between accuracy and overfitting, achieving a competitive accuracy of 0.914 and a relatively low loss of 0.215. While the steady accuracy improvement over epochs hints at potential overfitting, the model maintains its generalization capability, showcasing effective learning;
Task-specific architecture: The architecture of the custom model has been meticulously designed to address the intricacies of the rice leaf disease task. This specialization ensures the model captures relevant features more precisely, potentially leading to better performance on this specific task than generic architectures like Transfer Learning EfficientNet-B2;
Data efficiency: The custom model’s robust performance underscores its ability to utilize available data effectively. Unlike Transfer Learning EfficientNet-B2, which struggled with convergence due to its architecture, the custom model demonstrates how tailoring the architecture to the task can yield substantial improvements;
Potential for further refinement: Since the custom model is purpose-built, there is room for continuous refinement. By iterating on the architecture and fine-tuning, it is possible to enhance performance even more, making it an adaptable and evolving solution.
4.2. Factors Amplifying Custom Model’s Superiority
Architecture finesse: While Transfer Learning Inception-v3 does perform well, the custom model’s architecture is tailored explicitly for the rice leaf disease task. This architecture finesse contributes to its impressive accuracy and controlled overfitting;
Task-specific design: The custom model stands out due to its design catering to the particularities of the rice leaf disease. Generic transfer learning models cannot replicate this advantage;
Holistic performance reflection: the custom model’s accuracy and loss metrics reflect its ability to capture task-specific features, making it a reliable choice for real-world applications;
Data quality adaptation: unlike Transfer Learning EfficientNet-B2, which struggles with suboptimal convergence, the custom model adapts well to the dataset’s quality and characteristics, highlighting its robustness.
Table 3 outlines the performance of different models, including Transfer Learning Inception-v3, Transfer Learning EfficientNet-B2, and a custom model. These models were evaluated based on their accuracy and loss values in the final epoch, along with a summarized performance overview. Noteworthy observations include the high accuracy (0.957) and low loss (0.166) of Transfer Learning Inception-v3, despite demonstrating indications of overfitting. Conversely, Transfer Learning EfficientNet-B2 exhibited poor accuracy (0.234) and high loss (2.439), accompanied by significant overfitting. The custom model achieved competitive accuracy (0.914) and low loss (0.155), showing minimal overfitting.
It is important to note that while the proposed approach for detecting and categorizing rice leaf diseases through deep learning is promising, there are certain limitations to consider. Specifically, the method necessitates a thorough understanding of deep learning and computer vision to execute and fine-tune, which could restrict its availability for those without such expertise. Furthermore, the method may not scale easily to extensive farming operations, where many images must be analyzed and significant investments in hardware and software may be required.
4.3. Considerations for Future Development and Deployment
Resource efficiency: the custom model’s streamlined architecture makes it computationally efficient, especially crucial for real-world deployment in resource-constrained environments;
Continual enhancement: The custom model’s success paves the way for further research and development. Refining its architecture, incorporating newer techniques, and expanding the dataset can yield even better results;
Diverse testing scenarios: evaluating the custom model across diverse environmental conditions, lighting variations, and disease severities will ascertain its real-world applicability and robustness;
Utilization of optimization algorithms: Improve the custom model’s performance and efficiency through the use of optimization algorithms, such as the Lemurs optimizer [20], the Sine cosine algorithm, the Coronavirus herd immunity optimizer (CHIO) [21], and the Salp Swarm Algorithm [22]. These algorithms fine-tune hyperparameters and weights, enhancing training speed and accuracy. Regularly applying such techniques ensures continual model refinement for more effective and resource-efficient deployment.
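As a simple stand-in for the metaheuristic tuners named above (whose specific update rules are beyond the scope of a short sketch), the following random search illustrates the general loop such optimizers follow: sample a hyperparameter configuration, score it, and keep the best. The search space and objective here are toy values, not an actual validation run:

```python
import random

def random_search(objective, space, n_trials=50, seed=0):
    """Sample hyperparameter configurations at random and keep the one
    with the lowest objective value (e.g., validation loss)."""
    rng = random.Random(seed)
    best_cfg, best_loss = None, float("inf")
    for _ in range(n_trials):
        cfg = {name: rng.choice(values) for name, values in space.items()}
        loss = objective(cfg)
        if loss < best_loss:
            best_cfg, best_loss = cfg, loss
    return best_cfg, best_loss

# Toy search space and a toy objective standing in for a training run
space = {"lr": [0.1, 0.01, 0.001], "batch": [16, 32, 64]}
def objective(cfg):
    return abs(cfg["lr"] - 0.01) + abs(cfg["batch"] - 32) / 100

best_cfg, best_loss = random_search(objective, space)
```

Metaheuristics such as CHIO or the Salp Swarm Algorithm replace the uniform sampling step with population-based update rules, but they plug into the same evaluate-and-keep-best loop.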