Article

Pesticide Residue Coverage Estimation on Citrus Leaf Using Image Analysis Assisted by Machine Learning

by Adarsh Basavaraju 1,*, Edwin Davidson 1,2, Giulio Diracca 1,2, Chen Chen 3 and Swadeshmukul Santra 1,2,4,*
1 NanoScience Technology Center, University of Central Florida, Orlando, FL 32826, USA
2 Department of Chemistry, University of Central Florida, Orlando, FL 32826, USA
3 Center for Research in Computer Vision, University of Central Florida, Orlando, FL 32826, USA
4 Burnett School of Biomedical Sciences, University of Central Florida, Orlando, FL 32826, USA
* Authors to whom correspondence should be addressed.
Appl. Sci. 2024, 14(22), 10087; https://doi.org/10.3390/app142210087
Submission received: 12 September 2024 / Revised: 25 October 2024 / Accepted: 30 October 2024 / Published: 5 November 2024
(This article belongs to the Special Issue Detection of Agrochemical Residues in Agriculture)

Featured Application

This work proposes an accessible and flexible method for citrus growers to assess pesticide residue coverage on the leaf surface in the field. The developed software enables the analysis of digital images captured under sunlight and/or UV light exposure to visualize the residue deposition coverage. This technological tool could serve growers in determining pesticide residue coverage to guide their decision-making process for pesticide application timing and frequency.

Abstract

Globally, the agricultural industry has benefited from using pesticides to minimize crop losses. Nevertheless, the indiscriminate overuse of pesticides has led to significant risks, with detrimental impacts on the environment and human health. Emerging concerns about pesticide residues found in crops, food, and livestock are therefore a pressing issue. To address these challenges, many efforts have been made to implement machine learning in precision agriculture to reduce pesticide overuse. To date, no guiding digital tools are available for citrus growers that provide pesticide residue leaf coverage analysis after foliar applications. Herein, we are the first to report software assisted by lightweight machine learning (ML) to determine Kocide 3000 and Oxytetracycline (OTC) residue coverage on citrus leaves from image data. The tool integrates the foundational Segment Anything Model (SAM) for image preprocessing to isolate the area of interest. Residue coverage analysis was then carried out using a specialized Mask Region-Based Convolutional Neural Network (Mask R-CNN), pre-trained on the MS COCO dataset and fine-tuned on datasets acquired under laboratory and field conditions. The developed software demonstrated excellent accuracy, precision, recall, and F1 score metrics for both pesticides. In summary, this tool has the potential to assist growers in the decision-making process for controlling pesticide use rate and frequency, minimizing pesticide overuse.

1. Introduction

Over the past decades, the agricultural industry has benefited from the use of pesticides to minimize crop losses [1]. Globally, 20 to 40% of crop production is estimated to be lost to pests, at an economic cost of USD 220 billion according to the Food and Agriculture Organization (FAO) of the United Nations [2]; hence the current use of approximately 4 million tons of pesticides to protect the nearly 3 billion metric tons of crops produced each year [3,4]. Most pesticides are applied conventionally, resulting in uncontrolled, non-targeted release in which only a small fraction reaches the desired target organism [5]. This challenge has led to agricultural practices marked by indiscriminate overuse of pesticides, exacerbating the risks to the environment and human health [6,7]. In particular, prolonged pesticide application increases the risk of environmental toxicity through field runoff, leaching, volatilization, and spray drift, among other pathways [8,9], in addition to the emerging concerns about the persistence of pesticide residues found in food and livestock [10,11].
Due to increased pesticide residue concerns, international and national entities (e.g., the EPA, USDA, FAO, Codex Alimentarius Commission, and World Health Organization) have made collective efforts to establish policies and regulations regarding maximum residue limits [12,13,14,15]. To comply with these regulations, several analytical methods have been developed and validated to accurately quantify pesticide residue concentrations. Widely used methods include gas chromatography, high-performance liquid chromatography, liquid chromatography with mass spectrometry, supercritical fluid chromatography, and capillary electrophoresis, among others [16,17,18]. Although these analytical techniques possess high accuracy and specificity, they also require expensive instrumentation and supplies, controlled laboratory environments, and time-consuming sample preparation. This imposes a major economic constraint on growers, hindering the USDA (United States Department of Agriculture)'s efforts towards socio-economic equity in the agricultural sector [19].
To address these challenges, there has been extensive work on the Agri-Tech revolution and the development of precision farming [3]. This type of farming incorporates recent technological advancements in the Internet of Things (IoT), Machine Learning (ML), and Artificial Intelligence (AI), among others, into current agricultural practices [20,21,22]. These technological tools have enabled the digital automation of processes related to data collection, decision-making, data processing, and data mining [21]. Recently developed technological tools have been extended to disease detection, pest identification, crop management planning, and yield prediction [23,24,25,26]. Furthermore, there has been growing interest in AI-powered imaging analysis of pesticide residues as a more accessible, cost-effective, and non-destructive approach [27,28,29]. Nevertheless, the use of AI-powered tools and conventional imaging systems for pesticide residue analysis has considerable limitations [30,31]. These are mainly related to the transparency of pesticide residues to digital cameras and high background signals from non-uniform leaf surfaces, leading to low analysis accuracy [32,33]. These limitations are compounded by the shortage of labeled and annotated data from both controlled and non-controlled environments, which is a major factor in the design of such systems and algorithms [34,35].
Recent studies have overcome some of these limitations by focusing imaging analysis on the inherent IR or fluorescent properties of some pesticides [36,37,38]. Both fluorescence and image analysis have opened the door to fast, non-destructive methods for determining pesticide residue coverage, capturing both spectral and spatial data from the sample [28,39]. Previously reported work harnessed hyperspectral imagery to analyze pesticide residues on grape berries using Vis-NIR and NIR apparatus; the data were randomly divided into training, test, and validation sets and processed with a combination of three machine learning algorithms and two deep learning neural networks [28]. Similarly, fluorescence hyperspectral imagery assisted by machine learning was recently reported for pesticide residue identification on black tea samples [40]. However, these reports do not simplify the data collection process, as they still require NIR and UV hyperspectral apparatus. While such apparatus is less expensive, it is not transferable to field conditions for in situ assessment, particularly in citrus groves. More importantly, these methods require specialized scientific knowledge, limiting their accessibility to growers.
For these reasons, the present work aims to provide citrus growers with an accessible, inexpensive, and portable imaging chamber to assess pesticide residue on leaves. Herein, an image analysis software assisted by a lightweight machine learning system is developed for pesticide residue detection and coverage approximation on the leaf surface from captured images. This platform will reduce analysis costs and guide the decision-making process regarding pesticide applications, preventing excessive pesticide overuse. In this study, Kocide 3000 and Oxytetracycline (OTC) were selected as model pesticides due to their high relevance to the citrus industry and their widespread use in pest management strategies for different plant diseases. The effective implementation of this technological tool has the potential to serve as a grower-friendly alternative for qualitative pesticide residue analysis, supporting more sustainable and equitable agricultural practices in the citrus industry.

2. Materials and Methods

2.1. Sample Preparation and Data Acquisition

The leaf sample preparation was conducted in the laboratory using spray bottles to simulate foliar application, based on a previously reported protocol [28]. These experiments were carried out using 1-year-old grafted citrus (Southern Citrus Nurseries, LLC, Dundee, FL, USA) with scion variety Ray Ruby and rootstock US-942. Briefly, the citrus leaves were sprayed with OTC (Alfa Aesar, Haverhill, MA, USA) and Kocide 3000 (DuPont, Wilmington, DE, USA) at different concentrations and left to dry at room temperature. The leaves were then placed in a sample compartment with a black velvet mat and illuminated with a handheld UV light source (395 nm, Everbrite, LLC, Greenfield, WI, USA). This was followed by the acquisition of digital images with a camera (Galaxy A71, Samsung, Suwon-si, South Korea) at constant lighting and height above the leaves.

2.2. Manual Pesticide Residue Coverage Area Determination

Prior to the development of the AI-powered software, a proof-of-concept study was performed to determine the feasibility of the proposed work. First, the pesticide residue coverage area was determined manually to assess the correlation between the pixel area of residue and the concentration of the applied pesticide. The coverage of the OTC and Kocide 3000 residue on the citrus leaves was determined following previously reported protocols with some modifications [41,42,43]. Digital images were acquired following the data acquisition protocol described above, then processed using the open-source software ImageJ (version 1.53t) to determine the coverage area of both pesticides on the leaf surface. The image was first split into its RGB channels. The green channel was used to determine the OTC signal through a set threshold analysis, whereas the blue channel was used for the Kocide 3000 signal; each threshold analysis generated a binary mask from which the residue coverage area was measured. Similarly, the red channel was processed under threshold analysis to determine the total leaf surface area. The percentage of pesticide coverage was calculated with the following equation:
Pesticide Coverage Area (%) = (Pixels in green or blue threshold area / Pixels in red threshold area) × 100
Lastly, the OTC coverage area (%) was calculated with the aforementioned equation using the pixels from the green threshold area, and the Kocide 3000 coverage area was calculated using the pixels from the blue threshold area.
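As a concrete illustration, the following is a minimal Python sketch of this channel-threshold coverage calculation using OpenCV and NumPy. The file name and threshold values are hypothetical placeholders, not the calibrated settings used in this work.

```python
import cv2
import numpy as np

def coverage_percent(image_path: str, residue_channel: str,
                     residue_thresh: int = 150, leaf_thresh: int = 30) -> float:
    """Estimate residue coverage (%) from a digital leaf image via per-channel thresholds."""
    bgr = cv2.imread(image_path)                       # OpenCV loads images in BGR order
    b, g, r = cv2.split(bgr)
    residue = g if residue_channel == "green" else b   # green -> OTC (UV), blue -> Kocide 3000
    residue_mask = residue > residue_thresh            # binary mask of residue pixels
    leaf_mask = r > leaf_thresh                        # red channel approximates total leaf area
    return 100.0 * residue_mask.sum() / max(leaf_mask.sum(), 1)

# Hypothetical usage: an OTC-sprayed leaf imaged under UV light
print(coverage_percent("leaf_otc_uv.jpg", "green"))
```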

2.3. Sample Compartment Design and Fabrication

In order to create a controlled environment for acquiring reproducible images in both laboratory and field conditions, the container was modeled in Onshape and 3D-printed using an Ender 3 Pro (Creality, Shenzhen, China) with white 1.75 mm PLA filament. The printer bed was set to 120 degrees Fahrenheit, with a nozzle temperature of 160 degrees Fahrenheit and 15% infill with a grid pattern. Figure 1 shows the compartment design, with dimensions of 220 × 220 × 132 mm and a detachable lid equipped with a remotely controllable LED light strip, which allows the leaf sample to be illuminated at different single wavelengths.
The lid of the sample compartment was designed with a central aperture of 44 × 44 mm to facilitate the use of conventional cameras or cell phone cameras for image acquisition. The aperture dimensions were chosen so that no external light could enter the sample compartment. Several of the images used in this work were then taken under white, red, green, and UV light with the camera of a Samsung Galaxy A71 device, with the sprayed citrus leaf placed squarely in the center of the box.

2.4. Image Data Preprocessing

To remove noise from the datasets, a preprocessing method was applied to the images to provide image segmentation, as shown in Figure 2. Image segmentation is a pivotal task in most computer vision applications, with significant improvements when integrated with deep learning. This process was performed to isolate the region of interest in the image, separating the citrus leaf from the background. Since the ultimate goal of the proposed algorithm is to analyze images acquired in field conditions, it is highly relevant to reduce the prevalence of miscellaneous background objects that may cause false positives by increasing background noise. For this reason, a foundational open-source segmentation model, the Segment Anything Model (SAM) [44], was applied for image preprocessing. SAM is a state-of-the-art instance segmentation model developed by Meta Platforms Inc.'s Fundamental AI Research (FAIR) lab, which aims to serve as a foundational model for computer vision segmentation tasks upon which future segmentation models are built. The model displays exceptional precision in complex image segmentation tasks and has become the basis for newer segmentation models that aim to provide higher-quality segmentation masks [45] or nearly identical masks at a much lower computational cost [14,46] and with significantly smaller datasets [47]. Other models, such as U-Net, were also considered for segmentation during the image preprocessing step. U-Net is widely used in the biomedical sector for image segmentation with effective pixel-level classification; however, it requires significantly larger annotated datasets for model training. Furthermore, U-Net is known to have limitations under field conditions due to the unstructured environment, which could lead to major setbacks in later stages of software development for end-user (grower) applicability. Therefore, given this project's aims and the constraint of a limited dataset, SAM proved to be the better alternative, achieving the desired segmentation and isolation of the region of interest during the preprocessing stage. SAM provided high-quality segmentation at lower computational cost with smaller annotated datasets, which is particularly valuable in later stages involving real-world images.
Briefly, the preprocessing stage consisted of the model generating multiple segmentation masks sorted by the prominence of the region of interest (Figure 2). The three most prominent masks were then selected for further analysis. It is worth noting that preliminary testing of SAM on our dataset resulted in high-accuracy segmentation masks, considering the limited annotated data available. The model was therefore deemed suitable for integration into the algorithm without additional fine-tuning. Subsequent image processing is performed with a convolutional neural network (CNN) to provide a functional and competitive algorithm requiring far fewer computing resources.
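For reference, the preprocessing step described above could be reproduced along the following lines with Meta's open-source segment-anything package. This is a minimal sketch: the checkpoint file name, image path, and the use of mask area as the prominence ranking are assumptions rather than the authors' exact configuration.

```python
import cv2
from segment_anything import sam_model_registry, SamAutomaticMaskGenerator

# Load a pre-trained SAM checkpoint (ViT-H weights released by Meta; path assumed)
sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h_4b8939.pth")
mask_generator = SamAutomaticMaskGenerator(sam)

# SAM expects an RGB image as a NumPy array
image = cv2.cvtColor(cv2.imread("leaf_image.jpg"), cv2.COLOR_BGR2RGB)
masks = mask_generator.generate(image)  # list of dicts: 'segmentation', 'area', 'bbox', ...

# Keep the three most prominent masks (here ranked by area as a proxy for prominence)
top_masks = sorted(masks, key=lambda m: m["area"], reverse=True)[:3]
```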

2.5. Data Analysis Methodology

The present work developed an image analysis software assisted by a lightweight machine learning system for pesticide residue detection for citrus growers. This effectively overcomes the limitations and complexities of in-field data collection associated with previously reported techniques such as NIR and UV hyperspectral apparatus.
In this study, the proposed models were trained and fine-tuned on a computer with an NVIDIA GeForce GTX 1650 graphics card (NVIDIA, Santa Clara, CA, USA), and the algorithms were implemented in the open-source programming language Python (version 3.10.11). Although SAM could be fine-tuned to segment the pesticide residue coverage area directly, it is a large model. Therefore, to provide a competitive and accessible algorithm for digital residue analysis, a well-established convolutional neural network (CNN) was integrated. The CNN enabled fine-tuning of the segmentation model with the limited annotated data available from the laboratory- and field-condition images. An added benefit of the CNN for image segmentation is its use of fewer computing resources compared with the larger SAM.

2.5.1. Convolutional Neural Network (CNN)

A convolutional neural network (CNN) architecture was designed based on previously reported studies, with some modifications [28,48,49]. The proposed network is divided into three major blocks (Figure 3). The architecture consists of the CNN backbone, comprising ResNet50 and a feature pyramid network (FPN); the feature maps feeding the region proposal network; and region of interest (RoI) alignment. From the RoI, fully connected layers produce the classifications and bounding boxes, and fully convolutional networks produce the final masks. Moreover, to achieve the aim of determining pesticide residue coverage on citrus leaves, a more specialized Mask Region-Based CNN (Mask R-CNN) was integrated into this work [50].
The Mask R-CNN is widely known for its ability to perform instance segmentation effectively, which is particularly helpful in this project for object detection and the creation of a mask for each instance [51]. This capability aligns with the project's aim of distinguishing multiple objects (e.g., leaves) appearing in a single image. A recent model, YOLOv11 [52], was also considered for object detection, instance segmentation, and image classification. This model builds on previous versions of YOLO [53] to achieve higher accuracy, with enhanced instance segmentation and advanced data augmentation. Nevertheless, Mask R-CNN can outperform YOLOv11 in scenarios where accurate instance segmentation is required. Since YOLOv11 is more applicable to driving, object tracking, and surveillance [52,54], Mask R-CNN was selected as the more suitable alternative considering the project's objective, its higher accuracy, and its well-established applicability to image analysis and robotic vision [55,56,57].
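The paper does not name the deep learning framework used; as one plausible realization, the sketch below shows how a COCO-pretrained Mask R-CNN with a ResNet50+FPN backbone can be instantiated in PyTorch/torchvision and its prediction heads replaced for new classes. The three-class label set (background, OTC residue, Kocide 3000 residue) is an assumption for illustration.

```python
from torchvision.models.detection import maskrcnn_resnet50_fpn
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor
from torchvision.models.detection.mask_rcnn import MaskRCNNPredictor

# Mask R-CNN with a ResNet50+FPN backbone, pre-trained on MS COCO
model = maskrcnn_resnet50_fpn(weights="DEFAULT")

num_classes = 3  # assumed label set: background + OTC residue + Kocide 3000 residue

# Replace the box classification head for the new label set
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)

# Replace the mask prediction head to emit one mask per new class
in_channels = model.roi_heads.mask_predictor.conv5_mask.in_channels
model.roi_heads.mask_predictor = MaskRCNNPredictor(in_channels, 256, num_classes)
```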

2.5.2. Mask-Region-Based Convolutional Neural Network (Mask R-CNN)

To determine the pesticide residue coverage, a Mask R-CNN was developed based on previously reported architectures integrating Faster R-CNN and Fast R-CNN [50,58,59]. The proposed standard Mask R-CNN model was based on the popular ResNet50 backbone, a ubiquitous pre-trained convolutional neural network, together with an FPN [60], which creates a multi-scale feature pyramid spanning varying spatial resolutions, from rich semantic information to precise spatial detail, by combining features from different levels of the backbone network. Generally, the Mask R-CNN consists of four stages. (1) The Region Proposal Network (RPN) generates region proposals (bounding boxes) that have a probability of containing objects, operating on the feature map produced by the aforementioned backbone network; the Region of Interest (RoI) Align layer is introduced to align the features within a region of interest with the spatial grid of the output feature map, which is crucial in preventing the information loss that can occur when the RoI's spatial coordinates are quantized to the nearest integer. (2) The RoI pooling layer resizes the regions to a fixed dimension regardless of the original aspect ratio. (3) The feature extraction component feeds the resized RoIs to the CNN to extract the features of each region. (4) Classification and bounding box regression detect each object and classify it into the most suitable class, generating a binary mask for each detected object by delineating the precise pixel values of each object instance. Altogether, these processes are based on the image features, with the bounding box specified by the coordinates of the object confined within the image. The Mask R-CNN was selected due to its wide acceptance and usage as a segmentation model in the computer vision field.
To avoid poor performance during training and testing, as well as false positives and low-quality masks, the Mask R-CNN was first trained on a well-annotated, comprehensive dataset, the MS COCO dataset [61]. The limited availability of specific high-quality annotated data led to the decision to fine-tune a pre-trained network rather than train the network solely on the collected data. The model was fine-tuned to identify and classify specific features related to the study objectives, following a previously reported method with some modifications [62]. The Mask R-CNN then generated a high-quality segmentation mask of the area of the leaf coated with pesticide residue, and the number of pixels in the segmented image was used to calculate the approximate pesticide (OTC and Kocide 3000) residue coverage percentage.
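At inference time, the coverage estimate reduces to counting mask pixels, mirroring the manual equation. A minimal sketch under the same torchvision assumption follows; image_tensor (a normalized CHW float tensor) and leaf_pixels (the leaf-area pixel count from the SAM preprocessing mask) are assumed to be prepared upstream.

```python
import torch

model.eval()
with torch.no_grad():
    output = model([image_tensor])[0]    # dict with 'boxes', 'labels', 'scores', 'masks'

# Binarize the highest-scoring instance mask (masks have shape [N, 1, H, W])
residue_mask = output["masks"][0, 0] > 0.5

# Coverage (%) = residue pixels / leaf pixels, as in the manual method
coverage = 100.0 * residue_mask.sum().item() / leaf_pixels
```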
To acquire classifications and segmentation masks meeting the requirements, the ResNet-based Mask R-CNN was fine-tuned for the specific task of pesticide residue detection and segmentation. The limited size of the custom dataset of pesticide-sprayed plant images was a crucial factor in the selection of hyperparameters such as the number of training epochs, the learning rate, and the regularization techniques used to optimize the model's accuracy.
The annotation tool VGG Image Annotator (VIA) was used to perform pixel-wise annotation on the custom dataset to provide fine-grained segmentation training masks. The dataset was divided into distinct subsets, with 70% for training, 20% for validation, and 10% for testing. To mitigate the risk of overfitting, especially given the small size of the dataset, 50 training epochs were used. Another critical hyperparameter was the learning rate, which was set to 0.001, allowing the model to make gradual adjustments to the pre-trained weights to adapt to the new classes in the annotated dataset. Further techniques, namely Dropout and L2 regularization, were used to prevent overfitting and improve generalization. Dropout is a technique in which a random subset of the neurons in the fully connected layers is deactivated, or "dropped out", during training, ensuring that the model becomes less reliant on particular neurons and learns the features more robustly, hence preventing overfitting. A dropout rate of 0.25 was selected, meaning that 25% of the neurons were randomly deactivated at each training step. Similarly, L2 regularization, also known as weight decay, adds a penalty for larger weights, promoting a more even distribution of weights and preventing overfitting. Through experimentation, a regularization parameter of 0.001 was found to be the most effective for this dataset.
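The stated hyperparameters translate directly into a training configuration; the sketch below shows one way to set them up in PyTorch, again as an assumption about the framework. Here, train_loader is an assumed DataLoader yielding image/target pairs in torchvision's detection format, the choice of SGD is illustrative, and the 0.25 dropout would live inside a customized box head, which is not shown.

```python
import torch

# Learning rate 0.001 and L2 regularization (weight decay) 0.001, as stated above
params = [p for p in model.parameters() if p.requires_grad]
optimizer = torch.optim.SGD(params, lr=0.001, weight_decay=0.001)

for epoch in range(50):                    # 50 epochs to limit overfitting
    model.train()
    for images, targets in train_loader:   # assumed DataLoader of (images, targets)
        loss_dict = model(images, targets) # torchvision detection models return a loss dict
        loss = sum(loss_dict.values())
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```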
The dataset used consists of a total of 1164 images, split into the following groups: 174 images showcasing on-field conditions in citrus groves, 30 images showcasing the effect of chromatic illumination on rust/lesions present on isolated citrus leaves, and 960 images of OTC- and Kocide 3000-treated leaves taken under laboratory conditions. To test the efficacy of SAM for segmenting the acquired image data, 100 images each from the on-field and laboratory conditions were chosen and their segmentation masks were manually annotated. The accuracy of SAM's generated masks was tested by comparing the pixel locations of the generated and human-annotated binary masks. For the main pesticide detection task, the 960 laboratory images were split into 768 training images for fine-tuning and 192 validation images.

3. Results and Discussion

3.1. Manual Residue Coverage Area Determination

The feasibility of the proposed software was evaluated with a manual residue coverage area determination analysis. This proof-of-concept experiment involved independent evaluation of citrus leaves treated with OTC and Kocide 3000. The schematic in Figure 4A represents the process of pesticide residue deposition on citrus leaves in a grove after foliar spray. Due to elevated temperatures in field conditions, the pesticide droplets tend to evaporate, leading to a visible build-up of residue on the surface over time. The area of pesticide residue coverage was assessed manually using RGB color separation with the open-source software ImageJ to quantify the pixels within the blue (Kocide 3000) and green (OTC) channels. Figure 4B,C shows the linear response of pesticide coverage on the leaf surface with respect to the concentration of the applied pesticide. Residue at a higher concentration exhibits a visible deposition pattern with a larger coverage area. Both pesticides, OTC and Kocide 3000, showed good correlation, with R² values of 0.974 and 0.870, respectively. As in previously reported work, the OTC residue can only be observed under UV light, and the OTC images were therefore acquired under this illumination (Figure 4D) [41]. Meanwhile, the Kocide 3000 residue was visible under white light as a light blue solid (Figure 4E). This difference in illumination allows higher sensitivity and visualization of the greater extent of the OTC residue, with 65.7% coverage at the highest concentration, in contrast to Kocide 3000, which showed coverage of only up to 4.3% at the highest copper (Cu) concentration.
The concentration range used in this proof-of-concept experiment was based on the application rates used in citrus groves, per the suggested pesticide label. Altogether, these results served as a baseline study indicating the feasibility of developing ML-powered image analysis software to determine pesticide residue coverage on the leaf surface. The good visual response between the applied pesticide (OTC and Kocide 3000) concentration and the leaf surface coverage area corroborates the suitability of both agrochemicals as model pesticides for image residue analysis. This image analysis tool will foster the development of affordable, reliable software for growers to assess residue in the field and guide their crop management strategies, including pesticide use rate and frequency.

3.2. Analysis of Image Data Preprocessing

As previously stated, the region of interest in this study was the leaf surface, which required proper isolation from the background and/or other miscellaneous objects in the foreground. The foundational Segment Anything Model (SAM) was implemented during the image data preprocessing step to generate a bounding box for image isolation and segmentation masks (Figure 5A,B). Figure 5 shows a visual representation of the preprocessing segmentation performed by the proposed software. SAM generated up to 17 segmentation masks per image; however, only 3 were used in further analysis. The model was evaluated on two separate datasets: digital images of leaves sprayed with pesticide under controlled laboratory conditions, and digital images of one-year-old citrus trees under field conditions.
To assess the accuracy of the segmentation model on both datasets, the average segmentation accuracy (IoU) is summarized in Table 1. The IoU values for the laboratory-condition dataset ranged from 97.1% to 99.5%, whereas the IoU values for the field-condition dataset ranged from 80.8% to 98.2%. Although the segmentation accuracy for the field-condition dataset was lower than desired, an average segmentation accuracy of 0.92 was deemed effective at isolating the leaf surface as the RoI, separating it from the unnecessary objects present in the image. Furthermore, this accuracy level is to be expected under field conditions, considering the multitude of interfering objects in a grove and the fact that untrained growers collect the image data.
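For clarity, the IoU metric reported in Table 1 compares a generated binary mask against a human-annotated one; a minimal NumPy sketch of that computation is shown below.

```python
import numpy as np

def iou(pred: np.ndarray, true: np.ndarray) -> float:
    """Intersection over Union between two binary segmentation masks."""
    pred, true = pred.astype(bool), true.astype(bool)
    intersection = np.logical_and(pred, true).sum()
    union = np.logical_or(pred, true).sum()
    return float(intersection / union) if union else 0.0
```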

3.3. Data Analysis Using a Fine-Tuned Mask R-CNN

The data analysis was performed using the developed Mask R-CNN with the integration of Faster R-CNN. This network identified pesticide residue on the leaf surface after training with well-annotated datasets (e.g., the MS COCO dataset). Fine-tuning was then performed to identify and classify specific features of OTC and Kocide 3000 residue on the leaf surface. Figure 6 shows an example of the Mask R-CNN applied to an acquired image of a citrus leaf sprayed with OTC (200 ppm). The developed model produced a high-quality segmentation mask of the area of the leaf coated with pesticide residue. The model then uses the number of pixels in the segmented image to determine the residue coverage percentage, calculated using the same equation as the manual pesticide coverage experiments.
In this study, to evaluate the performance of the proposed model, four common metrics were assessed: Accuracy, Recall, Precision, and F1 score. The results of the metrics for both pesticides OTC and Kocide 3000 are summarized in Table 2.
The accuracy metric represents the fraction of correctly classified data instances, in this case the number of correctly labeled pixels relative to the total number of pixels. The developed model showed slightly higher accuracy for OTC than for Kocide 3000, with values of 0.843 and 0.828, respectively. This is likely due to the higher sensitivity achieved when visualizing OTC images captured under UV light, compared with Kocide 3000 images captured under white light. These results support the efficacy of the developed software for assessing pesticide residue coverage on citrus leaves. It is worth noting that for unbalanced datasets, relying on the accuracy metric alone can be misleading; such imbalance arises when there is a significant difference between the number of pixels in one class and another. For this reason, it is important to also evaluate Precision, Recall, and F1 score, as done in this work.
Precision and Recall are two metrics that trade off against each other. Precision is the ratio of true positive classifications to the total number of positive classifications, whereas Recall is the ratio of true positive classifications to the sum of true positives and false negatives. Ideally, both Precision and Recall would equal 1, since the ideal number of misclassifications is 0. Table 2 shows that the Precision and Recall of the proposed model for OTC were 0.877 and 0.842, respectively, higher than the corresponding values for Kocide 3000 of 0.857 and 0.821. These results corroborate the reliability and selectivity of the developed model in detecting pesticide residue coverage as distinct from other leaf features or defects.
Individually, these two metrics (Precision and Recall) give a better picture of the misclassifications made by the predictive model. The F1 score, which combines Precision and Recall, is therefore arguably the most relevant single metric for assessing model performance. The F1 score, or Dice–Sørensen coefficient, uses the harmonic mean of Precision and Recall to quantify classifier quality on a scale from 0 to 1. The developed model exhibited high classifier quality for both OTC and Kocide 3000, with values of 0.823 and 0.804, respectively. Although these metrics showcase good performance, there is room for further improvement, for instance by addressing factors that contribute to noise (e.g., leaf orientation, camera lens, and lesions, among others) and currently hinder higher performance.
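All four metrics reported in Table 2 follow from pixel-wise confusion counts between the predicted and annotated residue masks; the following is a minimal sketch of that computation.

```python
import numpy as np

def pixel_metrics(pred: np.ndarray, true: np.ndarray) -> dict:
    """Pixel-wise accuracy, precision, recall, and F1 score from two binary masks."""
    pred, true = pred.astype(bool), true.astype(bool)
    tp = np.logical_and(pred, true).sum()    # true positives
    fp = np.logical_and(pred, ~true).sum()   # false positives
    fn = np.logical_and(~pred, true).sum()   # false negatives
    tn = np.logical_and(~pred, ~true).sum()  # true negatives
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return {"accuracy": (tp + tn) / (tp + fp + fn + tn),
            "precision": precision, "recall": recall, "f1": f1}
```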
Altogether, these findings demonstrate the successful development of image analysis software assisted by a machine learning system for pesticide residue coverage determination. The developed software showed strong reliability and versatility for field-condition image analysis. This technological tool was designed to assist citrus growers in their decision-making process during the pesticide application cycle and in their efforts towards more sustainable agricultural practices.

4. Conclusions

In the present work, a reliable, non-destructive method for detecting and assessing pesticide residue coverage on the surface of citrus leaves was developed. This software provides an affordable alternative for citrus growers to use in field conditions to guide their pesticide application cycles. The 3D-printed sample compartment prototype fabricated for this work is robust, inexpensive, and portable, enabling a more controlled environment for image data acquisition under field conditions. Both studied pesticides, Kocide 3000 and OTC, exhibited a visible pattern of residue deposition on the leaves, allowing surface image analysis to determine their coverage. The residue image analysis showed a linearly proportional relationship between the applied pesticide concentration and the coverage area, a desirable property for image analysis software assisted by a lightweight machine learning system. Furthermore, the SAM segmentation model combined with the fine-tuned, pre-trained Mask R-CNN showed excellent performance metrics in generating segmentation masks of pesticide residue coverage. These findings underscore the advantages of integrating recent technological advancements to foster more efficient, affordable, and sustainable agricultural practices.
The developed platform has the potential to directly benefit citrus growers, providing a suitable and equitable alternative for pesticide residue coverage analysis. This technology can mitigate the socio-economic challenges faced by small-scale citrus growers by providing real-time pesticide residue coverage information with an affordable tool. The on-site residue coverage information may guide crop management decision-making and mitigate excessive pesticide overuse. Moreover, in regions with limited technological infrastructure, this tool may bridge the gap between large and small growing operations through cost savings and reduced financial constraints.
In the future, the current limitations of this methodology and its general applicability to other crops will be addressed. These challenges include improving accuracy and F1 scores, increasing sample size, and refining data augmentation strategies. Further studies will therefore incorporate larger numbers of samples, including samples subjected to rainfall events, to better assess residue persistence under harsher environmental conditions. In addition, this work will serve as a foundational study for developing broader software encompassing other pesticides and crops relevant to the agricultural industry. Efforts are also underway to improve the model's accuracy and F1 score through standard data augmentation techniques, especially ahead of future end-user testing. Expanding the annotated dataset could likewise overcome some of the challenges associated with leaf defects causing false positives. These improvements will allow the software development process to proceed to later stages involving user interface design and end-user testing feedback.

Author Contributions

Conceptualization, A.B. and S.S.; methodology, A.B., G.D., and E.D.; software, A.B.; validation, A.B.; formal analysis, A.B.; investigation, A.B., G.D., and E.D.; resources, S.S.; data curation, A.B.; writing—original draft preparation, A.B.; writing—review and editing, E.D., G.D., C.C., and S.S.; supervision, S.S.; project administration, S.S. and C.C.; funding acquisition, S.S. and C.C. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by an intramural research program of the U.S. Department of Agriculture, National Institute of Food and Agriculture, project #2024-67022-41788. The findings and conclusions in this preliminary publication have not been formally disseminated by the U.S. Department of Agriculture and should not be construed to represent any agency determination or policy.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data presented in this study are available on request from the corresponding author. The data are not publicly available due to privacy.

Acknowledgments

The authors would like to acknowledge Yunjun Xu for their valuable insight into this work. Additionally, we acknowledge the UCF Department of Chemistry, the Estes Citrus Inc. grove in Vero Beach, Florida, and the NanoScience Technology Center for providing appropriate resources and facilities for data collection.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Donaldson, D.; Kiely, T.; Grube, A. Pesticides Industry Sales and Usage: 1998–1999 Market Estimates; US Environmental Protection Agency: Washington, DC, USA; Report No. EPA-733-R-02-001. Available online: https://shorturl.at/m5ooC (accessed on 15 July 2024).
  2. Sarkozi, A. New Standards to Curb the Global Spread of Plant Pests and Diseases; FAO: Rome, Italy, 2019. [Google Scholar]
  3. Lowry, G.V.; Avellan, A.; Gilbertson, L.M. Opportunities and challenges for nanotechnology in the agri-tech revolution. Nat. Nanotechnol. 2019, 14, 517–522. [Google Scholar] [CrossRef] [PubMed]
  4. Zhang, W. Global pesticide use: Profile, trend, cost/benefit and more. Proc. Int. Acad. Ecol. Environ. Sci. 2018, 8, 1. [Google Scholar]
  5. Wang, D.; Saleh, N.B.; Byro, A.; Zepp, R.; Sahle-Demessie, E.; Luxton, T.P.; Ho, K.T.; Burgess, R.M.; Flury, M.; White, J.C.; et al. Nano-enabled pesticides for sustainable agriculture and global food security. Nat. Nanotechnol. 2022, 17, 347–360. [Google Scholar] [CrossRef] [PubMed]
  6. Fenner, K.; Canonica, S.; Wackett, L.P.; Elsner, M. Evaluating pesticide degradation in the environment: Blind spots and emerging opportunities. Science 2013, 341, 752–758. [Google Scholar] [CrossRef] [PubMed]
  7. Walker, G.W.; Kookana, R.S.; Smith, N.E.; Kah, M.; Doolette, C.L.; Reeves, P.T.; Lovell, W.; Anderson, D.J.; Turney, T.W.; Navarro, D.A. Ecological Risk Assessment of Nano-enabled Pesticides: A Perspective on Problem Formulation. J. Agric. Food Chem. 2018, 66, 6480–6486. [Google Scholar] [CrossRef]
  8. Lewis, K.A.; Tzilivakis, J.; Warner, D.J.; Green, A. An international database for pesticide risk assessments and management. Hum. Ecol. Risk Assess. Int. J. 2016, 22, 1050–1064. [Google Scholar] [CrossRef]
  9. Fagnano, M.; Agrelli, D.; Pascale, A.; Adamo, P.; Fiorentino, N.; Rocco, C.; Pepe, O.; Ventorino, V. Copper accumulation in agricultural soils: Risks for the food chain and soil microbial populations. Sci. Total Environ. 2020, 734, 139434. [Google Scholar] [CrossRef]
  10. Skidmore, M.W.; Ambrus, Á. Pesticide metabolism in crops and livestock. In Pesticide Residues in Food and Drinking Water: Human Exposure and Risks; John Wiley & Sons, Ltd.: Hoboken, NJ, USA, 2003; pp. 63–120. [Google Scholar]
  11. Tudi, M.; Daniel Ruan, H.; Wang, L.; Lyu, J.; Sadler, R.; Connell, D.; Chu, C.; Phung, D.T. Agriculture development, pesticide application and its impact on the environment. Int. J. Environ. Res. Public Health 2021, 18, 1112. [Google Scholar] [CrossRef]
  12. Zikankuba, V.L.; Mwanyika, G.; Ntwenya, J.E.; James, A. Pesticide regulations and their malpractice implications on food and environment safety. Cogent Food Agric. 2019, 5, 1601544. [Google Scholar] [CrossRef]
  13. Wossink, G.A.; Feitshans, T.A. Pesticide policies in the European Union. Drake J. Agric. L. 2000, 5, 223. [Google Scholar]
  14. Möhring, N.; Ingold, K.; Kudsk, P.; Martin-Laurent, F.; Niggli, U.; Siegrist, M.; Studer, B.; Walter, A.; Finger, R. Pathways for advancing pesticide policies. Nat. Food 2020, 1, 535–540. [Google Scholar] [CrossRef] [PubMed]
  15. Goldman, L.R. Managing pesticide chronic health risks: US policies. J. Agromedicine 2007, 12, 67–75. [Google Scholar] [CrossRef] [PubMed]
  16. Li, C.; Begum, A.; Xue, J. Analytical methods to analyze pesticides and herbicides. Water Environ. Res. 2020, 92, 1770–1785. [Google Scholar] [CrossRef] [PubMed]
  17. Cserháti, T.; Szogyi, M. Chromatographic determination of pesticides in foods and food products. Eur. Chem. Bull 2012, 1, 58–68. [Google Scholar] [CrossRef]
  18. Liang, H.; Bilon, N.; Hay, M.T. Analytical methods for pesticide residues. Water Environ. Res. 2014, 86, 2132–2155. [Google Scholar] [CrossRef]
  19. Plan, E.A. Agricultural Marketing Service; United States Department of Agriculture: Washington, DC, USA, 2023. [Google Scholar]
  20. Raj, E.F.I.; Appadurai, M.; Athiappan, K. Precision Farming in Modern Agriculture. In Smart Agriculture Automation Using Advanced Technologies: Data Analytics and Machine Learning, Cloud Architecture, Automation and IoT; Choudhury, A., Biswas, A., Singh, T.P., Ghosh, S.K., Eds.; Springer: Singapore, 2021; pp. 61–87. [Google Scholar]
  21. Shaikh, T.A.; Mir, W.A.; Rasool, T.; Sofi, S. Machine Learning for Smart Agriculture and Precision Farming: Towards Making the Fields Talk. Arch. Comput. Methods Eng. 2022, 29, 4557–4597. [Google Scholar] [CrossRef]
  22. Hao, G.-F.; Zhao, W.; Song, B.-A. Big Data Platform: An Emerging Opportunity for Precision Pesticides. J. Agric. Food Chem. 2020, 68, 11317–11319. [Google Scholar] [CrossRef]
  23. Suhag, S.; Singh, N.; Jadaun, S.; Johri, P.; Shukla, A.; Parashar, N. IoT based soil nutrition and plant disease detection system for smart agriculture. In Proceedings of the 2021 10th IEEE International Conference on Communication Systems and Network Technologies (CSNT), Bhopal, India, 18–19 June 2021; pp. 478–483. [Google Scholar]
  24. Abioye, E.A.; Hensel, O.; Esau, T.J.; Elijah, O.; Abidin, M.S.Z.; Ayobami, A.S.; Yerima, O.; Nasirahmadi, A. Precision Irrigation Management Using Machine Learning and Digital Farming Solutions. AgriEngineering 2022, 4, 70–103. [Google Scholar] [CrossRef]
  25. Nyéki, A.; Neményi, M. Crop Yield Prediction in Precision Agriculture. Agronomy 2022, 12, 2460. [Google Scholar] [CrossRef]
  26. Talaviya, T.; Shah, D.; Patel, N.; Yagnik, H.; Shah, M. Implementation of artificial intelligence in agriculture for optimisation of irrigation and application of pesticides and herbicides. Artif. Intell. Agric. 2020, 4, 58–73. [Google Scholar] [CrossRef]
  27. Suárez, A.; Molina, R.S.; Ramponi, G.; Petrino, R.; Bollati, L.; Sequeiros, D. Pest detection and classification to reduce pesticide use in fruit crops based on deep neural networks and image processing. In Proceedings of the 2021 XIX Workshop on Information Processing and Control (RPIC), San Juan, Argentina, 3–5 November 2021; pp. 1–6. [Google Scholar]
  28. Ye, W.; Yan, T.; Zhang, C.; Duan, L.; Chen, W.; Song, H.; Zhang, Y.; Xu, W.; Gao, P. Detection of Pesticide Residue Level in Grape Using Hyperspectral Imaging with Machine Learning. Foods 2022, 11, 1609. [Google Scholar] [CrossRef]
  29. Kim, S.B.; Kim, D.S.; Mo, X. An image segmentation technique with statistical strategies for pesticide efficacy assessment. PLoS ONE 2021, 16, e0248592. [Google Scholar] [CrossRef] [PubMed]
  30. Hosseini, H.; Xiao, B.; Jaiswal, M.; Poovendran, R. On the limitation of convolutional neural networks in recognizing negative images. In Proceedings of the 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), Cancun, Mexico, 18–21 December 2017; pp. 352–358. [Google Scholar]
  31. Wu, J.-H.; Liu, T.A.; Hsu, W.-T.; Ho, J.H.-C.; Lee, C.-C. Performance and limitation of machine learning algorithms for diabetic retinopathy screening: Meta-analysis. J. Med. Internet Res. 2021, 23, e23863. [Google Scholar] [CrossRef] [PubMed]
  32. Tsagkaris, A.S.; Pulkrabova, J.; Hajslova, J. Optical Screening Methods for Pesticide Residue Detection in Food Matrices: Advances and Emerging Analytical Trends. Foods 2021, 10, 88. [Google Scholar] [CrossRef] [PubMed]
  33. Weber, F.; Rosa, G.; Terra, F.; Oldoni, A.; Drews, P. A low cost system to optimize pesticide application based on mobile technologies and computer vision. In Proceedings of the 2018 Latin American Robotic Symposium, 2018 Brazilian Symposium on Robotics (SBR) and 2018 Workshop on Robotics in Education (WRE), João Pessoa, Brazil, 6–10 November 2018; pp. 345–350. [Google Scholar]
  34. Bouchard, C.; Bernatchez, R.; Lavoie-Cardinal, F. Addressing annotation and data scarcity when designing machine learning strategies for neurophotonics. Neurophotonics 2023, 10, 044405. [Google Scholar] [CrossRef] [PubMed]
  35. Osinga, S.A.; Paudel, D.; Mouzakitis, S.A.; Athanasiadis, I.N. Big data in agriculture: Between opportunity and solution. Agric. Syst. 2022, 195, 103298. [Google Scholar] [CrossRef]
  36. Makino, Y.; Li MeiLan, L.M.; Oshita, S.; Kawagoe, Y.; Matsuoka, T.; Hashimoto, K.; Arai, K. Nondestructive detection of pesticides on fruits and vegetables using UV camera. In Proceedings of the International Conference of Agricultural Engineering—CIGR-AgEng 2012: Agriculture and Engineering for a Healthier life, Valencia, Spain, 8–12 July 2012. [Google Scholar]
  37. Jamshidi, B.; Mohajerani, E.; Jamshidi, J. Developing a Vis/NIR spectroscopic system for fast and non-destructive pesticide residue monitoring in agricultural product. Measurement 2016, 89, 1–6. [Google Scholar] [CrossRef]
  38. Soltani Nazarloo, A.; Rasooli Sharabiani, V.; Abbaspour Gilandeh, Y.; Taghinezhad, E.; Szymanek, M.; Sprawka, M. Feasibility of using VIS/NIR spectroscopy and multivariate analysis for pesticide residue detection in tomatoes. Processes 2021, 9, 196. [Google Scholar] [CrossRef]
  39. Chen, J.; Peng, Y.; Li, Y.; Wang, W.; Wu, J.; Shan, J. Rapid detection of vegetable pesticide residue based on hyperspectral fluorescence imaging technology. Trans. Chin. Soc. Agric. Eng. 2010, 26, 1–5. [Google Scholar]
  40. Sun, J.; Hu, Y.; Zou, Y.; Geng, J.; Wu, Y.; Fan, R.; Kang, Z. Identification of pesticide residues on black tea by fluorescence hyperspectral technology combined with machine learning. Food Sci. Technol. 2022, 42, e55822. [Google Scholar] [CrossRef]
  41. Pereira, J.; Moreno, D.N.; Giannelli, G.G.; Davidson, E.; Rivera-Huertas, J.; Wang, H.; Santra, S. Targeted delivery of oxytetracycline to the epidermal cell junction and stomata for crop protection. Environ. Sci. Nano 2023, 10, 3012–3024. [Google Scholar] [CrossRef]
  42. Xiang, J.; Hare, M.; Vickers, L.; Kettlewell, P. Estimation of film antitranspirant spray coverage on rapeseed (Brassica napus L.) leaves using titanium dioxide. Crop Prot. 2021, 142, 105531. [Google Scholar] [CrossRef]
  43. Schutte, G.C.; Kotze, C.; van Zyl, J.G.; Fourie, P.H. Assessment of retention and persistence of copper fungicides on orange fruit and leaves using fluorometry and copper residue analyses. Crop Prot. 2012, 42, 1–9. [Google Scholar] [CrossRef]
  44. Kirillov, A.; Mintun, E.; Ravi, N.; Mao, H.; Rolland, C.; Gustafson, L.; Xiao, T.; Whitehead, S.; Berg, A.C.; Lo, W.-Y. Segment anything. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Paris, France, 1–6 October 2023; pp. 4015–4026. [Google Scholar]
  45. Ke, L.; Ye, M.; Danelljan, M.; Tai, Y.-W.; Tang, C.-K.; Yu, F. Segment anything in high quality. In Proceedings of the Advances in Neural Information Processing Systems 36 (NeurIPS 2023) Main Conference Track, New Orleans, LA, USA, 10–16 December 2023. [Google Scholar]
  46. Chen, Z.; Fang, G.; Ma, X.; Wang, X. 0.1% Data Makes Segment Anything Slim. arXiv 2023, arXiv:2312.05284. [Google Scholar]
  47. Zhao, X.; Ding, W.; An, Y.; Du, Y.; Yu, T.; Li, M.; Tang, M.; Wang, J. Fast segment anything. arXiv 2023, arXiv:2306.12156. [Google Scholar]
  48. LeCun, Y.; Bengio, Y. Convolutional networks for images, speech, and time series. Handb. Brain Theory Neural Netw. 1995, 3361, 1995. [Google Scholar]
  49. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778. [Google Scholar]
  50. He, K.; Gkioxari, G.; Dollár, P.; Girshick, R. Mask r-cnn. In Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017; pp. 2961–2969. [Google Scholar]
  51. Bharati, P.; Pramanik, A. Deep learning techniques—R-CNN to mask R-CNN: A survey. In Proceedings of the Computational Intelligence in Pattern Recognition: Proceedings of CIPR 2019, 2020; pp. 657–668.
  52. Jensen, M.B.; Nasrollahi, K.; Moeslund, T.B. Evaluating state-of-the-art object detector on challenging traffic light data. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, Honolulu, HI, USA, 21–26 July 2017; pp. 9–15. [Google Scholar]
  53. Sapkota, R.; Meng, Z.; Churuvija, M.; Du, X.; Ma, Z.; Karkee, M. Comprehensive Performance Evaluation of YOLO11, YOLOv10, YOLOv9 and YOLOv8 on Detecting and Counting Fruitlet in Complex Orchard Environments. arXiv 2024, arXiv:2407.12040. [Google Scholar]
  54. Jain, S.; Indu, S.; Goel, N. Comparative Analysis of YOLO Algorithms for Intelligent Traffic Monitoring. In Proceedings of the International Conference on Data Analytics and Computing, San Francisco, CA, USA, 10–14 July 2022; pp. 159–168. [Google Scholar]
  55. Anantharaman, R.; Velazquez, M.; Lee, Y. Utilizing mask R-CNN for detection and segmentation of oral diseases. In Proceedings of the 2018 IEEE International Conference on Bioinformatics and Biomedicine (BIBM), Madrid, Spain, 3–6 December 2018; pp. 2197–2204. [Google Scholar]
  56. Padma, T.; Kumari, C.U.; Yamini, D.; Pravallika, K.; Bhargavi, K.; Nithya, M. Image segmentation using Mask R-CNN for tumor detection from medical images. In Proceedings of the 2022 International Conference on Electronics and Renewable Systems (ICEARS), Tuticorin, India, 16–18 March 2022; pp. 1015–1021. [Google Scholar]
  57. Liu, J.; Li, P. A mask R-CNN model with improved region proposal network for medical ultrasound image. In Proceedings of the Intelligent Computing Theories and Application: 14th International Conference, ICIC 2018, Wuhan, China, 15–18 August 2018; Proceedings, Part II 14. 2018; pp. 26–33. [Google Scholar]
  58. Girshick, R. Fast r-cnn. In Proceedings of the IEEE International Conference on Computer Vision, Santiago, Chile, 7–13 December 2015; pp. 1440–1448. [Google Scholar]
  59. Ren, S.; He, K.; Girshick, R.; Sun, J. Faster r-cnn: Towards real-time object detection with region proposal networks. Adv. Neural Inf. Process. Syst. 2015, 28. [Google Scholar] [CrossRef]
  60. Van Quyen, T.; Kim, M.Y. Feature pyramid network with multi-scale prediction fusion for real-time semantic segmentation. Neurocomputing 2023, 519, 104–113. [Google Scholar] [CrossRef]
  61. Lin, T.-Y.; Maire, M.; Belongie, S.; Hays, J.; Perona, P.; Ramanan, D.; Dollár, P.; Zitnick, C.L. Microsoft coco: Common objects in context. In Proceedings of the Computer Vision–ECCV 2014: 13th European Conference, Zurich, Switzerland, 6–12 September 2014; Proceedings, Part V 13. Springer: Berlin/Heidelberg, Germany, 2014; pp. 740–755. [Google Scholar]
  62. Sonntag, D.; Barz, M.; Zacharias, J.; Stauden, S.; Rahmani, V.; Fóthi, Á.; Lőrincz, A. Fine-tuning deep CNN models on specific MS COCO categories. arXiv 2017, arXiv:1709.01476. [Google Scholar]
Figure 1. Blueprint of the sample compartment design constructed using Onshape.
Figure 2. Digital images of a citrus leaf with OTC showing the preprocessing of images with the bounding box region of interest within the red box and the output of one segmented image. Source image (Left) with bounding box highlighting the region of interest, and segmented image (Right). Simulated erosion and deposition results in a higher intensity of photoluminescence towards the center of the leaf (Left) due to inherent curvature. Note the debris displayed outside the segmented area (Right), which can result in false positives, hence justifying the need for segmentation and isolation of the region of interest for training the model.
Figure 3. Diagram of the proposed convolutional neural network (CNN) structure for pesticide residue image analysis.
Figure 4. (A) Schematic representation of the pesticide deposition on the tree leaves after foliar spray. (B) Calibration curve of OTC at different concentrations based on green-colored coverage area in the digital images from (D) in relation to the leaf surface. (C) Calibration curve of Kocide 3000 at different copper concentrations based on blue-colored coverage area in the digital images from (E) in relation to the leaf surface. (D) Digital images of citrus leaves under UV light exposure showing the OTC residue coverage area. (E) Digital images of citrus leaves under white light exposure showing the Kocide 3000 residue coverage area.
Figure 5. Overview of the image preprocessing method developed using a leaf sprayed with OTC under laboratory conditions. (A) Digital image used as a source after the bounding box region of interest is determined. (B) Collection of segmentation masks generated with the implemented SAM. SAM generates every possible segmentation mask based on the number of objects it can detect in the image. Only the most prominent masks featuring the desired object in focus are selected, in this case the first three masks generated as ranked by SAM.
Figure 6. Overview of the data processing by the developed Mask R-CNN using a leaf sprayed with OTC under laboratory conditions. The left image in the red box represents the mask generated by the Mask R-CNN, the middle image in the blue box represents the originally segmented image, and the right image in the yellow box represents the superimposed image combining the previous two. The mask generated by the Mask R-CNN provides a close estimate of the percentage of pesticide coverage through pixel-wise estimation. The superimposed image (right) shows both the coverage area and the deposition patterns displayed by the subject.
Table 1. Average segmentation accuracy from datasets collected under laboratory and field conditions.

Condition     SAM      SAM-Box   SAM-Point
Laboratory    0.9894   0.9932    0.9901
Field         0.9232   0.9369    0.9295
Table 2. Accuracy, precision, recall, and F1 score results from testing datasets from Kocide 3000- and OTC-sprayed citrus leaves.

Pesticide     Accuracy   Precision   Recall   F1 Score
OTC           0.843      0.877       0.842    0.823
Kocide 3000   0.828      0.857       0.821    0.804