Article

Coverage Estimation of Droplets Sprayed on Water-Sensitive Papers Based on Domain-Adaptive Segmentation

1 Department of Biosystems Machinery Engineering, Chungnam National University, Daejeon 34134, Republic of Korea
2 Eco-Friendly Hydrogen Electric Tractor & Agricultural Machinery Institute, Chungnam National University, Daejeon 34134, Republic of Korea
3 Department of Agriculture Engineering, National Institute of Agricultural Sciences, Jeonju 54875, Republic of Korea
4 Department of Crops and Food, Jeonbuk State Agricultural Research and Extension Services, Iksan 54591, Republic of Korea
5 Department of Biosystems Engineering, Kangwon National University, Chuncheon 24341, Republic of Korea
* Authors to whom correspondence should be addressed.
Drones 2024, 8(11), 670; https://doi.org/10.3390/drones8110670
Submission received: 11 October 2024 / Revised: 3 November 2024 / Accepted: 8 November 2024 / Published: 13 November 2024
(This article belongs to the Section Drones in Agriculture and Forestry)

Abstract

Unmanned aerial spraying systems (UASSs) are widely used today for the effective control of pests affecting crops, and more advanced UASS techniques are now being developed. To evaluate such systems, artificial targets are typically used to assess droplet coverage through image processing. To evaluate performance accurately, high-quality binary image processing is necessary; however, this involves labor for sample collection, transportation, and storage, as well as the risk of potential contamination during the process. Therefore, rapid assessment in the field is essential. In the present study, we evaluated droplet coverage on water-sensitive papers (WSPs) under field conditions. A dataset was constructed consisting of paired training examples, each comprising source and target data. The source data were high-quality labeled images obtained from WSP samples through image processing, while the target data were aligned RoIs within field images captured in situ. Droplet coverage estimation was performed using an encoder–decoder model, trained on the labeled images, with features adapted to field images via self-supervised learning. The results indicate that the proposed method detected droplet coverage in field images with an error of less than 5%, demonstrating a strong correlation between measured and estimated values (R2 = 0.99). The method proposed in this paper enables immediate and accurate evaluation of the performance of UASSs in situ.

1. Introduction

Unmanned aerial vehicles (UAVs) have garnered significant attention in recent years for their potential use in agricultural pest control, especially because of their wide application range and their ability to operate unaffected by ground obstacles [1,2]. Additionally, the use of an aerial spraying system can mitigate the adverse effects of pesticide use, as these effects are predominantly associated with ground-based applications rather than aerial ones [3]. In the last decade, UAV-based spraying systems, known as unmanned aerial spraying systems (UASSs), have been actively developed, and researchers have investigated novel UASSs to make pest control more precise and cost-effective [4]. For example, variable-rate application systems aim to optimize pesticide usage spatially based on real-time field conditions [5,6]. These systems leverage advanced sensors and algorithms to adjust the pesticide dosage dynamically, ensuring that the right amount of chemical is applied at the right place and time, thereby minimizing waste and environmental impact [7,8].
As an essential step in the development of UASSs, performance evaluation of these spraying systems is crucial to validate their efficacy and safety. This assessment of spray application quality in the field has been carried out based on droplet detection, in which the distribution, size, and coverage of spray droplets on artificial targets placed in the designated area are all examined [9]. One commonly used artificial target is water-sensitive paper (WSP), which is coated with a bromoethyl dye that turns blue when it comes into contact with liquid [10]. In various studies, researchers have estimated droplet deposition on WSP to evaluate agricultural pesticide spray applications. Researchers have assessed the accurate delivery of pesticide droplets to a target [11], optimized pesticide applications by analyzing droplet size and distribution [12], measured droplet deposition and canopy penetration [13], and assessed the droplet spectrum and coverage of pesticide spraying in rice fields [14]. In most studies, droplets have been quantified using image-processing methods based on binarization of WSP and the degree of coverage assessed using liquid, typically water, in the sprayed area [15]. Xun and Gil [9] developed a droplet segmentation method that considers overlapping droplets; they reported that concave point detection with ellipse fitting on binarized WSP enabled a more accurate spray quality to be obtained. Dafsari et al. [11] optimized the air-induction nozzle of an agricultural sprayer to minimize drift and runoff from crops, and measured droplet sizes on WSPs under various experimental conditions.
Some researchers have focused on estimating droplet coverage on WSP for practical use in situ, and have proposed applications that can rapidly measure spray coverage in the form of software on mobile devices. Zhu et al. [16] developed the portable scanning system “DepositScan” using a business card scanner and reported that the system could evaluate droplet deposition, including individual droplet size, droplet distribution, total counts, and coverage area. This system could provide various parameters for quantifying droplet deposition; however, it has limitations due to its low pixel resolution, and it is difficult to use immediately in the field. In situ measurements are more reliable because there is a high possibility of contamination due to disturbances while WSPs are being moved. Therefore, recent applications have been advanced to enable simultaneous capture and evaluation using smartphones. Nansen et al. [17] developed “SnapCard”, which uses a smartphone camera to capture droplet coverage after pesticide spraying; the captured images are analyzed in real time by a web-based algorithm to provide coverage data. This tool offers immediate feedback in the field, enhancing the efficiency of pesticide usage. Özlüoymak and Bolat [8] developed novel imaging software that can estimate spray coverage rates more accurately and more quickly compared with previously used software. Their software uses a high-resolution camera to capture droplet patterns sprayed on the WSPs, and then determines the size, distribution, and density of droplets from the captured images. In addition, Brandoli et al. [18] proposed the smartphone application “DropLeaf”, which can quantify pesticide application coverage in real time. This tool can estimate pesticide coverage with sufficiently high precision to enable mobile phones to replace the expensive, bulky, and time-consuming equipment previously used for such purposes. Studies such as these have demonstrated practical performance on their own datasets; however, most have relied on manually designed feature extraction for segmenting the droplets, which is typically optimized for a specific data domain. This limitation makes it challenging to generalize to other domains and necessitates a time-consuming and labor-intensive repetition of the feature design process when applied to new problems [15]. In real-world captured WSP images, significant variations can be observed even for the same subject, influenced by factors such as lighting conditions, relative positioning, and pose.
Recent studies have demonstrated the potential for achieving high-level segmentation through deep learning (DL), which offers the advantage of automatically extracting domain-invariant features, thereby enabling effective generalization across different domains without manual feature engineering. Specifically, convolutional neural networks (CNNs), designed for the representation of image features, have been introduced; CNNs have exhibited improved performance across various segmentation tasks compared with conventional methods [19]. Liu et al. [20] sought to overcome the resolution problem to ensure individual droplet detection, using a deep learning model to achieve droplet super-resolution semantic segmentation (DSRSS). Their model was developed to accurately detect and measure individual droplets, and the image measurement system they utilized can collect images under various experimental conditions, such as differing spray pressures and angles; however, in most cases, it is difficult to employ this system practically in field settings. While the detection of individual droplets is important, the ability to detect coverage on WSPs, which can easily be captured in the field, is also of practical significance when evaluating the performance of UASSs.
In the Republic of Korea, evaluations of UASS performance have primarily been conducted based on droplet coverage on WSPs, with coverage data obtained from high-quality binarized images using a high-cost image-processing device at KOAT (Korea Agricultural Technology Promotion Agency) [1,21,22]. The direct measurement of coverage from images taken in the field has the potential to effectively address challenges related to costs and contamination during the transport and storage of samples. Therefore, this study addressed droplet segmentation using deep learning, with the objective of estimating droplet coverage within a WSP image captured in situ by representing the domain-invariant features of droplet deposition. A deep learning model was trained in a supervised manner to segment droplets using a labeled dataset processed by an image-processing device, and self-supervised representation learning was used as domain adaptation to reduce distribution differences between features from labeled and field images. The performance of the proposed method was analyzed by comparing the case of using only labeled images of droplet deposition (supervised) with the case that also included self-supervised learning from field images (domain-adapted). Our contributions include the construction of a dataset of paired WSPs consisting of field images and high-quality labeled images, as well as the establishment of a framework for domain adaptation between field images and processed high-quality binary images. This approach ensures the evaluation performance on field data by leveraging existing labeled data.

2. Materials and Methods

2.1. Data Collection

An octocopter drone (SG-10P, Hankook Samgong Co. Ltd., Seoul, Republic of Korea) was used to spray pesticide on a field in order to collect the droplet deposition data. The specifications of the drone are listed in Table 1. Water was used instead of pesticide because, in this study, we were not directly concerned with pest control performance on plants but sought only to evaluate droplet coverage. The XR Nozzle Series (XR110015VS, TeeJet Technologies, Glendale Heights, IL, USA) was used, which features a volume median diameter (VMD) of 189 μm and a flow rate of 0.58 L/min at a spray pressure of 276 kPa.
A spraying control system [22] was used to control the nozzles precisely through spatial information. The system was developed for variable-rate application, and can be controlled so that spraying occurs only within a target area, which is set based on GPS information. The variable-rate application provides various data with a wider range of coverage, compared with bulk spraying.
The field tests were conducted in a field at Kangwon National University (37°52′6″ N, 127°45′7″ E), and the experimental conditions were set up as shown in Figure 1. The test field measured 40 m × 30 m (width × length), and the drone’s flight path was determined by selecting a starting point and an endpoint. The starting point was set 2.5 m inward, vertically and horizontally, from the bottom-left boundary of the diagram, while the endpoint was established 2.5 m inward from the bottom-right boundary. The drone performed a round-trip flight of 25 m in both the forward and backward directions, with the straight paths set at 5 m intervals. A total of four round trips were conducted, and the flight was executed in a racetrack pattern.
Five target areas, designated as the spraying target areas, were selected along the drone flight path, and water-sensitive papers (WSPs) (20301-2N, TeeJet Technologies, Glendale Heights, IL, USA) with holders were installed around the target areas to collect the droplet deposition data. The WSPs were installed densely near the target areas and more sparsely farther away, as larger numbers of WSPs increase labor requirements, making efficient placement necessary. More specifically, we divided the installation areas into two categories based on the center of the target area: (1) a dense measurement area of 5 m × 5 m centered on the target area, and (2) a sparse measurement area comprising the surrounding 10 m × 10 m area, excluding the dense measurement area. The dense measurement areas were the spraying targets, and the spraying system was configured to spray when located within the predefined GPS boundaries of these areas. Droplet deposition was collected with WSPs placed every 1 m² in the dense measurement area and every 4 m² in the sparse measurement area. In addition, reference WSPs were installed at 5 m intervals along the drone’s flight path outside the target areas.
The flight conditions were as follows: a flight altitude of 2 m, a flight speed of 2 m/s, and wind speeds between 0.3 and 1.0 m/s. The environmental conditions were measured using a weather sensor (WTS-800, Beijing Tsingsense Technology Co., Ltd., Beijing, China).

2.2. Droplet Deposition Dataset Construction

Figure 2 illustrates the process of creating training examples from the sprayed WSPs. The images were processed in two ways to gather high-quality labeled images and paired field images. The high-quality labeled images were generated by processing the collected WSP samples with the image-processing device; they consisted of high-resolution scanned images of the WSPs and a binary ground truth distinguishing the sprayed areas from the background [23]. The source image had a resolution of 1920 × 1080 pixels at 96 dpi, and the binary ground truth was obtained through binarization using Otsu’s thresholding after conversion to grayscale. The paired field images were generated from the WSP images captured in situ using a mobile phone (iPhone 14 Pro, Apple Inc., Cupertino, CA, USA); these were the target data for estimating droplet coverage in the present study.
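As a concrete illustration of this binarization step, the following minimal sketch applies Otsu's thresholding after grayscale conversion with OpenCV. The file names are illustrative, and the inverted-threshold convention (which maps the darker droplet stains to white, matching the binary format used in the figures) is our assumption:

```python
import cv2

# Minimal sketch of the ground-truth binarization step; file names are
# illustrative. Droplet stains are darker than the paper, so an inverted
# Otsu threshold maps deposition to white (255) and background to black (0).
scan = cv2.imread("wsp_scan.png")              # high-resolution scanned WSP
gray = cv2.cvtColor(scan, cv2.COLOR_BGR2GRAY)  # convert to grayscale first
_, ground_truth = cv2.threshold(
    gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU
)                                              # Otsu selects the threshold
cv2.imwrite("wsp_gt.png", ground_truth)        # binary GT: droplets vs. background
```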
The WSP images captured in situ went through two processes: object alignment and RoI (region of interest) extraction. The object alignment was carried out by spatially matching the WSP region within the captured image to the processed high-quality labeled image on a two-dimensional image plane, based on the perspective transformation described in Equation (1).
$$
\begin{bmatrix} w x' \\ w y' \\ w \end{bmatrix}
= T \begin{bmatrix} x \\ y \\ 1 \end{bmatrix}
= \begin{bmatrix} t_{11} & t_{12} & t_{13} \\ t_{21} & t_{22} & t_{23} \\ t_{31} & t_{32} & 1 \end{bmatrix}
\begin{bmatrix} x \\ y \\ 1 \end{bmatrix}
\tag{1}
$$

where $T$ is the perspective transformation matrix with 8 parameters, $[x', y']^T$ is the aligned coordinate of the object in the front view, $w$ is the scaling factor, and $[x, y]^T$ is the original coordinate of the object in the field image.
Perspective transformation connects two images that represent different projections of the same three-dimensional object onto distinct projective planes [24]. This implies that the WSP captured from any viewpoint can be estimated as an object captured from a frontal view through perspective transformation. In perspective transformation, eight parameters are calculated for each image using the four corners of the WSP region in the input image and the corresponding four corners in the output image. In the present study, the warpPerspective function in OpenCV (version 4.10, OpenCV team, Palo Alto, CA, USA) was used to implement the perspective transformation. Specifically, the four points of the WSP within the image were manually defined, and, based on these points, the warping was performed to match the size of the target image where the target image was a high-quality labeled image.
The aligned WSP image was partially occluded by the holder, the fixture that attaches the WSP to the ground. To eliminate interference from the holder at the edges, 20% of the aligned WSP region (the expected occlusion) was removed along the vertical and horizontal edges of the images, so that only the central 80% of the image in both the horizontal and vertical directions was used as the region of interest (RoI). The corresponding high-quality image was cropped to the same RoI, so that each extracted sample pair was spatially synchronized as closely as possible between the high-quality processed image and the image captured in the field. Finally, a total of 300 paired training examples were created, each consisting of the aligned RoI of the sprayed WSP (hereafter, target data) and the high-quality WSP image with its ground truth (GT) (hereafter, source data). These examples were resized to a resolution of 1600 × 900 pixels and used as the training dataset for the model.
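The alignment and RoI extraction steps can be sketched as follows. This is a simplified illustration only: the function name, the corner ordering (top-left, top-right, bottom-right, bottom-left), the example corner coordinates, and the 10% per-side margin (which keeps the central 80% in each direction) are our assumptions:

```python
import cv2
import numpy as np

def align_and_crop(field_img, corners, target_w=1920, target_h=1080):
    """Warp the WSP region to a frontal view and keep the central 80% RoI."""
    src = np.float32(corners)                      # 4 manually picked WSP corners
    dst = np.float32([[0, 0], [target_w, 0],
                      [target_w, target_h], [0, target_h]])
    T = cv2.getPerspectiveTransform(src, dst)      # 8-parameter matrix of Eq. (1)
    aligned = cv2.warpPerspective(field_img, T, (target_w, target_h))
    # Drop the expected holder occlusion: keep the central 80% in each direction.
    mx, my = int(0.1 * target_w), int(0.1 * target_h)
    return aligned[my:target_h - my, mx:target_w - mx]

field = cv2.imread("field_wsp.jpg")                # in situ smartphone capture
# Corner coordinates below are hypothetical, for illustration only.
roi = align_and_crop(field, [(412, 310), (1260, 295), (1308, 905), (380, 930)])
```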
In the present study, each target image was spatially aligned with its source data, creating a training sample pair consisting of two images and one GT derived from the same WSP. However, differences remained between the two images due to variations in image quality and distortion. These differences made it difficult to use the corresponding GTs of the source images for supervised learning with the target images. We therefore addressed this issue using domain-adaptive model learning, as described in Section 2.3.

2.3. Deep Learning Model for Droplet Coverage Estimation

The purpose of the present study was to estimate droplet coverage on in situ WSPs by segmenting droplet deposition. The main idea was to extract features that could represent droplet deposition invariant to the data domain by synchronizing processed images, each with pixel-level annotations, as well as field images, using domain adaptation (DA). The proposed droplet coverage estimation based on the domain-adaptive segmentation method is depicted in Figure 3. As a baseline, we used an encoder–decoder network in which the encoder extracts salient features for representing the droplet deposition, which are sufficient for the decoder to reconstruct them as a pixel-level binary classification.
The encoder extracted paired embeddings from the source and target images created from the same WSP; however, although both embeddings were meant to represent droplet deposition on the same WSP, they exhibited different representations due to variations in pixel values influenced by the capturing conditions. To address this issue, we used contrastive learning, a type of self-supervised representation learning, to reduce the differences between the embeddings of the two images. This technique utilizes unlabeled data to learn robust feature representations by contrasting similar and dissimilar pairs, thereby improving model performance and generalization while reducing dependence on labeled datasets [25]. Using this method, the encoder is encouraged to extract useful features focused on droplet deposition, rather than on other domain-specific factors. Contrastive learning was implemented by minimizing the distance between the two embeddings using a similarity metric, maximizing the agreement between the embeddings of two images from the same WSP [26]. Supervised semantic segmentation was conducted simultaneously with the contrastive learning: the extracted embedding of the source image was fed to the decoder for pixel-level binary segmentation at the size of the input image, with the corresponding pixel-level annotation used as the ground truth (GT). The two tasks, semantic segmentation and self-supervised representation learning, for estimating droplet deposition and learning domain-invariant features, respectively, were conducted simultaneously in an end-to-end manner.
The network employed a U-Net architecture [27] with MobileNet V3-Small [28] as the backbone, consisting of five layers for both the encoder and decoder. MobileNet V3-Small is a lightweight CNN optimized for mobile and edge devices which utilizes depth-wise separable convolutions and lightweight attention mechanisms. Its compact design ensures efficient computation and reduced latency, making it suitable for real-time applications.
The input size of the model was 224 × 224 × 3 (height × width × channels), and the input was fed into the model by extracting multiple patches from each training example to form a mini-batch (as explained in Section 2.4). The encoder output, which was the embedding, had a size of 7 × 7 × 576, while the decoder output was a binary image of the same size as the input, 224 × 224 × 1 pixels. The model parameters were updated through backpropagation using segmentation losses and a contrastive loss; the segmentation losses comprised dice loss and cross-entropy loss, and the contrastive loss was a cosine similarity loss. The decoder parameters were updated using only the segmentation losses, whereas the encoder parameters were updated using both the segmentation and contrastive losses.
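The following simplified PyTorch sketch illustrates one end-to-end training step with this loss design. Module and variable names are ours, not the authors' code; the real U-Net decoder also consumes encoder skip connections, which are omitted here for brevity, and equal loss weighting is assumed:

```python
import torch
import torch.nn.functional as F

def train_step(encoder, decoder, source, target, gt, optimizer, eps=1.0):
    """One combined step: supervised segmentation + contrastive adaptation.

    source, target: paired N x 3 x 224 x 224 patches from the same WSP.
    gt: float mask in {0, 1}, N x 1 x 224 x 224 (source annotations only).
    """
    z_src = encoder(source)            # source embedding, e.g., N x 576 x 7 x 7
    z_tgt = encoder(target)            # embedding of the paired field patch
    pred = decoder(z_src)              # sigmoid output, N x 1 x 224 x 224

    # Supervised segmentation losses (source branch only).
    ce = F.binary_cross_entropy(pred, gt)
    inter = (pred * gt).sum()
    dice = 1 - (2 * inter + eps) / (pred.sum() + gt.sum() + eps)

    # Contrastive term: maximize cosine similarity of the paired embeddings so
    # the encoder focuses on droplet deposition, not capture conditions.
    contrast = 1 - F.cosine_similarity(z_src.flatten(1), z_tgt.flatten(1)).mean()

    loss = ce + dice + contrast        # equal weighting assumed here
    optimizer.zero_grad()
    loss.backward()                    # decoder receives only segmentation
    optimizer.step()                   # gradients; encoder receives all three
    return loss.item()
```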
The model output was the semantic segmentation of droplet deposition, and the coverage area in this study was calculated as the area occupied by droplets within the WSP, as shown in Equation (2) [29].
$$
\text{Coverage area} = \frac{\sum_{p \in P} v_p}{W \times H} \times 100\%,
\tag{2}
$$

where $p$ represents a 2D pixel point, $P$ is the set of pixel points, $v_p$ is the value of the pixel at point $p$, $W$ is the width of the image, and $H$ is the height of the image.
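In code, Equation (2) reduces to a single ratio. A minimal sketch, assuming a binary mask with 1 for droplet deposition and 0 for background:

```python
import numpy as np

def coverage_area(mask: np.ndarray) -> float:
    """Equation (2): percentage of droplet pixels within the WSP mask."""
    h, w = mask.shape
    return float(mask.sum()) / (w * h) * 100.0

# Example: a 224 x 224 mask with 5018 droplet pixels gives ~10.0% coverage.
```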

2.4. Training and Implementation

The training examples were randomly distributed into training, validation, and test sets at a ratio of 3:1:1. During model training, patches were extracted from the training examples and organized into mini-batches for input into the model. The patch size was 224 × 224 pixels, and each mini-batch consisted of 112 patches, with 28 patches extracted from each of 4 training examples. The model training was repeated for a total of 200 iterations, with each iteration using random patches to create mini-batches, allowing the model to learn from various data distributions. The training examples were pre-processed and augmented during model training: pre-processing involved normalization with the mean and standard deviation of the dataset, and data augmentation included horizontal and vertical flipping, image grid distortion, addition of Gaussian noise, and adjustment of brightness and contrast, as sketched below.
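One plausible implementation of this patch sampling and augmentation pipeline is sketched below using the albumentations library. The exact augmentation parameters and dataset statistics are not reported in the text, so the values shown are illustrative:

```python
import albumentations as A
import numpy as np

# Illustrative placeholders; in practice these are computed from the dataset.
dataset_mean, dataset_std = (0.5, 0.5, 0.5), (0.25, 0.25, 0.25)

augment = A.Compose([
    A.HorizontalFlip(p=0.5),
    A.VerticalFlip(p=0.5),
    A.GridDistortion(p=0.3),                 # image grid distortion
    A.GaussNoise(p=0.3),                     # additive Gaussian noise
    A.RandomBrightnessContrast(p=0.3),       # brightness/contrast adjustment
    A.Normalize(mean=dataset_mean, std=dataset_std),
])

def sample_patches(source, target, gt, n=28, size=224):
    """Extract n random, spatially matched patches from one training example."""
    h, w = gt.shape[:2]
    patches = []
    for _ in range(n):
        y = np.random.randint(0, h - size + 1)
        x = np.random.randint(0, w - size + 1)
        # Any geometric augmentation must be applied identically to all three
        # crops, e.g., via albumentations' additional_targets mechanism.
        patches.append((source[y:y+size, x:x+size],
                        target[y:y+size, x:x+size],
                        gt[y:y+size, x:x+size]))
    return patches  # 28 patches x 4 examples = one 112-patch mini-batch
```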
The model parameters were updated using the Adam optimizer, with the initial learning rate set to 1 × 10⁻⁶. The learning rate gradually decreased over iterations using a scheduler. To prevent overfitting, weight decay was used as regularization, with the parameter λ set to 1 × 10⁻⁴. Additionally, the model parameters were selected at the point where the validation loss was minimized.
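The optimization setup can be written as follows. The scheduler type is not specified in the text, so cosine annealing over the 200 iterations is shown as one plausible choice; `encoder`, `decoder`, and `train_step` refer to the earlier sketch, while the mini-batch tensors and the `validate` routine are placeholders:

```python
import torch

params = list(encoder.parameters()) + list(decoder.parameters())
optimizer = torch.optim.Adam(params, lr=1e-6, weight_decay=1e-4)  # lambda = 1e-4
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=200)

best_val = float("inf")
for iteration in range(200):                  # 200 training iterations
    train_step(encoder, decoder, src_batch, tgt_batch, gt_batch, optimizer)
    scheduler.step()                          # gradually decreasing learning rate
    val_loss = validate(encoder, decoder)     # hypothetical validation routine
    if val_loss < best_val:                   # keep the parameters that minimize
        best_val = val_loss                   # the validation loss
        torch.save({"encoder": encoder.state_dict(),
                    "decoder": decoder.state_dict()}, "best_model.pt")
```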
The model training was implemented on Python 3.10.11 with PyTorch 2.4.1. The hardware specifications of the system used included an Intel Core i7 12700F CPU and an NVIDIA GeForce RTX 4090 with 16,384 CUDA cores and a 2520 MHz boost clock.

3. Results

3.1. Droplet Deposition Segmentation

The results of semantic segmentation for localizing the droplet deposition are presented for six representative samples. Figure 4 and Figure 5 illustrate the results obtained using source data and target data, respectively; each figure is organized into rows representing the original (source or target) images and the results obtained using the two methods of supervised segmentation and domain-adaptive segmentation. The columns correspond to the six different samples, enabling a comparison of the segmentation results across various coverages. Segmentation is represented in a binary format, in which areas of droplet deposition are depicted in white and the background in black. The source data are composed of high-quality images accompanied by their corresponding pixel-level annotations, which serve as a benchmark for evaluating the accuracy of the segmentation. In contrast, the target data consist of images that were aligned with the WSP area derived from field conditions, leading to inherent differences when compared to the GTs of the paired source data, although the GTs were used to evaluate the performance of both the source and target images.
The six representative samples exhibited a diversity of coverage areas, ranging from 4.8% to 77.2%. For the source images, both segmentation methods demonstrated remarkably similar performance in detecting the droplet areas: the difference between the two methods ranged from 0% to 0.3% for most samples and was approximately 1.2% for the last sample, suggesting a high degree of consistency between the two methods. Furthermore, the maximum difference from the ground truth was 3.4%, highlighting the overall accuracy of the segmentation results. However, errors tended to increase as coverage levels rose. This is attributed to the low proportion of high-coverage samples, with an average coverage of less than 10% in the training data, leading to imbalanced-label issues during supervised learning. In the case of source images, the binary segmentation of the background and droplet deposition is not considered a particularly challenging task, as the images are produced by high-performance devices that enhance their clarity and precision.
On the other hand, the target images present a more complex scenario. Although these images were obtained from the same WSP, they exhibit various challenges, including noise and distortion arising from field conditions and perspective variations. Despite these challenges, the detection of droplets in the target images is of paramount importance for accurate analysis. The segmentation of target images showed significant performance differences between the two methods, both of which tended to overestimate coverage area values due to inherent noise in the field images. The results were particularly sensitive to noise due to the training process being conducted on clear images, which may not accurately represent the conditions encountered in the field. Nevertheless, using the proposed method, we aimed to minimize feature differences between source and target images, which can reduce sensitivity to noise, compared with traditional supervised learning. It was observed that supervised learning measured the coverage area within a range of 6.4% to 34% higher than the ground truth, indicating a tendency to overestimate the actual values. In contrast, the proposed method yielded a lower range of 2.4% to 19.7%, which was much closer to the measured coverage area, thereby demonstrating its effectiveness in providing more accurate and reliable segmentation results. This highlights the advantages of the proposed method in real-world applications where noise and distortion are prevalent.
Figure 6 illustrates three representative samples where significant performance differences were observed. The high error rates in supervised learning were primarily due to visual differences in color regions caused by variations in illumination and shading in the captured images. Notably, in the bottom sample with a 0% droplet coverage area, the influence of color became more pronounced, resulting in a segmentation outcome where 94.2% was detected by the supervised learning method. In contrast, the use of the proposed method allowed the source to perform segmentation training while simultaneously reducing differences with the target, resulting in a minimal error of just 0.3%. This indicates that the model training in the proposed method can focus more on salient features related to droplet deposition rather than other domain-specific features. Additionally, although target images are not directly used to train the semantic segmentation model, using them results in a data augmentation effect that enhances performance.
Table 2 compares segmentation performance based on the data domain and learning method, using three metrics: IoU (intersection over union), F1-score, and classification accuracy. IoU measures the overlap between the prediction and the ground truth, calculated as the area of intersection divided by the area of union, and the F1-score is the harmonic mean of precision and recall. In terms of source data, supervised learning demonstrated higher performance compared with the proposed method; specifically, IoU and F1-score values were approximately 5% higher for supervised learning. With respect to target data, most performance metrics were lower compared to the source data, with reductions of 34–42% in all metrics except accuracy. This may be attributed to the fact that the target images did not directly engage in segmentation training, and even when the GT was paired with images from the source data, there were spatial differences in the distribution of droplets. Nevertheless, using the proposed method, the segmentation performance for target images was approximately 2% higher than that obtained using supervised learning. As suggested in the Introduction, this indicates the challenge of achieving general performance in droplet deposition across various data domains using traditional supervised learning methods.
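For reference, the two overlap metrics can be computed from binary masks as in the following sketch (our helper function, not the authors' evaluation code):

```python
import numpy as np

def iou_f1(pred: np.ndarray, gt: np.ndarray):
    """IoU and F1-score for binary masks (1 = droplet, 0 = background)."""
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    total = pred.sum() + gt.sum()
    iou = inter / union if union else 1.0   # empty masks count as perfect overlap
    f1 = 2 * inter / total if total else 1.0
    return iou, f1
```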

3.2. Coverage Area Estimation

Table 3 compares the performance of coverage area estimation based on data domain and learning method. Averaged values are reported for samples from the sparse and dense measurement areas, for reference samples, and for all samples. For the source data, there were no differences between the two methods: the mean absolute error (MAE) was less than 1% for both methods across the three sampling areas, indicating high accuracy in the estimated values. In contrast, for the target images, the proposed method demonstrated better performance. The coverage area of the test images evaluated with domain-adaptive segmentation had an MAE of 3.5%, a twofold improvement compared with the results of supervised learning alone. In particular, the proposed method showed an MAE approximately 11% lower than the supervised method for reference samples with very low coverage values, indicating that our approach can represent domain-invariant features. From this result, it can be concluded that our method can efficiently estimate coverage area within WSPs in situ, further enabling expansion to various other data domains. In addition, the performance of our method indicates that the estimated coverage area is significant enough to represent the spraying performance, allowing assessment without the need to move samples or perform high-cost image processing, as required by previously used methods.
Figure 7 illustrates the results of a linear regression analysis comparing the measured and estimated coverage areas within the target data. The results are presented according to the range of samples used, with the upper row representing the entire test set and the lower row representing only those samples with a coverage area of 1% or higher. The droplet coverage areas estimated using the proposed method exhibited a significantly stronger correlation with the actual values: the linear regression for the proposed method yielded an R² of 0.99 for all 60 samples in the test set, which was 121% of the value obtained using supervised learning. The test set comprised images with coverage ranging from 0% to 77%, and most of these images had a coverage area of less than 1%. Therefore, linear regression was performed again using the 22 test images remaining after excluding those with coverage lower than 1% (as shown in the lower row). The R² value of the proposed method remained consistent with the value obtained using all test images, while the R² value of the supervised learning method increased to 0.96. These findings indicate a notable performance degradation in supervised learning for samples with a coverage area below 1%. Such low coverage areas have sparse droplet deposition, which amplifies the influence of noise, thereby complicating the ability of supervised learning to accurately recognize these instances. In contrast, there was no significant decline in performance (R²) with coverage level when using the proposed method. These results underscore that the proposed domain-adaptive segmentation of droplet deposition can effectively enhance coverage area estimation by transferring knowledge from field images through the synchronization of paired images. This approach not only improves accuracy but also demonstrates resilience against the challenges posed by low coverage areas, making it a valuable tool in the analysis of droplet deposition.

3.3. Spatial Visualization of the Estimated Coverage Area

Our research aimed to provide immediate field measurements for the performance evaluation of UASSs, emphasizing the importance of the coverage area. Figure 8 spatially represents the coverage area obtained from the variance dispersion experiment for data collection, visualized through a heatmap. The ground truth was measured using the traditional method, while the estimation was evaluated through the method proposed in this paper. The results show that both heatmaps are similar. Generally, in UASS evaluations, spray patterns are calculated and assessed for uniform spraying performance [30]. The spray pattern is an average of coverage in the direction of movement, and, as shown in the figure, the two methods exhibited similar patterns across five targets. Therefore, it can be anticipated that this method could be used as a substitute for field performance evaluations.

4. Discussion

The proposed method can be used to estimate coverage areas within WSP images captured in situ, offering advantages over conventional processes that involve collecting samples, transporting them, and performing high-cost image processing. The method involves droplet deposition segmentation using source images with their respective ground truths, transferring feature information to field images as target images, thereby obtaining segmentation knowledge without GT for the target images. For supervised learning, the localization error (1 − IoU) and the coverage area error increased from 14% and 0.4% on source images to 50% and 6.8%, respectively, when predicting target images, indicating that it is challenging to guarantee performance under different field conditions with the current data [31]. The proposed method reduced the coverage error to 3.5%, approximately 50% lower than that obtained using supervised learning. In addition, the segmentation accuracy was approximately 0.91, demonstrating performance comparable to that reported in other studies [8,9,20,31], despite the high levels of noise in field images. However, while other studies have included various factors related to droplet characteristics beyond the coverage area in their predictions, we focused solely on the coverage area.
Figure 9 depicts the results of comparing our method with DropLeaf [18], showcasing the droplet deposition segmentation results and coverage comparisons for representative samples. The representative samples were drawn from the tutorial images provided by DropLeaf, as well as the source and target images used in our study. DropLeaf features a white background and has the advantage of enabling individual recognition and information provision for droplets, unlike our results, which only provide information on the droplet deposition areas against a black background. While the performance on the images provided by DropLeaf showed that our method overestimated results compared to DropLeaf, it was still capable of detecting droplet deposition areas. However, significant differences were observed in the source and target images used in this study, where our method could accurately assess droplet deposition and coverage, whereas DropLeaf exhibited very high errors due to differences in data domains, particularly related to the background. This indicates that while our research focuses solely on droplet coverage, it is advantageous in ensuring generalized performance across various data domains.

5. Conclusions

This study was conducted to estimate the coverage areas of WSPs captured in the field, so that the performance of a UASS could be evaluated effectively. The conventional process involves moving and storing WSP samples for coverage area calculation using image-processing devices; this increases labor requirements and poses risks of sample contamination. In our proposed method, we aligned the RoI of the WSP in field images with high-quality images processed via the conventional method, synchronizing them effectively. In addition, the spatial discrepancies between droplet deposition and ground truth were addressed using self-supervised contrastive learning. The results showed that, while simple supervised learning models struggled to generalize to field images, our proposed method enabled knowledge transfer from these images, resulting in significantly improved performance. Furthermore, our method can be implemented on low-cost portable devices because it uses the lightweight MobileNet V3-Small model; combined with automatic RoI detection technology, this would allow users to obtain automated coverage estimates from captured or stored images.
We did not evaluate various characteristics of the droplets, such as their size and density, which limits the extension of our findings to the effects of UASSs on plants; our primary focus was on UASS performance rather than pest control efficacy. Given that existing evaluation methods rely on the spatial distribution of the coverage area, we believe that our approach can contribute significantly to efficient field evaluations for the development of various forms of UASS technology. Efficient field evaluations will enable high-precision variable-rate application by UASSs, which can contribute to minimizing pesticide use and environmental pollution.

Author Contributions

Conceptualization, D.-H.L. and B.-G.S.; methodology, D.-H.L. and B.-G.S.; software, B.-G.S. and S.-Y.B.; validation, D.-H.L., X.H. and S.-H.Y.; formal analysis, Y.-H.K. and S.-Y.B.; investigation, C.-G.L.; resources, C.-G.L. and X.H.; data curation, B.-G.S.; writing—original draft preparation, D.-H.L.; writing—review and editing, D.-H.L., X.H. and S.-H.Y.; visualization, D.-H.L. and B.-G.S.; supervision, S.-H.Y.; project administration, S.-H.Y.; funding acquisition, S.-H.Y. All authors have read and agreed to the published version of the manuscript.

Funding

This work was carried out with the support of “Cooperative Research Program for Agriculture Science and Technology Development (Project No. PJ016983)” Rural Development Administration, Republic of Korea.

Data Availability Statement

Data are contained within the article.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Seong, B.G.; Kang, S.W.; Cho, S.H.; Han, X.; Yu, S.H.; Lee, C.G.; Kang, Y.; Lee, D.H. Predicting the spray uniformity of pest control drone using multi-layer perceptron. J. Drive Control. 2023, 20, 25–34. [Google Scholar]
  2. Maheswaran, S.; Murugesan, G.; Duraisamy, P.; Vivek, B.; Selvapriya, S.; Vinith, S.; Vasantharajan, V. Unmanned ground vehicle for surveillance. In Proceedings of the 2020 11th International Conference on Computing, Communication and Networking Technologies (ICCCNT), Kharagpur, India, 1–3 July 2020. [Google Scholar]
  3. Li, L.; Hu, Z.; Liu, Q.; Yi, T.; Han, P.; Zhang, R.; Pan, L. Effect of flight velocity on droplet deposition and drift of combined pesticides sprayed using an unmanned aerial vehicle sprayer in a peach orchard. Front. Plant Sci. 2022, 13, 981494. [Google Scholar] [CrossRef] [PubMed]
  4. Meng, Y.; Song, J.; Lan, Y.; Mei, G.; Liang, Z.; Han, Y. Harvest aids efficacy applied by unmanned aerial vehicles on cotton crop. Ind. Crops. Prod. 2019, 140, 111645. [Google Scholar] [CrossRef]
  5. Wen, S.; Zhang, Q.; Deng, J.; Lan, Y.; Yin, X.; Shan, J. Design and experiment of a variable spray system for unmanned aerial vehicles based on PID and PWM control. Appl. Sci. 2018, 8, 2482. [Google Scholar] [CrossRef]
  6. Liu, Y.H.; Jin, W.D.; Guo, S.A.; Yao, W.X.; Yu, F.H.; Chen, C.L. Droplet deposition distribution characteristics of variable spraying by plant protection drones for weed control in paddy fields. J. Shenyang Agric. Univ. 2022, 53, 337–345. [Google Scholar]
  7. Bendig, J.; Bolten, A.; Bareth, G. UAV-based imaging for multi-temporal, very high resolution crop surface models to monitor crop growth variability. Photogramm. Fernerkund. Geoinf. 2013, 6, 551–562. [Google Scholar]
  8. Özlüoymak, Ö.B.; Bolat, A. Development and assessment of a novel imaging software for optimizing the spray parameters on water-sensitive papers. Comput. Electron. Agric. 2020, 168, 105104. [Google Scholar] [CrossRef]
  9. Xun, L.; Gil, E. A novel methodology for water-sensitive papers analysis focusing on the segmentation of overlapping droplets to better characterize deposition pattern. Crop Prot. 2024, 176, 106492. [Google Scholar] [CrossRef]
  10. Giles, D.K.; Downey, D. Quality control verification and mapping for chemical application. Precis. Agric. 2021, 4, 103–124. [Google Scholar] [CrossRef]
  11. Dafsari, R.A.; Yu, S.; Choi, Y.; Lee, J. Effect of geometrical parameters of air-induction nozzles on droplet characteristics and behaviour. Biosyst. Eng. 2021, 209, 14–29. [Google Scholar] [CrossRef]
  12. Zhou, L.P.; He, Y. Simulation and optimization of multi spray factors in UAV. In Proceedings of the 2016 ASABE Annual International Meeting, Orlando, FL, USA, 17–20 July 2016; American Society of Agricultural and Biological Engineers: St. Joseph, MI, USA, 2016; p. 1. [Google Scholar]
  13. Ru, Y.; Hu, C.; Chen, X.; Yang, F.; Zhang, C.; Li, J.; Fang, S. Droplet penetration model based on canopy porosity for spraying applications. Agriculture 2023, 13, 339. [Google Scholar] [CrossRef]
  14. Wang, G.; Li, X.; Andaloro, J.; Chen, P.; Song, C.; Shan, C.; Lan, Y. Deposition and biological efficacy of UAV-based low-volume application in rice fields. Int. J. Precis. Agric. Aviat. 2020, 3, 65–72. [Google Scholar] [CrossRef]
  15. Lipiński, A.J.; Lipiński, S. Binarizing water sensitive papers–how to assess the coverage area properly? Crop Prot. 2020, 127, 104949. [Google Scholar] [CrossRef]
  16. Zhu, H.; Salyani, M.; Fox, R.D. A portable scanning system for evaluation of spray deposit distribution. Comput. Electron. Agric. 2011, 76, 38–43. [Google Scholar] [CrossRef]
  17. Nansen, C.; Ferguson, J.C.; Moore, J.; Groves, L.; Emery, R.; Garel, N.; Hewitt, A. Optimizing pesticide spray coverage using a novel web and smartphone tool, SnapCard. Agron. Sustain. Dev. 2015, 35, 1075–1085. [Google Scholar] [CrossRef]
  18. Brandoli, B.; Spadon, G.; Esau, T.; Hennessy, P.; Carvalho, A.C.; Amer-Yahia, S.; Rodrigues, J.F., Jr. DropLeaf: A precision farming smartphone tool for real-time quantification of pesticide application coverage. Comput. Electron. Agric. 2021, 180, 105906. [Google Scholar] [CrossRef]
  19. Kim, W.S.; Lee, D.H.; Kim, T.; Kim, H.; Sim, T.; Kim, Y.J. Weakly supervised crop area segmentation for an autonomous combine harvester. Sensors 2021, 21, 4801. [Google Scholar] [CrossRef]
  20. Liu, J.; Yu, S.; Liu, X.; Lu, G.; Xin, Z.; Yuan, J. Super-Resolution Semantic Segmentation of Droplet Deposition Image for Low-Cost Spraying Measurement. Agriculture 2024, 14, 106. [Google Scholar] [CrossRef]
  21. Lee, D.H.; Seong, B.G.; Kang, S.W.; Cho, S.H.; Han, X.; Kang, Y.; Lee, C.G.; Yu, S.H. Analysis of spraying performance of agricultural drones according to flight conditions. Korean J. Agric. Sci. 2023, 50, 427–435. [Google Scholar] [CrossRef]
  22. Hanif, A.S.; Han, X.; Yu, S.H.; Han, C.; Baek, S.W.; Lee, C.G.; Lee, D.H.; Kang, Y.H. Modeling of the control logic of a UASS based on coefficient of variation spraying distribution analysis in an indoor flight simulator. Front. Plant Sci. 2023, 14, 1235548. [Google Scholar] [CrossRef]
  23. Yu, S.-H.; Kim, Y.-K.; Jun, H.-J.; Choi, I.S.; Woo, J.-K.; Kim, Y.-H.; Yun, Y.-T.; Choi, Y.; Alidoost, R.; Lee, J. Evaluation of Spray Characteristics of Pesticide Injection System in Agricultural Drones. J. Biosyst. Eng. 2020, 45, 272–280. [Google Scholar] [CrossRef]
  24. Wang, K.; Fang, B.; Qian, J.; Yang, S.; Zhou, X.; Zhou, J. Perspective transformation data augmentation for object detection. IEEE Access 2019, 8, 4935–4943. [Google Scholar] [CrossRef]
  25. Ciga, O.; Xu, T.; Martel, A.L. Self supervised contrastive learning for digital histopathology. Mach. Learn. Appl. 2022, 7, 100198. [Google Scholar] [CrossRef]
  26. Lee, D.H.; Lee, M.; Lee, W.; Seo, S. Sensor-Type Agnostic Heat Detection in Dairy Cows using Multi-autoencoders with Shared Latent Space. Appl. Soft Comput. 2024, 2024, 112200. [Google Scholar] [CrossRef]
  27. Ronneberger, O.; Fischer, P.; Brox, T. U-net: Convolutional networks for biomedical image segmentation. In Proceedings of the Medical Image Computing and Computer-Assisted Intervention–MICCAI 2015: 18th International Conference, Munich, Germany, 5–9 October 2015, Proceedings, Part III 18; Springer International Publishing: Berlin/Heidelberg, Germany, 2015; pp. 234–241. [Google Scholar]
  28. Howard, A.; Sandler, M.; Chu, G.; Chen, L.C.; Chen, B.; Tan, M.; Wang, W.; Zhu, Y.; Pang, R.; Vasudevan, V.; et al. Searching for mobilenetv3. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Seoul, Republic of Korea, 27 October–2 November 2019; pp. 1314–1324. [Google Scholar]
  29. Seong, B.G.; Han, X.; Yu, S.H.; Lee, C.G.; Kang, Y.; Woo, H.H.; Lee, H.; Lee, D.H. Density map estimation based on deep-learning for pest control drone optimization. J. Drive Control. 2024, 21, 53–64. [Google Scholar]
  30. Hanif, A.S.; Han, X.; Yu, S.H. Independent control spraying system for UAV-based precise variable sprayer: A review. Drones 2022, 6, 383. [Google Scholar] [CrossRef]
  31. Yan, F.; Zhang, Y.; Zhu, Y.; Wang, Y.; Niu, Z.; Abdukamolovich, J.A. An image segmentation of adhesive droplets based approach to assess the quality of pesticide spray. Smart Agric. Technol. 2024, 8, 100460. [Google Scholar] [CrossRef]
Figure 1. Experimental setup for collecting data of droplet deposition by the UASS and the example result of droplet deposition on WSP (bottom-left).
Figure 2. Dataset construction process in which each pair of training examples consisted of a high-quality labeled image and an undistorted field image for domain-adaptive supervised learning.
Figure 3. The proposed droplet coverage estimation based on domain-adaptive segmentation. The framework conducts two tasks, namely, semantic segmentation and self-supervised contrastive learning.
Figure 4. Representative results for segmenting the droplet deposition within WSPs for the source data domain. The results of 6 sample sets are shown in 6 columns, and each set comprises 3 rows: the source image, and the results obtained using the 2 methods of supervised segmentation and domain-adaptive segmentation. The number at the bottom center of each image indicates the coverage area.
Figure 5. Representative results for segmenting the droplet deposition within WSPs for the target data domain. The results of 6 sample sets are shown in 6 columns, and each set comprises 3 rows: the target image, and the results obtained using the 2 methods of supervised segmentation and domain-adaptive segmentation. The number at the bottom center of each image indicates the coverage area.
Figure 6. Representative samples demonstrating significant performance differences between two methods.
Figure 7. Linear relationships between estimated and measured coverage areas. Linear regressions are shown for the entire test set (upper row) and for only those samples with a coverage area of 1% or higher (lower row).
Figure 8. Performance comparison through (a) 2D spatial visualization of droplet coverage; and (b) spray pattern estimation.
Figure 9. Performance comparison between DropLeaf and our method for representative samples. DropLeaf provides droplet instance segmentation on a white background, with each droplet represented in a different color. Our method offers droplet semantic segmentation on a black background, where all droplets are represented in white.
Table 1. Specifications of the octocopter drone used for the field test.
Item | Specification
Dimensions (L × W × H) | 2075 × 2075 × 700 mm³
Distance between motor shafts | 1640 mm
Weight | 14.5 kg
Payload | 10 kg
Motor power | Max. 1300 W
Flight speed | Max. 10 m/s
Work time | Max. 17 min
Table 2. Comparison of segmentation performance based on data domain and learning method.
Data | Method | IoU | F1-Score | Accuracy
Source | Supervised | 0.856 | 0.910 | 0.988
Source | Domain-adapted | 0.807 | 0.869 | 0.986
Target | Supervised | 0.500 | 0.561 | 0.883
Target | Domain-adapted | 0.522 | 0.581 | 0.908
Table 3. Comparison of the performance of estimated coverage area from droplet deposition segmentation based on learning methods and data domain by sampling areas (unit: %).
Measured coverage area by sampling area: Sparse 1.4 ± 4.6; Dense 17.0 ± 23.0; Reference 0.2 ± 0.5; Total 9.4 ± 18.6.

Data Domain | Method | MAE (Sparse) | MAE (Dense) | MAE (Reference) | MAE (Total)
Source | Supervised | 0.0 ± 0.1 | 0.7 ± 1.0 | 0.0 ± 0.0 | 0.4 ± 0.8
Source | Domain-adapted | 0.1 ± 0.2 | 0.8 ± 1.2 | 0.0 ± 0.1 | 0.5 ± 1.0
Target | Supervised | 1.6 ± 4.6 | 8.8 ± 9.1 | 11.1 ± 31.2 | 6.8 ± 13.9
Target | Domain-adapted | 0.7 ± 2.8 | 6.1 ± 7.0 | 0.2 ± 0.5 | 3.5 ± 6.0
Coverage and MAE (mean absolute error) are expressed as average ± standard deviation by sampling areas (samples for sparse and dense measurement areas, reference samples, and total samples).
