Systematic Review

Artificial Intelligence Applied to Support Agronomic Decisions for the Automatic Aerial Analysis Images Captured by UAV: A Systematic Review

by Josef Augusto Oberdan Souza Silva 1, Vilson Soares de Siqueira 2, Marcio Mesquita 3, Luís Sérgio Rodrigues Vale 1,4, Jhon Lennon Bezerra da Silva 1, Marcos Vinícius da Silva 5, João Paulo Barcelos Lemos 4, Lorena Nunes Lacerda 6, Rhuanito Soranz Ferrarezi 7,* and Henrique Fonseca Elias de Oliveira 1,4
1 Cerrado Irrigation Graduate Program, Goiano Federal Institute—Campus Ceres, GO-154, km 218—Zona Rural, Ceres 76300-000, Goiás, Brazil
2 Faculty of Information Systems, Goiano Federal Institute—Campus Ceres, GO-154, km 218—Zona Rural, Ceres 76300-000, Goiás, Brazil
3 Faculty of Agronomy, Federal University of Goiás (UFG), Campus Samambaia—UFG, Nova Veneza, km 0, Goiânia 74690-900, Goiás, Brazil
4 Faculty of Agronomy, Goiano Federal Institute—Campus Ceres, GO-154, km 218—Zona Rural, Ceres 76300-000, Goiás, Brazil
5 Postgraduate Program in Forest Sciences, Federal University of Campina Grande (UFCG), Av. Universitária, s/n, Santa Cecília, Patos 58708-110, Paraíba, Brazil
6 Department of Crop and Soil Sciences, 3111 Miller Plant Science Building, University of Georgia, Athens, GA 30602, USA
7 Department of Horticulture, 1111 Miller Plant Science Building, University of Georgia, Athens, GA 30602, USA
* Author to whom correspondence should be addressed.
Agronomy 2024, 14(11), 2697; https://doi.org/10.3390/agronomy14112697
Submission received: 23 September 2024 / Revised: 4 November 2024 / Accepted: 13 November 2024 / Published: 15 November 2024
(This article belongs to the Section Precision and Digital Agriculture)

Abstract:
Integrating advanced technologies such as artificial intelligence (AI) with traditional agricultural practices has changed how activities are developed in agriculture, with the aim of automating manual processes and improving the efficiency and quality of farming decisions. With the advent of deep learning models such as the convolutional neural network (CNN) and You Only Look Once (YOLO), many studies have emerged, driven by the need to develop solutions to agronomic problems and to take advantage of the full potential this technology has to offer. This systematic literature review aims to present an in-depth investigation of the application of AI in supporting the management of weeds, plant nutrition, water, pests, and diseases. This systematic review was conducted using the PRISMA methodology and guidelines. Data from different papers indicated that the main research interests comprise five groups: (a) type of agronomic problem; (b) type of sensor; (c) dataset treatment; (d) evaluation metrics and quantification; and (e) AI technique. The inclusion (I) and exclusion (E) criteria adopted in this study were: (I1) articles that used AI techniques for agricultural analysis; (I2) complete articles written in English; (I3) articles from specialized scientific journals; (E1) articles that did not describe the type of agrarian analysis used; (E2) articles that did not specify the AI technique used or that were incomplete or consisted only of an abstract; (E3) articles that did not present substantial experimental results. The articles were searched on the official pages of the main scientific databases: ACM, IEEE, ScienceDirect, MDPI, and Web of Science. The papers were categorized and grouped to show the main contributions of the literature to support agricultural decisions using AI. This study found that AI methods perform better in supporting weed detection, classification of plant diseases, and estimation of agricultural yield in crops when using images captured by Unmanned Aerial Vehicles (UAVs). Furthermore, CNN and YOLO, as well as their variations, present the best results for all groups analyzed. This review also points out the limitations and potential challenges of working with deep machine learning models, aiming to contribute to knowledge systematization and to benefit researchers and professionals regarding AI applications in mitigating agronomic problems.

1. Introduction

Machine learning and deep learning models that automate manual and repetitive agricultural processes play an important role in applications that use unmanned aerial vehicles (UAVs) [1,2,3]. The use of artificial intelligence (AI) has transformed the way conventional agricultural processes have been carried out in recent decades [4].
Machine learning refers to algorithms that are trained to learn patterns from experience and that can solve specific problems from input data and parameters [5]. Deep learning refers to a class of machine learning that uses large volumes of input data in search of solutions to various problems, including in agriculture [6].
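To make the distinction concrete, the sketch below shows the classic machine learning workflow of fitting an algorithm to labeled input data; it is illustrative only, with synthetic data standing in for hand-crafted features (e.g., per-plot vegetation indices), and none of its names come from the reviewed studies.

```python
# Minimal sketch: a classic machine learning classifier learning patterns
# from labeled input data (synthetic stand-in for per-plot features).
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, n_features=8, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)          # "learning by experience" from examples
print(f"Held-out accuracy: {model.score(X_test, y_test):.2f}")
```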
In recent years, food production across the globe has become a priority, driven by population growth and severe climate change events [7]. Increasing food production must be carried out without waste and in a profitable way [8]. Furthermore, traditional agricultural production practices can lead to limited labor, resource scarcity, and lower-than-expected productivity [9,10]. Some of the recurring problems that directly affect final production are the appearance of weeds in crops, nutritional deficiency, water stress, agricultural pests, and plant diseases [11,12].
Most current agronomic problems depend on the experience of an agricultural professional, which makes the process of solving them slow and expensive [13]. Furthermore, if the problem is not previously identified, monitored, and corrected, the yield of crops may be affected, causing financial losses to rural growers [14].
In this context, with the advent of UAV technologies in agriculture, combined with machine and deep learning algorithms, the automation of agricultural processes has become increasingly common and practical [15]. AI can be used as an auxiliary tool to identify, quantify, and interpret images captured by UAVs [16]. The use of AI in agriculture can also reduce the time required to analyze aerial images and predict agronomic problems [17,18]. As a result, research with AI, remote sensing, the Internet of Things (IoT), UAVs, and other technologies applied to rural areas has become increasingly present in recent years [19,20,21]. In this sense, deep learning models combined with computer vision enable UAVs to make remote diagnoses and prescriptions on-site, without causing damage to plants [22].
This work presents a systematic review of studies on AI techniques applied in the automatic analysis of aerial images captured by UAVs to help guide agricultural management and decisions. This review includes 70 articles, most of them covering a variety of AI applications to automate processes, identify types of agronomic problems, and support agricultural decisions.
The objective of this article is to review AI techniques used to support agricultural management and automate processes that use images as input data. The specific objectives are to research the main scientific bases for studies on AI techniques applied to agriculture, create article summaries, group studies with similar objectives, categorize abstracts, identify the state of the art whenever possible, and identify the challenges/limitations of using AI in this context.
This review presents an investigation into advances in the application of AI for the automated analysis of agricultural images [23]. The articles were grouped and categorized by identification of agronomic problems, machine and deep learning techniques and methods used in each article, their respective metrics and accuracies, and challenges and limitations of the identified research problem.

2. Materials and Methods

Firstly, this review is a study on AI techniques applied to the automated analysis of aerial images captured by UAVs to identify the most common agronomic problems encountered in the field. This review presents the results obtained for images in red, green, and blue (RGB) color channels and for multispectral, hyperspectral, and thermal images. This systematic literature review (SLR) was divided into three stages: research planning, selection of articles, and results and discussion. This review is based on the study by Siqueira et al. [23], carried out according to the guidelines described by Kitchenham [24] and following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) methodology and guidelines from Page et al. [25].

2.1. Planning

The planning stage was conducted as follows: (1) an exploratory analysis of the literature was carried out to define the keywords and sources researched; (2) the research was conducted in the main scientific databases (ACM, IEEE, Science Direct, MDPI, and Web of Science), on the official pages available on the internet.
The search was restricted to articles published between January 2018 and January 2024 and written in English. This review aimed to answer the following research questions: (Q1) In what type of image analysis has AI been applied to support agricultural decisions? (Q2) What were the techniques and accuracy of the AI models applied? (Q3) What are the challenges/limitations in the applicability of AI to each type of agricultural image analysis? (Q4) Which techniques/methods were most used? (Q5) How can AI contribute to supporting agricultural decisions in image analysis? (Q6) What was the type of agronomic problem that was most studied in the research?

2.2. Selection

To select the articles that make up this review, the protocol followed the research lines of Siqueira et al. [23], Kitchenham [24], and Page et al. [25] (PRISMA). The selection of articles followed these inclusion (I) and exclusion (E) criteria: (I1) articles that used AI techniques for agricultural analysis; (I2) complete articles written in English; (I3) articles from peer-reviewed journals; (E1) articles that did not describe the type of agricultural analysis used; (E2) articles that did not specify the AI technique used or that were incomplete or consisted only of an abstract; and (E3) articles that did not present substantial experimental results.
Initially, the terms “Weed”, “Nutritional Deficiency”, “Water Stress”, “Disease”, “Pest”, and “Yield Estimation” were used to search relevant articles to find a satisfactory search keyword. Search filters were added with the inclusion of the terms “Machine learning OR Deep Learning” to refine the search. The term “UAV” was added to filter only articles related to images captured by UAVs. The full search strings were as follows: ((Weed) AND (“Machine learning” OR “Deep Learning”) AND (“UAV”)); ((Nutritional Deficiency) AND (“Machine Learning” OR “Deep Learning”) AND (“UAV”)); ((Water Stress) AND (“Machine Learning” OR “Deep Learning”) AND (“UAV”)); ((“Disease”) AND (“Machine Learning” OR “Deep Learning”) AND (“UAV”)); ((Pests) AND (“Machine Learning” OR “Deep Learning”) AND (“UAV”)); ((Yield Estimation) AND (“Machine Learning” OR “Deep Learning”) AND (“UAV”)). Other keywords such as “Artificial Intelligence”, “Precision Agriculture”, “Agronomic Problems”, and “RGB” were tested, but did not add results to the search. Finally, the articles were extracted from the following scientific databases: Association for Computing Machinery—ACM [26], Institute of Electrical and Electronic Engineers—IEEE [27], Science Direct [28], Multidisciplinary Digital Publishing Institute—MDPI [29], and Web of Science [30].
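The six search strings share one boolean template; the small illustrative snippet below makes that structure explicit. The terms are copied from the text above, while the code itself is not part of the review protocol.

```python
# Build the six search strings used in this review from one boolean template.
problems = ["Weed", "Nutritional Deficiency", "Water Stress",
            "Disease", "Pests", "Yield Estimation"]

queries = [f'(({p}) AND ("Machine Learning" OR "Deep Learning") AND ("UAV"))'
           for p in problems]

for q in queries:
    print(q)
```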
Figure 1 presents a summary of the quantitative results returned from each scientific database using the search string mentioned in this stage of selecting articles. Searches in the main scientific databases returned a total of 343 articles. After reading the titles and abstracts, 249 articles were discarded. Duplicate publications were also removed. The de-duplication was performed using the Mendeley software, version 1.19.8 (http://mendeley.com (accessed on 18 October 2024)). Thus, 94 articles remained and were selected for full reading. After reading the articles completely, 24 met the exclusion criteria (E1, E2, and E3) and were not included. Ultimately, 70 articles were selected for the data extraction process and included in this review.
Considering the data extraction process, the techniques covered in the selected articles are capable of automating and optimizing work in agricultural fields, such as identifying weeds, nutritional deficiency, water stress, and plant diseases. Each article was then read in full and divided into subgroups, according to the methodology adopted, taking into account parameters such as the type of sensor used in data acquisition; improvement of the quality of the image set (e.g., noise removal); pre-processing and feature extraction from the dataset (e.g., data augmentation and data labeling); the type of predictive analysis performed on the dataset (detection, segmentation, and/or classification); the machine learning techniques and algorithms applied in each situation; and the type of performance calculation used in each model, using metrics for evaluating, quantifying, and analyzing the algorithms. Therefore, some articles may appear in more than one subgroup. This way, the reader can have an overview of the data extracted from the articles contained in this SLR. Figure 2 presents a summary of the sequential development flow of this SLR.
Among the evaluation metrics for the quantification and analysis of the deep learning models most used in the articles that make up this SLR, five stand out, defined below and illustrated in the sketch after this list: (1) precision, (2) accuracy, (3) mean average precision (mAP), (4) recall, and (5) F1-score.
(1) Precision indicates whether the deep learning models were able to correctly detect or classify the objects contained in the dataset, measured as the proportion of correct detections among all detections.
(2) Accuracy (Acc) is the proportion of correctly predicted observations among all observations in the set.
(3) mAP is the area under the precision-recall curve, averaged over object classes.
(4) Recall is the number of correctly detected objects divided by the number of ground truth objects.
(5) F1-score is the harmonic mean of precision and recall.
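A minimal sketch of how the four count-based metrics follow from true/false positive and negative counts is shown below, with illustrative values only; mAP additionally requires ranked detections per class and is therefore omitted.

```python
# Count-based metrics from true positives (tp), false positives (fp),
# false negatives (fn), and true negatives (tn).
def classification_metrics(tp: int, fp: int, fn: int, tn: int) -> dict:
    precision = tp / (tp + fp)           # correct detections among all detections
    recall = tp / (tp + fn)              # correct detections among ground truth
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    f1 = 2 * precision * recall / (precision + recall)  # harmonic mean
    return {"precision": precision, "recall": recall,
            "accuracy": accuracy, "f1": f1}

print(classification_metrics(tp=90, fp=10, fn=15, tn=85))
```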
Table 1 presents the distribution of scientific articles by year of publication and the number of publications in each year in the main databases used to construct the systematic review.

3. Results and Discussion

In recent years, AI has changed the way processes are developed in all areas [31]. In the agricultural area, for example, AI is used to assist in manual and repetitive processes [32]. Common tasks such as assessing nutritional and water deficiencies in the field or identifying weeds in crops are still carried out with direct human intervention [33]; automating these routine tasks saves time [34].
Several promising solutions have been presented over the years to automate agricultural processes, including commercial software for geoprocessing and image analysis [35].

3.1. Weed

Weeds are responsible for substantial drops in productivity in crops [36]. For this reason, weed infestation control measures are essential for the final harvest yield. Recent studies show that constant and precise supervision becomes indispensable given the difficulties of controlling weeds in agricultural fields [37]. Research has shown that it is possible to control the proliferation of invasive plants using deep learning models.
Deep learning models are widely used in precision agriculture to detect and control invasive plants. One example is the UNET-ResNet model, which was applied to aerial images captured by UAV and trained to automatically detect winter weeds in wheat (Triticum aestivum L.) fields. Accuracy was shown to be greater than 90%, with a statistically significant correlation (r > 75% and p < 0.00001) between weed maps, derived from aerial images, and data collected in the field [38].
In a study that proposed using Convolutional Neural Networks (CNN) to detect crop rows in aerial images captured by UAV in spinach (Spinacia oleracea L.) and bean (Phaseolus vulgaris L.) crops, the authors labeled the crop rows and used them to identify weeds between the rows. The proposed method detected weeds with a 93.58% success rate in comparison to 70% when using the traditional method [39].
Among the literature included in this study, it can be observed that UAV remote sensing proved to be more viable for mapping weeds than satellites and aircraft (Varah et al. [36]; Fraccaro et al. [38]). The high spatial resolution of the UAV images, combined with deep machine learning for the segmentation of the dataset features, made it possible to extract features from the images and produce non-overlapping objects, which ensured higher accuracy (above 80%) in the classification of weeds (Huang et al. [40]).
The use of image augmentation techniques (split, noise, rotation, flip) in the pre-processing step can help build a robust dataset and optimize the accuracy of a deep learning model when classifying weeds (Beehary and Bassoo [41]; Reedha et al. [42]).
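As an illustration, the augmentation operations named above could be composed as follows with torchvision; this is a generic sketch, not the pipeline of any cited study, and the noise step is a simple additive-Gaussian stand-in.

```python
# Hedged sketch of a pre-processing augmentation pipeline (rotation, flips,
# noise) applied to a placeholder UAV image tile.
import torch
from torchvision import transforms

augment = transforms.Compose([
    transforms.RandomRotation(degrees=30),
    transforms.RandomHorizontalFlip(p=0.5),
    transforms.RandomVerticalFlip(p=0.5),
    # Additive Gaussian noise as a simple stand-in for the "noise" step
    transforms.Lambda(lambda img: img + 0.05 * torch.randn_like(img)),
])

image = torch.rand(3, 256, 256)   # placeholder image tensor (C, H, W)
augmented = augment(image)
```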
Deep learning models, such as models based on neural networks (Fully Convolutional Network (FCN), Artificial Neural Network (ANN), AlexNet, EfficientNet, and ResNet), have demonstrated a greater ability to detect and classify weeds (Bah et al. [39]; Beehary and Bassoo [41]). This occurs because neural networks can extract features from raw images with great generalization capacity (Reedha et al. [42]; Genze et al. [43]). In addition to neural network-based models, another model that stands out for its ability to use training masks to guarantee pixel-by-pixel accuracy is the UNET-ResNet model. When trained on UAV images of sorghum crops (Sorghum bicolor L. Moench) under different conditions, for example, it reached a score of 89% on a held-out test set, in addition to detecting weeds in row crops with an accuracy above 90% [43].
The YOLO model, predominantly cited among the articles in this SLR, detects objects in a single pass over the input image, which allows it to identify the characteristics of the set efficiently while using fewer computational resources for training than other models, such as the Single-Shot Detector (SSD) [44]. An example of the use of this model can be found in the new approach proposed by Gallo et al. [44], in which the authors detected chicory plants (Cichorium intybus L.) using deep learning algorithms on RGB aerial images captured by UAV. When comparing the CNN model with YOLOv7 and its older versions, the study's results presented mean average precision (mAP) above 74% for the YOLO model. Figure 3 shows an example of the result of the segmentation of organic plants, in soybean and bean crops, using the YOLOv7 model trained for 300 epochs.
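As an illustration of this single-pass workflow, the sketch below runs a pretrained detector with the ultralytics package; the weights file and image path are placeholders, not artifacts of Gallo et al. [44].

```python
# Hedged sketch of single-pass YOLO detection on a UAV image tile.
from ultralytics import YOLO

model = YOLO("yolov8n.pt")                           # pretrained detector
results = model.predict("uav_tile.jpg", conf=0.25)   # one forward pass per image
for box in results[0].boxes:
    print(box.cls, box.conf, box.xyxy)               # class, confidence, coords
```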
Ajayi et al. [45] evaluated a YOLOv5 and a CNN model to automatically classify weeds in crops using RGB images captured by UAV. The model classified weeds in different crops, such as spinach, sugarcane (Saccharum officinarum L.), and pepper (Capsicum chinense L.), at different times. The results showed a weed detection accuracy of up to 65% and a classification accuracy of up to 78%.
Pei et al. [46] presented a new weed detection approach using a CNN model, YOLOv4, and UAV-captured RGB aerial images in corn (Zea mays L.) fields. Corn crop lines were detected and masked using YOLOv4-Tiny, so that only the weeds in the masked image needed to be labeled. The result showed that the average detection and classification accuracy of weeds was 86.90%.
Su et al. [47] proposed mapping the blackgrass (Alopecurus myosuroides Huds.) weed in wheat cultivation using multispectral aerial images captured by the DJI S-1000 UAV at an altitude of 20 m as well as machine learning techniques. The authors used an RF classifier with parameter optimization to create a classification map. The results showed a weed classification accuracy of 93.8%.
Barrero and Perdomo [48] proposed a new method that fuses RGB and multispectral aerial images captured by UAV with a neural network detection method to classify Gramineae weeds in rice crops. The study combines the texture information present in RGB images with the reflectance information provided by multispectral images to detect weeds. The results showed that the M/MGT and MP performance indices (described as the ratio of the number of matching weed pixels between the ground truth image and the analyzed image to the total number of weed pixels of the grayscale image) were between 80 and 108% and between 70 and 85%, respectively, with the best performance obtained by the neural network on the fused image.
Bah et al. [49] presented a new deep learning method using CNN, using a set of images, captured with UAV, for unsupervised training for weed detection. The new method proposed by the authors presents three training phases: detection of crop lines, detection of weeds, and construction of the final model. The result was compared to supervised training, with differences in accuracy of 1.5% in spinach and 6% in beans.
Naveed et al. [50] applied a weed detection method using a predictive coding/biased competition-divisive input modulation (PC/BC-DIM) neural network model and multispectral aerial images captured by UAV. The authors computed a saliency map using the neural network and multispectral images to detect weeds. The proposed model achieved an average accuracy of 94.38% in detecting weeds.
Chegini et al. [51] designed a model for detecting and monitoring weeds in pastures in California, United States, comparing four models (SSD, SSD Lite, Fast RCNN, and Mask-RCNN) on RGB images to analyze weed detection performance. The results of the study showed that the improved model presented a mAP of 93% in detecting weeds.
Xu et al. [52] presented a new approach for weed detection in soybean cultivation using UAV-captured images and machine learning models. The authors used a color index to distinguish plants from the soil and mitigate lighting and background effects, and applied the ResNet101_v and DSASPP algorithms in an encoder-decoder architecture to enrich the extracted information and increase segmentation accuracy when identifying plants and detecting weeds in the image set. The results of the study showed that the combined machine learning models achieved an accuracy of 90.5% for weeds.
Nagothu et al. [53] used an SSD Mobilenet machine learning algorithm and UAV-captured images to detect weeds in cotton (Gossypium hirsutum L.) crops. The authors used a combination of multispectral and RGB images and trained an SSD Mobilenet model with an extensive dataset. The results of the study showed that the machine learning model presented an accuracy of up to 95% in detecting weeds in cotton.
Nasiri et al. [54] employed the U-Net deep learning architecture as an ANN for weed recognition using aerial images in sugar beet cultivation (Beta vulgaris L.). The authors trained a U-Net deep machine learning model with ResNet50, using a set of RGB images in various flight conditions. The study’s results proved that the trained model obtained accuracy and intersection over union (IoU) scores of 96.06% and 84.23%, respectively, demonstrating good performance in recognizing weeds in beet crops.
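A minimal sketch of a U-Net with a ResNet50 encoder, together with the IoU metric reported in [54], is shown below; building the network with the segmentation-models-pytorch library is an assumption, as the original implementation is not specified in this review.

```python
# Hedged sketch: U-Net with a ResNet50 encoder for binary weed segmentation.
import torch
import segmentation_models_pytorch as smp

model = smp.Unet(encoder_name="resnet50", encoder_weights="imagenet",
                 in_channels=3, classes=1)      # binary weed/background mask

x = torch.rand(1, 3, 256, 256)                  # placeholder RGB tile
mask = torch.sigmoid(model(x)) > 0.5            # predicted weed pixels

def iou(pred: torch.Tensor, target: torch.Tensor) -> float:
    """Intersection over union between two boolean masks."""
    inter = (pred & target).sum().item()
    union = (pred | target).sum().item()
    return inter / union if union else 1.0
```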
Ajayi and Ashi [55] evaluated the effect of multiple training epochs on a region-based CNN (RCNN) using UAV-captured images to detect and classify weeds in a crop mixture (sugarcane, spinach, banana (Musa spp.), and pepper). The authors trained the RCNN on RGB aerial images over five epoch settings for weed classification, although model performance saturated beyond a certain point. The results showed that the performance of the RCNN model improved with an increasing number of epochs, achieving 99% accuracy in detecting and classifying weeds in mixed crops.
Rahman et al. [56] evaluated the performance of deep learning models for weed detection in cotton. The study used thirteen deep machine learning models, including YOLOv5, RetinaNet, EfficientDet, Fast RCNN, and Faster RCNN, and a dataset with three weed classes. The results of the study showed that the YOLOv5 model has the potential to be implemented in real-time devices, with a detection accuracy of 76.58%.
Diao et al. [57] proposed a navigation line extraction algorithm that uses the core of the corn plant to identify and localize corn plants for spraying robots, based on an improved YOLOv8s network. The study captured UAV images in an experimental area in Zhengzhou, China, in different environments and growing periods, including environments with the presence of weeds. The authors developed an improved YOLOv8s model for more accurate detection of the core of corn plants, with the central coordinates of the network detection box used as a target to locate the characteristics of each plant. The results of the study showed that the improved YOLOv8 performed well in extracting corn plant cores, with mean average precision (mAP) and F1 of 86.4% and 86%, respectively.
Mekhalfa et al. [58] evaluated the performance of deep learning models to detect the presence of weeds in soybean crops using UAV images. The authors classified the dataset and trained six machine learning models, which included AlexNet, VGG16, GoogleNet, ResNet50, SqueezeNet, and MobileNet, for weed detection. The results of the study showed that CNN models performed better, reaching 98% accuracy.
Table 2 presents the articles, included in the SLR, that applied AI techniques for the detection and segmentation of weeds, in different types of crops, as well as the types of sensors and evaluation metrics used.

3.2. Nutritional Deficiency

Plant nutritional deficiency is a common problem in crops and can cause financial losses to the growers [59]. Research has shown that non-destructive methods for detecting nutritional deficiency using machine learning models can help evaluate this problem and aid plant health (Fischer et al. [60]).
The steps in training classifiers that identify nutritional deficiencies in plant leaves include pre-processing the set to improve image quality and feature extraction [60]. In some cases, the application of residual brightness reduction and contrast improvement, through equalization of the image color histogram, is necessary to accentuate the characteristics and extract as many features as possible from the set [60,61,62].
In many studies, the dataset is created with the characteristics and information about leaf area damage for each plant. After that, the multiclass classification is assembled and, finally, many studies present results using a confusion matrix and statistical evaluation metrics, including mean absolute error (MAE), mean absolute percentage error (MAPE), root mean square error (RMSE), and coefficient of determination R² [60,61]. For example, Barzin et al. [61] used a set of multispectral images (Crop Circle ACS-430), vegetation indices, and machine learning algorithms to estimate the level of nitrogen concentration in corn crops. The study extracted features from leaf images using vegetation indices and trained four different machine learning algorithms (RF, XGBoost, GBM, and SVR), achieving satisfactory results. By using multispectral information collected by active canopy sensors, the study indicated leaf nitrogen levels and predicted grain yield. The results of the study showed that the SVR model (R² = 75%) presented the best results in predicting the variability of nitrogen content in plants and the lowest mean absolute percentage error (MAPE = 4.4%), followed by the GBM models (R² = 62%; MAPE = 7.7%); RF (R² = 61%; MAPE = 6.4%); XGBoost (R² = 48%; MAPE = 6.5%).
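A hedged sketch of this kind of model comparison is shown below, with synthetic data standing in for the vegetation-index features; GBM is approximated by scikit-learn's GradientBoostingRegressor and XGBoost is omitted for brevity.

```python
# Hedged sketch: comparing regressors on vegetation-index features to
# predict a nitrogen-concentration proxy (synthetic data only).
import numpy as np
from sklearn.ensemble import RandomForestRegressor, GradientBoostingRegressor
from sklearn.svm import SVR
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score, mean_absolute_percentage_error

rng = np.random.default_rng(0)
X = rng.random((300, 6))                                   # vegetation indices
y = 2 + X @ rng.random(6) + 0.1 * rng.standard_normal(300) # nitrogen proxy
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

for name, model in [("SVR", SVR()), ("GBM", GradientBoostingRegressor()),
                    ("RF", RandomForestRegressor())]:
    pred = model.fit(X_tr, y_tr).predict(X_te)
    print(name, f"R2={r2_score(y_te, pred):.2f}",
          f"MAPE={mean_absolute_percentage_error(y_te, pred):.2%}")
```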
Sathyavani et al. [62] detected nutritional deficiencies in coriander (Coriandrum sativum L.), tomato (Solanum lycopersicum L.), and pepper leaves, among other plants, with IoT devices and a CNN. The study extracted patterns from images of plant leaves and captured data through a plant nutritional analysis device, later processed by the CNN to issue a final report. The results showed that, among the CNN models tested, ResNet50 was 88% accurate in detecting leaf nutritional deficiencies.
Recent studies reveal that a lack or excess of nutrients, such as nitrogen, potassium, and calcium, can cause physiological symptoms on the surfaces of leaves in crops, harming plant development [63,64]. Sabzi et al. [64] proposed to predict nitrogen content in cucumber (Cucumis sativus var. ‘Super Arshiya-F1’) using an ANN with particle swarm optimization (ANN-PSO), a CNN, and hyperspectral images. The authors captured leaf hyperspectral images before and after applying excess nitrogen. The results showed average regression coefficients between 93.7% and 96.5% for ANN-PSO and between 96.5% and 98% for the CNN, both accurate in predicting nitrogen content in cucumber leaves.
Table 3 presents the articles, included in the SLR, that applied AI techniques to detect and evaluate nutritional deficiency in different agricultural crops, as well as the types of sensors and evaluation metrics used.

3.3. Water Stress

Water stress in crops is a challenge when it comes to managing soil irrigation and crop yield [65]. Conventional methodologies for measuring soil water content are exhaustive and costly [66]. For the efficient control of water deficiency in crops, methods based on remote sensing technology and AI can help in monitoring this type of stress in plants [67].
In many studies that proposed to predict water stress in plants with the help of AI, researchers validated the experiments using commercial electronic equipment, such as a SPAD 502 chlorophyll meter (Konica Minolta), a spectroradiometer, and a water potential meter [65,67]. Beyond confirming that the results of the studies aligned with the measurements carried out at the study sites, this research showed that neural network models, the models most used in the studies included in this SLR, provided satisfactory accuracy in the classification of irrigation and nitrogen treatments [68]. For example, Bhandari et al. [68] trained machine learning models using CNN and RGB and multispectral aerial images captured by UAV. The study proposed detecting nitrogen and water stress in lettuce (Lactuca sativa L.) crops and, in the end, correlated the data obtained from aerial images with data collected at the experiment site, with an accuracy of 62.3% for water stress detection.
Sankararao et al. [69] used a hyperspectral imaging sensor (HSI) and a UAV to capture aerial images and identify water stress in the millet (Panicum miliaceum L.) crop canopy, together with a support vector machine (SVM) classifier. The authors presented five machine learning-based feature selection methods used to identify the ten wavebands most sensitive to water stress in plant canopies. The results obtained showed that the SVM classifier with a linear kernel presented an accuracy of 80.76% in the early detection of water stress in the millet canopy.
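One plausible form of this pipeline, band ranking followed by a linear-kernel SVM over the top ten bands, is sketched below with synthetic reflectance values; the actual feature selection methods of [69] may differ.

```python
# Hedged sketch: select the ten most informative hyperspectral bands and
# classify water-stressed vs. well-watered canopy samples with a linear SVM.
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
X = rng.random((200, 120))                  # 120 spectral bands per sample
y = rng.integers(0, 2, 200)                 # 0 = well watered, 1 = stressed

clf = make_pipeline(SelectKBest(f_classif, k=10),   # ten most sensitive bands
                    SVC(kernel="linear"))
print("CV accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```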
Sankararao et al. [70] proposed the use of a 3D-2D CNN model to classify water stress in the chickpea (Cicer arietinum L.) canopy using hyperspectral aerial images captured by UAV. The authors compared the performance of the 3D-2D CNN with an SVM model and a 2D + 1D CNN model in identifying water stress. The results of the study showed the potential of HSI in detecting water stress in chickpea crops with an accuracy of 95.44%.
Tunca et al. [71] used machine learning models and aerial imagery to calibrate measurements from thermal sensors attached to UAVs. The authors used two different types of commercial thermal sensors, the Micasense Altum and the Flir Duo Pro-R (FDP-R), for field testing to evaluate the performance of each. Next, five machine learning algorithms were used to calibrate the thermal sensors. The results showed that the R2 of the Micasense Altum and FDP-R sensors increased from 89% to 96% and 87% to 94%, respectively.
Bertalan et al. [72] studied the performance of UAV-based thermal and multispectral cameras using machine learning models to estimate soil water content. The authors used four machine learning algorithms, random forest, elastic net (ENR), general linear model (GLM), and robust linear model (RLM), combined with three pixel-value extraction methods, to estimate the water content of the soil. The results of the study showed that the multispectral camera performed better on the input data than the thermal camera, presenting an R2 value of 0.97 and a normalized root mean square error of 10%. The best machine learning models for predicting soil water content were random forest and ENR, with R2 of 97% and 88%, respectively.
Niu et al. [73] assessed corn cultivation in Inner Mongolia, China, under different levels of irrigation and water stress during 2018 and 2019. The images captured by the UAV were used to investigate the effect of the image sensors. The authors used five vegetation indices from multispectral images and three machine learning algorithms, random forest, ANN, and the multivariate linear regression (MLR) algorithm, to build a fractional vegetation cover (FVC) model to detect the growth status of the crop and its final yield. The results of the study showed that the random forest regression model had the best performance in predicting FVC in corn, presenting an R2 of 89.2% and a root mean square error (RMSE) of 6.6%.
Das et al. [74] proposed an approach using a machine learning model and UAV-captured thermal images to predict the biomass and yield of wheat grown under drought stress. The authors quantified 18 wheat genotypes in moderate- and high-sodium soils in northeastern Australia, using a classification and regression tree (CRT) algorithm to classify crop drought stress and predict biomass and grain yield. The results of the study indicated that the machine learning model presented a coefficient of determination (R2) of 0.86, a root mean square error (RMSE) of 41.3 g/m2, and an R2 of 75% for soil with moderate sodium content, as well as an R2 of 78% for grain yield.
Wang et al. [75] proposed a machine learning model to diagnose winter water stress in wheat crops in China based on multispectral and thermal images captured by UAVs. The authors captured images at six stages of wheat growth and calculated fourteen growth indices and two thermal indices for the study. Soil water content and normalized stomatal conductance were measured to obtain a reference. Partial least squares (PLS), support vector machine (SVM), and gradient boosting decision tree (GBDT) algorithms were used to predict soil water content and normalized stomatal conductance at each stage. The results of the study showed that the GBDT model had the best performance in the flowering phase, with an R2 of 88%, root mean square error (RMSE) of 8%, and normalized root mean square error (NRMSE) of 14.7%, and in the grain-filling phase, with an R2 of 90%, RMSE of 5%, and NRMSE of 15.9%.
Table 4 presents the articles, included in the SLR, that applied AI techniques to detect water stress in various agricultural crops, as well as the types of sensors and evaluation metrics used.

3.4. Plant Disease

Plant diseases are a recurring problem in agricultural production [76]. Assessment of plant diseases in the field is time-consuming, inefficient, and expensive. Much research on the subject has emerged seeking accurate and efficient methods to assist in the detection of diseases in plants [77].
The steps towards automatic detection of plant diseases presented some challenges for authors who proposed AI detection and classification methods. Among the problems are the low resolution of the captured images, errors in identifying neighboring pixels of healthy and diseased crops, and the difficulty of relying entirely on manual labeling of large image samples [78]. To overcome the abovementioned problems, many studies proposed new deep learning models that corrected these problems during the image set processing phase, the feature extraction stage, and the generalization stage of the models used (Pan et al. [78], Wu et al. [79]).
Wu et al. [79] discussed early diagnosis of “pine wilt disease” using deep learning algorithms and object detection in aerial images captured by UAVs. The study authors explained that early diagnosis of the disease in pine (Pinus spp.) is important, but existing methods are not suitable for rapid, large-scale screening in the field. The objective of the study was to generate a dataset with aerial images of early-stage pine crowns. Then, the authors applied deep machine learning algorithms, Faster R-CNN and YOLO, for disease detection. The results showed accuracy above 75%, which makes the method promising.
Selvaraj et al. [80] proposed to classify four types of banana diseases in Africa using aerial images captured by UAV, satellite imagery, and machine learning models. The authors classified aerial images of banana trees pixel by pixel using machine learning models such as RF and SVM. The results showed an accuracy of 99.4% for the Banana Bunchy Top Virus and 92.8% for the disease caused by the bacterium Xanthomonas campestris pv. musacearum.
Amarasingam et al. [81] applied remote sensing techniques using images captured by UAV and deep learning models (YOLOv5, YOLOR, DETR, and Faster R-CNN) to detect “white leaf diseases” in sugarcane crops. The study used a methodology based on acquiring RGB images, pre-processing the dataset, and training with machine learning algorithms. The research evaluated performance between models and experimental results showed that the YOLOv5 network was more accurate than the others (95%).
Yu et al. [82] proposed monitoring “pine wilt disease” (PWD) infection using multispectral images captured by UAVs and deep learning algorithms. In the study, the authors divided the infection into early, intermediate, and late stages based on the color or secretion of the pine resin. Deep machine learning models (Faster R-CNN and YOLOv4) were trained using a set of multispectral images to recognize pine trees infected with the disease. The results showed an accuracy of 66.7% for the Faster R-CNN model and 63.55% for the YOLOv4 model.
Shi et al. [83] presented a new deep learning model (CropdocNet) for disease detection in potato (Solanum tuberosum L.) crops using aerial hyperspectral images. The study authors considered the potential variation in disease radiation reflectance when training the new deep learning model, and, in the end, the results of the proposed model achieved an accuracy of 98%.
Kerkech et al. [84] applied a new deep learning architecture called the vine disease detection network (VddNet) using multispectral images and depth maps. The new model presented by the authors has three parallel encoders, one for each input type, and a decoder that assigns a class to each pixel of the image. The model was compared with other well-known architectures such as SegNet, U-Net, DeepLabv3+, and PSPNet. The training results showed that the proposed architecture is more accurate than known benchmark models (92% for vine-level detection and 87% for leaf-level detection).
Shankar et al. [9] proposed the use of a machine learning model and ANN algorithms to locate regions affected by diseases and pests in crops in India using aerial images captured by UAVs. The authors focused on regions affected by agricultural diseases to identify them and apply the chemicals to specific areas, reducing costs and waste. The results obtained by the authors showed that the ANN model used presented satisfactory results in detecting disease symptoms like brown spots and bacterial leaf blight (more than 25% in the initial stage).
Delgado et al. [85] used machine learning models to classify Hoja Blanca virus (RHBV) infection in rice crops using multispectral aerial images captured by UAVs. The study demonstrated that the best classifiers were SVM (sensitivity rate of 74%) and random forest (rate of 71%), allowing the early characterization of Hoja Blanca virus infection in rice varieties.
Khan et al. [86] presented the classification of different pests in different crops using an EfficientNet deep learning model and aerial images captured by UAV. The authors used the state-of-the-art EfficientNet model in a distributed setup comprising four local models at different locations connected to a global model that receives the parameters of each local model. The results of the study showed that the system achieved an accuracy of 99.55% when augmenting the set of images at different angles, contributing to the classification of pests in the agricultural environment.
Oide et al. [87] simplified the automatic detection of “pine wilt disease” (PWD) using machine learning models and aerial images captured by UAVs. The authors used six machine learning algorithms, including logistic regression, linear support vector machine, SVM, k-nearest neighbors, RF, and ANN, to detect PWD, validating the performance of the algorithms using accuracy, precision, recall, F1-score, and the area under the receiver operating characteristic curve with 95% confidence intervals. The results of the study showed that the best-performing machine learning model combined with the HSV color space dataset was the ANN, with 99.5% accuracy.
Deng et al. [88] proposed a methodology for pixel-level regression analysis to quantify the rust index in wheat using UAV-captured hyperspectral images and deep learning. The authors used a semantic segmentation method, and a UNET algorithm was trained to obtain accurate thresholds and generate annotations on crop rust diseases. To model the results of the different loss functions, the HRNet_W48 algorithm was used on the dataset. The result of the study showed that the combination of algorithms produced an R2 value of 87.5% and a mean squared error (MSE) of 1.29%, both significant for monitoring diseases in wheat crops.
Casas et al. [89] presented a tool for detecting diseases and pests such as Serenomyces phoenicis and Phoenicococcus marlatti in palm groves using multispectral imaging and machine learning. The authors used image segmentation and classification techniques to calculate the number of affected leaves for each palm tree (Phoenix dactylifera L.). Then, the authors compared the calculated data with information collected at the experiment site. The results of the study showed that the accuracy of detection in palm trees was 96% for affected and healthy leaves.
Table 5 presents the articles, included in the SLR, that applied AI techniques to detect plant diseases in various agricultural crops, as well as the types of sensors and evaluation metrics used.

3.5. Agricultural Pest

From sowing to harvest, agricultural pests can invade fields and cause financial losses to the grower [90]. For this reason, new studies seek to detect the early appearance of pests in addition to making the application of inputs more efficient [91]. In this context, the use of remote sensing equipment, with high-resolution cameras composed of multiple sensors, in addition to deep learning for pattern analysis, can be an important ally in the detection and classification of agricultural pests [92]. For example, Duarte et al. [93] presented a study that improves the classification process for the eucalyptus longhorned borer (ELB) pest in eucalyptus (Eucalyptus spp.) crops using multispectral aerial images captured by UAV and machine learning models. The authors applied the supervised machine learning algorithms RF and SVM to classify tree crowns and separate them into healthy and dead trees. The result of the study showed 98.35% accuracy when using the RF algorithm and 97.7% accuracy in classifying ELB damage in eucalyptus crops when using the SVM learning algorithm.
In studies that use deep learning for classification tasks of insect species that attack crops, the performance of neural network algorithms prevails over other known models (Silva et al. [92]). However, some authors have proposed new deep machine learning models to improve the detection and classification of pests in crops (Brodbeck et al. [91]). Among the research in this SLR focused on the automatic detection of agricultural pests, much of it sought to capture images in real field conditions, under different lighting conditions, object sizes, and background variations [94]. These particularities enabled the models to accurately identify agricultural pests, even in adverse conditions. For example, Tetila et al. [94] compared the results of five machine learning architectures for classifying soybean agricultural pests using images captured by UAVs. The authors compared the performance of Inception-v3, ResNet-50, VGG-16, VGG-19, and Xception, with different fine-tuning strategies, using 5000 aerial images. The result of the study showed that the deep machine learning architectures achieved an accuracy of 93.32%, outperforming other image classification algorithms such as SVM, K-NN, and random forest.
Retallack et al. [95] demonstrated the feasibility of using deep learning models and UAV-captured imagery in grasslands in South Australia to combat anthropogenic degradation and facilitate effective management practices. The authors used seven different object detectors with a CNN architecture to identify a dominant arid shrub species, pearl bluebush (Maireana sedifolia). Study results showed 75% accuracy in detecting the dominant shrub in a grassland area.
Li et al. [96] proposed a deep learning model to detect Chinese cabbage (Brassica parachinensis) trichomes in a dataset with 10,955 RGB images. The study authors added a RepVGG module to the backbone, as well as a new detection layer, and used the normalized Gaussian Wasserstein distance loss function to improve performance. The results showed that the model proposed by the authors outperformed classical models with an average accuracy of 94.4%.
Lin et al. [97] developed a new approach to detect pine shoot beetle (PSB) attacks on pine plantations in southwestern China using machine learning and UAV-captured hyperspectral and thermal images. The study authors used a random forest machine learning model to classify different levels of tree damage caused by the beetle, based on field measurements and different spatial distribution characteristics. The results of the study showed that the training and validation dataset presented good accuracy, with an R2 above 95% and RMSE below 1.15 µg/cm2, with chlorophyll being an important variable in the early detection of PSB.
Table 6 presents the articles, included in the SLR, that applied AI techniques for the detection and evaluation of agricultural pests in plants, as well as the types of sensors and evaluation metrics used.

3.6. Yield Estimation

Challenges in food production around the world have become frequent in the face of population growth and climate change [98]. Agricultural yield can be estimated to contribute to the improvement of food production techniques [99]. Recent studies that use remote sensing technology and AI seek, mainly, to predict the number of seeds produced per plant and the behavior of a given variety to obtain the highest possible final yield [100].
Guo et al. [101] presented a methodology for height extraction in corn, based on RGB and multispectral UAV aerial images and machine learning, at different growth stages. The authors used a single logistic model (SLM) and harmonic time series analysis (HANTS) to identify crop phenology using corn plant height. The study results showed that RGB-based vegetation indices were effective in extracting corn height, with an R2 index of 93%.
Xu et al. [102] established a model to estimate cotton production based on UAV remote sensing data and machine learning models. The authors used the U-Net machine learning model to recognize and extract pixels from the image set, estimating the percentage of pixels in the region of interest. Then, the authors combined multispectral images with the previously extracted pixel coverages and used a Bayesian-regularized back-propagation neural network to estimate cotton yield. The result of the study showed that the R2 of the proposed model is 85.3% at a scale of 0.81 m2, which can meet the requirements for cotton yield estimation.
In studies that proposed automatically estimating the leaf area of a plant using AI to determine its final productivity, different techniques were used for the calculation [99,100,101,102,103]. Among these techniques is the calculation of the ratio of one-sided leaf area to unit ground area [102]. With the help of deep learning models, manual measurement in the field, considered expensive and time-consuming, no longer needs to be carried out directly [103]. For example, Ilniyaz et al. [103] evaluated the ability of machine learning models to estimate the leaf area index in grape (Vitis vinifera L.) crops using multispectral and RGB images captured by UAV. The authors carried out field tests across different crop growing seasons, collecting 465 leaf area index samples. They then trained five machine learning models with different spectral indices from aerial images, including a ResNet-based CNN model. The study results showed that multispectral images performed better in estimating the leaf area index than RGB images, with R2 and root mean square error (RMSE) values of 89.9% and 43.4%, respectively.
Peng et al. [104] proposed an estimation of net primary productivity in corn cultivation using multispectral aerial images and machine learning models. The authors used four machine learning algorithms, including RF, support vector regression (SVR), and gradient boosting regression (GBR), together with photosynthetically active vegetation, soil, and radiation factor indices, to estimate daytime net primary productivity in the corn canopy. The results showed that the GBR model obtained the best performance, with an R2 equal to 95.8%, and can estimate 89.90% of the net primary productivity in corn crops from aerial images.
Barbosa et al. [105] used machine learning algorithms and UAV-captured images to estimate coffee (Coffea arabica L.) tree canopy height and diameter to predict crop productivity. The authors used six parameters, including leaf area index, plant height, and crown diameter, to estimate productivity, using five machine learning models: SVM, GBR, random forest, partial least squares regression (PLSR), and Neuroevolution of Augmenting Topologies (NEAT). The result of the study showed that the machine learning model with the best performance in predicting productivity was NEAT, with a 31.75% mean absolute percentage error (MAPE).
Alabi et al. [106] used machine learning models and UAV-captured multispectral images to assist in the phenotypic workflow to estimate productivity in soybeans in Nigeria in 2020. The authors used indices of vegetation, texture, and plant canopy height, as well as five machine learning algorithms, including Cubist, XGBoost, GBM, SVM, and RF, to predict grain yield in soybean crops. The results of the study showed that the Cubist and RF machine learning models, with R2 of 89%, effectively predicted soybean crop yield.
Teshome et al. [107] evaluated the effectiveness of combining aerial images captured using UAV and machine learning techniques to estimate the height, biomass, and productivity of sweet corn (Zea mays var. saccharata). The authors used a DJI Matrice 210 v2 UAV equipped with a multispectral sensor to capture the images and machine learning models to predict plant height and biomass: the GLMNET algorithm, RF, SVM, and the k-nearest neighbor algorithm (KNN). The results of the study showed that the SVM and KNN models performed well in estimating biomass, with R2 values between 88% and 99%. Furthermore, the GLMNET algorithm showed better performance in estimating plant height compared to the other models.
Ariza-Sentís et al. [108] estimated spinach seed yield using a deep learning model and images captured by UAV. The authors presented a new approach that correlates the number of plants with the planted area and the percentage of plant canopy coverage. The Mask R-CNN deep machine learning model was applied to count the number of spinach plants by obtaining the object mask from which the plant area is derived. The results of the study showed a linear correlation between the number of seeds and the multivariate linear mixed model of the three variables, with R2 of 80%.
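A hedged sketch of this instance-based counting idea is shown below, using torchvision's pretrained Mask R-CNN as a stand-in for the fine-tuned model of [108]; the original was trained on spinach imagery, which is omitted here.

```python
# Hedged sketch: count plant-like instances and derive a plant-area proxy
# from Mask R-CNN instance masks on a placeholder UAV tile.
import torch
from torchvision.models.detection import maskrcnn_resnet50_fpn

model = maskrcnn_resnet50_fpn(weights="DEFAULT").eval()

image = torch.rand(3, 512, 512)                 # placeholder UAV tile
with torch.no_grad():
    out = model([image])[0]

keep = out["scores"] > 0.5                      # confident detections only
plant_count = int(keep.sum())                   # proxy for plants per tile
plant_area = out["masks"][keep].squeeze(1).sum().item()  # soft-mask area proxy
print(plant_count, plant_area)
```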
Niu et al. [109] proposed a new semantic segmentation model to map crops using the HSI-TransUNet deep learning model and UAV-captured hyperspectral images. The authors adapted the existing TransUNet model to hyperspectral images in order to map different crops. The results of the study showed that the model proposed by the authors achieved 86.05% accuracy in crop identification.
Pandey and Jain [110] presented a new deep machine learning model to identify and classify crops using images captured by UAVs. The authors used a dense CNN (CD-CNN) coupled with a new activation function called SL-ReLU to classify crops using RGB aerial images. The results of the study showed that the proposed module achieved 96.20% accuracy in classifying crops.
Vong et al. [111] estimated and mapped emergence uniformity in corn crops using UAV imagery and a deep learning model. The authors used a pre-trained ResNet18 CNN and aerial images to estimate emergence parameters such as plant density, plant spacing standard deviation, and mean days to emergence. The results of the study showed that plant density, plant spacing standard deviation, and average days after emergence (DAEmean) were estimated with accuracies of 97%, 73%, and 95%, respectively.
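A minimal sketch of this transfer-learning setup, a pretrained ResNet18 with its final layer replaced for one emergence parameter, follows; everything beyond the use of a pretrained ResNet18 is an assumption.

```python
# Hedged sketch: ResNet18 pretrained on ImageNet, re-headed to regress a
# single emergence parameter (e.g., plant density).
import torch.nn as nn
from torchvision.models import resnet18, ResNet18_Weights

model = resnet18(weights=ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 1)   # single regression output

# Optionally freeze the pretrained backbone and train only the new head
for name, param in model.named_parameters():
    if not name.startswith("fc"):
        param.requires_grad = False
```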
Chen et al. [112] developed a pipeline for extracting the spectral and morphological features of apple (Malus domestica Borkh.) trees using two machine learning models and multispectral images captured from a UAV. The authors used support vector regression (SVR) and k-nearest neighbor (KNN) models, together with light detection and ranging (LiDAR), to predict individual apple tree yield. Three sources of features were considered to predict plant productivity, namely fruit spectral images, morphological properties, and individual tree growth and development characteristics. The results of the study showed an accuracy of 75.80% and an R2 of 81.3% in predicting yield in apple cultivation.
Wang et al. [113] used an improved YOLOv5s machine learning model to detect apples in UAV-captured images. The authors captured 300 high-resolution images, segmented the area of interest, and applied improvements to the RFA, DFP, and Soft-NMS algorithm modules to achieve accuracy in detecting isolated objects in the images. The results of the study showed that the model proposed by the authors achieved an accuracy rate of 95.4%, a recall rate of 86.1%, and a mAP score of 91.8%.
Xu et al. [114] developed a model for counting the number of leaves in corn crops using semi-supervised learning with two deep learning models and aerial images captured by UAV. The authors segmented the complete corn seedling dataset using the SOLOv2 and YOLOv5x algorithms. The results of the study showed that the SOLOv2 ResNet101 model outperformed the SOLOv2 ResNet50 model, achieving an average accuracy of 93.6%. The YOLOv5x model presented an average accuracy of 89.6% for fully unfolded leaves and 57.4% for newly appeared leaves.
Feng et al. [115] used deep learning to detect and count cotton seedlings based on multispectral images captured by UAVs. The authors trained three deep machine learning models, YOLOv7, YOLOv5, and CenterNet, to detect and count cotton seedlings in six different periods. The result of the study showed that the YOLOv7 model obtained the best results, with precision, recall, and F1-score of 96.9%, 96.6%, and 96.7%, respectively, and R2, RMSE, and relative root mean square error (RRMSE) values of 94%, 3.83%, and 2.72%, respectively.
Tunca et al. [116] trained five machine learning models to estimate leaf area index in sorghum crops based on aerial UAV images. The authors conducted a field experiment with four treatments over two years and captured multispectral and thermal UAV images, as well as destructive leaf area index measurements for comparison. The five algorithms used for model training include k-nearest neighbors (K-NN), extra trees regressor (ETR), XGBoost, random forest, and support vector regression (SVR). The results of the study showed that the K-NN model presented the highest accuracy among the models, with R2, RMSE, and MAPE indices of 97%, 46%, and 19.7%, respectively.
Ma et al. [117] proposed a deep learning model to predict agricultural productivity based on multispectral and thermal images captured by UAVs in wheat fields. The authors used a MultimodalNet model and compared fusion methods and features derived from the multimodal images. A final machine learning model was then built using the spectral, thermal, and texture features of the canopy. The study results showed that the model performed best in the forcing phase, with a coefficient of determination of 74.11% and a MAPE of 6.05%.
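One simple form of multimodal fusion is early (feature-level) fusion, concatenating spectral, thermal, and texture features before regression; the sketch below illustrates that idea with random stand-in features and should not be read as the MultimodalNet architecture of Ma et al. [117].

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
spectral = rng.normal(size=(100, 6))  # e.g., band reflectances / indices
thermal = rng.normal(size=(100, 2))   # e.g., canopy temperature statistics
texture = rng.normal(size=(100, 4))   # e.g., GLCM texture statistics

X = np.hstack([spectral, thermal, texture])          # early (feature-level) fusion
y = X @ rng.normal(size=12) + rng.normal(size=100)   # synthetic yield target

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(random_state=0).fit(X_tr, y_tr)
print("held-out R2:", round(model.score(X_te, y_te), 3))
```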
Liu et al. [118] developed a method to improve the accuracy of maize leaf area estimation using multispectral imaging and machine learning. The authors used an RF algorithm to process parallel training data and a gradient boosting decision tree (GBDT) algorithm to minimize the difference between the predicted and true output values. The results of the study showed that the proposed model presented better accuracy, with an R2 of 94% and an NRMSE of 9.35% at the V14 leaf stage.
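One standard way to couple a bagging learner (RF) with a boosting learner (GBDT) is stacking; the scikit-learn sketch below shows that generic pattern on synthetic data, without claiming to reproduce the exact hybrid scheme of Liu et al. [118].

```python
from sklearn.datasets import make_regression
from sklearn.ensemble import (GradientBoostingRegressor, RandomForestRegressor,
                              StackingRegressor)
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

X, y = make_regression(n_samples=200, n_features=10, noise=3.0, random_state=2)

# Stack a bagging learner (RF) and a boosting learner (GBDT) under a
# linear meta-learner: one generic way to combine the two families.
stack = StackingRegressor(
    estimators=[("rf", RandomForestRegressor(random_state=2)),
                ("gbdt", GradientBoostingRegressor(random_state=2))],
    final_estimator=LinearRegression(),
)
print("stacked R2:", round(cross_val_score(stack, X, y, cv=5, scoring="r2").mean(), 3))
```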
Demir et al. [119] developed yield prediction models for organic rose (Rosa rubiginosa) cultivation based on UAV-captured images and machine learning algorithms. The models included multiple linear regression (MLR), multivariate adaptive regression splines (MARS), decision-tree methods (chi-square automatic interaction detector (CHAID), exhaustive CHAID (Ex-CHAID), and classification and regression tree (CART)), random forest, and ANN, all used to predict agricultural productivity in organic rose cultivation. The results showed that the trained models obtained R2 values of 90.7% (MARS), 88.8% (Ex-CHAID), 93.1% (CART 1), and 90.9% (RF1), and were able to predict the early yield of organic roses.
Jamali et al. [120] monitored wheat biophysical variables using UAV multispectral aerial imagery and deep learning models. The authors estimated wheat plant height, leaf stage, leaf area index, nitrogen content, and dry matter at various growth stages. The models used in the study include ANN, SVM, and a deep neural network (DNN). The results showed that the DNN presented the best performance for all parameters except nitrogen percentage. For plant height estimation, the reported metrics were r = 82%, 66%, 71%, and 53%; RMSE = 9.61 cm, 17.54 cm, 16.26 cm, and 17.70 cm; and MAE = 7.13 cm, 14.91 cm, 14.37 cm, and 14.83 cm.
Qu et al. [121] evaluated two machine learning models to estimate the agricultural yield of wild blueberry (Vaccinium corymbosum L.) crops using UAVs. The authors used RGB aerial images and their color and texture characteristics to predict blueberry production in an experimental area based on the RF and XGBoost models. The results of the study showed that the XGBoost model achieved the better performance, with an R2 of 89%, an RMSE of 542 g/m2, and a MAPE of 380 g/m2, compared to the RF model.
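The texture characteristics used in such models are typically simple image statistics; the following sketch shows how gray-level co-occurrence matrix (GLCM) texture features can be extracted with scikit-image (function names per scikit-image ≥ 0.19) from a synthetic stand-in patch.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

# Synthetic 8-bit patch standing in for a grayscale band of an RGB UAV tile
patch = (np.random.default_rng(1).random((64, 64)) * 255).astype(np.uint8)

glcm = graycomatrix(patch, distances=[1], angles=[0], levels=256,
                    symmetric=True, normed=True)
features = {prop: float(graycoprops(glcm, prop)[0, 0])
            for prop in ("contrast", "homogeneity", "energy", "correlation")}
print(features)  # texture feature vector for a regressor such as RF or XGBoost
```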
Table 7 presents the articles included in this SLR that applied AI techniques to estimate and evaluate the final production of various agricultural crops, as well as the types of sensors and evaluation metrics used in each study.
Interpreting the types of agricultural problems that recur in the field depends on the experience of the professional handling them, and each case requires its own analysis, which takes time before the problem is identified and resolved. According to Sivakumar et al. [122], there is growing interest in machine learning algorithms that can reduce repetitive fieldwork and optimize tasks involving the diagnosis and analysis of agricultural data, saving growers time and money.
This review identified that research with images captured by UAVs, combined with machine learning models, can significantly assist in the detection, classification, and resolution of agronomic problems in crops. The popularization of deep learning and the increase in computational power in recent years have contributed considerably to improving the accuracy of computer vision results.

3.7. Percentage of SLR Research According to Techniques That Automate Agricultural Processes

Quantifying the articles obtained in this review by topic, we have the following: 30.0% addressed weeds; 4.28% nutritional deficiency; 11.42% water stress; 18.57% plant diseases; 8.57% agricultural pests; and 28.57% yield estimation.

3.8. Answers to Research Questions

(Q1) In what types of agronomic study problems have AI been applied through image analysis to support agricultural decisions?
This SLR presents the main applications of AI for the following types of agronomic problems: weeds in Section 3.2 (Table 2); nutritional deficiency in Section 3.3 (Table 3); water stress in Section 3.4 (Table 4); plant diseases in Section 3.5 (Table 5); agricultural pests in Section 3.6 (Table 6); and yield estimation in Section 3.7 (Table 7).
(Q2) What were the techniques and accuracy of the AI models applied?
Techniques/methods, metrics, and accuracy are available in Table 2, Table 3, Table 4, Table 5, Table 6 and Table 7, in the “Metrics” and “Precision” columns; the distribution of articles per type of agronomic problem is quantified in Section 3.7.
(Q3) What are the challenges/limitations in the applicability of AI to each type of agricultural image analysis?
The challenges/limitations of using AI in these analyses were compiled from the articles. The main limitation observed was the lack of public datasets: since private datasets are used, experiments cannot be replicated or compared, making it impossible to evaluate results against the state of the art. Furthermore, training the models requires annotating large amounts of data for supervised learning, and the image labeling process is laborious and demands significant human effort.
(Q4) Which techniques/methods were most used?
Supervised machine learning techniques were the most used, mainly CNN-based architectures. These techniques showed good accuracy in image detection, segmentation, and classification tasks, and CNN models are capable of delivering state-of-the-art solutions to problems involving computer vision.
(Q5) How can AI contribute to supporting agricultural image analysis?
Recent research indicates that systems using AI have the potential to automate processes in agriculture, such as automatic weed detection, plant disease detection and classification, and yield estimation, among others. Furthermore, deep learning models can automate processes while taking less time to complete tasks, improving the grower’s profitability.
(Q6) What type of agronomic problem was most studied in research?
The result of the SLR quantification shows that studies concerning weeds correspond to 30% of the articles, indicating that this was the area of greatest interest to the scientific community. Figures 4 and 5 make evident the discrepancy in the number of publications per year and per type of agronomic problem.
Articles showing machine and deep learning techniques unrelated to agricultural analysis were excluded [123,124,125,126,127,128]. Additionally, articles addressing AI techniques in agriculture but not using UAVs for image capture or describing the types of agricultural analysis used were also discarded from this study [129,130,131].
Countries that use AI technologies to improve and assist agricultural processes are highlighted in Figure 6. In Brazil, according to the data collected in this SLR, studies that use AI in agriculture are frequent in the Central-West and Southeast regions of the country, focusing on the estimation of agricultural productivity and on the detection and classification of pests and weeds in commercial crops, with emphasis on models such as ANN, RF, PLSR, AlexNet, and SVM [41,94,105].
In other regions of the world, such as China and India (Figure 6), where the number of studies applying AI to solve agronomic problems is greater than in other countries, research focuses mainly on the practical use of this technology in plantations and on how it can be made available to producers [46,50,69,70,73,75,79,103]. For example, Huang et al. [40] highlighted that their proposed AI technology can be deployed with the help of cloud and edge computing to perform model inference and, jointly, create a real-time data processing platform for weed detection. Another study suggested developing automated annotation software that integrates semi-supervised methods with the SOLOv2 and YOLOv5x deep learning models, using small amounts of labeled data to train the models and predict objects of interest captured by UAVs in real time, thus reducing the labor costs that agricultural producers would incur when hiring this type of service [114]. Therefore, the use of AI techniques applied to agriculture is becoming more common and accessible.
The potential applicability of AI, mainly associated with the use of geotechnologies, offers innovative and sustainable solutions aimed at automating and improving agriculture [105,132]. However, there are major challenges in the short and medium term, such as the lack of technical training in digital data processing and digital technologies, the scarcity of robust publicly accessible datasets for training deep learning models, and gaps in the knowledge, training, and education of farmers. It is also worth highlighting the distrust of, and resistance to, new AI-based technologies among rural producers. Authors who proposed AI studies applied to agriculture reported that the difficulties and challenges in applying these techniques in practice are real and need to be overcome through the dissemination of information in rural areas [38,39,41,49]. To overcome these obstacles, we also recommend the creation and promotion of programs on the use and applicability of AI tools in the agricultural field, aiming to train and qualify rural producers in a practical and technical way.
Thus, the AI studies presented in this review contribute to strengthening agricultural practices worldwide, with insights for increasing crop productivity (e.g., optimizing the use of irrigation water and agricultural inputs); real-time monitoring of agricultural areas (e.g., associated with the use of UAVs and satellites/sensors to detect problems of water deficiency, diseases, and pests early); precision agriculture (e.g., efficient crop management, such as more efficient input application associated with geotechnologies); irrigation management (e.g., automation of irrigation and optimization of water use) [133,134,135]; harvest forecasting (e.g., planning management practices and predicting crop yields); and sustainability (e.g., promoting a reduction in chemical use through more sustainable and efficient agricultural practices) [136,137,138].
It is also worth highlighting that, given the efforts of research institutions in partnership with public and private companies specifically in Brazil, small and medium-sized rural producers have been trained and qualified to adopt AI techniques practically and effectively, ensuring more appropriate management and increased crop productivity.

3.9. Limitations

This study is not free from limitations. The limitations of using AI in these analyses were compiled from the articles, and the main one was the lack of public datasets. Some of the datasets from the studies in this SLR are private, making it impossible to replicate or compare the proposed experiments and difficult to conduct a robust and meaningful meta-analysis. This highlights the need for more rigorous and replicable research methodologies to increase the reliability of results in research using AI and UAV imagery to solve agronomic problems. Furthermore, training the models requires annotating large amounts of data for supervised learning, and the image labeling process is laborious and requires significant human effort. Even considering the care taken during the planning and selection stages of this SLR, the included studies show variability, indicating that some did not reach the highest methodological standards.

4. Conclusions

The results presented in this SLR show that there have been significant advances in supporting agricultural decisions in the last six years. Deep learning methods have shown better results in supporting weed detection, classifying plant diseases, and estimating agricultural productivity in crops through images captured by UAVs. The YOLO and CNN models and their variations presented the best results for all groups presented in this SLR. This research topic is still open, due to the rapid evolution of systems that use AI to aid decision-making; however, it already presents highly satisfactory results in terms of efficiency in data analysis. Monitoring the updates in and applications of AI tools deserves the attention of researchers who continually seek to improve automated processes, with increases in the efficiency of data analysis arising from agricultural management.
The data collected and analyzed in this systematic review synthesized the most recent research, published in the most relevant databases today, which proposed using advanced AI techniques to solve pertinent problems in agriculture. For a complete survey, article summaries were created and divided into group studies with similar objectives to identify the state-of-the-art whenever possible and the challenges and limitations of using AI in this context.
Future research efforts on the application of AI in agriculture should aim to improve the accuracy of existing AI models in identifying, segmenting, and classifying new problems that may arise in the agronomic area. Based on this study, prospective studies are expected to be well designed and reported, especially when describing the AI techniques used. Therefore, this study can serve as a guide for researchers who continually seek to improve the quality of machine learning models, especially those aimed at solving agricultural problems.
We recommend the creation and promotion of programs that address topics on the use and applicability of AI tools in the agricultural field, aiming to provide practical and technical training to rural producers. Disseminating and promoting digital inclusion in agriculture through AI can favor the education and training of rural producers.

Author Contributions

Conceptualization, J.A.O.S.S. and H.F.E.d.O.; methodology, J.A.O.S.S., H.F.E.d.O., J.L.B.d.S., M.V.d.S., M.M. and V.S.d.S.; software, J.A.O.S.S.; validation, J.A.O.S.S., H.F.E.d.O., M.M. and J.L.B.d.S.; formal analysis, J.A.O.S.S., V.S.d.S., L.S.R.V., J.P.B.L., M.M., L.N.L., H.F.E.d.O., J.L.B.d.S., J.A.O.S.S. and M.V.d.S.; investigation, J.A.O.S.S., H.F.E.d.O., J.L.B.d.S., M.V.d.S., M.M. and V.S.d.S.; resources, J.A.O.S.S., R.S.F., H.F.E.d.O. and J.L.B.d.S.; data curation, J.A.O.S.S. and H.F.E.d.O.; writing—original draft preparation, J.A.O.S.S.; writing—review and editing, J.A.O.S.S., M.M., J.P.B.L., H.F.E.d.O., V.S.d.S., J.L.B.d.S., M.V.d.S., L.N.L., R.S.F. and L.S.R.V.; visualization, J.A.O.S.S., H.F.E.d.O., J.P.B.L., L.S.R.V., J.L.B.d.S., M.M., L.N.L., M.V.d.S. and V.S.d.S.; supervision, H.F.E.d.O.; project administration, H.F.E.d.O.; funding acquisition, R.S.F. and H.F.E.d.O. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Council for Scientific and Technological Development (CNPq), under processes 407465/2021-9 and 420296/2023-9; the Foundation for Research Support of the State of Goiás (FAPEG), grant number 03/2018; Embrapa Rice and Beans; the University of Georgia; and internal funding from the Goiano Federal Institute—Campus Ceres.

Data Availability Statement

The original contributions presented in the study are included in the article; further inquiries can be directed to the corresponding author.

Acknowledgments

We would like to thank the Cerrado Irrigation Graduate Program and the Laboratório de Tecnologias de Irrigação (Lab.TI) of the Goiano Federal Institute—Campus Ceres for the technical and technological support in conducting this research.

Conflicts of Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

References

  1. O Brasil na Revolução 4.0. CEPEA-ESALQ/USP. 2019. Available online: https://www.cepea.esalq.usp.br/br/opiniao-cepea/o-brasil-na-revolucao-4-0.aspx (accessed on 3 April 2023).
  2. Rossetto, R.; Santiago, A.D. Cana: Plantas Daninhas; Embrapa: Parque Estação Biológica, PqEB: Brasília, Brazil, 2022. Available online: https://www.embrapa.br/agencia-de-informacao-tecnologica/cultivos/cana/producao/manejo/plantas-daninhas (accessed on 5 April 2023).
  3. Bah, M.D.; Hafiane, A.; Canals, R. CRowNet: Deep network for Crop row detection in UAV images. IEEE Access 2019, 8, 5189–5200. [Google Scholar] [CrossRef]
  4. Castro, A.I.; Torres-Sánchez, J.; Peña, J.M.; Jiménez-Brenes, F.M.; Csillik, O.; López-Granados, F. An automatic random forest-OBIA algorithm for early weed mapping between and within crop rows using UAV imagery. Remote Sens. 2018, 10, 285. [Google Scholar] [CrossRef]
  5. Gill, S.S.; Tuli, S.; Xu, M.; Singh, I.; Singh, K.V.; Lindsay, D.; Tuli, S.; Smirnova, D.; Singh, M.; Jain, U.; et al. Transformative effects of IoT, Blockchain and Artificial Intelligence on cloud computing: Evolution, vision, trends and open challenges. Internet Things 2019, 8, 100118. [Google Scholar] [CrossRef]
  6. Lotito, V.; Zambelli, T. Pattern detection in colloidal assembly: A mosaic of analysis techniques. Adv. Colloid Interface Sci. 2020, 284, 102252. [Google Scholar] [CrossRef]
  7. Etienne, A.; Ahmad, A.; Aggarwal, V.; Saraswat, D. Deep learning-based object detection system for identifying weeds using UAS imagery. Remote Sens. 2021, 13, 5182. [Google Scholar] [CrossRef]
  8. Inteligência Artificial Torna Mais Preciso o Mapeamento da Intensificação Agrícola no Cerrado. Embrapa. 2023. Available online: https://www.embrapa.br/busca-de-noticias/-/noticia/83327528/inteligencia-artificial-torna-mais-preciso-o-mapeamento-da-intensificacao-agricola-no-cerrado (accessed on 6 April 2023).
  9. Shankar, H.R.; Veeraraghavan, A.K.; Uvais; Sivaraman, K.U.; Ramachandran, S.S. Application of UAV for Pest, Weeds and Disease Detection using Open Computer Vision. In Proceedings of the 2018 International Conference on Smart Systems and Inventive Technology (ICSSIT), Tirunelveli, India, 13–14 December 2018; pp. 287–292. [Google Scholar] [CrossRef]
  10. Hassler, S.C.; Baysal-Gurel, F. Unmanned aircraft system (UAS) technology and applications in agriculture. Agronomy 2019, 9, 618. [Google Scholar] [CrossRef]
  11. Hamylton, S.M.; Morris, R.H.; Carvalho, R.C.; Roder, N.; Barlow, P.; Mills, K.; Wang, L. Evaluating techniques for mapping island vegetation from unmanned aerial vehicle (UAV) images: Pixel classification, visual interpretation and machine learning approaches. Int. J. Appl. Earth Obs. Geoinf. 2020, 89, 102085. [Google Scholar] [CrossRef]
  12. Haq, M.A. CNN Based Automated Weed Detection System Using UAV Imagery. Comput. Syst. Sci. Eng. 2021, 42, 837–849. [Google Scholar] [CrossRef]
  13. Islam, N.; Rashid, M.M.; Wibowo, S.; Xu, C.Y.; Morshed, A.; Wasimi, S.A.; Moore, S.; Rahman, S.M. Early weed detection using image processing and machine learning techniques in an Australian chili farm. Agriculture 2021, 11, 387. [Google Scholar] [CrossRef]
  14. Belete, N.A.S.; Tetila, E.C.; Astolfi, G.; Pistori, H. Classification of weed in soybean crops using unmanned aerial vehicle images. In Proceedings of the XV Workshop de Visão Computacional, Sao Bernardo do Campo, Brazil, 9–11 September 2019; pp. 121–125. [Google Scholar] [CrossRef]
  15. Salazar, J.; Sánchez-De La Cruz, E.; Ochoa-Zezzatti, A.; Rivera, M.M. Diagnosis of Collateral Effects in Climate Change Through the Identification of Leaf Damage Using a Novel Heuristics and Machine Learning Framework. In Metaheuristics in Machine Learning: Theory and Applications; Springer: Cham, Switzerland, 2021; pp. 61–75. [Google Scholar] [CrossRef]
  16. Ferreira, C.M.; Barrigossi, J.A.F. Embrapa Rice and Beans: Tradition and Food Security. Technical Editors. 2021, 16. Available online: http://www.cnpaf.embrapa.br/languages/ricebeans.php (accessed on 6 April 2023).
  17. Sanders, J.T.; Jones, E.A.L.; Austin, R.; Roberson, G.T.; Richardson, R.J.; Everman, W.J. Remote sensing for palmer amaranth (Amaranthus palmeri s. wats.) detection in soybean (Glycine max (L.) Merr.). Agronomy 2021, 11, 1909. [Google Scholar] [CrossRef]
  18. Valente, J.; Doldersum, M.; Roers, C.; Kooistra, L. Detecting Rumex Obtusifolius weed plants in grasslands from UAV RGB imagery using deep learning. Remote Sens. Spat. Inf. Sci. 2019, 4, 179–185. [Google Scholar] [CrossRef]
  19. Peña, J.M.; Torres-Sánchez, J.; Castro, A.I.; Kelly, M.; López-Granados, F. Weed Mapping in Early-Season Maize Fields Using Object-Based Analysis of Unmanned Aerial Vehicle (UAV) Images. PLoS ONE 2018, 8, e77151. [Google Scholar] [CrossRef] [PubMed]
  20. Pham, F.; Raheja, A.; Bhandari, S. Machine learning models for predicting lettuce health using UAV imagery. In Proceedings of the SPIE 11008, Autonomous Air and Ground Sensing Systems for Agricultural Optimization and Phenotyping IV, Baltimore, MD, USA, 14–18 April 2019; p. 110080Q. [Google Scholar] [CrossRef]
  21. Preethi, C.; Brintha, N.C.; Yogesh, C.K. A comprehensive survey on applications of precision agriculture in the context of weed classification, leave disease detection, yield prediction and UAV Image analysis. Adv. Parallel Comput. 2021, 39, 296–306. [Google Scholar] [CrossRef]
  22. Sun, G.; Xie, H.; Sinnott, R.O. A Crop Water Stress Monitoring System Utilizing a Hybrid e-Infrastructure. In Proceedings of the 10th International Conference on Utility and Cloud Computing, Austin, TX, USA, 5–8 December 2017; pp. 161–170. [Google Scholar] [CrossRef]
  23. Siqueira, V.S.; Borges, M.M.; Furtado, R.G.; Dourado, C.N.; Costa, R.M. Artificial intelligence applied to support medical decisions for the automatic analysis of echocardiogram images: A systematic review. Artif. Intell. Med. 2021, 120, 102165. [Google Scholar] [CrossRef] [PubMed]
  24. Kitchenham, B. Procedures for Performing Systematic Reviews; Keele University: Keele, UK, 2004; pp. 1–26. [Google Scholar]
  25. Page, M.J.; McKenzie, J.E.; Bossuyt, P.M.; Boutron, I.; Hoffmann, T.C.; Mulrow, C.D.; Shamseer, L.; Tetzlaff, J.M.; Akl, E.A.; Brennan, S.E.; et al. The PRISMA 2020 statement: An updated guideline for reporting systematic reviews. BMJ 2021, 372, 71. [Google Scholar] [CrossRef]
  26. ACM Digital Library. Available online: https://dl.acm.org/search/advanced (accessed on 3 April 2023).
  27. IEEE Xplore Digital Library. Available online: http://ieeexplore.ieee.org/Xplore/home.jsp (accessed on 3 April 2023).
  28. Science Direct—Elsevier. Available online: https://www.sciencedirect.com/search/advanced (accessed on 3 April 2023).
  29. MDPI—Publisher of Open Access Journals. Available online: https://www.mdpi.com/ (accessed on 5 April 2023).
  30. Web of Science. Available online: https://apps.webofknowledge.com (accessed on 5 April 2023).
  31. Jackulin, C.; Murugavalli, S. A comprehensive review on detection of plant disease using machine learning and deep learning approaches. Meas. Sens. 2022, 24, 100441. [Google Scholar] [CrossRef]
  32. Mohidem, N.A.; Che’ya, N.N.; Juraimi, A.S.; Ilahi, W.F.F.; Roslim, M.H.M.; Sulaiman, N.; Saberioon, M.; Noor, N.M. How can unmanned aerial vehicles be used for detecting weeds in agricultural fields? Agriculture 2021, 11, 1004. [Google Scholar] [CrossRef]
  33. Rai, N.; Zhang, Y.; Ram, B.G.; Schumacher, L.; Yellavajjala, R.K.; Bajwa, S.; Sun, X. Applications of deep learning in precision weed management: A review. Comput. Electron. Agric. 2023, 206, 107698. [Google Scholar] [CrossRef]
  34. Shahi, T.B.; Xu, C.-Y.; Neupane, A.; Guo, W. Recent Advances in Crop Disease Detection Using UAV and Deep Learning Techniques. Remote Sens. 2023, 15, 2450. [Google Scholar] [CrossRef]
  35. Kuswidiyanto, L.W.; Noh, H.H.; Han, X.Z. Plant disease diagnosis using deep learning based on aerial hyperspectral images: A review. Remote Sens. 2022, 14, 6031. [Google Scholar] [CrossRef]
  36. Varah, A.; Ahodo, K.; Coutts, S.R.; Hicks, H.L.; Comont, D.; Crook, L.; Hull, R.; Neve, P.; Childs, D.Z.; Freckleton, R.P. The costs of human-induced evolution in an agricultural system. Nat. Sustain. 2020, 3, 63–71. [Google Scholar] [CrossRef] [PubMed]
  37. Hoeser, T.; Bachofer, F.; Kuenzer, C. Object detection and image segmentation with deep learning on Earth observation data: A review—Part II: Applications. Remote Sens. 2020, 12, 3053. [Google Scholar] [CrossRef]
  38. Fraccaro, P.; Butt, J.; Edwards, B.; Freckleton, R.P.; Childs, D.Z.; Reusch, K.; Comont, D. A Deep Learning Application to Map Weed Spatial Extent from Unmanned Aerial Vehicles Imagery. Remote Sens. 2022, 14, 4197. [Google Scholar] [CrossRef]
  39. Bah, H.; Hafiane, A.; Canals, R. Deep Learning with Unsupervised Data Labeling for Weed Detection in Line Crops in UAV Images. Remote Sens. 2018, 10, 1690. [Google Scholar] [CrossRef]
  40. Huang, H.; Lan, Y.; Yang, A.; Zhang, Y.; Wen, S.; Deng, J. Deep learning versus Object-based Image Analysis (OBIA) in weed mapping of UAV imagery. Int. J. Remote Sens. 2020, 41, 3446–3479. [Google Scholar] [CrossRef]
  41. Beeharry, Y.; Bassoo, V. Performance of ANN and AlexNet for weed detection using UAV-based images. In Proceedings of the 2020 3rd International Conference on Emerging Trends in Electrical, Electronic and Communications Engineering (ELECOM), Balaclava, Mauritius, 25–27 November 2020; pp. 163–167. [Google Scholar] [CrossRef]
  42. Reedha, R.; Dericquebourg, E.; Canals, R.; Hafiane, A. Transformer Neural Network for Weed and Crop Classification of High-Resolution UAV Images. Remote Sens. 2022, 14, 592. [Google Scholar] [CrossRef]
  43. Genze, N.; Ajekwe, R.; Güreli, Z.; Haselbeck, F.; Grieb, M.; Grimm, D.G. Deep learning-based early weed segmentation using motion blurred UAV images of sorghum fields. Comput. Electron. Agric. 2022, 202, 168. [Google Scholar] [CrossRef]
  44. Gallo, I.; Rehman, A.U.; Dehkord, R.H.; Landro, N.; La Grassa, R.; Boschetti, M. Deep Object Detection of Crop Weeds: Performance of YOLOv7 on a Real Case Dataset from UAV Images. Remote Sens. 2023, 15, 539. [Google Scholar] [CrossRef]
  45. Ajayi, O.G.; Ashi, J.; Guda, B. Performance evaluation of YOLO v5 model for automatic crop and weed classification on UAV images. Smart Agric. Technol. 2023, 5, 100231. [Google Scholar] [CrossRef]
  46. Pei, H.; Sun, Y.; Huang, H.; Zhang, W.; Sheng, J.; Zhang, Z. Weed Detection in Maize Fields by UAV Images Based on Crop Row Preprocessing and Improved YOLOv4. Agriculture 2022, 12, 975. [Google Scholar] [CrossRef]
  47. Su, J.; Yi, D.; Coombes, M.; Liu, C.; Zhai, X.; McDonald-Maier, K.; Chen, W.-H. Spectral analysis and mapping of blackgrass weed by leveraging machine learning and UAV multispectral imagery. Comput. Electron. Agric. 2022, 192, 106621. [Google Scholar] [CrossRef]
  48. Barrero, O.; Perdomo, S.A. RGB and multispectral UAV image fusion for Gramineae weed detection in rice fields. Precis. Agric. 2018, 19, 809–822. [Google Scholar] [CrossRef]
  49. Bah, M.D.; Hafiane, A.; Canals, R.; Emile, B. Deep features and One-class classification with unsupervised data for weed detection in UAV images. In Proceedings of the 2019 9th International Conference on Image Processing Theory, Tools and Applications IPTA, Istanbul, Turkey, 6–9 November 2019; pp. 1–5. [Google Scholar] [CrossRef]
  50. Naveed, A.; Muhammad, W.; Irshad, M.J.; Aslam, M.J.; Manzoor, S.M.; Kauser, T.; Lu, Y. Saliency-Based Semantic Weeds Detection and Classification Using UAV Multispectral Imaging. IEEE Access 2023, 11, 11991–12003. [Google Scholar] [CrossRef]
  51. Chegini, H.; Beltran, F.; Mahanti, A. Designing and Developing a Weed Detection Model for California Thistle. ACM Trans. Internet Technol. 2023, 48, 29. [Google Scholar] [CrossRef]
  52. Xu, W.; Chen, P.; Zhan, Y.; Chen, S.; Zhang, L.; Lan, Y. Cotton yield estimation model based on machine learning using time series UAV remote sensing data. Int. J. Appl. Earth Obs. Geoinf. 2021, 104, 102511. [Google Scholar] [CrossRef]
  53. Nagothu, S.K.; Anitha, G.; Siranthini, B.; Anandi, V.; Prasad, P.S. Weed detection in agricultural crops using unmanned aerial vehicles and machine learning. Mater. Proc. 2023, in press. [Google Scholar] [CrossRef]
  54. Nasiri, A.; Omid, M.; Taheri-Garavand, A.; Jafari, A. Deep learning-based precision agriculture through weed recognition in sugar beet fields. Sustain. Comput. Inform. Syst. 2022, 35, 100759. [Google Scholar] [CrossRef]
  55. Ajayi, O.G.; Ashi, J. Effect of varying training epochs of a Faster Region-Based Convolutional Neural Network on the Accuracy of an Automatic Weed Classification Scheme. Smart Agric. Technol. 2022, 3, 100128. [Google Scholar] [CrossRef]
  56. Rahman, A.; Lu, Y.; Wang, H. Performance Evaluation of Deep Learning Object Detectors for Weed Detection for Cotton. Smart Agric. Technol. 2022, 3, 100126. [Google Scholar] [CrossRef]
  57. Diao, Z.; Guo, P.; Zhang, B.; Yan, J.; He, Z.; Zhao, S.; Zhao, C.; Zhang, J. Navigation line extraction algorithm for corn spraying robot based on improved YOLOv8s network. Comput. Electron. Agric. 2023, 212, 108049. [Google Scholar] [CrossRef]
  58. Mekhalfa, F.; Yacef, F.; Belhocine, M. Pre-trained Deep Learning Models for UAV-based Weed Recognition. In Proceedings of the 2023 Signal Processing: Algorithms, Architectures, Arrangements, and Applications (SPA), Poznan, Poland, 20–22 September 2023; IEEE: Piscataway, NJ, USA, 2023. [Google Scholar] [CrossRef]
  59. Taha, M.F.; Abdalla, A.; ElMasry, G.; Gouda, M.; Zhou, L.; Zhao, N.; Liang, N.; Niu, Z.; Hassanein, A.; Al-Rejaie, S.; et al. Using Deep Convolutional Neural Network for Image-Based Diagnosis of Nutrient Deficiencies in Plants Grown in Aquaponics. Chemosensors 2022, 10, 45. [Google Scholar] [CrossRef]
  60. Fischer, H.; Romano, N.; Jones, J.; Howe, J.; Renukdas, N.; Sinha, A.K. Comparing water quality/bacterial composition and productivity of largemouth bass Micropterus salmoides juveniles in a recirculating aquaculture system versus aquaponics as well as plant growth/mineral composition with or without media. Aquaculture 2021, 538, 736554. [Google Scholar] [CrossRef]
  61. Barzin, R.; Lotfi, H.; Varco, J.J.; Bora, G.C. Machine Learning in Evaluating Multispectral Active Canopy Sensor for Prediction of Corn Leaf Nitrogen Concentration and Yield. Remote Sens. 2022, 14, 120. [Google Scholar] [CrossRef]
  62. Sathyavani, R.; JagnMohan, K.; Kalaavathi, B. Detection of plant leaf nutrients using convolutional neural network based Internet of Things data acquisition. Int. J. Nonlinear Anal. 2021, 2, 1175–1186. [Google Scholar] [CrossRef]
  63. Yang, T.; Kim, H.J. Characterizing Nutrient Composition and Concentration in Tomato-, Basil-, and Lettuce-Based Aquaponic and Hydroponic Systems. Water 2020, 12, 1259. [Google Scholar] [CrossRef]
  64. Sabzi, S.; Pourdarbani, R.; Rhoban, M.H.; Garccía-Mateos, G.; Arribas, J.I. Estimation of nitrogen content in cucumber plant (Cucumis sativus L.) leaves using hyperspectral imaging data with neural network and partial least squares regressions. Chemom. Intell. Lab. Syst. 2021, 217, 104404. [Google Scholar] [CrossRef]
  65. Zhang, L.; Niu, Y.; Han, W.; Liu, Z. Establishing Method of CropWater Stress Index Empirical Model of Field Maize. Nongye Jixie Xuebao/Trans. Chin. Soc. Agric. Mach. 2018, 49, 233–239. [Google Scholar] [CrossRef]
  66. Zhang, Z.; Bian, J.; Han, W.; Fu, Q.; Chen, S.; Cui, T. Diagnosis of Cotton Water Stress Using Unmanned Aerial Vehicle Thermal Infrared Remote Sensing after Removing Soil. Nongye Jixie Xuebao/Trans. Chin. Soc. Agric. Mach. 2018, 49, 250–260. [Google Scholar] [CrossRef]
  67. Li, Y.; Yan, H.; Cai, D.; Gu, T.; Sui, R.; Chen, D. Evaluating the water application uniformity of center pivot irrigation systems in Northern China. Int. Agric. Eng. J. 2018. [Google Scholar] [CrossRef]
  68. Bhandari, S.; Raheja, A.; Do, D.; Pham, F. Machine learning techniques for the assessment of citrus plant health using UAV-based digital images. In Proceedings of the Autonomous Air and Ground Sensing Systems for Agricultural Optimization and Phenotyping III, Orlando, FL, USA, 15–19 April 2018. [Google Scholar] [CrossRef]
  69. Sankararao, U.G.; Priyanka, G.; Rajalakshmi, P.; Choudhary, S. CNN Based Water Stress Detection in Chickpea Using UAV Based Hyperspectral Imaging. In Proceedings of the 2021 IEEE International India Geoscience and Remote Sensing Symposium (InGARSS), Ahmedabad, India, 6–10 December 2021; pp. 145–148. [Google Scholar] [CrossRef]
  70. Sankararao, U.G.; Rajalakshmi, P.; Kaliamoorthy, S.; Choudhary, S. Water Stress Detection in Pearl Millet Canopy with Selected Wavebands using UAV Based Hyperspectral Imaging and Machine Learning. In Proceedings of the IEEE Sensors Applications Symposium (SAS), Sundsvall, Sweden, 1–3 August 2022; pp. 1–6. [Google Scholar] [CrossRef]
  71. Tunca, E.; Köksal, E.S.; Taner, S.Ç. Calibrating UAV Thermal Sensors using Machine Learning Methods for Improved Accuracy in Agricultural Applications. Infrared Phys. Technol. 2023, 133, 104804. [Google Scholar] [CrossRef]
  72. Bertalan, L.; Holb, I.; Pataki, A.; Négyesi, G.; Szabó, G.; Szalóki, A.K.; Szabó, S. UAV-based multispectral and thermal cameras to predict soil water content—A machine learning approach. Comput. Electron. Agric. 2022, 200, 107262. [Google Scholar] [CrossRef]
  73. Niu, Y.; Han, W.; Zhang, H.; Zhang, L.; Chen, H. Estimating fractional vegetation cover of maize under water stress from UAV multispectral imagery using machine learning algorithms. Comput. Electron. Agric. 2021, 189, 106414. [Google Scholar] [CrossRef]
  74. Das, S.; Christopher, J.; Apan, A.; Choudhury, M.R.; Chapman, S.; Menzies, N.W.; Dang, Y.P. Evaluation of water status of wheat genotypes to aid prediction of yield on sodic soils using UAV-thermal imaging and machine learning. Agric. For. Meteorol. 2021, 307, 108477. [Google Scholar] [CrossRef]
  75. Wang, J.; Lou, Y.; Wang, W.; Liu, S.; Zhang, H.; Hui, X.; Wang, Y.; Yan, H.; Maes, W.H. A robust model for diagnosing water stress of winter wheat by combining UAV multispectral and thermal remote sensing. Agric. Water Manag. 2024, 291, 108616. [Google Scholar] [CrossRef]
  76. Sumesh, K.C.; Ninsawat, S.; Som-ard, J. Integration of RGB-based vegetation index, crop surface model and object-based image analysis approach for sugarcane yield estimation using unmanned aerial vehicle. Comput. Electron. Agric. 2021, 180, 105903. [Google Scholar] [CrossRef]
  77. Sanseechan, P.; Saengprachathanarug, K.; Posom, J.; Wongpichet, S.; Chea, C.; Wongphati, M. Use of vegetation indices in monitoring sugarcane white leaf disease symptoms in sugarcane field using multispectral UAV aerial imagery. IOP Conf. Ser. Earth Environ. Sci. 2019, 301, 12025. [Google Scholar] [CrossRef]
  78. Pan, Q.; Gao, M.; Wu, P.; Yan, J.; Li, S. A Deep-Learning-Based Approach for Wheat Yellow Rust Disease Recognition from Unmanned Aerial Vehicle Images. Sensors 2021, 21, 6540. [Google Scholar] [CrossRef]
  79. Wu, B.; Liang, A.; Zhang, H.; Zhu, T.; Zou, Z.; Yang, D.; Tang, W.; Li, J.; Su, J. Application of conventional UAV-based high-throughput object detection to the early diagnosis of pine wilt disease by deep learning. For. Ecol. Manag. 2021, 486, 15. [Google Scholar] [CrossRef]
  80. Selvaraj, M.G.; Vergara, A.; Montenegro, F.; Ruiz, H.A.; Safari, N.; Raymaekers, D.; Ocimati, W.; Ntamwira, J.; Tits, L.; Omondi, A.B.; et al. Detection of banana plants and their major diseases through aerial images and machine learning methods: A case study in DR Congo and Republic of Benin. ISPRS J. Photogramm. Remote Sens. 2020, 169, 110–124. [Google Scholar] [CrossRef]
  81. Amarasingam, N.; Gonzalez, F.; Salgadoe, A.S.A.; Sandino, J.; Powell, K. Detection of White Leaf Disease in Sugarcane Crops Using UAV-Derived RGB Imagery with Existing Deep Learning Models. Remote Sens. 2022, 14, 6137. [Google Scholar] [CrossRef]
  82. Yu, R.; Luo, Y.; Zhou, Q.; Zhang, X.; Wu, D.; Ren, L. Early detection of pine wilt disease using deep learning algorithms and UAV-based multispectral imagery. For. Ecol. Manag. 2021, 497, 119493. [Google Scholar] [CrossRef]
  83. Shi, Y.; Han, L.; Kleerekoper, A.; Chang, S.; Hu, T. Novel CropdocNet Model for Automated Potato Late Blight Disease Detection from Unmanned Aerial Vehicle-Based Hyperspectral Imagery. Remote Sens. 2022, 14, 396. [Google Scholar] [CrossRef]
  84. Kerkech, M.; Hafiane, A.; Canals, R. VddNet: Vine Disease Detection Network Based on Multispectral Images and Depth Map. Remote Sens. 2020, 12, 3305. [Google Scholar] [CrossRef]
  85. Delgado, C.; Benitez, H.; Cruz, M.; Selvaraj, M. Digital Disease Phenotyping. In Proceedings of the IGARSS 2019—2019 IEEE International Geoscience and Remote Sensing Symposium, Yokohama, Japan, 28 July–2 August 2019; pp. 5702–5705. [Google Scholar] [CrossRef]
  86. Khan, F.S.; Khan, S.; Mohd, M.N.H.; Waseem, A.; Khan, M.N.A.; Ali, S.; Ahmed, R. Federated learning-based UAVs for the diagnosis of Plant Diseases. In Proceedings of the International Conference on Engineering and Emerging Technologies (ICEET), Kuala Lumpur, Malaysia, 27–28 October 2022; pp. 1–6. [Google Scholar] [CrossRef]
  87. Oide, A.H.; Nagasaka, Y.; Tanaka, K. Performance of machine learning algorithms for detecting pine wilt disease infection using visible color imagery by UAV remote sensing. Remote Sens. Appl. Soc. Environ. 2022, 28, 100869. [Google Scholar] [CrossRef]
  88. Deng, J.; Zhang, X.; Yang, Z.; Zhou, C.; Wang, R.; Zhang, K.; Lv, X.; Yang, L.; Wang, Z.; Li, P.; et al. Pixel-level regression for UAV hyperspectral images: Deep learning-based quantitative inverse of wheat stripe rust disease index. Comput. Electron. Agric. 2023, 215, 108434. [Google Scholar] [CrossRef]
  89. Casas, E.; Arbelo, M.; Moreno-Ruiz, J.A.; Hernández-Leal, P.A.; Reyes-Carlos, J.A. UAV-Based Disease Detection in Palm Groves of Phoenix canariensis Using Machine Learning and Multispectral Imagery. Remote Sens. 2023, 15, 3584. [Google Scholar] [CrossRef]
  90. Amorim, W.P.; Tetila, E.C.; Pistori, H.; Papa, J.P. Semi-supervised learning with convolutional neural networks for UAV images automatic recognition. Comput. Electron. Agric. 2019, 164, 104932. [Google Scholar] [CrossRef]
  91. Brodbeck, C.; Sikora, E.; Delaney, D.; Pate, G.; Johnson, J. Using Unmanned Aircraft Systems for Early Detection of Soybean Diseases. Precis. Agric. 2017, 8, 802–806. [Google Scholar] [CrossRef]
  92. da Silva, F.L.; Sella, M.L.G.; Francoy, T.M.; Costa, A.H.R. Evaluating classification and feature selection techniques for honeybee subspecies identification using wing images. Comput. Electron. Agric. 2015, 114, 68–77. [Google Scholar] [CrossRef]
  93. Duarte, A.; Borralho, N.; Caetano, M. A Machine Learning Approach to Detect Dead Trees Caused by Longhorned Borer in Eucalyptus Stands Using UAV Imagery. In Proceedings of the 2021 IEEE International Geoscience and Remote Sensing Symposium IGARSS, Brussels, Belgium, 11–16 July 2021; pp. 5818–5821. [Google Scholar] [CrossRef]
  94. Tetila, E.C.; Machado, B.B.; Astolfi, G.; Belete, N.A.S.; Amorim, W.P.; Roel, A.R.; Pistori, H. Detection and classification of soybean pests using deep learning with UAV images. Comput. Electron. Agric. 2020, 179, 105836. [Google Scholar] [CrossRef]
  95. Retallack, A.; Finlayson, G.; Ostendorf, B.; Lewis, M. Using deep learning to detect an indicator arid shrub in ul-tra-high-resolution UAV imagery. Ecol. Indic. 2022, 145, 109698. [Google Scholar] [CrossRef]
  96. Li, X.; Chen, J.; He, Y.; Yang, G.; Li, Z.; Tao, Y.; Li, Y.; Li, Y.; Huang, L.; Feng, X. High-through counting of Chinese cabbage trichomes based on deep learning and trinocular stereo microscope. Comput. Electron. Agric. 2023, 212, 108134. [Google Scholar] [CrossRef]
  97. Lin, Q.; Huang, H.; Wang, J.; Chen, L.; Du, H.; Zhou, G. Early detection of pine shoot beetle attack using the vertical profile of plant traits through UAV-based hyperspectral, thermal, and lidar data fusion. Int. J. Appl. Earth Obs. Geoinf. 2023, 125, 103549. [Google Scholar] [CrossRef]
  98. Clevers, J.G.P.W.; Kooistra, L.; van den Brande, M.M.M. Using Sentinel-2 Data for Retrieving LAI and Leaf and Canopy Chlorophyll Content of a Potato Crop. Remote Sens. 2017, 9, 405. [Google Scholar] [CrossRef]
  99. Towers, P.C.; Strever, A.; Poblete-Echeverría, C. Comparison of Vegetation Indices for Leaf Area Index Estimation in Vertical Shoot Positioned Vine Canopies with and without Grenbiule Hail-Protection Netting. Remote Sens. 2019, 11, 1073. [Google Scholar] [CrossRef]
  100. Vélez, S.; Barajas, E.; Rubio, J.A.; Vacas, R.; Poblete-Echeverría, C. Effect of Missing Vines on Total Leaf Area Determined by NDVI Calculated from Sentinel Satellite Data: Progressive Vine Removal Experiments. Appl. Sci. 2020, 10, 3612. [Google Scholar] [CrossRef]
  101. Guo, H.; Xiao, Y.; Li, M.; Hao, F.; Zhang, X.; Sun, H.; Beurs, K.; Fu, Y.H.; He, Y. Identifying crop phenology using maize height constructed from multi-sources images. Int. J. Appl. Earth Obs. Geoinf. 2022, 115, 103121. [Google Scholar] [CrossRef]
  102. Xu, B.; Fan, J.; Chao, J.; Arsenijevic, N.; Werle, R.; Zhang, Z. Instance segmentation method for weed detection using UAV imagery in soybean fields. Comput. Electron. Agric. 2023, 211, 107994. [Google Scholar] [CrossRef]
  103. Ilniyaz, O.; Du, Q.; Shen, H.; He, W.; Feng, L.; Azadi, H.; Kurban, A.; Chen, X. Leaf area index estimation of pergola-trained vineyards in arid regions using classical and deep learning methods based on UAV-based RGB images. Comput. Electron. Agric. 2023, 207, 107723. [Google Scholar] [CrossRef]
  104. Peng, M.; Han, W.; Li, C.; Yao, X.; Shao, G. Modeling the daytime net primary productivity of maize at the canopy scale based on UAV multispectral imagery and machine learning. J. Clean. Prod. 2022, 367, 133041. [Google Scholar] [CrossRef]
  105. Barbosa, B.D.S.; Ferraz, G.A.E.S.; Costa, L.; Ampatzidis, Y.; Vijayakumar, V.; Santos, L.M.D. UAV-based coffee yield prediction utilizing feature selection and deep learning. Smart Agric. Technol. 2021, 1, 100010. [Google Scholar] [CrossRef]
  106. Alabi, T.R.; Abebe, A.T.; Chigeza, G.; Fowobaje, K.R. Estimation of soybean grain yield from multispectral high-resolution UAV data with machine learning models in West Africa. Remote Sens. Appl. Soc. Environ. 2022, 27, 100782. [Google Scholar] [CrossRef]
  107. Teshome, F.T.; Bayabil, H.K.; Hoogenboom, G.; Schaffer, B.; Singh, A.; Ampatzidis, Y. Unmanned aerial vehicle (UAV) imaging and machine learning applications for plant phenotyping. Comput. Electron. Agric. 2023, 212, 108064. [Google Scholar] [CrossRef]
  108. Ariza-Sentís, M.; Valente, J.; Kooistra, L.; Kramer, H.; Mücher, S. Estimation of spinach (Spinacia oleracea) seed yield with 2D UAV data and deep learning. Smart Agric. Technol. 2022, 3, 100129. [Google Scholar] [CrossRef]
  109. Niu, B.; Feng, Q.; Chen, B.; Ou, C.; Liu, Y.; Yang, J. HSI-TransUNet: A transformer-based semantic segmentation model for crop mapping from UAV hyperspectral images. Comput. Electron. Agric. 2022, 201, 107297. [Google Scholar] [CrossRef]
  110. Pandey, A.; Jain, K. An intelligent system for crop identification and classification from UAV images using conjugated dense convolutional neural network. Comput. Electron. Agric. 2021, 192, 106543. [Google Scholar] [CrossRef]
  111. Vong, N.; Conway, L.S.; Feng, A.; Zhou, J.; Kitchen, N.R.; Sudduth, K.A. Estimating and Mapping Corn Emergence Uniformity using UAV imagery and deep learning. Comput. Electron. Agric. 2022, 198, 107008. [Google Scholar] [CrossRef]
  112. Chen, R.; Zhang, C.; Xu, B.; Zhu, Y.; Zhao, F.; Han, S.; Yang, G.; Yang, H. Predicting individual apple tree yield using UAV multi-source remote sensing data and ensemble learning. Comput. Electron. Agric. 2022, 201, 107275. [Google Scholar] [CrossRef]
  113. Wang, H.; Feng, J.; Yin, H. Improved Method for Apple Fruit Target Detection Based on YOLOv5s. Agriculture 2023, 13, 2167. [Google Scholar] [CrossRef]
  114. Xu, X.; Wang, L.; Liang, X.; Zhou, L.; Chen, Y.; Feng, P.; Yu, H.; Ma, Y. Maize Seedling Leave Counting Based on Semi-Supervised Learning and UAV RGB Images. Sustainability 2023, 15, 9583. [Google Scholar] [CrossRef]
  115. Feng, Y.; Chen, W.; Ma, Y.; Zhang, Z.; Gao, P.; Lv, X. Cotton Seedling Detection and Counting Based on UAV Multispectral Images and Deep Learning Methods. Remote Sens. 2023, 15, 2680. [Google Scholar] [CrossRef]
  116. Tunca, E.; Köksal, E.S.; Özturk, E.; Akayc, H.; Taner, S.Ç. Accurate leaf area index estimation in sorghum using high-resolution UAV data and machine learning models. Phys. Chem. Earth Pt A/B/C 2024, 133, 103537. [Google Scholar] [CrossRef]
  117. Ma, J.; Liu, B.; Ji, L.; Zhu, Z.; Wu, Y.; Jiao, W. Field-scale yield prediction of winter wheat under different irrigation regimes based on the dynamic fusion of multimodal UAV imagery. Int. J. Appl. Earth Obs. Geoinf. 2023, 118, 103292. [Google Scholar] [CrossRef]
  118. Liu, S.; Jin, X.; Bai, Y.; Wu, W.; Cui, N.; Cheng, M.; Liu, Y.; Meng, L.; Jia, X.; Nie, C.; et al. UAV multispectral images for accurate estimation of the maize LAI considering the effect of soil background. Int. J. Appl. Earth Obs. Geoinf. 2023, 121, 103383. [Google Scholar] [CrossRef]
  119. Demir, S.; Dedeoğlu, M.; Başayiğit, L. Yield prediction models of organic oil rose farming with agricultural unmanned aerial vehicles (UAVs) images and machine learning algorithms. Remote Sens. Appl. Soc. Environ. 2023, 33, 101131. [Google Scholar] [CrossRef]
  120. Jamali, M.; Bakhshandeh, E.; Yeganeh, B.; Özdoğan, M. Development of machine learning models for estimating wheat bio-physical variables using satellite-based vegetation indices. Adv. Space Res. 2024, 73, 498–513. [Google Scholar] [CrossRef]
  121. Qu, H.; Zheng, C.; Ji, H.; Barai, K.; Zhang, Y. A fast and efficient approach to estimate wild blueberry yield using machine learning with drone photography: Flight altitude, sampling method and model effects. Comput. Electron. Agric. 2024, 216, 108543. [Google Scholar] [CrossRef]
  122. Sivakumar, A.N.V.; Li, J.; Scott, S.; Psota, E.; Jhala, A.J.; Luck, J.D.; Shi, Y. Comparison of object detection and patch-based classification deep learning models on mid- to late-season weed detection in UAV imagery. Remote Sens. 2020, 12, 1591. [Google Scholar] [CrossRef]
  123. Ghazali, W.N.W.B.; Zulkifli, C.N.B.; Ponrahono, Z. The Effect of Traffic Congestion on Quality of Community Life. ICRP 2019, 2, 759–766. [Google Scholar] [CrossRef]
  124. Jiber, M.; Mbarek, A.; Yahyaouy, A.; Sabri, M.A.; Boumhidi, J. Road Traffic Prediction Model Using Extreme Learning Machine: The Case Study of Tangier, Morocco. Information 2020, 11, 542. [Google Scholar] [CrossRef]
  125. Patro, K.K.; Allam, J.P.; Hammad, M.; Tadeusiewicz, R.; Pławiak, P. SCovNet: A skip connection-based feature union deep learning technique with statistical approach analysis for the detection of COVID-19. Biocybern. Biomed. Eng. 2023, 43, 352–368. [Google Scholar] [CrossRef] [PubMed]
  126. Pedada, K.R.; Rao, B.; Patro, K.K.; Allam, J.P.; Jamjoom, M.M.; Samee, N.A. A novel approach for brain tumour detection using deep learning based technique. Biomed. Signal Process. Control 2023, 82, 104549. [Google Scholar] [CrossRef]
  127. Shashirangana, J.; Padmasiri, H.; Meedeniya, D.; Perera, C.; Nayak, S.R.; Nayak, J.; Vimal, S.; Kadry, S. License plate recognition using neural architecture search for edge devices. Int. J. Intell. Syst. 2021, 37, 10211–10248. [Google Scholar] [CrossRef]
  128. Padmasiri, H.; Shashirangana, J.; Meedeniya, D.; Rana, O.; Perera, C. Automated License Plate Recognition for Resource-Constrained Environments. Sensors 2022, 22, 1434. [Google Scholar] [CrossRef] [PubMed]
  129. Mushtaq, M.; Akram, M.U.; Alghamdi, N.S.; Fatima, J.; Masood, R.F. Localization and Edge-Based Segmentation of Lumbar Spine Vertebrae to Identify the Deformities Using Deep Learning Models. Sensors 2022, 22, 1547. [Google Scholar] [CrossRef]
  130. Khatab, E.; Onsy, A.; Abouelfarag, A. Evaluation of 3D Vulnerable Objects’ Detection Using a Multi-Sensors System for Autonomous Vehicles. Sensors 2022, 22, 1663. [Google Scholar] [CrossRef]
  131. Fan, X.; Sun, T.; Chai, X.; Zhou, J. YOLO-WDNet: A lightweight and accurate model for weeds detection in cotton field. Comput. Electron. Agric. 2024, 225, 1093617. [Google Scholar] [CrossRef]
  132. de Oliveira, H.F.E.; de Castro, L.E.V.; Sousa, C.M.; Alves Júnior, L.R.; Mesquita, M.; Silva, J.A.O.S.; Faria, L.C.; da Silva, M.V.; Giongo, P.R.; de Oliveira Júnior, J.F.; et al. Geotechnologies in Biophysical Analysis through the Applicability of the UAV and Sentinel-2A/MSI in Irrigated Area of Common Beans: Accuracy and Spatial Dynamics. Remote Sens. 2024, 16, 1254. [Google Scholar] [CrossRef]
  133. de Melo, D.A.; Silva, P.C.; da Costa, A.R.; Delmond, J.G.; Ferreira, A.F.A.; de Souza, J.A.; de Oliveira-Júnior, J.F.; da Silva, J.L.B.; da Rosa Ferraz Jardim, A.M.; Giongo, P.R.; et al. Development and Automation of a Photovoltaic-Powered Soil Moisture Sensor for Water Management. Hydrology 2023, 10, 166. [Google Scholar] [CrossRef]
  134. Valverde-l, F.; Prados, J. Prevalence of Sarcopenia Determined by Computed Tomography in Pancreatic Cancer: A Systematic Review and Meta-Analysis of Observational Studies. Cancers 2024, 16, 3356. [Google Scholar] [CrossRef]
  135. Barsouk, A.; Elghawy, O.; Yang, A.; Sussman, J.H.; Mamtani, R.; Mei, L. Meta-Analysis of Age, Sex, and Race Disparities in the Era of Contemporary Urothelial Carcinoma Treatment. Cancers 2024, 16, 3338. [Google Scholar] [CrossRef] [PubMed]
  136. Pesch, M.H.; Mowers, J.; Huynh, A.; Schleiss, M.R. Intrauterine Fetal Demise, Spontaneous Abortion and Congenital Cytomegalovirus: A Systematic Review of the Incidence and Histopathologic Features. Viruses 2024, 16, 1552. [Google Scholar] [CrossRef] [PubMed]
  137. Benster, L.L.; Stapper, N.; Rodriguez, K.; Daniels, H.; Villodas, M.; Weissman, C.R.; Daskalakis, Z.J.; Appelbaum, L.G. Brain Sciences Developmental Predictors of Suicidality in Schizophrenia: A Systematic Review. Brain Sci. 2024, 14, 995. [Google Scholar] [CrossRef] [PubMed]
  138. Simione, L.; Frolli, A.; Scia, F.; Chiarella, S.G. Mindfulness-Based Interventions for People with Autism Spectrum Disorder: A Systematic Literature Review. Brain Sci. 2024, 14, 1001. [Google Scholar] [CrossRef]
Figure 1. Flowchart of the systematic review selection steps according to the PRISMA methodology, according to the PRISMA 2020 statement from Page et al. [25].
Figure 2. Flowchart of the systematic literature review data extraction and sequence highlights, adapted from Siqueira et al. [23]. Data extraction steps: (a) list of articles divided by the type of agronomic problem that each proposed to solve; (b) list of articles, divided by type of agronomic problem, that used sensors to acquire the dataset; (c) list of articles, divided by type of agronomic problem, that used image improvement techniques in the dataset; (d) number of articles that used evaluation metrics; (e) list of the main machine learning models used by each article in this study.
Figure 3. Example of data output after training the YOLOv7 model for weed segmentation in commercial crops.
Figure 4. Number of articles and timeline of publications per type of agronomic problems.
Figure 5. Number of articles published and scientific platforms per type of agronomic problems.
Figure 6. Number of articles per country included in this SLR.
Table 1. Articles selected by year of publication and main bibliographic databases used.

Publication Year | Publication Number (n) | Bibliographic Database
2018 | 2 | IEEE 2 (n = 2)
2019 | 3 | IEEE (n = 1); MDPI 3 (n = 1); Web of Science 4 (n = 1)
2020 | 5 | IEEE (n = 1); Science Direct (n = 2); MDPI (n = 1); Web of Science (n = 1)
2021 | 13 | IEEE (n = 2); Science Direct (n = 8); MDPI (n = 2); Web of Science (n = 1)
2022 | 20 | ACM 1 (n = 1); IEEE (n = 2); Science Direct (n = 12); MDPI (n = 5)
2023 | 22 | IEEE (n = 2); Science Direct (n = 15); MDPI (n = 5)
2024 | 5 | Science Direct (n = 5)
Total | 70 |

1 Association for Computing Machinery (ACM), 2 Institute of Electrical and Electronics Engineers (IEEE), 3 Multidisciplinary Digital Publishing Institute (MDPI), 4 Web of Science.
Table 2. Articles included in SLR—Weed.

Id | Ref. | Crop | LT 4 | MTD/TNQ 3 | Sensor Type | Metrics | Precision
1 | Fraccaro et al. [38] | Wheat | SP 1 | UNET-ResNet | RGB 9 | r 5, Acc | 90.0%
2 | Bah et al. [39] | - | SP | CNN | RGB | Acc | 93.58%
3 | Huang et al. [40] | Rice | SP | Back Propagation, RF 10, AlexNet, VGGNet, GoogleNet, and ResNet | RGB | Acc | 80.2%
4 | Beeharry and Bassoo [41] | Soybean | SP | ANN and AlexNet | RGB | Acc | 99.81%
5 | Reedha et al. [42] | Spinach | SP | EfficientNet and ResNet | RGB | r | 98.63%
6 | Bah et al. [49] | Spinach/Bean | NSP 2 | CNN | RGB | Acc | 1.50% and 6%
7 | Genze et al. [43] | Sorghum | SP | UNET-ResNet | RGB | Hold-out test | 89%
8 | Gallo et al. [44] | Sugar beet | SP | YOLOv7 | RGB | mAP | 74%
9 | Ajayi et al. [45] | Spinach/Sugarcane | SP | CNN and YOLOv5 | RGB | Acc | 78%, recall of 65%
10 | Pei et al. [46] | Corn | SP | YOLOv4 | RGB | mAP | 87%
11 | Su et al. [47] | Wheat | SP | RF | Multispectral | Acc | 94%
12 | Barrero and Perdomo [48] | Rice | SP | NN 8 | RGB and Multispectral | M/MGT(%) and MP(%) 7 | 80 to 108%
13 | Naveed et al. [50] | - | SP | Neural modulation network (PC/BC-DIM) | Multispectral | Acc | 94%
14 | Chegini et al. [51] | Pasture | SP | Mask RCNN | Multispectral | mAP and Acc | 93% and 95%
15 | Xu et al. [52] | Soybean | SP | ResNet101_v and DSASPP | RGB | Acc | 91%
16 | Nagothu et al. [53] | Cotton | SP | SSD Mobilenet | RGB and Multispectral | Acc | 95%
17 | Nasiri et al. [54] | Sugar beet | SP | U-Net and CNN | RGB | IoU 6 | 96.06% and 84.23%, respectively
18 | Ajayi and Ashi [55] | Sugarcane, banana, spinach, and pepper | SP | RCNN | RGB | Acc | 97%, recall of 99%, and F1 score of 99% at 242,000 epochs
19 | Rahman et al. [56] | Cotton | SP | YOLOv5, RetinaNet, EfficientDet, Fast RCNN, and Faster RCNN | RGB | Acc 11 and mAP 12 | 79.98% and mAP@0.5
20 | Diao et al. [57] | Corn | SP | YOLOv8s | RGB | mAP and F1 13 | 86.4% and 86%
21 | Mekhalfa et al. [58] | Soybean | SP | AlexNet, VGG16, GoogLeNet, ResNet50, SqueezeNet, and MobileNet | RGB | Acc | 98%

1 Supervised (SP), 2 unsupervised (NSP), 3 method/technique (MTD/TNQ), 4 learning type (LT), 5 correlation (r), 6 intersection over union (IoU), 7 M/MGT(%) and MP(%), 8 neural network (NN), 9 red, green, and blue (RGB), 10 random forest (RF), 11 accuracy (Acc), 12 mean average precision (mAP), 13 F-Score (F1).
Table 3. Articles included in SLR—Nutritional Deficiency.
| Id | Ref. | Crop | LT | MTD/TNQ | Sensor Type | Metrics | Precision |
|---|---|---|---|---|---|---|---|
| 1 | Barzin et al. [61] | Corn | SP | SVR 5, RF, GBM 6, XGBoost 4 | Multispectral | r and MAPE 2 | 75% and 4.4%, respectively |
| 2 | Sathyavani et al. [62] | Tomato/Pepper | SP | CNN 3 | RGB | Acc | 79.11% |
| 3 | Sabzi et al. [64] | Cucumber | SP | CNN | Hyperspectral | R 1 | 96.50% |
1 Regression (R), 2 Mean absolute percentage error (MAPE), 3 convolutional neural network (CNN), 4 extreme gradient boosting (XGBoost), 5 support vector regression (SVR), 6 stochastic gradient boosting (GBM).
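The nutritional-deficiency studies pair multispectral features with boosted regressors such as GBM and XGBoost and report error as MAPE. A minimal, hedged sketch of that workflow follows, using scikit-learn's gradient boosting on synthetic vegetation-index features; the feature names (NDVI, NDRE, GNDVI) and the leaf-nitrogen proxy target are assumptions for illustration, not data from the cited studies.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_percentage_error

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(200, 3))  # per-plot NDVI, NDRE, GNDVI (synthetic)
y = 1.0 + 2.0 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 0.05, 200)  # leaf-N proxy

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = GradientBoostingRegressor(random_state=0).fit(X_tr, y_tr)
mape = mean_absolute_percentage_error(y_te, model.predict(X_te))
print(f"MAPE = {100 * mape:.1f}%")
```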
Table 4. Articles included in SLR—Water Stress.
| Id | Ref. | Crop | LT | MTD/TNQ | Sensor Type | Metrics | Precision |
|---|---|---|---|---|---|---|---|
| 1 | Bhandari et al. [68] | Lettuce | SP | CNN | RGB and Multispectral | R | 62.30% |
| 2 | Sankararao et al. [69] | Millet | NSP | SVM | Hyperspectral | Acc | 81% |
| 3 | Sankararao et al. [70] | Chickpea | SP | CNN | Hyperspectral | Acc | 95.44% |
| 4 | Tunca et al. [71] | - | SP | RF, SVM 1, KNN 2 and XGBoost | RGB | r | r = 89% to 96% (MicaSense Altum) and 87% to 94% (FDP-R) |
| 5 | Bertalan et al. [72] | Corn | SP | RF, ENR 3, GLM 4 and RLM 5 | Thermal and Multispectral | r and NRMSE | r = 97% vs. 71% and NRMSE = 10% vs. 25%, respectively; RF (r = 97%) and ENR (r = 88%) |
| 6 | Niu et al. [73] | Corn | SP | RF, ANN 6 and MLR 7 | RGB and Multispectral | r and NRMSE | FVC in corn (r = 89.2% and RMSE = 6.6%) |
| 7 | Das et al. [74] | Wheat | SP | CRT 8 | Thermal | r, RMSE 9 and NRMSE 10 | r = 86%, RMSE = 41.3 g/m2; r = 75%, RMSE = 47.7 g/m2; grain yield: r = 78%, RMSE = 16.7 g/m2; r = 69%, RMSE = 23.2 g/m2 |
| 8 | Wang et al. [75] | Wheat | SP | PLS 11, SVM and GBDT 12 | Multispectral and Thermal | r, RMSE and NRMSE | r = 88%, RMSE = 8%, NRMSE = 14.7%; filling phase: r = 90%, RMSE = 5% and NRMSE = 15.9% |

1 Support vector machine (SVM), 2 K-nearest neighbors (KNN), 3 elastic net (ENR), 4 general linear model (GLM), 5 robust linear model (RLM), 6 artificial neural network (ANN), 7 multivariate linear regression (MLR), 8 classification and regression tree (CRT), 9 root mean square error (RMSE), 10 mean-normalized root mean square error (NRMSE), 11 partial least squares (PLS), 12 gradient boosting decision tree (GBDT).
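The water-stress studies lean on agreement statistics (r, RMSE, NRMSE) rather than classification accuracy. A short sketch of these computations is below; note that NRMSE may be normalized by the observed range or by the mean, and the range convention used here is an assumption, since the cited papers vary. The observed/predicted arrays are toy values.

```python
import numpy as np

def agreement_metrics(obs: np.ndarray, pred: np.ndarray) -> dict:
    """Pearson r, RMSE, and range-normalized RMSE, as reported in Table 4."""
    r = np.corrcoef(obs, pred)[0, 1]
    rmse = np.sqrt(np.mean((obs - pred) ** 2))
    nrmse = rmse / (obs.max() - obs.min())  # one common normalization choice
    return {"r": r, "RMSE": rmse, "NRMSE": nrmse}

# Toy crop water stress index values: observed vs. model output
obs  = np.array([0.20, 0.35, 0.50, 0.65, 0.80])
pred = np.array([0.25, 0.33, 0.55, 0.60, 0.78])
print(agreement_metrics(obs, pred))
```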
Table 5. Articles included in SLR—Plant Disease.
| Id | Ref. | Crop | LT | MTD/TNQ | Sensor Type | Metrics | Precision |
|---|---|---|---|---|---|---|---|
| 1 | Pan et al. [78] | Wheat | SP | PSPNet 2 | RGB | Acc | 98% |
| 2 | Wu et al. [79] | Pine | SP | Faster R-CNN 6 and YOLO 3 | RGB | Acc | 78% |
| 3 | Selvaraj et al. [80] | Banana | SP | RF and RetinaNet | RGB | Acc | Banana bunchy top disease (99.40%), Xanthomonas wilt of banana (92.80%), healthy banana cluster (93.30%), and individual banana plants (90.80%) |
| 4 | Amarasingam et al. [81] | Sugarcane | SP | YOLOv5, YOLOR 4, DETR 5 and Faster R-CNN | RGB | Acc | 95% |
| 5 | Yu et al. [82] | Pine | SP | Faster R-CNN and YOLOv4 | Multispectral | Acc | 66.70% (Faster R-CNN) and 63.55% (YOLOv4) |
| 6 | Shi et al. [83] | Potato | SP | CropdocNet | Hyperspectral | Acc | 98% |
| 7 | Kerkech et al. [84] | Grape | SP | VddNet 1 | Multispectral | Acc | 92% vine-level and 87% leaf-level detection |
| 8 | Shankar et al. [9] | - | SP | ANN | RGB | (max(R,G,B) + min(R,G,B))/2 | >25% in initial stage |
| 9 | Delgado et al. [85] | Rice | NSP | SVM and RF | Multispectral | r | 74% (SVM) versus 71% (RF) |
| 10 | Khan et al. [86] | - | SP | EfficientNet | RGB | Acc | 99.55% |
| 11 | Oide et al. [87] | Pine | SP | SVM, RF, ANN | RGB | Acc | 99.50% |
| 12 | Deng et al. [88] | Wheat | SP | UNET, HRNet_W48 | Hyperspectral | r, MSE | r = 87.5% and MSE = 1.29% |
| 13 | Casas et al. [89] | Palm tree | SP | SVM, ANN, and RF | Multispectral | Acc | 96% |
1 Vine disease detection network (VddNet), 2 pyramid scene parsing network (PSPNet), 3 you only look once (YOLO), 4 you only look once representation (YOLOR), 5 detection transformer (DETR), 6 region-based convolutional neural network (R-CNN).
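The metric column for Shankar et al. [9] is the HSL lightness statistic, (max(R,G,B) + min(R,G,B))/2, evaluated per pixel. A small sketch of that computation on a synthetic tile follows; how the resulting values are thresholded downstream is only implied by the table, so no decision rule is shown.

```python
import numpy as np

def lightness(rgb: np.ndarray) -> np.ndarray:
    """Per-pixel (max + min) / 2 over the three colour channels, in [0, 1]."""
    rgb = rgb.astype(np.float32) / 255.0
    return (rgb.max(axis=-1) + rgb.min(axis=-1)) / 2.0

# Synthetic 64x64 RGB tile standing in for a UAV image crop
tile = np.random.default_rng(1).integers(0, 256, size=(64, 64, 3), dtype=np.uint8)
L = lightness(tile)
print(f"mean lightness = {L.mean():.3f}")  # candidate feature for early-stage disease flags
```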
Table 6. Articles included in SLR—Agricultural Pest.
| Id | Ref. | Crop | LT | MTD/TNQ | Sensor Type | Metrics | Precision |
|---|---|---|---|---|---|---|---|
| 1 | Duarte et al. [93] | Eucalyptus | SP | SVM and RF | Multispectral | Acc | RF = 98.35%, SVM = 97.7% |
| 2 | Tetila et al. [94] | Soybean | SP | Inception-v3, ResNet-50, VGG-16 1, VGG-19, Xception | RGB | Acc | 93.82% |
| 3 | Retallack et al. [95] | Pasture | SP | CNN | RGB | Acc | 75% |
| 4 | Li et al. [96] | Cabbage | SP | YOLOv8 | RGB | AP50 | 94.40% |
| 5 | Lin et al. [97] | Pine | SP | RF | Hyperspectral and thermal | R2 and RMSE | R2 = 95% and RMSE = 1.15 µg/cm |
1 Very Deep Convolutional Network (VGG).
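A recurring pattern in Table 6 is a classical classifier over multispectral band values, as in Duarte et al. [93]. The sketch below reproduces that pattern on synthetic healthy/infested reflectance samples; the band means, noise level, and class separability are invented for illustration and do not come from any cited dataset.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(42)
# Synthetic reflectance in four bands: blue, green, red-edge, NIR
healthy  = rng.normal([0.05, 0.08, 0.40, 0.45], 0.03, size=(300, 4))
infested = rng.normal([0.07, 0.10, 0.30, 0.30], 0.03, size=(300, 4))  # depressed NIR
X = np.vstack([healthy, infested])
y = np.array([0] * 300 + [1] * 300)  # 0 = healthy, 1 = infested

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print(f"Acc = {100 * accuracy_score(y_te, clf.predict(X_te)):.1f}%")
```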
Table 7. Articles included in SLR—Yield Estimation.
| Id | Ref. | Crop | LT | MTD/TNQ | Sensor Type | Metrics | Precision |
|---|---|---|---|---|---|---|---|
| 1 | Guo et al. [101] | Corn | SP | SLM 1 and HANTS | Multispectral and RGB | r | 93% |
| 2 | Xu et al. [102] | Cotton | SP | U-Net, Back Propagation | Multispectral | r | 85.3% |
| 3 | Ilniyaz et al. [103] | Grape | SP | CNN, ResNet | Multispectral and RGB | r and RMSE | r = 89.8% and RMSE = 43.4% |
| 4 | Peng et al. [104] | Corn | SP | RF, SVR, GBR 3 | Multispectral | r | 89.9% |
| 5 | Barbosa et al. [105] | Coffee | SP | SVM, RF, GBR, PLSR 2 | RGB | MAPE | 31.75% |
| 6 | Alabi et al. [106] | Soybean | SP | Cubist, XGBoost, GBM, SVM, and RF | Multispectral | r | 89% |
| 7 | Teshome et al. [107] | Corn | SP | SVM, RF, KNN, GLMNET | Multispectral | r, d 5 and MAPE | d = 99%; r = 99%; MAPE = 5 cm |
| 8 | Ariza-Sentís et al. [108] | Spinach | SP | Mask R-CNN | RGB | r | 80% |
| 9 | Niu et al. [109] | - | SP | TransUNet | Hyperspectral | Acc | 86.05% |
| 10 | Pandey and Jain [110] | - | SP | CD-CNN 4 | RGB | Acc | 96.20% |
| 11 | Vong et al. [111] | Corn | SP | CNN ResNet18 | RGB | Acc | 97%, 73% and 95% |
| 12 | Chen et al. [112] | Apple | SP | KNN, SVR | Multispectral | Acc, r | Acc = 75.80% and r = 81.3% |
| 13 | Wang et al. [113] | Apple | SP | YOLOv5s | RGB | Acc, mAP, recall | Acc = 95.4%, mAP = 86.1% and recall = 91.8% |
| 14 | Xu et al. [114] | Corn | SSP | SOLOv2 and YOLOv5x | RGB | mAP | 93.6%, 89.6% and 57.4% |
| 15 | Feng et al. [115] | Cotton | SP | YOLOv7, YOLOv5 and CenterNet | Multispectral | r, RMSE, and RRMSE 13 | 0.94, 3.83 and 2.72%, respectively |
| 16 | Tunca et al. [116] | Sorghum | SP | K-NN, ETR 6, XGBoost, RF and SVR | Multispectral and thermal | r, RMSE, and MAE | 97%, 46% and 19.7%, respectively |
| 17 | Ma et al. [117] | Wheat | SP | MultimodalNet | Multispectral and thermal | r and MAE | r = 74.11% and MAE = 6.05% |
| 18 | Liu et al. [118] | Corn | SP | RF and GBDT | Multispectral | r and RRMSE | r = 94% and rRMSE = 9.35% at leaf stage V14 |
| 19 | Demir et al. [119] | Rose | SP | MLR 7, MARS 8, CHAID 9, ExCHAID 10, CART 11, RF and RNA | RGB | r | r = 90.7% (MARS), r = 88.8% (ExCHAID), r = 93.1% (CART) and r = 90.9% (RF) |
| 20 | Jamali et al. [120] | Wheat | SP | DNN 12, ANN, SVM and MLR | Multispectral | r, RMSE, and MAE | Plant height: r = 82%, 66%, 71% and 53%; RMSE = 9.61 cm, 17.54 cm, 16.26 cm, and 17.70 cm; MAE = 7.13 cm, 14.91 cm, 14.37 cm, and 14.83 cm |
| 21 | Qu et al. [121] | Blueberry | SP | RF and XGBoost | RGB | r, RMSE, and MAE | r = 89%, RMSE = 542 g/m2 and MAE = 380 g/m2 |
1 sentence-level language modeling (SLM), 2 partial least square regression (PLSR), 3 gradients boosting regression (GBR), 4 cross-domain convolutional neural network (CD-CNN), 5 height (d), 6 extra trees regressor (ETR), 7 multiple linear regression (MLR), 8 multivariate adaptive regression splines (MARS), 9 chi-square automatic interaction detector (CHAID), 10 exhaustive chi-square automatic interaction detector (ExCHAID), 11 classification and regression tree (CART), 12 deep neural network (DNN), 13 relative root mean square error (RRMSE).
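The yield-estimation studies mix r, RMSE, MAE, and relative RMSE in their reporting. The sketch below computes all four on toy plot-level yields; rRMSE is normalized by the observed mean here, which is one common convention but an assumption with respect to the individual cited papers.

```python
import numpy as np

def yield_metrics(obs: np.ndarray, pred: np.ndarray) -> dict:
    """r, RMSE, MAE, and mean-relative RMSE (rRMSE) as used in Table 7."""
    err = pred - obs
    rmse = np.sqrt(np.mean(err ** 2))
    return {
        "r": np.corrcoef(obs, pred)[0, 1],
        "RMSE": rmse,
        "MAE": np.mean(np.abs(err)),
        "rRMSE_%": 100 * rmse / obs.mean(),  # normalized by the observed mean
    }

# Toy plot-level grain yields (t/ha): observed vs. UAV-model estimates
obs  = np.array([6.2, 7.1, 5.8, 8.0, 6.9, 7.4])
pred = np.array([6.0, 7.3, 6.1, 7.6, 7.0, 7.2])
print(yield_metrics(obs, pred))
```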
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
