Review

Deep Neural Networks to Detect Weeds from Crops in Agricultural Environments in Real-Time: A Review

by Ildar Rakhmatulin 1, Andreas Kamilaris 2,3 and Christian Andreasen 4,*
1 Department of Power Plant Networks and Systems, South Ural State University, 454080 Chelyabinsk City, Russia
2 CYENS Center of Excellence, Dimarchias Square 23, Nicosia 1016, Cyprus
3 Department of Computer Science, University of Twente, 7522 NB Enschede, The Netherlands
4 Department of Plant and Environmental Sciences, University of Copenhagen, Højbakkegaard Allé 13, DK 2630 Taastrup, Denmark
* Author to whom correspondence should be addressed.
Remote Sens. 2021, 13(21), 4486; https://doi.org/10.3390/rs13214486
Submission received: 12 October 2021 / Revised: 2 November 2021 / Accepted: 4 November 2021 / Published: 8 November 2021

Abstract
Automation, including machine learning technologies, is becoming increasingly important in agriculture to increase productivity. Machine vision is one of the most popular applications of machine learning and has been widely used where advanced automation and control are required. The trend has shifted from classical image processing and machine learning techniques to modern artificial intelligence (AI) and deep learning (DL) methods. Based on large training datasets and pre-trained models, DL-based methods have proven to be more accurate than traditional techniques. Machine vision has wide applications in agriculture, including the detection of weeds and pests in crops. Variation in lighting conditions, failures of transfer learning, and object occlusion constitute key challenges in this domain. Recently, DL has gained much attention due to its advantages in object detection, classification, and feature extraction. DL algorithms can automatically extract information from large amounts of data used to model complex problems and are, therefore, suitable for detecting and classifying weeds and crops. We present a systematic review of AI-based systems to detect weeds, emphasizing recent trends in DL. Various DL methods are discussed to clarify their overall potential, usefulness, and performance. This study indicates that several limitations obstruct the widespread adoption of AI/DL in commercial applications. Recommendations for overcoming these challenges are summarized.


1. Introduction

Weeds constitute one of the most devastating constraints on crop production, and efficient weed control is a prerequisite for increasing crop yield and food production for a growing world population [1]. However, weed control may negatively affect the environment [2]. The application of herbicides may pollute the environment because, in most cases, only a tiny proportion of the applied chemicals hits the targets, while most of the herbicide hits the ground and a part of it may drift away [2,3]. Mechanical weed control may result in erosion and harm beneficial organisms such as earthworms in the soil and spiders on the soil surface [4,5]. Other weed control methods have other disadvantages and often affect the environment negatively. Sustainable weed control methods need to be designed to affect only the weed plants and interfere as little as possible with the surroundings. Weed control could be improved and made more sustainable if weeds were identified and located in real-time before applying any control method.

1.1. Motivation

Rehman et al. [6] considered how machine vision could automate weed detection using field or airborne cameras. Machine vision has also been used to detect and kill weeds with powerful lasers, with a cascade classifier trained on Haar-like features to detect weeds in images.
However, Haar features strongly depend on the orientation of the monitored object, especially on the angle of rotation [7,8,9]. The Histogram of Oriented Gradients (HOG) has similar problems. HOG features are descriptors of keypoints used in computer vision and image processing for object recognition. Abouzahir et al. [10] used HOG as an auxiliary tool to generate visual words and a backpropagation neural network for weed detection and plant classification. Che’Ya et al. [11] used a hyperspectral reflectance method to classify the weeds Amaranthus macrocarpus, Urochloa panicoides, and Malva sp. The images were from a real field, but the authors did not consider dense scenes, which greatly simplifies the weed identification process.
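To illustrate how HOG descriptors are computed in practice, the following minimal sketch uses OpenCV; the window, block, and cell sizes are illustrative assumptions and the random patch stands in for a real grayscale plant image, none of which are taken from the cited studies.

```python
import cv2
import numpy as np

# Stand-in for a grayscale plant patch (rows x cols = 128 x 64).
patch = (np.random.rand(128, 64) * 255).astype("uint8")

# HOG descriptor: 64x128 window, 16x16 blocks, 8x8 stride and cells, 9 orientation bins.
hog = cv2.HOGDescriptor((64, 128), (16, 16), (8, 8), (8, 8), 9)

features = hog.compute(patch)  # 1D gradient-orientation feature vector (~3780 values here)
print(features.shape)
```

Such a feature vector is what a classical classifier (e.g., an SVM or cascade classifier) would consume, which is why its sensitivity to object rotation propagates directly into the detector.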
For classification tasks, Bayesian classification, discriminant analysis, and the nearest neighbor method have been widely used. Rainville et al. [12] used Bayesian classification and unsupervised learning for the isolation of weeds in row crops. Finally, the authors correctly classified an average of 94% of corn and soybean plants and 85% of weeds (multiple species). Islam et al. [13] considered several machine learning algorithms, random forest (RF), support vector machine (SVM), and k-nearest neighbors (KNN), to detect weeds using UAV images. The authors concluded that RF performed better than other classifiers. For the analyses, the authors only used images from one field. However, under other conditions, another classifier may be preferable. Hung et al. [14] presented an overview of machine learning methods in weed classification tasks.
Weed detection is an applied task: it is not conducted to gather statistics but to enable the subsequent control of weeds. Therefore, it is important to determine the position of the weed and, at the same time, identify it quickly, because the cameras are installed on moving platforms such as tractors, autonomous vehicles, and drones. The methods described above cope well with classification under near-laboratory conditions, where the images meet certain criteria and requirements regarding background, light, angle, etc., but in field conditions, these factors vary all the time. Accurately determining the position of a weed under natural conditions remains difficult because of the high variability in weed size and color, occlusion, the high density of weed and crop plants, and the overlapping of plant parts. For this reason, this study focuses only on papers that have successfully used machine vision for weed detection in images with high accuracy, emphasizing deep learning (DL) techniques.

1.2. Research Methodology and Criteria for Comparison

The fields of probabilistic modelling, AI, and neural networks are broad and cover many application areas. In this section, we describe the process of selecting and analyzing the articles included in this review. The search for articles was performed mainly via keywords, carried out directly for the following publishers through their websites and search features/menus:
  • Elsevier;
  • Taylor & Francis;
  • Springer;
  • Wiley;
  • IEEE;
  • Informa;
  • MDPI;
  • Hindawi.
The keyword-based search was also performed on academic search engines.
We focused on papers published over the last ten years. Keywords were used in various combinations (e.g., “weed corn detection”, “weed corn classification”, and “object detection in agriculture”), including different names of crops (e.g., tomato, apple, and corn) as well as modern, specific, widely used DL architectures (e.g., EfficientNet, EfficientDet, SpineNet, CenterNet, ThunderNet, CSPNet, DenseNet, SAUNet, DetNASNet, SM-NAS, AmoebaNet, Graph Neural Network, and Growing Neural Cellular Automata). The following criteria were considered when analyzing the retrieved papers:
  • The DL model/architecture used because this directly affects the requirements for the hardware part, as shown by Pourghassemi et al. [15]. In particular, the possibility of using neural networks on families of single-board computers was included in the review process.
  • The number of images (i.e., dataset size) used for training the neural network.
  • The types of platforms used to collect the images, with an option of whether these were mobile.
  • The time of day.
  • The training and inference time, overall speed of the DL model, and memory requirements. We examined whether the authors used low-cost tools for training models, such as Google Colab (https://colab.research.google.com/, accessed on 5 November 2021). Expensive GPUs or GPU farms significantly complicate the process of verifying the results presented by the authors.
  • The availability of open-source code for the DL model and dataset used for training/testing. The type of license was not considered.
  • The dataset type and quality.
  • The camera used and the distance from the points of interest (i.e., weeds), and the number/volume of weeds captured on images. The dimensions of the camera, and whether it was installed on a vehicle, were also considered.
We did not include research where experimental studies were not reported (e.g., Kulkarni et al. [16]) or papers where only artificial conditions were presented (e.g., [17]).
For completeness, we also mention some commercial efforts to address the weed control problem. The company Tertill (https://tertill.com/, accessed on 5 November 2021) sells a small compact robot for the elimination of small weeds. Ecorobotix (http://www.ecorobotix.com, accessed on 5 November 2021) offers a robot that uses a DL neural network to detect weeds and destroy them using herbicides. These companies provide solutions that have been tested in operational environments (TRL7+), and they deliver an entirely commercial product, which includes a user manual and product specifications but not the underlying algorithms and methods. Therefore, such products are difficult to assess because of a lack of technical information. These commercial efforts were not included in the review that follows.

1.3. Contribution and Previous Reviews

This study contributes to the application of machine vision technologies for weed detection by identifying and studying relevant works that have used neural networks (especially DL-based approaches) in real-world agricultural datasets. Several key points that determine the success of weed classification and weed control technologies, including the distribution and proximity of weeds around the crop plants, various types of occlusion, different illumination/lighting conditions, and the color/texture similarities of weeds and plants, are considered during the review.
Several similar reviews exist in this field. Most of them provide only general information, while others focus mainly on analyzing the neural network methods used rather than their application, performance, and effectiveness. For example, Gikunda et al. [18] and Jouandeau et al. [19] analyzed modern neural networks that were used for agricultural tasks. However, their review of neural networks was only partly tied to agriculture. Gikunda [20] provided an overview of DL in crop production. Kamilaris [21] highlighted the problems of detecting small objects in the scene, but possible solutions to this problem were not provided or discussed.
In contrast to the reviews mentioned above, our review has a specific task (machine vision for weed control). Therefore, it focuses less on general agricultural problems and processes, such as general automation, yield estimation, harvesting, pest management, etc. This research is intended for researchers who investigate weed classification and the automatic control of weeds. Appendix A shows a block diagram of the various stages of the weed detection process considered in this paper.

2. A Brief Overview of DL

2.1. The History: Birth, Decline and Prosperity

Deep neural networks became popular after 2012, when the neural network AlexNet [22] won the ImageNet competition (https://image-net.org/update-mar-11-2021.php, accessed on 5 November 2021). After that, it became customary to evaluate the accuracy of a neural network on massive public datasets (ImageNet, MS COCO, CIFAR, etc.). Accordingly, various neural networks were published by their authors/creators together with the weights obtained by training the models on these datasets. Most of these datasets focus on general tasks such as tracking people, animals, vehicles, etc.
DL techniques benefit from a large amount of training data that sufficiently captures all variations that can exist in the targeted environment (e.g., a natural environment with variation in lighting conditions), adequately covering the distribution of the data. Transfer learning techniques have been widely used to train new models in applications where the training data are limited. Such methods can save the time and labor that would otherwise be spent capturing images and manually labelling classes (annotation). However, transfer learning and fine-tuning are efficient only if the new object classes being modelled are similar to some of the object classes that contributed to the original training process.
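As a concrete illustration of transfer learning and fine-tuning, the sketch below initializes a Keras model with ImageNet weights, freezes the convolutional base, and adds a new classification head for a hypothetical set of weed/crop classes; the backbone choice, class count, and image size are assumptions for illustration only, not settings from the reviewed papers.

```python
import tensorflow as tf

NUM_CLASSES = 5          # hypothetical number of weed/crop classes
IMG_SIZE = (224, 224)    # input size expected by the pretrained backbone

# Pretrained backbone with ImageNet weights, without its original classifier.
base = tf.keras.applications.MobileNetV2(
    input_shape=IMG_SIZE + (3,), include_top=False, weights="imagenet")
base.trainable = False   # freeze pretrained layers for the first training stage

# New classification head trained on the (smaller) weed dataset.
model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=10)  # weed/crop datasets assumed
```

After the new head converges, some of the deeper backbone layers are typically unfrozen and trained with a lower learning rate (fine-tuning), which only helps when the new classes resemble those seen during pretraining.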
The need for weed recognition datasets, which could aid in accurately controlling weeds using automated means, has been high. Since automatic weed detection and control can reduce time and labor while increasing productivity, there has been significant interest in an efficient solution. Some works performed before the appearance of DL embraced various methods for increasing productivity, without any of them prevailing over the rest. For example, Liu et al. [23] proposed combining HSI (hue, saturation, intensity) and RGB (red, green, blue) color spaces for weed detection. Watchareeruetai et al. [24] proposed two methods for detecting weeds on lawns using computer vision technology. The first exploited statistical differences between weed and grass areas in edge images and used a Bayesian classifier to distinguish them. Padmapriya et al. [25], Olsen et al. [26], and Downey et al. [27] demonstrated an approach that included various steps (i.e., preliminary processing, feature extraction, and a classification stage against a pronounced background) to detect weeds with high accuracy. The information presented in these works may still be helpful for image pre-processing and feature extraction before training DL models, which can be especially important when only a limited number of images is available in the dataset.

2.2. Architecture and Advantages of CNN

Convolutional neural networks (CNNs) constitute a special architecture of DL networks proposed by LeCun in 1988 [28]. They target effective image recognition based on the use of convolutions. CNNs are a set of machine learning (ML) methods based on general feature representations rather than specialized algorithms for specific tasks. While several CNN architectures have been proposed, and their accuracy has been benchmarked on various standard public datasets, it is still difficult to determine the best performer for specific applications such as weed detection. Wen et al. [29], Gothai et al. [30], and Su [31] presented reviews of numerous neural networks developed in recent years for automated weed control. The authors provided general information about the state of neural networks for weed detection to a wide range of readers but did not discuss how the accuracy of object detection could be improved. Li et al. [32] and Kattenborn et al. [33] explained the benefits of deep neural networks over other methods: they can recognize deeper and unexpected patterns in the data with improved performance over previously used techniques. However, the authors only gave a few examples. Mahony et al. [34] presented various benefits of employing DL for machine vision problems and concluded that, for the detection of objects in real life, DL is the best tool. However, a disadvantage of DL is the limited ability of the algorithms to learn visual relations. Wang et al. [35] published a review paper summarizing advances in weed detection using ground-based vision and imaging technologies. An important point is that the authors, besides the standard use of DL, presented methods that utilize color indices and threshold- and learning-based functions. They used four categories: biological morphology, spectral features, visual textures, and spatial contexts. Dhillon et al. [36], Ren et al. [37], Gorach [38], Naranjo-Torres et al. [39], and Jiao et al. [40] presented overviews of various deep architectures and models. The operation of various CNN architectures and their components was described in detail in these overviews, including popular architectures and models such as LeNet, AlexNet, ZFNet, GoogleNet, VGGNet, ResNet, ResNeXt, SENet, DenseNet, Xception, and PNAS/ENAS.
Hundreds of varieties of neural networks have been presented. In recent years, computer science researchers have tended to publish their results as preprints because results in this field quickly become obsolete. Consequently, the information presented often remains unreviewed. Ideally, neural networks should be tested on published datasets, and results such as training time and FPS should be reported. In most cases, however, the efficiency of a neural network is tested on an unpublished dataset, which makes it extremely difficult to evaluate the capabilities of the network in question.
The object detection efficiency largely depends on the choice of training parameters for a convolutional neural network. One of the most important points is hyperparameter optimization, a machine learning task that involves choosing a set of optimal hyperparameters for a training algorithm. Hyperparameter settings were discussed by He et al. [41], Ma et al. [42], and Aydoğan et al. [43].
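As a simple illustration of hyperparameter optimization, the sketch below runs a plain grid search over learning rate and batch size for a small Keras classifier; the search grid, toy model, and random stand-in data are assumptions chosen for brevity rather than values from the cited studies.

```python
import itertools
import numpy as np
import tensorflow as tf

# Random stand-in data; in practice these are annotated weed/crop image arrays.
train_x = np.random.rand(64, 128, 128, 3).astype("float32")
train_y = np.random.randint(0, 5, size=64)
val_x = np.random.rand(16, 128, 128, 3).astype("float32")
val_y = np.random.randint(0, 5, size=16)

def build_model(learning_rate, num_classes=5):
    """Small CNN whose training behaviour depends on the chosen hyperparameters."""
    model = tf.keras.Sequential([
        tf.keras.layers.Conv2D(16, 3, activation="relu", input_shape=(128, 128, 3)),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate),
                  loss="sparse_categorical_crossentropy", metrics=["accuracy"])
    return model

learning_rates = [1e-2, 1e-3, 1e-4]
batch_sizes = [16, 32]
best = None

for lr, bs in itertools.product(learning_rates, batch_sizes):
    model = build_model(lr)
    history = model.fit(train_x, train_y, validation_data=(val_x, val_y),
                        batch_size=bs, epochs=5, verbose=0)
    val_acc = max(history.history["val_accuracy"])
    if best is None or val_acc > best[0]:
        best = (val_acc, lr, bs)

print("Best validation accuracy %.3f with lr=%g, batch_size=%d" % best)
```

More elaborate strategies (random search, Bayesian optimization, or evolutionary search) follow the same pattern of repeatedly training and scoring candidate configurations.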

2.3. DL and CNN in Generic Object Detection in Agriculture

CNNs have already been employed in research in the agricultural domain. Agarwal et al. [44] developed a custom CNN model with only eight layers for the identification of tomato crop diseases. The authors proposed image pre-processing by changing the image brightness after image augmentation. If the weed has a color contrast with the crop, this method can be used even on low-power processors, such as single-board computers including the Raspberry Pi. The identification of crop diseases using neural networks was also considered by Li et al. [32]. The authors concluded, in line with the findings of Agarwal et al. [44], that for some tasks involving the identification of leaves, shallow CNNs are useful. Combining a shallow CNN with classical ML classification algorithms is a promising and simple way to deal with the identification of plant diseases. Boulent et al. [45] reviewed papers where neural networks were used to detect diseases on plant leaves. The authors analyzed 19 studies that used CNNs to identify crop diseases automatically and provided recommendations to maximize the potential of CNNs.
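To make the notion of a shallow CNN concrete, the following sketch defines a small Keras classifier of roughly the depth described above; the layer counts, filter sizes, and class count are illustrative assumptions, not the architecture of Agarwal et al. [44].

```python
import tensorflow as tf

# A shallow CNN for leaf/weed classification (illustrative architecture).
shallow_cnn = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 255, input_shape=(128, 128, 3)),
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(4, activation="softmax"),  # hypothetical 4 disease classes
])
shallow_cnn.compile(optimizer="adam",
                    loss="sparse_categorical_crossentropy",
                    metrics=["accuracy"])
shallow_cnn.summary()
```

Because such a network has comparatively few parameters, it trains quickly and can be exported to a lightweight runtime (e.g., TensorFlow Lite) for deployment on single-board computers.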
Jiang et al. [46] performed a large survey of studies in which various CNN architectures were employed for plant stress assessment, plant development, and post-harvest quality assessment. The authors organized the studies in their review based on technical developments in image classification, object detection, and image segmentation. Noon et al. [47] reviewed different deep learning techniques for the identification of plant leaf stresses. The authors paid special attention to one of the most popular frameworks created in Python, the Keras DL framework (https://keras.io/, accessed on 5 November 2021), and its ability to increase the speed of weed recognition.
Moreover, Mishra et al. [48] presented a pretrained deep convolutional neural network (DCNN) model deployed on a Raspberry Pi 3 (https://www.raspberrypi.com/products/raspberry-pi-3-model-b/, accessed on 5 November 2021) using an Intel Movidius Neural Compute Stick with dedicated CNN hardware units. The DL model achieved an accuracy of 88.46% in recognizing corn leaf diseases. The authors used a single-board computer with power limitations and therefore proposed that their method should be employed on more powerful machines with GPUs. Many current applications of machine vision in agriculture consider the use of stereovision and time-of-flight (TOF) cameras (e.g., Kinect v2), as described by Badhan et al. [49] and Gai et al. [50], to determine the camera-to-object distance. It is necessary to understand that the quality of the image strongly depends on the choice of camera. A comparison of the most popular camera technologies (CMOS and CCD) was made by Gottardi [51], Helmers [52], and Silfhout [53]. Krishna et al. [54] presented an overview of camera frameworks in their review paper. In all the above cases, the application falls into the machine vision pipeline but is too broad to be considered in this paper.
Generally, simple/shallow neural networks and classical ML techniques were mainly used during the previous decade to model problems requiring simple classification. However, in recent years, DL-based algorithms and architectures have proven to be superior in difficult classification tasks, as illustrated by Trung et al. [55], Wang et al. [56], and Bui et al. [57]. DL methods trained on synthetic data have achieved satisfactory results on real data as well [58]. Barth et al. [59] reported better classification results when modelling artificial conditions (training/testing on synthetic images or controlled lighting conditions) to improve the segmentation of parts of yellow pepper plants. However, such methods can fail when exposed to unstructured natural environments with large variations in lighting conditions.

3. Datasets and Image Pre-Processing

3.1. Datasets for Training Neural Networks

A dataset is needed to train neural networks, and the image annotation of datasets is one of the main tasks in developing a computer vision system. Neural networks can be trained on multi-dimensional data and have the potential to model and extract meaning from a wide range of input information to address complex and challenging image classification and object detection problems. However, image datasets targeted for training neural networks should contain a significant number of images and enough variation in the object classes. When tracking an object with centimeter accuracy, the dataset must be annotated appropriately, which is a highly labor-intensive task in terms of time and effort. At the same time, whichever neural network architecture is used, it is advisable to initialize the neurons using weights from a model that was trained on similar/related objects/images (i.e., the transfer learning technique). Transfer learning transfers the understanding (trained weights) of a model trained on a large dataset to initialize training on a new, often smaller, dataset with similar objects. Zichao [60] used 3500 images of 12 types of weeds and trained the first 14 layers of VGG16 with the Keras framework. Chen et al. [61] collected 5187 color images of 15 types of weeds and used 27 types of deep learning models through transfer learning (Figure 1). The paper considered, in detail, many specific aspects, such as unweighted cross-entropy (CE) loss functions. The authors conducted an impressive amount of work and, at the same time, provided links to both the software on GitHub and the dataset on Kaggle.
Espejo-Garcia et al. [62] used five pre-trained convolutional networks (Xception, Inception-ResNet, VGGNets, MobileNet, and DenseNet) for detecting two weed species (black nightshade (Solanum nigrum L.) and velvetleaf (Abutilon theophrasti Medik.)). They attained a weed detection accuracy of 99.29%. An advantage of transfer learning is that we can start from objects other than weeds: a model trained on plant identification can also be used to find weeds with pretrained weights (after adding the last layers, it can be trained for weed objects). Al-Qurran et al. [63] discussed the possibility of applying transfer learning to the identification of plants in images with natural backgrounds. Data augmentation is a preprocessing technique that is becoming increasingly popular in ML for training neural networks; it creates large amounts of training data with significant variance from smaller datasets [64]. This technique helps to create more efficient and robust models [65]. Zheng et al. [66] proposed a dataset consisting of 31,147 images with over 49,000 annotated instances from 31 different types of crops. In contrast to existing vision datasets, the images were collected from a variety of cameras and equipment installed in greenhouses (Figure 2).
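A minimal sketch of image data augmentation with Keras preprocessing layers is shown below; the specific transformations and their ranges are illustrative assumptions rather than the settings used in the cited studies.

```python
import tensorflow as tf

# Augmentation pipeline applied on-the-fly during training.
augment = tf.keras.Sequential([
    tf.keras.layers.RandomFlip("horizontal_and_vertical"),
    tf.keras.layers.RandomRotation(0.2),        # up to about +/- 20% of a full turn
    tf.keras.layers.RandomZoom(0.1),
    tf.keras.layers.RandomBrightness(0.2),      # simulate changing illumination
    tf.keras.layers.RandomContrast(0.2),
])

# Toy batch of images standing in for real weed/crop photographs.
images = tf.random.uniform((4, 224, 224, 3))
augmented = augment(images, training=True)
print(augmented.shape)
```

Each epoch therefore sees slightly different versions of the same photographs, which increases the effective variance of a small dataset without collecting new images.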
Sudars et al. [67] presented a smaller dataset consisting of 1118 images, covering six food crops and eight weed species. Pictures were taken of food crops and weeds grown in a controlled environment and in the field at various growth stages. Cap et al. [68] took a completely different path and created a dataset to identify disease on the leaves of a cultivated crop using Leaf Artifact-Suppression Super Resolution (LASSR). According to the results, the proposed artefact-suppression super-resolution method had higher accuracy than the GAN and CycleGAN models [69] (Figure 3).
In general, the selection of datasets for DL model training plays a crucial role in the model’s accuracy and prediction capacity. At the same time, it is extremely difficult to predict which results a different dataset will provide. Thus, decisions such as which dataset is needed and how to properly prepare and pre-process the images are quite difficult. Huang et al. [70] and He et al. [71] discussed various recommendations for creating training datasets. According to their recommendations, the resolution of the images and annotations in the dataset, the format of the data shapes, the size of the objects and their relative sizes, rotation, tilt, and lighting all affect the accuracy of neural networks trained with these datasets. Equally important are the scales of the input images, rotations, lighting from different sides, and different backgrounds. Information about popular datasets is presented in Table 1.
Lu et al. [76] devoted one chapter to the topic of weed datasets in their review article on datasets in the agricultural sector.

3.2. Image Preprocessing

Image preprocessing (e.g., color space transformation, rescaling, flipping, rotation, and translation) is a stage of image preparation that focuses on creating accurate data in the format needed for the efficient use of image processing algorithms. The advantages are that these methods do not require high computing power and can be implemented in real-time on single-board computers. Faisal et al. [77] presented a good example of a typical preprocessing process, exploring the use of a support vector machine (SVM) and a Bayesian classifier as ML algorithms for classifying pigweed (Amaranthus retroflexus L.) in image classification tasks. With sufficiently strong color contrast, this method accurately recognizes the position of the object (Figure 4).
Image preprocessing has a broad scope, with many different possible filters and options for transforming an image. Dyrmann [78] presented a review of various filters for preprocessing but did not give enough examples with images. Preprocessing can also be used in real-time. Pajares et al. [64] provided guidance on choosing a vision system for optimal performance under adverse outdoor conditions with large light variability, uneven terrain, and different plant growth conditions. Calibration was used for color balance, especially when the illumination changed (Figure 5). This kind of system can be used in conjunction with DL: since the DL model can then be trained on a single illumination condition, it reduces the number of images required to train the model.
With a sufficiently large image dataset and the use of data augmentation (e.g., brightness, hue, and saturation alteration), color balancing may not be required, since the neural network will already be trained for different illuminations. However, in addition to adjusting the camera settings, for example, when working through a digital camera interface (DCMI) protocol in real-time, color balancing can significantly enhance the results. Machine vision tasks, as a rule, use simple techniques and various preprocessing filters, followed by the detection of an object against a pronounced background. For example, Chang et al. [79] used image processing methods such as HSV (hue (H), saturation (S), value (V)) color conversion for weed detection. Slaughter et al. [80] used the OpenCV library for weed control with autonomous robotic systems, and Abhisesh [81] used it for robotic apple harvesting. The OpenCV library is well suited for objects with a pronounced contrast.
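The following sketch illustrates the kind of HSV-based color segmentation described above using OpenCV; the synthetic image and the green hue range are assumed examples and would need tuning for real field imagery and a real camera.

```python
import cv2
import numpy as np

# Synthetic stand-in image: dark soil background with a green "plant" patch.
bgr = np.full((200, 200, 3), (40, 60, 90), dtype=np.uint8)
cv2.rectangle(bgr, (80, 80), (130, 130), (40, 180, 60), -1)  # green-ish patch (BGR)

# Convert BGR -> HSV and threshold an assumed green vegetation range.
hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
lower_green = np.array([35, 60, 40])
upper_green = np.array([85, 255, 255])
mask = cv2.inRange(hsv, lower_green, upper_green)

# Clean up the binary mask and box the connected plant regions.
kernel = np.ones((5, 5), np.uint8)
mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
for c in contours:
    if cv2.contourArea(c) > 200:                 # ignore tiny speckles
        x, y, w, h = cv2.boundingRect(c)
        cv2.rectangle(bgr, (x, y), (x + w, y + h), (0, 0, 255), 2)
cv2.imwrite("segmented.jpg", bgr)
```

The hard-coded thresholds are exactly why such pipelines work well only with pronounced contrast and stable lighting, as discussed next.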
Despite the high efficiency of preprocessing methods in some works, these methods are often not suitable for real-time use. The parameters of preprocessing methods can be tuned effectively to specific conditions by human observation, but afterwards these settings cannot be changed in real-time without human intervention. At the slightest change in the color contrast of the image, the effectiveness of the method decreases. Therefore, these methods may be suitable for analyzing yields on farms where artificial conditions for growing corn are created; consequently, the robot can only identify weeds under these specific conditions.

3.3. Available Weed Detection Systems

The practical application of machine vision systems is closely related to robotics. Controlling weeds with robotic systems is a practice that has recently gained increasing interest. In robotic systems, the mechanical aspects usually require higher precision; hence, detecting weeds using machine vision becomes even more challenging. Qiu et al. [82] and Ren et al. [83] used popular neural network models, such as R-CNN and VGG16, for weed detection with a robotic system. Asha et al. [84] created a robot for weed control but realized the challenge of accurate weed detection after deployment in the field due to the minor differences in size between the weed and the crop (Figure 3). When the weed was close to a cultivated crop and its leaves (e.g., when they covered each other), the robot had a problem identifying the weeds. After color segmentation, the weed was perceived as a single object together with the crop.
Chang et al. [79] and Shinde et al. [85] used neural networks for image segmentation and the subsequent identification of weeds by finding the shapes that represent the weed’s structure (Figure 6).
Raja et al. [86] proposed the marking of the crops, making them machine-readable for robotic systems. After such annotation, a robot could be trained to cut a stem with high accuracy. A drawback is that this work requires a lot of time. Additionally, several works have focused on laser technology for weed control in ideal imaging conditions, without much emphasis on the use of machine vision in real-world fields [87,88,89,90,91,92,93,94].
Unmanned aerial vehicles (UAVs) are a promising platform for data collection and weed control in agriculture (e.g., broad-acre farming), as demonstrated by Librán-Embid et al. [91] and Boursianis et al. [92], and, in the future, machine vision on agricultural farms may mainly be applied via UAVs. The use of UAVs for creating accurate weed maps with DL has been presented by Huang [93], Hunter et al. [94], and Cerro et al. [95]. UAV images are not only used to obtain general information about the crop. Drones are used for the patch spraying of herbicides; they are also used when precise detection of the target weeds in images is required, either in real-time or based on weed maps [96]. Rijk et al. [97] used a drone with the OpenCV library for image processing in real-time. Liang et al. [98] presented an automated image classification system based on a CNN model that relies on simple imaging tools to spray herbicide on patches of weeds in fields.
The development of robots to control weeds is a complicated and time-consuming task that includes mechanics, weed identification, and weed control systems working together. Selecting a neural network, collecting datasets, training the neural network, and setting up hyper-parameters takes a lot of time. Therefore, authors often prefer already existing proven and previously used tools that are available on the market.

4. DL for Weed Detection

This section analyzes the research papers identified through the criteria set out in Section 1.2. First, relevant applications of machine vision for weed detection in agronomic automation are broadly mentioned. Then, Section 4.2 presents studies that have used neural networks and DL for object detection in relevant agricultural applications.

4.1. The Curse of Dense Scenes in Agricultural Fields

Dense scenes are scenes with dense vegetation, including both crops and weeds, and large occlusion between weeds and crops. Zhang et al. [99] presented a review of DL for dense scenes in agriculture. The purpose of their study is illustrated in Figure 7a, which presents a cluster of fruits on a dense background (including trees from other rows). The authors give several general recommendations, such as increasing the dataset, generating synthetic data, and tuning hyper-parameters, and they concluded that it is advisable to continue increasing the depth of neural networks to solve problems with dense scenes. Asad et al. [100] focused on weeds (Figure 7b,c). The authors used semantic segmentation with a ResNet-50-based SegNet model. As a result, they attained a Frequency-Weighted Intersection Over Union (FWIOU) value of 0.9869, but they did not consider situations where the weed and the crop come into contact; in such cases, this method will not be efficient.
Proper annotation of the training dataset is crucial to detect objects precisely. Bounding box-based detection methods use several IoU (intersection over union) and NMS (non-maximum suppression) thresholds for model training and inference. Some segmentation methods can be used as well, with blob detection and thresholding techniques for the identification of individual objects in clusters. Identifying individual objects in clusters is a problem in various fields of activity, and much research has been devoted to solving it. Shawky et al. [101] and Zhuoyao et al. [102] considered the problem of dense scenes and concluded that each situation is unique and that it is advisable to combine various convolutional neural networks with each other. However, it is very difficult to predict the result in advance because such a method may take a lot of time. Dyrmann et al. [78] considered an object as a combination of different objects, depending on their position, which made the model a little heavier but, at the same time, increased the tracking accuracy. This method is one of the most promising.
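For reference, the IoU metric mentioned above can be computed for two axis-aligned bounding boxes as in the short sketch below (boxes given as (x1, y1, x2, y2) corner coordinates).

```python
def iou(box_a, box_b):
    """Intersection over union of two boxes given as (x1, y1, x2, y2)."""
    # Coordinates of the intersection rectangle.
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    if inter == 0.0:
        return 0.0
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

# Example: a predicted weed box vs. an annotated ground-truth box.
print(iou((10, 10, 50, 50), (30, 30, 70, 70)))  # approx. 0.143
```

NMS then discards predictions whose IoU with a higher-scoring prediction exceeds a chosen threshold, which is exactly where densely overlapping weeds and crops become problematic.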

4.2. State-of-the-Art Methods in Weed Detection

This section presents studies that used deep neural networks for object detection in agricultural applications focusing on weed detection. Works that explored practical application of neural network models for real-time agricultural automation, focusing on the precise detection of weeds in natural environments, are considered. Recommendations are made on the type of DL networks and their suitability for practical farm applications.
Liakos et al. [103] provided a comprehensive review of research on the application of ML in agricultural production systems in all its aspects, including both vehicle management and machine vision systems. However, the authors focused more on statistics about the use of neural networks than on the technical information needed to implement a neural network. Similarly, Hasan et al. [104] reported on existing methods of weed detection and classification based on DL. The authors considered only 70 papers, but all papers were assessed using the same criteria. In their conclusion, they explained common ideas related to the use of DL in the field of agriculture.
Many scientists have used DL to detect a single feature against a specific background. Rehman et al. [105] considered the role of computer vision only for fruits and vegetables among various horticultural products in agricultural areas, using statistical machine learning technology. Similarly, Osorio et al. [106] considered a DL approach for weed detection in lettuce crops, while Ferreira et al. [107] considered weed detection in soybean crops. A critical limitation is that the authors only used a limited dataset from one field. Therefore, the results of the neural networks presented in these works have high accuracy, but at the slightest change in the detection conditions (e.g., a new field or a change of season), the accuracy may change. As a rule, the results of such works can be used only under consistent conditions, which very rarely occur.
Santos et al. [108] presented a review of several DL methods applied to various agricultural problems such as disease detection, the classification of fruits and plants, and weed classification. However, the review was very short and limited to a small number of studies.
Dokic et al. [109] concluded that DL methods were better than classical ML methods for plant classification in images. However, the analysis of deep neural networks in the context of agriculture is not complete because problems with dense scenes have not been solved. A review by Tian et al. [110] considered practical applications of machine vision in agriculture. Their analysis showed that computer vision could help in developing agricultural automation for small fields in order to achieve benefits such as low cost, high efficiency, and accuracy. However, more emphasis was placed on the automation process than on machine vision. Khaki et al. [111] proposed the use of a CNN to detect and count corn kernels in images. The authors trained different models for the detection of objects at high speed under various conditions, lighting, and angles. They applied a standard sliding window approach to detect the kernels, obtaining high accuracy due to the correct annotation of the dataset (Figure 8).
Osorio et al. [106] classified weeds in images by employing methods such as SVM, YOLOv3, and Mask R-CNN, achieving F1 scores of 88%, 94%, and 94%, respectively, for weed detection. The F1 score is the harmonic mean of the precision and recall values of a model. Dunn’s test was introduced to obtain statistical measurements for each assessment (man vs. machine). They showed that the DL models could improve the accuracy of weed coverage estimates and minimize human bias. Yu et al. [112] used several DCNN models for the detection of weeds in bermudagrass. The VGGNet model achieved the highest F1 scores (>0.95) and outperformed the GoogLeNet model in detecting weeds. The authors provided various tips to improve the detection accuracy for each DL model used in the study.
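For reference, the F1 score mentioned above is computed as F1 = 2 · (precision · recall)/(precision + recall), where precision = TP/(TP + FP) and recall = TP/(TP + FN), with TP, FP, and FN denoting the numbers of true positives, false positives, and false negatives, respectively.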
Asad et al. [100] used DL meta-architectures such as SegNet and U-Net, with encoder blocks such as VGG16 and ResNet-50, to detect weed plants in canola fields. The ResNet-50-based SegNet model performed best, with a mean intersection over union (IoU) of 0.8288 and a frequency-weighted IoU of 0.9869. Yu et al. [113] analyzed several deep convolutional neural networks (DCNNs) to detect dandelion (Taraxacum officinale Web.), ground ivy (Glechoma hederacea L.), and spotted spurge (Euphorbia maculata L.). They considered GoogleNet, DetectNet, and AlexNet. Of those, DetectNet had the highest F1 scores (≥0.9843) on the test datasets for weed detection.
Gao et al. [114] significantly improved the accuracy of the neural network by developing their neural network based on YOLOv3 and Tiny YOLO. Specifically, the average accuracy for the detection of hedge bindweed (Calystegia sepium (L.) R.Br.) and sugar beet were 0.761 and 0.897, respectively, for images of 800 × 1200 pixels. Such a redesign of existing standard object detection frameworks for specific applications has good potential for improved accuracy and speed.
Scott et al. [115] analyzed two models, Faster R-CNN and the Single Shot Detector (SSD), for weed detection using mean intersection over union (IoU) and inference speed. The authors showed that Faster R-CNN with a 200-box proposal had the same weed detection performance as the SSD model in terms of accuracy, recall, F1, and IoU score, as well as similar inference times. They concluded that single-stage (one-shot) object detectors (e.g., SSD and YOLO) provide better detection speed (real-time) but are considered less accurate compared to two-stage detectors such as Faster R-CNN.
Narvekar et al. [116] developed a prototype of the CNN model architecture for the problem of classification of flower species with a dense overlap between flowers and weeds. The authors compared the result of transfer learning across the VGG16, MobileNet2, and Resnet50 CNN architectures.
Sharma et al. [117] used image segmentation to train CNN models. As a result, the S-CNN model was trained using segmented images and showed an accuracy of 98.6% when classifying ten classes of plant diseases. Object/instance segmentation was used to draw a free shape (polygon) around the target objects. Such segmentation methods are useful in cases where there is a high degree of object overlapping, which causes the common method of drawing rectangular bounding boxes to be inappropriate.
An overview of the methods and DL techniques used to identify weeds and crops is shown in Appendix B.

5. Technical Aspects

This section discusses various technical aspects considered in the state-of-the-art work presented in this review. Such elements include models and architectures that have been employed, methods used to improve performance, methods used to detect small objects, and complexity vs. processing.

5.1. Models and Architectures

The number of neural network models and architectures proposed for solving the task under study is increasing quickly due to the large interest in employing DL among scientists in this field. Therefore, we only present an analysis of the latest, most popular, and successfully used DL neural networks, which have shown promising, good results for datasets used by the authors for training and/or testing.
Du et al. [118] described the SpineNet model, which introduces multi-scale features via convolutional layers of mixed sizes. The model was originally developed to classify medical images but is also used in agriculture due to its excellent efficiency. Koh et al. [119] successfully used this model for high-throughput image-based plant phenotyping. Shah et al. [120] also described AmoebaNet, which refers to algorithms used for the automatic creation of neural networks. AmoebaNet uses evolutionary algorithms instead of reinforcement learning to automatically find optimal neural network architectures.
Yao et al. [121] used SM-NAS neural networks for high-accuracy object detection; these models offer a two-step coarse-to-fine search strategy called Structural-to-Modular NAS (SM-NAS). The first search stage, at the structural level, aims to find an effective combination of different modules. The second search stage, at the modular level, evolves each specific module. Jia et al. [122] described CenterNet and CornerNet-Lite, lightweight real-time object detection systems. Zhao et al. [123] used this system for fruit detection in digital images. As a result, this system showed the best results compared with ResNet-18, DLA-34, and HourglassNet. CenterNet models the object as a single point at the center of the bounding box. The size of the object is retrieved in a second phase through the image features. An input image is fed into the neural network, and the neural network generates a heatmap. The peaks in this heatmap correspond to the centers of objects, and the image features at each peak predict the size of the bounding box around the object.
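The sketch below illustrates the heatmap-decoding idea described above: local maxima of a predicted center heatmap are treated as object centers. It is a simplified NumPy/SciPy illustration of the principle, not the actual CenterNet implementation.

```python
import numpy as np
from scipy.ndimage import maximum_filter

def decode_centers(heatmap, score_threshold=0.3):
    """Return (row, col, score) for local maxima of a center heatmap."""
    # A pixel is a peak if it equals the maximum of its 3x3 neighbourhood.
    local_max = maximum_filter(heatmap, size=3) == heatmap
    peaks = np.argwhere(local_max & (heatmap >= score_threshold))
    return [(int(r), int(c), float(heatmap[r, c])) for r, c in peaks]

# Toy heatmap with two "objects" (in practice this is the network's output).
hm = np.zeros((8, 8))
hm[2, 3] = 0.9
hm[6, 6] = 0.7
print(decode_centers(hm))  # [(2, 3, 0.9), (6, 6, 0.7)]
```

In the full detector, a regression head predicts the box width and height (and a center offset) at each of these peak locations.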
Xu et al. [124] considered SegNet, FCN, and U-Net for weed image segmentation at the seedling stage in paddy fields. Kong et al. [125] successfully used MCF-Net for crop species recognition in precision agriculture. Wosner et al. [126] used the EfficientDet neural network for object detection in agriculture. The general architecture of EfficientDet represents the one-stage detector paradigm. EfficientNet, pre-trained on ImageNet, was taken as a basis.
Some of the most popular neural networks for detecting objects in real-time are YOLOv3, YOLOv4, and YOLOv5. Most articles concerning the detection of weeds or crop classification in agriculture used these models. For example, Wu et al. [127] used YOLOv4 to accurately detect apple flowers in real-time. Kuznetsova et al. [128] compared YOLOv5 with YOLOv3 for apple detection. General information about YOLOv3 was presented by Tian et al. [129] and Wu et al. [130]. The open-source code of YOLO is regularly maintained, with various features continuously added. YOLO models have been widely used in real-time applications because of their well-maintained repository and documentation, and the availability of light, medium, and heavy model variants for speed–accuracy trade-offs.
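As an example of how such a model can be applied, the sketch below loads a small pretrained YOLOv5 model through PyTorch Hub and runs inference on a field image; the pretrained COCO weights and the file name are used here only for illustration, and a weed-detection application would require retraining on an annotated weed/crop dataset.

```python
import torch

# Load a small pretrained YOLOv5 model from the ultralytics repository.
model = torch.hub.load("ultralytics/yolov5", "yolov5s", pretrained=True)

# Run inference on a field image (hypothetical file name).
results = model("field.jpg")

# Bounding boxes as (x1, y1, x2, y2, confidence, class) for the first image.
detections = results.xyxy[0]
for *box, conf, cls in detections.tolist():
    print(model.names[int(cls)], round(conf, 2), [round(v, 1) for v in box])
```

The light, medium, and heavy variants mentioned above differ mainly in backbone width and depth, so the same code can trade accuracy for frame rate by swapping the model name.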

5.2. Future Directions

Popular DL models have been designed for a wide range of tasks. Therefore, they can detect and classify many different objects. Summarizing previously reviewed papers, we can conclude that deep neural networks do not currently perform well on challenging tasks, such as finding weeds in dense scenes and accurately detecting their position. Using popular DL models without any modifications/adaptations makes detecting the position of a weed plant challenging.
Waheed et al. [131] and Atila [132] gave general advice for improving the accuracy of CNNs, such as using transfer learning in which all layers of the model are trained, using corn leaf diseases as an example. These recommendations were implemented by Pang et al. [133], who determined early-season corn stands using geometric descriptor information and deep neural networks.
Liang et al. [134] proposed the use of linear-phase point-convolution kernels (LPPC kernels) to reduce the computational complexity and storage costs of convolutional neural networks. This method was partially implemented by Taravat et al. [135] for agricultural field boundary detection. Isufi et al. [136] developed a custom DL model to study joint convolutional representations from the nearest neighbor and the graph of the farthest neighbor. Wei et al. [137] used this method to increase accuracy in the slice positioning method for the laser cutting of corn seeds.
Another option for increasing the accuracy of a deep neural network is to combine several types of networks (ensemble techniques). For example, Koo et al. [138] proposed a hierarchical classification model using CNN fusion to extract hierarchical representations of images. The authors applied residual learning to the RNN part to make the composite model easier to train and, ultimately, improved the model’s generalization. The experimental results showed that hierarchical networks perform better than modern CNNs. Agarap [139] combined neural networks and classical ML by combining a CNN with an SVM for image classification. This combination achieved a test classification accuracy of ≈99.04% on the MNIST dataset. Khaki et al. [140] used these techniques to create a DL model, which demonstrated the ability to generalize yield forecasts to untested environments without significantly reducing the forecast accuracy.
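A minimal sketch of the CNN-plus-SVM combination mentioned above is shown below: a pretrained CNN is used as a fixed feature extractor, and a scikit-learn SVM is trained on the extracted features. The backbone choice and the random stand-in data are assumptions for illustration, not the setup of Agarap [139].

```python
import numpy as np
import tensorflow as tf
from sklearn.svm import SVC

# Pretrained CNN backbone used purely as a feature extractor.
backbone = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, pooling="avg", weights="imagenet")

def extract_features(images):
    """images: float array of shape (N, 224, 224, 3) with values in [0, 255]."""
    x = tf.keras.applications.mobilenet_v2.preprocess_input(images)
    return backbone.predict(x, verbose=0)

# Random stand-in data; in practice these are labelled weed/crop images.
train_images = np.random.rand(20, 224, 224, 3).astype("float32") * 255
train_labels = np.random.randint(0, 3, size=20)
test_images = np.random.rand(5, 224, 224, 3).astype("float32") * 255

svm = SVC(kernel="rbf", C=10.0)
svm.fit(extract_features(train_images), train_labels)
print(svm.predict(extract_features(test_images)))
```

The appeal of this design is that the SVM can be retrained quickly on new classes without touching the heavy convolutional backbone.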
Finally, as noted in Section 4.1, proper annotation of the training dataset is crucial for precise object detection, and bounding box-based detection methods rely on appropriate IoU (intersection over union) and NMS (non-maximum suppression) thresholds for training and inference. Segmentation methods combined with blob detection and thresholding techniques can likewise be used to identify individual objects in clusters. Dyrmann et al. [141] considered an object as a combination of different objects, depending on their position, which made the model a little heavier but, at the same time, increased the tracking accuracy.

5.3. Detection of Small Objects

Identifying and separating weeds from crops requires high accuracy in object detection. Here, we highlight papers where the task was to detect small objects. Barth et al. [142] showed that with a large dataset of high-resolution images (Nikon D7200 with a resolution of 4000 × 6000 pixels), it is possible to achieve high accuracy for real-time sugar beet and weed counting using the proposed deep neural network. Nguyen et al. [143] considered the possibility of detecting small objects in a filtered subset of the PASCAL VOC 2007 dataset, using the Fast R-CNN, Faster R-CNN, RetinaNet, and YOLOv3 DL models. They concluded that the deeper the architecture, the higher the detection accuracy achieved. Chen et al. [144] compiled a reference dataset adapted for small objects to better assess performance in the detection of small objects. The authors supplemented the state-of-the-art R-CNN algorithm with a contextual model and a small-region proposal generator to improve the performance of small object detection.
Yu et al. [145] presented a mask region-based convolutional neural network (Mask R-CNN) to detect strawberries using a robot. ResNet-50 was adopted as the backbone network, combined with a Feature Pyramid Network (FPN) for feature extraction. It was concluded that segmentation is especially useful for identifying objects in dense clusters and for correctly calculating the gripping position. Finally, Boukhris et al. [146] trained Mask R-CNN to automatically detect small lesions on leaves and fruits, locate them, classify their severity, and visualize them.

5.4. Complexity vs. Processing Capacity

As the depth of a network increases, the number of layers increases, and the network requires more parameters for training; thus, longer training times and more processing capacity are needed. Initially, up to VGGNet, it was found that accuracy increased with depth. However, the vanishing gradient problem grows with increasing depth, and very deep models were not trained efficiently [147]. Afterwards, ResNet introduced residual connections, and GoogleNet used a similar technique (inception blocks) to let information flow to the end layers of very deep neural networks (a minimal residual block sketch is shown after the list below). Later, more accurate models were created by increasing width and not just depth. A logical solution for improving the accuracy of a neural network is to use more layers. However, heavy networks can no longer run on low-cost single-board computers (Raspberry Pi, Orange Pi, etc.), which can be conveniently installed directly on a vehicle. For example, a Jetson Nano running YOLOv3 can only process a few frames per second [148,149]. The following possibilities are available for using deep neural networks on single-board computers:
  • DeepStream SDK—software that allows the use of multiple neural networks to process each video stream, making it possible to apply different deep ML techniques;
  • The AWS IoT Greengrass Platform, which extends AWS Edge Web Services by enabling them to work locally with data;
  • The RAPIDS suite of software libraries, based on the CUDA-X AI, makes it possible to work continuously, complete data processing, and analyze pipelines entirely on GPUs;
  • Google Colab is a similar service to Jupyter-Notebook that has been offering free access to GPU Instances for a long time. Colab GPUs have been updated to the new NVIDIA T4 GPUs. This update opens up new software packages, allowing experimentation with RAPIDS at Colab;
  • NVIDIA TensorRT is an SDK for high-performance DL inference. It includes a DL inference optimizer and runtime that provides low latency and high throughput for DL inference applications. TensorRT is a very promising direction for single-board computers because about 39 FPS can be obtained using tkDNN + TensorRT [150] on a Jetson Nano. tkDNN is a deep neural network library built with cuDNN and TensorRT primitives, specifically designed to run on NVIDIA Jetson boards. This requires a conversion of Darknet weights to TensorRT weights using the TensorFlow version of YOLOv4 (https://github.com/hunglc007/tensorflow-yolov4-tflite#convert-to-tensorrt, accessed on 5 November 2021) or the PyTorch version of YOLOv4 (https://github.com/Tianxiaomo/pytorch-YOLOv4#5-onnx2tensorrt, accessed on 5 November 2021).
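As referenced above, the following minimal Keras sketch shows the residual (skip) connection idea used by ResNet-style networks; the filter counts and input size are arbitrary illustrative values.

```python
import tensorflow as tf

def residual_block(x, filters=64):
    """A basic residual block: two conv layers plus an identity skip connection."""
    shortcut = x
    y = tf.keras.layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    y = tf.keras.layers.Conv2D(filters, 3, padding="same")(y)
    y = tf.keras.layers.Add()([shortcut, y])   # skip connection eases gradient flow
    return tf.keras.layers.Activation("relu")(y)

inputs = tf.keras.Input(shape=(128, 128, 64))
outputs = residual_block(inputs)
model = tf.keras.Model(inputs, outputs)
model.summary()
```

Because the skip path lets gradients bypass the convolutional layers, stacks of such blocks can be trained at depths where plain networks suffer from vanishing gradients.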

5.5. Limitations

In this survey, it is worth noting that not all the criteria set for comparing works could be fulfilled, because some authors explained their design decisions only partially or employed a wide variety of performance metrics on their own datasets, which are not publicly available for fair comparison. More fairness could be achieved when comparing work in the field of machine vision and agriculture by considering popular, open, and public datasets together with well-accepted and well-understood assessment metrics. Another critical problem in the surveyed papers was the lack of links to, or information on, initial sources: neural network code, trained weights, and datasets. More than 95% of the papers did not provide any link to the initial source. In the absence of source code, it is not possible to practically test and apply the results presented in the manuscripts. Direct comparison of papers is complicated because the assessment of the effectiveness of neural network models is vague. A neural network can be evaluated for the accuracy of detection (classification), the FPS rate, the size of the weights (relevant for single-board computers), the required number of training images, and other criteria. Furthermore, many authors agree that the main indicator is the accuracy of determining the position of the detected object and often cite this parameter to prove the effectiveness of the model they have proposed. It is worth noting that the accuracy depends on many factors, such as the setting of the model parameters, the quality of the labeling, the number of images, and the background against which the plant is located. The comparison of models requires that the models are evaluated on the same or similar datasets, and it is essential to have full access to the original materials. In the reviewed papers, the characteristics of the camera were not fully disclosed. For example, the OV7725 camera, which is popular in machine vision, has about a hundred registers that can be changed in real-time.

6. Conclusions

This paper performed a survey on the state-of-the-art methods used for weed identification in agricultural fields, which robots can use for effective weed control. Various methods and techniques were reviewed, focusing on shallow neural networks and DL, and also mentioning different approaches from machine vision that have been applied in the field. Our findings indicate that DL performs better than traditional machine vision-based approaches. Authors of related work in agriculture, especially with respect to weed detection and elimination, are recommended to embrace modern approaches based on neural networks and DL because of the overwhelming number of papers reviewed that show the superiority of deep neural networks in such machine vision problems.
From our observations in this review, it seems logical to consider multifunctional adaptive combinations (ensembles) to improve the performance of a neural network. The structural synthesis of multilayer neural networks will help in the efficient use of spatial information by giving different weights to different layers of objects. CNN architectures combine well with other DL algorithms such as RNNs, showing better results. Regarding DL architectures, the most popular neural network family seems to be YOLO, but for practical use, the position of the weed must still be precisely determined. As we have seen, thanks to various frameworks and libraries, the line between single-board computers and computers with powerful GPUs is gradually blurring. Therefore, deep neural networks are expected to be increasingly used in real-time weed recognition in the future.
We conclude that there is currently no neural network that is ideal for determining the exact position of weeds in real-time. This remains an ongoing challenge, and more solutions are expected in the near future. We encourage scientists to continue their efforts in this field, aiming to solve the problem efficiently, accurately, and without expensive resources.
We recognize that more advanced techniques and promising architectures may appear soon, making the recommendations proposed here obsolete. Finally, we intend to investigate these recommendations experimentally by building a complete weed detection and control system that identifies the exact position of weeds via DL and then destroys them with lasers or robotic arms under realistic, real-world conditions.

Author Contributions

Conceptualization, I.R. and A.K.; methodology, I.R. and A.K.; formal analysis, I.R. and A.K.; investigation, I.R. and A.K.; data curation, I.R. and A.K.; writing—original draft preparation, I.R. and A.K.; writing—review and editing, I.R., A.K. and C.A.; supervision, C.A.; funding acquisition, C.A. All authors have read and agreed to the published version of the manuscript.

Funding

This review was mainly funded by the EU–project WeLASER “Sustainable Weed Management in Agriculture with Laser-Based Autonomous Tools,” Grant agreement ID: 101000256, funded under H2020-EU.3.2.1.1. AK received funding from the European Union’s Horizon 2020 research and innovation program under grant agreement No. 739578 and from the Government of the Republic of Cyprus through the Deputy Ministry of Research, Innovation and Digital Policy.

Conflicts of Interest

The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript, or in the decision to publish the results.

Nomenclature

AI      Artificial Intelligence
CNNs    Convolutional Neural Networks
CV      Computer Vision
DCNNs   Deep Convolutional Neural Networks
DL      Deep Learning
ML      Machine Learning

Appendix A

Figure A1. Block diagram on the various stages of the weed detection process. Numbers refer to related subsections in the article.

Appendix B

Table A1. Applying deep learning to agriculture for weed control.
No. | Place | Detection Task | Camera | Accuracy, % | Weed Position Used for | Dataset | Neural Network | Disadvantage | Weed Type | Grown Crop | References
1 | In greenhouse by automation line | For an approximate determination of the position | Kinect v2 sensor | 66 | Robotic intra-row weed control | Available | AdaBoost | Low accuracy | All that are not broccoli | Broccoli | [50]
2 | By unmanned aerial vehicle in field | To create weed maps | RGB camera | 89 | Weed control using agricultural vehicle | Not available | Automatic object-based classification method | Complexity of customization | All that are not corn | Corn | [151]
3 | By unmanned aerial vehicle in field | For precise positioning of the weed | GoPro Hero3 Silver Edition | 87.69 | Herbicide use | Not available | Random Forest classifier | Low accuracy | All that are not sugarcane | Sugarcane | [152]
— | By unmanned aerial vehicle in field | Field weed density evaluation | RGB cameras | 93.40 | For statistics | Not available | U-Net | No way to control weeds | All that are not corn | Corn | [153]
4 | Autonomous robot on the field | To classify tasks | RGB camera | 92.5 | Robotic weed control | Not available | SVM was used as the classifier | Low recognition speed | Bindweed and bristles (field bindweed and annual bindweed) | Sugar beet | [154]
5 | — | To determine the approximate position | Canon EOS 60D | 60 | Robotic weed control | Available | R-CNN | Detection problems with small weeds | — | — | [155]
6 | — | To determine the exact position | RGB camera | 82.13 | Mechanical weed control | Available | ResNet50 | Slowness | All that are not sugar beet | Sugar beet | [156]
7 | Vehicle | To determine the approximate position | Sony IMX220 | 90.3 | Herbicide application | Not available | AFCP algorithm | Harms both weeds and corn | All that are not corn | Corn | [157]

References

  1. FAO. NSP-Weeds. Available online: http://www.fao.org/agriculture/crops/thematic-sitemap/theme/biodiversity/weeds/en/ (accessed on 18 August 2021).
  2. Kudsk, P.; Streibig, J.C. Herbicides—A two-edged sword. Weed Res. 2003, 43, 90–102. [Google Scholar] [CrossRef]
  3. Harrison, J.L. Pesticide Drift and the Pursuit of Environmental Justice; MIT Press: Cambridge, MA, USA; London, UK, 2011; Available online: https://www.jstor.org/stable/j.ctt5hhd79 (accessed on 5 November 2021).
  4. Lemtiri, A.; Colinet, G.; Alabi, T.; Cluzeau, D.; Zirbes, L.; Haubruge, E.; Francis, F. Impacts of earthworms on soil components and dynamics. A review. Biotechnol. Agron. Soc. Environ. 2014, 18, 121–133. Available online: https://popups.uliege.be/1780-4507/index.php?id=16826&file=1&pid=10881 (accessed on 18 August 2021).
  5. Pannacci, E.; Farneselli, M.; Guiducci, M.; Tei, F. Mechanical weed control in onion seed production. Crop. Prot. 2020, 135, 105221. [Google Scholar] [CrossRef]
  6. Rehman, T.; Qamar, U.; Zaman, Q.Z.; Chang, Y.K.; Schumann, A.W.; Corscadden, K.W. Development and field evaluation of a machine vision based in-season weed detection system for wild blueberry. Comput. Electron. Agric. 2019, 162, 1–3. [Google Scholar] [CrossRef]
  7. Rakhmatulin, I.; Andreasen, C. A concept of a compact and inexpensive device for controlling weeds with laser beams. Agron. 2020, 10, 1616. [Google Scholar] [CrossRef]
  8. Raj, R.; Rajiv, P.; Kumar, P.; Khari, M. Feature based video stabilization based on boosted HAAR Cascade and representative point matching algorithm. Image Vis. Comput. 2020, 101, 103957. [Google Scholar] [CrossRef]
  9. Kaur, J.; Sinha, P.; Shukla, R.; Tiwari, V. Automatic Cataract Detection Using Haar Cascade Classifier. In Data Intelligence Cognitive Informatics; Springer: Singapore, 2021. [Google Scholar] [CrossRef]
  10. Abouzahir, A.; Sadik, M.; Sabir, E. Bag-of-visual-words-augmented Histogram of Oriented Gradients for efficient weed detection. Biosyst. Eng. 2021, 202, 179–194. [Google Scholar] [CrossRef]
  11. Che’Ya, N.; Dunwoody, E.; Gupta, M. Assessment of Weed Classification Using Hyperspectral Reflectance and Optimal Multispectral UAV Imagery. Agronomy 2021, 11, 1435. [Google Scholar] [CrossRef]
  12. De Rainville, F.M.; Durand, A.; Fortin, F.A.; Tanguy, K.; Maldague, X.; Panneton, B.; Simard, M.J. Bayesian classification and unsupervised learning for isolating weeds in row crops. Pattern Anal. Applic. 2014, 17, 401–414. [Google Scholar] [CrossRef]
  13. Islam, N.; Rashid, M.; Wibowo, S.; Xu, C.Y.; Morshed, A.; Wasimi, S.A.; Moore, S.; Rahman, S.M. Early Weed Detection Using Image Processing and Machine Learning Techniques in an Australian Chilli Farm. Agriculture 2021, 11, 387. [Google Scholar] [CrossRef]
  14. Hung, C.; Xu, Z.; Sukkarieh, S. Feature Learning Based Approach for Weed Classification Using High Resolution Aerial Images from a Digital Camera Mounted on a UAV. Remote Sens. 2014, 6, 12037–12054. [Google Scholar] [CrossRef] [Green Version]
  15. Pourghassemi, B.; Zhang, C.; Lee, J. On the Limits of Parallelizing Convolutional Neural Networks on GPUs. In Proceedings of the SPAA ‘20: 32nd ACM Symposium on Parallelism in Algorithms and Architectures, Virtual Event, USA, 15–17 July 2020. [Google Scholar] [CrossRef]
  16. Kulkarni, A.; Deshmukh, G. Advanced Agriculture Robotic Weed Control System. Int. J. Adv. Res. Electr. Electron. Instrum. Eng. 2013, 2, 10. Available online: https://www.ijareeie.com/upload/2013/october/43Advanced.pdf (accessed on 2 September 2021).
  17. Wang, N.; Zhang, E.; Dowell, Y.; Sun, D. Design of an optical weed sensor using plant spectral characteristic. Am. Soc. Agric. Biol. Eng. 2001, 44, 409–419. [Google Scholar] [CrossRef]
  18. Gikunda, P.; Jouandeau, N. Modern CNNs for IoT Based Farms. arXiv 2019, arXiv:1907.07772v1. [Google Scholar]
  19. Jouandeau, N.; Gikunda, P. State-Of-The-Art Convolutional Neural Networks for Smart Farms: A Review. Science and Information (SAI) Conference, London, UK, July 2017. Available online: https://hal.archives-ouvertes.fr/hal-02317323 (accessed on 16 August 2021).
  20. Saleem, M.; Potgieter, J.; Arif, K. Automation in Agriculture by Machine and Deep Learning Techniques: A Review of Recent Developments. Precis. Agric. 2021, 22, 2053–2091. [Google Scholar] [CrossRef]
  21. Kamilaris, A.; Prenafeta-Boldú, F. A review of the use of convolutional neural networks in agriculture. J. Agric. Sci. 2018, 156, 312–322. [Google Scholar] [CrossRef] [Green Version]
  22. Jiang, B.; He, J.; Yang, S.; Fu, H.; Li, H. Fusion of machine vision technology and AlexNet-CNNs deep learning network for the detection of postharvest apple pesticide residues. Artif. Intell. Agric. 2019, 1, 1–8. [Google Scholar] [CrossRef]
  23. Liu, H.; Lee, S.; Saunders, C. Development of a machine vision system for weed detection during both off-season and in-season in broadacre no-tillage cropping lands. Amer. J. Agric. Biol. Sci. 2014, 9, 174–193. [Google Scholar] [CrossRef] [Green Version]
  24. Watchareeruetai, U.; Takeuchi, Y.; Matsumoto, T.; Kudo, H.; Ohnishi, N. Computer Vision Based Methods for Detecting Weeds in Lawns. Mach. Vis. Applic. 2006, 17, 287–296. [Google Scholar] [CrossRef]
  25. Padmapriya, S.; Bhuvaneshwari, P. Real time Identification of Crops, Weeds, Diseases, Pest Damage and Nutrient Deficiency. Internat. J. Adv. Res. Educ. Technol. 2018, 5, 1. Available online: http://ijaret.com/wp-content/themes/felicity/issues/vol5issue1/bhuvneshwari.pdf (accessed on 17 August 2021).
  26. Olsen, A.; Konovalov, D.A.; Philippa, B.; Ridd, P.; Wood, J.C.; Johns, J.; Banks, W.; Girgenti, B.; Kenny, O.; Whinney, J.; et al. DeepWeeds: A Multiclass Weed Species Image Dataset for Deep Learning. Sci. Rep. 2019, 9, 118–124. [Google Scholar] [CrossRef]
  27. Downey, D.; Slaughter, K.; David, C. Weeds accurately mapped using DGPS and ground-based vision identification. Calif. Agric. 2004, 58, 218–221. Available online: https://escholarship.org/uc/item/9136d0d2 (accessed on 16 August 2021). [CrossRef] [Green Version]
  28. Cun, Y.; Boser, B.; Dencker, J.S.; Henderson, D.; Howard, R.E.; Hubbard, W.; Jackel, L.D. Backpropagation Applied to Handwritten Zip Code Recognition. Neural Comput. 1989, 1, 541–551. Available online: http://yann.lecun.com/exdb/publis/pdf/lecun-89e.pdf (accessed on 16 August 2021).
  29. Wen, X.; Jing, H.; Yanfeng, S.; Hui, Z. Advances in Convolutional Neural Networks. In Advances in Deep Learning; Aceves-Fernndez, M.A., Ed.; IntechOpen: London, UK, 2020. [Google Scholar] [CrossRef]
  30. Gothai, P.; Natesan, S. Weed Identification using Convolutional Neural Network and Convolutional Neural Network Architectures, Conference. In Proceedings of the 2020 Fourth International Conference on Computing Methodologies and Communication (ICCMC), Surya Engineering College, Erode, India, 1–13 March 2020. [Google Scholar] [CrossRef]
  31. Su, W.-H. Crop plant signalling for real-time plant identification in smart farm: A systematic review and new concept in artificial intelligence for automated weed control. Artif. Intelli. Agric. 2020, 4, 262–271. [Google Scholar] [CrossRef]
  32. Li, Y.; Nie, J.; Chao, X. Do we really need deep CNN for plant diseases identification? Comput. Electron. Agric. 2020, 178, 105803. [Google Scholar] [CrossRef]
  33. Kattenborn, T.; Leitloff, J.; Schiefer, F.; Hinz, S. Review on Convolutional Neural Networks (CNN) in vegetation remote sensing. SPRS J. Photogram. Remote Sens. 2021, 173, 24–49. [Google Scholar] [CrossRef]
  34. O’Mahony, N.; Campbell, S.; Carvalho, A.; Harapanahalli, S.; Hernandez, G.V.; Krpalkova, L.; Riordan, D.; Walsh, J. Deep Learning vs. Traditional Computer Vision. In Advances in Computer Vision. CVC 2019. Advances in Intelligent Systems and Computing; Arai, K., Kapoor, S., Eds.; Springer: Cham, Switzerland, 2020; Volume 943. [Google Scholar] [CrossRef] [Green Version]
  35. Wang, A.; Zhang, W.; Wei, X. A review on weed detection using ground based machine vision and image processing techniques. Comput. Electron. Agric. 2019, 158, 226–240. [Google Scholar] [CrossRef]
  36. Dhillon, A.; Verma, G. Convolutional neural network: A review of models, methodologies and applications to object detection. Prog. Artif. Intell. 2020, 9, 85–112. [Google Scholar] [CrossRef]
  37. Ren, Y.; Cheng, X. Review of convolutional neural network optimization and training in image processing. In Tenth International Symposium on Precision Engineering Measurements and Instrumentation 2018; SPIE.digital library: Kunming, China, 2019. [Google Scholar] [CrossRef]
  38. Gorach, T. Deep convolution neural networks—A review. Intern. Res. J. Eng. Technol. 2018, 5, 439–452. Available online: https://d1wqtxts1xzle7.cloudfront.net/57208511/IRJET-V5I777.pdf?1534583803=&response-content-disposition=inline%3B+filename%3DIRJET_DEEP_CONVOLUTIONAL_NEURAL_NETWORKS.pdf&Expires=1629185322&Signature=LdFCNz-FOJU2UC1DaF38SfcA0IRYByc51~4cv6e8DC3JD4Z6L936t9GoLuQE5Bg-9v9HSnvf6ObIT64xIes0IGtb-QPR-qPPm9LKLdc1xKRdQ8jq8fKvIwKhQtdTYumrXL5aijjHSTO1Rcu8Gs2pta~zkiC1~zfONjYrOWDhSsj5O9CKGKLW2z7j1tER5QyqkYWrgycIWyytROREB5moD~7i3WNYJjnbxr7QrOSUVTVJ6YQ2hR35cmcKQLf45RhTgm5SaP3VzqK27kz9m3HCNSBqhNL5hbZgJBi5vODpyl2qinuZL1vhwJey9ouDlz6ajTQIe53cvNTbLxXGiFOqDA__&Key-Pair-Id=APKAJLOHF5GGSLRBV4ZA (accessed on 17 August 2021).
  39. Naranjo-Torres, J.; Mora, M.; Hernández-García, R.; Barrientos, R. Review of Convolutional Neural Network Applied to Fruit Image Processing. Appl. Sci. 2020, 10, 3443. [Google Scholar] [CrossRef]
  40. Jiao, J.; Zhao, M.; Lin, J.; Liang, K. A comprehensive review on convolutional neural network in machine fault diagnosis. Neurocomputing 2020, 417, 36–63. [Google Scholar] [CrossRef]
  41. He, T.; Kong, R.; Holmes, A.; Nguyen, M.; Sabuncu, M.R.; Eickhoff, S.B.; Bzdok, D.; Feng, J.; Yeo, B.T.T. Deep neural networks and kernel regression achieve comparable accuracies for functional connectivity prediction of behaviour and demographics. NeuroImage 2020, 206, 116276. [Google Scholar] [CrossRef] [PubMed]
  42. Ma, X.; Kittikunakorn, N.; Sorman, B.; Xi, H.; Chen, A.; Marsh, M.; Mongeau, A.; Piché, N.; Williams, R.O.; Skomski, D. Application of Deep Learning Convolutional Neural Networks for Internal Tablet Defect Detection: High Accuracy, Throughput, and Adaptability. J. Pharma. Sci. 2020, 109, 1547–1557. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  43. Aydoğan, M.; Karci, A. Improving the accuracy using pre-trained word embeddings on deep neural networks for Turkish text classification. Phys. A Stat. Mech. Its Appl. 2020, 541, 123288. [Google Scholar] [CrossRef]
  44. Agarwal, M.; Gupta, S.; Biswas, K. Development of Efficient CNN model for Tomato crop disease identification. Sustain. Comput. Inform. Syst. 2020, 28, 100407. [Google Scholar] [CrossRef]
  45. Boulent, J.; Foucher, S.; Théau, J.; Charles, P. Convolutional Neural Networks for the Automatic Identification of Plant Diseases. Front. Plant Sci. 2019, 10, 941. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  46. Jiang, Y.; Li, C. Convolutional Neural Networks for Image-Based High Throughput Plant Phenotyping: A Review. Plant Phenomics 2020, 2020, 4152816. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  47. Noon, S.; Amjad, M.; Qureshi, M.; Mannan, A. Use of deep learning techniques for identification of plant leaf stresses: A review. Sustain. Comput. Inf. Systems 2020, 28, 100443. [Google Scholar] [CrossRef]
  48. Mishra, S.; Sachan, R.; Rajpal, D. Deep Convolutional Neural Network based Detection System for Real-time Corn Plant Disease Recognition. Procedia Comput. Sci. 2020, 167, 2003–2010. [Google Scholar] [CrossRef]
  49. Badhan, S.K.; Dsilva, D.M.; Sonkusare, R.; Weakey, S. Real-Time Weed Detection using Machine Learning and Stereo-Vision. In Proceedings of the 2021 6th International Conference for Convergence in Technology (I2CT), Pune, India, 2–4 April 2021; pp. 1–5. [Google Scholar] [CrossRef]
  50. Gai, J. Plants Detection, Localization and Discrimination using 3D Machine Vision for Robotic Intra-row Weed Control. Graduate Theses and Dissertations, Iowa State University, Ames, IA, USA, 2016. [Google Scholar] [CrossRef]
  51. Gottardi, M. A CMOS/CCD image sensor for 2D real time motion estimation. Sens. Actuators A Phys. 1995, 46, 251–256. [Google Scholar] [CrossRef]
  52. Helmers, H.; Schellenberg, M. CMOS vs. CCD sensors in speckle interferometry. Opt. Laser Technol. 2003, 35, 587–593. [Google Scholar] [CrossRef]
  53. Silfhout, R.; Kachatkou, A. Fibre-optic coupling to high-resolution CCD and CMOS image sensors. Nucl. Instr. Methods Phys. Res. Sect. A Accel. Spectrum. Detect. Ass. Equip. 2008, 597, 266–269. [Google Scholar] [CrossRef]
  54. Krishna, B.; Rekulapellim, N.; Kauda, B.P. Comparison of different deep learning frameworks. Mater. Today Proc. 2020, in press. [Google Scholar] [CrossRef]
  55. Trung, W.; Maleki, F.; Romero, F.; Forghani, R.; Kadoury, S. Overview of Machine Learning: Part 2: Deep Learning for Medical Image Analysis. Neuroimaging Clin. N. Am. 2020, 30, 417–431. [Google Scholar] [CrossRef]
  56. Wang, P.; Fan, E.; Wang, P. Comparative analysis of image classification algorithms based on traditional machine learning and deep learning. Pattern Recognit. Lett. 2021, 141, 61–67. [Google Scholar] [CrossRef]
  57. Bui, D.; Tsangaratos, P.; Nguyen, V.; Liem, N.; Trinh, P. Comparing the prediction performance of a Deep Learning Neural Network model with conventional machine learning models in landslide susceptibility assessment. CATENA 2020, 188, 104426. [Google Scholar] [CrossRef]
  58. Kamilaris, A.; Brik, C.; Karatsiolis, S. Training Deep Learning Models via Synthetic Data: Application in Unmanned Aerial Vehicles. In Proceedings of the CAIP 2019, the Workshop on Deep-Learning Based Computer Vision for UAV, Salerno, Italy, 6 September 2019. [Google Scholar]
  59. Barth, R.; IJsselmuiden, J.; Hemming, J.; Van Henten, E.J. Data synthesis methods for semantic segmentation in agriculture: A Capsicum annuum dataset. Comput. Electron. Agri. 2018, 144, 284–296. [Google Scholar] [CrossRef]
  60. Zichao, J. A Novel Crop Weed Recognition Method Based on Transfer Learning from VGG16 Implemented by Keras. IOP Conf. Ser. Mater. Sci. Eng. 2019, 677, 032073. [Google Scholar] [CrossRef]
  61. Chen, D.; Lu, Y.; Yong, S. Performance Evaluation of Deep Transfer Learning on Multiclass Identification of Common Weed Species in Cotton Production Systems. arXiv 2021, arXiv:2110.04960v1. [Google Scholar]
  62. Espejo-Garcia, B.; Mylonas, N.; Athanasakos, L.; Spyros Fountas, S.; Vasilakoglou, I. Towards weeds identification assistance through transfer learning. Comput. Electron. Agric. 2020, 171, 105306. [Google Scholar] [CrossRef]
  63. Al-Qurran, R.; Al-Ayyoub, M.; Shatnawi, A. Plant Classification in the Wild: A Transfer Learning Approach. In Proceedings of the 2018 International Arab Conference on Information Technology (ACIT), Werdanye, Lebanon, 28–30 November 2018; pp. 1–5. [Google Scholar] [CrossRef]
  64. Pajares, G.; Garcia-Santillam, I.; Campos, Y.; Montalo, M. Machine-vision systems selection for agricultural vehicles: A guide. Imaging 2016, 2, 34. [Google Scholar] [CrossRef] [Green Version]
  65. Shorten, C.; Khoshgoftaar, T. A survey on Image Data Augmentation for Deep Learning. J. Big Data 2019, 6, 60. [Google Scholar] [CrossRef]
  66. Zheng, Y.; Kong, J.; Jin, X.; Wang, X. CropDeep: The Crop Vision Dataset for Deep-Learning-Based Classification and Detection in Precision Agriculture. Sensors 2019, 19, 1058. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  67. Sudars, K.; Jasko, J.; Namatevsa, I.; Ozola, L.; Badaukis, N. Dataset of annotated food crops and weed images for robotic computer vision control. Data Brief 2020, 31, 105833. [Google Scholar] [CrossRef]
  68. Cap, Q.H.; Tani, H.; Uga, H.; Kagiwada, S.; Lyatomi, H. LASSR: Effective Super-Resolution Method for Plant Disease Diagnosis. arXiv 2020, arXiv:2010.06499. [Google Scholar] [CrossRef]
  69. Zhu, J.; Park, T.; Isola, P.; Efros, A. Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial Networks. arXiv 2020, arXiv:1703.10593. [Google Scholar]
  70. Huang, Z.; Ke, W.; Huang, D. Improving Object Detection with Inverted Attention. In Proceedings of the 2020 IEEE Winter Conference on Applications of Computer Vision (WACV), Snowmass, CO, USA, 1–5 March 2020. [Google Scholar]
  71. He, C.; Lai, S.; Lam, K. Object Detection with Relation Graph Inference. In Proceedings of the ICASSP 2019–2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Brighton, UK, 12–17 May 2019. [Google Scholar]
  72. Champ, J.; Mora-Fallas, A.; Goëau, H.; Mata-Montero, E.; Bonnet, P.; Joly, A. Instance segmentation for the fine detection of crop and weed plants by precision agricultural robots. Appl. Plant Sci. 2020, 8, e11373. [Google Scholar] [CrossRef]
  73. Lameski, P.; Zdravevski, E.; Trajkovik, V.; Kulakov, A. Weed Detection Dataset with RGB Images Taken Under Variable Light Conditions. In ICT Innovations 2017. Communications in Computer and Information Science; Trajanov, D., Bakeva, V., Eds.; Springer: Cham, Switzerland, 2017; Volume 778. [Google Scholar] [CrossRef]
  74. Giselsson, T.M.; Jørgensen, R.N.; Jensen, P.K.; Dyrmann, M.; Midtiby, H.S. A Public Image Database for Benchmark of Plant Seedling Classification Algorithms. arXiv 2017, arXiv:1711.05458. [Google Scholar]
  75. Cicco, M.; Potena, C.; Grisetti, G.; Pretto, A. Automatic Model Based Dataset Generation for Fast and Accurate Crop and Weeds Detection. arXiv 2016, arXiv:1612.03019. [Google Scholar]
  76. Lu, Y.; Young, S. A survey of public datasets for computer vision tasks in precision agriculture. Comput. Electron. Agric. 2020, 178, 105760. [Google Scholar] [CrossRef]
  77. Faisal, F.; Hossain, B.; Emam, H. Performance Analysis of Support Vector Machine and Bayesian Classifier for Crop and Weed Classification from Digital Images. World Appl. Sci. 2011, 12, 432–440. Available online: https://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.390.9311&rep=rep1&type=pdf (accessed on 18 August 2021).
  78. Dyrmann, M. Automatic Detection and Classification of Weed Seedlings under Natural Light Conditions. Ph.D. Thesis, Det Tekniske Fakultet, University of Southern Denmark, 2017. Available online: https://pure.au.dk/portal/files/114969776/MadsDyrmannAfhandlingMedOmslag.pdf (accessed on 18 August 2021).
  79. Chang, C.; Lin, K. Smart Agricultural Machine with a Computer Vision Based Weeding and Variable-Rate Irrigation Scheme. Robotics 2018, 7, 38. [Google Scholar] [CrossRef] [Green Version]
  80. Slaughter, D.C.; Giles, D.K.; Downey, D. Autonomous robotic weed control systems: A review. Comput. Electron. Agric. 2008, 61, 63–78. [Google Scholar] [CrossRef]
  81. Abhisesh, S. Machine Vision System for Robotic Apple Harvesting in Fruiting Wall Orchards. Ph.D. Thesis, Department of Biological Systems Engineering, Washington State University, Pullman, WA, USA, December 2016. Available online: https://research.libraries.wsu.edu/xmlui/handle/2376/12033 (accessed on 18 August 2021).
  82. Qiu, Q.; Fan, Z.; Meng, Z.; Zhang, Q.; Cong, Y.; Li, B.; Wang, N.; Zhao, C. Extended Ackerman Steering Principle for the coordinated movement control of a four wheel drive agricultural mobile robot. Comput. Electron. Agric. 2018, 152, 40–50. [Google Scholar] [CrossRef]
  83. Ren, G.; Lin, T.; Ying, Y.; Chowdhary, G.; Ting, K.C. Agricultural robotics research applicable to poultry production: A review. Comput. Electron. Agric. 2020, 169, 105216. [Google Scholar] [CrossRef]
  84. Asha, R.; Aman, M.; Pankaj, M.; Singh, A. Robotics-automation and sensor based approaches in weed detection and control: A review. Intern. J. Chem. Stud. 2020, 8, 542–550. [Google Scholar] [CrossRef]
  85. Shinde, A.; Shukla, M. Crop detection by machine vision for weed management. Intern. J. Adv. Eng. Technol. 2014, 7, 818–826. Available online: https://www.academia.edu/38850273/CROP_DETECTION_BY_MACHINE_VISION_FOR_WEED_MANAGEMENT (accessed on 18 August 2021).
  86. Raja, R.; Nguyen, T.; Vuong, V.L.; Slaughter, D.C.; Fennimore, S.A. RTD-SEPs: Real-time detection of stem emerging points and classification of crop-weed for robotic weed control in producing tomato. Biosyst. Eng. 2020, 195, 152–171. [Google Scholar] [CrossRef]
  87. Sirikunkitti, S.; Chongcharoen, K.; Yoongsuntia, P.; Ratanavis, A. Progress in a Development of a Laser-Based Weed Control System. In Proceedings of the 2019 Research, Invention, and Innovation Congress (RI2C), Bangkok, Thailand, 11–13 December 2019; pp. 1–4. [Google Scholar] [CrossRef]
  88. Mathiassen, S.; Bak, T.; Christensen, S.; Kudsk, P. The effect of laser treatment as a weed control method. Biosyst. Eng. 2006, 95, 497–505. [Google Scholar] [CrossRef] [Green Version]
  89. Xiong, Y.; Ge, Y.; Liang, Y.; Blackmore, S. Development of a prototype robot and fast path-planning algorithm for static laser weeding. Comput. Electron. Agric. 2017, 142, 494–503. [Google Scholar] [CrossRef]
  90. Marx, C.; Barcikowski, S.; Hustedt, M.; Haferkamp, H.; Rath, T. Design and application of a weed damage model for laser-based weed control. Biosyst. Eng. 2012, 113, 148–157. [Google Scholar] [CrossRef]
  91. Librán-Embid, F.; Klaus, F.; Tscharntke, T.; Grass, I. Unmanned aerial vehicles for biodiversity-friendly agricultural landscapes—A systematic review. Sci. Total Environ. 2020, 732, 139204. [Google Scholar] [CrossRef]
  92. Boursianis, A.; Papadopoulou, M.; Diamantoulakis, P.; Liopa-Tsakalidi, A.; Barouchas, P.; Salahas, G.; Karagiannidis, G.; Wan, S.; Goudos, S.K. Internet of Things (IoT) and Agricultural Unmanned Aerial Vehicles (UAVs) in smart farming: A comprehensive review. Internet Things 2020, 7, 100187. [Google Scholar] [CrossRef]
  93. Huang, H.; Deng, J.; Lan, Y.; Yang, A.; Deng, X.; Zhang, L. A fully convolutional network for weed mapping of unmanned aerial vehicle (UAV) imagery. PLoS ONE 2018, 13, e019630213. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  94. Hunter, J.; Gannon, T.W.; Richardson, R.J.; Yelverton, F.H.; Leon, R.G. Integration of remote-weed mapping and an autonomous spraying unmanned aerial vehicle for site-specific weed management. Pest. Manag. Sci. 2020, 76, 1386–1392. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  95. Cerro, J.; Ulloa, C.; Barrientos, A.; Rivas, J. Unmanned Aerial Vehicles in Agriculture: A Survey. Agronomy 2021, 11, 203. [Google Scholar] [CrossRef]
  96. Rasmussen, J.; Nielsen, J. A novel approach to estimating the competitive ability of Cirsium arvense in cereals using unmanned aerial vehicle imagery. Weed Res. 2020, 60, 150–160. [Google Scholar] [CrossRef]
  97. Rijk, L.; Beedie, S. Precision Weed Spraying using a Multirotor UAV. In Proceedings of the10th International Micro-Air Vehicles Conference, Melbourne, Australia, 30 November 2018. [Google Scholar]
  98. Liang, Y.; Yang, Y.; Chao, C. Low-Cost Weed Identification System Using Drones. In Proceedings of the Seventh International Symposium on Computing and Networking Workshops (CANDARW), Nagasaki, Japan, 26–29 November 2019; pp. 260–263. [Google Scholar]
  99. Zhang, Q.; Liu, Y.; Gong, C.; Chen, Y.; Yu, H. Applications of deep learning for dense scenes, analysis in agriculture: A review. Sensors 2020, 20, 1520. [Google Scholar] [CrossRef] [Green Version]
  100. Asad, M.; Bais, A. Weed Detection in Canola Fields Using Maximum Likelihood Classification and Deep Convolutional Neural Network. Inform. Process. Agric. 2020, 7, 535–545. [Google Scholar] [CrossRef]
  101. Shawky, O.; Hagag, A.; Dahshan, E.; Ismail, M. Remote sensing image scene classification using CNN-MLP with data augmentation. Optik 2020, 221, 165356. [Google Scholar] [CrossRef]
  102. Zhuoyao, Z.; Lei, S.; Qiang, H. Improved localization accuracy by LocNet for Faster R-CNN based text detection in natural scene images. Pattern Recognit. 2019, 96, 106986. [Google Scholar] [CrossRef]
  103. Liakos, K.; Busato, P.; Moshou, D.; Pearson, S.; Bochtis, D. Machine learning in agriculture: A review. Sensors 2018, 18, 2674. [Google Scholar] [CrossRef] [Green Version]
  104. Hasan, A.S.M.M.; Sohel, F.; Diepeveen, D.; Laga, H.; Jones, M.G.K. A survey of deep learning techniques for weed detection from images. Comput. Electron. Agric. 2021, 184, 106067. Available online: https://www.semanticscholar.org/paper/A-Survey-of-Deep-Learning-Techniques-for-Weed-from-Hasan-Sohel/80bfc6bcdf5d231122b7ffee17591c8fc14ce528 (accessed on 5 November 2021). [CrossRef]
  105. Rehman, T.U.; Mahmud, M.S.; Chang, Y.K.; Jin, J.; Shin, J. Current and future applications of statistical machine learning algorithms for agricultural machine vision systems. Comput. Electron. Agricult. 2019, 156, 585–605. [Google Scholar] [CrossRef]
  106. Osorio, K.; Puerto, A.; Pedraza, C.; Jamaica, D.; Rodríguez, L. A Deep Learning Approach for Weed Detection in Lettuce Crops Using Multispectral Images. AgriEngineering 2020, 2, 32. [Google Scholar] [CrossRef]
  107. Ferreira, A.S.; Freitas, D.M.; Gonçalves da Silva, G.; Pistori, H.; Folhes, M.T. Weed detection in soybean crops using ConvNets. Comput. Electron. Agric. 2017, 143, 314–324. [Google Scholar] [CrossRef]
  108. Santos, L.; Santos, F.N.; Oliveira, P.M.; Shinde, P. Deep Learning Applications in Agriculture: A Short Review. In Robot 2019: Fourth Iberian Robotics Conference. Advances in Intelligent Systems and Computing; Silva, M., Luís Lima, J., Reis, L., Sanfeliu, A., Tardioli, D., Eds.; Springer: Cham, Switzerland, 2019; Volume 1092. [Google Scholar] [CrossRef]
  109. Dokic, K.; Blaskovic, L.; Mandusic, D. From machine learning to deep learning in agriculture—The quantitative review of trends. IOP Conf. Ser. Earth Environ. Sci. 2020, 614, 012138. Available online: https://iopscience.iop.org/article/10.1088/1755-1315/614/1/012138 (accessed on 18 August 2021). [CrossRef]
  110. Tian, H.; Wang, T.; Yadong, Y.; Qiao, X.; Li, Y. Computer vision technology in agricultural automation —A review. Inform. Process. Agric. 2020, 7, 1–19. [Google Scholar] [CrossRef]
  111. Khaki, S.; Pham, H.; Han, Y.; Kuhl, A. Convolutional Neural Networks for Image-Based Corn Kernel Detection and Counting. arXiv 2020, arXiv:2003.12025v2. Available online: https://arxiv.org/pdf/2003.12025.pdf (accessed on 18 August 2021). [CrossRef] [PubMed]
  112. Yu, J.; Sharpe, S.; Schumann, A.; Boyd, N. Deep learning for image-based weed detection in turfgrass. Eur. J. Agron. 2019, 104, 78–84. [Google Scholar] [CrossRef]
  113. Yu, J.; Schumann, A.; Cao, Z.; Sharpe, S. Weed detection in perennial ryegrass with deep learning convolutional neural network. Front. Plant Sci. 2019, 10, 1422. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  114. Gao, J.; French, A.; Pound, M. Deep convolutional neural networks for image based Convolvulus sepium detection in sugar beet fields. Plant Methods 2020, 16, 29. [Google Scholar] [CrossRef] [Green Version]
  115. Scott, S. Comparison of Object Detection and Patch-Based Classification Deep Learning Models on Mid- to Late-Season Weed Detection in UAV Imagery. Remote Sens. 2020, 12, 2136. [Google Scholar] [CrossRef]
  116. Narvekar, C.; Rao, M. Flower classification using CNN and transfer learning in CNN- Agriculture Perspective. In Proceedings of the 3rd International Conference on Intelligent Sustainable Systems (ICISS), Thoothukudi, India, 3–5 December 2020; pp. 660–664. [Google Scholar] [CrossRef]
  117. Sharma, P.; Berwal, Y.; Ghai, W. Performance analysis of deep learning CNN models for disease detection in plants using image segmentation. Information. Process. Agric. 2020, 7, 566–574. [Google Scholar] [CrossRef]
  118. Du, X.; Lin, T.; Jin, P. SpineNet: Learning Scale-Permuted Backbone for Recognition and Localization. In Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 13–19 June 2020. [Google Scholar] [CrossRef]
  119. Koh, J.; Spangenberg, G.; Kant, S. Automated Machine Learning for High-Throughput Image-Based Plant Phenotyping. Remote Sens. 2021, 13, 858. [Google Scholar] [CrossRef]
  120. Shah, S.; Wu, W.; Lu, Q. AmoebaNet: An SDN-enabled network service for big data science. J. Netw. Comput. Appl. 2018, 119, 70–82. [Google Scholar] [CrossRef] [Green Version]
  121. Yao, L.; Xu, H.; Zhang, W. SM-NAS: Structural-to-Modular Neural Architecture Search for Object Detection. Proc. AAAI Conf. Artif. Intell. 2020, 34, 12661–12668. [Google Scholar] [CrossRef]
  122. Jia, X.; Yang, X.; Yu, X.; Gao, H. A Modified CenterNet for Crack Detection of Sanitary Ceramics. In Proceedings of the IECON 2020—46th Annual Conference of the IEEE Industrial Electronics Society, 18–21 October 2020. [Google Scholar] [CrossRef]
  123. Zhao, K.; Yan, W.Q. Fruit Detection from Digital Images Using CenterNet. Geom. Vis. 2021, 1386, 313–326. [Google Scholar] [CrossRef]
  124. Xu, M.; Deng, Z.; Qi, L.; Jiang, Y.; Li, H.; Wang, Y.; Xing, X. Fully convolutional network for rice seedling and weed image segmentation at the seedling stage in paddy fields. PLoS ONE 2019, 14, e0215676. [Google Scholar] [CrossRef]
  125. Kong, J.; Wang, H.; Wang, X.; Jin, X.; Fang, X.; Lin, S. Multi-stream hybrid architecture based on cross-level fusion strategy for fine-grained crop species recognition in precision agriculture. Comput. Electron. Agric. 2021, 185, 106134. [Google Scholar] [CrossRef]
  126. Wosner, O. Detection in Agricultural Contexts: Are We Close to Human Level? Computer Vision—ECCV 2020 Workshops. Lect. Notes Comput. Sci. 2020, 12540. [Google Scholar] [CrossRef]
  127. Wu, D.; Lv, S.; Jiang, M.; Song, H. Using channel pruning-based YOLO v4 deep learning algorithm for the real-time and accurate detection of apple flowers in natural environments. Comput. Electron. Agric. 2020, 178, 105742. [Google Scholar] [CrossRef]
  128. Kuznetsova, A.; Maleva, T.; Soloviev, V. Detecting Apples in Orchards Using YOLOv3 and YOLOv5 in General and Close-Up Images; Advances in Neural Networks—ISNN; Springer: Cham, Switzerland, 2020. [Google Scholar] [CrossRef]
  129. Tian, Y.; Yang, G.; Wang, Z. Apple detection during different growth stages in orchards using the improved YOLO-V3 model. Comput. Electron. Agric. 2019, 157, 417–426. [Google Scholar] [CrossRef]
  130. Wu, D.; Wu, Q.; Yin, X.; BoJiang, B.; Wang, H.; He, D.; Song, H. Lameness detection of dairy cows based on the YOLOv3 deep learning algorithm and a relative step size characteristic vector. Biosyst. Eng. 2020, 189, 150–163. [Google Scholar] [CrossRef]
  131. Waheed, A.; Goyal, M.; Gupta, D.; Khanna, A.; Hassanien, A.E.; Pandey, H.M. An optimized dense convolutional neural network model for disease recognition and classification in corn leaf. Comput. Electron. Agric. 2020, 175, 105456. [Google Scholar] [CrossRef]
  132. Atila, U.; Uçar, M.; Akyol, K.; Uçar, E. Plant leaf disease classification using EfficientNet deep learning model. Ecol. Inform. 2021, 61, 101182. [Google Scholar] [CrossRef]
  133. Pang, Y.; Shi, Y.; Gao, S.; Jiang, F.; Veeranampalayam-Sivakumar, A.-N.; Thomson, L.; Luck, J.; Liu, C. Improved crop row detection with deep neural network for early-season maize stand count in UAV imagery. Comput. Electron. Agric. 2020, 178, 105766. [Google Scholar] [CrossRef]
  134. Liang, F.; Tian, Z.; Dong, M.; Cheng, S.; Sun, L.; Li, H.; Chen, Y.; Zhang, G. Efficient neural network using pointwise convolution kernels with linear phase constraint. Neurocomputing 2021, 423, 572–579. [Google Scholar] [CrossRef]
  135. Taravat, A.; Wagner, M.P.; Bonifacio, R.; Petit, D. Advanced Fully Convolutional Networks for Agricultural Field Boundary Detection. Remote Sens. 2021, 13, 722. [Google Scholar] [CrossRef]
  136. Isufi, E.; Pocchiari, M.; Hanjalic, A. Accuracy-diversity trade-off in recommender systems via graph convolutions. Inf. Process. Managem. 2021, 58, 102459. [Google Scholar] [CrossRef]
  137. Wei, Y.; Gu, K.; Tan, L. A positioning method for maize seed laser-cutting slice using linear discriminant analysis based on isometric distance measurement. Inf. Process. Agric. 2021. [Google Scholar] [CrossRef]
  138. Koo, J.; Klabjan, D.; Utke, J. Combined Convolutional and Recurrent Neural Networks for Hierarchical Classification of Images. arXiv 2019, arXiv:1809.09574v3. [Google Scholar]
  139. Agarap, A.F.M. An Architecture Combining Convolutional Neural Network (CNN) and Support Vector Machine (SVM) for Image Classification. arXiv 2017, arXiv:1712.03541. [Google Scholar]
  140. Khaki, S.; Wang, L.; Archontoulis, S. A CNN-RNN Framework for Crop Yield Prediction. Front. Plant Sci. 2020, 10, 1750. [Google Scholar] [CrossRef]
  141. Dyrmann, M.; Jørgensen, R.H.; Midtiby, H.S. RoboWeedSupport—Detection of weed locations in leaf occluded cereal crops using a fully convolutional neural network. Adv. Anim. Biosci. 2017, 8, 842–847. [Google Scholar] [CrossRef]
  142. Barth, R.; Hemming, J.; Henten, V. Optimising realism of synthetic images using cycle generative adversarial networks for improved part segmentation. Comput. Electron. Agric. 2020, 173, 105378. [Google Scholar] [CrossRef]
  143. Nguyen, N.; Tien, D.; Thanh, D. An Evaluation of Deep Learning Methods for Small Object Detection. J. Electr. Comput. Eng. 2020, 3189691. [Google Scholar] [CrossRef]
  144. Chen, C.; Liu, M.; Tuzel, O.; Xiao, J. R-CNN for Small Object Detection. Comput. Vis. 2017, 10115. [Google Scholar] [CrossRef]
  145. Yu, Y.; Zhang, K.; Li, Y.; Zhang, D. Fruit detection for strawberry harvesting robot in non-structural environment based on Mask-RCNN. Comput. Electron. Agric. 2019, 163, 104846. [Google Scholar] [CrossRef]
  146. Boukhris, L.; Abderrazak, J.; Besbes, H. Tailored Deep Learning based Architecture for Smart Agriculture. In Proceedings of the 2020 International Wireless Communications and Mobile Computing (IWCMC). 15−19 June 2020, Limassol, Cyprus. [CrossRef]
  147. Basodi, S.; Chunya, C.; Zhang, H.; Pan, Y. Gradient Amplification: An efficient way to train deep neural networks. arXiv 2020, arXiv:2006.10560v1. [Google Scholar] [CrossRef]
  148. Kurniawan, A. Administering NVIDIA Jetson Nano. In IoT Projects with NVIDIA Jetson Nano; Programming Apress: Berkeley, CA, USA, 2021. [Google Scholar] [CrossRef]
  149. Kurniawan, A. NVIDIA Jetson Nano. In IoT Projects with NVIDIA Jetson Nano; Programming Apress: Berkeley, CA, USA, 2021. [Google Scholar] [CrossRef]
  150. Verucchi, M.; Brilli, G.; Sapienza, D.; Verasani, M.; Arena, M.; Gatti, F.; Capotondi, A.; Cavicchioli, R.; Bertogna, M.; Solieri, M. A Systematic Assessment of Embedded Neural Networks for Object Detection. In Proceedings of the 25th IEEE International Conference on Emerging Technologies and Factory Automation (ETFA); pp. 937–944. [CrossRef]
  151. Gašparović, M.; Zrinjski, M.; Barković, D.; Radočaj, D. An automatic method for weed mapping in oat fields based on UAV imagery. Comput. Electron. Agric. 2020, 173, 105385. [Google Scholar] [CrossRef]
  152. Yano, I.H.; Alves, J.R.; Santiago, W.E.; Mederos, B.J.T. Identification of weeds in sugarcane fields through images taken by UAV and random forest classifier. IFAC-Pap. 2016, 49, 415–420. [Google Scholar] [CrossRef]
  153. Zhou, H.; Zhang, C. A Field Weed Density Evaluation Method Based on UAV Imaging and Modified U-Net. Remote Sens. 2021, 13, 310. [Google Scholar] [CrossRef]
  154. Bakhshipour, A.; Jafari, A. Evaluation of support vector machine and artificial neural networks in weed detection using shape features. Comput. Electron. Agric. 2018, 145, 153–160. [Google Scholar] [CrossRef]
  155. Sudars, K. Data For: Dataset of Annotated Food Crops and Weed Images for Robotic Computer Vision Control. Mendeley Data 2021, VI. [Google Scholar] [CrossRef] [PubMed]
  156. Xu, Y.; He, R.; Gao, Z.; Li, C.; Zhai, Y.; Jiao, Y. Weed density detection method based on absolute feature corner points in field. Agronomy 2020, 10, 113. [Google Scholar] [CrossRef] [Green Version]
  157. Shorewala, S.; Ashfaque, A.R.S.; Verma, U. Weed Density and Distribution Estimation for Precision Agriculture Using Semi-Supervised Learning. arXiv 2021, arXiv:2011.02193. Available online: https://arxiv.org/abs/2011.02193 (accessed on 5 September 2021).
Figure 1. Examples of weed images used for transfer learning.
Figure 2. Examples of images from the CropDeep dataset.
Figure 3. Examples of artificial images generated by LASSR [68].
Figure 4. Image processing of pigweed (A. retroflexus): (a) original image, (b) grey-scale image, (c) after sharpening, (d) with noise filter [77].
Figure 5. Example of auto-calibration process: (a) original image, (b) corrected image [64].
Figure 6. Example of a robot for removing weeds [85].
Figure 7. Detection of objects within dense scenes: (a) for apples [99], (b) canola and weeds (initial image) [100], (c) weeds after semantic segmentation [100].
Figure 8. Counting of corn kernels processed in images with CNN [111].
Table 1. Popular open weed image datasets.
No. | Name | Size, Pixels | Plant/Object | Amount | References
1 | CropDeep | 1000 × 1000 | 31 different types of crops | 31,147 | [67]
2 | Food crops and weed images | 720 × 1280 | 6 food crops and 8 weed species | 1118 | [68]
3 | DeepWeeds | 256 × 256 | 8 different weed species and various off-target (or negative) plants native to Australia | 17,509 | [26]
4 | Crop and weed | 1200 × 2048 | Maize, weeds | 2489 | [72]
5 | Dataset with RGB images taken under variable light conditions | 3264 × 2448 | Carrot and weed | 39 | [73]
6 | Crop and weed | 1200 × 2048 | 6 food crops and 8 weed species | 1176 | [63]
7 | V2 Plant Seedlings Dataset | 10 pixels per mm | 960 unique plants | 5539 | [74]
8 | Early crop weed | 6000 × 4000 | Tomato, cotton, velvetleaf and black nightshade | 508 | [62]
9 | Weed detection dataset with RGB images taken under variable light conditions | 3200 × 2400 | Carrot seedlings with weeds | 39 | [73]
10 | Datasets for sugar beet crop/weed detection | 1200 × 2048 | Capsella bursa-pastoris | 8518 | [75]
