Article

Weed Recognition at Soybean Seedling Stage Based on YOLOV8nGP + NExG Algorithm

1 Nanjing Institute of Agricultural Mechanization, Ministry of Agriculture and Rural Affairs, Nanjing 210014, China
2 Faculty of Engineering, Hong Kong Polytechnic University, Hong Kong 999077, China
* Author to whom correspondence should be addressed.
These authors contributed equally to this work.
Agronomy 2024, 14(4), 657; https://doi.org/10.3390/agronomy14040657
Submission received: 20 February 2024 / Revised: 14 March 2024 / Accepted: 18 March 2024 / Published: 24 March 2024
(This article belongs to the Section Weed Science and Weed Management)

Abstract

The high cost of manual weed control and the overuse of herbicides restrict the yield and quality of soybean. Intelligent mechanical weeding and precise application of pesticides can be used as effective alternatives for weed control in the field, and these require accurate distinction between crops and weeds. In this paper, images of soybean seedlings and weeds from different growth areas are used as datasets. For soybean recognition, this paper designs a YOLOv8nGP algorithm with a backbone network optimisation based on GhostNet and an unconstrained pruning method with a 60% pruning rate. Compared with the original YOLOv8n, YOLOv8nGP improves the Precision (P), Recall (R), and F1 metrics by 1.1% each, reduces the model size by 3.6 MB, and achieves an inference time of 2.2 ms, which meets the real-time requirements of field operations. For weed recognition, this study utilises an image segmentation method based on the Normalized Excess Green Index (NExG): after the soybean seedlings are filtered out, the green parts of the image are extracted and regarded as weeds, which reduces the dependence on the diversity of weed datasets. This study combines deep learning with traditional algorithms and provides a new solution for weed recognition at the soybean seedling stage.

1. Introduction

Soybean, as a summer-sown crop, has small seeds, weak seedling roots, and slow early growth. At the same time, high temperature and high humidity promote the rapid and diversified growth of weeds in summer. Weeds compete with soybean for resources such as sunlight, water, inorganic salts, and growing space, which seriously hinders soybean growth and development and adversely affects its quality and yield [1].
Currently, there are three main conventional methods for weeding: artificial weeding, mechanized weeding, and chemical weeding. Artificial weeding in soybean fields is resource-intensive, especially due to the hot summer weather and the low operational efficiency of the process, which makes it unsuitable for large-scale weeding operations. Mechanical weeding can significantly improve the efficiency of the weeding program. In addition, it greatly reduces the physical burden on agricultural workers and avoids the use of herbicides, thereby lessening the ecological consequences of agricultural production [2]. Chemical weeding is economical, rapid, and simple. Nevertheless, when large-scale chemical weeding is carried out under high-temperature and high-humidity conditions in summer, it may cause phytotoxicity to soybean seedlings, and rainy conditions after herbicide application can further increase the environmental risks. Targeted application of pesticides is an innovative method of chemical weeding that can apply herbicide accurately to crops and weeds, reducing herbicide use in non-target areas and its negative impact on the environment [3]. Automatic mechanical weeding and the targeted application of pesticides depend on the accurate detection and identification of crops and weeds, so the identification of crops and weeds plays a vital role in ensuring the success of these operations.
Traditional crop and weed classification methods mainly include spectral feature recognition, conventional machine vision recognition, and deep learning-based machine vision recognition. In terms of spectral feature recognition, Su et al. [4] used a drone, multispectral images, and machine learning techniques to spectrally analyse blackgrass in a wheat field and, combined with a random forest classifier and Bayesian hyperparameter optimization, achieved accurate weed classification and localization. Nik et al. [5] utilized high-resolution multispectral data collected by a UAV to extract sensitive bands for distinguishing weeds from crops through significance analysis and then accomplished the distinction and localization of weeds in sorghum fields. Li et al. [6] utilized hyperspectral imaging data and a multilayer perceptron classification model to recognise four weeds in pasture with an overall accuracy of 89.1%. Different crops and weeds have different spectral reflectance characteristics, and spectral analysis can effectively classify different kinds of crops. However, spectral instruments are expensive, and the collected data are affected by soil type, humidity, and light conditions, which increases the difficulty of classifying weeds and crops. In machine vision recognition, the traditional methods are mainly based on differences in colour, edge texture, and shape characteristics between crops and weeds, combined with machine learning algorithms such as Support Vector Machines (SVM), random forests, and decision trees [6,7,8]. The recognition results are easily affected by factors such as changes in illumination, crop occlusion, and data imbalance, and the robustness of the recognition effect needs to be improved. At the same time, traditional methods require manually designed features, which is complicated in operation and suffers from defects such as slow detection speed and insufficient identification accuracy. The integration of machine vision and deep learning algorithms has led to the application of the Convolutional Neural Network (CNN) in precision agriculture. These vision algorithms have stronger generalisation ability and higher accuracy than traditional methods, effectively enhancing the accuracy and adaptability of recognising target crops. Wang et al. [9] extracted multi-scale hierarchical features from images by CNN to realize the recognition of corn and weeds with an average target recognition accuracy of 98.92%. Kong et al. [10] proposed an improved YOLOv5 algorithm for fast and accurate identification of seedling crops and weeds in complex environments. Zhang et al. [11] built a weed recognition system for peanut fields based on an improved YOLOv4 algorithm, with a maximum average accuracy of 94.54% for weed recognition. Zhang et al. [12] used an optimized Faster R-CNN algorithm for weed identification in seedling soybeans and achieved an average recognition rate of 92.69% in a natural environment. Prabavadhi et al. [13] established a YOLOv5-based method for detecting weeds and medicinal plants; this method utilized a diverse Kaggle dataset with precise annotations and achieved high precision, recall, and F1 scores. Sneha et al. [14] used the Region-Based Convolutional Neural Network (R-CNN), YOLOv3, and CenterNet object recognition algorithms to identify weeds in images, and they pointed out that a sizable collection of images of various weed species must first be gathered to construct a deep learning model for weed identification. García-Navarrete et al. [15] established an artificial vision system based on the YOLOv5s model to differentiate corn from four types of weeds, and the recognition accuracy for corn was much higher than that for weeds. Although deep learning algorithms can avoid the manual feature design required by traditional image processing methods, their overall recognition rate is affected by the size of the dataset, and a dataset collected in a single environment restricts the versatility of the recognition model, resulting in a lack of robustness and generalization ability. Compared with crops, there are many kinds of weeds in farmland, and the distribution of weeds is greatly influenced by geographical factors [16], which makes it very difficult to establish comprehensive weed datasets.
Intelligent weeding robots have recently become a prominent field of agricultural research [17,18,19]. Bawden et al. [20] developed a modular weeding robot called AgBotII, which used machine vision and colour classification to differentiate crops from weeds; AgBotII can identify both broadleaf plants and grasses with 96% overall accuracy. Trygve et al. [21] developed a targeted-application intelligent weeding robot, Adigo, that achieves accurate weeding of carrot fields by distinguishing and localizing carrots and weeds. However, these devices cannot be widely adopted because of their high cost, and the computational cost required by weed detection and recognition algorithms is also large [22,23]. Therefore, it is imperative to study weed identification models for embedded devices, with an emphasis on reducing computational cost and improving accuracy.
In this study, images of soybean seedlings and weeds were collected from different regions, with the aim of solving the problem of weed recognition at the soybean seedling stage under natural conditions. The recognition of weeds is divided into two steps: firstly, the lightweight YOLOv8nGP algorithm is used to identify soybean seedlings; secondly, after the soybean seedlings are separated out, only weeds and soil remain in the image, and the weeds are extracted and segmented from the background using the NExG index. On this basis, an algorithm for recognising weeds at the soybean seedling stage is developed. Applying this algorithm to intelligent field weeding machinery makes it possible to accurately identify different kinds of weeds in the field and then guide the machinery to carry out targeted weeding operations. Combining deep learning with traditional machine vision algorithms in this way can reduce the dependence of the recognition model on datasets and improve the robustness of the model.

2. Materials and Methods

2.1. Data Acquisition

2.1.1. Data Acquisition and Annotation

To increase the diversity of the soybean and weed samples, datasets collected from different areas were used in this study. The first part was collected in Xinjian Town, Yixing City, Jiangsu Province (an eastern coastal province of China), the second part in Guannan Town, Lianyungang City, Jiangsu Province, and the third part in Suining Town, Xuzhou City, Jiangsu Province. The images were taken in July 2023. A total of 1450 images were accumulated from soybean fields under diverse light conditions, including clear and overcast days. The images were acquired by a Jai Gox-3200M camera (JAI A/S, Copenhagen, Denmark). The camera was mounted on an NJS-01 agricultural four-wheeled machine platform (NIAM, Nanjing, China) equipped with a computer, at a height of approximately 150 cm above the ground, with the shooting angle perpendicular to the ground. In addition, to further increase sample diversity, photos of soybean seedlings in different areas were collected from the Internet.

2.1.2. Data Augmentation

The dataset was initially divided into a training set, a validation set, and a test set with a ratio of 8:1:1. Subsequently, data augmentation techniques were applied to the training set, such as image flipping, both horizontally and vertically, altering brightness levels, and randomly cropping, stretching, and rotating images [24] (Figure 1). The distribution of the datasets is shown in Table 1.
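As a concrete illustration, the sketch below shows one way such an augmentation pipeline could be implemented in Python with the albumentations library; the paper does not specify its tooling, and the file name, crop size, and probabilities are illustrative assumptions.

```python
import albumentations as A
import cv2

# Example augmentation pipeline covering the operations listed above
# (flips, brightness changes, cropping/stretching, rotation); YOLO-format
# bounding boxes are transformed together with the image.
transform = A.Compose(
    [
        A.HorizontalFlip(p=0.5),
        A.VerticalFlip(p=0.5),
        A.RandomBrightnessContrast(brightness_limit=0.2, contrast_limit=0.2, p=0.5),
        A.Rotate(limit=15, p=0.5),
        A.RandomSizedBBoxSafeCrop(height=640, width=640, p=0.3),
    ],
    bbox_params=A.BboxParams(format="yolo", label_fields=["class_labels"]),
)

image = cv2.cvtColor(cv2.imread("soybean_0001.jpg"), cv2.COLOR_BGR2RGB)  # hypothetical file
bboxes = [[0.52, 0.41, 0.18, 0.22]]            # (x_center, y_center, w, h), normalised
out = transform(image=image, bboxes=bboxes, class_labels=["soybean"])
aug_image, aug_bboxes = out["image"], out["bboxes"]
```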

2.2. Methods of Recognition of Soybean Seedlings and Weeds

The accuracy and robustness of recognition models based on deep learning mainly depend on the richness of the dataset. Because of the wide variety of weeds and the large differences in weed species across regions, it is very difficult to classify and recognise weeds using deep learning methods alone. Compared with weeds, the appearance characteristics of soybean seedlings are mainly related to variety and growth stage and are not affected by geographical distribution. Therefore, it is feasible to identify soybean seedlings with a deep learning algorithm, and the resulting recognition model can be used in different geographical scenes with stronger generalization ability. In this paper, the YOLOv8nGP algorithm is used to recognise soybean seedlings. After the soybean seedlings are identified, the green targets outside the recognition boxes are extracted and regarded as weeds: the weeds are segmented using their colour characteristics, noise is removed by area filtering, and weed recognition is thereby completed.
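The sketch below outlines this two-step pipeline in Python using the ultralytics API. The weight file name and the two helper functions (segment_green_nexg and filter_small_regions, sketched in the later sections) are illustrative placeholders, not the authors' released code.

```python
from ultralytics import YOLO
import cv2

def recognise_weeds(image_bgr, model):
    """Two-step pipeline sketch: detect soybean seedlings, then treat the
    remaining green pixels as weeds."""
    # Step 1: soybean seedling detection with the lightweight detector.
    result = model(image_bgr)[0]
    boxes = result.boxes.xyxy.cpu().numpy().astype(int)

    # Step 2: mask out the detected soybean regions.
    masked = image_bgr.copy()
    for x1, y1, x2, y2 in boxes:
        masked[y1:y2, x1:x2] = 0

    # Step 3: segment the remaining green pixels (weeds) by colour index,
    # then clean small noise regions by area filtering (see later sketches).
    weed_mask = segment_green_nexg(masked)
    weed_mask = filter_small_regions(weed_mask)
    return boxes, weed_mask

model = YOLO("yolov8nGP.pt")  # hypothetical weight file name
boxes, weed_mask = recognise_weeds(cv2.imread("field_image.jpg"), model)
```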

2.2.1. Recognition of Soybean Seedlings Based on Lightweight YOLOv8

Introduction to the YOLOv8 Recognition Algorithm

YOLOv8 is the latest version of the YOLO (You Only Look Once) family of algorithms. Building on the success of previous YOLO versions, YOLOv8 makes several improvements aimed at enhancing recognition performance and accuracy. In terms of network structure, YOLOv8 is divided into Input, Backbone, Neck, and Output modules. Compared with previous versions, the main innovation of YOLOv8 is the use of the CSPLayer2Conv (C2f) module in the backbone, which enriches the gradient flow of the model with more branches and cross-layer connections and forms a neural network module with stronger feature representation ability. The detection head adopts an anchor-free, decoupled design; the loss function combines BCE for classification with CIoU + VFL for regression; and the box-matching strategy is changed from static matching to the task-aligned assigner, further improving performance and flexibility. YOLOv8 offers several model scales, namely n, s, m, l, and x. In this paper, considering the limited computing power of the field operation platform, YOLOv8n, with the smallest number of parameters, is chosen as the basis of the recognition model [25].
In this study, a soybean seedling recognition model suitable for field weeding devices was developed by applying a lightweight improvement to YOLOv8n using the GhostNet network and unconstrained pruning. The purpose is to reduce the size of the model while maintaining its accuracy. Figure 2 displays the improved network structure.

Lightweight Backbone Network Improvements

Due to limited memory and computing resources, it is difficult to deploy convolutional neural networks (CNNs) on embedded devices, so lightweighting the recognition model has become an effective means of reducing both the difficulty of deploying vision algorithms and the hardware cost of intelligent weeding equipment [26].
GhostNet is a model compression method with the Ghost module at its core, and the convolution process of the Ghost module is shown in Figure 3 [27]. The Ghost module replaces part of the convolution with linear operations and works in three steps: firstly, a small number of intrinsic feature maps are generated using standard convolution; secondly, more Ghost feature maps are obtained from these with a small number of parameters using cheap linear operations such as depth-wise convolution or shifting; thirdly, the feature maps generated in the first two steps are concatenated to obtain the output feature maps of the Ghost module. With the same number of input and output feature maps, the computational cost of the Ghost module is much lower than that of ordinary convolution, so more feature information can be obtained with less computation without adversely affecting the performance of the model.
The FLOPs of an ordinary convolution are $h \times w \times n \times c \times k \times k$, and the FLOPs of the Ghost module are $\frac{n}{s} \times h \times w \times c \times k \times k + (s-1) \times \frac{n}{s} \times h \times w \times d \times d$, where $d \times d$ is the average kernel size of the linear transformations. The theoretical speed-up ratio of replacing ordinary convolution with the Ghost module can therefore be calculated as in Formula (1):

$$r_s = \frac{h \times w \times n \times c \times k \times k}{\frac{n}{s} \times h \times w \times c \times k \times k + (s-1) \times \frac{n}{s} \times h \times w \times d \times d} = \frac{c \times k \times k}{\frac{1}{s} \times c \times k \times k + \frac{s-1}{s} \times d \times d} \approx \frac{s \times c}{s + c - 1} \approx s \tag{1}$$
From Formula (1), it can be seen that the computation of ordinary convolution is approximately s times that of the Ghost module; in other words, the Ghost module requires only about 1/s of the computation of ordinary convolution, which vastly reduces the computational overhead of the whole network.
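A quick numerical check of Formula (1), using illustrative values rather than figures from the paper:

```python
# Speed-up of a Ghost module over ordinary convolution for s = 2, c = 64, k = d = 3.
s, c, k, d = 2, 64, 3, 3
exact = (c * k * k) / ((c * k * k) / s + (s - 1) / s * d * d)
approx = s * c / (s + c - 1)
print(exact, approx)   # both ≈ 1.97, i.e. close to s = 2
```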
Using the advantages of the Ghost module, a Ghost structure for YOLOv8 was built, as shown in Figure 4. The GhostConv first uses half of the original convolution to generate half of the feature maps, then applies a cheap operation to these to generate the other half, and finally stitches the two parts together into a complete feature map through the Concat operation. The Ghost bottleneck is formed by stacking Ghost modules. Each Ghost bottleneck consists of two main GhostConvs connected by a shortcut: the first GhostConv acts as an expansion layer to increase the number of channels, and the second GhostConv reduces the number of channels in the output feature map to match the number of input channels. The difference between the two GhostConvs is that the first is followed by batch normalisation and the ReLU activation function, while the second is followed by batch normalisation only. This design reduces the number of model parameters and the amount of computation and also refines the feature maps through the GhostConvs to improve the efficiency of model recognition.
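The PyTorch sketch below illustrates the GhostConv and Ghost bottleneck structure described above; it is a simplified reconstruction (the placement of activations and normalisation is condensed, and channel counts are assumed even), not the authors' implementation.

```python
import torch
import torch.nn as nn

class GhostConv(nn.Module):
    """GhostConv sketch: half the output channels come from an ordinary
    convolution, the other half from a cheap depth-wise convolution applied
    to that result; the two halves are concatenated."""
    def __init__(self, c_in, c_out, k=1, s=1):
        super().__init__()
        c_ = c_out // 2
        self.primary = nn.Sequential(
            nn.Conv2d(c_in, c_, k, s, k // 2, bias=False),
            nn.BatchNorm2d(c_), nn.ReLU(inplace=True))
        self.cheap = nn.Sequential(
            nn.Conv2d(c_, c_, 5, 1, 2, groups=c_, bias=False),  # depth-wise "cheap" op
            nn.BatchNorm2d(c_), nn.ReLU(inplace=True))

    def forward(self, x):
        y = self.primary(x)
        return torch.cat([y, self.cheap(y)], dim=1)

class GhostBottleneck(nn.Module):
    """Two stacked GhostConvs with a shortcut, as described above."""
    def __init__(self, c_in, c_out):
        super().__init__()
        c_mid = c_out * 2                      # expansion layer widens channels
        self.ghost1 = GhostConv(c_in, c_mid)
        self.ghost2 = GhostConv(c_mid, c_out)  # projection back to c_out
        self.shortcut = (nn.Identity() if c_in == c_out
                         else nn.Conv2d(c_in, c_out, 1, bias=False))

    def forward(self, x):
        return self.ghost2(self.ghost1(x)) + self.shortcut(x)
```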
By replacing the bottleneck module in the C2f module with the Ghost bottleneck module, a new GhostC2f module is generated. This substitution preserves the representational quality of the model while lowering its computational cost, which in turn reduces the size and computation of the network.
Due to the small colour difference between soybean seedlings and weeds, the feature maps generated by the convolution operation are similar to one another and large in number; these redundant feature maps provide only limited improvement in recognition while increasing the number of parameters. Therefore, in this paper, the original Conv and C2f modules in the backbone of the YOLOv8n model are replaced with GhostConv and GhostC2f, which generate feature maps that uncover the required information from the original features at a very low cost. This reduces the parameter scale and resource consumption of the network model and makes it suitable for mobile devices with limited hardware performance.

Model Pruning

After the GhostNet-based lightweight improvement of the YOLOv8n backbone, the computational load of the whole network is effectively reduced; however, the redundant structures retained after the convolution operations, which help preserve fitting ability and recognition performance, still lead to a large model volume. In order to further reduce the model size while maintaining high recognition accuracy, this paper applies model pruning.
Model pruning [28] is a widely used lightweighting technique that reduces the complexity and redundant parameters of a model. According to granularity, pruning methods can be divided into channel-level, weight-level, and layer-level pruning, making pruning a powerful tool for realising lightweight models. Channel-level pruning [29,30] strikes a good balance between difficulty and flexibility for a variety of network architectures, allowing traditional convolutional neural networks (CNNs) to run quickly and efficiently on multiple platforms.
At present, adding constraints is the main strategy for channel pruning. One common method is to impose sparsity on the scale factors of the Batch Normalization (BN) layers through L1 regularization of the model [31], jointly training the network weights and these scale factors, and then directly removing the channels whose scale factors fall below a set threshold. This reduces channel redundancy and retains the more important channels to complete the pruning. Although this method can effectively reduce the model size and improve inference speed, it may have some negative impact on the accuracy of the model [32,33,34].
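A minimal sketch of this constrained, BN-scale-based channel selection is given below; it only identifies the channels to keep under a global threshold and omits both the L1 sparsity term added to the training loss and the subsequent network surgery (rebuilding the convolutions without the pruned channels).

```python
import torch
import torch.nn as nn

def bn_channel_masks(model, prune_ratio=0.6):
    """Gather the |gamma| scale factors of every BatchNorm layer, derive one
    global threshold, and mark the channels falling below it for removal."""
    gammas = torch.cat([m.weight.detach().abs().flatten()
                        for m in model.modules() if isinstance(m, nn.BatchNorm2d)])
    threshold = torch.quantile(gammas, prune_ratio)
    masks = {name: (m.weight.detach().abs() > threshold)
             for name, m in model.named_modules() if isinstance(m, nn.BatchNorm2d)}
    return masks  # True = keep channel, False = prune channel
```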
Another pruning method is unstructured pruning, which provides greater degrees of freedom and allows the model to be pruned with few restrictions. Unconstrained pruning does not restrict the location of the pruning, and the parameters to be pruned can be chosen freely, whereas constrained pruning usually imposes rules and restrictions on the pruning structure. This flexibility allows unconstrained pruning to adapt better to different types of neural network architectures and to retain parameters that contribute to the model's performance, whereas constrained pruning may degrade performance by cutting out some key parameters. In many network architectures, unconstrained pruning has achieved gains in accuracy compared to structured pruning, although it may increase inference time [35]. In the task of separating soybean seedlings from weeds, high recognition accuracy is the priority, so unconstrained channel pruning is applied to the lightweight model in order to ensure recognition accuracy without sacrificing too much inference speed.
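For illustration, the sketch below applies PyTorch's built-in magnitude-based unstructured pruning at a 60% rate; it is a stand-in for the unconstrained channel pruning method [35] actually used in this paper, whose implementation differs.

```python
import torch.nn as nn
import torch.nn.utils.prune as prune

def prune_model_unstructured(model, amount=0.6):
    """Zero out the 60% smallest convolution weights without any structural
    constraint, then make the pruning permanent."""
    for module in model.modules():
        if isinstance(module, nn.Conv2d):
            prune.l1_unstructured(module, name="weight", amount=amount)
            prune.remove(module, "weight")
    return model
```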

Model Performance Evaluation Indices

In this study, the commonly used metrics in multi-class target recognition models, such as Precision (Equation (2)), Recall (Equation (3)), F1 Score (Equation (4)), and Mean Average Precision (Equation (5)), are used to evaluate the performance of the models [11].
Precision (P): indicates the proportion of samples predicted to be positive by the model that are actually positive;
Recall (R): indicates the proportion of samples that are actually positive that are correctly predicted as positive by the model;
F1 Score (F1): measures the comprehensive performance of the target recognition model and is the harmonic mean of Precision and Recall;
Mean Average Precision (mAP): measures the recognition performance of the model on different categories.
$$\mathrm{Precision} = \frac{TP}{TP + FP},\tag{2}$$
$$\mathrm{Recall} = \frac{TP}{TP + FN},\tag{3}$$
$$F1 = \frac{2 \times \mathrm{Precision} \times \mathrm{Recall}}{\mathrm{Precision} + \mathrm{Recall}},\tag{4}$$
$$AP = \int_{0}^{1} \mathrm{Precision}\,d(\mathrm{Recall}), \qquad mAP = \frac{1}{N}\sum_{k=1}^{N} AP(k),\tag{5}$$
where TP denotes True Positive, the number of samples correctly predicted as positive by the model; FP denotes False Positive, the number of samples incorrectly predicted as positive by the model; and FN denotes False Negative, the number of samples incorrectly predicted as negative by the model. AP(k) denotes the kth category’s area under the Precision–Recall curve, and N denotes the total number of categories.
mAP@0.5 refers to the average precision over all categories when the IoU (Intersection over Union) [36] threshold is 0.5 in the target recognition task. mAP@(0.5–0.95) refers to the average precision over all categories as the IoU threshold varies from 0.5 to 0.95 and thus reflects the average performance of the algorithm under multiple IoU thresholds. In actual field weed management, a certain amount of redundancy is usually left around the crop to reduce the seedling injury rate, so taking mAP@0.5 as the evaluation index meets the needs of practical application. Therefore, in this paper, P, R, F1, and mAP@0.5 are used as the main indicators to evaluate the accuracy of the model.
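A small Python sketch of Equations (2)–(4), with illustrative counts that are not taken from the paper's test set:

```python
def detection_metrics(tp, fp, fn):
    """Precision, Recall, and F1 from Equations (2)-(4)."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# Example: 943 correct soybean detections, 55 false alarms, 57 missed plants.
print(detection_metrics(943, 55, 57))  # ≈ (0.945, 0.943, 0.944)
```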

2.2.2. Weed Recognition Based on Colour Features

Unlike soybean seedlings, weeds are difficult to identify accurately because of the wide variety of species and large individual differences. According to incomplete statistics, there are more than 70 common weed species in soybean fields, and they can be divided into two categories: gramineous weeds and broadleaf weeds [37]. Some common weeds are shown in Figure 5 [38,39].
On the other hand, there is a large colour difference between weeds and the soil background, so weeds can be separated from the soil by their green features. In this study, the improved YOLOv8n was first used to identify soybean seedlings, and the pixels within the bounding boxes of the soybean plants were removed before identifying weeds. In this way, the remaining green pixels in the image can be regarded as weed pixels and are segmented from the soil background using colour features to complete the recognition of weeds. Based on the spectral characteristic of green crops, namely high reflectance in the green channel and low reflectance in the red and blue channels of the visible band, researchers have constructed a series of vegetation indices, such as the Color Index of Vegetation Extraction (CIVE) [40], the Excess Green–Red Index (ExGR), the Excess Red Index (ExR), the Composition of Green Vegetation Index (COM) [41], the Vegetation Index (VEG) [42], and the Excess Green Index (ExG) [43], which enhance the difference between the crop and the surrounding features and effectively separate green crops from the soil background [44].
In Table 2, R, G, and B are the values of the visible red, green, and blue channels, and r, g, and b are the normalized values of these channels. In actual field weeding operations, the machine may cast shadows on the crop and the ground depending on the light direction, which leads to large changes in soil brightness. Among these indices, COM is the most effective at eliminating shadows [41], but its computational complexity makes it significantly slower than the other vegetation indices in real-time use. ExG is second only to COM in segmentation effect; it can markedly suppress shadows, dead grass, and soil, and its computation is small and its processing time short [45]. In order to reduce the effect of light intensity on the segmentation result, this paper adopts the Normalized Excess Green Index (NExG) [46] for the segmentation of weeds.
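A minimal segmentation sketch based on a normalised excess-green index is shown below; the exact NExG formulation in [46] and the thresholding used in the paper may differ in detail.

```python
import cv2
import numpy as np

def segment_green_nexg(image_bgr):
    """Compute an excess-green index normalised by the local brightness
    (R + G + B), which reduces sensitivity to illumination, then threshold
    the result with Otsu's method to obtain a binary green-vegetation mask."""
    b, g, r = cv2.split(image_bgr.astype(np.float32))
    total = r + g + b + 1e-6                  # avoid division by zero
    nexg = (2.0 * g - r - b) / total          # normalised excess green
    nexg_u8 = cv2.normalize(nexg, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    _, mask = cv2.threshold(nexg_u8, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return mask
```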

2.3. Test Platform

The hardware and software environments for the model training and testing are shown in Table 3.

3. Results

3.1. Performance of Data Augmentation

The training set of the original dataset contains 1232 images; it was extended by conventional data augmentation with 3600 additional images, giving a total of 4832 images. The original and augmented training datasets were both trained with YOLOv8n as the training network, with the momentum factor set to 0.9, the initial learning rate set to 0.0001, the learning rate decayed following a cosine annealing schedule, and the number of training epochs set to 300. The corresponding training results for the two datasets are shown in Table 4.
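A training-configuration sketch using the ultralytics API with the hyperparameters listed above; the dataset configuration file and image size are illustrative assumptions, not values reported in the paper.

```python
from ultralytics import YOLO

model = YOLO("yolov8n.pt")
model.train(
    data="soybean_weed.yaml",  # hypothetical dataset config
    epochs=300,
    lr0=0.0001,                # initial learning rate
    momentum=0.9,              # momentum factor
    cos_lr=True,               # cosine annealing of the learning rate
    imgsz=640,
)
```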
The comparison shows that using the augmented dataset instead of the original dataset led to improvements across the metrics: P improved by 1%, R by 1%, mAP@0.5 by 0.9%, and F1 by 1%, indicating that data augmentation contributes effectively to the model's recognition accuracy. The inference time of 1.7 ms also meets the requirements of targeted application and other practical scenarios.

3.2. Performance of Different Lightweight Networks

In order to verify the impact of different lightweight networks on the recognition performance of this model, ShuffleNetv2 and FasterNext were compared with GhostNet under the same training hyperparameters and methods; the recognition performances are shown in Table 5. FLOPs stands for 'Floating Point Operations' and indicates the number of floating-point operations required by the model.
From Table 5, it can be seen that YOLOv8n + ShuffleNetv2 (YOLOv8nS) performs best in terms of lightweighting, reducing the model size by 2.25 MB compared with YOLOv8n. However, in terms of recognition accuracy, it falls below YOLOv8n by 3.5%, 2.3%, and 1.2% in P, R, and mAP@0.5, respectively. Although the model size of YOLOv8n + GhostNet (YOLOv8nG) is 1.07 MB larger than that of YOLOv8nS, its recognition accuracy is significantly higher. Compared with YOLOv8n, YOLOv8nG is 0.9% and 0.3% higher in R and F1, the other indicators are basically the same, and the model size is reduced by 1.18 MB. In terms of inference time, all of the above models take less than 1.8 ms, which meets the requirement of real-time recognition. It is particularly noteworthy that correct plant recognition is very important in field weed control applications; therefore, our goal is first to lighten the model while maintaining its accuracy. Although YOLOv8nG is slightly inferior to YOLOv8nS in inference time and model size, its excellent recognition accuracy makes it the more suitable choice. In comparison with YOLOv8n + FasterNext (YOLOv8nF), YOLOv8nG is 0.7% lower in P but 1.5% higher in R. The significant improvement in recall indicates that the YOLOv8nG model is more effective at identifying actual targets, reducing the occurrence of missed detections (false negatives) and thereby improving the comprehensiveness and coverage of the model. In terms of model size, YOLOv8nG is also 0.34 MB smaller than YOLOv8nF. To further improve the lightweighting of the model, we chose the YOLOv8nG model for further optimization, applying a pruning strategy to reduce its size and computational burden while maintaining its high accuracy in plant recognition.

3.3. Performance of the Lightweight Model at Different Pruning Rates

After obtaining the weight model through conventional training, the weight file was subjected to unconstrained channel pruning. In order to ensure that the model would not be over-pruned during the pruning process, the global channel pruning rate was set to 30%, 40%, 50%, 60%, and 70%, respectively, and the model after pruning under the five pruning rates was tested separately to seek the most appropriate pruning rate. The recognition performance under different pruning rates is shown in Table 6.
From Table 6, we can see that after unconstrained channel pruning, the model size is significantly reduced compared to YOLOv8nG. This indicates that pruning operations can make the model more lightweight. However, the unconstrained channel pruning leads to irregular weight distribution, and it requires more computational resources to handle this situation. Additionally, due to the more complex memory access, this leads to an increase in inference time. For the soybean recognition task, as it only requires identifying a single object with low difficulty, the overall inference speed is still high, and the inference time for each image is less than 3 ms, fully meeting the requirements of actual tasks.
As the pruning rate increases, the model size decreases, but at a pruning rate of 70% the model size becomes larger again. This is because, under an excessive pruning rate, the model architecture and parameters may need to be reorganized to remain valid, which can add parameters and thus increase the model size. Among these pruning rates, we find that the model size is smallest at a pruning rate of 60%, 2.42 MB smaller than that of YOLOv8nG, while its P, R, mAP@0.5, and F1 scores are the highest among all pruning rates. The model with a 60% pruning rate is therefore an ideal choice for achieving both model size compression and improved recognition accuracy. In summary, unconstrained channel pruning offers a very promising solution for the soybean recognition task by balancing model size and inference accuracy. At the 60% pruning rate, the improvement in inference accuracy and the reduction in model size make it an efficient choice that meets the actual task requirements and improves model performance, while the inference time increases by only 0.4 ms. Figure 6 shows the comparison of the number of channels in some of the layers before and after model pruning, from which we can see that after unconstrained pruning the number of channels in these layers is significantly reduced, leading to a reduction in model size. We call this pruned model YOLOv8nGP.

3.4. Ablation Experiments

In order to verify the effectiveness of the two improvement methods, ablation experiments were designed to verify the effect of each improvement method, and the results of the ablation experiments are shown in Table 7. “√” indicates that the corresponding method was used.
As can be seen from Table 7, the models with GhostNet and unconstrained channel pruning treatments achieve significant reductions in model size compared to the original YOLOv8n algorithm. The YOLOv8nGP model, which uses both GhostNet and pruning treatments, achieves the largest model size reduction of 60.5% compared to the YOLOv8n.
In terms of inference speed, the lightweight operations caused a slight increase in inference time, but it remained below 3 ms, making the model suitable for field operations and other task requirements without compromising real-time performance. In terms of recognition accuracy, YOLOv8nGP performs best, improving on YOLOv8n in P, R, and F1 scores by 1.1% each. This improvement is critical to reducing missed detections of soybean plants, since higher recognition accuracy means fewer missed plants, which in turn improves the accuracy of field operations.
By introducing these techniques, the YOLOv8nGP model achieves the best overall performance, with a smaller model size, higher recognition accuracy, and an inference time within the real-time range, making it an ideal choice for soybean recognition tasks.

3.5. Comparison with Other Lightweight YOLO Algorithms

Under the same dataset and testing platform, YOLOv8nGP was compared with other lightweight YOLO algorithms, including YOLOv7n, YOLOv7-tiny, YOLOv5n, YOLOv5-lite, YOLOv3n, and YOLOv3n-tiny. In the comparison test, the input conditions were strictly controlled, including a uniform input size and the same dataset and data augmentation methods. The results are shown in Table 8.
In summary, the YOLOv8nGP algorithm proposed in this paper achieves the highest recognition accuracy while remaining lightweight and maintaining good real-time performance. These results demonstrate the feasibility and strength of the proposed algorithm, providing strong support for practical applications.

3.6. Weed Recognition Based on NExG

After identifying and eliminating the green pixels belonging to the soybean seedlings, the locations of the soybean seedlings are marked with red rectangular boxes, as can be seen in Figure 7a,b, and all the remaining green targets are regarded as weeds. The binary maps of weed segmentation based on NExG are shown in Figure 7c,d. As can be seen there, the weeds are correctly recognised with good segmentation results, but some noisy pixels remain in the segmented binary map; part of this noise arises at the edges because the recognition box does not completely cover the soybean plants. The binary maps of weed segmentation after area filtering are shown in Figure 7e,f: the noisy pixels are largely filtered out, and the weed targets remain complete with clear outlines. The results of weed recognition can be used for accurate weeding of soybean seedlings in the field. The average processing time for weed segmentation of a single image is 2.13 ms, which meets the real-time requirement, and the processed image can be further processed, for example to generate weed distribution coordinates or output the percentage of weed area, in order to adapt to different operating implements. The experimental results show that the soybean seedling and weed recognition method proposed in this study is highly feasible and has excellent application prospects.
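A minimal area-filtering sketch using OpenCV connected-component statistics is shown below; the minimum-area threshold is illustrative, not the value used in the paper.

```python
import cv2
import numpy as np

def filter_small_regions(mask, min_area=200):
    """Remove connected components smaller than `min_area` pixels from the
    weed binary map, treating them as noise."""
    n, labels, stats, _ = cv2.connectedComponentsWithStats(mask, connectivity=8)
    cleaned = np.zeros_like(mask)
    for i in range(1, n):                      # label 0 is the background
        if stats[i, cv2.CC_STAT_AREA] >= min_area:
            cleaned[labels == i] = 255
    return cleaned
```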

4. Discussion

The field conditions of soybean at the seedling stage are complex, with many weed species and varying crop sizes, all of which make recognition difficult. In this paper, a weed recognition method was developed in which the optimized YOLOv8nGP model was first applied to the recognition of soybean seedlings. The P, R, and mAP@0.5 reached 94.5%, 94.3%, and 97.6%, the model size was reduced from 5.95 MB to 2.35 MB, and the average inference time per image was 2.2 ms, enabling real-time recognition of soybean seedlings. In soybean recognition, Zhang et al. [12] proposed an optimized Faster R-CNN algorithm with a P of 90.06% for soybean recognition, but its average recognition speed was 590 ms; Aaron et al. [47] proposed a lightweight algorithm based on InceptionNet V3 with a P of 91%, an R of 87%, and a model size of 23 MB. Although the recognition accuracy of a model is affected by the dataset, compared with these algorithms the recognition speed and model size of YOLOv8nGP are improved, which also verifies the effect of the lightweight design. For weed recognition, unlike previous work that established weed datasets for recognition [11,48], and considering that a dataset cannot cover all categories of weeds, NExG-based segmentation was used to segment the weeds; the segmentation results were clear and accurate, and the method satisfies the requirements of weed recognition in the field. This recognition method is independent of weed datasets and can be applied to soybean field scenes in different regions.

4.1. Model Lightweight Improvement

To overcome the constrained computational power of mobile platforms in the field, a comprehensive lightweight enhancement was applied to the original YOLOv8n model. This improvement integrates the efficient backbone network GhostNet and an unconstrained pruning method, which reduces the model size from 5.95 MB to 2.35 MB while improving its recognition performance. The recognition time for soybean seedlings is also reduced to 2.2 ms, meeting the real-time requirement, and the algorithm becomes easier to deploy and apply on mobile platforms. Typically, using a lightweight backbone network alone has a negative impact on recognition accuracy, and improvements to the attention mechanism, loss function, and so on are generally required to compensate, even though such improvements may increase computational complexity and model size [32]. On the other hand, constrained pruning methods can significantly reduce model computation and size but may negatively affect recognition accuracy [34,49]. Therefore, in this paper, we combine the lightweight backbone network GhostNet with unconstrained pruning to optimize YOLOv8n. Although this method slightly affects the inference speed, it achieves a clear improvement in recognition accuracy and also significantly reduces the model size. From Table 8, it can be seen that the YOLOv8nGP algorithm has the highest recognition accuracy and the smallest model size compared with other mainstream lightweight algorithms. This makes our improved model an efficient alternative that not only meets the actual task requirements but also improves the recognition performance of the model.

4.2. Generalization Capability of Weed Recognition Models

Weed recognition and management is one of the most important issues in agriculture. The distribution of weeds is highly random, and their species and distribution are influenced by factors such as geography and season, so covering weeds from different regions, species, and growth stages when building a weed dataset is a challenging task. Deep learning algorithms have achieved great success in image recognition by automatically learning features from images and classifying them, and they can achieve high accuracy in recognizing weeds in specific application scenarios [11,48,50]. However, their effectiveness relies on the dataset used. As shown in studies [12,13,15], these models have good recognition accuracy for specific kinds of weeds (such as gramineous and broadleaf weeds), but there are many kinds of weeds, and it is difficult to know how these models perform when identifying other weeds. Therefore, deep learning algorithms tend to perform well for a particular type of farmland or crop but may underperform due to the unpredictable and varied nature of weed distribution. Instead of relying on a specific weed dataset, the colour feature-based weed recognition method uses the colour difference between the weeds and the background soil for segmentation, which means that the method can be adapted to a variety of environments and different types of weeds. This method makes use of the colour information in the image, which is a very intuitive and powerful feature. Weeds usually have a different colour from the surrounding soil, and this difference can be used to effectively separate weeds from the background [42,44]. In addition, the colour characteristics can not only be used to pick out weeds but are also adaptable to different geographical regions and seasons.
In addition, the generalization ability of this method is one of its major strengths because it does not rely on pre-constructed weed datasets, which is important for ever-changing fields and crop types. In summary, the colour feature-based weed recognition method provides a powerful tool that allows practitioners in the agricultural industry to achieve accurate weed recognition and management in different environments without preparing large datasets in advance. The generalization ability and adaptability of this method make it a reliable solution.

5. Conclusions

In order to solve the problem of weed recognition at the soybean seedling stage under natural conditions, this paper takes the soybean field crop as the research object, collects soybean seedling and weed images from different growing regions as the dataset, and proposes a lightweight weed extraction method integrating deep learning algorithms and traditional machine vision algorithms. For the recognition of soybean seedlings, this paper introduces a lightweight GhostNet network and an unconstrained pruning method with a 60% pruning rate and integrates them into the YOLOv8n model to create the improved YOLOv8nGP model. The improved model increases the P, R, and F1 metrics by 1.1% each compared with the original YOLOv8n algorithm, while the model size is reduced by 2.42 MB. Compared with other lightweight YOLO algorithms, this model has a clear advantage in recognition accuracy, and its inference time of only 2.2 ms meets the real-time requirements of field operations, making the automatic recognition and tracking of soybean seedlings more efficient and accurate. In some studies, the addition of lightweight attention modules has been shown to improve the recognition performance of a model [51,52] without increasing FLOPs too much; the effect of attention modules depends on factors such as where they are added and how often they are repeated. In future work, we will make further attempts along these lines to obtain better recognition performance.
For weed recognition, this paper utilises a NExG-based method. The method initially recognises soybean seedlings and subsequently performs weed recognition through the extraction of the green portion of the image and image filtering. This approach decreases reliance on the diversity of the weed dataset, hence enhancing the generalisation ability of the weed recognition model. This means that the method performs well even in the face of weeds from different regions and seasons. This approach offers a more adaptable and dependable solution for agricultural weed management.
In an actual field environment, there are many kinds of weeds, which greatly complicates the establishment of a conventional weed recognition model. First of all, it is very difficult to build a dataset that contains all kinds of weeds. Secondly, the labelling of weeds is very complicated, and the irregular shapes and mutual occlusion of weeds lead to a sharp increase in workload. Overall, the research in this paper provides a new solution to the problem of soybean seedling and weed recognition by combining deep learning and traditional algorithms, making efficient and accurate recognition of soybean seedlings and weeds in farmland environments possible; at the same time, the improved lightweight recognition model is more convenient for mobile deployment. In future work, this study will use appropriate agricultural machinery equipped with a camera and computer to locate and eradicate the weeds identified in the images. In addition, the model can also be embedded in a smartphone app module to help farmers identify weeds and provide herbicide application suggestions.

Author Contributions

Conceptualization, T.S., L.C. and X.X.; methodology, T.S. and X.X.; software, T.S., L.C. and L.Z.; validation, T.S. and Y.J. (Yuxuan Jiao); formal analysis, T.S. and S.Z.; investigation, T.S. and Y.J. (Yuxuan Jiao); resources, T.S. and Y.J. (Yongkui Jin); data curation, T.S. and S.Z.; writing—original draft preparation, T.S.; writing—review and editing, L.C.; visualization, T.S. and Y.J. (Yuxuan Jiao); supervision, L.C. and Y.J. (Yongkui Jin); project administration, L.C. and Y.J. (Yongkui Jin); funding acquisition, X.X. and L.C. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Key R&D Program of China, grant number 2022YFD2000700; the Innovation Program of Chinese Academy of Agricultural Sciences, grant number CAAS-SAE-202301; the China Modern Agricultural Industrial Technology System, grant number CARS-12; the Key Research and Development Project of Shandong Province, grant number 2022SFGC0204-NJS; and the National Key Research and Development Plan, grant number 2022YFD2001603.

Data Availability Statement

The data that support the findings of this article are not publicly available due to privacy.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

References

  1. Fachinelli, R.; Melo, T.S.; Capristo, D.P.; Abreu, H.; Ceccon, G. Weeds in Soybean Crop after Annual Crops and Pasture. J. Neotrop. Agric. 2021, 1, e5563. [Google Scholar] [CrossRef]
  2. Fang, H.M.; Niu, M.M.; Xue, X.Y.; Ji, C.Y. Effects of mechanical-chemical synergistic weeding on weed control in maize field. Trans. Chin. Soc. Agric. Eng. 2022, 38, 44–51. [Google Scholar]
  3. Yang, Z.H.; Yang, H.M.; Yu, C.; Chen, Y.F.; Zhou, X.; Ma, Y.; Wang, X.N. Research Status and Analysis of Automatic Target Spraying Technology for Facility Vegetables. Xinjiang Agric. Sci. 2022, 58, 1547–1557. [Google Scholar]
  4. Su, J.Y.; Yi, D.W.; Matthew, C.; Liu, C.J.; Zhai, X.J.; Klaus, M.M.; Chen, W.H. Spectral analysis and mapping of blackgrass weed by leveraging machine learning and UAV multispectral imagery. Comput. Electron. Agric. 2022, 192, 106621. [Google Scholar] [CrossRef]
  5. Nik, N.; Ernest, D.; Madan, G. Assessment of Weed Classification Using Hyperspectral Reflectance and Optimal Multispectral UAV Imagery. Agronomy 2021, 11, 1435. [Google Scholar] [CrossRef]
  6. Li, H.; Qi, L.J.; Zhang, J.H.; Ji, R.H. Recognition of Weed during Cotton Emergence Based on Principal Component Analysis and Support Vector Machine. Trans. Chin. Soc. Agric. Mach. 2012, 43, 184–196. [Google Scholar]
  7. Zhao, P.; Wei, Z.X. Weed Recognition in Agricultural Field Using Multiple Feature Fusions. Trans. Chin. Soc. Agric. Mach. 2014, 45, 275–281. [Google Scholar]
  8. Miao, R.H.; Yang, H.; Wu, J.L.; Liu, H.Y. Weed identification of overlapping spinach leaves based on image sub-block and reconstruction. Trans. Chin. Soc. Agric. Eng. 2020, 36, 178–184. [Google Scholar]
  9. Wang, C.; Wu, X.H.; Li, Z.W. Recognition of maize and weed based on multi-scale hierarchical features extracted by convolutional neural network. Trans. Chin. Soc. Agric. Eng. 2018, 34, 144–151. [Google Scholar]
  10. Kong, S.; Li, J.; Zhai, Y.T.; Gao, Z.Y.; Zhou, Y.; Xu, Y.L. Real-Time Detection of Crops with Dense Planting Using Deep Learning at Seedling Stage. Agronomy 2023, 13, 1503. [Google Scholar] [CrossRef]
  11. Zhang, H.; Wang, Z.; Guo, Y.F.; Ma, Y.; Cao, W.K.; Chen, D.X.; Yang, S.B.; Gao, R. Weed Detection in Peanut Fields Based on Machine Vision. Agriculture 2022, 12, 1541. [Google Scholar] [CrossRef]
  12. Zhang, X.L.; Cui, J.; Liu, H.J.; Han, Y.Q.; Ai, H.F.; Dong, C.; Zhang, J.R.; Chu, Y.X. Weed Identification in Soybean Seedling Stage Based on Optimized Faster R-CNN Algorithm. Agriculture 2023, 13, 175. [Google Scholar] [CrossRef]
  13. Prabavadhi, J.; Kanmani, S. Mobile Based Deep Learning Application for Weed and Medicinal Plant Detection Using YOLOV5. In Proceedings of the 2023 International Conference on System, Computation, Automation and Networking (ICSCAN), Puducherry, India, 17–18 November 2023; pp. 1–5. [Google Scholar] [CrossRef]
  14. Sneha, N.; Sundaram, M.; Ranjan, R.; Abhishek. Weedspedia: Deep Learning-Based Approach for Weed Detection using R-CNN, YoloV3 and Centernet. In Proceedings of the 2023 International Conference on Quantum Technologies, Communications, Computing, Hardware and Embedded Systems Security (iQ-CCHESS), Kottayam, India, 15–16 September 2023; pp. 1–5. [Google Scholar] [CrossRef]
  15. García-Navarrete, O.L.; Santamaria, O.; Martín-Ramos, P.; Valenzuela-Mahecha, M.Á.; Navas-Gracia, L.M. Development of a Detection System for Types of Weeds in Maize (Zea mays L.) under Greenhouse Conditions Using the YOLOv5 v7.0 Model. Agriculture 2024, 14, 286. [Google Scholar] [CrossRef]
  16. Tang, L.; Tang, Y. How many weed species are known in China’s farmland. Pestic. Mark. Inf. 2023, 6, 63–64. [Google Scholar]
  17. Quan, L.; Feng, H.; Lv, Y. Maize seedling detection under different growth stages and complex field environments based on an improved Faster R–CNN. Biosyst. Eng. 2019, 184, 1–23. [Google Scholar] [CrossRef]
  18. Kanagasingham, S.; Ekpanyapong, M.; Chaihan, R. Integrating machine vision-based row guidance with GPS and compass-based routing to achieve autonomous navigation for a rice field weeding robot. Precis. Agric. 2020, 21, 831–855. [Google Scholar] [CrossRef]
  19. Akbarzadeh, S.; Paap, A.; Ahderom, S.; Apopei, B.; Alameh, K. Plant discrimination by Support Vector Machine classifier based on spectral reflectance. Comput. Electron. Agric. 2018, 148, 250–258. [Google Scholar] [CrossRef]
  20. Bawden, O.; Kulk, J.; Russell, R.; McCool, C.; Dayoub, F.; Lerhnert, C.; Perex, T. Robot for weed species plant-specific management. J. Field Robot. 2017, 34, 1179–1199. [Google Scholar] [CrossRef]
  21. Trygve, U.; Frode, U.; Anders, B.; Dorum, J.; Netland, J.; Overskeid, O.; Bergr, T.W.; Gravdahl, J.T. Robotic in-row weed control in vegetables. Comput. Electron. Agric. 2018, 154, 36–45. [Google Scholar] [CrossRef]
  22. Wang, A.C.; Zhang, W.; Wei, X.H. A review on weed detection using ground-based machine vision and image processing techniques. Comput. Electron. Agric. 2019, 158, 226–240. [Google Scholar] [CrossRef]
  23. Wu, Z.N.; Chen, Y.J.; Zhao, B.; Kang, X.B.; Ding, Y.Y. Review of Weed Detection Methods Based on Computer Vision. Sensors 2021, 21, 3647. [Google Scholar] [CrossRef]
  24. Wu, D.H.; Lv, S.C.; Jiang, M.; Song, H.B. Using channel pruning-based YOLO v4 deep learning algorithm for the real-time and accurate detection of apple flowers in natural environments. Comput. Electron. Agric. 2020, 178, 105742. [Google Scholar] [CrossRef]
  25. Ren, R.; Sun, H.X.; Zhang, S.J.; Wang, N.; Lu, X.Y.; Jing, J.P.; Xin, M.M.; Cui, T.Y. Intelligent Detection of Lightweight “Yuluxiang” Pear in Non-Structural Environment Based on YOLO-GEW. Agronomy 2023, 13, 2418. [Google Scholar] [CrossRef]
  26. Chechlinski, L.; Siemiatkowska, B.; Majewski, M. A System for Weeds and Crops Identification-Reaching over 10 FPS on Raspberry Pi with the Usage of MobileNets, DenseNet and Custom Modifications. Sensors 2019, 19, 3787. [Google Scholar] [CrossRef]
  27. Han, K.; Wang, Y.H.; Tian, Q.; Guo, J.Y.; Xu, C.J.; Xu, C. GhostNet: More Features from Cheap Operations. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 13–19 June 2020; pp. 1577–1586. [Google Scholar] [CrossRef]
  28. Peng, D.L.; Wang, T.X. Pruning algorithm based on GoogLeNet model. Control Decis. 2019, 34, 1259–1264. [Google Scholar]
  29. Sun, Y.L.; Ye, J.Y. Convolutional neural networks compression based on pruning and quantization. Comput. Sci. 2020, 47, 261–266. [Google Scholar]
  30. Fan, Y.; Tang, X.; Ma, Z. A weight-based channel pruning algorithm for depth-wise separable convolution unit. In Proceedings of the 2021 4th International Conference on Algorithms, Computing and Artificial Intelligence, Sanya, China, 22–24 December 2021; pp. 1–6. [Google Scholar] [CrossRef]
  31. Han, S.; Zhan, Y.; Liu, X. Variational automatic channel pruning algorithm based on structure optimization for convolutional neural networks. J. Internet Technol. 2021, 22, 339–351. [Google Scholar]
  32. Ye, Y.; You, G.M.; Fwu, J.K.; Zhu, X.; Yang, Q.; Zhu, Y. Channel pruning via optimal thresholding. In Neural Information Processing: Proceedings of the 27th International Conference, ICONIP 2020, Bangkok, Thailand, 18–22 November 2020; Springer: Cham, Switzerland, 2020; pp. 508–516. [Google Scholar] [CrossRef]
  33. Li, A.; Sun, S.J.; Zhao, C.Y.; Feng, M.T.; Wu, C.Z.; Li, W. Research on Lightweight of Improved YOLOv5s Track Obstacle Detection Model. Comput. Eng. Appl. 2023, 59, 197–207. [Google Scholar]
  34. Yang, J.; Zuo, H.; Huang, Q.; Sun, Q.; Li, S.; Li, L. Lightweight Method for Crop Leaf Disease Detection Model Based on YOLO v5s. Trans. Chin. Soc. Agric. Mach. 2023, 54, 222–229. [Google Scholar]
  35. Wan, A.; Hao, H.X.; Patnaik, K.; Xu, Y.Y.; Hadad, O.; Guera, D.; Ren, Z.L.; Shan, Q. UPSCALE: Unconstrained Channel Pruning. In Proceedings of the 40th International Conference on Machine Learning, Honolulu, HI, USA, 23–29 July 2023. [Google Scholar] [CrossRef]
  36. Padilla, R.; Netto, S.L.; Silva, E.A.B. A Survey on Performance Metrics for Object-Detection Algorithms. In Proceedings of the 2020 International Conference on Systems, Signals and Image Processing (IWSSIP), Niteroi, Brazil, 1–3 July 2020. [Google Scholar] [CrossRef]
  37. Cui, J. Research on Weed Recognition Method of Soybean Seedling Stage Based on Deep Learning. Master’s Thesis, Jilin Agricultural University, Changchun, China, 2023. [Google Scholar]
  38. Pictures of Weeds in Soybean Fields [EB/OL]. Available online: https://image.baidu.com/search/index?tn=baiduimage&ct=201326592&lm=-1&cl=2&ie=gb18030&word=%B4%F3%B6%B9%CC%EF%BC%E4%D4%D3%B2%DD%CD%BC%C6%AC&fr=ala&ala=1&alatpl=normal&pos=0&dyTabStr=MCwzLDIsMSw2LDQsNSw3LDgsOQ%3D%3D (accessed on 15 February 2020).
  39. Atlas of 207 Common Weeds in Chinese Farmland [EB/OL]. 5 November 2014. Available online: http://www.360doc.com/document/14/1105/17/14491712_422757344.shtml (accessed on 15 February 2020).
  40. Wu, Y.L.; Zhao, L.; Jiang, H.Y.; Guo, X. Image segmentation method for green crops using improved mean shift. Trans. Chin. Soc. Agric. Eng. 2014, 30, 161–167. [Google Scholar]
  41. Wu, L.L.; Xiong, L.R.; Peng, H. Quantitative evaluation of in-field rapeseed image segmentation based on RGB vegetation indices. J. Huazhong Agric. Univ. 2019, 38, 109–113. [Google Scholar]
  42. Hague, T.; Tillett, N.D.; Wheeler, H. Automated crop and weed monitoring in widely spaced cereals. Precis. Agric. 2006, 7, 21–32. [Google Scholar] [CrossRef]
  43. Woebbecke, D.M.; Meyer, G.E.; Von Bargen, K.; Mortensen, D.A. Shape features for identifying young weeds using image analysis. Trans. ASAE 1995, 38, 271–281. [Google Scholar] [CrossRef]
  44. Hamuda, E.; Glavin, M.; Jones, E. A survey of image processing techniques for plant extraction and segmentation in the field. Comput. Electron. Agric. 2016, 125, 184–199. [Google Scholar] [CrossRef]
  45. Hu, L.; Luo, X.W.; Zeng, S.; Zhang, Z.G.; Chen, X.F.; Lin, C.X. Plant recognition and localization for intra-row mechanical weeding device based on machine vision. Trans. Chin. Soc. Agric. Eng. 2013, 29, 12–18. [Google Scholar]
  46. Gee, C.; Bossu, J.; Jones, G.; Truchetet, F. Crop/weed discrimination in perspective agronomic images. Comput. Electron. Agric. 2008, 60, 49–59. [Google Scholar] [CrossRef]
  47. Aaron, A.; Hassan, M.; Hamada, M.; Kakudi, H. A Lightweight Deep Learning Model for Identifying Weeds in Corn and Soybean Using Quantization. Eng. Proc. 2023, 56, 318. [Google Scholar] [CrossRef]
  48. Khan, S.; Tufail, M.; Khan, M.T.; Khan, Z.A.; Anwar, S. Deep learning-based identification system of weeds and crops in strawberry and pea fields for a precision agriculture sprayer. Precis. Agric. 2021, 22, 1711–1727. [Google Scholar] [CrossRef]
  49. Wang, Z.; Xu, X.S.; Hua, Z.X.; Shang, Y.Y.; Duan, Y.C.; Song, H.B. Lightweight recognition for the oestrus behavior of dairy cows combining YOLO v5n and channel pruning. Trans. Chin. Soc. Agric. Eng. 2022, 38, 130–140. [Google Scholar]
  50. Ma, H.; Dong, K.; Wang, Y.; Wei, S.; Huang, W.; Gou, J. Lightweight Plant Recognition Model Based on Improved YOLO v5s. Trans. Chin. Soc. Agric. Mach. 2023, 54, 267–276. [Google Scholar]
  51. Zhu, H.B.; Zhang, Y.Y.; Mu, D.L.; Bai, L.Z.; Wu, X.; Zhuang, H.; Li, H. Research on improved YOLOx weed detection based on lightweight attention module. Crop Prot. 2024, 177, 106563. [Google Scholar] [CrossRef]
  52. Firozeh, S.; Angelo, C.; Giovanni, D.; Angelo, S.S.; Francesco, C.; Vito, R. Optimizing tomato plant phenotyping detection: Boosting YOLOv8 architecture to tackle data complexity. Comput. Electron. Agric. 2024, 218, 108728. [Google Scholar] [CrossRef]
Figure 1. Data augmentation.
Figure 2. Improved YOLOv8 network structure diagram.
Figure 3. The convolution layer and Ghost module: (a) The convolutional layer and (b) The Ghost module.
Figure 4. Ghost structure schematic.
Figure 5. Common weeds in soybean fields (images collected from the Internet).
Figure 6. Comparison of the number of channels in different layers before and after pruning.
Figure 7. Weed segmentation map: (a) soybean recognition without weeds, (b) soybean recognition with weeds, (c) segmentation binary map without weeds, (d) segmentation binary map with weeds, (e) filtered binary map without weeds, (f) filtered binary map with weeds.
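As a reading aid for the pipeline summarized in Figure 7, the following Python sketch illustrates one plausible way to combine the detector with NExG-based segmentation: soybean seedlings are detected first, their bounding boxes are removed from a green-vegetation binary map, and the remaining green pixels are treated as weeds. This is a minimal illustration, not the authors' released code; the weight file name, image path, confidence threshold, and the use of Otsu thresholding on the NExG map are assumptions.

```python
# Minimal sketch (assumed, not the authors' code): detect soybean with a trained
# YOLOv8 model, then remove its boxes from an NExG binary map so that the
# remaining vegetation is treated as weed.
import cv2
import numpy as np
from ultralytics import YOLO

model = YOLO("yolov8ngp_pruned60.pt")      # hypothetical weight file
img = cv2.imread("soybean_row.jpg")        # BGR field image (hypothetical path)

# 1. Detect soybean seedlings.
det = model.predict(img, conf=0.5, verbose=False)[0]

# 2. Green-vegetation binary map from the normalized excess green index (NExG).
b, g, r = cv2.split(img.astype(np.float32) / 255.0)
s = b + g + r + 1e-6
nexg = 2 * (g / s) - (r / s) - (b / s)
nexg_u8 = np.clip(nexg * 255, 0, 255).astype(np.uint8)
_, veg_mask = cv2.threshold(nexg_u8, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# 3. Filter out detected soybean regions; what remains is taken to be weeds.
weed_mask = veg_mask.copy()
for x1, y1, x2, y2 in det.boxes.xyxy.cpu().numpy().astype(int):
    weed_mask[y1:y2, x1:x2] = 0

cv2.imwrite("weed_mask.png", weed_mask)
```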
Table 1. Distribution of the datasets.

Dataset              Train   Valid   Test
Original Images      1232    154     154
Data Augmentation    3600    /       /
Total Number         4832    154     154
Table 2. Common vegetation indices.

Vegetation Index   Formula
CIVE               CIVE = 0.441r − 0.811g + 0.385b + 18.78745
ExG                ExG = 2G − R − B
ExR                ExR = 1.4R − G
ExGR               ExGR = ExG − ExR
VEG                VEG = G / (R^0.667 · B^0.333)
COM                COM = 0.25·ExG + 0.3·ExGR + 0.33·CIVE + 0.12·VEG
NExG               NExG = 2g − r − b
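To make the formulas in Table 2 concrete, the short sketch below evaluates each index on a single example pixel with NumPy. The pixel values are invented for illustration only; as in the table, upper-case R, G, B denote raw channel values and lower-case r, g, b denote channels normalized by R + G + B.

```python
# Worked example (illustrative values only) of the vegetation indices in Table 2.
import numpy as np

R, G, B = 72.0, 141.0, 58.0                  # example pixel (assumed values)
r, g, b = np.array([R, G, B]) / (R + G + B)  # normalized channels

ExG  = 2 * G - R - B
ExR  = 1.4 * R - G
ExGR = ExG - ExR
CIVE = 0.441 * r - 0.811 * g + 0.385 * b + 18.78745
VEG  = G / (R ** 0.667 * B ** 0.333)
COM  = 0.25 * ExG + 0.3 * ExGR + 0.33 * CIVE + 0.12 * VEG
NExG = 2 * g - r - b                         # the index used in this study

print(f"ExG={ExG:.1f}, NExG={NExG:.3f}, CIVE={CIVE:.3f}, VEG={VEG:.3f}, COM={COM:.3f}")
```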
Table 3. Hardware and software environments.

Configuration            Parameter
Operating System         Windows 10 Professional Edition (Microsoft Corp., Redmond, WA, USA)
CPU                      Intel Core i5-13600KF (Intel Corp., Santa Clara, CA, USA)
GPU                      Nvidia RTX 4070 Ti, 12 GB (Nvidia Corp., Santa Clara, CA, USA)
Python                   3.8
PyTorch                  2.0.1
CUDA                     11.8
cuDNN                    8.9.1
Data Annotation Tool     Labelling
Table 4. Recognition performance with the enhanced dataset.

Dataset              Model      P/%    R/%    mAP@0.5/%   F1/%
Original dataset     YOLOv8n    92.4   92.2   96.5        92.3
Augmented dataset    YOLOv8n    93.4   93.2   97.4        93.3
Table 5. Recognition performance under different lightweight backbone networks.

Model                     P/%    R/%    mAP@0.5/%   F1/%   FLOPs/G   Inference Time/ms   Model Size/MB
YOLOv8n                   93.4   93.2   97.4        93.3   8.1       1.7                 5.95
YOLOv8n + GhostNet        93.1   94.1   97.3        93.6   6.4       1.8                 4.77
YOLOv8n + ShuffleNetv2    89.9   90.9   96.2        89.9   5.1       1.4                 3.70
YOLOv8n + FasterNext      93.8   92.6   97.3        93.2   7.5       1.7                 5.11
Table 6. Recognition performance under different pruning rates.

Model (Pruning Rate/%)                  P/%    R/%    mAP@0.5/%   F1/%   FLOPs/G   Inference Time/ms   Model Size/MB
YOLOv8nG (unpruned)                     93.1   94.1   97.3        93.6   6.4       1.8                 4.77
Unconstrained channel pruning, 30%      93.5   93.4   97.2        93.4   3.7       2.1                 3.75
Unconstrained channel pruning, 40%      92.9   94.1   97.2        93.5   3.8       2.3                 3.35
Unconstrained channel pruning, 50%      93.6   92.6   97.2        93.1   3.4       2.2                 2.78
Unconstrained channel pruning, 60%      94.5   94.3   97.6        94.4   3.0       2.2                 2.35
Unconstrained channel pruning, 70%      93.5   93.4   97.2        93.4   3.7       2.1                 3.75
Table 7. Recognition performance of ablation experiments.

Model         GhostNet   Prune   P/%    R/%    mAP@0.5/%   F1/%   Inference Time/ms   Model Size/MB
YOLOv8n       –          –       93.4   93.2   97.4        93.3   1.7                 5.95
YOLOv8n-G     ✓          –       93.1   94.1   97.2        93.6   1.8                 4.77
YOLOv8n-P     –          ✓       93.3   94.1   97.4        93.7   2.2                 2.92
YOLOv8n-GP    ✓          ✓       94.5   94.3   97.6        94.4   2.2                 2.35
Table 8. Experimental results under different lightweight algorithms.

Model          P/%    R/%    mAP@0.5/%   F1/%   FLOPs/G   Inference Time/ms   Model Size/MB
YOLOv8nGP      94.5   94.3   97.6        94.4   3.0       2.2                 2.35
YOLOv7n        93.4   89.9   96.2        91.6   6.6       3.6                 4.78
YOLOv7-tiny    94.0   93.2   97.4        93.6   13.0      4.9                 11.7
YOLOv5n        93.9   92.1   96.8        93.0   7.8       2.3                 5.29
YOLOv5-Lite    88.9   86.5   93.3        87.7   3.6       4.4                 3.40
YOLOv3n        94.1   93.6   97.6        93.8   12.0      2.0                 7.94
YOLOv3-tiny    94.0   93.5   96.9        93.7   18.9      2.6                 23.2
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
