The Applications of Deep Learning in Smart Agriculture

A special issue of Agronomy (ISSN 2073-4395). This special issue belongs to the section "Precision and Digital Agriculture".

Deadline for manuscript submissions: closed (31 July 2024) | Viewed by 28,630

Special Issue Editors


Guest Editor
Department of Natural Resources and Agricultural Engineering, Agricultural University of Athens, 75 Iera Odos St., 11855 Athens, Greece
Interests: deep learning; computer vision; natural language processing; multimodal learning; self-supervised learning; domain adaptation

Guest Editor
Interdisciplinary Centre for Data and AI, School of Natural and Computing Sciences, University of Aberdeen, Aberdeen AB24 3FX, UK
Interests: deep learning; multimodal learning; capsule neural networks; self-supervised learning; domain adaptation; privacy-preserving technologies; efficient deep learning systems

Special Issue Information

Dear Colleagues,

Due to substantial population growth, ensuring the global availability of high-quality food without harming natural ecosystems has become a pressing concern, and agriculture, with its significant economic and environmental impact, is central to addressing it. Advancing toward smart agriculture has therefore become an unavoidable step, meaning that emerging technologies should be integrated into important agricultural tasks (e.g., phenotyping, disease detection, yield prediction, harvesting, and spraying).

Many of these emerging technologies are related to deep learning, a field of artificial intelligence in which the relevant features of a decision/prediction problem are extracted automatically. The relationship between agriculture and deep learning has become rather promising in recent years; specifically, positive results have been reported with deep-learning-based techniques such as transfer learning, domain adaptation/generalization, transformer-based architectures, generative adversarial networks, knowledge distillation, and neural architecture search. These techniques, which directly improve the current methods used in precision agriculture, could boost the value of different types of data: from images and videos to the text found in regulatory documents, as well as tabular data containing vegetation indices across the growing season.
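One of the techniques named above, knowledge distillation, can be sketched in a few lines: a compact student network is trained to match the temperature-softened output distribution of a larger teacher. The sketch below uses the standard temperature-softened cross-entropy; the logit values are purely illustrative and are not taken from any paper in this issue.

```python
import math

def softmax(logits, T=1.0):
    """Softmax with temperature T; higher T softens the distribution."""
    m = max(logits)
    exps = [math.exp((z - m) / T) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

def distillation_loss(student_logits, teacher_logits, T=2.0):
    """Cross-entropy between the softened teacher and student distributions,
    scaled by T^2 as in the usual distillation formulation."""
    p = softmax(teacher_logits, T)  # soft targets from the teacher
    q = softmax(student_logits, T)  # student predictions
    return -sum(pi * math.log(qi) for pi, qi in zip(p, q)) * T * T

# A student that matches the teacher incurs a lower loss than one that does not.
teacher = [4.0, 1.0, -2.0]
loss_match = distillation_loss([4.0, 1.0, -2.0], teacher)
loss_off = distillation_loss([-2.0, 1.0, 4.0], teacher)
```

The soft targets carry inter-class similarity information that hard labels discard, which is what lets a small model inherit some of the large model's behavior.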

Thus, this Special Issue aims to provide a venue for papers at the intersection of the agricultural domain and deep-learning-based techniques.

Dr. Borja Espejo-García
Dr. Spyros Fountas
Dr. Georgios Leontidis
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website. Once registered, authors can access the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Agronomy is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • smart farming
  • deep learning
  • precision agriculture
  • computer vision
  • natural language processing
  • machine learning
  • sensors
  • multi-modal information
  • farm machinery

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found on the MDPI website.

Published Papers (19 papers)


Research

23 pages, 7622 KiB  
Article
Using Pseudo-Color Maps and Machine Learning Methods to Estimate Long-Term Salinity of Soils
by Ravil I. Mukhamediev, Alexey Terekhov, Yedilkhan Amirgaliyev, Yelena Popova, Dmitry Malakhov, Yan Kuchin, Gulshat Sagatdinova, Adilkhan Symagulov, Elena Muhamedijeva and Pavel Gricenko
Agronomy 2024, 14(9), 2103; https://doi.org/10.3390/agronomy14092103 - 15 Sep 2024
Viewed by 767
Abstract
Soil salinity assessment methods based on remote sensing data are a common topic of scientific research. However, the developed methods, as a rule, estimate relatively small areas of the land surface at certain moments of the season, tied to the timing of ground surveys. Considerable variability of weather conditions and the state of the earth surface makes it difficult to assess the salinity level with the help of remote sensing data and to verify it within a year. At the same time, the assessment of salinity on the basis of multiyear data allows reducing the level of seasonal fluctuations to a considerable extent and revealing the statistically stable characteristics of cultivated areas of land surface. Such an approach allows, in our opinion, the processes of mapping the salinity of large areas of cultivated lands to be automated considerably. The authors propose an approach to assess the salinization of cultivated and non-cultivated soils of arid zones on the basis of long-term averaged values of vegetation indices and salinity indices. This approach allows revealing the consistent relationships between the characteristics of spectral indices and salinization parameters. Based on this approach, this paper presents a mapping method including the use of multiyear data and machine learning algorithms to classify soil salinity levels in one of the regions of South Kazakhstan. Verification of the method was carried out by comparing the obtained salinity assessment with the expert data and the results of laboratory tests of soil samples. The percentage of “gross” errors of the method, in other words, errors when the predicted salinity class differs by more than one position compared to the actual one, is 22–28% (accuracy is 0.78–0.72). The obtained results allow recommending the developed method for the assessment of long-term trends of secondary salinization of irrigated arable land in arid areas. Full article
(This article belongs to the Special Issue The Applications of Deep Learning in Smart Agriculture)
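The multiyear-averaging idea at the core of this approach can be illustrated with a minimal sketch: compute a spectral index per season, then average across years to damp seasonal fluctuation before assigning a salinity class. The NDVI formula is standard; the reflectance values and the class threshold are illustrative assumptions, not the paper's.

```python
def ndvi(nir, red):
    """Normalized Difference Vegetation Index from NIR and red reflectance."""
    return (nir - red) / (nir + red)

# One field's (NIR, red) reflectance at the same phenological stage in three
# consecutive years (illustrative numbers only).
observations = [(0.42, 0.18), (0.38, 0.22), (0.45, 0.16)]

yearly = [ndvi(n, r) for n, r in observations]
multiyear_mean = sum(yearly) / len(yearly)

# Illustrative thresholding into coarse classes: persistently low vegetation
# vigour on cultivated land is one indicator of salinization.
salinity_class = "low" if multiyear_mean > 0.35 else "moderate-to-high"
```

In the paper, the machine learning classifier replaces this hand-set threshold and combines several vegetation and salinity indices.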

25 pages, 12480 KiB  
Article
EFS-Former: An Efficient Network for Fruit Tree Leaf Disease Segmentation and Severity Assessment
by Donghui Jiang, Miao Sun, Shulong Li, Zhicheng Yang and Liying Cao
Agronomy 2024, 14(9), 1992; https://doi.org/10.3390/agronomy14091992 - 2 Sep 2024
Viewed by 860
Abstract
Fruit is a major source of vitamins, minerals, and dietary fiber in people’s daily lives. Leaf diseases caused by climate change and other factors have significantly reduced fruit production. Deep learning methods for segmenting leaf diseases can effectively mitigate this issue. However, challenges such as leaf folding, jaggedness, and light shading make edge feature extraction difficult, affecting segmentation accuracy. To address these problems, this paper proposes a method based on EFS-Former. The expanded local detail (ELD) module extends the model’s receptive field by expanding the convolution, better handling fine spots and effectively reducing information loss. H-attention reduces computational redundancy by superimposing multi-layer convolutions, significantly improving feature filtering. The parallel fusion architecture effectively utilizes the different feature extraction intervals of the convolutional neural network (CNN) and Transformer encoders, achieving comprehensive feature extraction and effectively fusing detailed and semantic information in the channel and spatial dimensions within the feature fusion module (FFM). Experiments show that, compared to DeepLabV3+, this method achieves 10.78%, 9.51%, 0.72%, and 8.00% higher scores for mean intersection over union (mIoU), mean pixel accuracy (mPA), accuracy (Acc), and F_score, respectively, while having 1.78 M fewer total parameters and 0.32 G lower floating point operations per second (FLOPS). Additionally, it effectively calculates the ratio of leaf area occupied by spots. This method is also effective in calculating the disease period by analyzing the ratio of leaf area occupied by diseased spots. The method’s overall performance is evaluated using mIoU, mPA, Acc, and F_score metrics, achieving 88.60%, 93.49%, 98.60%, and 95.90%, respectively. 
In summary, this study offers an efficient and accurate method for fruit tree leaf spot segmentation, providing a solid foundation for the precise analysis of fruit tree leaves and spots, and supporting smart agriculture for precision pesticide spraying. Full article
(This article belongs to the Special Issue The Applications of Deep Learning in Smart Agriculture)
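The receptive-field expansion that the ELD module relies on can be checked with simple arithmetic: for stride-1 convolutions, each layer adds (k - 1)·d to the receptive field, so dilation enlarges it at no extra parameter cost. A sketch under that standard formula (the layer configuration is illustrative, not the ELD module itself):

```python
def receptive_field(layers):
    """Receptive field of a stack of stride-1 convolutions.

    layers: list of (kernel_size, dilation) tuples.
    For stride-1 stacks, rf = 1 + sum((k - 1) * d).
    """
    return 1 + sum((k - 1) * d for k, d in layers)

plain = receptive_field([(3, 1), (3, 1), (3, 1)])    # three plain 3x3 convs
dilated = receptive_field([(3, 1), (3, 2), (3, 4)])  # same kernels, dilations 1, 2, 4
```

With the same three 3x3 kernels, the dilated stack roughly doubles the receptive field, which is what helps capture context around fine disease spots without extra parameters.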

22 pages, 8627 KiB  
Article
Enhancing Panax notoginseng Leaf Disease Classification with Inception-SSNet and Image Generation via Improved Diffusion Model
by Ruoxi Wang, Xiaofan Zhang, Qiliang Yang, Lian Lei, Jiaping Liang and Ling Yang
Agronomy 2024, 14(9), 1982; https://doi.org/10.3390/agronomy14091982 - 1 Sep 2024
Cited by 1 | Viewed by 717
Abstract
The rapid and accurate classification of Panax notoginseng leaf diseases is vital for timely disease control and reducing economic losses. Recently, image classification algorithms have shown great promise for plant disease diagnosis, but dataset quantity and quality are crucial. Moreover, classifying P. notoginseng leaf diseases faces severe challenges, including the small features of anthrax and the strong similarity between round spot and melasma diseases. In order to address these problems, we have proposed an ECA-based diffusion model and Inception-SSNet for the classification of the six major P. notoginseng leaf diseases, namely gray mold, powdery mildew, virus infection, anthrax, melasma, and round spot. Specifically, we propose an image generation scheme, in which the lightweight attention mechanism, ECA, is used to capture the dependencies between channels for improving the dataset quantity and quality. To extract disease features more accurately, we developed an Inception-SSNet hybrid model with skip connection, attention feature fusion, and self-calibrated convolution. These innovative methods enable the model to make better use of local and global information, especially when dealing with diseases with similar features and small targets. The experimental results show that our proposed ECA-based diffusion model reaches an FID of 42.73, an improvement of 74.71% over the baseline model. Further, we tested the classification models using the generated P. notoginseng leaf disease dataset, and the accuracy of 11 mainstream classification models was improved. Our proposed Inception-SSNet classification model achieves an accuracy of 97.04% on the non-generated dataset, which is an improvement of 0.11% compared with the baseline model. On the generated dataset, the accuracy reached 99.44%, which is an improvement of 1.02% compared to the baseline model. This study provides an effective solution for the monitoring of Panax notoginseng diseases.
(This article belongs to the Special Issue The Applications of Deep Learning in Smart Agriculture)
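The ECA mechanism named here avoids SE-style dimensionality reduction: each channel is pooled to a scalar, a small 1D convolution runs across neighbouring channels, and a sigmoid gate rescales the feature map. A numpy sketch in which uniform weights stand in for the learned kernel (the feature map is random, purely illustrative):

```python
import numpy as np

def eca(x, k=3):
    """Efficient Channel Attention sketch for a (C, H, W) feature map.

    A uniform 1D-conv kernel of size k stands in for the learned one."""
    c = x.mean(axis=(1, 2))                # squeeze: one descriptor per channel
    pad = k // 2
    cp = np.pad(c, pad, mode="edge")       # pad so output length matches C
    conv = np.array([cp[i:i + k].mean() for i in range(c.size)])
    gate = 1.0 / (1.0 + np.exp(-conv))     # sigmoid gate in (0, 1)
    return x * gate[:, None, None]         # channel-wise rescaling

feat = np.random.default_rng(0).random((8, 4, 4))
out = eca(feat)
```

Because the kernel only mixes a few neighbouring channels, ECA adds k parameters per layer, which is why it suits a lightweight generator.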

14 pages, 3987 KiB  
Article
Research on an Intelligent Seed-Sorting Method and Sorter Based on Machine Vision and Lightweight YOLOv5n
by Yubo Feng, Xiaoshun Zhao, Ruitao Tian, Chenyang Liang, Jingyan Liu and Xiaofei Fan
Agronomy 2024, 14(9), 1953; https://doi.org/10.3390/agronomy14091953 - 29 Aug 2024
Viewed by 733
Abstract
To address the current issues of low intelligence and accuracy in seed-sorting devices, an intelligent seed sorter was developed in this study using machine-vision technology and the lightweight YOLOv5n. The machine consisted of a transmission system, feeding system, image acquisition system, and seed screening system. A lightweight YOLOv5n model, FS-YOLOv5n, was trained using 4756 images, incorporating FasterNet, Local Convolution (PConv), and a squeeze-and-excitation (SE) attention mechanism to improve feature extraction efficiency, detection accuracy, and reduce redundancy. Taking ‘Zhengdan 958’ corn seeds as the research object, a quality identification and seed sorting test was conducted on six test groups (each consisting of 1000 seeds) using the FS-YOLOv5n model. Following lightweight improvements, the machine showed an 81% reduction in parameters and floating-point operations compared to baseline models. The intelligent seed sorter achieved an average sorting rate of 90.76%, effectively satisfying the seed-sorting requirements. Full article
(This article belongs to the Special Issue The Applications of Deep Learning in Smart Agriculture)
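The squeeze-and-excitation (SE) attention incorporated into FS-YOLOv5n can be sketched outside any detector: squeeze each channel to a scalar by global average pooling, pass the channel vector through a small bottleneck, and rescale channels by a sigmoid gate. Random weights stand in for learned parameters here; nothing below is the paper's actual network.

```python
import numpy as np

def se_block(x, w1, w2):
    """Squeeze-and-excitation sketch for a (C, H, W) feature map."""
    z = x.mean(axis=(1, 2))                  # squeeze: (C,) channel descriptors
    h = np.maximum(z @ w1, 0.0)              # excitation: ReLU bottleneck
    gate = 1.0 / (1.0 + np.exp(-(h @ w2)))   # sigmoid gate per channel
    return x * gate[:, None, None]           # channel-wise rescaling

rng = np.random.default_rng(0)
c, r = 8, 4                                  # channels, reduction ratio
x = rng.random((c, 5, 5))
w1 = rng.standard_normal((c, c // r)) * 0.1  # stand-ins for learned weights
w2 = rng.standard_normal((c // r, c)) * 0.1
y = se_block(x, w1, w2)
```

The gate lets the network emphasize channels that respond to seed-quality cues and suppress the rest, at the cost of only two tiny weight matrices.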

16 pages, 3200 KiB  
Article
Automated Assessment of Wheat Leaf Disease Spore Concentration Using a Smart Microscopy Scanning System
by Olga V. Doroshenko, Mikhail V. Golub, Oksana Yu. Kremneva, Pavel S. Shcherban’, Andrey S. Peklich, Roman Yu. Danilov, Ksenia E. Gasiyan, Artem V. Ponomarev, Ilya N. Lagutin, Ilya A. Moroz and Victor K. Postovoy
Agronomy 2024, 14(9), 1945; https://doi.org/10.3390/agronomy14091945 - 28 Aug 2024
Viewed by 647
Abstract
An advanced approach to the automated assessment of a microscopic slide containing spores is presented. The objective is to develop an intelligent system for the rapid and precise estimation of phytopathogenic spore concentration on microscopic slides, thereby enabling automated processing. The smart microscopy scanning system comprises an electronic microscope, a coordinate table, and software for the control of the coordinate table and image processing. The developed smart microscopy scanning system processes the entire microscope slide with multiple exposed strips, which are automatically determined based on the novel two-stage algorithm. The analysis of trained convolutional neural networks employed for the detection of spore phytopathogens demonstrates high precision and recall metrics. The system is capable of identifying and counting the number of spores of phytopathogenic fungi species Blumeria graminis, Puccinia striiformis, and Pyrenophora tritici-repentis on each exposed strip. A methodology for estimating the spore distribution on a microscopic slide is proposed, which involves calculating the average spore concentration density. Full article
(This article belongs to the Special Issue The Applications of Deep Learning in Smart Agriculture)
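Once per-strip spore counts are available from the detector, the proposed concentration estimate reduces to straightforward arithmetic: total detected spores divided by total scanned area. A sketch with illustrative numbers (the strip area and counts are assumptions, not values from the paper):

```python
def spore_concentration(counts_per_strip, strip_area_mm2):
    """Average spore concentration density (spores/mm^2) over all exposed strips."""
    total_spores = sum(counts_per_strip)
    total_area = len(counts_per_strip) * strip_area_mm2
    return total_spores / total_area

# Illustrative: five exposed strips of 180 mm^2 each
density = spore_concentration([12, 7, 15, 9, 11], strip_area_mm2=180.0)
```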

19 pages, 15665 KiB  
Article
A Novel Fusion Perception Algorithm of Tree Branch/Trunk and Apple for Harvesting Robot Based on Improved YOLOv8s
by Bin Yan, Yang Liu and Wenhui Yan
Agronomy 2024, 14(9), 1895; https://doi.org/10.3390/agronomy14091895 - 24 Aug 2024
Cited by 1 | Viewed by 928
Abstract
Aiming to accurately identify apple targets and achieve segmentation and the extraction of branch and trunk areas of apple trees, providing visual guidance for a picking robot to actively adjust its posture to avoid branch trunks for obstacle avoidance fruit picking, the spindle-shaped fruit trees, which are widely planted in standard modern apple orchards, were focused on, and an algorithm for apple tree fruit detection and branch segmentation for picking robots was proposed based on an improved YOLOv8s model design. Firstly, image data of spindle-shaped fruit trees in modern apple orchards were collected, and annotations of object detection and pixel-level segmentation were conducted on the data. Training set data were then augmented to improve the generalization performance of the apple detection and branch segmentation algorithm. Secondly, the original YOLOv8s network architecture’s design was improved by embedding the SE module visual attention mechanism after the C2f module of the YOLOv8s Backbone network architecture. Finally, the dynamic snake convolution module was embedded into the Neck structure of the YOLOv8s network architecture to better extract feature information of different apple targets and tree branches. The experimental results showed that the proposed improved algorithm can effectively recognize apple targets in images and segment tree branches and trunks. For apple recognition, the precision was 99.6%, the recall was 96.8%, and the mAP value was 98.3%. The mAP value for branch and trunk segmentation was 81.6%. The proposed improved YOLOv8s algorithm design was compared with the original YOLOv8s, YOLOv8n, and YOLOv5s algorithms for the recognition of apple targets and segmentation of tree branches and trunks on test set images. The experimental results showed that compared with the other three algorithms, the proposed algorithm increased the mAP for apple recognition by 1.5%, 2.3%, and 6%, respectively. 
The mAP for tree branch and trunk segmentation was increased by 3.7%, 15.4%, and 24.4%, respectively. The proposed detection and segmentation algorithm for apple tree fruits, branches, and trunks is of great significance for ensuring the success rate of robot harvesting, which can provide technical support for the development of an intelligent apple harvesting robot. Full article
(This article belongs to the Special Issue The Applications of Deep Learning in Smart Agriculture)

18 pages, 7039 KiB  
Article
Two-Stage Detection Algorithm for Plum Leaf Disease and Severity Assessment Based on Deep Learning
by Caihua Yao, Ziqi Yang, Peifeng Li, Yuxia Liang, Yamin Fan, Jinwen Luo, Chengmei Jiang and Jiong Mu
Agronomy 2024, 14(7), 1589; https://doi.org/10.3390/agronomy14071589 - 21 Jul 2024
Cited by 2 | Viewed by 1074
Abstract
Crop diseases significantly impact crop yields, and promoting specialized control of crop diseases is crucial for ensuring agricultural production stability. Disease identification primarily relies on human visual inspection, which is inefficient, inaccurate, and subjective. This study focused on the plum red spot (Polystigma rubrum), proposing a two-stage detection algorithm based on deep learning and assessing the severity of the disease through lesion coverage rate. The specific contributions are as follows: We utilized the object detection model YOLOv8 to strip leaves to eliminate the influence of complex backgrounds. We used an improved U-Net network to segment leaves and lesions. We combined Dice Loss with Focal Loss to address the poor training performance due to the pixel ratio imbalance between leaves and disease spots. For inconsistencies in the size and shape of leaves and lesions, we utilized ODConv and MSCA so that the model could focus on features at different scales. After verification, the accuracy rate of leaf recognition is 95.3%, and the mIoU, mPA, mPrecision, and mRecall of the leaf disease segmentation model are 90.93%, 95.21%, 95.17%, and 95.21%, respectively. This research provides an effective solution for the detection and severity assessment of plum leaf red spot disease under complex backgrounds. Full article
(This article belongs to the Special Issue The Applications of Deep Learning in Smart Agriculture)
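The Dice-plus-Focal combination used to counter the leaf/lesion pixel imbalance can be sketched for binary masks: Dice optimizes region overlap directly, while Focal down-weights pixels that are already classified well. The 50/50 weighting and the alpha-free focal form below are illustrative simplifications, not necessarily the paper's exact configuration.

```python
import numpy as np

def dice_loss(pred, target, eps=1e-6):
    """1 - Dice coefficient; pred holds probabilities, target a 0/1 mask."""
    inter = (pred * target).sum()
    return 1.0 - (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

def focal_loss(pred, target, gamma=2.0, eps=1e-7):
    """Focal loss: the (1 - p_t)^gamma factor down-weights easy pixels."""
    p = np.clip(pred, eps, 1.0 - eps)
    return -np.mean(target * (1 - p) ** gamma * np.log(p)
                    + (1 - target) * p ** gamma * np.log(1 - p))

def combined_loss(pred, target, w=0.5):
    return w * dice_loss(pred, target) + (1 - w) * focal_loss(pred, target)

target = np.array([[1.0, 1.0], [0.0, 0.0]])
good = np.array([[0.9, 0.8], [0.1, 0.2]])  # confident, mostly correct
bad = np.array([[0.2, 0.1], [0.9, 0.8]])   # confidently wrong
```

Because Dice is computed over the whole mask, small lesions still contribute meaningfully to the gradient even when background pixels dominate.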

15 pages, 6287 KiB  
Article
Research on Improved Road Visual Navigation Recognition Method Based on DeepLabV3+ in Pitaya Orchard
by Lixue Zhu, Wenqian Deng, Yingjie Lai, Xiaogeng Guo and Shiang Zhang
Agronomy 2024, 14(6), 1119; https://doi.org/10.3390/agronomy14061119 - 24 May 2024
Cited by 3 | Viewed by 987
Abstract
Traditional DeepLabV3+ image semantic segmentation methods face challenges in pitaya orchard environments characterized by multiple interference factors, complex image backgrounds, high computational complexity, and extensive memory consumption. This paper introduces an improved visual navigation path recognition method for pitaya orchards. Initially, DeepLabV3+ utilizes a lightweight MobileNetV2 as its primary feature extraction backbone, which is augmented with a Pyramid Split Attention (PSA) module placed after the Atrous Spatial Pyramid Pooling (ASPP) module. This improvement enhances the spatial feature representation of feature maps, thereby sharpening the segmentation boundaries. Additionally, an Efficient Channel Attention Network (ECANet) mechanism is integrated with the lower-level features of MobileNetV2 to reduce computational complexity and refine the clarity of target boundaries. The paper also designs a navigation path extraction algorithm, which fits the road mask regions segmented by the model to achieve precise navigation path recognition. Experimental findings show that the enhanced DeepLabV3+ model achieved a Mean Intersection over Union (MIoU) and average pixel accuracy of 95.79% and 97.81%, respectively. These figures represent increases of 0.59 and 0.41 percentage points when contrasted with the original model. Furthermore, the model’s memory consumption is reduced by 85.64%, 84.70%, and 85.06% when contrasted with the Pyramid Scene Parsing Network (PSPNet), U-Net, and Fully Convolutional Network (FCN) models, respectively. This reduction makes the proposed model more efficient while maintaining high segmentation accuracy, thus supporting enhanced operational efficiency in practical applications. The test results for navigation path recognition accuracy reveal that the angle error between the navigation centerline extracted using the least squares method and the manually fitted centerline is less than 5°. 
Additionally, the average deviation between the road centerlines extracted under three different lighting conditions and the actual road centerline is only 2.66 pixels, with an average image recognition time of 0.10 s. This performance suggests that the study can provide an effective reference for visual navigation in smart agriculture. Full article
(This article belongs to the Special Issue The Applications of Deep Learning in Smart Agriculture)
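The least-squares centerline fit used in the evaluation can be sketched directly: fit x = a·y + b through the per-row centres of the segmented road mask (fitting x as a function of y suits a near-vertical navigation line). The centre points below are illustrative, not extracted from the paper's imagery.

```python
def fit_centerline(points):
    """Least-squares fit of x = a*y + b through (x, y) road-centre points."""
    n = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    syy = sum(y * y for _, y in points)
    sxy = sum(x * y for x, y in points)
    a = (n * sxy - sx * sy) / (n * syy - sy * sy)
    b = (sx - a * sy) / n
    return a, b

# Per-row mask centres of a slightly slanted road (illustrative)
centres = [(100.0, 0.0), (102.0, 10.0), (104.0, 20.0), (106.0, 30.0)]
a, b = fit_centerline(centres)  # these points lie exactly on x = 0.2*y + 100
```

The angle error reported above is then just the angle between this fitted line and a manually annotated one.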

19 pages, 10865 KiB  
Article
Organ Segmentation and Phenotypic Trait Extraction of Cotton Seedling Point Clouds Based on a 3D Lightweight Network
by Jiacheng Shen, Tan Wu, Jiaxu Zhao, Zhijing Wu, Yanlin Huang, Pan Gao and Li Zhang
Agronomy 2024, 14(5), 1083; https://doi.org/10.3390/agronomy14051083 - 20 May 2024
Viewed by 940
Abstract
Cotton is an important economic crop; therefore, enhancing cotton yield and cultivating superior varieties are key research priorities. The seedling stage, a critical phase in cotton production, significantly influences the subsequent growth and yield of the crop. Therefore, breeding experts often choose to measure phenotypic parameters during this period to make breeding decisions. Traditional methods of phenotypic parameter measurement require manual processes, which are not only tedious and inefficient but can also damage the plants. To effectively, rapidly, and accurately extract three-dimensional phenotypic parameters of cotton seedlings, precise segmentation of phenotypic organs must first be achieved. This paper proposes a neural network-based segmentation algorithm for cotton seedling organs, which, compared to the average precision of 75.4% in traditional unsupervised learning, achieves an average precision of 96.67%, demonstrating excellent segmentation performance. The segmented leaf and stem point clouds are used for the calculation of phenotypic parameters such as stem length, leaf length, leaf width, and leaf area. Comparisons with actual measurements yield coefficients of determination R2 of 91.97%, 90.97%, 92.72%, and 95.44%, respectively. The results indicate that the algorithm proposed in this paper can achieve precise segmentation of stem and leaf organs, and can efficiently and accurately extract three-dimensional phenotypic structural information of cotton seedlings. In summary, this study not only made significant progress in the precise segmentation of cotton seedling organs and the extraction of three-dimensional phenotypic structural information, but the algorithm also demonstrates strong applicability to different varieties of cotton seedlings. 
This provides new perspectives and methods for plant researchers and breeding experts, contributing to the advancement of the plant phenotypic computation field and bringing new breakthroughs and opportunities to the field of plant science research. Full article
(This article belongs to the Special Issue The Applications of Deep Learning in Smart Agriculture)

18 pages, 29742 KiB  
Article
Rice Counting and Localization in Unmanned Aerial Vehicle Imagery Using Enhanced Feature Fusion
by Mingwei Yao, Wei Li, Li Chen, Haojie Zou, Rui Zhang, Zijie Qiu, Sha Yang and Yue Shen
Agronomy 2024, 14(4), 868; https://doi.org/10.3390/agronomy14040868 - 21 Apr 2024
Cited by 3 | Viewed by 1397
Abstract
In rice cultivation and breeding, obtaining accurate information on the quantity and spatial distribution of rice plants is crucial. However, traditional field sampling methods can only provide rough estimates of the plant count and fail to capture precise plant locations. To address these problems, this paper proposes P2PNet-EFF for the counting and localization of rice plants. Firstly, through the introduction of the enhanced feature fusion (EFF), the model improves its ability to integrate deep semantic information while preserving shallow spatial details. This allows the model to holistically analyze the morphology of plants rather than focusing solely on their central points, substantially reducing errors caused by leaf overlap. Secondly, by integrating efficient multi-scale attention (EMA) into the backbone, the model enhances its feature extraction capabilities and suppresses interference from similar backgrounds. Finally, to evaluate the effectiveness of the P2PNet-EFF method, we introduce the URCAL dataset for rice counting and localization, gathered using UAV. This dataset consists of 365 high-resolution images and 173,352 point annotations. Experimental results on the URCAL demonstrate that the proposed method achieves a 34.87% reduction in MAE and a 28.19% reduction in RMSE compared to the original P2PNet while increasing R2 by 3.03%. Furthermore, we conducted extensive experiments on three frequently used plant counting datasets. The results demonstrate the excellent performance of the proposed method. Full article
(This article belongs to the Special Issue The Applications of Deep Learning in Smart Agriculture)
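The MAE and RMSE figures reported for counting follow the standard definitions over per-image counts; a sketch (the counts below are illustrative, not from the URCAL dataset):

```python
import math

def mae(pred, true):
    """Mean absolute error between predicted and ground-truth counts."""
    return sum(abs(p - t) for p, t in zip(pred, true)) / len(true)

def rmse(pred, true):
    """Root-mean-square error; penalizes large count errors more than MAE."""
    return math.sqrt(sum((p - t) ** 2 for p, t in zip(pred, true)) / len(true))

true_counts = [480, 512, 455, 503]
pred_counts = [474, 520, 450, 505]
```

RMSE is always at least as large as MAE, and the gap between the two indicates whether errors are spread evenly or concentrated in a few badly counted images.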

18 pages, 4951 KiB  
Article
Evaluation of the Potential of Using Machine Learning and the Savitzky–Golay Filter to Estimate the Daily Soil Temperature in Gully Regions of the Chinese Loess Plateau
by Wei Deng, Dengfeng Liu, Fengnian Guo, Lianpeng Zhang, Lan Ma, Qiang Huang, Qiang Li, Guanghui Ming and Xianmeng Meng
Agronomy 2024, 14(4), 703; https://doi.org/10.3390/agronomy14040703 - 28 Mar 2024
Viewed by 1181
Abstract
Soil temperature directly affects the germination of seeds and the growth of crops. In order to accurately predict soil temperature, this study used RF and MLP to simulate shallow soil temperature, and then the shallow soil temperature with the best simulation effect will be used to predict the deep soil temperature. The models were forced by combinations of environmental factors, including daily air temperature (Tair), water vapor pressure (Pw), net radiation (Rn), and soil moisture (VWC), which were observed in the Hejiashan watershed on the Loess Plateau in China. The results showed that the accuracy of the model for predicting deep soil temperature proposed in this paper is higher than that of directly using environmental factors to predict deep soil temperature. In testing data, the range of MAE was 1.158–1.610 °C, the range of RMSE was 1.449–2.088 °C, the range of R2 was 0.665–0.928, and the range of KGE was 0.708–0.885 at different depths. The study not only provides a critical reference for predicting soil temperature but also helps people to better carry out agricultural production activities. Full article
(This article belongs to the Special Issue The Applications of Deep Learning in Smart Agriculture)
Show Figures

Figure 1

16 pages, 5704 KiB  
Article
Evaluating Time-Series Prediction of Temperature, Relative Humidity, and CO2 in the Greenhouse with Transformer-Based and RNN-Based Models
by Ju Yeon Ahn, Yoel Kim, Hyeonji Park, Soo Hyun Park and Hyun Kwon Suh
Agronomy 2024, 14(3), 417; https://doi.org/10.3390/agronomy14030417 - 21 Feb 2024
Cited by 3 | Viewed by 2026
Abstract
In greenhouses, plant growth is directly influenced by internal environmental conditions, and therefore requires continuous management and proper environmental control. Inadequate environmental conditions make plants vulnerable to pests and diseases, lower yields, and cause impaired growth and development. Previous studies have explored the [...] Read more.
In greenhouses, plant growth is directly influenced by internal environmental conditions, and therefore requires continuous management and proper environmental control. Inadequate environmental conditions make plants vulnerable to pests and diseases, lower yields, and cause impaired growth and development. Previous studies have explored the combination of greenhouse actuator control history with internal and external environmental data to enhance prediction accuracy, using deep learning-based models such as RNNs and LSTMs. In recent years, transformer-based models and RNN-based models have shown good performance in various domains. However, their applications for time-series forecasting in a greenhouse environment remain unexplored. Therefore, the objective of this study was to evaluate the prediction performance of temperature, relative humidity (RH), and CO2 concentration in a greenhouse after 1 and 3 h, using a transformer-based model (Autoformer), variants of two RNN models (LSTM and SegRNN), and a simple linear model (DLinear). The performance of these four models was compared to assess whether the latest state-of-the-art (SOTA) models, Autoformer and SegRNN, are as effective as DLinear and LSTM in predicting greenhouse environments. The analysis was based on four external climate data samples, three internal data samples, and six actuator data samples. Overall, DLinear and SegRNN consistently outperformed Autoformer and LSTM. Both DLinear and SegRNN performed well in general, but were not as strong in predicting CO2 concentration. SegRNN outperformed DLinear in CO2 predictions, while showing similar performance in temperature and RH prediction. The results of this study do not provide a definitive conclusion that transformer-based models, such as Autoformer, are inferior to linear-based models like DLinear or certain RNN-based models like SegRNN in predicting time series for greenhouse environments. Full article
(This article belongs to the Special Issue The Applications of Deep Learning in Smart Agriculture)
Show Figures

Figure 1

22 pages, 214283 KiB  
Article
ESG-YOLO: A Method for Detecting Male Tassels and Assessing Density of Maize in the Field
by Wendi Wu, Jianhua Zhang, Guomin Zhou, Yuhang Zhang, Jian Wang and Lin Hu
Agronomy 2024, 14(2), 241; https://doi.org/10.3390/agronomy14020241 - 24 Jan 2024
Cited by 2 | Viewed by 1696
Abstract
The intelligent acquisition of phenotypic information on male tassels is critical for maize growth and yield assessment. In order to realize accurate detection and density assessment of maize male tassels in complex field environments, this study used a UAV to collect images of [...] Read more.
The intelligent acquisition of phenotypic information on male tassels is critical for maize growth and yield assessment. In order to realize accurate detection and density assessment of maize male tassels in complex field environments, this study used a UAV to collect images of maize male tassels under different environmental factors in the experimental field and then constructed and formed the ESG-YOLO detection model based on the YOLOv7 model by using GELU as the activation function instead of the original SiLU and by adding a dual ECA attention mechanism and an SPD-Conv module. And then, through the model to identify and detect the male tassel, the model’s average accuracy reached a mean value (mAP) of 93.1%; compared with the YOLOv7 model, its average accuracy mean value (mAP) is 2.3 percentage points higher. Its low-resolution image and small object target detection is excellent, and it can be more intuitive and fast to obtain the maize male tassel density from automatic identification surveys. It provides an effective method for high-precision and high-efficiency identification of maize male tassel phenotypes in the field, and it has certain application value for maize growth potential, yield, and density assessment. Full article
(This article belongs to the Special Issue The Applications of Deep Learning in Smart Agriculture)
Show Figures

Figure 1

23 pages, 48270 KiB  
Article
Improved YOLOv7-Tiny Complex Environment Citrus Detection Based on Lightweighting
by Bo Gu, Changji Wen, Xuanzhi Liu, Yingjian Hou, Yuanhui Hu and Hengqiang Su
Agronomy 2023, 13(11), 2667; https://doi.org/10.3390/agronomy13112667 - 24 Oct 2023
Cited by 7 | Viewed by 2624
Abstract
In complex citrus orchard environments, light changes, branch shading, and fruit overlapping impact citrus detection accuracy. This paper proposes the citrus detection model YOLO-DCA in complex environments based on the YOLOv7-tiny model. We used depth-separable convolution (DWConv) to replace the ordinary convolution in [...] Read more.
In complex citrus orchard environments, light changes, branch shading, and fruit overlapping impact citrus detection accuracy. This paper proposes the citrus detection model YOLO-DCA in complex environments based on the YOLOv7-tiny model. We used depth-separable convolution (DWConv) to replace the ordinary convolution in ELAN, which reduces the number of parameters of the model; we embedded coordinate attention (CA) into the convolution to make it a coordinate attention convolution (CAConv) to replace the ordinary convolution of the neck network convolution; and we used a dynamic detection head to replace the original detection head. We trained and evaluated the test model using a homemade citrus dataset. The model size is 4.5 MB, the number of parameters is 2.1 M, mAP is 96.98%, and the detection time of a single image is 5.9 ms, which is higher than in similar models. In the application test, it has a better detection effect on citrus in occlusion, light transformation, and motion change scenes. The model has the advantages of high detection accuracy, small model space occupation, easy application deployment, and strong robustness, which can help citrus-picking robots and improve their intelligence level. Full article
(This article belongs to the Special Issue The Applications of Deep Learning in Smart Agriculture)
Show Figures

Figure 1

12 pages, 9590 KiB  
Article
Real-Time Joint-Stem Prediction for Agricultural Robots in Grasslands Using Multi-Task Learning
by Jiahao Li, Ronja Güldenring and Lazaros Nalpantidis
Agronomy 2023, 13(9), 2365; https://doi.org/10.3390/agronomy13092365 - 12 Sep 2023
Cited by 1 | Viewed by 1598
Abstract
Autonomous weeding robots need to accurately detect the joint stem of grassland weeds in order to control those weeds in an effective and energy-efficient manner. In this work, keypoints on joint stems and bounding boxes around weeds in grasslands are detected jointly using [...] Read more.
Autonomous weeding robots need to accurately detect the joint stem of grassland weeds in order to control those weeds in an effective and energy-efficient manner. In this work, keypoints on joint stems and bounding boxes around weeds in grasslands are detected jointly using multi-task learning. We compare a two-stage, heatmap-based architecture to a single-stage, regression-based architecture—both based on the popular YOLOv5 object detector. Our results show that introducing joint-stem detection as a second task boosts the individual weed detection performance in both architectures. Furthermore, the single-stage architecture clearly outperforms its competitors with an OKS of 56.3 in joint-stem detection while also achieving real-time performance of 12.2 FPS on Nvidia Jetson NX, suitable for agricultural robots. Finally, we make the newly created joint-stem ground-truth annotations publicly available for the relevant research community. Full article
(This article belongs to the Special Issue The Applications of Deep Learning in Smart Agriculture)
Show Figures

Figure 1

20 pages, 4363 KiB  
Article
Deep-Learning-Based Rice Disease and Insect Pest Detection on a Mobile Phone
by Jizhong Deng, Chang Yang, Kanghua Huang, Luocheng Lei, Jiahang Ye, Wen Zeng, Jianling Zhang, Yubin Lan and Yali Zhang
Agronomy 2023, 13(8), 2139; https://doi.org/10.3390/agronomy13082139 - 15 Aug 2023
Cited by 9 | Viewed by 3681
Abstract
The realization that mobile phones can detect rice diseases and insect pests not only solves the problems of low efficiency and poor accuracy from manually detection and reporting, but it also helps farmers detect and control them in the field in a timely [...] Read more.
The realization that mobile phones can detect rice diseases and insect pests not only solves the problems of low efficiency and poor accuracy from manually detection and reporting, but it also helps farmers detect and control them in the field in a timely fashion, thereby ensuring the quality of rice grains. This study examined two Improved detection models for the detection of six high-frequency diseases and insect pests. These models were the Improved You Only Look Once (YOLO)v5s and YOLOv7-tiny based on their lightweight object detection networks. The Improved YOLOv5s was introduced with the Ghost module to reduce computation and optimize the model structure, and the Improved YOLOv7-tiny was introduced with the Convolutional Block Attention Module (CBAM) and SIoU to improve model learning ability and accuracy. First, we evaluated and analyzed the detection accuracy and operational efficiency of the models. Then we deployed two proposed methods to a mobile phone. We also designed an application to further verify their practicality for detecting rice diseases and insect pests. The results showed that Improved YOLOv5s achieved the highest F1-Score of 0.931, 0.961 in mean average precision (mAP) (0.5), and 0.648 in mAP (0.5:0.9). It also reduced network parameters, model size, and the floating point operations per second (FLOPs) by 47.5, 45.7, and 48.7%, respectively. Furthermore, it increased the model inference speed by 38.6% compared with the original YOLOv5s model. Improved YOLOv7-tiny outperformed the original YOLOv7-tiny in detection accuracy, which was second only to Improved YOLOv5s. The probability heat maps of the detection results showed that Improved YOLOv5s performed better in detecting large target areas of rice diseases and insect pests, while Improved YOLOv7-tiny was more accurate in small target areas. 
On the mobile phone platform, the precision and recall of Improved YOLOv5s under FP16 accuracy were 0.925 and 0.939, and the inference speed was 374 ms/frame, which was superior to Improved YOLOv7-tiny. Both of the proposed improved models realized accurate identification of rice diseases and insect pests. Moreover, the constructed mobile phone application based on the improved detection models provided a reference for realizing fast and efficient field diagnoses. Full article
(This article belongs to the Special Issue The Applications of Deep Learning in Smart Agriculture)
Show Figures

Figure 1

19 pages, 10404 KiB  
Article
Improved U-Net for Growth Stage Recognition of In-Field Maize
by Tianyu Wan, Yuan Rao, Xiu Jin, Fengyi Wang, Tong Zhang, Yali Shu and Shaowen Li
Agronomy 2023, 13(6), 1523; https://doi.org/10.3390/agronomy13061523 - 31 May 2023
Cited by 6 | Viewed by 1831
Abstract
Precise recognition of maize growth stages in the field is one of the critical steps in conducting precision irrigation and crop growth evaluation. However, due to the ever-changing environmental factors and maize growth characteristics, traditional recognition methods usually suffer from limitations in recognizing [...] Read more.
Precise recognition of maize growth stages in the field is one of the critical steps in conducting precision irrigation and crop growth evaluation. However, due to the ever-changing environmental factors and maize growth characteristics, traditional recognition methods usually suffer from limitations in recognizing different growth stages. For the purpose of tackling these issues, this study proposed an improved U-net by first using a cascade convolution-based network as the encoder with a strategy for backbone network replacement to optimize feature extraction and reuse. Secondly, three attention mechanism modules have been introduced to upgrade the decoder part of the original U-net, which highlighted critical regions and extracted more discriminative features of maize. Subsequently, a dilation path of the improved U-net was constructed by integrating dilated convolution layers using a multi-scale feature fusion approach to preserve the detailed spatial information of in-field maize. Finally, the improved U-net has been applied to recognize different growth stages of maize in the field. The results clearly demonstrated the superior ability of the improved U-net to precisely segment and recognize maize growth stage from in-field images. Specifically, the semantic segmentation network achieved a mean intersection over union (mIoU) of 94.51% and a mean pixel accuracy (mPA) of 96.93% in recognizing the maize growth stage with only 39.08 MB of parameters. In conclusion, the good trade-offs made in terms of accuracy and parameter number demonstrated that this study could lay a good foundation for implementing accurate maize growth stage recognition and long-term automatic growth monitoring. Full article
(This article belongs to the Special Issue The Applications of Deep Learning in Smart Agriculture)
Show Figures

Figure 1

13 pages, 2627 KiB  
Article
Advancing Agricultural Predictions: A Deep Learning Approach to Estimating Bulb Weight Using Neural Prophet Model
by Wonseong Kim and Byung Min Soon
Agronomy 2023, 13(5), 1362; https://doi.org/10.3390/agronomy13051362 - 12 May 2023
Cited by 3 | Viewed by 1719
Abstract
A deep learning methodology was utilized to predict the bulb weights of garlic and onions in the Jeolla Province of Korea. The Korea Rural Economic Institute (KREI) operates the Outlook & Agricultural Statistics Information System (OASIS) platform, which provides actual measurements of garlic [...] Read more.
A deep learning methodology was utilized to predict the bulb weights of garlic and onions in the Jeolla Province of Korea. The Korea Rural Economic Institute (KREI) operates the Outlook & Agricultural Statistics Information System (OASIS) platform, which provides actual measurements of garlic and onions. We trained the Neural Prophet (NP) lagged time-series model using this data. The NP model effectively handles lagged variables and their covariates by inserting a hidden layer. Our results indicate that the NP model performed with around 5% mean absolute error in predicting bulb weights, with a gap of 3.3 g and 4.7 g with average weights of 63.7 g and 129.9 g for garlic and onions, respectively. This experimental research was based on only three years of measurement data. Hence, the gap between observed and predicted data can be reduced by accumulating more measurement data in the future. Full article
(This article belongs to the Special Issue The Applications of Deep Learning in Smart Agriculture)
Show Figures

Figure 1

Review

Jump to: Research

30 pages, 6918 KiB  
Review
Leveraging Convolutional Neural Networks for Disease Detection in Vegetables: A Comprehensive Review
by Muhammad Mahmood ur Rehman, Jizhan Liu, Aneela Nijabat, Muhammad Faheem, Wenyuan Wang and Shengyi Zhao
Agronomy 2024, 14(10), 2231; https://doi.org/10.3390/agronomy14102231 - 27 Sep 2024
Viewed by 1065
Abstract
Timely and accurate detection of diseases in vegetables is crucial for effective management and mitigation strategies before they take a harmful turn. In recent years, convolutional neural networks (CNNs) have emerged as powerful tools for automated disease detection in crops due to their [...] Read more.
Timely and accurate detection of diseases in vegetables is crucial for effective management and mitigation strategies before they take a harmful turn. In recent years, convolutional neural networks (CNNs) have emerged as powerful tools for automated disease detection in crops due to their ability to learn intricate patterns from large-scale image datasets and make predictions of samples that are given. The use of CNN algorithms for disease detection in important vegetable crops like potatoes, tomatoes, peppers, cucumbers, bitter gourd, carrot, cabbage, and cauliflower is critically examined in this review paper. This review examines the most recent state-of-the-art techniques, datasets, and difficulties related to these crops’ CNN-based disease detection systems. Firstly, we present a summary of CNN architecture and its applicability to classify tasks based on images. Subsequently, we explore CNN applications in the identification of diseases in vegetable crops, emphasizing relevant research, datasets, and performance measures. Also, the benefits and drawbacks of CNN-based methods, covering problems with computational complexity, model generalization, and dataset size, are discussed. This review concludes by highlighting the revolutionary potential of CNN algorithms in transforming crop disease diagnosis and management strategies. Finally, this study provides insights into the current limitations regarding the usage of computer algorithms in the field of vegetable disease detection. Full article
(This article belongs to the Special Issue The Applications of Deep Learning in Smart Agriculture)
Show Figures

Figure 1

Back to TopTop