Advanced Image Processing in Agricultural Applications

A special issue of Agriculture (ISSN 2077-0472). This special issue belongs to the section "Digital Agriculture".

Deadline for manuscript submissions: closed (25 June 2024) | Viewed by 9380

Special Issue Editors


Guest Editor
Key Laboratory of Key Technology on Agricultural Machine and Equipment, Ministry of Education, College of Engineering, South China Agricultural University, Guangzhou 510642, China
Interests: agricultural robotics; image processing; motion control; neural networks; artificial intelligence

Guest Editor
Key Laboratory of Key Technology on Agricultural Machine and Equipment, Ministry of Education, College of Engineering, South China Agricultural University, Guangzhou 510642, China
Interests: intelligent agricultural equipment; agricultural robot; intelligent control technology; vehicle design; image processing

Guest Editor
Key Laboratory of Key Technology on Agricultural Machine and Equipment, Ministry of Education, College of Engineering, South China Agricultural University, Guangzhou 510642, China
Interests: intelligent agricultural equipment; agricultural robot; vehicle systems; precision agriculture; image processing

Special Issue Information

Dear Colleagues,

The application of image processing technology in agricultural production scenarios, such as fruit-picking robots, pest monitoring, growth environment monitoring, agricultural planting management, crop baking and drying, and seed quality breeding, has recently garnered increasing research interest. However, in complex agricultural environments, image processing is difficult: interference from external factors often leads to misclassification and thus erroneous experimental results. In addition, even as image processing algorithms develop rapidly, their application in agriculture faces further challenges, including overlapping agricultural products, severe occlusion of detection targets, excessive numbers of targets, and difficulties caused by lighting and camera angle. Therefore, advanced image processing technology in agriculture is an inspiring and promising area of research.

This Special Issue aims to present state-of-the-art research achievements that contribute to a better understanding of the agricultural field in terms of image processing, environment perception, and sensor fusion. We also encourage submissions of review articles.

The potential topics for this Special Issue include, but are not limited to, the following:

  • Deep learning algorithm in agricultural applications;
  • Image processing technology in pest monitoring;
  • Soil spectral data in agricultural engineering;
  • Multispectral image processing in agricultural engineering;
  • Satellite remote sensing technology in agriculture;
  • Detection and localization for agricultural robotics;
  • Near-infrared image processing in agricultural engineering;
  • Hyperspectral technology in crop monitoring;
  • Machine learning technology in crop baking and drying;
  • Statistical analysis technology in crop quality assessment.

Dr. Jiehao Li
Prof. Dr. Jun Li
Prof. Dr. Weibin Wu
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Agriculture is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • agricultural robotics
  • crop processing
  • computer vision
  • deep learning
  • hyperspectral imagery
  • RGB image
  • image processing
  • feature extraction
  • artificial intelligence

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (7 papers)


Research

16 pages, 8896 KiB  
Article
Automatic Paddy Planthopper Detection and Counting Using Faster R-CNN
by Siti Khairunniza-Bejo, Mohd Firdaus Ibrahim, Marsyita Hanafi, Mahirah Jahari, Fathinul Syahir Ahmad Saad and Mohammad Aufa Mhd Bookeri
Agriculture 2024, 14(9), 1567; https://doi.org/10.3390/agriculture14091567 - 10 Sep 2024
Viewed by 679
Abstract
Counting planthoppers manually is laborious and yields inconsistent results, particularly when dealing with species with similar features, such as the brown planthopper (Nilaparvata lugens; BPH), whitebacked planthopper (Sogatella furcifera; WBPH), zigzag leafhopper (Maiestas dorsalis; ZIGZAG), and green leafhopper (Nephotettix malayanus and Nephotettix virescens; GLH). Most of the available automated counting methods are limited to low-density populations and often do not consider high-density ones, which require more complex solutions due to overlapping objects. Therefore, this research presents a comprehensive assessment of an object detection algorithm specifically developed to precisely detect and quantify planthoppers. It utilises annotated datasets obtained from sticky light traps, comprising 1654 images across four distinct classes of planthoppers and one class of benign insects. The datasets were subjected to data augmentation and used to train four convolutional object detection models based on transfer learning. The results indicated that Faster R-CNN VGG 16 outperformed the other models, achieving a mean average precision (mAP) of 97.69% and exhibiting exceptional accuracy in classifying all planthopper categories. The correctness of the model was verified by entomologists, who confirmed a classification and counting accuracy of 98.84%. Nevertheless, the model fails to recognise certain samples because of the high population density and the significant overlap among insects. This research effectively resolved the issue of low- to medium-density samples by achieving very precise and rapid detection and counting.
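As an illustrative sketch (not taken from the paper), the per-class counting step that follows any detector's output can be as simple as thresholding confidence scores and tallying class labels; the class names and scores below are hypothetical:

```python
from collections import Counter

def count_by_class(detections, score_threshold=0.5):
    """Count detections per class, keeping only confident ones.

    `detections` is a list of (class_name, score) pairs, as produced by
    any object detector after non-maximum suppression.
    """
    counts = Counter()
    for cls, score in detections:
        if score >= score_threshold:
            counts[cls] += 1
    return dict(counts)

# Hypothetical detector output for one sticky-trap image
dets = [("BPH", 0.97), ("BPH", 0.88), ("WBPH", 0.91), ("GLH", 0.42)]
print(count_by_class(dets))  # {'BPH': 2, 'WBPH': 1}
```

The low-confidence GLH detection is discarded, mirroring the way borderline detections are filtered before counting.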
(This article belongs to the Special Issue Advanced Image Processing in Agricultural Applications)

23 pages, 19814 KiB  
Article
Semi-Supervised One-Stage Object Detection for Maize Leaf Disease
by Jiaqi Liu, Yanxin Hu, Qianfu Su, Jianwei Guo, Zhiyu Chen and Gang Liu
Agriculture 2024, 14(7), 1140; https://doi.org/10.3390/agriculture14071140 - 14 Jul 2024
Cited by 1 | Viewed by 657
Abstract
Maize is one of the most important crops globally, and accurate diagnosis of leaf diseases is crucial for ensuring increased yields. Despite the continuous progress in computer vision technology, detecting maize leaf diseases based on deep learning still relies on a large amount of manually labeled data, and the labeling process is time-consuming and labor-intensive. Moreover, the detectors currently used for identifying maize leaf diseases have relatively low accuracy in complex experimental fields. Therefore, we propose Agronomic Teacher, an object detection algorithm that utilizes limited labeled and abundant unlabeled data, and apply it to maize leaf disease recognition. In this work, a semi-supervised object detection framework is built on a single-stage detector, integrating the Weighted Average Pseudo-labeling Assignment (WAP) strategy and the AgroYOLO detector, which combines an Agro-Backbone network with an Agro-Neck network. The WAP strategy uses weighted objectness and classification scores as the evaluation criteria for assigning reliable pseudo-labels. The Agro-Backbone network accurately extracts features of maize leaf diseases and obtains richer semantic information. The Agro-Neck network enhances feature fusion by combining multi-layer features. The effectiveness of the proposed method is validated on the MaizeData and PascalVOC datasets at different annotation ratios. Compared to the baseline model, Agronomic Teacher leverages abundant unlabeled data to achieve a 6.5% increase in mAP (0.5) on 30% labeled MaizeData. On the 30% labeled PascalVOC dataset, mAP (0.5) improved by 8.2%, demonstrating the method's potential for generalization.
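A hedged sketch of the general idea behind weighted pseudo-label selection: combine objectness and classification confidence into one score and keep only predictions that clear a threshold. The weights, threshold, and data layout below are illustrative assumptions, not the paper's exact WAP formulation:

```python
def pseudo_label_score(objectness, cls_score, w_obj=0.5, w_cls=0.5):
    # Weighted combination of objectness and classification confidence
    return w_obj * objectness + w_cls * cls_score

def select_pseudo_labels(candidates, threshold=0.7):
    """Keep unlabeled-image predictions whose combined score clears the bar.

    `candidates` is a list of (box, objectness, cls_score) triples.
    """
    return [box for box, obj, cls in candidates
            if pseudo_label_score(obj, cls) >= threshold]

# Hypothetical teacher predictions on an unlabeled image
cands = [((0, 0, 10, 10), 0.9, 0.8), ((5, 5, 20, 20), 0.4, 0.5)]
print(select_pseudo_labels(cands))  # [(0, 0, 10, 10)]
```

Only the confident prediction survives to train the student, which is the essential filtering idea in teacher-student semi-supervised detection.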
(This article belongs to the Special Issue Advanced Image Processing in Agricultural Applications)

17 pages, 2966 KiB  
Article
Loop Closure Detection with CNN in RGB-D SLAM for Intelligent Agricultural Equipment
by Haixia Qi, Chaohai Wang, Jianwen Li and Linlin Shi
Agriculture 2024, 14(6), 949; https://doi.org/10.3390/agriculture14060949 - 18 Jun 2024
Cited by 1 | Viewed by 830
Abstract
Loop closure detection plays an important role in the construction of reliable maps for intelligent agricultural machinery. Combined with convolutional neural networks (CNNs), its accuracy and real-time performance are better than those of methods based on traditional hand-crafted features. However, because agricultural machinery uses small embedded devices that must handle multiple tasks simultaneously, achieving adequate response speeds with large networks is challenging. This emphasizes the need to study which kind of lightweight CNN loop closure detection algorithm is more suitable for intelligent agricultural machinery. This paper compares a variety of loop closure detection methods based on lightweight CNN features. Specifically, we show that GhostNet, with its feature reuse, can extract image features carrying both high-dimensional semantic information and low-dimensional geometric information, which significantly improves loop closure detection accuracy and real-time performance. To further enhance detection speed, we implement multi-probe random hyperplane locality-sensitive hashing (LSH) algorithms. We evaluate our approach using both a public dataset and a proprietary greenhouse dataset, employing an incremental data processing method. The results demonstrate that GhostNet and the linear-scanning multi-probe LSH algorithm together meet the precision and real-time requirements of agricultural loop closure detection.
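Random hyperplane LSH, the family of techniques the paper applies to speed up descriptor matching, can be sketched in a few lines: each random hyperplane contributes one bit saying which side of it a feature vector falls on, so similar descriptors tend to land in the same hash bucket. The dimensionality and bit count below are arbitrary choices for illustration:

```python
import random

def make_hyperplanes(dim, n_bits, seed=0):
    """Draw `n_bits` random Gaussian hyperplanes in `dim` dimensions."""
    rng = random.Random(seed)
    return [[rng.gauss(0, 1) for _ in range(dim)] for _ in range(n_bits)]

def lsh_hash(vec, planes):
    # One bit per hyperplane: which side of the plane the vector falls on
    bits = 0
    for plane in planes:
        dot = sum(p * v for p, v in zip(plane, vec))
        bits = (bits << 1) | (1 if dot >= 0 else 0)
    return bits
```

Loop closure candidates are then drawn only from frames whose descriptors share a bucket, avoiding an exhaustive comparison against every past frame; multi-probe variants additionally check a few neighboring buckets.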
(This article belongs to the Special Issue Advanced Image Processing in Agricultural Applications)

21 pages, 10587 KiB  
Article
Detection and Instance Segmentation of Grape Clusters in Orchard Environments Using an Improved Mask R-CNN Model
by Xiang Huang, Dongdong Peng, Hengnian Qi, Lei Zhou and Chu Zhang
Agriculture 2024, 14(6), 918; https://doi.org/10.3390/agriculture14060918 - 10 Jun 2024
Viewed by 931
Abstract
Accurately segmenting grape clusters and detecting grape varieties in orchards helps orchard staff accurately understand the distribution, yield, and growth of different grapes and supports efficient mechanical harvesting. However, factors such as lighting changes, grape overlap, branch and leaf occlusion, similarity between fruit and background colors, and the high similarity between some grape varieties make identifying and segmenting different varieties of grape clusters extremely difficult. To resolve these difficulties, this study proposed an improved Mask R-CNN model by assembling an efficient channel attention (ECA) module into the residual layer of the backbone network and a dual attention network (DANet) into the mask branch. The experimental results showed that the improved Mask R-CNN model can accurately segment clusters of eight grape varieties under various conditions. The bbox_mAP and mask_mAP on the test set were 0.905 and 0.821, respectively, 1.4% and 1.5% higher than those of the original Mask R-CNN model. The effect of the ECA and DANet modules on other instance segmentation models was also explored as a comparison, providing a reference for model improvement and optimization. The improved Mask R-CNN model outperformed other classic instance segmentation models, indicating that it can effectively, rapidly, and accurately segment grape clusters and detect grape varieties in orchards. This study provides technical support for orchard staff and grape-picking robots to pick grapes intelligently.
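The mask_mAP metric reported above builds on per-instance mask IoU. A minimal, self-contained sketch of binary-mask IoU, representing masks as pixel sets (an illustrative simplification; real evaluators use dense arrays or run-length encoding):

```python
def mask_iou(mask_a, mask_b):
    """IoU of two binary masks given as sets of (row, col) pixels."""
    inter = len(mask_a & mask_b)
    union = len(mask_a | mask_b)
    return inter / union if union else 0.0

a = {(r, c) for r in range(4) for c in range(4)}        # 4x4 square
b = {(r, c) for r in range(2, 6) for c in range(2, 6)}  # shifted 4x4 square
print(mask_iou(a, b))  # 4 / 28 ≈ 0.1429
```

A predicted instance counts as a true positive when its mask IoU with a ground-truth instance exceeds a chosen threshold, and mAP aggregates precision over those matches.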
(This article belongs to the Special Issue Advanced Image Processing in Agricultural Applications)

27 pages, 14033 KiB  
Article
MOLO-SLAM: A Semantic SLAM for Accurate Removal of Dynamic Objects in Agricultural Environments
by Jinhong Lv, Beihuo Yao, Haijun Guo, Changlun Gao, Weibin Wu, Junlin Li, Shunli Sun and Qing Luo
Agriculture 2024, 14(6), 819; https://doi.org/10.3390/agriculture14060819 - 24 May 2024
Cited by 1 | Viewed by 1337
Abstract
Visual simultaneous localization and mapping (VSLAM) is a foundational technology that enables robots to achieve fully autonomous locomotion, exploration, inspection, and more within complex environments. Its applicability also extends significantly to agricultural settings. While numerous impressive VSLAM systems have emerged, a majority of them rely on the static-world assumption. This reliance constrains their use in real dynamic scenarios and leads to increased instability when applied to agricultural contexts. To address the problem of detecting and eliminating slowly moving dynamic objects in outdoor forest and tea garden agricultural scenarios, this paper presents a dynamic VSLAM innovation called MOLO-SLAM (mask ORB label optimization SLAM). MOLO-SLAM merges the ORBSLAM2 framework with the Mask-RCNN instance segmentation network, utilizing masks and bounding boxes to enhance the accuracy and cleanliness of 3D point clouds. Additionally, we used the BundleFusion reconstruction algorithm for 3D mesh model reconstruction. Comparing our algorithm with various dynamic VSLAM algorithms on the TUM and KITTI datasets, the results demonstrate significant improvements, with enhancements of up to 97.72%, 98.51%, and 28.07% relative to the original ORBSLAM2 on the three datasets. This showcases the outstanding advantages of our algorithm.
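The core filtering step in mask-based dynamic-object removal is conceptually simple: discard any tracked feature point that falls inside a segmented dynamic-object mask before it enters the pose estimation and mapping pipeline. The sketch below assumes instance masks are given as pixel sets, which is an illustrative simplification rather than the paper's actual implementation:

```python
def remove_dynamic_points(keypoints, dynamic_masks):
    """Drop keypoints that fall inside any dynamic-object instance mask.

    `keypoints` is a list of (x, y) pixel coordinates; each mask is a set
    of (x, y) pixels covering one detected dynamic object.
    """
    dynamic = set().union(*dynamic_masks) if dynamic_masks else set()
    return [p for p in keypoints if p not in dynamic]

# One keypoint lies on a segmented moving object and is removed
print(remove_dynamic_points([(1, 1), (5, 5)], [{(5, 5)}]))  # [(1, 1)]
```

Keeping only static-scene points is what lets the system build cleaner point clouds despite moving objects in view.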
(This article belongs to the Special Issue Advanced Image Processing in Agricultural Applications)

13 pages, 2974 KiB  
Article
High-Precision Detection for Sandalwood Trees via Improved YOLOv5s and StyleGAN
by Yu Zhang, Jiajun Niu, Zezhong Huang, Chunlei Pan, Yueju Xue and Fengxiao Tan
Agriculture 2024, 14(3), 452; https://doi.org/10.3390/agriculture14030452 - 11 Mar 2024
Cited by 3 | Viewed by 1869
Abstract
An algorithm model based on computer vision is one of the critical technologies imperative for agriculture and forestry planting. In this paper, a vision algorithm model based on StyleGAN and an improved YOLOv5s is proposed to detect sandalwood trees from unmanned aerial vehicle remote sensing data; this model has excellent adaptability to complex environments. To enhance feature expression ability, a CA (coordinate attention) module with dimensional information is introduced, which can both capture target channel information and preserve correlations between long-range pixels. To improve training speed and test accuracy, SIOU (structural similarity intersection over union) is proposed to replace the traditional loss function; it fully considers the directional match between the predicted box and the ground-truth box. To improve the generalization ability of the model, StyleGAN is introduced to augment the remote sensing data of sandalwood trees and to improve the sample balance across different flight heights. The experimental results show that the average accuracy of sandalwood tree detection increased from 93% to 95.2% through the YOLOv5s model improvement; on that basis, the accuracy increased by another 0.4% via data generation with the StyleGAN model, finally reaching 95.6%. Compared with the mainstream lightweight models YOLOv5-mobilenet, YOLOv5-ghost, YOLOXs, and YOLOv4-tiny, the accuracy of this method is 2.3%, 2.9%, 3.6%, and 6.6% higher, respectively. The trained sandalwood tree model is 14.5 MB in size, and the detection time is 17.6 ms. Thus, the algorithm offers high detection accuracy, a compact model size, and rapid processing, making it suitable for integration into edge computing devices for on-site real-time monitoring.
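SIoU-style losses extend plain IoU with angle and distance penalties between the predicted and ground-truth boxes. As a baseline illustration (not the paper's loss), the plain IoU between two axis-aligned boxes:

```python
def box_iou(a, b):
    """IoU of two boxes given as (x1, y1, x2, y2) corner coordinates."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

print(box_iou((0, 0, 10, 10), (5, 5, 15, 15)))  # 25 / 175 ≈ 0.1429
```

An IoU-based regression loss is typically 1 - IoU; SIoU adds terms that also penalize misalignment in direction and center distance, which is why it can speed up convergence.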
(This article belongs to the Special Issue Advanced Image Processing in Agricultural Applications)

21 pages, 7447 KiB  
Article
DiffuCNN: Tobacco Disease Identification and Grading Model in Low-Resolution Complex Agricultural Scenes
by Huizhong Xiong, Xiaotong Gao, Ningyi Zhang, Haoxiong He, Weidong Tang, Yingqiu Yang, Yuqian Chen, Yang Jiao, Yihong Song and Shuo Yan
Agriculture 2024, 14(2), 318; https://doi.org/10.3390/agriculture14020318 - 17 Feb 2024
Viewed by 1516
Abstract
A novel deep learning model, DiffuCNN, is introduced in this paper, specifically designed for counting tobacco lesions in complex agricultural settings. By integrating advanced image processing techniques with deep learning methodologies, the model significantly enhances the accuracy of detecting tobacco lesions under low-resolution conditions. After lesions are detected, the severity of the disease is graded by counting them. The key features of DiffuCNN include a diffusion-based resolution enhancement module, an object detection network optimized through filter pruning, and the CentralSGD optimization algorithm. Experimental results demonstrate that DiffuCNN surpasses other models, achieving a precision of 0.98, a recall of 0.96, an accuracy of 0.97, and a speed of 62 FPS. Particularly in counting tobacco lesions, DiffuCNN exhibits exceptional performance, attributable to its efficient network architecture and advanced image processing techniques. The diffusion-based resolution enhancement module amplifies minute details and features in images, enabling the model to recognize and count tobacco lesions more effectively. Concurrently, filter pruning reduces the model's parameter count and computational burden, enhancing processing speed while retaining the capability to recognize key features. The CentralSGD optimization algorithm further improves the model's training efficiency and final performance. Moreover, an ablation study meticulously analyzes the contribution of each component within DiffuCNN, revealing that each plays a crucial role. The inclusion of the diffusion module significantly boosts the model's precision and recall, highlighting the importance of optimizing at the model's input end. The use of filter pruning and the CentralSGD optimization algorithm effectively elevates the model's computational efficiency and detection accuracy.
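L1-norm filter pruning, the general technique named above, ranks convolutional filters by the magnitude of their weights and drops the weakest. The sketch below is illustrative (the keep ratio and toy weights are assumptions; real pruning operates on convolutional weight tensors and then fine-tunes the network):

```python
def prune_filters(filters, keep_ratio=0.5):
    """Keep the filters with the largest L1 norms.

    `filters` is a list of weight lists (one flattened filter each);
    returns the indices of the filters to keep, in original order.
    """
    norms = [(i, sum(abs(w) for w in f)) for i, f in enumerate(filters)]
    n_keep = max(1, int(len(filters) * keep_ratio))
    # Take the top-n by norm, then restore original filter order
    keep = sorted(sorted(norms, key=lambda t: -t[1])[:n_keep])
    return [i for i, _ in keep]

# Toy layer with four 2-weight filters; the two strongest survive
filters = [[0.1, -0.1], [2.0, 1.5], [0.01, 0.0], [1.0, -1.0]]
print(prune_filters(filters))  # [1, 3]
```

Removing low-norm filters shrinks the parameter count and compute per image, which is how pruning buys detection speed while keeping the dominant features.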
(This article belongs to the Special Issue Advanced Image Processing in Agricultural Applications)
