Intelligent Sensing and Machine Vision in Precision Agriculture: 2nd Edition

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Smart Agriculture".

Deadline for manuscript submissions: 31 May 2025 | Viewed by 11909

Special Issue Editors

Guest Editor
College of Engineering, Anhui Agricultural University, Hefei 230036, China
Interests: smart agriculture; intelligent agricultural equipment

Guest Editor
School of Agricultural Engineering, Jiangsu University, Zhenjiang 212013, China
Interests: intelligent agriculture machinery; agriculture robot

Guest Editor
College of Mechanical and Electrical Engineering, Fujian Agriculture and Forestry University, Fuzhou 350002, China
Interests: machine vision; precision agriculture

Special Issue Information

Dear colleagues,

Precision agriculture seeks to employ information technology to support farming operations and management, such as fertilizer input, irrigation management, and pesticide application. The temporal, spatial, and individual information related to environmental parameters and crop features is gathered, processed, and analyzed through various intelligent sensing technologies. Among them, machine vision technologies, including 3D/2D imaging, visible/near-infrared imaging, and hyperspectral/multispectral imaging, have been extensively used in precision agriculture for tasks such as plant phenotyping, autonomous navigation, disease detection, and yield prediction. Moreover, deep learning has greatly promoted the development of intelligent sensing technologies, which have a range of potential applications in precision agriculture.

Dr. Lu Liu
Dr. Jianjun Yin
Dr. Haiyong Weng
Dr. Yuwei Wang
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • precision agriculture
  • agricultural robot
  • machine vision
  • image processing
  • multispectral imaging
  • plant phenotyping
  • optical measurement
  • disease detection
  • deep learning
  • artificial intelligence

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (7 papers)


Research

Jump to: Review

13 pages, 16117 KiB  
Article
A Stride Toward Wine Yield Estimation from Images: Metrological Validation of Grape Berry Number, Radius, and Volume Estimation
by Bernardo Lanza, Davide Botturi, Alessandro Gnutti, Matteo Lancini, Cristina Nuzzi and Simone Pasinetti
Sensors 2024, 24(22), 7305; https://doi.org/10.3390/s24227305 - 15 Nov 2024
Viewed by 649
Abstract
Yield estimation is a key theme for precision agriculture, especially for small fruits and in-field scenarios. This paper focuses on the metrological validation of a novel deep-learning model that robustly estimates both the number and the radii of grape berries in vineyards using color images, allowing the computation of the visible (and total) volume of grape clusters, which is necessary to reach the ultimate goal of estimating yield production. The proposed algorithm is validated by analyzing its performance on a custom dataset. The number of berries, their mean radius, and the grape cluster volume are converted to millimeters and compared to reference values obtained through manual measurements. The validation experiment also analyzes the uncertainties of the parameters. Results show that the algorithm can reliably estimate the number (MPE=5%, σ=6%) and the radius of the visible portion of the grape clusters (MPE=0.8%, σ=7%). In contrast, the volume estimated in px³ results in an MPE=0.4% with σ=21%, so the corresponding volume in mm³ is affected by high uncertainty. This analysis highlighted that half of the total uncertainty on the volume is due to the camera–object distance d and the parameter R used to account for the proportion of visible grapes with respect to the total grapes in the grape cluster. This issue is mostly due to the absence of a reliable depth measure between the camera and the grapes, which could be overcome by using depth sensors in combination with color images. Despite being preliminary, the results prove that the model and the metrological analysis are a remarkable advancement toward a reliable approach for directly estimating yield from 2D pictures in the field. Full article
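The validation metrics quoted above, MPE (mean percentage error) and its standard deviation σ, can be illustrated with a short sketch. The data values are hypothetical, and this is not the authors' code, only the standard definition of a signed percentage error against manual reference measurements:

```python
import numpy as np

def mpe_sigma(estimated, reference):
    """Mean percentage error (MPE) and its sample standard deviation (sigma),
    computed over paired estimates and manual reference measurements."""
    estimated = np.asarray(estimated, dtype=float)
    reference = np.asarray(reference, dtype=float)
    pct_err = 100.0 * (estimated - reference) / reference  # signed % error per sample
    return pct_err.mean(), pct_err.std(ddof=1)

# Hypothetical berry counts for three grape clusters vs. manual counts
mpe, sigma = mpe_sigma([105, 98, 110], [100, 100, 100])
```

Because the errors are signed, over- and under-estimates partially cancel in the MPE while still widening σ, which is consistent with a small MPE coexisting with a large σ, as reported for the volume estimate.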

18 pages, 7770 KiB  
Article
Vision-Based Localization Method for Picking Points in Tea-Harvesting Robots
by Jingwen Yang, Xin Li, Xin Wang, Leiyang Fu and Shaowen Li
Sensors 2024, 24(21), 6777; https://doi.org/10.3390/s24216777 - 22 Oct 2024
Cited by 2 | Viewed by 1097
Abstract
To address the issue of accurately recognizing and locating picking points for tea-picking robots in unstructured environments, a visual positioning method based on RGB-D information fusion is proposed. First, an improved T-YOLOv8n model is proposed, which improves detection and segmentation performance across multi-scale scenes through network architecture and loss function optimizations. In the far-view test set, the detection accuracy of tea buds reached 80.8%; for the near-view test set, the mAP0.5 values for tea stem detection in bounding boxes and masks reached 93.6% and 93.7%, respectively, showing improvements of 9.1% and 14.1% over the baseline model. Second, a layered visual servoing strategy for near and far views was designed, integrating the RealSense depth sensor with robotic arm cooperation. This strategy identifies the region of interest (ROI) of the tea bud in the far view and fuses the stem mask information with depth data to calculate the three-dimensional coordinates of the picking point. The experiments show that this method achieved a picking-point localization success rate of 86.4%, with a mean depth measurement error of 1.43 mm. The proposed method improves the accuracy of picking point recognition and reduces depth information fluctuations, providing technical support for the intelligent and rapid picking of premium tea. Full article
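Fusing a 2D picking point with a depth measurement to obtain 3D coordinates, as this abstract describes, is conventionally done with pinhole back-projection. The sketch below shows that standard model only; the intrinsic parameters are illustrative placeholders, not the paper's calibration:

```python
def pixel_to_camera_xyz(u, v, depth_mm, fx, fy, cx, cy):
    """Back-project a pixel (u, v) with a measured depth into camera-frame
    coordinates (mm) using the standard pinhole camera model."""
    z = depth_mm
    x = (u - cx) * z / fx  # horizontal offset scaled by depth
    y = (v - cy) * z / fy  # vertical offset scaled by depth
    return x, y, z

# Illustrative intrinsics (RealSense-like values, assumed for this example)
x, y, z = pixel_to_camera_xyz(u=700, v=400, depth_mm=350.0,
                              fx=615.0, fy=615.0, cx=640.0, cy=360.0)
```

In practice the depth value would be sampled (or averaged) from the depth image inside the stem mask before back-projection, which is one way to reduce the depth fluctuations the abstract mentions.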

16 pages, 9089 KiB  
Article
Consecutive Image Acquisition without Anomalies
by Angel Mur, Patrice Galaup, Etienne Dedic, Dominique Henry and Hervé Aubert
Sensors 2024, 24(20), 6608; https://doi.org/10.3390/s24206608 - 14 Oct 2024
Viewed by 984
Abstract
An image is a visual representation that can be used to obtain information. A camera on a moving vector (e.g., on a rover, drone, quad, etc.) may acquire images along a controlled trajectory. The maximum visual information is captured during a fixed acquisition time when consecutive images do not overlap and have no space (or gap) between them. The image acquisition is said to be anomalous when two consecutive images overlap (overlap anomaly) or have a gap between them (gap anomaly). In this article, we report a new algorithm, named OVERGAP, that removes these two types of anomalies when consecutive images are obtained from an on-board camera on a moving vector. Anomaly detection and correction here use both the Dynamic Time Warping distance and the Wasserstein distance. The proposed algorithm produces consecutive, anomaly-free images with the desired size that can conveniently be used in a machine learning process (mainly deep learning) to create a prediction model for a feature of interest. Full article
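The two anomaly types defined in the abstract can be illustrated with a minimal 1-D sketch. This is a toy geometric classification under the stated definitions, not the OVERGAP algorithm itself, which relies on Dynamic Time Warping and Wasserstein distances:

```python
def classify_pair(pos_a, pos_b, image_length):
    """Classify two consecutive image acquisitions along a 1-D trajectory.
    pos_a, pos_b: positions of the leading edge of each image footprint
    (same units as image_length), with pos_b > pos_a.
    image_length: footprint length of one image along the trajectory."""
    spacing = pos_b - pos_a
    if spacing < image_length:
        return "overlap"   # overlap anomaly: footprints intersect
    if spacing > image_length:
        return "gap"       # gap anomaly: uncovered space between images
    return "ok"            # contiguous, anomaly-free acquisition

assert classify_pair(0.0, 0.8, 1.0) == "overlap"
```

Maximum coverage per acquisition time corresponds to the "ok" case, where consecutive footprints tile the trajectory exactly.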

27 pages, 8828 KiB  
Article
Research on Detection Method of Chaotian Pepper in Complex Field Environments Based on YOLOv8
by Yichu Duan, Jianing Li and Chi Zou
Sensors 2024, 24(17), 5632; https://doi.org/10.3390/s24175632 - 30 Aug 2024
Cited by 1 | Viewed by 1058
Abstract
The intelligent detection of chili peppers is crucial for achieving automated operations. In complex field environments, challenges such as overlapping plants, branch occlusions, and uneven lighting make detection difficult. This study conducted comparative experiments to select the optimal detection model based on YOLOv8 and further enhanced it. The model was optimized by incorporating BiFPN, LSKNet, and FasterNet modules, followed by the addition of attention and lightweight modules such as EMBC, EMSCP, DAttention, MSBlock, and Faster. Adjustments to CIoU, Inner CIoU, Inner GIoU, and inner_mpdiou loss functions and scaling factors further improved overall performance. After optimization, the YOLOv8 model achieved precision, recall, and mAP scores of 79.0%, 75.3%, and 83.2%, respectively, representing increases of 1.1, 4.3, and 1.6 percentage points over the base model. Additionally, GFLOPs were reduced by 13.6%, the model size decreased to 66.7% of the base model, and the FPS reached 301.4. This resulted in accurate and rapid detection of chili peppers in complex field environments, providing data support and experimental references for the development of intelligent picking equipment. Full article

19 pages, 3177 KiB  
Article
Developing Machine Vision in Tree-Fruit Applications—Fruit Count, Fruit Size and Branch Avoidance in Automated Harvesting
by Chiranjivi Neupane, Kerry B. Walsh, Rafael Goulart and Anand Koirala
Sensors 2024, 24(17), 5593; https://doi.org/10.3390/s24175593 - 29 Aug 2024
Cited by 2 | Viewed by 1470
Abstract
Recent developments in affordable depth imaging hardware and the use of 2D Convolutional Neural Networks (CNNs) in object detection and segmentation have accelerated the adoption of machine vision in a range of applications, with mainstream models often out-performing previous application-specific architectures. The need for the release of training and test datasets with any work reporting model development is emphasized to enable the re-evaluation of published work. An additional reporting need is the documentation of the performance of the re-training of a given model, quantifying the impact of stochastic processes in training. Three mango orchard applications were considered: (i) fruit count, (ii) fruit size, and (iii) branch avoidance in automated harvesting. All training and test datasets used in this work are available publicly. The mAP 'coefficient of variation' (Standard Deviation, SD, divided by the mean of predictions using models of repeated trainings, × 100) was approximately 0.2% for the fruit detection model and 1% and 2% for the fruit and branch segmentation models, respectively. A YOLOv8m model achieved a mAP50 of 99.3%, outperforming the previous benchmark, the purpose-designed 'MangoYOLO', for the application of the real-time detection of mango fruit on images of tree canopies using an edge computing device as a viable use case. YOLOv8 and v9 models outperformed the benchmark Mask R-CNN model in terms of their accuracy and inference time, achieving up to a 98.8% mAP50 on fruit predictions and 66.2% on branches in a leafy canopy. For fruit sizing, the accuracy of YOLOv8m-seg was comparable to that achieved using Mask R-CNN, but the inference time was much shorter, again an enabler for the field adoption of this technology. A branch avoidance algorithm was proposed, where its implementation in real time on an edge computing device was enabled by the short inference time of a YOLOv8-seg model for branches and fruit. This capability contributes to the development of automated fruit harvesting. Full article
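The mAP 'coefficient of variation' defined in the abstract (SD of the mAP values from repeated trainings, divided by their mean, × 100) can be sketched directly. The mAP values below are hypothetical, chosen only to show the calculation:

```python
import statistics

def map_cv(map_values):
    """Coefficient of variation (%) of mAP over repeated trainings:
    sample standard deviation divided by the mean, times 100."""
    return 100.0 * statistics.stdev(map_values) / statistics.mean(map_values)

# Hypothetical mAP50 values (%) from five re-trainings of the same model
cv = map_cv([99.1, 99.3, 99.2, 99.4, 99.3])
```

A CV of a fraction of a percent, as reported for the fruit detection model, indicates that stochastic training effects barely move the headline mAP figure.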

16 pages, 3229 KiB  
Article
Streamlining YOLOv7 for Rapid and Accurate Detection of Rapeseed Varieties on Embedded Device
by Siqi Gu, Wei Meng and Guodong Sun
Sensors 2024, 24(17), 5585; https://doi.org/10.3390/s24175585 - 28 Aug 2024
Viewed by 806
Abstract
Real-time seed detection on resource-constrained embedded devices is essential for the agriculture industry and crop yield. However, traditional seed variety detection methods either suffer from low accuracy or cannot run directly on embedded devices with desirable real-time performance. In this paper, we focus on the detection of rapeseed varieties and design a dual-dimensional (spatial and channel) pruning method to lighten YOLOv7 (a popular deep-learning object detection model). We design experiments to prove the effectiveness of the spatial-dimension pruning strategy, and after evaluating three different channel pruning methods, we select custom-ratio layer-by-layer pruning, which offers the best model performance. Compared to the YOLOv7 model, this approach increases mAP from 96.68% to 96.89%, reduces the number of parameters from 36.5 M to 9.19 M, and reduces the inference time per image on the Raspberry Pi 4B from 4.48 s to 1.18 s. Overall, our model is suitable for deployment on embedded devices and can perform real-time detection tasks accurately and efficiently in various application scenarios. Full article

Review

Jump to: Research

42 pages, 13582 KiB  
Review
A Comprehensive Review of LiDAR Applications in Crop Management for Precision Agriculture
by Sheikh Muhammad Farhan, Jianjun Yin, Zhijian Chen and Muhammad Sohail Memon
Sensors 2024, 24(16), 5409; https://doi.org/10.3390/s24165409 - 21 Aug 2024
Cited by 6 | Viewed by 4899
Abstract
Precision agriculture has revolutionized crop management and agricultural production, with LiDAR technology attracting significant interest among various technological advancements. This extensive review examines the various applications of LiDAR in precision agriculture, with a particular emphasis on its function in crop cultivation and harvests. The introduction provides an overview of precision agriculture, highlighting the need for effective agricultural management and the growing significance of LiDAR technology. The prospective advantages of LiDAR for increasing productivity, optimizing resource utilization, managing crop diseases and pesticides, and reducing environmental impact are discussed. The introduction comprehensively covers LiDAR technology in precision agriculture, detailing airborne, terrestrial, and mobile systems along with their specialized applications in the field. The paper then reviews the various uses of LiDAR in agricultural cultivation, including crop growth and yield estimation, disease detection, weed control, and plant health evaluation. The use of LiDAR for soil analysis and management, including soil mapping and categorization and the measurement of moisture content and nutrient levels, is reviewed. Additionally, the article examines how LiDAR is used for harvesting crops, including its use in autonomous harvesting systems, post-harvest quality evaluation, and the prediction of crop maturity and yield. Future perspectives, emergent trends, and innovative developments in LiDAR technology for precision agriculture are discussed, along with the critical challenges and research gaps that must be filled. The review concludes by emphasizing potential solutions and future directions for maximizing LiDAR's potential in precision agriculture. This in-depth review of the uses of LiDAR gives helpful insights for academics, practitioners, and stakeholders interested in using this technology for effective and environmentally friendly crop management, which will eventually contribute to the development of precision agricultural methods. Full article
