
Intelligent Sensing and Machine Vision in Precision Agriculture

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Smart Agriculture".

Deadline for manuscript submissions: closed (30 April 2024) | Viewed by 50447

Special Issue Editors


College of Engineering, Anhui Agricultural University, Hefei 230036, China
Interests: intelligent agricultural machinery; precision agriculture

Institutes of Physical Science and Information Technology, Anhui University, Hefei 230039, China
Interests: crop disease and insect pest detection; intelligent agriculture; bioinformatics

School of Internet, Anhui University, Hefei 230039, China
Interests: optical measurement; multi-view imaging; agricultural vision

Special Issue Information

Dear Colleagues,

Precision agriculture seeks to employ information technology to support farming operations and management, such as fertilizer input, irrigation management, and pesticide application. The temporal, spatial, and individual information related to environmental parameters and crop features is gathered, processed, and analyzed through various intelligent sensing technologies. Among them, machine vision technologies, including 3D/2D imaging, visible/near-infrared imaging, and hyperspectral/multispectral imaging, have been used extensively in precision agriculture for tasks such as plant phenotyping, autonomous navigation, disease detection, and yield prediction. Moreover, deep learning has greatly advanced intelligent sensing technologies, which have a range of potential applications in precision agriculture.

Dr. Yuwei Wang
Prof. Dr. Liqing Chen
Prof. Dr. Peng Chen
Dr. Bolin Cai
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • precision agriculture
  • agricultural robot
  • machine vision
  • image processing
  • multispectral imaging
  • plant phenotyping
  • optical measurement
  • disease detection
  • deep learning
  • artificial intelligence

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (20 papers)

Research

18 pages, 4842 KiB  
Article
A Lightweight and High-Precision Passion Fruit YOLO Detection Model for Deployment in Embedded Devices
by Qiyan Sun, Pengbo Li, Chentao He, Qiming Song, Jierui Chen, Xiangzeng Kong and Zhicong Luo
Sensors 2024, 24(15), 4942; https://doi.org/10.3390/s24154942 - 30 Jul 2024
Cited by 3 | Viewed by 1196
Abstract
In order to shorten detection times and improve average precision in embedded devices, a lightweight and high-accuracy model is proposed to detect passion fruit in complex environments (e.g., with backlighting, occlusion, overlap, sun, cloud, or rain). First, replacing the backbone network of YOLOv5 with a lightweight GhostNet model reduces the number of parameters and computational complexity while improving the detection speed. Second, a new feature branch is added to the backbone network and the feature fusion layer in the neck network is reconstructed to effectively combine the lower- and higher-level features, which improves the accuracy of the model while maintaining its lightweight nature. Finally, a knowledge distillation method is used to transfer knowledge from the more capable teacher model to the less capable student model, significantly improving the detection accuracy. The improved model is denoted as G-YOLO-NK. The average accuracy of the G-YOLO-NK network is 96.00%, which is 1.00% higher than that of the original YOLOv5s model. Furthermore, the model size is 7.14 MB, half that of the original model, and its real-time detection frame rate is 11.25 FPS when implemented on the Jetson Nano. The proposed model is found to outperform state-of-the-art models in terms of average precision and detection performance. The present work provides an effective model for real-time detection of passion fruit in complex orchard scenes, offering valuable technical support for the development of orchard picking robots and greatly improving the intelligence level of orchards.
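
To make the distillation step above concrete, here is a minimal sketch of response-based knowledge distillation in PyTorch; the function and tensor names, the temperature T, and the loss form are illustrative assumptions rather than the authors' G-YOLO-NK code:

```python
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, T=4.0):
    """KL divergence between temperature-softened teacher and student outputs."""
    p_teacher = F.softmax(teacher_logits / T, dim=-1)
    log_p_student = F.log_softmax(student_logits / T, dim=-1)
    # Scaling by T*T keeps gradients comparable to the hard-label loss term.
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * T * T
```

In practice such a term is weighted against the student's ordinary detection loss during training.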

21 pages, 9140 KiB  
Article
An Improved Ningxia Desert Herbaceous Plant Classification Algorithm Based on YOLOv8
by Hongxing Ma, Tielei Sheng, Yun Ma and Jianping Gou
Sensors 2024, 24(12), 3834; https://doi.org/10.3390/s24123834 - 13 Jun 2024
Viewed by 745
Abstract
Wild desert grasslands are characterized by diverse habitats, uneven plant distribution, similarities among plant classes, and the presence of plant shadows. Existing models for detecting plant species in desert grasslands exhibit low precision, require a large number of parameters, and incur high computational costs, rendering them unsuitable for deployment in plant recognition scenarios within these environments. To address these challenges, this paper proposes a lightweight and fast plant species detection model, termed YOLOv8s-KDT, tailored for complex desert grassland environments. Firstly, the model introduces the dynamic convolutional KernelWarehouse method to reduce the dimensionality of convolutional kernels and increase their number, achieving a better balance between parameter efficiency and representation ability. Secondly, the model incorporates triplet attention into its feature extraction network, effectively capturing the relationship between channel and spatial position and enhancing the model's feature extraction capabilities. Finally, the introduction of a dynamic detection head tackles the non-uniformity between the detection head and the attention mechanism, improving the representation of the detection head while reducing computational cost. The experimental results demonstrate that the upgraded YOLOv8s-KDT model can rapidly and effectively identify desert grassland plants: compared to the original model, FLOPs decreased by 50.8%, accuracy improved by 4.5%, and mAP increased by 5.6%. The YOLOv8s-KDT model is currently deployed in the mobile plant identification app for the Ningxia desert grassland and on a fixed-point ecological information observation platform. It facilitates the investigation of desert grassland vegetation distribution across the entire Ningxia region, as well as long-term observation and tracking of plant ecological information in specific areas, such as Dashuikeng, Huangji Field, and Hongsibu in Ningxia.
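
As a hedged illustration of the triplet attention mentioned above, the sketch below is a generic PyTorch re-implementation of the idea (three branches, each gating the tensor after a rotation that exposes a different pair of dimensions); it is not the YOLOv8s-KDT source:

```python
import torch
import torch.nn as nn

class ZPool(nn.Module):
    # Summarize the leading dimension with concatenated max and mean maps.
    def forward(self, x):
        return torch.cat([x.max(1, keepdim=True)[0], x.mean(1, keepdim=True)], 1)

class AttentionGate(nn.Module):
    def __init__(self, k=7):
        super().__init__()
        self.pool = ZPool()
        self.conv = nn.Conv2d(2, 1, k, padding=k // 2, bias=False)
    def forward(self, x):
        return x * torch.sigmoid(self.conv(self.pool(x)))

class TripletAttention(nn.Module):
    """Branches capture (C,W), (C,H) and (H,W) interactions via rotations."""
    def __init__(self):
        super().__init__()
        self.cw, self.ch, self.hw = AttentionGate(), AttentionGate(), AttentionGate()
    def forward(self, x):
        x_cw = self.cw(x.permute(0, 2, 1, 3)).permute(0, 2, 1, 3)  # swap C and H
        x_ch = self.ch(x.permute(0, 3, 2, 1)).permute(0, 3, 2, 1)  # swap C and W
        return (x_cw + x_ch + self.hw(x)) / 3.0
```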

22 pages, 13859 KiB  
Article
Stereo Vision for Plant Detection in Dense Scenes
by Thijs Ruigrok, Eldert J. van Henten and Gert Kootstra
Sensors 2024, 24(6), 1942; https://doi.org/10.3390/s24061942 - 18 Mar 2024
Viewed by 1398
Abstract
Automated precision weed control requires visual methods to discriminate between crops and weeds. State-of-the-art plant detection methods fail to reliably detect weeds, especially in dense and occluded scenes. In the past, using hand-crafted detection models, both color (RGB) and depth (D) data were used for plant detection in dense scenes. Remarkably, the combination of color and depth data is not widely used in current deep learning-based vision systems in agriculture. Therefore, we collected an RGB-D dataset using a stereo vision camera. The dataset contains sugar beet crops in multiple growth stages with varying weed densities. This dataset was made publicly available and was used to evaluate two novel plant detection models: the D-model, using the depth data as the input, and the CD-model, using both the color and depth data as inputs. For compatibility with existing 2D deep learning architectures, the depth data were transformed into a 2D image using color encoding. As a reference model, the C-model, which uses only color data as the input, was included. The limited availability of suitable training data for depth images demands the use of data augmentation and transfer learning. Using our three detection models, we studied the effectiveness of data augmentation and transfer learning for depth data transformed to 2D images. It was found that geometric data augmentation and transfer learning were equally effective for both the reference model and the novel models using the depth data. This demonstrates that combining color-encoded depth data with geometric data augmentation and transfer learning can improve the RGB-D detection model. However, when testing our detection models on the use case of volunteer potato detection in sugar beet farming, it was found that the addition of depth data did not improve plant detection at high vegetation densities.
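
The color encoding of depth referred to above can be as simple as normalizing the depth map and applying a colormap so a standard 2D backbone can consume it; a minimal sketch, with the range limits and colormap being assumptions:

```python
import cv2
import numpy as np

def encode_depth(depth_m, d_min=0.3, d_max=2.0):
    """Map metric depth to an 8-bit, 3-channel colormapped image."""
    d = np.clip((depth_m - d_min) / (d_max - d_min), 0.0, 1.0)
    return cv2.applyColorMap((d * 255).astype(np.uint8), cv2.COLORMAP_JET)
```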

23 pages, 7541 KiB  
Article
Wheat Seed Detection and Counting Method Based on Improved YOLOv8 Model
by Na Ma, Yaxin Su, Lexin Yang, Zhongtao Li and Hongwen Yan
Sensors 2024, 24(5), 1654; https://doi.org/10.3390/s24051654 - 3 Mar 2024
Cited by 13 | Viewed by 4304
Abstract
Wheat seed detection has important applications in calculating thousand-grain weight and in crop breeding. To solve the problems of seed accumulation, adhesion, and occlusion that can lead to low counting accuracy, while ensuring fast detection with high accuracy, a wheat seed counting method is proposed to provide technical support for the development of an embedded seed-counter platform. This study proposes a lightweight real-time wheat seed detection model, YOLOv8-HD, based on YOLOv8. Firstly, we introduce shared convolutional layers to improve the YOLOv8 detection head, reducing the number of parameters and achieving a lightweight design that improves runtime speed. Secondly, we incorporate the Vision Transformer with a Deformable Attention mechanism into the C2f module of the backbone network to enhance the network's feature extraction capability and improve detection accuracy. The results show that in stacked scenes with impurities (severe seed adhesion), the YOLOv8-HD model achieves a mean average precision (mAP) of 77.6%, which is 9.1% higher than YOLOv8. Across all scenes, the YOLOv8-HD model achieves an mAP of 99.3%, which is 16.8% higher than YOLOv8. The memory size of the YOLOv8-HD model is 6.35 MB, approximately four-fifths that of YOLOv8, and its GFLOPs decrease by 16%. The inference time of YOLOv8-HD is 2.86 ms (on GPU), lower than that of YOLOv8. Finally, extensive experiments showed that YOLOv8-HD outperforms other mainstream networks in terms of mAP, speed, and model size. Our YOLOv8-HD model can therefore efficiently detect wheat seeds in various scenarios, providing technical support for the development of seed counting instruments.
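
The shared-convolution head idea mentioned above reuses one weight set across all pyramid levels, so its parameters are counted only once; the sketch below is a generic PyTorch illustration with assumed channel counts, not the YOLOv8-HD configuration:

```python
import torch.nn as nn

class SharedHead(nn.Module):
    def __init__(self, ch=256, n_out=144):  # n_out: box + class channels (assumed)
        super().__init__()
        self.shared = nn.Sequential(        # one conv stack reused at every level
            nn.Conv2d(ch, ch, 3, padding=1), nn.SiLU(),
            nn.Conv2d(ch, ch, 3, padding=1), nn.SiLU(),
        )
        self.pred = nn.Conv2d(ch, n_out, 1)

    def forward(self, feats):               # feats: list of [B, ch, Hi, Wi] maps
        return [self.pred(self.shared(f)) for f in feats]
```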

13 pages, 4258 KiB  
Article
Korean Cattle 3D Reconstruction from Multi-View 3D-Camera System in Real Environment
by Chang Gwon Dang, Seung Soo Lee, Mahboob Alam, Sang Min Lee, Mi Na Park, Ha-Seung Seong, Seungkyu Han, Hoang-Phong Nguyen, Min Ki Baek, Jae Gu Lee and Van Thuan Pham
Sensors 2024, 24(2), 427; https://doi.org/10.3390/s24020427 - 10 Jan 2024
Cited by 1 | Viewed by 1540
Abstract
The rapid evolution of 3D technology in recent years has brought about significant change in the field of agriculture, including precision livestock management. From 3D geometry information, the weight and characteristics of body parts of Korean cattle can be analyzed to improve cow growth. In this paper, a system of cameras is built to synchronously capture 3D data and then reconstruct a 3D mesh representation. In general, to reconstruct non-rigid objects, a system of cameras is synchronized and calibrated, and the data of each camera are then transformed to global coordinates. However, when reconstructing cattle in a real environment, difficulties such as fences and camera vibration can cause the reconstruction process to fail. A new scheme is proposed that automatically removes environmental fences and noise. An optimization method is proposed that interweaves camera pose updates, with the distances between each camera pose and its initial position added as part of the objective function. The deviation between the cameras' point clouds and the mesh output is reduced from 7.5 mm to 5.5 mm. The experimental results showed that our scheme can automatically generate a high-quality mesh in a real environment, providing data that can be used for other research on Korean cattle.
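
The regularized objective described above can be read as a data term plus an anchor penalty that keeps each pose near its initial calibration; a minimal sketch, assuming a generic per-pose residual function and weight lam (both illustrative, not the paper's formulation):

```python
import numpy as np

def objective(poses, poses_init, residuals, lam=0.1):
    """poses, poses_init: (N, 6) pose vectors; residuals(p) -> per-point errors."""
    data_term = sum(np.sum(residuals(p) ** 2) for p in poses)
    anchor_term = lam * np.sum((poses - poses_init) ** 2)  # stay near calibration
    return data_term + anchor_term
```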

17 pages, 16728 KiB  
Article
Seaweed Growth Monitoring with a Low-Cost Vision-Based System
by Jeroen Gerlo, Dennis G. Kooijman, Ivo W. Wieling, Ritchie Heirmans and Steve Vanlanduit
Sensors 2023, 23(22), 9197; https://doi.org/10.3390/s23229197 - 15 Nov 2023
Cited by 3 | Viewed by 2064
Abstract
In this paper, we introduce a method for automated seaweed growth monitoring that combines a low-cost RGB camera and a stereo vision camera. While current vision-based seaweed growth monitoring techniques focus on laboratory measurements or above-ground seaweed, we investigate the feasibility of underwater imaging of a vertical seaweed farm. We use deep learning-based image segmentation (DeepLabV3+) to determine the size of the seaweed in pixels from recorded RGB images, and convert this pixel size to square meters using the distance information from the stereo camera. We demonstrate the performance of our monitoring system using measurements in a seaweed farm in the River Scheldt estuary (in The Netherlands). Notwithstanding the poor visibility of the seaweed in the images, we are able to segment the seaweed with an intersection over union (IoU) of 0.9, and we reach a repeatability of 6% and a precision of 18% for the seaweed size.
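
Converting a segmented pixel count to square meters with the stereo range follows from the pinhole model: at range Z, one pixel covers (Z/f)² square meters for a focal length f expressed in pixels. A minimal sketch, assuming an approximately fronto-parallel seaweed surface:

```python
def pixel_area_to_m2(n_pixels, depth_m, f_px):
    """Each pixel covers (Z / f)^2 square meters at range Z."""
    return n_pixels * (depth_m / f_px) ** 2

# e.g., 50,000 seaweed pixels at 1.2 m with f = 1400 px -> about 0.037 m^2
print(pixel_area_to_m2(50_000, 1.2, 1400))
```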

16 pages, 6729 KiB  
Article
Research and Implementation of Millet Ear Detection Method Based on Lightweight YOLOv5
by Shujin Qiu, Yun Li, Jian Gao, Xiaobin Li, Xiangyang Yuan, Zhenyu Liu, Qingliang Cui and Cuiqing Wu
Sensors 2023, 23(22), 9189; https://doi.org/10.3390/s23229189 - 15 Nov 2023
Cited by 1 | Viewed by 1458
Abstract
Because millet ears are dense, small, and heavily occluded in complex grain field scenes, target detection models suited to this environment demand high computing power, and real-time detection of millet ears is difficult to deploy on mobile devices. A lightweight real-time detection method for millet ears is therefore proposed based on YOLOv5. First, the YOLOv5s model is improved by replacing its backbone feature extraction network with the lightweight MobileNetV3 model to reduce the model size. Then, using the multi-feature fusion detection structure, a micro-scale detection layer is added to combine high-level and low-level feature maps. The Merge-NMS technique is used in post-processing to reduce target information loss, lessening the influence of boundary blur on the detection effect and increasing the detection accuracy of small and occluded targets. Finally, the models reconstructed by the different improved methods are trained and tested on the self-built millet ear dataset. The AP value of the improved model reaches 97.78%, the F1-score is 94.20%, and the model size is only 7.56 MB (53.28% of the standard YOLOv5s model size), with a better detection speed. Compared with other classical target detection models, it shows strong robustness and generalization ability. The lightweight model also performs well when detecting pictures and videos on the Jetson Nano. The results show that the improved lightweight YOLOv5 millet detection model can overcome the influence of complex environments and significantly improve the detection of millet under dense distribution and occlusion conditions. The millet detection model is deployed on the Jetson Nano, and a millet detection system is implemented based on the PyQt5 framework. The detection accuracy and detection speed of the system can meet the actual needs of intelligent agricultural machinery equipment, giving it good application prospects.
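
Merge-NMS differs from standard NMS by averaging mutually overlapping boxes (score-weighted) instead of simply discarding them, which helps with blurred or occluded boundaries; the NumPy sketch below illustrates the idea with an assumed IoU threshold, not the paper's exact post-processing:

```python
import numpy as np

def iou(a, boxes):
    x1 = np.maximum(a[0], boxes[:, 0]); y1 = np.maximum(a[1], boxes[:, 1])
    x2 = np.minimum(a[2], boxes[:, 2]); y2 = np.minimum(a[3], boxes[:, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    area = lambda b: (b[..., 2] - b[..., 0]) * (b[..., 3] - b[..., 1])
    return inter / (area(a) + area(boxes) - inter)

def merge_nms(boxes, scores, thr=0.5):
    order, keep = scores.argsort()[::-1], []
    while order.size:
        i, rest = order[0], order[1:]
        ious = iou(boxes[i], boxes[rest])
        group = np.r_[i, rest[ious > thr]]       # boxes to merge into detection i
        w = scores[group][:, None]
        keep.append((boxes[group] * w).sum(0) / w.sum())  # score-weighted average
        order = rest[ious <= thr]
    return np.array(keep)
```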

17 pages, 10601 KiB  
Article
Rice Grain Detection and Counting Method Based on TCLE–YOLO Model
by Yu Zou, Zefeng Tian, Jiawen Cao, Yi Ren, Yaping Zhang, Lu Liu, Peijiang Zhang and Jinlong Ni
Sensors 2023, 23(22), 9129; https://doi.org/10.3390/s23229129 - 12 Nov 2023
Cited by 1 | Viewed by 2415
Abstract
Thousand-grain weight is the main parameter for accurately estimating rice yields, and it is an important indicator for variety breeding and cultivation management. The accurate detection and counting of rice grains is an important prerequisite for thousand-grain weight measurements. However, because rice grains are small targets with high overall similarity and varying degrees of adhesion, considerable challenges remain in accurately detecting and counting rice grains during thousand-grain weight measurements. A deep learning model based on a transformer encoder and a coordinate attention module was therefore designed for detecting and counting rice grains, named TCLE–YOLO, in which YOLOv5 was used as the backbone network. Specifically, to improve the feature representation of the model for small target regions, a coordinate attention (CA) module was introduced into the backbone module of YOLOv5. In addition, another detection head for small targets was designed based on a low-level, high-resolution feature map, and the transformer encoder was applied to the neck module to expand the receptive field of the network and enhance the extraction of key features of detected targets. This enabled the additional detection head to be more sensitive to rice grains, especially heavily adhesive grains. Finally, EIoU loss was used to further improve accuracy. The experimental results show that, when applied to the self-built rice grain dataset, the precision, recall, and mAP@0.5 of the TCLE–YOLO model were 99.20%, 99.10%, and 99.20%, respectively. Compared with several state-of-the-art models, the proposed TCLE–YOLO model achieves better detection performance. In summary, the rice grain detection method built in this study is suitable for rice grain recognition and counting, and it can provide guidance for accurate thousand-grain weight measurements and the effective evaluation of rice breeding.
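
For readers unfamiliar with coordinate attention, the block factorizes spatial pooling into per-row and per-column descriptors so positional information survives the squeeze; the following is a hedged generic re-implementation (batch normalization omitted), not the TCLE–YOLO source:

```python
import torch
import torch.nn as nn

class CoordAtt(nn.Module):
    def __init__(self, ch, reduction=32):
        super().__init__()
        mid = max(8, ch // reduction)
        self.conv1 = nn.Conv2d(ch, mid, 1)
        self.act = nn.Hardswish()
        self.conv_h = nn.Conv2d(mid, ch, 1)
        self.conv_w = nn.Conv2d(mid, ch, 1)

    def forward(self, x):
        b, c, h, w = x.shape
        x_h = x.mean(3, keepdim=True)                      # one descriptor per row
        x_w = x.mean(2, keepdim=True).permute(0, 1, 3, 2)  # one per column
        y = self.act(self.conv1(torch.cat([x_h, x_w], dim=2)))
        y_h, y_w = torch.split(y, [h, w], dim=2)
        a_h = torch.sigmoid(self.conv_h(y_h))                      # [B, C, H, 1]
        a_w = torch.sigmoid(self.conv_w(y_w.permute(0, 1, 3, 2)))  # [B, C, 1, W]
        return x * a_h * a_w
```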

14 pages, 3496 KiB  
Article
Research on the Relative Position Detection Method between Orchard Robots and Fruit Tree Rows
by Baoxing Gu, Qin Liu, Yi Gao, Guangzhao Tian, Baohua Zhang, Haiqing Wang and He Li
Sensors 2023, 23(21), 8807; https://doi.org/10.3390/s23218807 - 29 Oct 2023
Cited by 1 | Viewed by 1214
Abstract
The relative position of the orchard robot with respect to the rows of fruit trees is an important parameter for achieving autonomous navigation. Current methods for estimating the inter-row position parameters of orchard robots achieve only low accuracy. To address this problem, this paper proposes a machine vision-based method for detecting the relative position of orchard robots and fruit tree rows. First, the fruit tree trunks are identified with an improved YOLOv4 model; second, the camera coordinates of each trunk are calculated using the principle of binocular camera triangulation, and the ground projection coordinates of the trunk are obtained through coordinate conversion; finally, the midpoints of the projection coordinates on the two sides are combined, the navigation path is obtained by linear fitting with the least squares method, and the position parameters of the orchard robot are obtained through calculation. The experimental results show that the average precision and average recall of the improved YOLOv4 model for fruit tree trunk detection are 5.92% and 7.91% higher, respectively, than those of the original YOLOv4 model. The average errors of the heading angle and lateral deviation estimates obtained with this method are 0.57° and 0.02 m. The method can accurately calculate heading angle and lateral deviation values at different positions between rows and provides a reference for the autonomous visual navigation of orchard robots.
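
The geometry described above reduces to stereo triangulation of trunk centers followed by a least-squares line fit through the inter-row midpoints; a minimal sketch, assuming the calibration values f (focal length, pixels) and B (baseline, meters) are known:

```python
import numpy as np

def triangulate_depth(disparity_px, f_px, baseline_m):
    return f_px * baseline_m / disparity_px     # Z = f * B / d

def fit_navigation_line(xz_left, xz_right):
    """Paired (x, z) trunk ground projections on each side -> line x = m*z + c."""
    mid = (np.asarray(xz_left) + np.asarray(xz_right)) / 2.0
    m, c = np.polyfit(mid[:, 1], mid[:, 0], 1)  # least-squares fit
    return m, c
```

Under this parametrization, the heading angle follows as arctan(m) and the lateral deviation as the offset of the robot origin from the fitted line.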

17 pages, 17316 KiB  
Article
A Deep Learning Approach for Precision Viticulture, Assessing Grape Maturity via YOLOv7
by Eftichia Badeka, Eleftherios Karapatzak, Aikaterini Karampatea, Elisavet Bouloumpasi, Ioannis Kalathas, Chris Lytridis, Emmanouil Tziolas, Viktoria Nikoleta Tsakalidou and Vassilis G. Kaburlasos
Sensors 2023, 23(19), 8126; https://doi.org/10.3390/s23198126 - 27 Sep 2023
Cited by 12 | Viewed by 2634
Abstract
In the viticulture sector, robots are being employed more frequently to increase productivity and accuracy in operations such as vineyard mapping, pruning, and harvesting, especially in locations where human labor is in short supply or expensive. This paper presents the development of an algorithm for grape maturity estimation in the framework of vineyard management. An object detection algorithm based on You Only Look Once (YOLO) v7 and its extensions is proposed in order to detect grape maturity in a white grape variety (Assyrtiko). The proposed algorithm was trained using images collected over a period of six weeks from grapevines in Drama, Greece. Tests on high-quality images demonstrated that the detection of five grape maturity stages is possible. Furthermore, the proposed approach was compared against alternative object detection algorithms; the results showed that YOLO v7 outperforms the other architectures in both precision and accuracy. This work paves the way for the development of an autonomous robot for grapevine management.

15 pages, 6084 KiB  
Article
A Lightweight Recognition Method for Rice Growth Period Based on Improved YOLOv5s
by Kaixuan Liu, Jie Wang, Kai Zhang, Minhui Chen, Haonan Zhao and Juan Liao
Sensors 2023, 23(15), 6738; https://doi.org/10.3390/s23156738 - 27 Jul 2023
Cited by 3 | Viewed by 1698
Abstract
Identifying the growth and development period of rice is of great significance for achieving high-yield, high-quality rice. However, the acquisition of rice growth period information mainly relies on manual observation, which suffers from low efficiency and strong subjectivity. To solve these problems, a lightweight recognition method, Small-YOLOv5, based on improved YOLOv5s, is proposed to automatically identify the growth period of rice. Firstly, MobileNetV3 was used to replace the YOLOv5s backbone feature extraction network, reducing the model size and the number of model parameters and thus improving the detection speed of the model. Secondly, in the feature fusion stage of YOLOv5s, we introduced a more lightweight convolution method, GsConv, to replace the standard convolution. The computational cost of GsConv is about 60–70% of that of the standard convolution, but its contribution to the model's learning ability is no less. Based on GsConv, we built a lightweight neck network that reduces the complexity of the network model while maintaining accuracy. To verify the performance of Small-YOLOv5, we tested it on a self-built dataset of rice growth periods. The results show that, compared with YOLOv5s (5.0) on the self-built dataset, the number of model parameters was reduced by 82.4%, GFLOPs decreased by 85.9%, and the model volume was reduced by 86.0%. The mAP (0.5) value of the improved model was 98.7%, only 0.8% lower than that of the original YOLOv5s model. Compared with the mainstream lightweight model YOLOv5s-MobileNetV3-Small, the number of model parameters was decreased by 10.0%, the model volume was reduced by 9.6%, the mAP (0.5:0.95) improved by 5.0%, reaching 94.7%, and the recall rate improved by 1.5%, reaching 98.9%. These experimental comparisons verify the effectiveness and superiority of the model.
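
A GsConv-style block, as used above, combines a standard convolution, a depthwise "cheap" convolution, and a channel shuffle, which is where the roughly 60–70% cost saving comes from; the sketch below follows the published slim-neck design, with kernel and activation choices as assumptions:

```python
import torch
import torch.nn as nn

class GSConv(nn.Module):
    def __init__(self, c_in, c_out, k=1, s=1):
        super().__init__()
        c_ = c_out // 2
        self.dense = nn.Sequential(nn.Conv2d(c_in, c_, k, s, k // 2, bias=False),
                                   nn.BatchNorm2d(c_), nn.SiLU())
        self.cheap = nn.Sequential(nn.Conv2d(c_, c_, 5, 1, 2, groups=c_, bias=False),
                                   nn.BatchNorm2d(c_), nn.SiLU())  # depthwise

    def forward(self, x):
        y1 = self.dense(x)
        y = torch.cat([y1, self.cheap(y1)], dim=1)
        b, c, h, w = y.shape                    # shuffle interleaves the two halves
        return y.view(b, 2, c // 2, h, w).transpose(1, 2).reshape(b, c, h, w)
```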

13 pages, 7585 KiB  
Article
Early Identification of Root Damages Caused by Western Corn Rootworms Using a Minimally Invasive Root Phenotyping Robot—MISIRoot
by Zhihang Song, Tianzhang Zhao and Jian Jin
Sensors 2023, 23(13), 5995; https://doi.org/10.3390/s23135995 - 28 Jun 2023
Cited by 1 | Viewed by 1146
Abstract
Western corn rootworm (WCR) is one of the most devastating corn rootworm species in North America because of its ability to cause severe production loss and grain quality damage. To control the loss, it is important to identify WCR infection at an early stage. Because the root system is the earliest feeding source of WCR at the larval stage, assessing the direct damage to the root system is crucial for achieving early detection. Most current methods still necessitate uprooting the entire plant, which can cause permanent destruction and a loss of the original root's structural information. To measure the root damage caused by WCR non-destructively, this study utilized MISIRoot, a minimally invasive, in situ automatic plant root phenotyping robot, to collect not only high-resolution images but also the 3D positions of the roots without uprooting. To identify roots in the images and to study how the damage was distributed across different types of roots, a deep convolutional neural network model was trained to differentiate relatively thick from thin roots. In addition, a color camera was used to capture above-ground morphological features, such as leaf color, plant height, and side-view leaf area. To check whether the plant shoot showed any visible symptoms in the inoculated group compared to the control group, several vegetation indices were calculated based on the RGB color, and the shoot morphological features were fed into a PLS-DA model to differentiate the two groups. Results showed that none of the above-ground features or models yielded a statistically significant difference between the two groups at the 95% confidence level. On the contrary, many of the root structural features measured using MISIRoot could successfully differentiate the two groups, with the smallest t-test p-value being 1.5791 × 10⁻⁶. These promising outcomes demonstrate the effectiveness of MISIRoot as a potential solution for identifying WCR infestations before the plant shoot shows significant symptoms.

15 pages, 7447 KiB  
Article
Exploiting Pre-Trained Convolutional Neural Networks for the Detection of Nutrient Deficiencies in Hydroponic Basil
by Zeki Gul and Sebnem Bora
Sensors 2023, 23(12), 5407; https://doi.org/10.3390/s23125407 - 7 Jun 2023
Cited by 3 | Viewed by 2304
Abstract
Due to the integration of artificial intelligence with the sensors and devices used in Internet of Things technology, interest in automation systems has increased. One feature common to both agriculture and artificial intelligence is recommendation systems, which increase yield by identifying nutrient deficiencies in plants, using resources correctly, reducing damage to the environment, and preventing economic losses. The biggest shortcomings of previous studies are the scarcity of data and the lack of diversity. This experiment aimed to identify nutrient deficiencies in basil plants cultivated in a hydroponic system. Basil plants were grown by applying a complete nutrient solution as the control, alongside solutions lacking nitrogen (N), phosphorus (P), or potassium (K). Photos were then taken to determine N, P, and K deficiencies in the treated and control plants. After a new dataset was created for the basil plant, pretrained convolutional neural network (CNN) models were used for the classification problem. The pretrained DenseNet201, ResNet101V2, MobileNet, and VGG16 models were used to classify N, P, and K deficiencies, and their accuracy values were examined. Additionally, heat maps of the images obtained using Grad-CAM were analyzed. The highest accuracy was achieved with the VGG16 model, and the heat maps show that VGG16 focuses on the deficiency symptoms.
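
The transfer-learning setup described above amounts to freezing a pretrained backbone and swapping in a small classification head for the four classes (control, -N, -P, -K); since the abstract does not state the framework, the torchvision-based sketch below is purely illustrative:

```python
import torch.nn as nn
from torchvision import models

# Freeze the ImageNet features and replace the final layer with a 4-class head.
model = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)
for p in model.features.parameters():
    p.requires_grad = False
model.classifier[6] = nn.Linear(4096, 4)  # control / -N / -P / -K
```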

24 pages, 13569 KiB  
Article
YOLOv5-KCB: A New Method for Individual Pig Detection Using Optimized K-Means, CA Attention Mechanism and a Bi-Directional Feature Pyramid Network
by Guangbo Li, Guolong Shi and Jun Jiao
Sensors 2023, 23(11), 5242; https://doi.org/10.3390/s23115242 - 31 May 2023
Cited by 11 | Viewed by 2573
Abstract
Individual identification of pigs is a critical component of intelligent pig farming. Traditional pig ear-tagging requires significant human resources and suffers from issues such as difficult recognition and low accuracy. This paper proposes the YOLOv5-KCB algorithm for the non-invasive identification of individual pigs. Specifically, the algorithm utilizes two datasets (pig faces and pig necks), which are divided into nine categories. Following data augmentation, the total sample size was increased to 19,680. The distance metric used for K-means clustering is changed from that of the original algorithm to 1 − IoU, which improves the adaptability of the model's target anchor boxes. Furthermore, the algorithm evaluates the SE, CBAM, and CA attention mechanisms, with the CA attention mechanism being selected for its superior performance in feature extraction. Finally, CARAFE, ASFF, and BiFPN are compared for feature fusion, with BiFPN selected for its superior ability to improve the detection performance of the algorithm. The experimental results indicate that the YOLOv5-KCB algorithm achieved the highest accuracy in individual pig recognition, surpassing all other improved algorithms in average accuracy (IoU = 0.5). The accuracy of pig head and neck recognition was 98.4%, while that of pig face recognition was 95.1%, representing improvements of 4.8% and 13.8%, respectively, over the original YOLOv5 algorithm. Notably, the average accuracy of identifying the pig head and neck was consistently higher than that of pig face recognition across all algorithms, with YOLOv5-KCB demonstrating a 2.9% improvement. These results emphasize the potential of the YOLOv5-KCB algorithm for precise individual pig identification, facilitating subsequent intelligent management practices.
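
The 1 − IoU clustering distance mentioned above replaces the Euclidean metric in an otherwise ordinary Lloyd-style loop over box width/height pairs; a hedged NumPy sketch, not the authors' implementation:

```python
import numpy as np

def wh_iou(wh, centers):
    inter = np.minimum(wh[:, None, 0], centers[None, :, 0]) * \
            np.minimum(wh[:, None, 1], centers[None, :, 1])
    return inter / (wh.prod(1)[:, None] + centers.prod(1)[None, :] - inter)

def kmeans_anchors(wh, k=9, iters=100):
    centers = wh[np.random.choice(len(wh), k, replace=False)]
    for _ in range(iters):
        assign = np.argmin(1.0 - wh_iou(wh, centers), axis=1)  # 1 - IoU distance
        centers = np.array([wh[assign == i].mean(0) if (assign == i).any()
                            else centers[i] for i in range(k)])
    return centers
```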

13 pages, 5312 KiB  
Article
A Dragon Fruit Picking Detection Method Based on YOLOv7 and PSP-Ellipse
by Jialiang Zhou, Yueyue Zhang and Jinpeng Wang
Sensors 2023, 23(8), 3803; https://doi.org/10.3390/s23083803 - 7 Apr 2023
Cited by 27 | Viewed by 4199
Abstract
Dragon fruit is one of the most popular fruits in China and Southeast Asia. However, it is mainly picked manually, imposing a high labor intensity on farmers. The hard branches and complex postures of dragon fruit make automated picking difficult. For picking dragon fruits with diverse postures, this paper proposes a new dragon fruit detection method that not only identifies and locates the fruit, but also detects the endpoints at the head and root of the fruit, providing more visual information for a dragon fruit picking robot. First, YOLOv7 is used to locate and classify the dragon fruit. Then, we propose a PSP-Ellipse method to further detect the endpoints of the dragon fruit, comprising dragon fruit segmentation via PSPNet, endpoint positioning via an ellipse fitting algorithm, and endpoint classification via ResNet. Experiments were conducted to test the proposed method. In dragon fruit detection, the precision, recall, and average precision of YOLOv7 are 0.844, 0.924, and 0.932, respectively, and YOLOv7 performs better than several other models. In dragon fruit segmentation, the performance of PSPNet is better than that of some other commonly used semantic segmentation models, with a segmentation precision, recall, and mean intersection over union of 0.959, 0.943, and 0.906, respectively. In endpoint detection, the distance error and angle error of endpoint positioning based on ellipse fitting are 39.8 pixels and 4.3°, and the classification accuracy of endpoints based on ResNet is 0.92. The proposed PSP-Ellipse method is a marked improvement over two keypoint regression methods based on ResNet and UNet. Orchard picking experiments verified that the proposed method is effective. The detection method proposed in this paper not only advances the automatic picking of dragon fruit, but also provides a reference for the detection of other fruits.
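
The ellipse-fitting step for endpoint positioning can be approximated with OpenCV by fitting an ellipse to the PSPNet mask and taking the two ends of the major axis as candidate head/root endpoints; the recipe below is a simplification (OpenCV's axis/angle conventions may need adjusting), not the paper's PSP-Ellipse code:

```python
import cv2
import numpy as np

def fruit_endpoints(mask_u8):
    cnts, _ = cv2.findContours(mask_u8, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    (cx, cy), (w, h), ang = cv2.fitEllipse(max(cnts, key=cv2.contourArea))
    theta = np.deg2rad(ang if w > h else ang + 90.0)  # major-axis direction
    d = np.array([np.cos(theta), np.sin(theta)]) * max(w, h) / 2.0
    c = np.array([cx, cy])
    return c + d, c - d  # the two major-axis endpoints
```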

14 pages, 2838 KiB  
Article
Automatic Crop Canopy Temperature Measurement Using a Low-Cost Image-Based Thermal Sensor: Application in a Pomegranate Orchard under a Permanent Shade Net House
by Jaime Giménez-Gallego, Juan D. González-Teruel, Pedro J. Blaya-Ros, Ana B. Toledo-Moreo, Rafael Domingo-Miguel and Roque Torres-Sánchez
Sensors 2023, 23(6), 2915; https://doi.org/10.3390/s23062915 - 8 Mar 2023
Cited by 3 | Viewed by 2816
Abstract
Water scarcity in arid and semi-arid areas has led to the development of regulated deficit irrigation (RDI) strategies for most species of fruit trees in order to improve water productivity. For successful implementation, these strategies require continuous feedback on the soil and crop water status. This feedback is provided by physical indicators from the soil–plant–atmosphere continuum, as in the case of the crop canopy temperature, which can be used for the indirect estimation of crop water stress. Infrared radiometers (IRs) are considered the reference tool for temperature-based water status monitoring in crops. Alternatively, in this paper, we assess the performance of a low-cost thermal sensor based on thermographic imaging technology for the same purpose. The thermal sensor was tested in field conditions by performing continuous measurements on pomegranate trees (Punica granatum L. 'Wonderful') and was compared with a commercial IR. A strong correlation (R² = 0.976) between the two sensors was obtained, demonstrating the suitability of the experimental thermal sensor for monitoring the crop canopy temperature for irrigation management.

16 pages, 7920 KiB  
Article
Design and Experiments of a Real-Time Bale Density Monitoring System Based on Dynamic Weighing
by Jianjun Yin, Zhijian Chen, Chao Liu, Maile Zhou and Lu Liu
Sensors 2023, 23(4), 1778; https://doi.org/10.3390/s23041778 - 5 Feb 2023
Cited by 6 | Viewed by 2287
Abstract
Bale density is one of the main performance indicators for measuring the quality of baler operation. In this study, a real-time bale density monitoring system was designed to address the difficulty of measuring bale density on round balers in real time. Firstly, a weighing calculation model for the rolling and sliding stage of the bale was established, and the dynamic characteristics of the contact between the bale and the inclined surface were analyzed based on an ADAMS dynamics simulation. Then, a real-time bale density monitoring system based on the contact pressure of the inclined surface, attitude angle measurement, and hydraulic monitoring of the cylinder was constructed, and the accuracy of the weighing model was confirmed. The system was used to observe and analyze the changes in the pitch angle of the carrier table and the oil pressure in the rod chamber of the backpack cylinder during operation of the round baler. Finally, the monitoring system was calibrated and dynamic calibration equations were obtained. The results show that the maximum error between the calculated value of the original weighing model and the actual weight was 3.63%, while the maximum error of the weighing model corrected by the calibration equations was 3.40%, satisfying the required measurement accuracy. Overall, the system was highly accurate and met the practical needs of bale weighing in the field.

21 pages, 7806 KiB  
Article
Water Stress Index Detection Using a Low-Cost Infrared Sensor and Excess Green Image Processing
by Rodrigo Leme de Paulo, Angel Pontin Garcia, Claudio Kiyoshi Umezu, Antonio Pires de Camargo, Fabrício Theodoro Soares and Daniel Albiero
Sensors 2023, 23(3), 1318; https://doi.org/10.3390/s23031318 - 24 Jan 2023
Cited by 6 | Viewed by 3733
Abstract
Precision irrigation (PI) is a promising technique for monitoring and controlling water use that makes it possible to meet crop water requirements based on site-specific data. However, implementing PI requires precise evapotranspiration data. The detection and monitoring of crop water stress can be achieved by several methods, one of the most interesting being the use of infrared (IR) thermometry combined with estimation of the Crop Water Stress Index (CWSI). However, conventional IR equipment is expensive, so the objective of this paper is to present the development of a new low-cost water stress detection system using leaf temperature (TL) indices obtained by crossing the responses of infrared sensors with image processing. The results demonstrated that it is possible to use low-cost IR sensors with a directional field of view (FoV) to measure plant temperature, generate thermal maps, and identify water stress conditions. The leaf temperature maps, generated from the IR sensor readings over the plant segmentation in the RGB image, were validated against thermal images. Furthermore, the estimated CWSI is consistent with results in the literature.
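
The CWSI referred to above is commonly computed by normalizing the canopy temperature between a non-water-stressed ("wet") baseline and a fully stressed ("dry") baseline; a worked sketch with placeholder temperatures:

```python
def cwsi(t_canopy, t_wet, t_dry):
    """0 = no stress, 1 = maximal stress."""
    return (t_canopy - t_wet) / (t_dry - t_wet)

print(cwsi(t_canopy=31.0, t_wet=27.0, t_dry=37.0))  # -> 0.4
```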

21 pages, 6898 KiB  
Article
YOLOv5s-FP: A Novel Method for In-Field Pear Detection Using a Transformer Encoder and Multi-Scale Collaboration Perception
by Yipu Li, Yuan Rao, Xiu Jin, Zhaohui Jiang, Yuwei Wang, Tan Wang, Fengyi Wang, Qing Luo and Lu Liu
Sensors 2023, 23(1), 30; https://doi.org/10.3390/s23010030 - 20 Dec 2022
Cited by 12 | Viewed by 2382
Abstract
Precise pear detection and recognition is an essential step toward modernizing orchard management. However, due to the ubiquitous occlusion in orchards and the various locations of image acquisition, the pears in the acquired images may be quite small and occluded, causing high false detection and object loss rates. In this paper, a multi-scale collaborative perception network, YOLOv5s-FP (Fusion and Perception), is proposed for pear detection, coupling local and global features. Specifically, a pear dataset with a high proportion of small and occluded pears was constructed, comprising 3680 images acquired with cameras mounted on a ground tripod and a UAV platform. The cross-stage partial (CSP) module was optimized to extract global features through a transformer encoder, which were then fused with local features by an attentional feature fusion mechanism. Subsequently, a modified path aggregation network oriented toward the collaborative perception of multi-scale features was proposed by incorporating the transformer encoder, the optimized CSP, and new skip connections. The quantitative results of utilizing YOLOv5s-FP for pear detection were compared with those of other typical object detection networks of the YOLO series, recording the highest average precision of 96.12% with lower detection time and computational cost. In qualitative experiments, the proposed network achieved superior visual performance with stronger robustness to changes in occlusion and illumination conditions, particularly providing the ability to detect pears of different sizes in highly dense, overlapping environments and non-normal illumination areas. Therefore, the proposed YOLOv5s-FP network is practicable for detecting in-field pears in a real-time and accurate way and could be an advantageous component of the technology for monitoring pear growth status and implementing automated harvesting in unmanned orchards.

21 pages, 16429 KiB  
Article
Sugarcane-Seed-Cutting System Based on Machine Vision in Pre-Seed Mode
by Da Wang, Rui Su, Yanjie Xiong, Yuwei Wang and Weiwei Wang
Sensors 2022, 22(21), 8430; https://doi.org/10.3390/s22218430 - 2 Nov 2022
Cited by 7 | Viewed by 5479
Abstract
China is the world's third-largest producer of sugarcane, slightly behind Brazil and India. As an important cash crop in China, sugarcane has always been the main source of sugar, a basic strategic material. The planting method used for sugarcane in China is mainly the pre-cutting planting mode. However, this technology suffers from problems such as low cutting efficiency and poor cutting quality, which greatly affect the planting quality of sugarcane. To address these problems, a sugarcane-seed-cutting device was proposed, and a sugarcane-seed-cutting system based on automatic identification technology was designed. The system consists of a sugarcane-cutting platform, a seed-cutting device, a visual inspection system, and a control system. The visual inspection system adopts the YOLOv5 network model to identify and detect the eustipes of the sugarcane, and the seed-cutting device is composed of a self-tensioning conveying mechanism, a reciprocating crank-slider transmission mechanism, and a high-speed rotary cutting mechanism, so that the device can cut sugarcane seeds of different diameters. Tests show that the recognition rate of sugarcane seed cutting is no less than 94.3%, the accuracy rate is between 94.3% and 100%, and the average accuracy is 98.2%. The bud injury rate is no higher than 3.8%, while the average cutting time of a single seed is about 0.7 s, which shows that the cutting system achieves high cutting and recognition rates with a low injury rate. These findings have important application value for promoting the development of the sugarcane pre-cutting planting mode and sugarcane planting technology.
