
Perception and Imaging for Smart Agriculture

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Sensing and Imaging".

Deadline for manuscript submissions: 31 December 2024 | Viewed by 12059

Special Issue Editors


Guest Editor
College of Engineering, China Agricultural University, Beijing 100083, China
Interests: hyperspectral imaging; machine vision; near infrared spectroscopy; nondestructive detection; sorting; instruments and equipment
Guest Editor
College of Mechanical and Electrical Engineering, Zhongkai University of Agriculture and Engineering, Guangzhou, China
Interests: hyperspectral imaging; least squares approximations; principal component analysis; LED lamps; Taguchi methods; agricultural products; biological techniques; crops; curve fitting; error correction codes; food

Guest Editor
School of Automation, Hangzhou Dianzi University, Hangzhou 310018, China
Interests: Vis-NIR spectroscopy; computer vision; machine learning; plant phenomics

Special Issue Information

Dear Colleagues,

Perception and sensing technologies that acquire information in the field and extract value from it are the cornerstone of smart agriculture. They play a crucial role in monitoring seed quality and soil and crop nutrients; in the instant identification of disease, insect, weed and environmental stress; in the precise, targeted application of water, fertilizer and agrochemicals; and in the navigation, target identification, positioning and detachment steps of automated harvesting by robots or autonomous machinery. Meanwhile, various advanced imaging technologies have emerged in recent years, and these play an important role in generating prescription maps, enabling remote monitoring, and supporting real-time dynamic analysis of the biochemical and physical processes of animal and plant cells, tissues, and organs.

The new generation of sensing and imaging technology will give smart agriculture keener eyes and ears. This Special Issue is devoted to collecting original papers on advanced sensing and imaging technology in the field of smart agriculture, as well as articles on the latest international developments in related technologies, including but not limited to those mentioned above. Applied research connecting other disciplines with agriculture is also encouraged.

Prof. Dr. Wei Wang
Dr. Seung-Chul Yoon
Dr. Xuan Chu
Dr. Hongwei Sun
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • perception
  • sensors
  • advanced imaging technology
  • smart agriculture
  • agricultural information acquisition
  • artificial intelligence
  • agricultural robot
  • deep learning

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (7 papers)


Research

18 pages, 40816 KiB  
Article
Enhancing Grapevine Node Detection to Support Pruning Automation: Leveraging State-of-the-Art YOLO Detection Models for 2D Image Analysis
by Francisco Oliveira, Daniel Queirós da Silva, Vítor Filipe, Tatiana Martins Pinho, Mário Cunha, José Boaventura Cunha and Filipe Neves dos Santos
Sensors 2024, 24(21), 6774; https://doi.org/10.3390/s24216774 - 22 Oct 2024
Viewed by 582
Abstract
Automating pruning tasks entails overcoming several challenges, encompassing not only robotic manipulation but also environment perception and detection. To achieve efficient pruning, robotic systems must accurately identify the correct cutting points. A possible method to define these points is to choose the cutting location based on the number of nodes present on the targeted cane. For this purpose, in grapevine pruning, the nodes present on the primary canes of the grapevines must be correctly identified. In this paper, a novel method of node detection in grapevines is proposed using four distinct state-of-the-art versions of the YOLO detection model: YOLOv7, YOLOv8, YOLOv9 and YOLOv10. These models were trained on a public dataset with images containing artificial backgrounds and afterwards validated on different cultivars of grapevines from two distinct Portuguese viticulture regions with cluttered backgrounds. This allowed us to evaluate the robustness of the algorithms in detecting nodes in diverse environments, compare the performance of the YOLO models used, and create a publicly available dataset of grapevines obtained in Portuguese vineyards for node detection. Overall, all the models used were capable of correct node detection in images of grapevines from the three distinct datasets. Considering the trade-off between accuracy and inference speed, the YOLOv7 model proved to be the most robust in detecting nodes in 2D images of grapevines, achieving F1-score values between 70% and 86.5% with inference times of around 89 ms for an input size of 1280 × 1280 px. Considering these results, this work contributes an efficient approach to real-time node detection for further implementation in an autonomous robotic pruning system. Full article
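For reference, the F1-score used above to compare the YOLO variants is the harmonic mean of precision and recall; a minimal sketch with hypothetical detection counts (not taken from the paper):

```python
def f1_score(tp: int, fp: int, fn: int) -> float:
    """Harmonic mean of precision and recall for a detector."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Hypothetical counts for one validation set (illustrative only)
print(round(f1_score(tp=86, fp=14, fn=12), 3))
```

Note that F1 weighs false positives and false negatives equally, which suits node detection where both missed and spurious nodes mislead the cut-point choice.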
(This article belongs to the Special Issue Perception and Imaging for Smart Agriculture)

16 pages, 4205 KiB  
Article
Development of Multimodal Fusion Technology for Tomato Maturity Assessment
by Yang Liu, Chaojie Wei, Seung-Chul Yoon, Xinzhi Ni, Wei Wang, Yizhe Liu, Daren Wang, Xiaorong Wang and Xiaohuan Guo
Sensors 2024, 24(8), 2467; https://doi.org/10.3390/s24082467 - 11 Apr 2024
Cited by 1 | Viewed by 1835
Abstract
The maturity of fruits and vegetables such as tomatoes significantly impacts indicators of their quality, such as taste, nutritional value, and shelf life, making maturity determination vital in agricultural production and the food processing industry. Tomatoes mature from the inside out, leading to an uneven ripening process inside and outside, and these situations make it very challenging to judge their maturity with the help of a single modality. In this paper, we propose a deep learning-assisted multimodal data fusion technique combining color imaging, spectroscopy, and haptic sensing for the maturity assessment of tomatoes. The method uses feature fusion to integrate feature information from images, near-infrared spectra, and haptic modalities into a unified feature set and then classifies the maturity of tomatoes through deep learning. Each modality independently extracts features, capturing the tomatoes’ exterior color from color images, internal and surface spectral features linked to chemical compositions in the visible and near-infrared spectra (350 nm to 1100 nm), and physical firmness using haptic sensing. By combining preprocessed and extracted features from multiple modalities, data fusion creates a comprehensive representation of information from all three modalities using an eigenvector in an eigenspace suitable for tomato maturity assessment. Then, a fully connected neural network is constructed to process these fused data. This neural network model achieves 99.4% accuracy in tomato maturity classification, surpassing single-modal methods (color imaging: 94.2%; spectroscopy: 87.8%; haptics: 87.2%). For internal and external maturity unevenness, the classification accuracy reaches 94.4%, demonstrating effective results. A comparative analysis of performance between multimodal fusion and single-modal methods validates the stability and applicability of the multimodal fusion technique. These findings demonstrate the key benefits of multimodal fusion in terms of improving the accuracy of tomato ripening classification and provide a strong theoretical and practical basis for applying multimodal fusion technology to classify the quality and maturity of other fruits and vegetables. Utilizing deep learning (a fully connected neural network) for processing multimodal data provides a new and efficient non-destructive approach for the massive classification of agricultural and food products. Full article
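The feature-level fusion the authors describe — concatenating per-modality feature vectors into a single fused vector before classification — can be sketched roughly as follows; the dimensions and the per-modality z-score normalization here are illustrative assumptions, not the paper's exact pipeline:

```python
import numpy as np

def fuse_features(color_feat, spectral_feat, haptic_feat):
    """Concatenate standardized per-modality feature vectors into one fused vector."""
    def standardize(x):
        x = np.asarray(x, dtype=float)
        return (x - x.mean()) / (x.std() + 1e-8)  # guard against zero variance
    return np.concatenate([standardize(color_feat),
                           standardize(spectral_feat),
                           standardize(haptic_feat)])

# Illustrative dimensions (not the paper's): 64 color, 128 spectral, 8 haptic features
fused = fuse_features(np.random.rand(64), np.random.rand(128), np.random.rand(8))
print(fused.shape)  # (200,)
```

The fused vector would then be fed to a fully connected classifier; normalizing each modality first keeps the high-dimensional spectral block from dominating the smaller haptic block.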
(This article belongs to the Special Issue Perception and Imaging for Smart Agriculture)

14 pages, 7138 KiB  
Article
Design and Development of Large-Band Dual-MSFA Sensor Camera for Precision Agriculture
by Vahid Mohammadi, Pierre Gouton, Matthieu Rossé and Kossi Kuma Katakpe
Sensors 2024, 24(1), 64; https://doi.org/10.3390/s24010064 - 22 Dec 2023
Cited by 3 | Viewed by 1618
Abstract
The optimal design and construction of multispectral cameras can remarkably reduce the costs of spectral imaging systems and efficiently decrease the amount of image processing and analysis required. Also, multispectral imaging provides effective imaging information through higher-resolution images. This study aimed to develop novel, multispectral cameras based on Fabry–Pérot technology for agricultural applications such as plant/weed separation, ripeness estimation, and disease detection. Two multispectral cameras were developed, covering visible and near-infrared ranges from 380 nm to 950 nm. A monochrome image sensor with a resolution of 1600 × 1200 pixels was used, and two multispectral filter arrays were developed and mounted on the sensors. The filter pitch was 4.5 μm, and each multispectral filter array consisted of eight bands. Band selection was performed using a genetic algorithm. For VIS and NIR filters, maximum RMS values of 0.0740 and 0.0986 were obtained, respectively. The spectral response of the filters in VIS was significant; however, in NIR, the spectral response of the filters after 830 nm decreased by half. In total, these cameras provided 16 spectral images in high resolution for agricultural purposes. Full article
(This article belongs to the Special Issue Perception and Imaging for Smart Agriculture)

15 pages, 3911 KiB  
Article
Non-Destructive Determination of Bayberry Sugar and Acidity by Hyperspectral Remote Sensing of Si-Sensor and Low-Cost Portable Instrument Development
by Jiaoru Wang, Weizhi Wu, Shoupeng Tian, Yadong He, Yun Huang, Fumin Wang and Yao Zhang
Sensors 2023, 23(24), 9822; https://doi.org/10.3390/s23249822 - 14 Dec 2023
Viewed by 1001
Abstract
The digitalization of information is crucial for upgrading the bayberry agriculture industry, yet the lack of low-cost sensing equipment for bayberry is a bottleneck for the industry's digital development. Existing rapid, non-destructive devices for detecting fruit acidity and sugar content mainly rely on near-infrared and mid-infrared spectral characteristics and use expensive InGaAs sensors, which are difficult to promote and apply in the bayberry industry. This study uses the hyperspectral range of 454–998 nm in bayberry fruit to investigate the mechanism of fruit sugar and acidity detection and to develop a portable bayberry sugar and acidity detection device using a Si sensor, in order to achieve low-cost quality parameter detection of bayberry fruit. The results show that, based on the hyperspectral data of bayberry fruit, the sensitive wavelength for sugar content inversion is 610 nm with an inversion accuracy (RMSE) of 1.399 °Brix, and the sensitive wavelength for pH inversion is 570 nm with an inversion accuracy (RMSE) of 0.1329. Based on this spectroscopic detection mechanism and spectral dimension-reduction methods, combined with a low-cost Si sensor (400–1000 nm), a low-cost, non-destructive, portable bayberry sugar and acidity detection device was developed, with detection accuracies of 94.74% and 97.14%, respectively. This detector provides a low-cost, portable, non-destructive quality-detection instrument suited to the small-scale growers who dominate bayberry cultivation, accelerating the digitalization of the bayberry industry. Full article
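The single-wavelength inversion described above amounts to a univariate regression of the quality parameter on reflectance at the sensitive band; a rough sketch with made-up data (the reflectance and °Brix values are illustrative, not the paper's fitted model):

```python
import math

def fit_linear(x, y):
    """Ordinary least squares fit y = a*x + b for a single predictor."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    a = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    return a, my - a * mx

def rmse(y_true, y_pred):
    """Root mean square error between measured and inverted values."""
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true))

# Made-up reflectance at 610 nm vs. measured sugar content (°Brix)
refl = [0.21, 0.25, 0.30, 0.34, 0.40]
brix = [12.5, 11.8, 10.9, 10.1, 9.0]
a, b = fit_linear(refl, brix)
pred = [a * r + b for r in refl]
print(round(rmse(brix, pred), 3))
```

The RMSE of such a fit is exactly the "inversion accuracy" figure quoted for the 610 nm and 570 nm bands.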
(This article belongs to the Special Issue Perception and Imaging for Smart Agriculture)

18 pages, 3202 KiB  
Article
Rapid Detection of Total Viable Count in Intact Beef Dishes Based on NIR Hyperspectral Hybrid Model
by Wensong Wei, Fengjuan Zhang, Fangting Fu, Shuo Sang and Zhen Qiao
Sensors 2023, 23(23), 9584; https://doi.org/10.3390/s23239584 - 3 Dec 2023
Cited by 5 | Viewed by 1288
Abstract
The total viable count (TVC) of bacteria is an important index to evaluate the freshness and safety of dishes. To improve the accuracy and robustness of spectroscopic detection of total viable bacteria count in a complex system, a new method based on a near-infrared (NIR) hyperspectral hybrid model and Support Vector Machine (SVM) algorithms was developed in this study to directly determine the total viable count in intact beef dish samples. Diffuse reflectance data of intact and crushed samples were measured by NIR hyperspectral imaging and processed using Multiplicative Scattering Correction (MSC) and Competitive Adaptive Reweighted Sampling (CARS). Kennard–Stone (KS) and Samples Set Partitioning Based on Joint X-Y Distance (SPXY) algorithms were used to select the optimal number of standard samples transferred by the model, combined with the root mean square error. The crushed samples were transferred into the intact-sample prediction model through the Direct Standardization (DS) algorithm, and a spectral hybrid model of crushed and intact samples was established. The results showed that the Determination Coefficient of Prediction (RP2) value of the total samples prediction set increased from 0.5088 to 0.8068, and the value of the Root Mean Square Error of Prediction (RMSEP) decreased from 0.2454 to 0.1691 log10 CFU/g. After establishing the hybrid model, the RMSEP value decreased by a further 9.23%, and the values of Relative Percent Deviation (RPD) and Range Error Ratio (RER) increased by 12.12% and 10.09, respectively. These results show that the TVC in stewed beef samples can be non-destructively determined based on the DS model transfer method combined with the hybrid model strategy. This study provides a reference for solving the problem of poor accuracy and reliability of prediction models in heterogeneous samples. Full article
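Direct Standardization (DS), used above to map crushed-sample spectra into the intact-sample model space, estimates a transfer matrix F such that X_slave · F ≈ X_master from a set of shared standard samples; a minimal numpy sketch on synthetic spectra (not the paper's data):

```python
import numpy as np

def ds_transfer_matrix(X_slave, X_master):
    """Least-squares transfer matrix F so that X_slave @ F approximates X_master."""
    return np.linalg.pinv(X_slave) @ X_master

rng = np.random.default_rng(0)
X_master = rng.random((12, 20))               # 12 standard samples, 20 wavelengths
T_true = np.eye(20) + 0.01 * rng.random((20, 20))
X_slave = X_master @ np.linalg.inv(T_true)    # simulate a measurement-condition shift

F = ds_transfer_matrix(X_slave, X_master)
X_corrected = X_slave @ F                      # spectra mapped into the master space
```

In practice the standard-sample set is chosen with KS or SPXY, as the abstract notes, so that the transfer samples span the spectral variation of both sample states.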
(This article belongs to the Special Issue Perception and Imaging for Smart Agriculture)

14 pages, 2852 KiB  
Article
An Electrochemical Sensor Based on Three-Dimensional Porous Reduced Graphene and Ion Imprinted Polymer for Trace Cadmium Determination in Water
by Linzhe Wang, Jingfang Hu, Wensong Wei, Shuyu Xiao, Jiyang Wang, Yu Song, Yansheng Li, Guowei Gao and Lei Qin
Sensors 2023, 23(23), 9561; https://doi.org/10.3390/s23239561 - 1 Dec 2023
Cited by 2 | Viewed by 1210
Abstract
Three-dimensional (3D) porous graphene-based materials have displayed attractive electrochemical catalysis and sensing performances, benefiting from their high porosity, large surface area, and excellent electrical conductivity. In this work, a novel electrochemical sensor based on 3D porous reduced graphene (3DPrGO) and ion-imprinted polymer (IIP) was developed for trace cadmium ion (Cd(II)) detection in water. The 3DPrGO was synthesized in situ at a glassy carbon electrode (GCE) surface using a polystyrene (PS) colloidal crystal template and the electrodeposition method. Then, IIP film was further modified on the 3DPrGO by electropolymerization to make it suitable for detecting Cd(II). Owing to the abundant nanopores and good electron transport of the 3DPrGO, as well as the specific recognition of Cd(II) by the IIP, sensitive determination of trace Cd(II) at PoPD-IIP/3DPrGO/GCE was achieved. The proposed sensor exhibited linear Cd(II) responses ranging from 1 to 100 μg/L (R2 = 99.7%). The limit of detection (LOD) was 0.11 μg/L, about 30 times lower than the drinking water standard set by the World Health Organization (WHO). Moreover, PoPD-IIP/3DPrGO/GCE was applied for the detection of Cd(II) in actual water samples. The satisfactory recoveries (97–99.6%) and relative standard deviations (RSD, 3.5–5.7%) make the proposed sensor a promising candidate for rapid and on-site water monitoring. Full article
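For context, a limit of detection such as the 0.11 μg/L reported above is commonly estimated from a linear calibration as LOD = 3.3·σ/slope, where σ is the standard deviation of the blank signal; a sketch with synthetic calibration data (not the authors' measurements):

```python
def lod_from_calibration(conc, signal, sigma_blank):
    """Estimate limit of detection as 3.3 * sigma / slope of the calibration line."""
    n = len(conc)
    mx, my = sum(conc) / n, sum(signal) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(conc, signal)) / \
            sum((x - mx) ** 2 for x in conc)
    return 3.3 * sigma_blank / slope

# Synthetic Cd(II) calibration: signal rises ~0.8 units per ug/L (illustrative)
conc = [1, 5, 10, 50, 100]            # ug/L
signal = [0.9, 4.1, 8.2, 40.3, 80.1]  # arbitrary current units
print(round(lod_from_calibration(conc, signal, sigma_blank=0.03), 3))
```

A steeper calibration slope or a quieter blank both drive the LOD down, which is why the porous 3DPrGO's enhanced electron transport matters for trace detection.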
(This article belongs to the Special Issue Perception and Imaging for Smart Agriculture)

24 pages, 8173 KiB  
Article
Tea-YOLOv8s: A Tea Bud Detection Model Based on Deep Learning and Computer Vision
by Shuang Xie and Hongwei Sun
Sensors 2023, 23(14), 6576; https://doi.org/10.3390/s23146576 - 21 Jul 2023
Cited by 20 | Viewed by 3798
Abstract
Tea bud target detection is essential for mechanized selective harvesting. To address the challenge of low detection precision caused by the complex backgrounds of tea leaves, this paper introduces a novel model called Tea-YOLOv8s. First, multiple data augmentation techniques are employed to increase the amount of information in the images and improve their quality. Then, the Tea-YOLOv8s model combines deformable convolutions, attention mechanisms, and improved spatial pyramid pooling, thereby enhancing the model’s ability to learn complex object invariance, reducing interference from irrelevant factors, and enabling multi-feature fusion, resulting in improved detection precision. Finally, the improved YOLOv8 model is compared with other models to validate the effectiveness of the proposed improvements. The results demonstrate that the Tea-YOLOv8s model achieves a mean average precision of 88.27% and an inference time of 37.1 ms, with an increase in the parameters and calculation amount of 15.4 M and 17.5 G, respectively. In conclusion, although the proposed approach increases the model’s parameters and calculation amount, it significantly improves various aspects compared to mainstream YOLO detection models and has the potential to be applied in mechanized tea bud harvesting equipment. Full article
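The mean average precision (mAP) reported above is the mean, over classes, of the area under each class's precision-recall curve; a simplified all-point-interpolation sketch with illustrative PR points (not the paper's results):

```python
def average_precision(recall, precision):
    """Area under the precision-recall curve with all-point interpolation."""
    r = [0.0] + list(recall) + [1.0]
    p = [0.0] + list(precision) + [0.0]
    # Make precision monotonically non-increasing from right to left
    for i in range(len(p) - 2, -1, -1):
        p[i] = max(p[i], p[i + 1])
    # Sum rectangle areas over each recall increment
    return sum((r[i + 1] - r[i]) * p[i + 1] for i in range(len(r) - 1))

# Illustrative PR points for one class (not measured data)
recall = [0.2, 0.4, 0.6, 0.8]
precision = [1.0, 0.9, 0.75, 0.6]
print(round(average_precision(recall, precision), 3))  # 0.65
```

mAP is then the average of these per-class AP values; for a single-class detector like a tea-bud model, mAP reduces to the AP of that one class.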
(This article belongs to the Special Issue Perception and Imaging for Smart Agriculture)
