Application of Vision Technology and Artificial Intelligence in Smart Farming

A special issue of Agriculture (ISSN 2077-0472). This special issue belongs to the section "Digital Agriculture".

Deadline for manuscript submissions: closed (10 July 2023) | Viewed by 91924

Printed Edition Available!
A printed edition of this Special Issue is available here.

Special Issue Editors

Guest Editor
College of Artificial Intelligence, Nanjing Agricultural University, Nanjing 210031, China
Interests: microclimate analytics of poultry houses; intelligent agricultural equipment; smart farming; non-destructive detection of meat quality; agricultural robots

Guest Editor
School of Engineering, University of British Columbia, Okanagan, Kelowna, BC V1V 1V7, Canada
Interests: intelligent sensing, measurement, and instrumentation; diagnostics, prognostics, and health management; predictive maintenance; digital twins; computational intelligence and data/information fusion; non-destructive testing and evaluation; machine/computer vision; data analytics and machine learning

Guest Editor
School of Applied Meteorology, Nanjing University of Information Science and Technology, Nanjing 210044, China
Interests: climate-smart agriculture; AI meteorology

Guest Editor
Faculty of Engineering and Information Technology, University of Technology Sydney, Sydney, NSW 2007, Australia
Interests: prediction models; computer simulation

Guest Editor
College of Artificial Intelligence, Nanjing Agricultural University, Nanjing 210031, China
Interests: intelligent agricultural equipment; three-dimensional reconstruction

Special Issue Information

Dear Colleagues,

Computer vision (CV) and artificial intelligence (AI) have been gaining traction in agriculture. From reducing production costs through intelligent automation to boosting productivity, CV and AI have massive potential to enhance the overall functioning of smart farming. Monitoring and analyzing specific behaviors of livestock and poultry on large-scale farms with CV and AI can improve our knowledge of how intensively raised animals respond to modern management techniques, allowing for improved health, welfare, and performance. In the field of planting, CV approaches are required to extract plant phenotypes from images and to automate the detection of plants and plant organs, while AI approaches give growers new tools against pests and help guide crops toward harvest at peak ripeness. Realizing these benefits at scale requires considerable processing power.

This Special Issue focuses on the application of CV and AI in smart farming, including breeding and planting. Topics of interest include, but are not limited to, the following:

  • design and optimization of agricultural sensors;
  • behavior recognition of livestock and poultry based on vision technology and deep learning;
  • automation technology in agricultural equipment based on vision technology;
  • design and optimization of robots for livestock and poultry breeding based on vision technology and artificial intelligence;
  • non-destructive detection of meat quality;
  • agricultural big data analytics based on sensor data and deep learning.

Original research articles and reviews are welcome.

Dr. Xiuguo Zou
Dr. Zheng Liu
Dr. Xiaochen Zhu
Dr. Wentian Zhang
Dr. Yan Qian
Dr. Yuhua Li
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Agriculture is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • agricultural sensors
  • behavior recognition of livestock and poultry
  • agricultural automation equipment
  • agricultural intelligent robots
  • intelligent robotic arms
  • non-destructive detection of meat quality
  • agricultural big data analytics

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (15 papers)


Editorial


4 pages, 173 KiB  
Editorial
Application of Vision Technology and Artificial Intelligence in Smart Farming
by Xiuguo Zou, Zheng Liu, Xiaochen Zhu, Wentian Zhang, Yan Qian and Yuhua Li
Agriculture 2023, 13(11), 2106; https://doi.org/10.3390/agriculture13112106 - 6 Nov 2023
Cited by 3 | Viewed by 1828
Abstract
With the rapid advancement of technology, traditional farming is gradually transitioning into smart farming [...] Full article

Research


21 pages, 7799 KiB  
Article
Division of Cow Production Groups Based on SOLOv2 and Improved CNN-LSTM
by Guanying Cui, Lulu Qiao, Yuhua Li, Zhilong Chen, Zhenyu Liang, Chengrui Xin, Maohua Xiao and Xiuguo Zou
Agriculture 2023, 13(8), 1562; https://doi.org/10.3390/agriculture13081562 - 4 Aug 2023
Cited by 2 | Viewed by 1468
Abstract
Udder conformation traits interact with cow milk yield, and it is essential to study the udder characteristics at different levels of production to predict milk yield for managing cows on farms. This study aims to develop an effective method based on instance segmentation and an improved neural network to divide cow production groups according to udders of high- and low-yielding cows. Firstly, the SOLOv2 (Segmenting Objects by LOcations) method was utilized to finely segment the cow udders. Secondly, feature extraction and data processing were conducted to define several cow udder features. Finally, the improved CNN-LSTM (Convolution Neural Network-Long Short-Term Memory) neural network was adopted to classify high- and low-yielding udders. The research compared the improved CNN-LSTM model and the other five classifiers, and the results show that CNN-LSTM achieved an overall accuracy of 96.44%. The proposed method indicates that the SOLOv2 and CNN-LSTM methods combined with analysis of udder traits have the potential for assigning cows to different production groups. Full article
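As a rough illustration of the feature-extraction step between segmentation and classification, the sketch below computes a few geometric descriptors from a binary udder mask with NumPy (the feature names and the toy mask are assumptions, not the paper's actual feature set):

```python
import numpy as np

def udder_features(mask: np.ndarray) -> dict:
    """Toy geometric features from a binary udder mask (illustrative stand-in
    for the paper's feature-extraction step; feature names are assumptions)."""
    ys, xs = np.nonzero(mask)
    h = ys.max() - ys.min() + 1          # bounding-box height
    w = xs.max() - xs.min() + 1          # bounding-box width
    area = mask.sum()                    # segmented pixel count
    return {
        "area_ratio": area / mask.size,  # udder area relative to the image
        "aspect": w / h,                 # width-to-height ratio
        "fill": area / (w * h),          # how much of the box is udder
    }

mask = np.zeros((10, 10), dtype=np.uint8)
mask[2:8, 3:9] = 1                       # a 6x6 "udder" region
f = udder_features(mask)
```

Descriptors like these would then feed the downstream classifier in place of raw pixels.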

17 pages, 5363 KiB  
Article
Identifying an Image-Processing Method for Detection of Bee Mite in Honey Bee Based on Keypoint Analysis
by Hong Gu Lee, Min-Jee Kim, Su-bae Kim, Sujin Lee, Hoyoung Lee, Jeong Yong Sin and Changyeun Mo
Agriculture 2023, 13(8), 1511; https://doi.org/10.3390/agriculture13081511 - 28 Jul 2023
Cited by 5 | Viewed by 2079
Abstract
Economic and ecosystem issues associated with beekeeping may stem from bee mites rather than other bee diseases. The bee mites that attach to bees are small and reddish-brown in color, rendering them difficult to distinguish with the naked eye. Objective and rapid technologies to detect bee mites are therefore required, and image processing can considerably improve detection performance. This study proposes an image-processing method that increases the detection performance of bee mites. A keypoint detection algorithm was implemented to identify keypoint locations and frequencies in images of bees and bee mites. These parameters were analyzed to determine a rational measurement distance and image-processing method. The change in the number of keypoints was analyzed by applying five color-model conversions, histogram normalization, and two types of histogram equalization. The performance of the keypoints was verified by matching images of infested bees and mites. Among the 30 image-processing cases examined, the method applying normalization and equalization to the RGB color-model image produced consistently high-quality data and the most valid keypoints. The optimal image processing was most effective for data measured at 300 mm within the 300–1100 mm range. The results of this study show that diverse image-processing techniques can significantly enhance the quality of bee mite detection. This approach can be used in conjunction with a deep-learning object detection algorithm to monitor bee mites and diseases. Full article
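Histogram equalization, one of the enhancement operations compared in the study, can be sketched in NumPy as follows (a generic global-equalization implementation, not the authors' code):

```python
import numpy as np

def equalize(img: np.ndarray) -> np.ndarray:
    """Global histogram equalization for an 8-bit grayscale image — one of
    the enhancement steps compared in the study (generic sketch)."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[np.nonzero(cdf)[0][0]]            # first non-zero CDF value
    # Map each grey level through the normalized CDF.
    lut = np.clip(np.round((cdf - cdf_min) / (img.size - cdf_min) * 255),
                  0, 255).astype(np.uint8)
    return lut[img]

# A low-contrast image (values 100..131) stretches to the full 0..255 range,
# which tends to expose more detectable keypoints.
img = (np.arange(64).reshape(8, 8) % 32 + 100).astype(np.uint8)
out = equalize(img)
```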

19 pages, 4648 KiB  
Article
Soybean-MVS: Annotated Three-Dimensional Model Dataset of Whole Growth Period Soybeans for 3D Plant Organ Segmentation
by Yongzhe Sun, Zhixin Zhang, Kai Sun, Shuai Li, Jianglin Yu, Linxiao Miao, Zhanguo Zhang, Yang Li, Hongjie Zhao, Zhenbang Hu, Dawei Xin, Qingshan Chen and Rongsheng Zhu
Agriculture 2023, 13(7), 1321; https://doi.org/10.3390/agriculture13071321 - 28 Jun 2023
Cited by 9 | Viewed by 3126
Abstract
The study of plant phenotypes based on 3D models has become an important research direction for automatic plant phenotype acquisition. Building a labeled three-dimensional dataset of the whole growth period can help the development of 3D crop plant models in point cloud segmentation. Therefore, the demand for 3D whole plant growth period model datasets with organ-level markers is growing rapidly. In this study, five different soybean varieties were selected, and three-dimensional reconstruction was carried out for the whole growth period (13 stages) of soybean using multi-view stereo (MVS) technology. The leaves, main stems, and stems of the obtained three-dimensional models were manually labeled. Finally, two point cloud semantic segmentation models, RandLA-Net and BAAF-Net, were trained on the dataset. In this paper, 102 soybean stereoscopic plant models were obtained. A dataset with original point clouds was constructed, and the subsequent analysis confirmed that the number of plant point clouds was consistent with the corresponding real plant development. At the same time, a 3D dataset named Soybean-MVS with labels for the whole soybean growth period was constructed. Test results of 88.52% and 87.45% mAcc verified the usability of this dataset. In order to further promote the study of point cloud segmentation and phenotype acquisition of soybean plants, this paper proposed an annotated three-dimensional model dataset for the whole growth period of soybean for 3D plant organ segmentation. The release of the dataset can provide an important basis for proposing an updated, highly accurate, and efficient 3D crop model segmentation algorithm. In the future, this dataset will provide important and usable basic data support for the development of three-dimensional point cloud segmentation and phenotype automatic acquisition technology of soybeans. Full article
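The reported mAcc metric is the mean of per-class point accuracies; a minimal sketch (standard definition, with made-up labels) is:

```python
import numpy as np

def mean_class_accuracy(pred, gt, num_classes):
    """Mean per-class accuracy (mAcc), the metric reported for the
    Soybean-MVS benchmark (standard definition; not the authors' code)."""
    accs = []
    for c in range(num_classes):
        m = gt == c
        if m.any():                       # skip classes absent from the scene
            accs.append((pred[m] == c).mean())
    return float(np.mean(accs))

gt   = np.array([0, 0, 1, 1, 2, 2])       # e.g. 0=leaf, 1=main stem, 2=stem
pred = np.array([0, 0, 1, 2, 2, 2])       # one main-stem point mislabeled
macc = mean_class_accuracy(pred, gt, 3)
```

Averaging per class keeps the large leaf class from swamping the rarer stem classes.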

19 pages, 10888 KiB  
Article
Method for Classifying Apple Leaf Diseases Based on Dual Attention and Multi-Scale Feature Extraction
by Jie Ding, Cheng Zhang, Xi Cheng, Yi Yue, Guohua Fan, Yunzhi Wu and Youhua Zhang
Agriculture 2023, 13(5), 940; https://doi.org/10.3390/agriculture13050940 - 25 Apr 2023
Cited by 5 | Viewed by 1742
Abstract
Image datasets acquired from orchards are commonly characterized by intricate backgrounds and an imbalanced distribution of disease categories, resulting in suboptimal recognition outcomes when attempting to identify apple leaf diseases. In this regard, we propose a novel apple leaf disease recognition model, named RFCA ResNet, equipped with a dual attention mechanism and multi-scale feature extraction capacity, to more effectively tackle these issues. The dual attention mechanism incorporated into RFCA ResNet is a potent tool for mitigating the detrimental effects of complex backgrounds on recognition outcomes. Additionally, by utilizing the class balance technique in conjunction with focal loss, the adverse effects of an unbalanced dataset on classification accuracy can be effectively minimized. The RFB module enables us to expand the receptive field and achieve multi-scale feature extraction, both of which are critical for the superior performance of RFCA ResNet. Experimental results demonstrate that RFCA ResNet significantly outperforms the standard CNN network model, exhibiting marked improvements of 89.61%, 56.66%, 72.76%, and 58.77% in accuracy rate, precision rate, recall rate, and F1 score, respectively. It also outperforms other approaches, generalizes well, and offers both theoretical relevance and practical value. Full article
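The focal-loss component used against class imbalance can be sketched in NumPy as follows (the probabilities and class weights are illustrative; with γ = 0 and unit weights it reduces to ordinary cross-entropy):

```python
import numpy as np

def focal_loss(probs, labels, alpha, gamma=2.0):
    """Class-balanced focal loss, used conceptually in the paper to counter
    class imbalance; `alpha` holds per-class weights (values illustrative)."""
    p = probs[np.arange(len(labels)), labels]      # probability of true class
    w = alpha[labels]                              # per-class balance weight
    return float(np.mean(-w * (1 - p) ** gamma * np.log(p)))

probs  = np.array([[0.9, 0.1], [0.6, 0.4]])        # softmax outputs
labels = np.array([0, 1])
alpha  = np.array([1.0, 1.0])
# With gamma=0 and unit weights, focal loss reduces to cross-entropy.
ce = focal_loss(probs, labels, alpha, gamma=0.0)
fl = focal_loss(probs, labels, alpha, gamma=2.0)   # easy examples down-weighted
```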

17 pages, 3925 KiB  
Article
Research on Provincial-Level Soil Moisture Prediction Based on Extreme Gradient Boosting Model
by Yifang Ren, Fenghua Ling and Yong Wang
Agriculture 2023, 13(5), 927; https://doi.org/10.3390/agriculture13050927 - 24 Apr 2023
Cited by 5 | Viewed by 1796
Abstract
As one of the physical quantities concerned in agricultural production, soil moisture can effectively guide field irrigation and evaluate the distribution of water resources for crop growth in various regions. However, the spatial variability of soil moisture is dramatic, and its time series data are highly noisy, nonlinear, and nonstationary, and thus hard to predict accurately. In this study, taking Jiangsu Province in China as an example, the data of 70 meteorological and soil moisture automatic observation stations from 2014 to 2022 were used to establish prediction models of 0–10 cm soil relative humidity (RHs10cm) via the extreme gradient boosting (XGBoost) algorithm. Before constructing the model, according to the measured soil physical characteristics, the soil moisture observation data were divided into three categories: sandy soil, loam soil, and clay soil. Based on the impacts of various factors on the soil water budget balance, 14 predictors were chosen for constructing the model, among which atmospheric and soil factors accounted for 10 and 4, respectively. Considering the differences in soil physical characteristics and the lagged effects of environmental impacts, the best influence times of the predictors for different soil types were determined through correlation analysis to improve the rationality of the model construction. To better evaluate the importance of soil factors, two sets of models (Model_soil&atmo and Model_atmo) were designed by taking soil factors as optional predictors put into the XGBoost model. Meanwhile, the contributions of predictors to the prediction results were analyzed with Shapley additive explanation (SHAP). Six prediction effect indicators, as well as a typical drought process that happened in 2022, were analyzed to evaluate the prediction accuracy. The results show that the time with the highest correlations between environmental predictors and RHs10cm varied but was similar between soil types. 
Among these predictors, the atmospheric factors of maximum air temperature (Tamax), cumulative precipitation (Psum), and air relative humidity (RHa), which critically affect the variation in soil moisture, had relatively high contribution rates in both models. In addition, adding soil factors could improve the accuracy of soil moisture prediction. To a certain extent, the XGBoost model performed better when compared with artificial neural networks (ANNs), random forests (RFs), and support vector machines (SVMs). The values of the correlation coefficient (R), root mean square error (RMSE), mean absolute error (MAE), mean absolute relative error (MARE), Nash–Sutcliffe efficiency coefficient (NSE), and accuracy (ACC) of Model_soil&atmo were 0.69, 11.11, 4.87, 0.12, 0.50, and 88%, respectively. This study verified that the XGBoost model is applicable to the prediction of soil moisture at the provincial level, as it could reasonably predict the development processes of the typical drought event. Full article
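Three of the six evaluation indicators (RMSE, MAE, NSE) follow standard formulas; a small sketch with made-up humidity values:

```python
import numpy as np

def eval_metrics(obs, sim):
    """RMSE, MAE, and Nash–Sutcliffe efficiency (NSE) — three of the six
    indicators used to score the soil-moisture models (standard formulas)."""
    err = sim - obs
    rmse = float(np.sqrt(np.mean(err ** 2)))
    mae = float(np.mean(np.abs(err)))
    nse = float(1 - np.sum(err ** 2) / np.sum((obs - obs.mean()) ** 2))
    return rmse, mae, nse

obs = np.array([50.0, 60.0, 70.0, 80.0])   # observed relative humidity (%)
sim = np.array([52.0, 58.0, 71.0, 79.0])   # model predictions
rmse, mae, nse = eval_metrics(obs, sim)
```

NSE compares the model against simply predicting the observed mean: 1 is perfect, 0 is no better than the mean.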

18 pages, 10070 KiB  
Article
Design and Experiment of a Visual Detection System for Zanthoxylum-Harvesting Robot Based on Improved YOLOv5 Model
by Jinkai Guo, Xiao Xiao, Jianchi Miao, Bingquan Tian, Jing Zhao and Yubin Lan
Agriculture 2023, 13(4), 821; https://doi.org/10.3390/agriculture13040821 - 31 Mar 2023
Cited by 6 | Viewed by 2396
Abstract
In order to achieve accurate detection of mature Zanthoxylum in their natural environment, a Zanthoxylum detection network based on the YOLOv5 object detection model was proposed. It addresses the issues of irregular shape and occlusion caused by the growth of Zanthoxylum on trees and the overlapping of Zanthoxylum branches and leaves with the fruits, which affect the accuracy of Zanthoxylum detection. To improve the model’s generalization ability, data augmentation was performed using different methods. To enhance the directionality of feature extraction and enable the convolution kernel to be adjusted according to the actual shape of each Zanthoxylum cluster, the coordinate attention module and the deformable convolution module were integrated into the YOLOv5 network. Through ablation experiments, the impacts of the attention mechanism and deformable convolution on the performance of YOLOv5 were compared. Comparisons were made using the Faster R-CNN, SSD, and CenterNet algorithms. A Zanthoxylum harvesting robot vision detection platform was built, and the visual detection system was tested. The experimental results showed that using the improved YOLOv5 model, as compared to the original YOLOv5 network, the average detection accuracy for Zanthoxylum in its natural environment was increased by 4.6% and 6.9% in terms of mAP@0.5 and mAP@0.5:0.95, respectively, showing a significant advantage over other network models. At the same time, on the test set of Zanthoxylum with occlusions, the improved model showed increased mAP@0.5 and mAP@0.5:0.95 by 5.4% and 4.7%, respectively, compared to the original model. The improved model was tested on a mobile picking platform, and the results showed that the model was able to accurately identify mature Zanthoxylum in its natural environment at a detection speed of about 89.3 frames per second. 
This research provides technical support for the visual detection system of intelligent Zanthoxylum-harvesting robots. Full article
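The mAP@0.5 figures above rest on an intersection-over-union (IoU) overlap test between predicted and ground-truth boxes; a minimal sketch (generic definition, illustrative boxes):

```python
def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes — the overlap
    test underlying the mAP@0.5 threshold reported above (generic sketch)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

# Two half-overlapping 10x10 boxes: IoU = 50 / 150 = 1/3, below the 0.5
# threshold, so this prediction would count as a miss at mAP@0.5.
score = iou((0, 0, 10, 10), (5, 0, 15, 10))
```

mAP@0.5:0.95 simply averages the resulting precision over IoU thresholds from 0.5 to 0.95 in steps of 0.05.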

16 pages, 7705 KiB  
Article
Multi-Modal Late Fusion Rice Seed Variety Classification Based on an Improved Voting Method
by Xinyi He, Qiyang Cai, Xiuguo Zou, Hua Li, Xuebin Feng, Wenqing Yin and Yan Qian
Agriculture 2023, 13(3), 597; https://doi.org/10.3390/agriculture13030597 - 1 Mar 2023
Cited by 5 | Viewed by 2414
Abstract
Rice seed variety purity, an important index for measuring rice seed quality, has a great impact on the germination rate, yield, and quality of the final agricultural products. To classify rice varieties more efficiently and accurately, this study proposes a multimodal late fusion detection method based on an improved voting method. The experiment collected eight common rice seed types. Raytrix light field cameras were used to collect 2D images and 3D point cloud datasets, with a total of 3194 samples. The training and test sets were divided according to an 8:2 ratio. The experiment improved the traditional voting method. First, multiple models were used to predict the rice seed varieties. Then, the predicted probabilities were used as the late fusion input data. Next, a comprehensive score vector was calculated based on the performance of different models. In late fusion, the predicted probabilities from 2D and 3D were jointly weighted to obtain the final predicted probability. Finally, the predicted value with the highest probability was selected as the final value. In the experimental results, after late fusion of the predicted probabilities, the average accuracy rate reached 97.4%. Compared with the single support vector machine (SVM), k-nearest neighbors (kNN), convolutional neural network (CNN), MobileNet, and PointNet, the accuracy rates increased by 4.9%, 8.3%, 18.1%, 8.3%, and 9%, respectively. Among the eight varieties, the recognition accuracy of two rice varieties, Hannuo35 and Yuanhan35, by applying the voting method improved most significantly, from 73.9% and 77.7% in two dimensions to 92.4% and 96.3%, respectively. Thus, the improved voting method can combine the advantages of different data modalities and significantly improve the final prediction results. Full article
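The weighted late-fusion step can be sketched as follows (the 2D/3D probabilities and weights are invented for illustration; the paper derives its weights from per-model performance scores):

```python
import numpy as np

def late_fusion(p2d, p3d, w2d, w3d):
    """Weighted late fusion of 2D-image and 3D-point-cloud class
    probabilities, in the spirit of the improved voting method (the weights
    here are illustrative, not the paper's learned scores)."""
    fused = w2d * p2d + w3d * p3d          # element-wise weighted sum
    return fused / fused.sum()             # renormalize to a distribution

p2d = np.array([0.6, 0.3, 0.1])            # 2D model output, e.g. MobileNet
p3d = np.array([0.2, 0.7, 0.1])            # 3D model output, e.g. PointNet
fused = late_fusion(p2d, p3d, w2d=0.4, w3d=0.6)
pred = int(np.argmax(fused))               # the fused winner, here class 1
```

Note how the fused prediction can disagree with the 2D model alone when the 3D model is both confident and more heavily weighted.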

16 pages, 2413 KiB  
Article
Research on Winter Wheat Growth Stages Recognition Based on Mobile Edge Computing
by Yong Li, Hebing Liu, Jialing Wei, Xinming Ma, Guang Zheng and Lei Xi
Agriculture 2023, 13(3), 534; https://doi.org/10.3390/agriculture13030534 - 23 Feb 2023
Cited by 5 | Viewed by 2158
Abstract
The application of deep learning (DL) technology to the identification of crop growth processes will become the trend of smart agriculture. However, using DL to identify wheat growth stages on mobile devices entails high battery energy consumption, significantly reducing the device’s operating time. Meanwhile, implementing a DL framework on a remote server may result in low-quality service and wireless network delays. Thus, the DL method should be suitable for detecting wheat growth stages and implementable on mobile devices. To address the shortcomings of high energy consumption and long computing times, a lightweight DL-based wheat growth stage detection model with low computational complexity and a short computing delay is proposed, together with a wheat growth period recognition model and a dynamic migration algorithm based on deep reinforcement learning. The experimental results show that the proposed dynamic migration algorithm has 128.4% lower energy consumption and 121.2% higher efficiency than the local implementation at a wireless network data transmission rate of 0–8 MB/s. Full article
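The offloading trade-off behind the dynamic migration algorithm can be caricatured with a simple energy comparison (all parameters hypothetical; the paper's algorithm uses deep reinforcement learning rather than this fixed rule):

```python
def should_offload(input_mb, rate_mbps, tx_power_w, local_time_s, local_power_w):
    """Toy offloading rule for the edge-computing trade-off described above:
    migrate the DL inference when transmitting the input costs less energy
    than computing locally (all parameters are hypothetical)."""
    if rate_mbps <= 0:
        return False                                  # no link: compute locally
    tx_energy = tx_power_w * (input_mb / rate_mbps)   # watts * seconds = joules
    local_energy = local_power_w * local_time_s
    return tx_energy < local_energy

# A 4 MB image over a 2 MB/s link at 1 W costs 2 J to send, versus
# 6 J (2 s at 3 W) to run the model on the phone, so offloading wins.
decision = should_offload(4, 2, 1.0, 2.0, 3.0)
```

As the transmission rate drops toward zero, the rule flips back to local execution, which is exactly the regime the 0–8 MB/s experiments sweep over.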

15 pages, 4676 KiB  
Article
Global Reconstruction Method of Maize Population at Seedling Stage Based on Kinect Sensor
by Naimin Xu, Guoxiang Sun, Yuhao Bai, Xinzhu Zhou, Jiaqi Cai and Yinfeng Huang
Agriculture 2023, 13(2), 348; https://doi.org/10.3390/agriculture13020348 - 31 Jan 2023
Cited by 4 | Viewed by 1890
Abstract
Automatic plant phenotype measurement technology based on the rapid and accurate reconstruction of maize structures at the seedling stage is essential for the early variety selection, cultivation, and scientific management of maize. Manual measurement is time-consuming, laborious, and error-prone. The lack of mobility of large equipment in the field makes the high-throughput detection of maize plant phenotypes challenging. Therefore, a global 3D reconstruction algorithm was proposed for the high-throughput detection of maize phenotypic traits. First, a self-propelled mobile platform was used to automatically collect three-dimensional point clouds of maize seedling populations from multiple measurement points and perspectives. Second, the Harris corner detection algorithm and singular value decomposition (SVD) were used to pre-calibrate the multi-view alignment matrix at a single measurement point. Finally, the multi-view registration algorithm and the iterative closest point (ICP) algorithm were used for the global 3D reconstruction of the maize seedling population. The results showed that the R2 of the plant height and maximum width measured by the global 3D reconstruction of the seedling maize population were 0.98 and 0.99 with RMSEs of 1.39 cm and 1.45 cm and mean absolute percentage errors (MAPEs) of 1.92% and 2.29%, respectively. For the standard sphere, the percentage of the Hausdorff distance set of reconstruction point clouds less than 0.5 cm was 55.26%, and the percentage was 76.88% for those less than 0.8 cm. The method proposed in this study provides a reference for the global reconstruction and phenotypic measurement of crop populations at the seedling stage, which aids in the early management of maize with precision and intelligence. Full article
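The SVD-based alignment at the core of both the pre-calibration and each ICP iteration is the standard Kabsch procedure; a compact sketch with a synthetic 90° rotation:

```python
import numpy as np

def svd_align(src, dst):
    """Rigid alignment of two point sets via SVD (the Kabsch step used in
    pre-calibration and inside each ICP iteration); returns R, t such that
    dst ≈ src @ R.T + t."""
    cs, cd = src.mean(0), dst.mean(0)          # centroids
    H = (src - cs).T @ (dst - cd)              # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                   # avoid a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = cd - R @ cs
    return R, t

src = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.], [0., 0., 1.]])
Rz = np.array([[0., -1., 0.], [1., 0., 0.], [0., 0., 1.]])  # 90° about z
dst = src @ Rz.T + np.array([2., 0., 0.])      # rotate, then shift 2 cm in x
R, t = svd_align(src, dst)                     # recovers Rz and the shift
```

Inside ICP, this step alternates with re-estimating nearest-neighbor correspondences until the registration converges.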

19 pages, 5386 KiB  
Article
A Cascaded Individual Cow Identification Method Based on DeepOtsu and EfficientNet
by Ruihong Zhang, Jiangtao Ji, Kaixuan Zhao, Jinjin Wang, Meng Zhang and Meijia Wang
Agriculture 2023, 13(2), 279; https://doi.org/10.3390/agriculture13020279 - 23 Jan 2023
Cited by 10 | Viewed by 2847
Abstract
Precision dairy farming technology is widely used to improve the management efficiency and reduce cost in large-scale dairy farms. Machine vision systems are non-contact technologies to obtain individual and behavioral information from animals. However, the accuracy of image-based individual identification of dairy cows is still inadequate, which limits the application of machine vision technologies in large-scale dairy farms. There are three key problems in dairy cattle identification based on images and biometrics: (1) the biometrics of different dairy cattle may be similar; (2) the complex shooting environment leads to the instability of image quality; and (3) for the end-to-end identification method, the identity of each cow corresponds to a pattern, and the increase in the number of cows will lead to a rapid increase in the number of outputs and parameters of the identification model. To solve the above problems, this paper proposes a cascaded individual cow identification method based on DeepOtsu and EfficientNet, which achieves a breakthrough in group-level identification accuracy and speed through binarization and cascaded classification of cow body pattern images. The specific implementation steps of the proposed method are as follows. First, the YOLOX model was used to locate the trunk of the cow in the side-looking walking image to obtain the body pattern image, and then, the DeepOtsu model was used to binarize the body pattern image. After that, primary classification was carried out according to the proportion of black pixels in the binary image; then, for each subcategory obtained by the primary classification, the EfficientNet-B1 model was used for secondary classification to achieve accurate and rapid identification of dairy cows. A total of 11,800 side-looking walking images of 118 cows were used to construct the dataset; the training set, validation set, and test set were constructed at a ratio of 5:3:2. 
The test results showed that the binarization segmentation accuracy of the body pattern image is 0.932, and the overall identification accuracy of the individual cow identification method is 0.985. The total processing time of a single image is 0.433 s. The proposed method outperforms the end-to-end dairy individual cow identification method in terms of efficiency and training speed. This study provides a new method for the identification of individual dairy cattle in large-scale dairy farms. Full article
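The primary classification by black-pixel proportion can be sketched as follows (the bin count is an assumption, and the DeepOtsu binarization is replaced here by a ready-made binary mask):

```python
import numpy as np

def black_ratio_bin(binary_img, n_bins=5):
    """Primary-classification step of the cascade: bucket a binarized body
    pattern image by its black-pixel proportion (bin count is an assumption;
    the paper's DeepOtsu binarization is replaced by a precomputed mask)."""
    ratio = float((binary_img == 0).mean())      # proportion of black pixels
    return min(int(ratio * n_bins), n_bins - 1)  # bin index 0..n_bins-1

img = np.ones((10, 10), dtype=np.uint8)
img[:, :3] = 0                                   # 30% black body pattern
bucket = black_ratio_bin(img)                    # lands in bin 1 of 5
```

Bucketing first means each secondary EfficientNet classifier only has to separate cows with broadly similar coat coverage, which keeps the per-model output count small as the herd grows.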

17 pages, 2065 KiB  
Article
Utility of Deep Learning Algorithms in Initial Flowering Period Prediction Models
by Guanjie Jiao, Xiawei Shentu, Xiaochen Zhu, Wenbo Song, Yujia Song and Kexuan Yang
Agriculture 2022, 12(12), 2161; https://doi.org/10.3390/agriculture12122161 - 15 Dec 2022
Cited by 2 | Viewed by 1950
Abstract
Deep learning (DL) algorithms can more accurately predict the initial flowering period of Platycladus orientalis (L.) Franco. In this research, we applied DL to establish a nationwide long-term prediction model of the initial flowering period of P. orientalis and analyzed the contribution rates of meteorological factors via Shapley Additive Explanations (SHAP). Based on daily meteorological data from major meteorological stations in China from 1963 to 2015 and initial flowering observations from 23 phenological stations, we established prediction models using a recurrent neural network (RNN), long short-term memory (LSTM), and a gated recurrent unit (GRU). The mean absolute error (MAE), mean absolute percentage error (MAPE), and coefficient of determination (R2) were used as training-effect indicators to evaluate prediction accuracy. The simulation results show that all three models are applicable to nationwide prediction of the initial flowering of P. orientalis in China: the GRU achieved the highest average accuracy, followed by the LSTM and the RNN, and all three were significantly more accurate than a regression model based on accumulated air temperature. In the interpretability analysis, the factor contribution rates of the three models were similar. The 46 temperature-type factors had the highest contribution rates: 58.6% of them contributed positively, with an average contribution rate of 5.48 × 10−4. The contribution rates of factors related to the daily minimum temperature fluctuated noticeably, with an average standard deviation of 8.57 × 10−3, which might be related to the plants' sensitivity to low-temperature stress.
The GRU model can accurately predict the pattern of initial flowering, with an average accuracy greater than 98%, and produced the best simulation results of the three models, indicating that the GRU model holds strong potential for initial flowering prediction. Full article
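The three training-effect indicators named above (MAE, MAPE, R2) can be sketched as plain NumPy functions; these are the standard definitions, not code from the paper:

```python
import numpy as np

def mae(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    """Mean absolute error, e.g. in days of the predicted flowering date."""
    return float(np.mean(np.abs(y_true - y_pred)))

def mape(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    """Mean absolute percentage error (%)."""
    return float(np.mean(np.abs((y_true - y_pred) / y_true)) * 100)

def r2(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    """Coefficient of determination: 1 - residual / total sum of squares."""
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - np.mean(y_true)) ** 2)
    return float(1 - ss_res / ss_tot)
```

An "average accuracy greater than 98%" in this setting corresponds to a MAPE below 2% on the predicted day-of-year values.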

12 pages, 3185 KiB  
Article
Research on Laying Hens Feeding Behavior Detection and Model Visualization Based on Convolutional Neural Network
by Hongyun Hao, Peng Fang, Wei Jiang, Xianqiu Sun, Liangju Wang and Hongying Wang
Agriculture 2022, 12(12), 2141; https://doi.org/10.3390/agriculture12122141 - 13 Dec 2022
Cited by 4 | Viewed by 2638
Abstract
The feeding behavior of laying hens is closely related to their health and welfare status. In large-scale breeding farms, monitoring the feeding behavior of hens can effectively improve production management. However, manual monitoring is not only time-consuming but also reduces the welfare level of breeding staff. To automatically track the feeding behavior of laying hens in stacked-cage laying houses, a feeding behavior detection network was constructed based on the Faster R-CNN network, characterized by the fusion of a 101-layer residual network (ResNet101) and a Path Aggregation Network (PAN) for feature extraction, and an Intersection over Union (IoU) loss function for bounding box regression. Ablation experiments showed that the improved Faster R-CNN model raised precision, recall, and F1-score from 84.40%, 72.67%, and 0.781 to 90.12%, 79.14%, and 0.843, respectively, enabling accurate detection of the feeding behavior of laying hens. To understand the internal mechanism of the detection model, the convolutional kernel features and the feature maps output by the convolutional layers at each stage of the network were then visualized, in an attempt to decipher the mechanisms within the Convolutional Neural Network (CNN) and provide a theoretical basis for optimizing laying hen behavior recognition networks. Full article
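The F1-scores quoted above follow directly from the reported precision and recall, which can be checked with a one-liner:

```python
def f1_score(precision: float, recall: float) -> float:
    """F1 is the harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# Baseline Faster R-CNN: P = 84.40%, R = 72.67%  ->  F1 = 0.781
# Improved model:        P = 90.12%, R = 79.14%  ->  F1 = 0.843
```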

16 pages, 3839 KiB  
Article
Frequency-Enhanced Channel-Spatial Attention Module for Grain Pests Classification
by Junwei Yu, Yi Shen, Nan Liu and Quan Pan
Agriculture 2022, 12(12), 2046; https://doi.org/10.3390/agriculture12122046 - 29 Nov 2022
Cited by 6 | Viewed by 2745
Abstract
For grain storage and protection, grain pest species recognition and population density estimation are of great significance. With the rapid development of deep learning technology, many studies have shown that convolutional neural network (CNN)-based methods perform extremely well in image classification. However, studies on grain pest classification are still limited in two respects. First, there is no high-quality dataset of the primary insect pests specified by standard ISO 6322-3 and the Chinese Technical Criterion for Grain and Oil-seeds Storage (GB/T 29890); images of realistic storage scenes pose great challenges for grain pest identification because of small objects, varying pest shapes, and cluttered backgrounds. Second, existing studies mostly use channel or spatial attention mechanisms, so useful information in other domains has not been fully utilized. To address these limitations, we collected a dataset named GP10, which consists of 1082 primary insect pest images across 10 species. Moreover, we incorporated the discrete wavelet transform (DWT) into a convolutional neural network to construct a novel triple-attention network (FcsNet) combining frequency, channel, and spatial attention modules. We then compared the network's performance and parameters against several state-of-the-art networks based on different attention mechanisms. Evaluated on our dataset GP10 and the open dataset D0, the proposed network achieved classification accuracies of 73.79% and 98.16%, respectively, obtaining more than 3% accuracy gain on the challenging GP10 dataset with only a slight increase in parameters and computational operations. Visualization with gradient-weighted class activation mapping (Grad-CAM) demonstrates that FcsNet has comparative advantages in image classification tasks. Full article
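A one-level 2D Haar DWT of the kind a frequency attention module builds on can be sketched as follows. The averaging normalization (rather than the orthonormal 1/√2 factor) and the sub-band naming are illustrative choices; the paper's exact formulation may differ:

```python
import numpy as np

def haar_dwt2(x: np.ndarray):
    """One-level 2D Haar DWT: splits a feature map into a low-frequency
    approximation (LL) and three high-frequency detail sub-bands
    (LH, HL, HH), each half the spatial size of the input."""
    a, b = x[0::2, :], x[1::2, :]          # pair adjacent rows
    lo, hi = (a + b) / 2, (a - b) / 2      # vertical low / high pass
    ll = (lo[:, 0::2] + lo[:, 1::2]) / 2   # low-low (approximation)
    lh = (lo[:, 0::2] - lo[:, 1::2]) / 2   # horizontal detail
    hl = (hi[:, 0::2] + hi[:, 1::2]) / 2   # vertical detail
    hh = (hi[:, 0::2] - hi[:, 1::2]) / 2   # diagonal detail
    return ll, lh, hl, hh
```

A frequency attention module can then weight channels using statistics of these sub-bands instead of (or alongside) plain global average pooling, which retains only the lowest-frequency component.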

Review

26 pages, 2151 KiB  
Review
The Path to Smart Farming: Innovations and Opportunities in Precision Agriculture
by E. M. B. M. Karunathilake, Anh Tuan Le, Seong Heo, Yong Suk Chung and Sheikh Mansoor
Agriculture 2023, 13(8), 1593; https://doi.org/10.3390/agriculture13081593 - 11 Aug 2023
Cited by 146 | Viewed by 57703
Abstract
Precision agriculture is a farming approach that employs cutting-edge technologies and data analysis to maximize crop yields, cut waste, and increase productivity while reducing adverse impacts on the environment. It is a promising strategy for tackling some of the major issues confronting contemporary agriculture, such as feeding a growing world population while limiting environmental effects. This review article provides an overview of recent innovations, challenges, and future prospects in precision agriculture and smart farming, including the Internet of Things (IoT) and the use of big data. It presents an analysis of the current state of precision agriculture, covering the most recent technological innovations such as drones, sensors, and machine learning, and discusses some of the main challenges the field faces, including data management, technology adoption, and cost-effectiveness. Full article
