Artificial Intelligence Tools to Optimize Livestock Production

A special issue of Animals (ISSN 2076-2615). This special issue belongs to the section "Animal System and Management".

Deadline for manuscript submissions: closed (30 October 2023) | Viewed by 14616

Special Issue Editors


Guest Editor
Department of Management, Development and Technology, School of Science and Engineering, São Paulo State University (UNESP), Av. Domingos da Costa Lopes, 780., Tupã 17602-496, SP, Brazil
Interests: poultry farming; heat stress; animal welfare assessment; image processing; computer vision; machine learning

Guest Editor
Graduate Program in Production Engineering, Universidade Paulista, 1212 Dr. Bacelar Street, São Paulo 04026-002, Brazil
Interests: precision livestock farming; image analysis; broiler supply chain; pig farming

Guest Editor
Mechanics of Biosystems Engineering Department, Faculty of Agricultural Engineering and Rural Development, Agricultural Sciences and Natural Resources University of Khuzestan, Mollasani, Iran
Interests: machine learning; mechatronics; smart livestock farming

Special Issue Information

Dear Colleagues,

Artificial intelligence (AI) in agriculture aims to bring the revolution of robotics and automation to animal protein production and supply chains. The goal is to relieve human labor from tedious and monotonous tasks and to find cost-effective methods that give the world a sustainable path to future food security.

Livestock production is under high pressure to further mechanize on-farm operations, and there is growing resistance to performing physically challenging labor. In stark contrast, yields of meat, milk, and eggs must keep increasing to feed the world population. We believe AI is the decisive answer to this food production and supply chain challenge.

In this Special Issue, we hope to bring together articles exploring the front line of AI applications in animal production systems and supply chains, under both field and laboratory conditions, while also expanding current knowledge about animal welfare.

Prof. Dr. Danilo Florentino Pereira
Prof. Dr. Irenilza de Alencar Nääs
Dr. Saman Abdanan Mehdizadeh
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Animals is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • robotics
  • soft computing
  • digital image processing
  • deep learning
  • computer vision
  • non-invasive methods

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (6 papers)


Research

20 pages, 5394 KiB  
Article
Improving Known–Unknown Cattle’s Face Recognition for Smart Livestock Farm Management
by Yao Meng, Sook Yoon, Shujie Han, Alvaro Fuentes, Jongbin Park, Yongchae Jeong and Dong Sun Park
Animals 2023, 13(22), 3588; https://doi.org/10.3390/ani13223588 - 20 Nov 2023
Cited by 3 | Viewed by 1718
Abstract
Accurate identification of individual cattle is of paramount importance in precision livestock farming, enabling the monitoring of cattle behavior, disease prevention, and enhanced animal welfare. Unlike human faces, the faces of most Hanwoo cattle, a native breed of Korea, exhibit significant similarities and have the same body color, posing a substantial challenge in accurately distinguishing between individual cattle. In this study, we sought to extend the closed-set scope (only including identifying known individuals) to a more-adaptable open-set recognition scenario (identifying both known and unknown individuals) termed Cattle’s Face Open-Set Recognition (CFOSR). By integrating open-set techniques to enhance the closed-set accuracy, the proposed method simultaneously addresses the open-set scenario. In CFOSR, the objective is to develop a trained model capable of accurately identifying known individuals, while effectively handling unknown or novel individuals, even in cases where the model has been trained solely on known individuals. To address this challenge, we propose a novel approach that integrates Adversarial Reciprocal Points Learning (ARPL), a state-of-the-art open-set recognition method, with the effectiveness of Additive Margin Softmax loss (AM-Softmax). ARPL was leveraged to mitigate the overlap between spaces of known and unknown or unregistered cattle. At the same time, AM-Softmax was chosen over the conventional Cross-Entropy loss (CE) to classify known individuals. The empirical results obtained from a real-world dataset demonstrated the effectiveness of the ARPL and AM-Softmax techniques in achieving both intra-class compactness and inter-class separability. Notably, the results of the open-set recognition and closed-set recognition validated the superior performance of our proposed method compared to existing algorithms. 
To be more precise, our method achieved an AUROC of 91.84 and an OSCR of 87.85 in the context of open-set recognition on a complex dataset. Simultaneously, it demonstrated an accuracy of 94.46 for closed-set recognition. We believe that our study provides a novel vision to improve the classification accuracy of the closed set. Simultaneously, it holds the potential to significantly contribute to herd monitoring and inventory management, especially in scenarios involving the presence of unknown or novel cattle.
(This article belongs to the Special Issue Artificial Intelligence Tools to Optimize Livestock Production)
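As background for the abstract above: the AM-Softmax loss the authors adopt in place of cross-entropy subtracts an additive margin from the true class's cosine logit before the softmax, which encourages intra-class compactness. A minimal NumPy sketch for a single sample (the scale `s` and margin `m` values here are illustrative, not the paper's hyperparameters):

```python
import numpy as np

def am_softmax_loss(cos_theta, label, s=30.0, m=0.35):
    """Additive Margin Softmax loss for one sample.

    cos_theta : 1-D array of cosine similarities between the embedding
                and each class weight vector (both assumed L2-normalized).
    label     : index of the true class.
    s, m      : scale and additive margin hyperparameters (illustrative).
    """
    logits = s * cos_theta.astype(float)
    logits[label] = s * (cos_theta[label] - m)  # penalize the true class by the margin
    logits -= logits.max()                      # numerically stable log-softmax
    log_prob = logits[label] - np.log(np.exp(logits).sum())
    return -log_prob
```

Setting `m=0` recovers the usual scaled softmax cross-entropy; a positive margin makes the loss strictly larger for the same embedding, forcing known classes into tighter, better-separated clusters.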

13 pages, 2726 KiB  
Article
Assessment of Preference Behavior of Layer Hens under Different Light Colors and Temperature Environments in Long-Time Footage Using a Computer Vision System
by Vanessa Kodaira, Allan Lincoln Rodrigues Siriani, Henry Ponti Medeiros, Daniella Jorge De Moura and Danilo Florentino Pereira
Animals 2023, 13(15), 2426; https://doi.org/10.3390/ani13152426 - 27 Jul 2023
Cited by 4 | Viewed by 1619
Abstract
As for all birds, the behavior of chickens is largely determined by environmental conditions. In many production systems, light intensity is low and red feather strains have low contrast with the background, making it impossible to use conventional image segmentation techniques. On the other hand, studies of chicken behavior, even when using video camera resources, depend on human vision to extract the information of interest; and in this case, reduced samples are observed, due to the high cost of time and energy. Our work combined the use of advanced object detection techniques based on the YOLO v4 architecture to locate chickens in low-quality videos, and we automatically extracted information on the location of birds in more than 648 h of footage. We developed an automated system that allows the chickens to transition among three environments with different illuminations equipped with video cameras to monitor the presence of birds in each compartment, and we automatically count the number of birds in each compartment and determine their preference. Our chicken detection algorithm shows a mean average precision of 99.9%, and a manual inspection of the results showed an accuracy of 98.8%. Behavioral analysis results based on bird unrest index and permanence time indicate that chickens tend to prefer white light and disfavor green light, except in the presence of heat stress when no clear preference can be observed. This study demonstrates the potential of using computer vision techniques with low-resolution, low-cost cameras to monitor chickens in low-light conditions.
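The preference measure described above rests on how long birds remain in each compartment. As a hypothetical illustration (not the authors' exact permanence-time index), per-frame detection counts from the three compartments can be reduced to occupancy fractions like this:

```python
import numpy as np

def permanence_fraction(counts):
    """Fraction of observed bird-frames spent in each compartment.

    counts : array-like of shape (n_frames, n_compartments), the number of
             birds detected in each compartment in every video frame.
    """
    counts = np.asarray(counts, dtype=float)
    totals = counts.sum(axis=0)        # bird-frames accumulated per compartment
    return totals / totals.sum()       # normalize to fractions summing to 1

# hypothetical per-frame counts for (white, green, blue) light compartments
frames = [[5, 1, 2], [6, 0, 2], [5, 1, 2], [6, 0, 2]]
```

With these made-up counts, the first compartment accumulates the largest fraction, which is the kind of evidence the study uses to conclude a preference for white light.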

21 pages, 12651 KiB  
Article
Multiview Monitoring of Individual Cattle Behavior Based on Action Recognition in Closed Barns Using Deep Learning
by Alvaro Fuentes, Shujie Han, Muhammad Fahad Nasir, Jongbin Park, Sook Yoon and Dong Sun Park
Animals 2023, 13(12), 2020; https://doi.org/10.3390/ani13122020 - 17 Jun 2023
Cited by 10 | Viewed by 4157
Abstract
Cattle behavior recognition is essential for monitoring their health and welfare. Existing techniques for behavior recognition in closed barns typically rely on direct observation to detect changes using wearable devices or surveillance cameras. While promising progress has been made in this field, monitoring individual cattle, especially those with similar visual characteristics, remains challenging due to numerous factors such as occlusion, scale variations, and pose changes. Accurate and consistent individual identification over time is therefore essential to overcome these challenges. To address this issue, this paper introduces an approach for multiview monitoring of individual cattle behavior based on action recognition using video data. The proposed system takes an image sequence as input and utilizes a detector to identify hierarchical actions categorized as part and individual actions. These regions of interest are then fed into a tracking and identification mechanism, enabling the system to continuously track each individual in the scene and assign them a unique identification number. By implementing this approach, cattle behavior is continuously monitored, and statistical analysis is conducted to assess changes in behavior in the time domain. The effectiveness of the proposed framework is demonstrated through quantitative and qualitative experimental results obtained from our Hanwoo cattle video database. Overall, this study tackles the challenges encountered in real farm indoor scenarios, capturing spatiotemporal information and enabling automatic recognition of cattle behavior for precision livestock farming.
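A basic building block behind the tracking-and-identification step described above is matching each frame's detections to existing tracks so that every animal keeps a persistent ID. The sketch below uses greedy IoU matching as a simplified stand-in for the paper's mechanism; the box format `(x1, y1, x2, y2)` and the threshold are assumptions:

```python
def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def assign_ids(tracks, detections, next_id, thresh=0.3):
    """Greedily match detections to tracks by IoU; unmatched ones get new IDs."""
    assigned, used = {}, set()
    for det in detections:
        best_id, best_iou = None, thresh
        for tid, box in tracks.items():
            if tid in used:
                continue
            score = iou(box, det)
            if score > best_iou:
                best_id, best_iou = tid, score
        if best_id is None:                  # no track overlaps enough: new animal
            best_id, next_id = next_id, next_id + 1
        used.add(best_id)
        assigned[best_id] = det
    return assigned, next_id
```

Once every detection carries a stable ID from frame to frame, per-individual action labels can be accumulated into the time-domain behavior statistics the abstract mentions.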

19 pages, 7279 KiB  
Article
LAD-RCNN: A Powerful Tool for Livestock Face Detection and Normalization
by Ling Sun, Guiqiong Liu, Huiguo Yang, Xunping Jiang, Junrui Liu, Xu Wang, Han Yang and Shiping Yang
Animals 2023, 13(9), 1446; https://doi.org/10.3390/ani13091446 - 24 Apr 2023
Cited by 2 | Viewed by 2164
Abstract
With the demand for standardized large-scale livestock farming and the development of artificial intelligence technology, a lot of research in the area of animal face detection and face identification has been conducted. However, there are no specialized studies on livestock face normalization, which may significantly reduce the performance of face identification. Keypoint detection technology, which has been widely applied in human face normalization, is not suitable for animal face normalization due to the arbitrary directions of animal face images captured from uncooperative animals. It is necessary to develop a livestock face normalization method that can handle arbitrary face directions. In this study, a lightweight angle detection and region-based convolutional network (LAD-RCNN) was developed, which contains a new rotation angle coding method that can detect the rotation angle and the location of the animal’s face in one stage. LAD-RCNN also includes a series of image enhancement methods to improve its performance. LAD-RCNN has been evaluated on multiple datasets, including a goat dataset and infrared images of goats. Evaluation results show that the average precision of face detection was more than 97%, and the deviations between the detected rotation angle and the ground-truth rotation angle were less than 6.42° on all the test datasets. LAD-RCNN runs very fast and only takes 13.7 ms to process a picture on a single RTX 2080Ti GPU. This shows that LAD-RCNN has excellent performance in livestock face recognition and direction detection, and therefore it is very suitable for livestock face detection and normalization.
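The rotation angle coding in LAD-RCNN is the paper's own contribution; as general background, a common way to regress an arbitrary rotation without the wrap-around discontinuity at 0°/360° is to predict the angle as a (sin, cos) pair and decode it with `atan2`. A hypothetical sketch of that generic scheme (not LAD-RCNN's actual coding):

```python
import math

def encode_angle(deg):
    """Encode a rotation angle as a continuous (sin, cos) pair."""
    rad = math.radians(deg)
    return math.sin(rad), math.cos(rad)

def decode_angle(s, c):
    """Recover the angle in [0, 360) degrees from its (sin, cos) encoding."""
    return math.degrees(math.atan2(s, c)) % 360.0
```

The appeal of such a continuous encoding is that 359° and 1° map to nearby targets, so a regression loss never penalizes a prediction across the wrap-around boundary.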

20 pages, 2216 KiB  
Article
DISubNet: Depthwise Separable Inception Subnetwork for Pig Treatment Classification Using Thermal Data
by Savina Jassica Colaco, Jung Hwan Kim, Alwin Poulose, Suresh Neethirajan and Dong Seog Han
Animals 2023, 13(7), 1184; https://doi.org/10.3390/ani13071184 - 28 Mar 2023
Cited by 6 | Viewed by 1956
Abstract
Thermal imaging is increasingly used in poultry, swine, and dairy animal husbandry to detect disease and distress. In intensive pig production systems, early detection of health and welfare issues is crucial for timely intervention. Using thermal imaging for pig treatment classification can improve animal welfare and promote sustainable pig production. In this paper, we present a depthwise separable inception subnetwork (DISubNet), a lightweight model for classifying four pig treatments. Based on the modified model architecture, we propose two DISubNet versions: DISubNetV1 and DISubNetV2. Our proposed models are compared to other deep learning models commonly employed for image classification. The thermal dataset captured by a forward-looking infrared (FLIR) camera is used to train these models. The experimental results demonstrate that the proposed models for thermal images of various pig treatments outperform other models. In addition, both proposed models achieve approximately 99.96–99.98% classification accuracy with fewer parameters.
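The "fewer parameters" claim above stems from depthwise separable convolutions, which factor a standard convolution into a per-channel spatial convolution plus a 1×1 pointwise convolution. The parameter arithmetic is easy to verify (bias terms ignored; the channel sizes below are illustrative, not DISubNet's actual layer widths):

```python
def conv_params(c_in, c_out, k):
    """Weight count of a standard k x k convolution (bias ignored)."""
    return k * k * c_in * c_out

def depthwise_separable_params(c_in, c_out, k):
    """Weight count of a depthwise k x k conv plus a 1 x 1 pointwise conv."""
    return k * k * c_in + c_in * c_out
```

For a 3×3 layer with 64 input and 128 output channels, the factored form uses 8768 weights instead of 73,728, roughly an 8× reduction, which is why such blocks make a model lightweight at little cost in accuracy.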

14 pages, 2036 KiB  
Article
On the Development of a Wearable Animal Monitor
by Luís Fonseca, Daniel Corujo, William Xavier and Pedro Gonçalves
Animals 2023, 13(1), 120; https://doi.org/10.3390/ani13010120 - 28 Dec 2022
Cited by 5 | Viewed by 1861
Abstract
Animal monitoring is a task traditionally performed by pastoralists, as a way of ensuring the safety and well-being of animals; a tremendously arduous and lonely task, it requires long walks and extended periods of contact with the animals. The Internet of Things and the possibility of applying sensors to different kinds of devices, in particular the use of wearable sensors, has proven not only to be less invasive to the animals, but also to have a low cost and to be quite efficient. The present work analyses the most impactful monitored features in the behavior learning process and their learning results. It especially addresses the impact of a gyroscope, which heavily influences the cost of the collar. Based on the chosen set of sensors, a learning model is subsequently established, and the learning outcomes are analyzed. Finally, the animal behavior prediction capability of the learning model (which was based on the sensed data of adult animals) is additionally evaluated in a scenario featuring younger animals. Results suggest that not only is it possible to accurately classify these behaviors (with a balanced accuracy around 91%), but that removing the gyroscope can be advantageous. Results additionally show a positive contribution of the thermometer in behavior identification but evidence the need for further confirmation in future work, considering different seasons of different years and scenarios including more diverse animals’ behavior.
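Wearable behavior classifiers like the one above typically operate on summary features computed over fixed-length windows of the sensor streams rather than on raw samples. A hypothetical sketch of such a feature extractor for a tri-axial accelerometer (the window length and the feature set are assumptions, not the paper's exact pipeline):

```python
import math

def window_features(ax, ay, az):
    """Summary features over one window of tri-axial accelerometer samples.

    ax, ay, az : equal-length sequences of per-axis acceleration readings.
    Returns simple statistics of the acceleration magnitude, a common
    orientation-robust input for behavior classifiers.
    """
    mag = [math.sqrt(x * x + y * y + z * z) for x, y, z in zip(ax, ay, az)]
    n = len(mag)
    mean = sum(mag) / n
    var = sum((m - mean) ** 2 for m in mag) / n
    return {"mean": mean, "std": math.sqrt(var), "min": min(mag), "max": max(mag)}
```

Feeding a classifier such magnitude statistics rather than raw axis values is one reason a gyroscope can turn out to be dispensable: much of the movement intensity information is already captured by the accelerometer alone.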