Article

A Novel YOLOv6 Object Detector for Monitoring Piling Behavior of Cage-Free Laying Hens

Department of Poultry Science, College of Agricultural and Environmental Sciences, University of Georgia, Athens, GA 30602, USA
* Author to whom correspondence should be addressed.
AgriEngineering 2023, 5(2), 905-923; https://doi.org/10.3390/agriengineering5020056
Submission received: 24 March 2023 / Revised: 27 April 2023 / Accepted: 30 April 2023 / Published: 12 May 2023
(This article belongs to the Special Issue Implementation of Artificial Intelligence in Agriculture)

Abstract
Piling behavior (PB) is a common issue that negatively impacts the health, welfare, and productivity of the flock in poultry houses (e.g., cage-free layer, breeder, and broiler houses). Birds pile on top of each other, and the weight of the birds can cause physical injuries, such as bruising or suffocation, and may even result in death. In addition, PB can cause stress and anxiety in the birds, leading to reduced immune function and increased susceptibility to disease. Piling has therefore been reported as one of the most concerning production issues in cage-free layer houses. Several strategies (e.g., adequate space, environmental enrichment, and genetic selection) have been proposed to prevent or mitigate PB in laying hens, but little scientific information is available so far on controlling it. The current study aimed to develop and test the performance of a novel deep-learning model for detecting PB and to evaluate its effectiveness in four cage-free (CF) laying hen facilities. To achieve this goal, the study utilized different versions of the YOLOv6 model (i.e., YOLOv6t, YOLOv6n, YOLOv6s, YOLOv6m, YOLOv6l, and YOLOv6l relu). The objectives of this study were to develop a reliable and efficient deep-learning tool for detecting PB in commercial egg-laying facilities and to test the performance of the new models in research cage-free facilities. The study used a dataset of 9000 images (6300 for training, 1800 for validation, and 900 for testing). The results show that the YOLOv6l relu-PB model performs exceptionally well, with a higher average recall (70.6%), mAP@0.50 (98.9%), and mAP@0.50:0.95 (63.7%) than the other models. In addition, detection performance increases when the camera is placed close to the PB areas. Thus, the newly developed YOLOv6l relu-PB model demonstrated superior performance in detecting PB in the given dataset compared to the other tested models.

1. Introduction

Piling behavior (PB) is a common issue that can adversely affect the welfare, productivity, and overall health of the flock in any housing, including breeder, broiler, and cage-free layer facilities. Poultry piling is a phenomenon where birds densely cluster together, often resulting in birds being piled on top of one another [1,2]. Piling one over another can result in birds becoming trapped, which can lead to suffocation and death [2,3]. In Australian free-range and cage-free laying hen flocks, PB accounts for up to 40% of mortality [4]. The location and timing of smothering tend to be unpredictable and may vary between farms. According to surveys, over 50% of free-range or cage-free farms in the United Kingdom (UK) have reported smothering at some point in their flocks [5]. The UK egg industry is estimated to lose £6.5 million annually due to smothering caused by PB [6]. Overall, PB has been observed primarily in loose-housed layer flocks and is a significant animal welfare and economic concern for producers and the egg-laying industry [2,3,5,7,8].
PB in laying hens is considered an animal welfare issue because it can negatively impact the birds’ physical and psychological well-being, resulting in stress, overheating, injuries, feather pecking, and reduced mobility and natural behaviors [3,6]. Increased stress levels in birds result in reduced egg production [3] and egg quality [9,10], decreased immune function [11], and increased susceptibility to disease [3]. Birds piled one over another can overheat, leading to heat stress, suffocation, and even increased mortality. Similarly, overcrowding causes physical injuries, such as fractures [3]. In addition, birds piled on top of each other may have limited mobility, leading to muscle atrophy and other health issues [12]. Piling can also prevent birds from accessing feed, water, and other resources. Sometimes, PB can lead to feather pecking [13], increasing the risk of cannibalism in poultry. Piling can also reduce chickens’ ability to express natural behaviors, such as foraging, dust bathing, and socializing with other birds [1,2]. The threshold at which a pile turns into a smothering event is currently unknown [3], and understanding the biological causes of PB is necessary for effective mitigation.
The causes of PB are not well understood, and there is a lack of research in this area. However, several potential factors have been recorded. High stocking density is one of the most common factors contributing to PB [3,5,14]. Hens living in high-density environments may become stressed and develop abnormal behaviors, such as piling. Furthermore, laying hens’ nesting behavior and competition for nest use could lead to PB [14,15,16]. In poultry houses, a social hierarchy can develop, with dominant birds having first access to resources such as food, water, and nest boxes, leading to PB as subordinate birds attempt to access these resources [2,17]. In addition, environmental factors such as lighting, temperature, and ventilation may also influence PB [2,13,14]. For instance, hens may pile up due to low temperatures or poor ventilation. Finally, layer strains can differ in their patterns of nest use and PB, with brown hens mislaying eggs on the floor or grids of an enclosure more often than white hybrids [18,19]. This floor-laying behavior can also cause PB in cage-free layer facilities. Therefore, different prevention strategies may be required to address the multifactorial nature of this issue.
Mitigation strategies for PB in laying hens include increasing space per bird, providing enrichment, such as perches and nest boxes, and reducing flock size [2,3,14,20]. Increasing space per bird [3] and providing perches [2,21] have reduced PB in laying hen houses. Furthermore, providing enrichments (e.g., toys, natural materials, and different feed types) in poultry houses encourages birds to perform natural behaviors and, thus, reduces stress and PB [2,20,22]. Another way to mitigate PB is to manage the social hierarchy by providing additional resources, such as feeders and nest boxes, so that all birds have access to the best resources. In addition, providing nest boxes for hens to lay eggs reduces PB, as it fulfills their innate need for nest-building behavior [15]. Overall, adequate space, environmental enrichment, social hierarchy management, nest boxes, improved ventilation, and light adjustment are all important in mitigating PB. Adequate ventilation maintains a comfortable temperature and humidity level and reduces PB. Hens are photoperiodic animals, meaning their behavior is influenced by the amount and duration of light they receive [2].
Research on PB has focused on identifying potential environmental and management factors that contribute to its occurrence [1,2,3,15]. However, the unpredictability of PB and the disruption caused by the presence of an observer make it challenging to conduct experiments and obtain accurate data in commercial settings [5,8]. Therefore, regularly monitoring the flock to identify any issues contributing to PB is important for maintaining the health and well-being of the birds. PB can signify a more serious underlying issue, such as disease or poor nutrition, and should be addressed accordingly. More in-depth research is needed to fully comprehend the reasons for PB and develop effective strategies to prevent its occurrence. Studies incorporating observational and experimental methods in commercial settings and considering the influence of genetics and individual variation in behavior can provide valuable insights into the underlying causes of PB in laying hens. Thus, early detection of PB with the help of image analysis is required.
Image analysis is a powerful technique that uses cameras to identify the objects present in a given area. One of the most effective methods for object detection is the use of machine learning (ML) algorithms, which have been successful in detecting not only hens [23,24] but also their behaviors [25,26,27,28]. In particular, these algorithms have been developed to measure animal welfare by identifying both comfort and undesired behaviors [27]. For example, a convolutional neural network was used to classify the behaviors of broiler chickens based on images obtained by a depth camera, achieving a high accuracy rate of 99.17% in classifying flock behaviors [29]. Another study used the YOLOv3 algorithm to identify six distinct behaviors in a wire cage system consisting of two pens under varying stocking-density conditions [30]. That study accurately classified behaviors such as mating, standing, feeding, spreading, fighting, and drinking. However, the model’s accuracy was lower in high-density cages due to the occlusion effect among the birds.
High-density housing of hens can lead to overcrowding or PB, which can have negative consequences, such as an increased risk of smothering and significant losses. This risk is particularly high when the birds cluster together in certain areas of their living space. Although YOLOv4 and YOLOv4-tiny have been used to detect behaviors, they often fail to recognize important comfort behaviors in detail [27]. That study focused on detecting overcrowding in hens, which can lead to PB; however, it only examined behaviors such as movement, laying, and dustbathing on the floor and suggested that these behaviors might lead to overcrowding. No detailed research has focused on detecting PB, or on PB under different situations and camera settings. Recently, however, floor egg-laying behavior (FELB) detection research was conducted using the YOLOv5 model and found high performance in detecting FELB [28]. That FELB research is related to PB, as hens gather to lay eggs on the floor, and the researchers noted that daytime PB mostly occurs when hens perform FELB. To decrease floor eggs and FELB, it is important to recognize PB and build the best machine learning detection model.
Improving on the recognition performance achieved for various behaviors with machine learning in the past is a promising direction for detecting PB in hens. Improving detection could involve investigating new algorithms, data pre-processing techniques, or training strategies to enhance the accuracy of behavior recognition. Furthermore, by improving the detection of PB, we could gain a deeper understanding of the welfare of hens and develop effective interventions to improve their living conditions. For example, over the past few years, the YOLO algorithm has successfully identified laying hens on the floor [23,24,25,26,27,28,30] regardless of their activity, which could help control challenging PB during rearing and alert farmers early to potential issues. Therefore, this study used YOLOv6 to detect PB, expanding upon previous research, and to identify the areas where hens frequently pile. The objectives of this study were to (a) develop and test the best PB detection models and (b) compare the performance of deep learning models in research cage-free facilities. In the future, after locating piling areas, producers or researchers can investigate the potential reasons for and issues related to overcrowding in hens.

2. Materials and Methods

This section explains how image data were collected, processed, and used to train a YOLOv6 network for object detection in this study. It is divided into several subsections that describe the housing and management of the animals involved in the study, the methods used to collect image and data samples, the techniques used to process the image data, and the software and hardware used in the analysis. The section also includes a description of the YOLOv6 network architecture and the metrics used to evaluate PB detection accuracy.

2.1. Experimental Housing and Management

This experiment was conducted in four CF research houses at the University of Georgia in Athens, GA, USA (Figure 1). A total of 200 Hy-Line W-36 birds were raised from day 1 to day 300 in each house, and each house measured 7.3 m in length, 6.1 m in width, and 3 m in height. The houses were equipped with lights, perches, nest boxes, feeders, and drinkers, and the floor was covered with pine shavings. The indoor conditions, such as light intensity and duration, ventilation rates, temperature, and relative humidity, were controlled using a Chore-Tronics Model 8 controller; the housing system was described in detail in previous research [23]. This experiment followed the animal care and use guidelines established by the University of Georgia’s Institutional Animal Care and Use Committee (UGA IACUC).

2.2. Image and Data Collections

This study recorded the hens’ behaviors using six night-vision network cameras (PRO-1080MSB, Swann Communications USA Inc., Santa Fe Springs, CA, USA) mounted about 3 m above the litter floor in each room. In addition, two cameras were placed 0.5 m above the ground floor. Data acquisition was performed daily for 24 h, and footage was recorded on a digital video recorder (DVR-4580, Swann Communications USA Inc., Santa Fe Springs, CA, USA). The recorded video files were stored in .avi format at a 1920 × 1080 pixel resolution and a 15 frames per second sampling rate. Data acquisition took place when the birds were 46 to 50 weeks of age.

2.3. Image Processing

The video data were converted into individual image files in .jpg format using the Free Video to JPG Converter app (version 5.0). The resulting images were filtered for PB presence and image quality. To expand the dataset and improve detection accuracy, this study applied techniques such as geometric transformations, brightness and contrast adjustments, and data normalization to the newly obtained images, creating multiple new image datasets with more samples. The images were then labeled using Makesense.AI in the YOLO format. Our findings indicated that implementing these techniques resulted in a notable increase in the final accuracy rate. This study used datasets totaling 9000 images (the PBnighttime, PBdaytime, PBground, and PBceiling image datasets), of which 70% were used for training, 20% for validation, and 10% for testing (Table 1). The different classes compared in this study are illustrated in Figure 2.
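For illustration, the following sketch reproduces the same pipeline steps with OpenCV rather than the converter app used in this study; the file paths and the one-frame-per-second sampling interval are hypothetical choices. Makesense.AI exports YOLO-format labels as one text line per object: a class index followed by the normalized box center coordinates, width, and height.

```python
# Illustrative sketch only (not the tools used in this study): extract .jpg frames
# from an .avi recording and apply the geometric and brightness/contrast
# augmentations described above. Paths and sampling rate are hypothetical.
import os
import cv2

def extract_frames(video_path: str, out_dir: str, every_n: int = 15) -> int:
    """Save every n-th frame as a .jpg image; at 15 fps, every_n=15 is ~1 frame/s."""
    os.makedirs(out_dir, exist_ok=True)
    cap = cv2.VideoCapture(video_path)
    saved = idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % every_n == 0:
            cv2.imwrite(os.path.join(out_dir, f"frame_{idx:06d}.jpg"), frame)
            saved += 1
        idx += 1
    cap.release()
    return saved

def augment(image):
    """Return simple augmented variants of one image (Section 2.3)."""
    flipped = cv2.flip(image, 1)                               # horizontal flip
    adjusted = cv2.convertScaleAbs(image, alpha=1.2, beta=20)  # contrast/brightness
    return [flipped, adjusted]
```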

2.4. YOLOv6 Network Description

YOLOv6 is the latest object detection algorithm developed and released by Meituan in 2022 [31]. YOLOv6 is designed as a single-stage object detection framework, meaning that it uses a single pass through the network to perform both object detection and classification, making it faster and more efficient than multi-stage object detection frameworks [32]. In addition, YOLO models such as YOLOv6 are designed to be hardware efficient, which makes them suitable for industrial applications where real-time object detection is required [33]. Furthermore, YOLOv6 is optimized for GPUs and can run on devices with limited computing resources, making it a popular choice for embedded systems and Internet of Things (IoT) devices. Compared to its predecessor YOLO models, YOLOv6 has improved detection accuracy and inference speed, making it a more suitable choice for object detection tasks [31]. In this study, different YOLOv6-PB models, i.e., YOLOv6n-PB (nano), YOLOv6t-PB (tiny), YOLOv6s-PB (small), YOLOv6m-PB (medium), YOLOv6l-PB (large), and YOLOv6l relu-PB (large relu), were compared for PB detection. These YOLOv6-PB models differ in size and number of parameters. First, the YOLOv6 models were compared on the PBmodel image dataset to identify the best PB detection model. Later, the best model was compared under different camera settings (PBceiling and PBground) and photoperiod (PBnighttime and PBdaytime) conditions. Each model or class was trained for 300 epochs with a batch size of 16; in general, performance metrics improved as the number of epochs increased. A hedged sketch of how such training runs might be launched is given below.
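The sketch below assumes the command-line interface of the meituan/YOLOv6 repository [31]; the flag names follow its README at the time of writing and may differ between releases, and the dataset description file path is a hypothetical placeholder.

```python
# Hedged sketch: launch a 300-epoch, batch-16 training run for each YOLOv6 variant
# compared in this study. Flags assume the meituan/YOLOv6 tools/train.py interface;
# "data/piling.yaml" is a hypothetical dataset description file.
import subprocess

for conf in ["configs/yolov6n.py", "configs/yolov6t.py", "configs/yolov6s.py",
             "configs/yolov6m.py", "configs/yolov6l.py"]:
    subprocess.run([
        "python", "tools/train.py",
        "--conf", conf,                # model variant configuration
        "--data", "data/piling.yaml",  # dataset description (hypothetical path)
        "--epochs", "300",             # 300 epochs, as used in this study
        "--batch", "16",               # batch size 16, as used in this study
        "--device", "0",               # single GPU
    ], check=True)
```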
YOLOv6-PB is a complex neural network architecture consisting of several parts, each of which plays a specific role in object detection (Figure 3). Some of the main parts of YOLOv6-PB are:

2.4.1. Model Input

The pre-processed PB image datasets were fed into the model for making predictions through the input part of the YOLOv6-PB model. Input images and labels were then passed into the neural network, which occurs in the subsequent parts of the YOLOv6-PB model. The size of the input image depends on the YOLOv6-PB network architecture, but it is usually expected to be a fixed size, with 640 × 640 × 3 pixels as the default. Therefore, this study used the default image size for analysis.
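The sketch below shows a minimal letterbox-style resize to the 640 × 640 default input. This is an assumption about the pre-processing step, made for illustration; it is not code from the YOLOv6 repository.

```python
# Minimal letterbox sketch (an assumption for illustration): scale the image to fit
# 640 x 640 while preserving its aspect ratio, then pad the remainder with gray.
import cv2
import numpy as np

def letterbox(image: np.ndarray, size: int = 640, pad_value: int = 114) -> np.ndarray:
    h, w = image.shape[:2]
    scale = size / max(h, w)                      # shrink the longer side to `size`
    resized = cv2.resize(image, (int(w * scale), int(h * scale)))
    canvas = np.full((size, size, 3), pad_value, dtype=np.uint8)
    top = (size - resized.shape[0]) // 2          # center the resized image
    left = (size - resized.shape[1]) // 2
    canvas[top:top + resized.shape[0], left:left + resized.shape[1]] = resized
    return canvas
```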

2.4.2. Model Backbone

The backbone extracts features from the input PB image. In YOLOv6-PB, the backbone network is typically a pre-trained Convolutional Neural Network (CNN) that has been fine-tuned for object detection. The specific architecture of the backbone network in YOLOv6-PB can vary, but it typically consists of several convolutional layers followed by max-pooling layers, which help to reduce the feature map’s spatial dimensions. The convolutional layers detect low-level features in the PB image, such as edges and textures. Spatial pyramid pooling (SPP) helps the max-pooling layers reduce the feature map’s size while maintaining the most important features for object detection [34]. The EfficientRep backbone used in YOLOv6-PB is designed both to use the computational resources of hardware, such as GPUs, effectively and to possess robust feature representation abilities, compared to the CSP-Backbone utilized by YOLOv5 [33]. A simplified sketch of its RepVGG-style building block is shown below.
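As a rough illustration of the RepVGG-style building unit behind EfficientRep [33,36], the sketch below shows the general training-time structure, not the repository's exact implementation: a 3 × 3 branch, a 1 × 1 branch, and an identity branch are summed before the activation, and at inference the branches can be re-parameterized into a single 3 × 3 convolution.

```python
# Simplified RepVGG-style block (sketch; stride-1 case with equal in/out channels).
import torch
import torch.nn as nn

class RepVGGBlockSketch(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        self.conv3x3 = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1, bias=False),
            nn.BatchNorm2d(channels),
        )
        self.conv1x1 = nn.Sequential(
            nn.Conv2d(channels, channels, 1, bias=False),
            nn.BatchNorm2d(channels),
        )
        self.identity = nn.BatchNorm2d(channels)  # identity branch (BN only)
        self.act = nn.ReLU()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # The three branches are summed before the nonlinearity.
        return self.act(self.conv3x3(x) + self.conv1x1(x) + self.identity(x))
```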

2.4.3. Model Neck

The neck connects the backbone network to the rest of the network. It takes the output of the backbone network and performs additional processing to produce the final feature map used for PB object detection. In general, the purpose of the neck in YOLO architectures is to provide intermediate feature maps suitable for the heads to make accurate predictions. Feature maps are often produced through a series of convolutional, pooling, and up-sampling layers that bring the features from the backbone network to the desired scale and resolution for the heads. Regarding its neck design, YOLOv6-PB introduces a more efficient feature fusion network, known as the Rep-PAN neck [35], to improve hardware utilization and the balance between accuracy and speed. This design is based on the hardware-aware neural network architecture concept [36].

2.4.4. Anchor Boxes

Anchor boxes are predefined bounding boxes that represent PB objects in the image. They provide a priori information about the location and size of PB objects in the image. During training, the network learns to adjust the anchor boxes to better fit the PB objects in the image.

2.4.5. Detection Head

The detection head is responsible for predicting the PB objects in the image. It takes the output of the neck network and produces a set of class probabilities and bounding boxes for each targeted PB object in the image [37]. The detection head uses anchor boxes as a starting point and adjusts them to fit the PB objects in the image better. YOLOv6-PB utilizes a decoupled head structure, simplifying the head design while carefully balancing the representation capabilities of the relevant operations with the computational demands on the hardware [33].

2.4.6. Loss Function

The loss function trains the network by measuring the difference between the network’s predictions and the ground truth annotations: the error between the predicted bounding boxes and the ground truth boxes, and the error between the predicted class probabilities and the ground truth PB class labels.
$$\mathrm{Loss} = \lambda_1 L_{cls} + \lambda_2 L_{obj} + \lambda_3 L_{loc}$$
where $L_{cls}$, $L_{obj}$, and $L_{loc}$ represent the class loss, PB object loss, and location (bounding box) loss, respectively, and $\lambda_1$, $\lambda_2$, and $\lambda_3$ are the weighting constants for the respective losses.
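A trivial worked sketch of the equation above; the component losses and λ weights stand in for whatever YOLOv6 computes internally.

```python
# Weighted sum of the three loss components; all values are placeholders.
def total_loss(l_cls: float, l_obj: float, l_loc: float,
               lam1: float = 1.0, lam2: float = 1.0, lam3: float = 1.0) -> float:
    return lam1 * l_cls + lam2 * l_obj + lam3 * l_loc

print(total_loss(0.30, 0.10, 0.25))  # 0.65 with unit weights
```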

2.4.7. Post-Processing

The final step in the PB object detection process is post-processing, which involves refining the predictions made by the network: filtering out low-confidence detections, rescaling the bounding boxes to the original PB image size, and drawing the final PB detections on the image.
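Post-processing in YOLO-family detectors typically combines a confidence filter with non-maximum suppression (NMS). The sketch below illustrates these generic steps under simplifying assumptions (a plain resize rather than letterbox padding); it is not the repository's exact implementation.

```python
# Generic post-processing sketch: confidence filter, NMS, and rescaling boxes in
# (x1, y1, x2, y2) format from the 640 x 640 network input back to the original
# image. Thresholds are illustrative; assumes a plain resize without padding.
import torch
from torchvision.ops import nms

def postprocess(boxes: torch.Tensor, scores: torch.Tensor, orig_w: int, orig_h: int,
                input_size: int = 640, conf_thres: float = 0.25,
                iou_thres: float = 0.45) -> torch.Tensor:
    keep = scores > conf_thres                 # drop low-confidence detections
    boxes, scores = boxes[keep], scores[keep]
    kept = nms(boxes, scores, iou_thres)       # suppress overlapping duplicates
    boxes = boxes[kept].clone()
    boxes[:, [0, 2]] *= orig_w / input_size    # rescale x coordinates
    boxes[:, [1, 3]] *= orig_h / input_size    # rescale y coordinates
    return boxes
```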
Each of these parts of YOLOv6-PB works together to perform object detection in real-time, allowing for the efficient and accurate detection of PB objects in images and videos.

2.5. Computational Parameters

To perform PB detection, a high-performing computational configuration was used. This study used the Oracle cloud with the configurations listed in Table 2 to train, validate, and test the image datasets. A more powerful computational configuration increases the training speed and detection accuracy of the model [26,28].

2.6. Performance Metrics

2.6.1. Precision

This metric measures the fraction of all detections made by the PB object detection system that were correct. It is calculated from all positive detections, i.e., true positives (TP; the image contains PB, and the model predicts it correctly) and false positives (FP; the image does not contain PB, but the model detects PB). The formula for precision is given below.
$$\mathrm{Precision} = \frac{TP}{TP + FP} = \frac{\text{true piling behavior detections}}{\text{all detected bounding boxes}}$$
The positive and negative PB detection outcomes are summarized in the confusion matrix in Figure 4.

2.6.2. Recall

This metric measures the fraction of the total number of PB objects in an image that were correctly detected by the PB object detection system. It is calculated from the TP and false negative (FN; the image contains PB, but the model fails to detect it) detection results obtained from the YOLOv6-PB model.
$$\mathrm{Recall} = \frac{TP}{TP + FN} = \frac{\text{true piling behavior detections}}{\text{all ground truth bounding boxes}}$$

2.6.3. Mean Average Precision

This metric measures the average precision of the PB object detection system over multiple object classes at an IoU threshold of 0.50 (mAP@0.50) or averaged over IoU thresholds from 0.50 to 0.95 (mAP@0.50:0.95). The mAP is calculated as the average of the precision values for each class, considering the number of true positive and false positive PB detections.
$$\mathrm{mAP} = \frac{1}{C}\sum_{i=1}^{C} AP_i$$
where $AP_i$ is the average precision of the $i$th category and $C$ is the total number of categories.
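A small worked example with hypothetical counts ties these definitions together; AP itself is the area under a class's precision-recall curve, which we take as given here.

```python
# Hypothetical counts: 90 correct PB detections, 5 false alarms, 10 missed piles.
def precision(tp: int, fp: int) -> float:
    return tp / (tp + fp)

def recall(tp: int, fn: int) -> float:
    return tp / (tp + fn)

print(precision(90, 5))   # 0.947...
print(recall(90, 10))     # 0.9

# mAP is the mean of the per-class average precisions; this study has one PB class,
# so mAP equals that class's AP (hypothetical value shown).
ap_per_class = [0.989]
print(sum(ap_per_class) / len(ap_per_class))  # 0.989
```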

2.6.4. Intersection over Union

In YOLOv6, the model used the Intersection over Union (IoU) metric to determine whether an object was correctly detected. This metric calculates how much the detected bounding box overlaps with the ground truth bounding box, as given in Equation (5). A threshold value of 0.5 was used in a previous study to determine if a detection was a TP [38]. If the overlap between the detected and ground truth bounding boxes was at least 50%, it was considered a TP. However, if the overlap was less than 50%, the detection was labeled an FN, meaning the object went undetected. FP detections occurred when the model predicted PB where none existed. On the other hand, TN cases occurred when the model correctly avoided making such predictions.
$$\mathrm{IoU} = \frac{\text{Area of overlap}}{\text{Area of union}}$$
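A minimal sketch of the IoU computation and the TP/FN decision rule described above, for two axis-aligned boxes in (x1, y1, x2, y2) pixel coordinates (the example boxes are hypothetical):

```python
def iou(box_a, box_b) -> float:
    """IoU of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)  # overlap area (0 if disjoint)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

detected, truth = (100, 100, 200, 200), (120, 110, 210, 220)
print(iou(detected, truth))         # ~0.567
print(iou(detected, truth) >= 0.5)  # True -> counted as a true positive
```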

3. Results

The study’s findings on hen PB were compared across various models and settings. The results section is therefore divided into three subsections covering the performance comparison of the YOLOv6-PB models and the performance of PB detection under different photoperiods and camera settings. The section provides an overview of how these factors influence PB detection and evaluates the YOLOv6-PB models’ performance in detecting this behavior.

3.1. Performance Comparison of YOLOv6-PB Models

This study compared all YOLOv6-PB models to determine which performs best in detecting PB; the results are shown in Table 3 and Figure 5. Among all YOLOv6-PB models, YOLOv6l relu-PB performed best in terms of performance metrics, with an average recall of 70.6%, mAP@0.50 of 98.9%, mAP@0.75 of 74.6%, and mAP@0.50:0.95 of 63.7%. After the YOLOv6l relu-PB model, YOLOv6n-PB showed a high average recall (69.8%) and an equally high mAP@0.50 (98.9%), with the second-lowest training time (2.04 h) after the YOLOv6t-PB model (2.03 h). In contrast, YOLOv6t-PB performed worst, with 67.6% average recall, 67.3% mAP@0.75, and 60.7% mAP@0.50:0.95. Furthermore, the time to train on 2100 labeled images for 300 epochs at batch size 16 increased from the smaller to the bigger YOLOv6 models, because the bigger models consume more time to perform and accurately detect PB, as shown in Table 3. Thus, YOLOv6t-PB and YOLOv6n-PB were the fastest to train on 2100 images and validate on 600 images (about 2.0 h), while YOLOv6l relu-PB was the slowest in training and validation (4.24 h). Judged on training time alone, YOLOv6n-PB is a fast and accurate PB detector; however, the other performance metrics matter more than training time. Thus, YOLOv6l relu-PB outperforms the other models and can be used in the future to detect PB, which ultimately helps to find the actual causes of PB so that it can be reduced in time. Since YOLOv6l relu-PB performed best, we used this model for the comparisons across photoperiods and camera settings.
In Figure 6, the YOLOv6-PB models were compared on the training and validation datasets, with each model’s performance plotted at each epoch. The models’ performance metrics are close; however, when the curves are examined closely and a non-parametric statistical analysis is run, YOLOv6l relu outperforms the others in mAP@0.50:0.95. When comparing mAP@0.50, the differences were not significant at the 0.05 level. Although there is no significant difference among the models, we favor the YOLO model with the highest mAP value, because every percentage increase in object detection matters.

3.2. Performance of Piling Behavior under Different Photoperiods

The YOLOv6l relu-PB model was used to compare PB during nighttime and daytime, and PB detection was highest during nighttime, with an average recall of 89.4%, mAP@0.50 of 98.9%, mAP@0.75 of 98.8%, and mAP@0.50:0.95 of 87.0%, as shown in Table 4. The model performance across photoperiods and epochs is shown in Figure 7. The performance metrics increased as the number of epochs increased, possibly due to more training, the large architecture size, the higher number of parameters, and greater learning. The graph shows that the performance metrics were highest during nighttime because of the larger PB flock size: it is easier to detect a particular group by differentiating large groups of hens from individual hens, as shown in Figure 8.
Based on Figure 9, it can be observed that the IoU loss decreases as the number of training epochs increases. Furthermore, the YOLOv6-PB nighttime model had a lower IoU loss than the YOLOv6-PB daytime model, indicating that the nighttime model performed better in detecting PB.

3.3. Performance of Piling Behavior under Different Camera Settings

According to the results of this study, the YOLOv6l relu ground camera model (at a height of 0.5 m) showed the highest performance metrics for PB detection, with an average recall of 66.8%, mAP@0.50 of 96.4%, mAP@0.75 of 56.9%, and mAP@0.50:0.95 of 57.6%, due to the clear view it provides (Table 5). This model can therefore be recommended for ground-level PB detection, as shown in Figure 10. In conclusion, the ground camera proved more effective for PB detection on the test datasets and is thus recommended for this purpose.
Figure 11 shows that the IoU loss decreases as training epochs increase. Additionally, the YOLOv6-PB ground camera model had a lower IoU loss than the YOLOv6-PB ceiling model, indicating that the former performed better in detecting PB.

4. Discussion

In the present study, we aimed to evaluate the performance of various YOLOv6 models for the detection of PB in poultry. PB is a serious concern in commercial poultry farming, as it can lead to decreased welfare [2,20,22] and other negative consequences. In addition, PB has been linked to floor eggs [15,28] and FELB [28], so accurate detection can also help reduce these issues. This study focused on using computer vision techniques to detect PB accurately, which could help reduce its occurrence and improve poultry welfare. We analyzed several YOLOv6 models and found that the YOLOv6l relu-PB model performed best on the available datasets, likely due to its larger architecture, consisting of more convolutional layers and parameters [31,39]. Accurately detecting PB can reduce false detections and lead to more effective identification and reduction of PB. However, it is important to consider that factors such as housing systems and bird numbers in commercial farming may impact the accuracy of PB detection. Therefore, the YOLOv6l relu-PB model will be further tested and optimized for commercial farm settings to increase its accuracy.
Piling behavior detection is crucial under various environmental conditions, such as different photoperiods and camera heights. This study has shown that detecting PB is crucial for maintaining animal welfare, especially during nighttime. Our results suggest that larger group sizes contribute significantly to the occurrence of PB at night; PB was highest during nighttime, as hens tend to pile together in a large group to secure safety through social contact [40]. Furthermore, our study has shown that, without night vision cameras, detection performance decreases during nighttime. Therefore, night vision cameras are recommended to enhance detection precision in both daytime and nighttime monitoring. The YOLOv6l relu-PB nighttime model was the most effective at accurately detecting PB during nighttime. Similarly, this study highlighted the significance of camera settings and heights in improving the accuracy of PB detection and found that the closer the camera was to the targeted objects, the higher the detection accuracy. While a camera close to the target object can help detect objects at short range, a ceiling camera can provide an overview of the whole room. In the future, a ceiling camera could aid in room overview and transmit PB signals to a ground robot, enabling the robot to navigate to the PB area and locate the cause of PB, helping to reduce the gathering of hens in a specific place. Therefore, both camera heights are required to improve the PB detection model. To achieve this, training on more image datasets under various environmental conditions and settings is necessary.
In evaluating object detection models, Intersection over Union (IoU) has been recommended as a standard metric for assessing the quality of segmentation [41]. By analyzing IoU values separately, it was observed that segmenting fewer sampled classes was particularly challenging, even when using focal loss. A lower IoU loss signifies better detection accuracy, thereby supporting the conclusion that the YOLOv6-PB nighttime and YOLOv6-PB camera ground models performed better during testing. This information is crucial in evaluating the effectiveness of detection models and can help researchers identify areas that require further improvement. In summary, our study highlights the importance of considering IoU values in assessing detection accuracy, particularly when fewer classes are being segmented.
This study investigated the effectiveness of the YOLOv6 model in detecting behaviors, such as PB, in real-world settings. We found that the model performed well in various scenarios and could handle unpredictable situations. However, collecting enough image data for training PB detection models can be challenging, leading to data imbalance or overfitting issues that can affect the model’s accuracy [42]. To address this, we used data augmentation techniques, such as geometric transformations, brightness/contrast enhancement, and data normalization, to increase the training dataset size and improve accuracy rates. Extending the training dataset through data augmentation is essential for accurate PB detection, as accuracy depends heavily on the dataset’s size and resolution. These challenges can be overcome by improving the training procedure and pre-processing the images, thereby achieving more accurate detection results.
This study has some limitations. For instance, Figure 12 highlights that it is not appropriate to evaluate a model’s effectiveness based solely on one aspect; rather, it should be assessed on its overall performance. Our proposed model has some significant limitations. During the test phase, it occasionally misclassified group behaviors, such as dustbathing, feeding, perching, foraging, and drinking, as PB. Similarly, the model sometimes detected the feeder as PB, possibly because its color is similar to that of the hens. We intend to enhance the model by training it on additional datasets to address this issue. This study’s custom dataset includes many images of PB alongside hens dustbathing, feeding, perching, foraging, and drinking; therefore, if hens were merely gathering or coming closer to each other during these activities, the model waited until it detected PB before registering an identification. However, the model sometimes mistakenly flagged hens as piling when many hens came close together. Moreover, nighttime detection proved challenging, leading us to replace regular cameras with night vision cameras; without night vision capabilities, it is tough to identify birds. We also discovered that camera height significantly impacts detection accuracy. As the camera height increases, the sensitivity, resolution, signal-to-noise ratio, and field of view decrease, which can cause detection errors. However, a camera placed too close to the hens also causes false detection problems, through blurring or by detecting nearby objects as PB. In addition, the cameras need to be cleaned daily to obtain the best quality videos, because CF housing contains higher dust levels [43,44]. Overall, image quality decreases when the camera is too high or too close to the hens, and periodic camera cleaning is required. These limitations are the most noteworthy findings of our study.
In the future, detecting and preventing PB in hens is an important research area, and several promising directions exist to explore. One of these directions involves using advanced computer vision techniques, such as the YOLOv6 model, which can accurately identify and differentiate between objects and their behaviors. By analyzing videos, the YOLOv6 model can simultaneously evaluate multiple categories or classes and provide unique identifiers for each detected object, making it a good choice for detecting PB. Researchers could also explore the use of multi-sensor systems to obtain a complete view of hen behavior and develop non-invasive methods for detecting PB to reduce stress on the hens. Additionally, investigating the impact of environmental factors on PB and using machine learning algorithms to analyze large datasets could provide valuable insights into this issue. Finally, we could better understand the hen’s behavior and welfare by integrating data from multiple sensors, such as cameras, microphones, and pressure sensors.
To prevent PB, it is important to understand its underlying causes and motivations. Studying the social dynamics of hen groups, their individual personalities, and preferences can help develop targeted interventions to prevent or mitigate this behavior. Automated feedback systems can provide real-time information to farmers or caretakers about the prevalence and severity of PB, allowing them to intervene when necessary and improve welfare outcomes for hens. Improving an understanding of PB and the environmental and social factors that drive it can help to develop more effective strategies to improve the welfare and overall health of hens in both commercial and non-commercial settings.

5. Conclusions

This study tested different deep-learning models for detecting PB in research CF houses. The model development used 9000 images for training, validation, and testing. The results show that the YOLOv6l relu-PB model has higher performance in detecting PB, with a higher mAP@0.50 (98.9%), mAP@0.50:0.95 (63.7%), and average recall (70.6%) than the other models. Both ceiling and ground cameras are important for detecting PB precisely; however, the ground camera yields higher precision for detecting PB. Cameras with built-in night vision technology can help increase detection accuracy, and the ceiling camera showed higher precision in detecting PB during nighttime. However, we encountered some common problems, such as inaccurate detection and difficulty recognizing objects that were too close or too far away. To address these problems, we propose additional techniques for training the YOLOv6 model, such as spontaneous geometric transformation and spontaneous color dithering.
Our research proposed the YOLOv6 model, which leverages the EfficientRep backbone to extract features from input images, thus enhancing the model’s feature learning and boosting the network’s performance. By detecting PB quickly and accurately, we can minimize the negative impact on animal welfare and reduce FELB, leading to better health outcomes and production. However, we noticed some limitations in real-time applications, such as the model’s inability to classify images containing groups of hens or hens too close together. As a result, future research should focus on improving the approach’s accuracy and addressing these types of datasets. Overall, the YOLOv6l relu-PB detection model is recommended for monitoring PB and will be tested in commercial houses.

Author Contributions

Conceptualization: R.B.B. and L.C.; methodology: R.B.B. and L.C.; data analysis: R.B.B.; investigation: R.B.B., S.S., X.Y. and L.C.; resources: L.C.; writing—original draft preparation: R.B.B. and L.C.; supervision: L.C.; funding acquisition: L.C. All authors have read and agreed to the published version of the manuscript.

Funding

Egg Industry Center; Georgia Research Alliance (Venture Fund); Oracle America (Oracle for Research Grant, CPQ-2060433); UGA CAES Dean’s Office Research Fund; UGA COVID Recovery Research Fund; USDA-NIFA AFRI (2023-68008-39853); Hatch projects (GEO00895; Accession Number: 1021519) and (GEO00894; Accession Number: 1021518).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Campbell, D.; Makagon, M.; Swanson, J.; Siegford, J. Litter Use by Laying Hens in a Commercial Aviary: Dust Bathing and Piling. Poult. Sci. 2016, 95, 164–175. [Google Scholar] [CrossRef] [PubMed]
  2. Winter, J.; Toscano, M.J.; Stratmann, A. Piling Behaviour in Swiss Layer Flocks: Description and Related Factors. Appl. Anim. Behav. Sci. 2021, 236, 105272. [Google Scholar] [CrossRef]
  3. Gray, H.; Davies, R.; Bright, A.; Rayner, A.; Asher, L. Why Do Hens Pile? Hypothesizing the Causes and Consequences. Front. Vet. Sci. 2020, 7, 616836. [Google Scholar] [CrossRef] [PubMed]
  4. Rice, M.; Acharya, R.; Fisher, A.; Taylor, P.; Hemsworth, P. Characterising Piling Behaviour in Australian Free-Range Commercial Laying Hens. In ISAE 2020 Global Virtual Meeting: Online Programme Book; ISAE: Puch, Austria, 2020; p. 1. [Google Scholar]
  5. Barrett, J.; Rayner, A.; Gill, R.; Willings, T.; Bright, A. Smothering in UK Free-range Flocks. Part 1: Incidence, Location, Timing and Management. Vet. Rec. 2014, 175, 19. [Google Scholar] [CrossRef] [PubMed]
  6. Herbert, G.T.; Redfearn, W.D.; Brass, E.; Dalton, H.A.; Gill, R.; Brass, D.; Smith, C.; Rayner, A.C.; Asher, L. Extreme Crowding in Laying Hens during a Recurrent Smothering Outbreak. Vet. Rec. 2021, 188, e245. [Google Scholar] [CrossRef]
  7. Rayner, A.; Gill, R.; Brass, D.; Willings, T.; Bright, A. Smothering in UK Free-range Flocks. Part 2: Investigating Correlations between Disease, Housing and Management Practices. Vet. Rec. 2016, 179, 252. [Google Scholar] [CrossRef]
  8. Bright, A.; Johnson, E. Smothering in Commercial Free-Range Laying Hens: A Preliminary Investigation. Anim. Behav. 2011, 119, 203–209. [Google Scholar] [CrossRef]
  9. Marder, J.; Arad, Z. Panting and Acid-Base Regulation in Heat Stressed Birds. Comp. Biochem. Physiol. Part A Physiol. 1989, 94, 395–400. [Google Scholar] [CrossRef]
  10. Kang, H.; Park, S.; Jeon, J.; Kim, H.; Kim, S.; Hong, E.; Kim, C. Effect of Stocking Density on Laying Performance, Egg Quality and Blood Parameters of Hy-Line Brown Laying Hens in an Aviary System. Eur. Poult. Sci. 2018, 82, 245. [Google Scholar]
  11. Mashaly, M.; Hendricks, G., 3rd; Kalama, M.; Gehad, A.; Abbas, A.; Patterson, P. Effect of Heat Stress on Production Parameters and Immune Responses of Commercial Laying Hens. Poult. Sci. 2004, 83, 889–894. [Google Scholar] [CrossRef]
  12. Hartcher, K.M.; Jones, B. The Welfare of Layer Hens in Cage and Cage-Free Housing Systems. World’s Poult. Sci. J. 2017, 73, 767–782. [Google Scholar] [CrossRef]
  13. Campbell, D.L.; Hinch, G.N.; Downing, J.A.; Lee, C. Fear and Coping Styles of Outdoor-Preferring, Moderate-Outdoor and Indoor-Preferring Free-Range Laying Hens. Appl. Anim. Behav. Sci. 2016, 185, 73–77. [Google Scholar] [CrossRef]
  14. Gebhardt-Henrich, S.G.; Stratmann, A. What Is Causing Smothering in Laying Hens? Vet. Rec. 2016, 179, 250. [Google Scholar] [CrossRef] [PubMed]
  15. Riber, A.B. Development with Age of Nest Box Use and Gregarious Nesting in Laying Hens. Appl. Anim. Behav. Sci. 2010, 123, 24–31. [Google Scholar] [CrossRef]
  16. Giersberg, M.F.; Kemper, N.; Spindler, B. Pecking and Piling: The Behaviour of Conventional Layer Hybrids and Dual-Purpose Hens in the Nest. Appl. Anim. Behav. Sci. 2019, 214, 50–56. [Google Scholar] [CrossRef]
  17. Lentfer, T.L.; Gebhardt-Henrich, S.G.; Fröhlich, E.K.; von Borell, E. Influence of Nest Site on the Behaviour of Laying Hens. Appl. Anim. Behav. Sci. 2011, 135, 70–77. [Google Scholar] [CrossRef]
  18. Singh, R.; Cheng, K.; Silversides, F. Production Performance and Egg Quality of Four Strains of Laying Hens Kept in Conventional Cages and Floor Pens. Poult. Sci. 2009, 88, 256–264. [Google Scholar] [CrossRef]
  19. Villanueva, S.; Ali, A.; Campbell, D.; Siegford, J. Nest Use and Patterns of Egg Laying and Damage by 4 Strains of Laying Hens in an Aviary System1. Poult. Sci. 2017, 96, 3011–3020. [Google Scholar] [CrossRef] [PubMed]
  20. Altan, O.; Seremet, C.; Bayraktar, H. The Effects of Early Environmental Enrichment on Performance, Fear and Physiological Responses to Acute Stress of Broiler. Arch. Für Geflügelkunde 2013, 77, 23–28. [Google Scholar]
  21. Bist, R.B.; Subedi, S.; Chai, L.; Regmi, P.; Ritz, C.W.; Kim, W.K.; Yang, X. Effects of Perching on Poultry Welfare and Production: A Review. Poultry 2023, 2, 134–157. [Google Scholar] [CrossRef]
  22. Winter, J.; Toscano, M.J.; Stratmann, A. The Potential of a Light Spot, Heat Area, and Novel Object to Attract Laying Hens and Induce Piling Behaviour. Animal 2022, 16, 100567. [Google Scholar] [CrossRef] [PubMed]
  23. Yang, X.; Chai, L.; Bist, R.B.; Subedi, S.; Wu, Z. A Deep Learning Model for Detecting Cage-Free Hens on the Litter Floor. Animals 2022, 12, 1983. [Google Scholar] [CrossRef]
  24. Yang, X.; Bist, R.; Subedi, S.; Chai, L. A deep learning method for monitoring spatial distribution of cage-free hens. Artif. Intell. Agric. 2023, 8, 20–29. [Google Scholar] [CrossRef]
  25. Subedi, S.; Bist, R.; Yang, X.; Chai, L. Tracking Pecking Behaviors and Damages of Cage-Free Laying Hens with Machine Vision Technologies. Comput. Electron. Agric. 2023, 204, 107545. [Google Scholar] [CrossRef]
  26. Subedi, S.; Bist, R.; Yang, X.; Chai, L. Tracking Floor Eggs with Machine Vision in Cage-Free Hen Houses. Poult. Sci. 2023, 102, 102637. [Google Scholar] [CrossRef] [PubMed]
  27. Sozzi, M.; Pillan, G.; Ciarelli, C.; Marinello, F.; Pirrone, F.; Bordignon, F.; Bordignon, A.; Xiccato, G.; Trocino, A. Measuring Comfort Behaviours in Laying Hens Using Deep-Learning Tools. Animals 2023, 13, 33. [Google Scholar] [CrossRef] [PubMed]
  28. Bist, R.B.; Yang, X.; Subedi, S.; Chai, L. Mislaying behavior detection in cage-free hens with deep learning technologies. Poult. Sci. 2023, 102729. [Google Scholar] [CrossRef]
  29. Pu, H.; Lian, J.; Fan, M. Automatic Recognition of Flock Behavior of Chickens with Convolutional Neural Network and Kinect Sensor. Int. J. Pattern. Recognit. Artif. Intell. 2018, 32, 7. [Google Scholar] [CrossRef]
  30. Wang, C.Y.; Liao, H.; Wu, Y.H.; Chen, P.Y.; Hsieh, J.W.; Yeh, I.H. CSPNet: A New Backbone that can Enhance Learning Capability of CNN. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Seattle, WA, USA, 13–19 June 2020; pp. 1571–1580. [Google Scholar]
  31. Mtjhl, L. Meituan/YOLOv6 2023. Available online: https://github.com/meituan/YOLOv6 (accessed on 18 January 2023).
  32. Horvat, M.; Gledec, G. A Comparative Study of YOLOv5 Models Performance for Image Localization and Classification. In Proceedings of the Central European Conference on Information and Intelligent Systems, Dubrovnik, Croatia, 20–22 September 2023; pp. 349–356. [Google Scholar]
  33. Li, C.; Li, L.; Jiang, H.; Weng, K.; Geng, Y.; Li, L.; Ke, Z.; Li, Q.; Cheng, M.; Nie, W. YOLOv6: A Single-Stage Object Detection Framework for Industrial Applications. arXiv 2022, arXiv:2209.02976. [Google Scholar]
  34. He, K.; Zhang, X.; Ren, S.; Sun, J. Spatial Pyramid Pooling in Deep Convolutional Networks for Visual Recognition. IEEE Trans. Pattern Anal. Mach. Intell. 2015, 37, 1904–1916. [Google Scholar] [CrossRef]
  35. Liu, S.; Qi, L.; Qin, H.; Shi, J.; Jia, J. Path Aggregation Network for Instance Segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–22 June 2018; pp. 8759–8768. [Google Scholar]
  36. Weng, K.; Chu, X.; Xu, X.; Huang, J.; Wei, X. EfficientRep: An Efficient RepVGG-Style ConvNets with Hardware-Aware Neural Network Design. arXiv 2023, arXiv:2302.00386. [Google Scholar]
  37. Jocher, G. YOLOv5 (6.0/6.1) Brief Summary · Issue #6998 · Ultralytics/Yolov5. Available online: https://github.com/ultralytics/yolov5/issues/6998 (accessed on 10 March 2023).
  38. Aburaed, N.; Alsaad, M.; Mansoori, S.A.; Al-Ahmad, H. A Study on the Autonomous Detection of Impact Craters. In Proceedings of the Artificial Neural Networks in Pattern Recognition: 10th IAPR TC3 Workshop, ANNPR 2022, Dubai, United Arab Emirates, 24–26 November 2022; Springer International Publishing: Cham, Switzerland, 2022; pp. 181–194. [Google Scholar]
  39. Li, C.; Li, L.; Geng, Y.; Jiang, H.; Cheng, M.; Zhang, B.; Ke, Z.; Xu, X.; Chu, X. YOLOv6 v3.0: A Full-Scale Reloading. arXiv 2023, arXiv:2301.05586. [Google Scholar] [CrossRef]
  40. Gregory, N.G. Physiology and Behaviour of Animal Suffering; John Wiley & Sons: Hoboken, NJ, USA, 2008; ISBN 1-4051-7302-5. Available online: https://books.google.com/books?hl=en&lr=&id=0bOZocGJMaAC&oi=fnd&pg=PR5&dq=Physiology+and+Behaviour+of+Animal+Suffering%3B+&ots=wJJQHce-sQ&sig=QF9zN5IbQGMMHKpGLcUnjR0cLNY#v=onepage&q=Physiology%20and%20Behaviour%20of%20Animal%20Suffering%3B&f=false (accessed on 25 December 2022).
  41. Martins Crispi, G.; Valente, D.S.M.; Queiroz, D.M.d.; Momin, A.; Fernandes-Filho, E.I.; Picanço, M.C. Using Deep Neural Networks to Evaluate Leafminer Fly Attacks on Tomato Plants. AgriEngineering 2023, 5, 273–286. [Google Scholar] [CrossRef]
  42. Sambasivam, G.A.O.G.D.; Opiyo, G.D. A predictive machine learning application in agriculture: Cassava disease detection and classification with imbalanced dataset using convolutional neural networks. Egypt. Inform. J. 2021, 22, 27–34. [Google Scholar] [CrossRef]
  43. Ni, J.Q.; Erasmus, M.A.; Croney, C.C.; Li, C.; Li, Y. A critical review of advancement in scientific research on food animal welfare-related air pollution. J. Hazard. Mater. 2021, 408, 124468. [Google Scholar] [CrossRef]
  44. Ni, J.Q.; Heber, A.J.; Darr, M.J.; Lim, T.T.; Diehl, C.A.; Bogan, B.W. Air quality monitoring and on-site computer system for livestock and poultry environment studies. Trans. ASABE 2009, 52, 937–947. [Google Scholar]
Figure 1. Experimental cage-free hen house used in this study.
Figure 2. Image datasets labeled based on class: (a) PBceiling, (b) PBground, (c) PBnighttime, and (d) PBdaytime.
Figure 3. YOLOv6-PB architecture.
Figure 4. Confusion matrix for piling behavior detection used for model evaluation.
Figure 5. Piling behavior detection result comparison based on various models: (a) YOLOv6t-PB, (b) YOLOv6n-PB, (c) YOLOv6s-PB, (d) YOLOv6m-PB, (e) YOLOv6l-PB, and (f) YOLOv6l relu-PB.
Figure 6. Comparison of piling behavior detection results between different YOLOv6-PB models based on (a) mAP@0.50 and (b) mAP@0.50:0.95 with 300 epochs and 16 batch size.
Figure 7. Comparison of piling behavior detection results during different photoperiods and epochs with (a) average recall, (b) mAP@0.50, and (c) mAP@0.50:0.95.
Figure 8. Piling behavior detection result based on different photoperiods: (a) nighttime (light turned off) and (b) daytime (light turned on).
Figure 9. Comparison of IoU loss during training between YOLOv6-PB nighttime and YOLOv6-PB daytime models at 300 epochs and 16 batch size.
Figure 10. Piling behavior detection result based on different camera settings: (a) height 0.5 m (ground) and (b) height 3 m (ceiling).
Figure 11. Comparison of IoU loss during training between YOLOv6-PB ceiling and YOLOv6-PB ground models at 300 epochs and 16 batch size.
Figure 12. Example of false piling behaviors detected by the model due to (a) occlusion, (b) foraging, (c) feeder presence, and (d) perching.
Table 1. Data pre-processing for PB model detection.

Class | Original Dataset | Train (70%) | Validation (20%) | Test (10%)
PBceiling | 1500 | 1050 | 300 | 150
PBground | 1500 | 1050 | 300 | 150
PBdaytime | 3000 | 2100 | 600 | 300
PBnighttime | 3000 | 2100 | 600 | 300
PBmodel | 3000 | 2100 | 600 | 300

Note: PBceiling and PBground represent PB observed at camera heights of 3 m and 0.5 m, respectively, above the litter floor; PBdaytime and PBnighttime represent PB during the light period and dark period, respectively.
Table 2. Computational parameters used for the PB model evaluation.

Configuration | Parameters
CPU | 64-core OCPU
GPU (4 counts) | 4 × NVIDIA® A10 (24 GB)
Operating system | Ubuntu 22.10
Accelerated environment | NVIDIA CUDA
Memory | 1024 GB
Drive (2 counts) | 7.68 TB NVMe SSD
Libraries | Torch 1.7.0, Torch-vision 0.8.1, OpenCV-python 4.1.1, NumPy 1.18.5
Table 3. Comparison of performance of the different models with different performance metrics.

Performance Metrics | YOLOv6t-PB | YOLOv6n-PB | YOLOv6s-PB | YOLOv6m-PB | YOLOv6l-PB | YOLOv6l relu-PB
Average Recall (%) | 67.6 | 69.8 | 69.1 | 70.2 | 69.8 | 70.6
mAP@0.50 (%) | 97.6 | 98.9 | 98.5 | 98.1 | 96.3 | 98.9
mAP@0.75 (%) | 67.3 | 70.6 | 70.1 | 73.9 | 73.5 | 74.6
mAP@0.50:0.95 (%) | 60.7 | 62.8 | 62.2 | 63.4 | 62.4 | 63.7
Training time (hrs) | 2.03 | 2.04 | 2.07 | 2.97 | 3.23 | 4.24

mAP—mean average precision; hrs—hours; PB—piling behavior.
Table 4. Comparison of piling behavior during daytime and nighttime using the YOLOv6l relu model.

Data Summary | Average Recall (%) | mAP@0.50 (%) | mAP@0.75 (%) | mAP@0.50:0.95 (%)
YOLOv6l relu-nighttime | 89.4 | 98.9 | 98.8 | 87.0
YOLOv6l relu-daytime | 70.6 | 98.0 | 72.0 | 63.5

mAP—mean average precision.
Table 5. Comparison of piling behavior under different camera settings using the YOLOv6l relu model.

Camera Settings | Average Recall (%) | mAP@0.50 (%) | mAP@0.75 (%) | mAP@0.50:0.95 (%)
YOLOv6l relu-ceiling | 63.8 | 93.1 | 54.0 | 54.5
YOLOv6l relu-ground | 66.8 | 96.4 | 56.9 | 57.6

Ceiling—3 m; ground—0.5 m; mAP—mean average precision.