Article

Body Weight Estimation of Cattle in Standing and Lying Postures Using Point Clouds Derived from Unmanned Aerial Vehicle-Based LiDAR

1 Laboratory of Geo-Information Science and Remote Sensing, Wageningen University & Research, 6708 PB Wageningen, The Netherlands
2 Agricultural Information Institute, Chinese Academy of Agricultural Sciences, Beijing 100012, China
3 Wageningen Environmental Research, Wageningen University & Research, 6708 PB Wageningen, The Netherlands
4 Big Data Development Center, Ministry of Agriculture and Rural Affairs, Beijing 100125, China
* Authors to whom correspondence should be addressed.
Drones 2025, 9(2), 84; https://doi.org/10.3390/drones9020084
Submission received: 21 November 2024 / Revised: 18 January 2025 / Accepted: 20 January 2025 / Published: 22 January 2025

Abstract

This study explores body weight estimation for cattle in both standing and lying postures, using 3D data. We apply an Unmanned Aerial Vehicle-based (UAV-based) LiDAR system to collect data during routine resting periods between feedings under the natural husbandry conditions of a commercial farm, which ensures minimal interruption to the animals. Ground truth data are obtained by weighing cattle as they voluntarily pass an environmentally embedded scale. We developed separate models for standing and lying postures and trained them on features extracted from the segmented point clouds of cattle with unique identifiers (UIDs). The models for the standing posture achieve high accuracy, with the best-performing model, Random Forest, obtaining an R² of 0.94, an MAE of 4.72 kg, and an RMSE of 6.33 kg. For the lying posture, multiple linear regression models are trained using volume- and posture-wise characteristics. The best model, which uses a 1 cm slice thickness for the volume calculation, achieves an R² of 0.71, an MAE of 7.71 kg, and an RMSE of 9.56 kg. These results highlight the potential of UAV-based LiDAR data for accurate and non-intrusive estimation of cattle body weight in lying and standing postures, paving the way for improved management practices in precision livestock farming.

1. Introduction

The increasing demand for animal protein due to rising incomes [1] has led the husbandry sector, which encompasses beef cattle production, to adopt intensive livestock farming [2] with larger average farm sizes [3]. This shift requires a greater focus on animal management [4]. Body weight (BW) is a key indicator in cattle management, offering valuable insights into various aspects, including health status [5,6,7], growth [6], and body reserves. Furthermore, BW plays a crucial role in refining diet strategies [7,8,9], optimising breeding practices [9,10], and improving disease detection [11].
Traditionally, the determination of BW involves the use of large weighing scales [12] or the application of equations based on morphological traits, such as heart girth [13], derived from manual measurements. However, these methods present limitations, particularly when applied in the context of intensive husbandry practices. They are not only time-consuming but also labour-intensive due to the physical manipulation and restraint of animals. As animal-contact processes, they introduce potential risks and induce stress, impacting both the operator and the animals involved in the weighing procedure [6,7]. Consequently, there is a growing demand for an automatic, accurate, and non-intrusive method for acquiring BW [14].
The advancement of remote sensing and computer vision (CV) has emerged as a promising approach to meet this demand. The use of CV-derived body traits from remote sensing data aims to assess BW while minimising the stress and risks associated with traditional methods. For two-dimensional (2D) image approaches, the traits used for BW estimation involve direct body measurements, such as body diagonal length and withers height [15], or the area of top-view or side-view trunk contours, such as the dorsal area [16] and the side area [15], respectively. However, these 2D approaches have inherent limitations related to body surface construction, anatomical landmark identification, and cattle segmentation [17,18,19]. In contrast, three-dimensional (3D) data provide a third dimension that can mitigate these challenges. The 3D data, including depth images and point clouds, broaden the traits available to BW estimation methods. Firstly, body measurements are enriched, allowing simultaneous consideration of the width, length, and height of cattle bodies [20], and girth-wise traits, such as heart girth, can be extracted from 3D data [19]. Secondly, area-based approaches are employed to calculate the surface of the body trunk, departing from traditional contour-based methods. Finally, there is a growing trend among researchers to use the volume of the torso as a valuable metric for estimating BW [9,19,20,21,22,23].
To date, researchers concentrating on CV-based BW estimation have used various sensors, including RGB cameras [15,16], infrared light depth cameras [9,17,20,21,23,24,25], and short-range LiDAR [7,19]. These sensors are mainly employed to capture data from a fixed viewpoint, such as a top view, which confines cattle within restricted spaces resembling narrow passageways [21,23] or chutes [9,16,17,20,24], or requires them to stand still [7,19]. Therefore, notable gaps exist in previous research on estimating cattle BW under natural husbandry conditions. Firstly, the constrained animals deviated from their natural activities. Secondly, these studies considered only the standing posture. Thirdly, the data acquisition platforms used are typically fixed, lack mobility, and potentially hinder implementation in real-world scenarios. To address this, [5] explored body weight estimation using an Unmanned Aerial Vehicle (UAV).
To bridge the identified gaps in cattle body weight estimation, this study introduces a UAV-based LiDAR system for data acquisition during routine resting periods between feedings under the natural farming conditions of a commercial farm. This study aims to explore models tailored for standing and lying cattle and to investigate differences in body volume calculation methods for lying cattle. The research contributes insights into cattle body weight estimation under postural variations, using a UAV-based LiDAR system.

2. Materials and Methods

2.1. Animals and Housing

The experiment was carried out at the Xinshiji commercial farm, situated at coordinates 34.2264264N, 113.6954367E, with an altitude of 98 m, located in Changge County, Henan Province, China. The study involved 96 hybrid male Simmental beef cattle. The cattle were separated into three pens: Pen1 (located on the right), Pen2 (located in the middle) and Pen3 (located on the left), as illustrated in Figure 1a. The respective herd sizes of the three pens were 36, 33 and 27. The cattle received two feedings daily, at 8 AM and 4 PM, in the interior area below the roof and spent most of their daily rest in the exterior areas.

2.2. Data Collection

Data collection for this research spanned from 5 January to 9 January 2023. On 8 and 9 January, drone flights were carried out to capture point clouds that represented the experimental area, while weighing procedures were carried out during the five days to establish the ground truth for the weights of the individual animals. Subsequent analyses in this study were based on the integration of drone-derived point clouds and ground truth data.

2.2.1. Drone Flights

In this study, the term drone flight refers to the execution of a single drone flight over the experimental areas following a predefined flight plan with explicit parameters for flight height and speed. To ensure a non-intrusive approach that posed no harm to the animals, drone flights were scheduled during the cattle's resting periods between the daily feedings. Twelve flights were carried out, covering combinations of speeds (1 m/s, 2 m/s, 3 m/s and 5 m/s) and heights (8 m, 10 m and 15 m). The drone used for these flights was a DJI Matrice 300 RTK, equipped with a Zenmuse L1 LiDAR system (L1). Throughout the data acquisition process, the L1 LiDAR system was oriented vertically towards the ground, scanning the three pens using a non-repetitive scanning pattern in single-return mode at a sampling rate of 240 kHz. At the same time, images of the three pens were captured using the integrated RGB camera at a frequency of 1/3 Hz. Further details of the drone flights are described in our previous research [26].

2.2.2. Weighing

The cattle weighing procedure began with the placement of a commercial industrial scale, with an accuracy of 0.5 kg, at the entrance to the internal space of Pen1. Subsequently, the scale was systematically relocated, first from Pen1 to Pen2 and then from Pen2 to Pen3, with a two-day interval between each relocation. Weighing was conducted before the afternoon feeding, twice for both Pen1 and Pen2 and once for Pen3. For Pen3, an additional weighing session was performed after the morning feeding. The details of the time slots are shown in Table 1. The entrance door created a passage that facilitated the weighing process for Pen1 and Pen2, forcing the cattle to cross the scale, as illustrated in Figure 1a,c. Throughout weighing, cattle had unrestricted access to walk ad libitum over the scale. To record the weighing process, two cameras were deployed: one was mounted at the top of the entrance, recording videos from a top-view perspective, while the other was placed on a tripod outside the fence, simultaneously capturing videos of the entrance from a side-view angle, as shown in Figure 1a. The display, linked to the scale via a cable, was positioned within the field of view of the side camera.

2.3. Data Preprocessing

The data pre-processing phase included three components: the segmentation of cattle, the identification of individual animals, and the volume calculation of lying cattle.

2.3.1. Segmentation

Data acquired during the drone flights were initially stored on a 128 GB microSD card integrated into the L1. After fieldwork, the data were processed using DJI Terra software on a computer equipped with an NVIDIA Tesla T4 GPU with 16 GB of memory, running a Windows operating system. The outcome of this processing was point clouds saved in the .pcd format. Each .pcd file represented the point cloud of the surveyed areas, comprising the three pens. An exception was the drone flight conducted at 3 m/s and 8 m, where a flight interruption occurred due to low battery power, resulting in the absence of data for Pen3.
In our study, individual cattle point clouds within the surveyed areas were segmented using methods tailored for standing and lying cattle. Segmentation of standing cattle employed the approach detailed in [26], implemented in Python with Open3D [27]. For lying cattle, a three-step segmentation process was used. First, the point cloud was cropped and rotated to isolate the point clouds of the three pens and align them along the x-axis. Then, RANSAC plane segmentation, as implemented in Open3D, was applied to each cell of a 3 × 4 grid over the x-y extent of each pen's point cloud, using a distance threshold of 4 cm. The resulting cells were merged to create a coarse point cloud of each pen without the ground. Finally, the coarse point cloud was segmented using the connected-component labelling algorithm in CloudCompare (2.13.alpha [macOS 64-bit]), followed by manual segmentation to separate individual cattle point clouds from the remaining large groups.
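The ground-removal step above relied on Open3D's built-in RANSAC plane segmentation. As an illustrative sketch only (not the study's implementation), the same idea can be expressed in plain NumPy with the same 4 cm distance threshold:

```python
import numpy as np

def ransac_ground_removal(points, distance_threshold=0.04,
                          num_iterations=200, seed=0):
    """Fit a ground plane with RANSAC and return the non-ground points.

    `points` is an (N, 3) array; `distance_threshold` is in metres
    (0.04 m, matching the 4 cm threshold used in the study)."""
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(points), dtype=bool)
    for _ in range(num_iterations):
        # Sample three points and derive the plane they span.
        sample = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(normal)
        if norm < 1e-9:          # skip degenerate (collinear) samples
            continue
        normal /= norm
        d = -normal @ sample[0]
        # Points within the threshold distance of the plane are inliers.
        inliers = np.abs(points @ normal + d) < distance_threshold
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return points[~best_inliers]

# Toy scene: a flat "ground" at z = 0 plus a small cluster 0.5 m above it.
data_rng = np.random.default_rng(3)
ground = np.column_stack([data_rng.random((200, 2)), np.zeros(200)])
animal = np.column_stack([data_rng.random((50, 2)) * 0.3,
                          0.5 + 0.1 * data_rng.random(50)])
cloud = np.vstack([ground, animal])
above = ransac_ground_removal(cloud)
print(above.shape)  # the 50 cluster points remain
```

In the study this step ran per grid cell, which keeps the local ground approximately planar even when the pen floor is uneven.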
After the segmentation step, the numbers of point clouds used to estimate BW were 95 and 425 for standing and lying postures, respectively. Further details are provided in Table 1 and [28].

2.3.2. Identification

Cattle identification was inspired by the work of [29]. The identification was conducted manually by matching the coat patterns in an animal's RGB images derived from the drone flights with a gallery of representative cattle images, pen by pen. The cattle gallery was established by the chronological alignment of animals in the video data from the top-view and side-view cameras that simultaneously recorded the weighing process, as described in Section 2.2.2. Each animal in the gallery was assigned a unique identifier (UID) following a systematic three-digit numbering scheme that starts with the number of the pen to which the animal belonged. For instance, “302” represents the second animal from Pen3. Subsequently, images obtained from a drone flight were annotated with cattle UIDs by matching their coat patterns, specifically the white regions, with the corresponding gallery for each pen. Figure 2 illustrates an example of the identification results. Finally, the point clouds of cattle described in Section 2.3.1 were identified using UIDs based on the corresponding annotated images, ensuring the link between the UIDs and the individual cattle. The raw data and corresponding annotations used for the identification process are available at https://doi.org/10.5281/zenodo.13364663.

2.3.3. Volume Calculation

The volume calculation of cattle point clouds, which represent standing postures, can be carried out through three distinct approaches, as summarised by [18]: 3D mesh, 2.5D volume, and the method proposed by [30]. The last method is based on the principles of Green’s theorem and the divergence theorem; Green’s theorem calculates the area of a slice, while the divergence theorem calculates the volume based on areas. In this study, the latter two methods were implemented to develop predictors for the models of lying cattle body weight estimation. The 3D mesh method was not used due to its requirement for watertight point clouds, which our segmented cattle point clouds do not fulfil.
The calculation of the 2.5D volume was carried out using CloudCompare in this study. The process involved selecting the ground plane, rasterising it, and summing the height of each cell in the grid. This method assumes that the space below the measured height is filled, with the minimum z-value of the point cloud serving as the z-value of the ground plane. The grid step was set to 0.5 cm. Additionally, the values of empty cells were interpolated from the surrounding cells. An illustrative example of the 2.5D volume calculation in CloudCompare is provided in [28].
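The rasterise-and-sum idea behind the 2.5D volume can be sketched in a few lines of NumPy. This is illustrative only (the study used CloudCompare, which additionally interpolates empty cells, omitted here), with the same 0.5 cm grid step:

```python
import numpy as np

def volume_2_5d(points, grid_step=0.005):
    """Approximate 2.5D volume: rasterise the x-y plane, take the
    maximum height per cell relative to the cloud's minimum z (the
    assumed ground plane), and sum height x cell area. The empty-cell
    interpolation performed by CloudCompare is omitted for brevity."""
    z0 = points[:, 2].min()                  # ground plane at minimum z
    heights = points[:, 2] - z0
    ix = ((points[:, 0] - points[:, 0].min()) / grid_step).astype(int)
    iy = ((points[:, 1] - points[:, 1].min()) / grid_step).astype(int)
    grid = np.zeros((ix.max() + 1, iy.max() + 1))
    np.maximum.at(grid, (ix, iy), heights)   # highest point per cell
    return grid.sum() * grid_step ** 2       # sum of column volumes

# A 0.1 m x 0.1 m x 0.2 m box sampled on its top face (true volume 0.002 m^3).
xs, ys = np.meshgrid(np.linspace(0, 0.1, 50), np.linspace(0, 0.1, 50))
top = np.column_stack([xs.ravel(), ys.ravel(), np.full(xs.size, 0.2)])
base = np.array([[0.0, 0.0, 0.0]])           # a single ground point fixes z0
v = volume_2_5d(np.vstack([top, base]))
print(round(v, 4))  # close to 0.002
```

Because only the top surface is visible to a downward-looking LiDAR, this method inherently fills the space under the animal, which is why it tends to overestimate true body volume.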
In this study, the slice-wise volume calculation method represented a simplified implementation of the third method, based on the divergence theorem and Green's theorem. The detailed procedure for our implementation is illustrated in Figure 3. The process began with slicing a point cloud at a fixed thickness along the z-axis, progressing upward from the bottom. For each slice, all points were projected onto the x-y plane. Subsequently, the concave hull algorithm, as implemented by [31] based on the work of [32], was applied to these projected points. This step resulted in a set of marginal points delineating the contour of the sliced point cloud in the x-y plane, as shown in Figure 3f. These marginal points were then sorted counterclockwise around their centroid. In the following step, the Shoelace Algorithm, represented by Equation (1), was applied to calculate the area enclosed by the contour. This algorithm calculates the area of a polygon by summing the products of consecutive coordinates and taking half of the absolute value of the result. Finally, the volume of the point cloud at a given slice thickness was determined by summing the products of each slice's area and the slice thickness, as depicted in Equation (2). In this study, slice thicknesses ranging from 1 cm to 20 cm, in increments of 1 cm, were selected to derive slice-wise volumes across various thicknesses.
Area = (1/2) |Σ_{i=0}^{n−1} (x_i · y_{i+1} − x_{i+1} · y_i)|    (1)
Volume = Σ_{i=1}^{j} SliceArea_i × Thickness    (2)
In addition to the volumetric data, three parameters were derived from the point clouds: the average height of the point cloud, denoted AvgH; the area of the top-projected points of the 10 cm thick bottom slice of a point cloud, denoted LArea; and the area of the top-projected points of the whole point cloud, denoted Area. AvgH was calculated as the average height of all points within a point cloud, as depicted by Equation (3). Both LArea and Area were calculated using the concave hull algorithm, following the methodology used to acquire the slice-wise volumes. A contour of LArea is shown in Figure 3f.
AvgH = (Σ_{i=1}^{n} z_i) / n − min(z_i | i ∈ {1, 2, ..., n})    (3)
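Equations (1)-(3) can be implemented compactly. The sketch below substitutes SciPy's convex hull for the concave hull used in the study (so it is only exact for convex cross-sections), but otherwise follows the slice-project-contour-sum procedure:

```python
import numpy as np
from scipy.spatial import ConvexHull

def shoelace_area(pts):
    """Polygon area via the Shoelace Algorithm (Equation (1)), after
    sorting the vertices counterclockwise around their centroid."""
    c = pts.mean(axis=0)
    order = np.argsort(np.arctan2(pts[:, 1] - c[1], pts[:, 0] - c[0]))
    x, y = pts[order, 0], pts[order, 1]
    return 0.5 * abs(np.sum(x * np.roll(y, -1) - np.roll(x, -1) * y))

def avg_height(points):
    """Average height above the minimum z of the cloud (Equation (3))."""
    return points[:, 2].mean() - points[:, 2].min()

def slice_wise_volume(points, thickness=0.01):
    """Sum of slice area x thickness over horizontal slices
    (Equation (2)). A convex hull stands in for the study's concave
    hull, so the result is exact only for convex cross-sections."""
    z = points[:, 2]
    volume = 0.0
    for z0 in np.arange(z.min(), z.max(), thickness):
        sl = points[(z >= z0) & (z < z0 + thickness)]
        if len(sl) > 3:
            hull = ConvexHull(sl[:, :2])   # marginal (contour) points
            volume += shoelace_area(sl[hull.vertices, :2]) * thickness
    return volume

# Sanity check on a unit-square column of height 0.5 m (true volume 0.5 m^3).
rng = np.random.default_rng(1)
n = 4000
cloud = np.column_stack([rng.random(n), rng.random(n), rng.random(n) * 0.5])
v = slice_wise_volume(cloud, thickness=0.05)
print(round(v, 2))  # slightly under 0.5, as hulls of samples shrink inward
```

A concave hull would follow the inward curves of a cattle cross-section that a convex hull bridges over, which is why the study adopted it for the contour step.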

2.4. Models for Standing Cattle

Body measurements (BMs) derived from the point clouds of standing cattle were sourced from our previous research [26], where these measurements were automatically retrieved from the point clouds of the scanned area. The imported BMs included withers height (WiH), waist height (WH), hip height (HH), back height (BH), abdomen width (AW), and morphological backbone length (MBL). A variable named TimeGap was introduced to represent the time gap between the day of weighing and the drone flight. Additionally, variables for flight speed and height during the drone flights were included. Initially, the dataset of standing cattle was partitioned into training and test datasets at an 8:2 ratio. Three kinds of models were trained on the training dataset: multiple linear regression, Support Vector Machine (SVM), and Random Forest. The SVM and Random Forest models were trained using ten-fold cross-validation on the training dataset. For linear regression, combined variables were introduced as predictors, encompassing coarse area-wise variables derived from the product of two BMs and volume-wise variables derived from the product of three BMs. In total, the full linear regression model for BW estimation comprised 22 variables. To identify the best model with the most effective subset of variables, various model selection techniques were used, including stepwise model selection based on the Akaike Information Criterion (AIC) and the Bayesian Information Criterion (BIC), as well as LASSO and Ridge Regression.
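The split-and-cross-validate setup above can be sketched with scikit-learn. The snippet uses synthetic data in place of the real BM dataset, and the hyperparameters shown are illustrative assumptions rather than those of the study:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score, train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

# Synthetic stand-in for the 95-animal BM dataset; the real predictors
# were WiH, WH, HH, BH, AW and MBL plus TimeGap, flight speed and height.
rng = np.random.default_rng(42)
X = rng.normal(size=(95, 9))
y = 225 + 20 * X[:, 0] + 10 * X[:, 4] + rng.normal(scale=5, size=95)

# 8:2 split into training and test datasets, as in the study.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

models = {
    "random_forest": RandomForestRegressor(n_estimators=200, random_state=0),
    "svm": make_pipeline(StandardScaler(), SVR(C=100.0)),  # assumed settings
}
for name, model in models.items():
    # Ten-fold cross-validation on the training dataset, scored by RMSE.
    rmse = -cross_val_score(model, X_tr, y_tr, cv=10,
                            scoring="neg_root_mean_squared_error")
    print(name, round(rmse.mean(), 2))
```

Repeating this over 50 random splits, as the study does, turns the single RMSE values into the μ and σ reported in Section 3.2.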
For BW estimation of standing cattle, the performance of the three kinds of models, comprising four linear regression models, SVM, and Random Forest, was compared using root mean squared error (RMSE) and RMSE of prediction (RMSEP) on the training and test datasets, respectively. To ensure an unbiased comparison, the process was repeated 50 times with different random splits of the training and test datasets at the same 8:2 ratio. RMSE and RMSEP were compared based on the respective results, using the mean (μ) and standard deviation (σ).
To facilitate comparison with previous studies, evaluation metrics were selected with nuances for training and test datasets. Mean absolute error (MAE), mean absolute percentage error (MAPE), RMSE, normalised root mean square error percentage (NRMSE), and R 2 were utilised for the training dataset. MAE, MAPE, and RMSEP were employed to assess the performance of the model in the test dataset.
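The evaluation metrics listed above can be computed directly with NumPy. Note that NRMSE is normalised here by the mean of the ground truth, which is one common convention and an assumption on our part:

```python
import numpy as np

def metrics(y_true, y_pred):
    """MAE, MAPE, RMSE, NRMSE and R^2; NRMSE is normalised by the mean
    of the ground truth (an assumed convention)."""
    err = y_true - y_pred
    mae = np.abs(err).mean()
    mape = 100 * np.abs(err / y_true).mean()
    rmse = np.sqrt((err ** 2).mean())
    nrmse = 100 * rmse / y_true.mean()
    r2 = 1 - (err ** 2).sum() / ((y_true - y_true.mean()) ** 2).sum()
    return {"MAE": mae, "MAPE": mape, "RMSE": rmse, "NRMSE": nrmse, "R2": r2}

# Illustrative weights (kg), not data from the study.
y_true = np.array([220.0, 230.0, 210.0, 240.0])
y_pred = np.array([218.0, 233.0, 212.0, 236.0])
m = metrics(y_true, y_pred)
print({k: round(val, 3) for k, val in m.items()})
```

On the test dataset the same RMSE formula is simply reported as RMSEP, since the errors there are prediction errors.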

2.5. Models for Lying Cattle

In this study, multiple linear regression was used to determine optimal models for estimating the body weight of lying cattle. The dataset used for training was aggregated from records derived from the point clouds of lying animals. Each record comprised key parameters, including UID, Gap, 2.5D volume (abbreviated SHD), AvgH, Area, LArea, and slice-wise volume (referred to as Volume). The Gap parameter retained its definition as described in Section 2.4 for the models of standing cattle. The aggregation was performed by computing the average values of Gap, SHD, AvgH, Area, LArea, and Volume for each UID group. These averaged values, along with the corresponding Weight, constituted a new record in the training dataset.
During the training process, augmented predictors were generated from the pairwise interactions of the averaged parameters AvgH, Area, LArea, Volume, and SHD. These augmented predictors were designed to integrate posture features with volumes during training. A set of predictors (n = 18), comprising the augmented predictors and the average parameters, was used for the full model aimed at estimating the response variable, Weight. Model selection was performed using AIC and BIC to identify the best-performing models from the full model.
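The construction of augmented predictors can be sketched as pairwise products of the averaged parameters. The values below are invented toy numbers; the five parameters yield ten pairwise interactions, and the study's full set of 18 predictors additionally includes terms such as Gap:

```python
from itertools import combinations

# One aggregated record for a hypothetical UID; values are invented
# for illustration (heights in m, areas in m^2, volumes in m^3).
record = {"AvgH": 0.55, "Area": 1.1, "LArea": 0.9, "Volume": 0.45, "SHD": 0.47}

# Pairwise products of the five averaged parameters give ten augmented
# predictors that couple posture features with volumes.
augmented = {f"{a}x{b}": record[a] * record[b]
             for a, b in combinations(record, 2)}
predictors = {**record, **augmented}
print(len(predictors))  # 15 = 5 averaged parameters + 10 interactions
```

Interactions such as AvgH × LArea let the regression weight a volume-like quantity that reflects how high and how spread out a lying animal is, which a single averaged parameter cannot capture.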
After training, the study documented the correlation between Volume and Weight and the adjusted R² values of the best AIC and BIC models for each thickness. Analysis was conducted on these results, and model performance for estimating the body weights of lying cattle was evaluated using RMSE and MAE on representative models.

3. Results

In this study, we performed body weight estimation training for standing cattle based on their body measurements and for lying cattle based on their body volumes. Subsequently, we analysed and compared the performance of the trained models.

3.1. Ground Truth

A total of 89 cattle were weighed across the three pens: 35 in Pen1, 31 in Pen2 and 23 in Pen3, as detailed in Table 1. The mean (μ) weight of the weighed cattle was 225.4 kg, with a standard deviation (σ) of 20.1 kg. A detailed breakdown of individual weight records is provided in [28]. To assess the consistency of cattle from different pens, an Analysis of Variance (ANOVA) was conducted on the ground truth to compare the mean weights across pens. The calculated p-value for the F-statistic is 0.068, with 2 degrees of freedom for pen and 86 for residuals. At a 95% confidence level, the results suggest that there are no significant differences in mean weight between the three pens, meaning the pen variable requires no additional attention in modelling.
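The ANOVA above can be reproduced with SciPy's one-way ANOVA. The weights below are simulated from the reported herd statistics, so the resulting p-value is illustrative rather than the paper's 0.068:

```python
import numpy as np
from scipy import stats

# Simulated pen weights drawn from the reported herd statistics
# (mu = 225.4 kg, sigma = 20.1 kg); pen sizes 35, 31 and 23 as weighed.
rng = np.random.default_rng(7)
pen1 = rng.normal(225.4, 20.1, 35)
pen2 = rng.normal(225.4, 20.1, 31)
pen3 = rng.normal(225.4, 20.1, 23)

# One-way ANOVA on mean weight across pens; df = 2 (pen) and 86 (residuals).
f_stat, p_value = stats.f_oneway(pen1, pen2, pen3)
print(f"F = {f_stat:.3f}, p = {p_value:.3f}")
```

A p-value above 0.05 fails to reject equal mean weights across pens, which is the justification for pooling all animals into a single modelling dataset.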

3.2. Modelling Body Weight for Standing Cattle

Models for estimating BW of standing cattle encompassed LASSO, Ridge Regression, SVM, Random Forest, and stepwise selection based on AIC and BIC. They were trained and tested on a dataset with a size of 95. Their evaluation involved the comparison of RMSE on the training dataset and RMSEP on the test dataset over 50 repetitions, as described in Section 2.4. The variation in RMSE (P) is depicted in Figure 4, and details with evaluation metrics are shared in [28].
In the overall analysis of RMSE values, the training dataset demonstrated a smaller μ and a smaller σ compared to the test dataset. Among the models' performance on the training dataset, Random Forest exhibited superior performance in terms of RMSE, followed by SVM. Models employing stepwise techniques for model selection showed slightly better performance than those utilising regularisation techniques, namely LASSO and Ridge Regression. In particular, stepwise AIC marginally outperformed stepwise BIC in terms of both μ and σ, while LASSO and Ridge Regression displayed similar performance. The model with the lowest RMSE, at 6.33 kg, was Random Forest, which achieved an MAE, MAPE, NRMSE, and R² of 4.72 kg, 2.10%, 2.84%, and 0.94, respectively. Concerning the RMSE values on the test dataset, also reported as the Root Mean Squared Error of Prediction (RMSEP), the differences among models were not as pronounced as on the training dataset. The models showed similar σ, and their μ values spanned less than 5 kg. Specifically, SVM emerged as the best performer, obtaining the lowest μ of RMSEP. In contrast, the stepwise models exhibited higher mean values and larger standard deviations than the others, with stepwise AIC demonstrating the worst performance. The best-performing model on the test dataset, which achieved the lowest RMSEP of 7.91 kg, was the SVM model. It obtained MAE, MAPE, and NRMSEP values of 6.39 kg, 2.85%, and 3.59%, respectively.

3.3. Modelling Body Weight for Lying Cattle

In total, 421 point clouds of lying cattle were obtained from 12 drone flights, all corresponding to animals with ground truth body weight. After filtering out data with incomplete body shapes and strong noise, a subset of 401 point clouds was used for volume calculations. Volumes were computed following the procedure outlined in Section 2.3.3. The summary of the volume results is illustrated in Figure 5a. For the 2.5D method, the point cloud was considered as one slice, resulting in a slice count of 1. In the slice-wise methods, a decreasing trend in the number of slices was observed. This decrease was dramatic as the thickness increased from 1 cm to 8 cm, followed by a slower decline from 8 cm to 14 cm. For thicknesses ranging from 15 cm to 18 cm, the medians of the slice counts remained relatively constant. This can be attributed to the typical height of the lying cattle, which has a median of approximately 60 cm. The volumes calculated using the slice-wise method exhibited a consistent increasing trend with increasing thickness. This pattern can be attributed to the amplified effect of the gradient of a slice, where thicker slices result in larger amplification; for example, the volume of a 10 cm thick slice surpassed the combined volume of two 5 cm thick slices. The volumes obtained from the 2.5D method had a median between those of the slice-wise method at thicknesses of 4 cm and 5 cm. This result suggests that volumes calculated using thicknesses of up to 4 cm are closer to the true volume of the animal, whereas those exceeding 5 cm overestimate it. In addition, the red dots representing outliers in Figure 5a indicate variations in the heights of lying cattle, reflecting various lying postures in terms of height. The count of outliers decreased with increasing slice thickness, indicating a diminishing capacity of the volume parameter to represent posture variations in height.
The input dataset (n = 71), derived by average aggregation over UIDs (described in Section 2.5), was used to train multiple linear regression models. Figure 5b illustrates the adjusted R² values of the models selected using both AIC and BIC, as well as the correlation between slice-wise volume and ground truth. In general, several conclusions can be drawn regarding these models. First, models using volumes close to the real volume of the cattle performed better than those based on the 2.5D volumes, as shown by the grey dashed splitting lines. Secondly, while the models selected with AIC generally outperformed those with BIC, both methods confirmed the same models for the thicknesses of 1 cm and 2 cm; volumes at these thicknesses are more accurate than the others. Thirdly, models selected by AIC demonstrated a decreasing trend in adjusted R² as thickness increased, with a fluctuation at the thickness of 15 cm, mirroring changes in correlations. Fourth, the adjusted R² values obtained using BIC decreased rapidly from 1 cm to 6 cm, remaining consistent thereafter apart from occasional peaks. Specifically, the model achieving the highest adjusted R² of 0.71 was obtained at a thickness of 1 cm and was selected by both AIC and BIC; its formula is presented in Equation (4). The lowest adjusted R² value, 0.607, was obtained with BIC at thicknesses of 6, 7, 8, 9, 10, 14, 17, 19, and 20 cm, indicating that identical models were selected for these thicknesses; the formula is described in Equation (5).
Performance metrics are detailed in Table 2. The model represented by Equation (4) performed best, with the lowest RMSE of 9.56 kg, the second lowest MAE of 7.71 kg and the second lowest MAPE of 3.46%. The model represented by Equation (5) performed worst, with an RMSE, MAE, and MAPE of 11.31 kg, 8.84 kg, and 3.95%, respectively. Notably, the model selected by AIC at a thickness of 5 cm achieved slightly better MAE and MAPE than the model represented by Equation (4), although the RMSE of the latter was better. The other models performed with metrics between those of the models represented by Equations (4) and (5). Further details regarding these models are available in the shared data.
Weight ~ Gap + Volume + AvgH × LArea + LArea × SHD    (4)
Weight ~ AvgH × LArea + LArea × SHD    (5)

4. Discussion

In this study, our primary aim was to develop non-intrusive body weight estimation models for both standing and lying cattle. To achieve this aim, we employed a UAV-based LiDAR system to acquire research data during the routine rest periods of cattle between feedings, under the natural husbandry conditions of a commercial farm. The collected data were initially processed, and body weight estimation models were subsequently trained separately for standing and lying postures. To our knowledge, this study represents the first application of UAV-based LiDAR data to estimate the body weight of cattle in standing and lying postures.

4.1. Data Acquisition

Our data acquisition methodology represents a significant advancement over previous approaches. By using drone flights, we obtained data comprising both 2D images and 3D point clouds, capturing diverse postures of the animals in a short time. More importantly, our approach allowed the collection of data from a wide range of animals without interrupting their natural behaviour in natural husbandry environments, supporting animal welfare. This differs markedly from previous studies: for instance, [20] confined animals to restricted spaces during image collection, while [19] conducted data collection under controlled laboratory conditions, involving cumbersome equipment and landmarks marked on the animals.
For data acquisition to establish the ground truth of the cattle's body weight, we placed an electronic scale at the passages (shown in Figure 1) and re-positioned it across the three pens. Weight data were collected as animals walked past the scale ad libitum. This weighing approach proved generally effective, ensuring the welfare of both animals and humans by reducing the potential risks associated with traditional, intrusive weighing methods. However, we encountered several notable constraints during the cattle weight acquisition fieldwork. The use of the scale initially presented challenges due to its cumbersome nature and the risks during weighing, consistent with the observations of [12,14]. In addition, practical complications arose from the soft ground conditions on the farm, which introduced difficulties in calibrating the scale and resulted in a gradual decrease in accuracy over time. This decline was particularly evident when a small group of cattle stood on the scale, causing an imbalance at the feet of the scale. Another constraint was the variation in reactions exhibited by the cattle towards the scale. Although most of the cattle adapted to the scale and walked past it naturally, some displayed curiosity by standing on it or walking past it repeatedly. On the contrary, a subset of cattle resisted walking past, choosing to wait outside until the door was fully opened, allowing them to bypass the scale from the side. These constraints highlight the importance of ensuring the accuracy of the scale and accounting for different animal responses to the facilities when implementing weighing processes under natural husbandry conditions, providing important information for fine-tuning data acquisition methods in future studies.
In our fieldwork, to mitigate the uncertainty introduced by the soft ground and to ensure complete coverage of individuals within the pens, the weighing process was systematically repeated: twice for Pen 1 and Pen 3, and three times for Pen 2. Additionally, the scale was re-calibrated two or three times during each weighing session.
For this study, the ground truth for each individual was established as the maximum recorded weight of that animal. To validate this ground truth, a statistical test was performed, given that the animals were of the same age (mean: 9 months). The null hypothesis (H0), that there was no difference in mean ground-truth weight between the pens, was tested by ANOVA. At the 95% confidence level, the result did not yield enough evidence to reject H0. This outcome affirms that cattle from different pens could reasonably be treated as one group for the purpose of body weight estimation.
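The validation step above can be sketched as follows: take each animal's maximum recorded weight as ground truth, then run a one-way ANOVA across pens. The weights below are synthetic placeholders, not the study's data.

```python
# Sketch of ground-truth construction and the pen-wise ANOVA test.
# Weights are illustrative (same mean per pen, as under H0), not real data.
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(0)

# Repeated weighings per animal (UID -> array of weights, kg); max = ground truth.
records = {
    "pen1": {f"p1_{i}": rng.normal(220, 15, size=2) for i in range(36)},
    "pen2": {f"p2_{i}": rng.normal(222, 15, size=3) for i in range(33)},
    "pen3": {f"p3_{i}": rng.normal(219, 15, size=2) for i in range(27)},
}
ground_truth = {
    pen: np.array([weights.max() for weights in animals.values()])
    for pen, animals in records.items()
}

# H0: equal mean weight across pens; reject only if p < 0.05.
f_stat, p_value = f_oneway(*ground_truth.values())
print(f"F = {f_stat:.2f}, p = {p_value:.3f}")
```

A p-value at or above 0.05 would, as in the study, give no evidence to reject H0 and justify pooling the pens.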

4.2. Model Analysis

This study explored body weight estimation models for the two postures, standing and lying, exhibited by cattle, using point cloud data acquired with our UAV-based LiDAR system. The models were analysed separately by posture, as follows:

4.2.1. Models for Standing Posture

BW estimation models for standing cattle were described in Section 2.4 and compared internally in Section 3.2. Our analysis revealed that SVM and Random Forest outperformed the linear regression-based models, including stepwise AIC, stepwise BIC, LASSO, and Ridge Regression, on the training dataset. The μ and σ of RMSE were 7.35 kg and 0.45 kg for Random Forest, and 10.07 kg and 0.83 kg for SVM, as illustrated in Figure 4. Although SVM's performance was slightly inferior to Random Forest's, we recommend considering SVM as the primary choice for future research, for two reasons. Firstly, SVM achieved the lowest μ of RMSEP among all models, indicating better overall prediction accuracy on the test dataset. Secondly, SVM showed an intersection of its RMSE and RMSEP results, unlike Random Forest (as depicted in Figure 4), suggesting a superior generalisation capability. Furthermore, the LASSO and Ridge Regression models could be considered secondary options. Despite their higher μ values of RMSE, they demonstrated intermediate levels of σ, indicating stable predictions. Importantly, their performance on RMSE and RMSEP was more consistent than that of the other models, making them reliable alternatives for BW estimation of standing cattle.
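The internal comparison above can be sketched as repeated random train/test splits, recording RMSE per repetition and summarising each model by μ and σ. The synthetic features below stand in for the extracted body measurements; none of the numbers are the study's.

```python
# Repeated-split RMSE comparison of Random Forest vs. SVM (sketch only;
# synthetic stand-in data, 95 records x 6 body-measurement features).
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.svm import SVR
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(42)
X = rng.normal(size=(95, 6))
y = 200 + X @ rng.normal(5, 1, size=6) + rng.normal(0, 5, size=95)

models = {
    "rf": RandomForestRegressor(n_estimators=100, random_state=0),
    "svm": SVR(kernel="rbf", C=100.0),
}
rmse = {name: [] for name in models}
for rep in range(50):  # 50 repetitions, as in Figure 4
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=rep)
    for name, model in models.items():
        model.fit(X_tr, y_tr)
        rmse[name].append(mean_squared_error(y_te, model.predict(X_te)) ** 0.5)

for name, scores in rmse.items():
    print(f"{name}: mu = {np.mean(scores):.2f} kg, sigma = {np.std(scores):.2f} kg")
```

Comparing μ and σ across models in this way reproduces the kind of stability analysis discussed above.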
A detailed comparison of this study with previous studies is provided in Table 3. On our training dataset, the best-performing model by RMSE evaluation was the Random Forest model, achieving an R² of 0.94. Compared with state-of-the-art studies that report training-set R² for BW estimation of standing cattle, our best model outperformed the majority, falling only slightly below [22] (0.97) and [9] (0.96). Moreover, our model achieved a lower MAPE of 2.10% compared with the 3.13% reported by [22], and its RMSE (6.33 kg) substantially outperformed that of [9] (26.89 kg).
On our test dataset, the SVM model achieved RMSEP and NRMSEP within the intervals (14.08, 15.68) kg and (6.28%, 6.97%), respectively, at 95% confidence, which is comparable to previous studies. Specifically, the RMSEP outperformed that reported by [23] (22.57 kg). However, compared with [20]'s work on cattle in a similar growth phase, their predictions surpassed those of this study. One possible reason is over-fitting in their models, particularly given the size of their model inputs. During the comparison, we observed that previous studies tended to evaluate their results on training datasets rather than test datasets, as detailed in Table 3. This preference may introduce bias due to model over-fitting and weaken the generalisability of the research owing to the lack of testing. We therefore recommend that future researchers demonstrate the generalisation of their models by testing them on independent datasets.
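One way such a 95% interval can be obtained is by treating the 50 per-repetition RMSEP values as a sample and forming a t-based confidence interval for the mean; this is a sketch of that calculation (the paper does not specify its interval method), with synthetic placeholder values.

```python
# t-based 95% confidence interval for mean RMSEP over 50 repetitions
# (hypothetical RMSEP values; the interval method is an assumption).
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
rmsep = rng.normal(14.9, 2.8, size=50)  # placeholder RMSEP per repetition (kg)

mean = rmsep.mean()
sem = stats.sem(rmsep)  # standard error of the mean
lo, hi = stats.t.interval(0.95, df=len(rmsep) - 1, loc=mean, scale=sem)
print(f"95% CI for mean RMSEP: ({lo:.2f}, {hi:.2f}) kg")
```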
Overall, these internal and external comparisons underscore the feasibility and efficacy of our approach for BW estimation of standing cattle using UAV-based LiDAR systems under natural husbandry conditions.

4.2.2. Models for Lying Posture

BW estimation models for lying cattle were described in Section 2.5 and compared internally in Section 3.3. In this study, we summarised two body weight estimation models specifically tailored to lying cattle: the Basic Estimation Model (BEM) and the Elevated Estimation Model (EEM). The BEM, represented by Equation (5), serves as a foundational model for estimating body weight from lying-posture data, providing a simplified approach to weight estimation. The EEM, depicted by Equation (4), extends the BEM by incorporating two additional factors: slice-wise volume and day-wise variation. The inclusion of slice-wise volume allows a more precise representation of the cattle's body volume, enhancing the model's predictive capability, while accounting for day-wise variation addresses the daily growth of cattle, providing a more comprehensive picture of weight change over time. The portion shared by the BEM and EEM is AvgH · LArea + LArea · SHD, comprising the parameters SHD, AvgH, and LArea and their pairwise products (AvgH, LArea) and (LArea, SHD). These parameters were defined in Section 2.5 and were selected to capture essential aspects of lying postures. SHD, the 2.5D volume, accounts for the coarse volume of the animal. AvgH represents the average height of all points in a point cloud, reflecting height variation within lying postures, which is quantitatively expressed by the large range of the boxplot at a thickness of 1 cm in Figure 5a. LArea, the ground area beneath the animal's body, offers insight into the ground coverage of the lying posture. LArea was extracted from the 10 cm bottom slice, as this thickness strikes a balance between capturing the ground area accurately and minimising inaccuracies due to missing or extra points.
Thinner slices may lead to an underestimation of ground area coverage due to missing points, which can be caused by the angle of the drone flight path relative to the side of the cattle body farthest from the path. For example, the bottom part of the body may lack points, as shown in the point cloud in Figure 3b,e. Thicker slices may overestimate the ground area owing to points from higher body parts such as the head. Moreover, Area, which represents the area of the orthographic projection of a lying animal's point cloud, was not used in the BEM or EEM, because the 2.5D method already incorporates the Area parameter when deriving SHD. During model training, we averaged records per UID to mitigate variation in the lying posture of the same animal across different drone flights. This approach ensured a consistent representation of lying posture for each animal across the different combinations of flight speed and height used during data collection.
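The bottom-slice extraction described above can be sketched as follows. The paper derives LArea from a concave hull of the 10 cm bottom slice (via concaveman [31]); here a convex hull from scipy is used as a simplified stand-in, which slightly overestimates the footprint, and the point cloud is a synthetic toy box rather than a real scan.

```python
# Extract the 10 cm bottom slice of a point cloud and estimate LArea
# from its x-y footprint (convex hull as a simplified stand-in for the
# concave hull used in the paper; toy uniform cloud, units in metres).
import numpy as np
from scipy.spatial import ConvexHull

rng = np.random.default_rng(3)
points = rng.uniform([0, 0, 0], [2.0, 1.2, 0.8], size=(5000, 3))

slice_thickness = 0.10                       # 10 cm bottom slice
z_min = points[:, 2].min()
bottom = points[points[:, 2] <= z_min + slice_thickness]

hull = ConvexHull(bottom[:, :2])             # footprint on the x-y plane
larea = hull.volume                          # for a 2D hull, .volume is the area
print(f"bottom-slice points: {len(bottom)}, LArea ~= {larea:.2f} m^2")
```

A concave hull would follow the animal's outline more tightly, which is why the paper prefers it over the convex footprint shown here.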
In summary, both the BEM and EEM employed average aggregation over UIDs to address variation within individual cattle across drone flights, and both used the parameters SHD, AvgH, and LArea to express the lying posture of the animals. Additionally, the EEM used two extra parameters, Volume and Gap, to enhance the volume representation of lying cattle and to account for daily growth, respectively. To our knowledge, this study marks the second instance of estimating cattle body weight using UAV-based data. Previously, [5] employed a dataset of 13 lying cattle, after outlier removal, and utilised Structure from Motion (SfM) to construct 3D models from RGB images, developing a linear regression model with a single predictor (the volume of the SfM model) that yielded an R² of 0.73 and an MAE of 38 kg. In comparison, our model demonstrates a better MAE, although it was trained on subjects with a relatively narrow weight range. The EEM in this study achieved an R² of 0.74 (adjusted R² of 0.71), so the two studies reach a similar level of R². However, the R² reported by [5] was obtained after removing three data points from the original dataset.
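The EEM structure summarised above can be sketched as a multiple linear regression over the shared BEM terms (SHD, AvgH, LArea and the products AvgH·LArea, LArea·SHD) plus the EEM-only terms (slice-wise Volume and the day Gap). All feature values and coefficients below are synthetic placeholders; only the feature layout follows the paper.

```python
# EEM-style multiple linear regression sketch (synthetic stand-in data).
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(7)
n = 60
shd = rng.uniform(0.25, 0.45, n)     # 2.5D volume (m^3)
avgh = rng.uniform(0.35, 0.55, n)    # mean point height (m)
larea = rng.uniform(0.9, 1.4, n)     # ground area of the 10 cm bottom slice (m^2)
volume = rng.uniform(0.22, 0.40, n)  # slice-wise volume at 1 cm thickness (m^3)
gap = rng.integers(0, 3, n)          # days between weighing and drone flight
weight = 600 * volume + 40 * gap + rng.normal(0, 8, n)  # toy target (kg)

# Shared BEM terms + EEM-only terms, one column each.
X = np.column_stack([shd, avgh, larea, avgh * larea, larea * shd, volume, gap])
eem = LinearRegression().fit(X, weight)
r2 = eem.score(X, weight)
print(f"EEM sketch R^2 on training data: {r2:.2f}")
```

In practice each row would be the per-UID average of a real animal's features, matching the aggregation strategy described above.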
In our approach to estimating the body weight of lying cattle, even our most effective model, the EEM, leaves 29% of the variation unexplained. Several factors may contribute. Firstly, noise within the point clouds introduces variability; it may arise from extra points surrounding the animal, such as protruding fur or soil blocks, as well as from movement, for example of the head. Secondly, missing parts of the body, often due to occlusion during data collection, can also leave variation unexplained. For instance, the lower portion of the head may be occluded by the animal's own body, yielding incomplete representations such as the upper portion of the point cloud in Figure 3b, especially when the animal lies far from the flight path. Additionally, while our model uses AvgH and LArea to represent lying posture, these features may not fully capture its characteristics. Moreover, considerable density diversity exists among cattle, particularly among those with different body condition levels. For future research that aims to estimate body weight from point cloud volume, we recommend minimising animal movement during data acquisition, improving noise filtering and point cloud completion during pre-processing, incorporating more comprehensive features to describe lying cattle during feature extraction, and integrating body condition scores into the model during training.
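The noise-filtering recommendation above can be sketched as a statistical outlier removal pass, the same idea as Open3D's `remove_statistical_outlier` [27], implemented here with numpy/scipy on a synthetic cloud: drop points whose mean distance to their k nearest neighbours is far above the cloud-wide average.

```python
# Statistical outlier removal sketch for point-cloud denoising
# (numpy/scipy re-implementation of the Open3D-style filter; toy data).
import numpy as np
from scipy.spatial import cKDTree

def remove_statistical_outliers(points, k=20, std_ratio=2.0):
    """Keep points whose mean k-NN distance is within std_ratio standard
    deviations of the cloud-wide mean k-NN distance."""
    tree = cKDTree(points)
    # Query k + 1 neighbours because each point's nearest neighbour is itself.
    dists, _ = tree.query(points, k=k + 1)
    mean_d = dists[:, 1:].mean(axis=1)
    keep = mean_d <= mean_d.mean() + std_ratio * mean_d.std()
    return points[keep]

rng = np.random.default_rng(5)
body = rng.normal(0, 0.2, size=(2000, 3))   # dense body points
noise = rng.uniform(-3, 3, size=(50, 3))    # sparse stray points (fur, soil)
cloud = np.vstack([body, noise])
clean = remove_statistical_outliers(cloud)
print(f"{len(cloud)} -> {len(clean)} points after filtering")
```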

4.3. Limitations and Future Work

Despite the previously mentioned limitations, several others should be acknowledged. Firstly, limitations exist in the experimental design. The homogeneity of the cattle group in terms of age may limit the applicability of the findings to a wider range of age categories. In addition, the use of a single LiDAR in one-way scanning may yield incomplete point clouds for the side of the body positioned far from the flight path, affecting the accuracy of body volume estimation, especially for specific postures such as lying with the head up. Secondly, one model-related limitation concerns the representation of dynamic changes in body weight. In our study, these changes were approximated by the time gap between weighing and drone flights, which lacks detailed expression at the individual level. Moreover, the impact of excretion, which can cause significant weight change in a short time, was not considered. Lastly, the absence of continuous monitoring limited our understanding of changes in cattle body weight over longer periods. Addressing these limitations in future research could enhance the capability, accuracy, and practicality of cattle body weight estimation in PLF.
As for our future work, several tasks are planned. Firstly, we aim to tailor methods for cattle body weight estimation that use both lying and standing data. Secondly, regarding identification, we intend to apply deep learning techniques to automate the process, avoiding the burden of manual visual identification. Thirdly, we seek to explore point cloud completion techniques to enhance the accuracy of body volume extraction from point clouds. Ultimately, our goal is to implement an automated cattle body weight estimation system at the individual level, which could significantly benefit livestock farming by improving efficiency and animal welfare on farms. Additionally, we plan to extend our methods to a wider variety of scenarios, including free-ranging cattle on pasture, monitoring the grazing of other livestock such as goats and yaks, and even applications in wild animal monitoring. By expanding the applicability of our research, we aim to broaden its utility across diverse environments and animal species.

5. Conclusions

In this study, UAV-based LiDAR scanning was deployed across 12 drone flights to non-intrusively collect data on both standing and lying cattle. We developed tailored models for estimating body weight in each posture, which achieved strong performance. Specifically, the Random Forest model for the standing posture demonstrated an MAE of 4.72 kg and an RMSE of 6.33 kg, representing a significant advance in body weight estimation for standing cattle. For the lying posture, our multiple linear regression model achieved an MAE of 7.71 kg and an RMSE of 9.56 kg, highlighting the feasibility of our approach. Our exploration of lying-posture models involved training on slice-wise volumes derived from various slice thicknesses, which highlighted the significance of 2.5D volume data and determined the threshold thickness between the 2.5D volume and slice-wise volumes. Despite limitations such as the focus on animals of similar age and the incomplete expression of lying-posture features, this research contributes to precision livestock farming by advancing 3D data acquisition and posture-wise methodologies for body weight estimation. We recommend that future researchers address these limitations and further refine body weight estimation in precision livestock farming.

Author Contributions

Y.W.: conceptualisation, methodology, software, validation, formal analysis, investigation, data curation, writing—original draft, writing—review and editing, visualisation, project administration. S.M.: conceptualisation, methodology, writing—review and editing, supervision. W.W.: conceptualisation, methodology, resources, writing—review and editing, supervision, project administration. L.K.: conceptualisation, methodology, resources, writing—review and editing, supervision, project administration. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

Data and models for this study are shared at https://doi.org/10.5281/zenodo.11277007.

Acknowledgments

We acknowledge the assistance provided by Huang (the rancher) and Yan (the operator) from the Xinshiji farm.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Tullo, E.; Finzi, A.; Guarino, M. Environmental impact of livestock farming and Precision Livestock Farming as a mitigation strategy. Sci. Total Environ. 2019, 650, 2751–2760.
  2. Ilea, R. Intensive livestock farming: Global trends, increased environmental concerns, and ethical solutions. J. Agric. Environ. Ethics 2009, 22, 153–167.
  3. Lowder, S.; Skoet, J.; Raney, T. The number, size, and distribution of farms, smallholder farms, and family farms worldwide. World Dev. 2016, 87, 16–29.
  4. Berckmans, D. Precision livestock farming technologies for welfare management in intensive livestock systems. Rev. Sci. Tech. 2014, 33, 189–196.
  5. Los, S.; Mücher, C.; Kramer, H.; Franke, G.; Kamphuis, C. Estimating body dimensions and weight of cattle on pasture with 3D models from UAV imagery. Smart Agric. Technol. 2023, 4, 100167.
  6. Ruchay, A.; Kober, V.; Dorofeev, K.; Kolpakov, V.; Dzhulamanov, K.; Kalschikov, V.; Guo, H. Comparative analysis of machine learning algorithms for predicting live weight of Hereford cows. Comput. Electron. Agric. 2022, 195, 106837.
  7. Xavier, C.; Le Cozler, Y.; Depuille, L.; Caillot, A.; Lebreton, A.; Allain, C.; Delouard, J.; Delattre, L.; Luginbuhl, T.; Faverdin, P.; et al. The use of 3-dimensional imaging of Holstein cows to estimate body weight and monitor the composition of body weight change throughout lactation. J. Dairy Sci. 2022, 105, 4508–4519.
  8. Buhman, M.; Perino, L.; Galyean, M.; Swingle, R. Eating and drinking behaviors of newly received feedlot calves. Prof. Anim. Sci. 2000, 16, 241–246.
  9. Martins, B.; Mendes, A.; Silva, L.; Moreira, T.; Costa, J.; Rotta, P.; Chizzotti, M.; Marcondes, M. Estimating body weight, body condition score, and type traits in dairy cows using three dimensional cameras and manual body measurements. Livest. Sci. 2020, 236, 104054.
  10. Ghotbaldini, H.; Mohammadabadi, M.; Nezamabadi-pour, H.; Babenko, O.; Bushtruk, M.; Tkachenko, S. Predicting breeding value of body weight at 6-month age using Artificial Neural Networks in Kermani sheep breed. Acta Sci. Anim. Sci. 2019, 41, e45282.
  11. Frigo, E.; Dechow, C.; Pedron, O.; Cassell, B. The genetic relationship of body weight and early-lactation health disorders in two experimental herds. J. Dairy Sci. 2010, 93, 1184–1192.
  12. Wang, Z.; Shadpour, S.; Chan, E.; Rotondo, V.; Wood, K.; Tulpan, D. ASAS-NANP SYMPOSIUM: Applications of machine learning for livestock body weight prediction from digital images. J. Anim. Sci. 2021, 99, skab022.
  13. Heinrichs, A.; Heinrichs, B.; Jones, C.; Erickson, P.; Kalscheur, K.; Nennich, T.; Heins, B.; Cardoso, F. Verifying Holstein heifer heart girth to body weight prediction equations. J. Dairy Sci. 2017, 100, 8451–8454.
  14. Qiao, Y.; Kong, H.; Clark, C.; Lomax, S.; Su, D.; Eiffert, S.; Sukkarieh, S. Intelligent perception for cattle monitoring: A review for cattle identification, body condition score evaluation, and weight estimation. Comput. Electron. Agric. 2021, 185, 106143.
  15. Yan, Q.; Ding, L.; Wei, H.; Wang, X.; Jiang, C.; Degen, A. Body weight estimation of yaks using body measurements from image analysis. Measurement 2019, 140, 76–80.
  16. Weber, V.; Lima Weber, F.; Silva Oliveira, A.; Astolfi, G.; Menezes, G.; Andrade Porto, J.; Rezende, F.; Moraes, P.; Matsubara, E.; Mateus, R.; et al. Cattle weight estimation using active contour models and regression trees Bagging. Comput. Electron. Agric. 2020, 179, 105804.
  17. Song, X.; Bokkers, E.; Tol, P.; Koerkamp, P.; Van Mourik, S. Automated body weight prediction of dairy cows using 3-dimensional vision. J. Dairy Sci. 2018, 101, 4448–4459.
  18. Wang, Y.; Mücher, S.; Wang, W.; Guo, L.; Kooistra, L. A review of three-dimensional computer vision used in precision livestock farming for cattle growth management. Comput. Electron. Agric. 2023, 206, 107687.
  19. Le Cozler, Y.; Allain, C.; Xavier, C.; Depuille, L.; Caillot, A.; Delouard, J.; Delattre, L.; Luginbuhl, T.; Faverdin, P. Volume and surface area of Holstein dairy cows calculated from complete 3D shapes acquired using a high-precision scanning system: Interest for body weight estimation. Comput. Electron. Agric. 2019, 165, 104977.
  20. Cominotte, A.; Fernandes, A.; Dorea, J.; Rosa, G.; Ladeira, M.; Van Cleef, E.; Pereira, G.; Baldassini, W.; Neto, O. Automated computer vision system to predict body weight and average daily gain in beef cattle during growing and finishing phases. Livest. Sci. 2020, 232, 103904.
  21. Hansen, M.; Smith, M.; Smith, L.; Jabbar, K.; Forbes, D. Automated monitoring of dairy cow body condition, mobility and weight using a single 3D video capture device. Comput. Ind. 2018, 98, 14–22.
  22. Kamchen, S.; Santos, E.; Lopes, L.; Vendrusculo, L.; Condotta, I. Application of depth sensor to estimate body mass and morphometric assessment in Nellore heifers. Livest. Sci. 2021, 245, 104442.
  23. Nir, O.; Parmet, Y.; Werner, D.; Adin, G.; Halachmi, I. 3D Computer-vision system for automatically estimating heifer height and body mass. Biosyst. Eng. 2018, 173, 4–10.
  24. Gomes, R.; Monteiro, G.; Assis, G.; Busato, K.; Ladeira, M.; Chizzotti, M. Estimating body weight and body composition of beef cattle trough digital image analysis. J. Anim. Sci. 2016, 94, 5414–5422.
  25. Kuzuhara, Y.; Kawamura, K.; Yoshitoshi, R.; Tamaki, T.; Sugai, S.; Ikegami, M.; Kurokawa, Y.; Obitsu, T.; Okita, M.; Sugino, T.; et al. A preliminarily study for predicting body weight and milk properties in lactating Holstein cows using a three-dimensional camera system. Comput. Electron. Agric. 2015, 111, 186–193.
  26. Wang, Y.; Mücher, S.; Wang, W.; Kooistra, L. Automated retrieval of cattle body measurements from unmanned aerial vehicle-based LiDAR point clouds. Comput. Electron. Agric. 2024, 227, 109521.
  27. Zhou, Q.; Park, J.; Koltun, V. Open3D: A Modern Library for 3D Data Processing. arXiv 2018, arXiv:1801.09847.
  28. Wang, Y. Data and Models for Cattle Body Weight Estimation Using UAV-Based LiDAR Point Clouds; Wageningen University & Research: Wageningen, The Netherlands, 2024.
  29. Mücher, C.; Los, S.; Franke, G.; Kamphuis, C. Detection, identification and posture recognition of cattle with satellites, aerial photography and UAVs using deep learning techniques. Int. J. Remote Sens. 2022, 43, 2377–2392.
  30. Mirtich, B. Fast and accurate computation of polyhedral mass properties. J. Graph. Tools 1996, 1, 31–50.
  31. Agafonkin, V. Concaveman. GitHub Repository. 2021. Available online: https://github.com/mapbox/concaveman (accessed on 14 May 2024).
  32. Park, J.; Oh, S. A new concave hull algorithm and concaveness measure for n-dimensional datasets. J. Inf. Sci. Eng. 2012, 28, 587–600.
  33. Tasdemir, S.; Ozkan, I. ANN approach for estimation of cow weight depending on photogrammetric body dimensions. Int. J. Eng. Geosci. 2019, 4, 36–44.
Figure 1. Demonstration of weighing on the farm: (a) Overview of the experimental areas, including the weighing scene in Pen 1. (b) Image captured by the camera in Pen 1, showing the cattle and the recorded time. (c) Images captured by the device mounted on the tripod, showing a front view of the cattle depicted in (b); the display in the bottom left corner shows the animal's weight.
Figure 2. Demonstration of cattle unique identifiers (UIDs) for Pen 2 during one drone flight.
Figure 3. Demonstration of slice-wise volume calculation: (a) Corresponding RGB image of the cattle. (b,c) Depict the point cloud in top-view and front-view, respectively. (d) Illustrates the results of slices. (e) Presents the top-view of the bottom slice. (f) Shows the concave contour of (e) on the x-y plane.
Figure 4. RMSE variation of six models for body weight estimation of standing cattle on training and test datasets over 50 repetitions.
Figure 5. The results of the lying-posture methods: (a) Boxplots comparing volumes and counts of slices obtained from the 2.5D and slice-wise methods described in Section 2.3.3. The x-axis labels for the slice-wise method range from 1 to 20, where each number represents the slice thickness in centimetres. The blue dashed line represents the median volume value from the 2.5D method. (b) Variation of the adjusted R² and the correlation between slice-wise volume and weight.
Table 1. Record counts for ground truth and posture-based models in cattle body weight estimation across three pens.

                 Weighing 1                                      GT 4   Standing 2     Lying 3
Pens             5 PM  6 PM  7 PM  8 AM  8 PM  9 AM  9 PM               n0 5   n1 6    n0 5   n1 6
Pen 1 (n = 36)    28    32    /     /     /     /     /          35      23     21     180    168
Pen 2 (n = 33)     /     /   29    17    30     /     /          31      37     36     144    132
Pen 3 (n = 27)     /     /    /     /     /    20    15          23      41     38     147    125
Total                                                            89     101     95     471    425

1 Weighing refers to the fieldwork conducted to weigh the cattle, establishing the ground truth for individual cattle body weight. 2 Record counts related to standing cattle body weight estimation. 3 Record counts related to lying cattle body weight estimation. 4 Ground truth counts of cattle body weight. 5 The count of records directly obtained from point clouds. 6 The count of records associated with ground truth. /: not available.
Table 2. Performance of models selected by Akaike Information Criterion (AIC) and Bayesian Information Criterion (BIC) for lying cattle on 20 different thicknesses used for the slice-wise volume calculation.

Thickness 1   RMSE 2 (kg)         MAE 3 (kg)          MAPE 4 (%)
              AIC      BIC        AIC      BIC        AIC      BIC
 1             9.56     9.56      7.71     7.71       3.46     3.46
 2             9.95     9.95      7.92     7.92       3.56     3.56
 3            10.15    10.30      7.93     8.07       3.55     3.62
 4            10.21    10.40      8.02     8.12       3.59     3.64
 5             9.97    10.43      7.54     8.09       3.38     3.62
 6            10.20    11.31      7.84     8.84       3.51     3.95
 7            10.39    11.31      8.16     8.84       3.65     3.95
 8            10.17    11.31      7.98     8.84       3.57     3.95
 9            10.26    11.31      8.09     8.84       3.63     3.95
10            10.28    11.31      7.80     8.84       3.50     3.95
11            10.10    10.50      7.81     8.26       3.52     3.70
12            10.24    10.54      8.09     8.19       3.62     3.68
13            10.28    10.58      8.01     8.12       3.58     3.63
14            10.30    11.31      7.95     8.84       3.57     3.95
15            10.00    10.44      7.71     8.00       3.48     3.59
16            10.25    10.54      8.13     8.24       3.64     3.70
17            10.41    11.31      8.11     8.84       3.62     3.95
18            10.61    11.08      8.55     8.77       3.83     3.93
19            10.47    11.31      7.87     8.84       3.54     3.95
20            10.27    11.31      8.23     8.84       3.69     3.95

1 Unit: cm. 2 RMSE: Root Mean Square Error. 3 MAE: Mean Absolute Error. 4 MAPE: Mean Absolute Percentage Error.
Table 3. Comparative analysis of models for estimating the body weight of standing cattle.

Studies | Cattle | Predictors | Input | Models | Performance | R²
The best-performance model of this study | 45 | withers height, waist height, hip height, back height, abdomen width, morphological backbone length | 95 | RF | RMSE = 6.33 kg, MAE = 4.72 kg, MAPE = 2.1%, NRMSE = 2.84% | 0.94
[22] | 93 | body volume | / | LR | MAE = 8.85 kg, RMSE = 3.17 kg, MAPE = 3.13% | 0.97
[9] | 55 | rump width, thorax width, dorsal area | 225 | LASSO | RMSE = 26.89 kg | 0.96
[9] | | body weight, height, body depth, body lateral volume | 225 | LASSO | RMSE = 49.2 kg | 0.89
[20] 1 | 124 | dorsal area, body volume, six body widths, six body heights, body length | 15 | MLR | RMSEP = 15.19 kg, NRMSEP = 7.1% | 0.63
[20] 1 | | | 15 | LASSO | RMSEP = 13.63 kg, NRMSEP = 6.37% | 0.70
[20] 1 | | | 15 | PLS | RMSEP = 12.20 kg, NRMSEP = 5.7% | 0.76
[20] 1 | | | 15 | ANN | RMSEP = 11.40 kg, NRMSEP = 5.32% | 0.79
[33] | / | wither height, hip height, body length, hip width | 115 | ANN | / | 0.995
[19] | 64 | volume, area, hip width, backside width, wither height, heart girth | 177 | MLR | RMSE = 18.2 kg, NRMSE = 2.72% | 0.93
[17] | 30 | hip width, days in milk, parity | 30 | MLR | RMSE = 41.2 kg, MAPE = 5.2% | /
[23] | 68 | area, volume, withers height, age | / | MLR | RMSEP = 22.57 kg, MAPE = 5.598%, NRMSEP = 5.478% | 0.946
[21] | 17 | volume | 185 | LR | MAPE = 6.1% | 0.81

RMSE: Root Mean Square Error; RMSEP: Root Mean Square Error on the non-training dataset; MAE: Mean Absolute Error; MAPE: Mean Absolute Percentage Error; NRMSE(P): Normalised Root Mean Square Error (on the non-training dataset); RF: Random Forest; LR: Linear Regression; LASSO: Least Absolute Shrinkage and Selection Operator; MLR: Multiple Linear Regression; PLS: Partial Least Squares; ANN: Artificial Neural Network. 1 The models of this study were trained on data from the stocker phase after weaning.