Article

Assessment of Laying Hens’ Thermal Comfort Using Sound Technology

1
College of Water Resources & Civil Engineering, China Agricultural University, Beijing 100083, China
2
Division Measure, Model & Mange Bioresponses, Department of Biosystems, KU Leuven, Kasteelpark Arenberg 30, 3001 Heverlee, Belgium
*
Authors to whom correspondence should be addressed.
Sensors 2020, 20(2), 473; https://doi.org/10.3390/s20020473
Submission received: 14 December 2019 / Revised: 5 January 2020 / Accepted: 11 January 2020 / Published: 14 January 2020
(This article belongs to the Special Issue Sensors in Agriculture 2019)

Abstract
Heat stress is one of the most important environmental stressors facing poultry production and welfare worldwide. The detrimental effects of heat stress on poultry range from reduced growth and egg production to impaired health. Animal vocalisations are associated with different animal responses and can be used as useful indicators of the state of animal welfare. It is already known that specific chicken vocalisations such as alarm, squawk, and gakel calls are correlated with stressful events, and therefore, could be used as stress indicators in poultry monitoring systems. In this study, we focused on developing a hen vocalisation detection method based on machine learning to assess their thermal comfort condition. For extraction of the vocalisations, nine source-filter theory related temporal and spectral features were chosen, and a support vector machine (SVM) based classifier was developed. As a result, the classification performance of the optimal SVM model was 95.1 ± 4.3% (the sensitivity parameter) and 97.6 ± 1.9% (the precision parameter). Based on the developed algorithm, the study illustrated that a significant correlation existed between specific vocalisations (alarm and squawk call) and thermal comfort indices (temperature-humidity index, THI) (alarm-THI, R = −0.414, P = 0.01; squawk-THI, R = 0.594, P = 0.01). This work represents the first step towards the further development of technology to monitor flock vocalisations with the intent of providing producers an additional tool to help them actively manage the welfare of their flock.

1. Introduction

Animal vocalisations are a fundamental component of animal behaviour and can be used as useful indicators of the animal welfare state [1,2,3]. Heat stress is one of the most critical environmental stressors in poultry production worldwide [4]. The detrimental effects of heat stress on poultry range from reduced growth and egg production to decreased egg quality and safety [5,6]. Moreover, high ambient temperatures have marked impacts on the behaviour, feed and water intake, heat production, and physiological responses (body temperature, respiratory rate, and heart rate) of poultry [7,8], which might elicit specific vocalisations such as alarm, squawk, and gakel calls [9].
The gakel-call, as an indicator of frustration, can serve as an additional indicator of welfare in laying hens [10]. The numbers of gakel-calls and alarm-cackles can be regarded as potential indicators of frustration when recorded continuously [9]. In addition, alarm-calls were found to be indicators of anxiety [1]. The total vocalisation (distress call) rate in chickens (i.e., the sum of all calls per animal per unit of time) is positively correlated with the occurrence of negative or stressful events, and an increase in distress calls is considered an indication of compromised animal welfare [11,12]. Therefore, squawk and alarm calls are recognised as essential vocalisations from an animal welfare perspective [13,14]. However, counting the number of vocalisations without specifying their types is in most cases insufficient for welfare assessment [2], and little is known about whether and how these specific vocalisations of laying hens are related to their thermal environment.
The thermal environment of different production systems can be compared using various thermal comfort indices. Among them, the temperature-humidity index (THI) is an important indicator that combines environmental conditions (temperature, humidity) into a single parameter and gives an idea of the overall effect of the current environment on the animals [15,16]. Therefore, this study concentrates on quantitatively measuring frustration-related vocalisations (alarm, squawk, and gakel calls) to explore whether they are useful indicators for evaluating animals' thermal comfort condition. This study should lead to a deeper understanding of the meaning and welfare relevance of these vocalisations.
Nowadays, modern technology makes it possible to use cameras, microphones, and sensors in proximity to animals to take the place of farmers’ eyes and ears in monitoring animal health and welfare effectively [17,18]. Bio-acoustical-based tools and methods for farming environments are non-invasive techniques for welfare judgements that are recognised as having an essential position in health and welfare monitoring technologies [2]. In animal vocalisation studies, the focus has been to utilise the data for research purposes only. Information on specific vocalisations is often extracted manually from its spectrogram and the intuition of the researcher often drives the choice of parameters. This manual extraction makes the process unsuitable for online and real-time large data analysis [19].
To improve on the current state of the art, two key algorithm components are required for detecting animal vocalisations: feature extraction and classification. For feature extraction, the source-filter theory of vocal production is a robust framework for studying animal vocal communication. According to this theory, the source-related vocal features (the fundamental frequency, f0) and filter-related features (formants, F1–F3) are considered valid input parameters [20]. For example, filter-related features can help in decoding the differences between oestrus and feed-anticipating vocalisations in cows [21]. Both source-related and filter-related vocal features are essential in discriminating individual animal vocalisations [22]. For the classification of the vocalisations, another algorithm must operate on the feature output. Classification through support vector machines (SVM) has been shown to give excellent results in animal sound recognition [23,24,25]. Although the SVM was initially designed for binary (two-class) classification [26], later research extended it to multi-class regression and classification, making it suitable for classifying vocalisations [27]. For example, the sounds of healthy and avian-influenza-infected chickens in the laboratory can be classified using SVM with an accuracy of 84% to 90% [28]. Livestock vocalisation classification algorithms based on SVM have been developed successfully, targeting livestock-related sounds with high accuracy on data sets from different species (sheep: 99.29%, cattle: 95.78%, dogs: 99.67%) [29].
Given the above rationale, this study aims at developing a sound-based monitoring method to automatically evaluate laying hens’ thermal comfort through automated vocalisation detection. A key innovation of this study is the application of sound techniques to automatically and remotely assess laying hens’ heat stress condition for the first time. The objectives are as follows: (i) automatic detection of hen calls, (ii) development of a sound recognition model, and (iii) application of the algorithm.

2. Materials and Methods

2.1. Animal and House

Experiments were carried out at a pilot farm (the Shangzhuang experimental station of China Agricultural University, Beijing, China). There, 100 hens (breed: Jingfen), aged 18 to 20 weeks, were reared in a perch husbandry system (Figure 1). The area of this system was 4.5 m L × 1.6 m W × 2.8 m H (Figure 2). The hens had ad libitum access to food and water, and a timer-controlled light schedule (light period: 8:00 a.m. to 6:00 p.m.) was applied during the experimental period. Experiments were performed inside a temperature-controlled chamber to monitor and record the hens’ vocalisations in different thermal environments (THI levels). THI was recorded every five minutes and divided into four levels: comfort zone (THI < 70), alert zone (THI 70–75), danger zone (THI 76–81), and emergency zone (THI > 81) [15]. Additionally, sounds were classified into four categories: gakel, alarm, squawk, and others (Table 1). All experimental procedures were conducted in conformity with the Jingfen management guidelines and treatments for the care and use of laboratory animals. All recording procedures were non-invasive and did not disturb the animals during their regular daily activity.
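The four THI zones can be encoded directly; a minimal sketch in Python (not the study's toolchain), assuming the THI values themselves are already computed from temperature and humidity readings as in [15]:

```python
def thi_zone(thi: float) -> str:
    """Map a temperature-humidity index (THI) value to the four
    comfort zones used in the study (after Xin & Harmon [15])."""
    if thi < 70:
        return "comfort"
    elif thi <= 75:
        return "alert"
    elif thi <= 81:
        return "danger"
    return "emergency"
```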

2.2. Data Collection

Sound was recorded using a top-view Kinect V1 (a sound monitoring system) for Windows (Microsoft Corp., Redmond, WA, USA). The system was installed 3.0 m above the floor. Sound data were acquired continuously in Waveform Audio File Format (1 channel, 32-bit resolution, 16,000 Hz sampling rate; each file approximately 55 s long). Frequency information up to 8000 Hz (the Nyquist frequency) was considered sufficient to automatically detect the sound events of interest at a low computational cost [31]. The Kinect was connected to a mini industrial personal computer (IPC) via a USB cable (Figure 1). A mobile hard disk drive (HDD) with 2 TB of storage and USB 3.0 was used to store the sound data. During this experiment, 105 h of data were recorded; as the original data are uncompressed, more than 23 GB were stored on the HDD.

2.3. Sound Signal Pre-Processing and Labelling

The sound signal was divided into frames before pre-processing. Each frame consisted of N = 512 samples to comply with the size of the discrete Fourier transform (DFT). Overlapping frames with a 50% overlap were used to avoid losing information, and a Hamming window was applied to reduce edge effects and spectral leakage in each frame. Each frame was then filtered using spectral subtraction and a band-pass filter algorithm in MATLAB [32]. After filtering, the sound data were labelled by manual audio-visual inspection [33]: two trained technicians used Audacity® software version 2.3.0 to replay each recording and annotate the start and end times of all call events. Some labels remained dubious even after validation by an animal expert; these were scored using the method below to decide whether they should be removed or added as other call types in Table 1.
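The framing step can be sketched as follows, assuming NumPy (the paper used MATLAB); the frame length, overlap, and window match the description above:

```python
import numpy as np

def frame_signal(x, frame_len=512, overlap=0.5):
    """Split a 1-D signal into Hamming-windowed frames of 512
    samples with 50% overlap, as in the pre-processing step."""
    hop = int(frame_len * (1 - overlap))          # 256-sample hop
    n_frames = 1 + (len(x) - frame_len) // hop
    window = np.hamming(frame_len)
    return np.stack([x[i * hop : i * hop + frame_len] * window
                     for i in range(n_frames)])
```

Each row of the returned array is one windowed frame, ready for the DFT.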
In addition, manual labelling was used to score the quality of measured data corresponding to 3 ratings [34]. Overlapped sounds labelled as rating 1 were not selected as training and testing datasets due to their complex acoustic features and limitations in sound source separation technology. Moreover, unclear sounds labelled as rating 2 were not regarded as training or testing datasets. Only clear vocalisations in rating 3 were used for data analysis in the following section.

2.4. Algorithm for Automatic Hens’ Call Detection

The developed automatic detection algorithm can be split into three parts: automatic sound event selection, feature extraction, and classification (Figure 3). In the pre-processing step, the raw sound data were cleaned by removing background noise. A random 8 h of raw data were selected for manual labelling, and another 21.5 h of data were used for automatic sound event detection to annotate the beginning and end of possible hen calls. Next, frame-based feature vectors were calculated and classified into different types of calls. The numbers of labelled sounds for training and testing were 2368 and 228,200, respectively. The entire algorithm was developed in NI LabVIEW 2015 (National Instruments Corp., Austin, TX, USA) and MATLAB R2012a (MathWorks Inc., Natick, MA, USA).

2.4.1. Automatic Sound Event Selection

The objective was to detect the calls of hens. A good selection of each sound event, such as its exact beginning and end, will improve the success rate of the classification algorithm [34]. Filtered hen calls contain more energy than the background noise, so the energy envelope method was selected to perform the sound detection. This automatic sound event selection method extracts endpoints from a continuous recording based on a manually selected threshold (0.35 in this trial) [34,35]. The algorithm was gradually optimised by maximising the overlap factor between the algorithm output and the labelled sound [36].
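A minimal sketch of threshold-based envelope detection, under the assumption that a per-frame energy envelope is already available and is normalised to its maximum before thresholding:

```python
import numpy as np

def detect_events(frame_energy, threshold=0.35):
    """Return (start, end) frame indices of candidate sound events:
    contiguous runs where the normalised energy envelope exceeds
    the manually chosen threshold (0.35 in this trial)."""
    env = np.asarray(frame_energy, float)
    env = env / env.max()                  # normalise envelope
    active = env > threshold
    events, start = [], None
    for i, a in enumerate(active):
        if a and start is None:
            start = i                      # event onset
        elif not a and start is not None:
            events.append((start, i))      # event offset
            start = None
    if start is not None:
        events.append((start, len(active)))
    return events
```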

2.4.2. Feature Extraction

After testing 69 spectral and temporal features, only nine features were chosen, based on the observable, distinguishing differences among the four types of calls shown in their feature histograms (Table 2). These features were sufficient for adequate classification. The spectral energy is expressed as:
E = log10 Σ_{n=1}^{L} (X(n) / X_max)^2
where n = 1, …, L indexes the frequency bins, X(n) is the energy of the nth frequency bin, and X_max is the maximal energy over all frequency bins; each bin's energy is thus normalised by the maximal bin energy before summing.
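A sketch of this feature in Python, assuming the per-bin energies X(n) come from the squared magnitude of a real FFT (the exact DFT convention is an assumption, not stated in the paper):

```python
import numpy as np

def spectral_energy(frame):
    """Spectral energy feature: per-bin energies X(n) are normalised
    by the maximal bin energy X_max, squared, summed over all bins,
    and the log10 of the sum is returned."""
    X = np.abs(np.fft.rfft(frame)) ** 2   # energy per frequency bin
    return np.log10(np.sum((X / X.max()) ** 2))
```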

2.4.3. Classification

The SVM classifiers were developed based on statistical learning theory [38]. They are widely used because of their generalization ability. The basic idea is to transform input vectors into a high-dimensional feature space using a nonlinear transformation function, and then to do a linear separation in the feature space. The SVM algorithm can construct a variety of learning machines by use of different kernel functions [23]. Three kinds of kernel functions are usually calculated as:
  • Polynomial kernel function:
    K(x, y) = (gamma · x^T y + coef0)^degree
  • Radial basis function (Gaussian kernel):
    K(x, y) = exp(−gamma · ‖x − y‖^2)
  • Sigmoid function:
    K(x, y) = tanh(gamma · x^T y + coef0)
Although SVMs were initially designed to solve two-class classification problems, multi-class classification can be performed using two common methods: one-vs.-all and one-vs.-one [39]. The former method was applied in the LabVIEW program designed for classifying the hens’ calls. The default maximum number of iterations and the tolerance were set to 10,000 and 0.0001, respectively.
A total of 2368 clips were manually labelled for training data and validation data. To avoid overfitting the data, k-fold cross-validation was used to estimate the classification performance [40]. Given the limitation of data size, the sound data were split into five smaller sets and the algorithm was trained using four sets (80% of the data) and validated on the remaining set (20% of the data). The average of the five validation sets was used to calculate the performance of the algorithm.
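The training setup above can be sketched with scikit-learn (an assumed stand-in for the paper's LabVIEW implementation), using a one-vs.-rest polynomial-kernel SVM and 5-fold cross-validation; the feature vectors here are synthetic, since the labelled hen-call data set is not public:

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

# Synthetic stand-in for the nine-dimensional call feature vectors.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 9))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

# One-vs.-rest SVM with a polynomial kernel, mirroring the paper's
# reported optimal hyper-parameters; scored by 5-fold cross-validation.
clf = SVC(kernel="poly", degree=3, gamma=0.2, coef0=1, C=1,
          decision_function_shape="ovr", tol=1e-4)
scores = cross_val_score(clf, X, y, cv=5)
mean_accuracy = scores.mean()
```

Averaging the five fold scores, as in the paper, gives the reported classification performance estimate.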

2.4.4. Performance Estimation

The overlap factor indicates the similarity between the algorithm output and the labelled sound to evaluate the selection performance. Moreover, the performance of both event selection and classification was evaluated by calculating two different statistical measurements, i.e., sensitivity (recall) and precision [36].
sensitivity = number of true positives / (number of true positives + number of false negatives) × 100%
precision = number of true positives / (number of true positives + number of false positives) × 100%
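Both measures follow directly from a confusion matrix; a small sketch:

```python
import numpy as np

def per_class_metrics(cm):
    """Per-class sensitivity (recall) and precision, in %, from a
    confusion matrix indexed as cm[true_class, predicted_class]."""
    cm = np.asarray(cm, float)
    tp = np.diag(cm)
    sensitivity = tp / cm.sum(axis=1) * 100.0   # TP / (TP + FN)
    precision = tp / cm.sum(axis=0) * 100.0     # TP / (TP + FP)
    return sensitivity, precision
```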

3. Results and Discussion

3.1. Algorithm Performance

To obtain optimised parameters for training the SVM model, the VI classification search parameters.vi was run on the LabVIEW platform. The optimal hyper-parameters of the SVM model were C_SVC for the SVM type, a polynomial kernel, C (penalty factor) = 1, degree = 3, gamma = 0.2, and coef0 = 1. Table 3 presents the final confusion matrix for the SVM training and testing stage, where 5-fold cross-validation was used to estimate the classification performance. As shown in Table 3, gakel calls were easily misclassified as other calls, which results in the lowest sensitivity. The leading cause of this low recognition rate might be the relatively small number of gakel calls; with an improved training set, classification success would increase. In terms of precision, all call types other than the alarm call were easily misclassified as alarm calls; in this experiment, alarm and squawk calls were the most easily confused with each other. The manual labelling and automatic sound event selection stages had a significant influence on the final classification performance.
As shown in Table 4, the alarm call has the highest sensitivity and the gakel call the highest precision. Although the gakel call has the lowest sensitivity, the method achieves a high recognition rate overall, with both average parameters above 95.0%.
Specific vocalisations such as alarm, squawk, and gakel calls in chickens can be regarded as sound indicators of health and/or well-being. In this paper, a bio-acoustical method was developed to automatically recognise these specific vocalisations, which improves on manual observation with its subjective and time-consuming judgement [10,13]. Moreover, the method enables automatic hen call detection in a large data set, which would be impossible by manual examination of spectrograms or by subjectively listening to a huge volume of recordings. In this study, the SVM model showed excellent results in animal sound recognition. The optimal parameters of the SVM were determined through 5-fold cross-validation, which helps avoid overfitting the training set. The resulting performances were 95.1 ± 4.3% (sensitivity) and 97.6 ± 1.9% (precision). Other comparable sound recognition performances were 92.1% accuracy [23]; 92% average accuracy and 84% average precision for three behaviours [25]; 91.9–94.3% for frog calls [41]; 88.75% accuracy for heat stress condition in turkeys [42]; 95.78–99.67% for three animal vocalisations (sheep, cattle, and dog) [29]; and 66.7% sensitivity and 88.4% precision for sneeze/no-sneeze detection [36]. Compared to other methods, fewer features were extracted here. For example, in one study chicken vocalisations were analysed in the time domain, 25 statistical features were extracted from the sound signals, and the five best features were selected using Fisher Discriminant Analysis (FDA) as input parameters for an artificial neural network (ANN) classifier [27]. In this study, only nine features were extracted, and the five best features were sufficient to achieve a precision of 97.0% and a sensitivity of 94.7% (Figure 4 and Figure 5).
In contrast, the reference could only achieve an average accuracy of 88.68% (train) and 83.33% (test) [27].
As shown in Figure 4 and Figure 5, sequential feature selection was performed to observe each feature's contribution to the recognition rate. Although the maximum precision rate was reached with nine features, the first four (feature order 1, 8, 7, and 2) already achieved a stable and high recognition rate. The same held for the sensitivity rate, except that five features (feature order 7, 9, 5, 1, and 8) were needed to achieve a stable and high recognition rate.
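The sequential feature selection procedure can be sketched as a greedy forward search; a hypothetical illustration (not the authors' LabVIEW code), assuming scikit-learn and synthetic data in which only one feature is informative:

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

def forward_selection(X, y, n_keep):
    """Greedy sequential forward selection: repeatedly add the
    feature whose inclusion gives the best 5-fold CV score."""
    selected = []
    remaining = list(range(X.shape[1]))
    while len(selected) < n_keep:
        best = max(remaining, key=lambda f: cross_val_score(
            SVC(kernel="poly", degree=3),
            X[:, selected + [f]], y, cv=5).mean())
        selected.append(best)
        remaining.remove(best)
    return selected
```

With truly informative features, the curve of CV score versus number of selected features plateaus early, as in Figures 4 and 5.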

3.2. Thermal Comfort Assessment

After the development of the sound recognition model, this study tried to assess and analyse the hens’ thermal comfort condition using recognised flock calls. The utility of this model in monitoring animal heat stress was based on a significant correlation between THI and the number of calls. As shown in Table 5 and Table 6, the Pearson correlation was significant at the 0.01 level (2-tailed). Based on this correlation, in the next step, it was possible to develop a vocal-based algorithm to predict animal thermal comfort levels automatically.
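The correlation analysis can be illustrated with SciPy on hypothetical per-period call counts (the study's actual counts are not reproduced here; the paper reports alarm-THI R = −0.414 and squawk-THI R = 0.594):

```python
import numpy as np
from scipy.stats import pearsonr

# Hypothetical counts: squawk calls rising with THI, alarm calls falling.
thi    = np.array([62, 65, 68, 71, 74, 77, 80, 83, 85, 88], float)
squawk = np.array([ 3,  4,  4,  6,  7,  9, 11, 13, 14, 16], float)
alarm  = np.array([12, 11, 11,  9,  8,  8,  6,  5,  5,  4], float)

# Pearson correlation coefficient and two-tailed p-value.
r_squawk, p_squawk = pearsonr(thi, squawk)
r_alarm, p_alarm = pearsonr(thi, alarm)
```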
Using the auto-labelled sound data in the testing dataset and the SVM classifier, we could quantitatively analyse all vocalisations under different thermal conditions. Figure 6 presents the differences in the numbers of classified hen calls under different thermal conditions. We found that more alarm calls than squawk calls were produced in the alert zone, whereas in the emergency zone squawk calls outnumbered alarm calls. It can be concluded that an extreme thermal environment may cause more distress calls than a thermally comfortable one. There are several possible explanations. Squawk and alarm calls are both recognised as essential vocalisations from an animal welfare perspective [9]. The squawk call is a high-frequency vocalisation that can be made when the bird is startled or experiencing pain [14]. Meanwhile, the alarm call is a rather soft turmoil reaction that occurs in response to the appearance of animals or people, objects, and mild external changes [13]. Another study found that chickens under thermal distress produced more distress calls than those under thermal comfort [43]. Our squawk-call results are consistent with that reference; however, the alarm calls show the opposite pattern, which could not be explained in this study.
Hens tend to make a cautionary sound in response to changes in external stimuli and respond sensitively, as a group, to abrupt changes in such sounds [14]. The significant correlation of specific vocalisations such as alarm and squawk calls with THI, observed here for chickens, is meaningful: these sounds could be regarded as sound indicators for estimating the thermal comfort of chickens in different thermal environments. In some cases, however, vocalisations are not present at all; chronic stress and even chronic pain seem to evoke no vocalisation in most animals [44].
Moreover, chronically adverse physical conditions may be expressed by an irregular voiceprint of normal vocalisation [44]. Therefore, further research will study changes in the detailed voiceprint of specific vocalisations under chronic and acute stress conditions. The work conducted in this study was aimed at minimising the challenges expected in commercial deployment of the technology. Given the significant correlation between THI and specific vocalisations, farmers could use a single sound indicator to assess bird thermal comfort levels without automatically collecting environmental parameters. As shown in Figure 7, the method has the potential for online farm application, using fewer sensors and perceiving individual animal behaviour rather than only the ambient environment. The system could also realise anomalous sound monitoring and thermal comfort evaluation.
Additionally, there have been several attempts to measure heat stress responses in poultry. A noise analysis method has demonstrated that it can evaluate thermal comfort [43]; however, that research only interpreted the noise amplitude and noise frequency spectrum of bird vocalisations under thermal distress using manual statistics instead of automatic detection. Another automatic stress recognition system using sound techniques was designed to detect laying hens’ physical stress caused by temperature changes and mental stress from fear [45]; its average accuracy was 96.2% based on an SVM binary classifier. However, its target sounds were not the specific vocalisations, which may make manual observation or automatic sound detection difficult to apply in commercial farms.
Even with ambient environment monitoring sensors, situations that cause laying hens stress can still arise. Unfortunately, since we were only able to monitor and stress one flock and record the birds’ specific vocalisations, more data should be gathered in different settings: the method worked in this henhouse, but its generalisation to other settings remains to be verified. Another essential factor not considered in this experiment was stocking density, because it has been shown that as group size decreases, the number of distress calls increases [11]. In future research, a larger sample size would help detect subtle differences in the numbers of specific vocalisations under different thermal conditions. Moreover, specific vocalisations correlated with non-welfare animal behaviour and other stressors such as pollutants, harmful gases, and equipment failure would be a valuable area of research.

4. Conclusions

We proposed an innovative way of assessing thermal comfort by correlating THI with chicken vocalisations. The method realises automatic hen call detection and classification, yielding a precision of 97.6 ± 1.9% and a sensitivity of 95.1 ± 4.3%. Based on this algorithm, the thermal comfort level of birds may be evaluated easily. Owing to thermal inertia in a commercial henhouse, this method might also serve as an early-warning tool that avoids the lag of ambient environment monitoring. Moreover, we found that specific vocalisations of chickens, such as alarm and squawk calls, were correlated significantly with a heat stress index (THI). Finally, the bio-acoustical method, with data accumulated through the monitoring system, may be considered a stress indicator for evaluating animal welfare in the future.

Author Contributions

Data curation, X.D. and M.L.; Formal analysis, L.C. and T.N.; Investigation, M.L.; Methodology, X.D. and L.C.; Project administration, G.T. and C.W.; Software, X.D.; Supervision, T.N.; Writing—original draft, X.D.; Writing—review & editing, G.T., C.W. and T.N. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Key Research and Development Program of China (grant number: 2017YFD0701602) and (grant number: 2016YFD0700204), and the China Scholarship Council (grant number: 201806350182).

Acknowledgments

This work is supported by the Key Laboratory of Agricultural Engineering in Structure and Environment, which provided research farms, hardware, and technical assistance. Special thanks to Kaidong Lei and Kang Zhou for equipment installation and Wanping Zheng for proofreading the article.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Zimmerman, P.H.; Koene, P. The effect of frustrative nonreward on vocalisations and behaviour in the laying hen, Gallus domesticus. Behav. Process. 1998, 44, 73–79. [Google Scholar] [CrossRef]
  2. Manteuffel, G.; Puppe, B.; Schön, P.C. Vocalization of farm animals as a measure of welfare. Appl. Anim. Behav. Sci. 2004, 88, 163–182. [Google Scholar] [CrossRef]
  3. Kuhne, F.; Sauerbrey, A.F.C.; Adler, S. The discrimination-learning task determines the kind of frustration-related behaviours in laying hens (Gallus domesticus). Appl. Anim. Behav. 2013, 148, 192–200. [Google Scholar] [CrossRef]
  4. Nascimento, D.C.N.D.; Dourado, L.R.B.; de Siqueira, J.C.; de Lima, S.B.P.; da Silva, M.C.M.; da Silva, G.G.; Sakomura, N.K.; Ferreira, G.J.B.C.; Biagiotti, D. Productive features of broiler chickens in hot weather: Effects of strain and sex. Semin. Ciências Agrárias 2018, 39, 731–746. [Google Scholar] [CrossRef]
  5. Lara, L.; Rostagno, M. Impact of heat stress on poultry production. Animals 2013, 3, 356–369. [Google Scholar] [CrossRef]
  6. Freitas, L.C.S.R.; Tinôco, I.F.F.; Baêta, F.C.; Barbari, M.; Conti, L.; Teles Júnior, C.G.S.; Cândido, M.G.L.; Morais, C.V.; Sousa, F.C. Correlation between egg quality parameters, housing thermal conditions and age of laying hens. Agron. Res. 2017, 15, 687–693. [Google Scholar]
  7. Mutibvu, T.; Chimonyo, M.; Halimani, T.E. Physiological responses of slow-growing chickens under diurnally cycling temperature in a hot environment. Braz. J. Poult. Sci. 2017, 19, 567–576. [Google Scholar] [CrossRef] [Green Version]
  8. Vieira, F.M.C.; Groff, P.M.; Silva, I.J.O.; Nazareno, A.C.; Godoy, T.F.; Coutinho, L.L.; Vieira, A.M.C.; Silva-Miranda, K.O. Impact of exposure time to harsh environments on physiology, mortality, and thermal comfort of day-old chickens in a simulated condition of transport. Int. J. Biometeorol. 2019, 63, 777–785. [Google Scholar] [CrossRef]
  9. Collias, N.E. The vocal repertoire of the red junglefowl-a spectrographic classification and the code of communication. Condor 1987, 89, 510–524. [Google Scholar] [CrossRef] [Green Version]
  10. Zimmerman, P.H.; Koene, P.; van Hooff, J. Thwarting of behaviour in different contexts and the gakel-call in the laying hen. Appl. Anim. Behav. 2000, 69, 255–264. [Google Scholar] [CrossRef]
  11. Marx, G.; Leppelt, J.; Ellendorff, F. Vocalisation in chicks (Gallus dom.) during stepwise social isolation. Appl. Anim. Behav. 2001, 75, 61–74. [Google Scholar] [CrossRef]
  12. Du, X.; Lao, F.; Teng, G. A sound source localisation analytical method for monitoring the abnormal night vocalisations of poultry. Sensors 2018, 18, 2906. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  13. Bright, A. Vocalisations and acoustic parameters of flock noise from feather pecking and non-feather pecking laying flocks. Br. Poult. Sci. 2008, 49, 241–249. [Google Scholar] [CrossRef] [PubMed]
  14. Kim, N.Y.; Jang, S.Y.; Kim, S.J.; Jeon, B.T.; Oh, M.R.; Kim, E.K.; Seong, H.J.; Tang, Y.J.; Yun, Y.S.; Moon, S.H. Behavioral and vocal characteristics of laying hens under different housing and feeding conditions. J. Anim. Plant Sci. 2017, 27, 65–74. [Google Scholar]
  15. Xin, H.; Harmon, J.D. Livestock industry facilities and environment: Heat stress indices for livestock. In Agriculture and Environment Extension Publications; Iowa State University Digital Repository: Ames, IA, USA, 1998; p. 163. [Google Scholar]
  16. Perera, W.N.U.; Dematawewa, C.M.B. Thermal comfort differences in poultry houses and its influence on growth performance of broiler strains. Acta Hortic. 2017, 1152, 415–420. [Google Scholar] [CrossRef]
  17. Vandermeulen, J.; Kashiha, M.; Ott, S.; Bahr, C.; Moons, C.P.H.; Tuyttens, F.; Niewold, T.A.; Berckmans, D. Combination of image and sound analysis for behaviour monitoring in pigs. In Proceedings of the 6th european conference on Precision Livestock Farming, Leuven, Belgium, 10–12 September 2013. [Google Scholar]
  18. Kashiha, M.; Pluk, A.; Bahr, C.; Vranken, E.; Berckmans, D. Development of an early warning system for a broiler house using computer vision. Biosyst. Eng. 2013, 116, 36–45. [Google Scholar] [CrossRef]
  19. Mielke, A.; Zuberbühler, K. A method for automated individual, species and call type recognition in free-ranging animals. Anim. Behav. 2013, 86, 475–482. [Google Scholar] [CrossRef]
  20. Favaro, L.; Gamba, M.; Alfieri, C.; Pessani, D.; McElligott, A.G. Vocal individuality cues in the African penguin (Spheniscus demersus): A source-filter theory approach. Sci. Rep.-UK 2015, 5. [Google Scholar] [CrossRef] [Green Version]
  21. Yeon, S.C.; Jeon, J.H.; Houpt, K.A.; Chang, H.H.; Lee, H.C.; Lee, H.J. Acoustic features of vocalizations of Korean native cows (Bos taurus coreanea) in two different conditions. Appl. Anim. Behav. 2006, 101, 1–9. [Google Scholar] [CrossRef]
  22. Chuang, M.; Kam, Y.; Bee, M.A. Territorial olive frogs display lower aggression towards neighbours than strangers based on individual vocal signatures. Anim. Behav. 2017, 123, 217–228. [Google Scholar] [CrossRef]
  23. Dhanalakshmi, P.; Palanivel, S.; Ramalingam, V. Classification of audio signals using SVM and RBFNN. Expert Syst. Appl. 2009, 36, 6069–6075. [Google Scholar] [CrossRef]
  24. Chen, L.J.; Mao, X.; Xue, Y.L.; Cheng, L.L. Speech emotion recognition: Features and classification models. Digit. Signal Process. 2012, 22, 1154–1160. [Google Scholar] [CrossRef]
  25. Steen, K.A.; Therkildsen, O.R.; Karstoft, H.; Green, O. A vocal-based analytical method for goose behaviour recognition. Sensors 2012, 12, 3773–3788. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  26. Cortes, C.; Vapnik, V. Support-vector networks. Mach. Learn. 1995, 20, 273–297. [Google Scholar] [CrossRef]
  27. Banakar, A.; Sadeghi, M.; Shushtari, A. An intelligent device for diagnosing avian diseases: Newcastle, infectious bronchitis, avian influenza. Comput. Electron. Agric. 2016, 127, 744–753. [Google Scholar] [CrossRef]
  28. Huang, J.; Wang, W.; Zhang, T. Method for detecting avian influenza disease of chickens based on sound analysis. Biosyst. Eng. 2019, 180, 16–24. [Google Scholar] [CrossRef]
  29. Bishop, J.C.; Falzon, G.; Trotter, M.; Kwan, P.; Meek, P.D. Livestock vocalisation classification in farm soundscapes. Comput. Electron. Agric. 2019, 162, 531–542. [Google Scholar] [CrossRef]
  30. Konishi, M. The role of auditory feedback in the vocal behavior of the domestic fowl. Zeitschrift für Tierpsychologie 1963, 20, 349–367. [Google Scholar]
  31. Cao, Y.F.; Yu, L.G.; Teng, G.H.; Zhao, S.M.; Liu, X.M. Feature extraction and classification of laying hens’ vocalization and mechanical noise. Trans. Chin. Soc. Agric. Eng. 2014, 18, 190–197. [Google Scholar]
  32. Martin, R. Noise power spectral density estimation based on optimal smoothing and minimum statistics. IEEE Trans. Speech Audio Process. 2001, 9, 504–512. [Google Scholar] [CrossRef] [Green Version]
  33. Tullo, E.; Fontana, I.; Diana, A.; Norton, T.; Berckmans, D.; Guarino, M. Application note: Labelling, a methodology to develop reliable algorithm in PLF. Comput. Electron. Agric. 2017, 142, 424–428. [Google Scholar] [CrossRef]
  34. Carpentier, L.; Berckmans, D.; Youssef, A.; Berckmans, D.; van Waterschoot, T.; Johnston, D.; Ferguson, N.; Earley, B.; Fontana, I.; Tullo, E.; et al. Automatic cough detection for bovine respiratory disease in a calf house. Biosyst. Eng. 2018, 173, 45–56. [Google Scholar] [CrossRef]
  35. Exadaktylos, V.; Silva, M.; Aerts, J.M.; Taylor, C.J.; Berckmans, D. Real-time recognition of sick pig cough sounds. Comput. Electron. Agric. 2008, 63, 207–214. [Google Scholar] [CrossRef]
  36. Carpentier, L.; Vranken, E.; Berckmans, D.; Paeshuyse, J.; Norton, T. Development of sound-based poultry health monitoring tool for automated sneeze detection. Comput. Electron. Agric. 2019, 162, 573–581. [Google Scholar] [CrossRef]
  37. Giannakopoulos, T.; Pikrakis, A. Introduction to Audio Analysis: A MATLAB® Approach, 1st ed.; Academic Press: Cambridge, MA, USA, 2014; ISBN 978-0-08-099388-1. [Google Scholar]
  38. Vapnik, V.N. Statistical Learning Theory; Wiley-Interscience: New York, NY, USA, 1998; pp. 156–160. ISBN 978-0-47-103003-4. [Google Scholar]
  39. Özbek, M.E.; Özkurt, N.; Savacı, F.A. Wavelet ridges for musical instrument classification. J. Intell. Inf. Syst. 2012, 38, 241–256. [Google Scholar] [CrossRef] [Green Version]
  40. Refaeilzadeh, P.; Tang, L.; Liu, H. Cross-Validation. In Encyclopedia of Database Systems; Springer: Boston, MA, USA, 2009; pp. 532–538. ISBN 9780387355443. [Google Scholar]
  41. Chen, W.; Chen, S.S.; Lin, C.C.; Chen, Y.Z.; Lin, W.C. Automatic recognition of frog calls using a multi-stage average spectrum. Comput. Math. Appl. 2012, 64, 1270–1281. [Google Scholar] [CrossRef] [Green Version]
  42. Liu, L.S.; Ni, J.Q.; Li, Y.S.; Erasmus, M.; Stevenson, R.; Shen, M.X. Assessment of heat stress in turkeys using animal vocalization analysis. In Proceedings of the 2018 ASABE Annual International Meeting, Detroit, MI, USA, 29 July–1 August 2018. [Google Scholar] [CrossRef]
  43. Moura, D.J.d.; Naas, I.d.A.; Alves, E.C.d.S.; Carvalho, T.M.R.d.; Vale, M.M.d.; Lima, K.A.O.d. Noise analysis to evaluate chick thermal comfort. Sci. Agric. 2008, 65, 438–443. [Google Scholar] [CrossRef]
  44. Tokuda, I.; Riede, T.; Neubauer, J.; Owren, M.J.; Herzel, H. Nonlinear analysis of irregular animal vocalizations. J. Acoust. Soc. Am. 2002, 111, 2908–2919. [Google Scholar] [CrossRef] [Green Version]
  45. Lee, J.; Noh, B.; Jang, S.; Park, D.; Chung, Y.; Chang, H.H. Stress detection and classification of laying hens by sound analysis. Asian-Australas. J. Anim. Sci. 2015, 28, 592–598. [Google Scholar] [CrossRef]
Figure 1. On-site perch husbandry system.
Figure 2. Schematic of the experiment platform.
Figure 3. Flow chart of automatic hens’ call detection.
Figure 4. Precision rate at each step of the sequential feature selection. The red circle marks the maximum recognition rate.
Figure 5. Sensitivity rate at each step of the sequential feature selection. The red circle marks the maximum recognition rate.
Figure 6. Levels of hen calls under different thermal environments: alert zone (THI 70–75), danger zone (THI 76–81), and emergency zone (THI > 81). The level axis shows the normalised call quantity per five minutes, scaled from level 1 to level 10.
Figure 7. LabVIEW front panel of the web-based sound monitoring system.
Table 1. Description of different hens’ vocalisations.
Call Type | Description
Gakel | Soft, brief (<0.2 s) repetitive notes, generally with a wide frequency range; often emphasise low frequencies (below 2 kHz). Notes have a definite, clear harmonic structure [10,30].
Alarm | High-pitched sound of short duration (<0.2 s) with a distinct harmonic structure; moderately loud (similar to alert calls [9,13,30]).
Squawk | Component notes are short (<0.1 s) with an abrupt onset and ending, and cover a wide frequency range; moderately loud (similar to distress cries [9,13,30]).
Others | Other hen vocalisations. Total vocalisation rate is positively correlated with event aversiveness in domestic chickens [13].
Table 2. Description of feature parameters.
Feature Parameter | Description | Order
Jitter_f0 | Mean absolute difference between frequencies of consecutive f0 (fundamental frequency) periods, divided by the mean frequency of f0 [20] | 1
Jitter_F1 | Mean absolute difference between frequencies of consecutive F1 (the first formant) periods, divided by the mean frequency of F1 [20] | 2
Jitter_F2 | Mean absolute difference between frequencies of consecutive F2 (the second formant) periods, divided by the mean frequency of F2 [20] | 3
Shimmer_F1 | Mean absolute difference between the amplitudes of consecutive F1 periods, divided by the mean amplitude of F1 [20] | 4
Shimmer_F3 | Mean absolute difference between the amplitudes of consecutive F3 (the third formant) periods, divided by the mean amplitude of F3 [20] | 5
ZCR | The zero-crossing rate (ZCR) of an audio frame is the rate of sign changes of the signal during the frame [37] | 6
Spectral spread | The second central moment of the spectrum [36] | 7
Spectral energy | Refer to Equation (1) | 8
Spectral centroid | The centre of 'gravity' of the spectrum [36] | 9
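To illustrate how the frame-level spectral features in Table 2 can be computed, the sketch below implements the zero-crossing rate, spectral centroid, spectral spread, and spectral energy with NumPy, following the textbook definitions cited above [36,37]. It is a minimal reconstruction, not the authors' implementation; the function names and the assumption of a single mono frame are ours.

```python
import numpy as np

def zero_crossing_rate(frame):
    """Fraction of consecutive sample pairs whose signs differ (feature 6)."""
    signs = np.sign(frame)
    return float(np.mean(np.abs(np.diff(signs)) > 0))

def spectral_centroid_and_spread(frame, fs):
    """Centre of 'gravity' of the magnitude spectrum (feature 9) and the
    square root of its second central moment (feature 7)."""
    mag = np.abs(np.fft.rfft(frame))
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / fs)
    p = mag / (mag.sum() + 1e-12)            # normalise to a distribution
    centroid = float(np.sum(freqs * p))
    spread = float(np.sqrt(np.sum(((freqs - centroid) ** 2) * p)))
    return centroid, spread

def spectral_energy(frame):
    """Sum of squared spectral magnitudes (feature 8, cf. Equation (1))."""
    return float(np.sum(np.abs(np.fft.rfft(frame)) ** 2))
```

For a pure 1 kHz tone sampled at 8 kHz, the ZCR evaluates to about 2f/fs = 0.25 and the centroid to about 1000 Hz, which is a quick sanity check on the implementation.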
Table 3. Confusion matrix of support vector machine (SVM) modelling.
Real Call Type | Alarm | Gakel | Squawk | Others | Total | Sensitivity (%)
Alarm | 906 | 0 | 5 | 6 | 917 | 98.8
Gakel | 7 | 96 | 3 | 4 | 110 | 87.3
Squawk | 18 | 0 | 727 | 5 | 750 | 96.9
Others | 8 | 0 | 13 | 570 | 591 | 96.4
Total | 939 | 96 | 748 | 585 | 2368 | -
Precision (%) | 96.5 | 100.0 | 97.2 | 97.4 | - | -
Note: rows are real call types; the Alarm–Others columns are the classifications produced by the nine-feature SVM. - denotes null value.
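The per-class sensitivity and precision in Table 3 follow directly from the confusion matrix: each row sum is the number of real calls of that type, each column sum the number of calls classified as that type. A short NumPy check (matrix values transcribed from Table 3) reproduces the table's percentages:

```python
import numpy as np

# Confusion matrix from Table 3: rows = real call type, columns = classified.
labels = ["Alarm", "Gakel", "Squawk", "Others"]
cm = np.array([
    [906,  0,   5,   6],
    [  7, 96,   3,   4],
    [ 18,  0, 727,   5],
    [  8,  0,  13, 570],
])

sensitivity = np.diag(cm) / cm.sum(axis=1)   # recall over real calls
precision   = np.diag(cm) / cm.sum(axis=0)   # precision over classified calls

for name, se, pr in zip(labels, sensitivity, precision):
    print(f"{name:7s} sensitivity {se:6.1%}  precision {pr:6.1%}")
```

Running this recovers, e.g., 98.8% sensitivity and 96.5% precision for the alarm call, matching Table 3.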
Table 4. Classification performance of the SVM model.
Call Type | Sensitivity ± SD (%) | Precision ± SD (%)
Alarm | 98.4 ± 0.5 | 95.5 ± 1.4
Gakel | 88.9 ± 1.4 | 100.0 ± 0.0
Squawk | 96.1 ± 1.7 | 96.6 ± 0.4
Others | 97.0 ± 1.3 | 98.1 ± 0.5
Total | 95.1 ± 4.3 | 97.6 ± 1.9
Table 5. Correlations between the alarm and temperature-humidity index (THI).
 |  | THI | Alarm
THI | Pearson Correlation | 1 | −0.414 **
 | Sig. (2-tailed) |  | 0.008
 | N | 40 | 40
Alarm | Pearson Correlation | −0.414 ** | 1
 | Sig. (2-tailed) | 0.008 | 
 | N | 40 | 40
** Correlation is significant at the 0.01 level (2-tailed).
Table 6. Correlations between the squawk and THI.
 |  | THI | Squawk
THI | Pearson Correlation | 1 | 0.594 **
 | Sig. (2-tailed) |  | 0.000
 | N | 40 | 40
Squawk | Pearson Correlation | 0.594 ** | 1
 | Sig. (2-tailed) | 0.000 | 
 | N | 40 | 40
** Correlation is significant at the 0.01 level (2-tailed). 0.000 denotes that the value is less than 0.001.
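The coefficients in Tables 5 and 6 are standard Pearson product-moment correlations over the N = 40 five-minute observation windows. A minimal NumPy sketch of the computation follows; the data here are synthetic and purely illustrative (the study's values come from its own measurements), and the helper name is ours:

```python
import numpy as np

def pearson_r(x, y):
    """Pearson product-moment correlation, as reported in Tables 5 and 6."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    xc, yc = x - x.mean(), y - y.mean()
    return float(np.sum(xc * yc) / np.sqrt(np.sum(xc ** 2) * np.sum(yc ** 2)))

# Illustrative only: perfectly linear synthetic data gives r = +1 or -1.
thi = np.array([70, 72, 75, 78, 81, 84], dtype=float)
squawk_like = 2.0 * thi - 100.0    # counts rising with THI  -> r = +1
alarm_like = -0.5 * thi + 60.0     # counts falling with THI -> r = -1
```

As a consistency check on Table 5: with N = 40, r = −0.414 gives a t-statistic of about 2.8 on 38 degrees of freedom, which corresponds to a two-tailed P of roughly 0.008, in line with the reported significance.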

Share and Cite

MDPI and ACS Style

Du, X.; Carpentier, L.; Teng, G.; Liu, M.; Wang, C.; Norton, T. Assessment of Laying Hens’ Thermal Comfort Using Sound Technology. Sensors 2020, 20, 473. https://doi.org/10.3390/s20020473
