Article

A Complete Pipeline to Extract Temperature from Thermal Images of Pigs

Adaptation Physiology Group, Department of Animal Sciences, Wageningen University & Research, P.O. Box 338, 6700 AH Wageningen, The Netherlands
* Author to whom correspondence should be addressed.
Present address: Wageningen Livestock Research, Wageningen University & Research, P.O. Box 338, 6700 AH Wageningen, The Netherlands.
Sensors 2025, 25(3), 643; https://doi.org/10.3390/s25030643
Submission received: 5 December 2024 / Revised: 17 January 2025 / Accepted: 20 January 2025 / Published: 22 January 2025

Abstract

Using deep learning or artificial intelligence (AI) in animal research is a new interdisciplinary field. In this study, we explored the potential of thermal imaging and AI in pig research. Thermal cameras play a vital role in collecting large amounts of data, and AI can process these data and extract valuable information from them. Because the amount of data collected with thermal imaging is huge, automation techniques are crucial to find a meaningful interpretation of the changes in temperature. In this paper, we present a complete pipeline to automatically extract temperature from a selected Region of Interest (ROI). The system consists of three stages: the first checks whether the ROI is completely visible in the frame; the second uses an encoder–decoder convolutional neural network to segment the ROI, provided the condition in stage one is met; and the last extracts the maximum temperature and saves it in an external file. The segmentation model performed well, with a mean Pixel Class Accuracy of 92.3% and a mean Intersection over Union of 87.1%. The temperatures extracted by the model fully matched the manually observed temperatures. The system thus produced results reliable enough to be used without human intervention to determine the temperature of the selected ROI in pigs.


1. Introduction

Thermal imaging or infrared thermography is a contactless and non-invasive technique to remotely observe the temperature distribution patterns on the surface of observed objects [1,2]. Thermal imaging creates images using the infrared waves emitted by all objects in the observed space. It has been widely used in practical applications, including quality control [3], the natural sciences [4], the military [5], medicine [6,7,8], and veterinary and animal sciences [9]. In animal studies, it has shown great potential as a non-intrusive tool for remotely monitoring animals [2], or more specifically for monitoring their health [10]. Thermal cameras can capture a huge amount of data remotely without causing stress to the animals, which could otherwise alter their behavior.
The aim of this study was to build a system able to detect and segment body parts of pigs from thermal imaging footage and to monitor the changes in the skin temperature of these parts, to be used in future animal emotion research. The proposed system is composed of two integrated models. The first model decides whether the Region of Interest (ROI) is completely visible in the frame, and the second segments the ROI to extract the maximum temperature. To our knowledge, this is the first study to use semantic segmentation to extract an animal body part from thermal videos in which the animals were kept in relatively low-lighting conditions, similar to the lighting conditions on real farms. The base of the ears was chosen as the ROI in this study, as it has been proposed as an important area in emotion research [1,11], and it can be observed well from the top view, which is how the thermal camera was set up.
The main contributions of this study are as follows:
  • This study offers a complete pipeline able to automatically detect and track a Region of Interest (ROI) in the thermal videos of pigs moving freely. The pipeline can continuously extract temperature over a long period without any human intervention and save the observed temperature in external files for further analysis. To the best of the authors’ knowledge, this is the first complete pipeline for the automatic extraction of thermal temperature in animals.
  • The proposed system can be transferred to other animals or extended to extract the temperature of other body parts. In addition, thermal imaging is a suitable approach to identify animals, especially in low-light conditions, which is likely the case on most farms, whereas AI has the capability to process and extract valuable information from the recorded data.
  • The system can be used to observe the change in the temperature of farm animals over a long period of time, which is crucial for animal research.
This paper is structured as follows: First, previous research related to our study is discussed in Section 2. The architecture of the system is described in Section 3. The dataset, preprocessing, and augmentation are subsequently described in Section 4. Section 5 presents the implementation of the system, and Section 6, the results. Finally, the results are discussed, and some conclusions are presented in Section 7.

2. Related Work

Thermal imaging has been used in animal research to monitor animals' physical condition and behavior. For disease diagnosis, for example, Dunbar et al. [12] detected temperature changes in the feet of mule deer infected with foot and mouth disease, finding a considerable rise in temperature two days before the first symptoms appeared. Avni-Magen et al. [13], monitoring elephants with thermal cameras over three months, found that infected body parts such as the ears had a significantly higher temperature than other parts. In reproduction research, thermal imaging was used to detect pregnancy in the black rhinoceros [14] and to determine the ovulation time in Asian elephants and black rhinoceros [15]. In animal emotion research, Nakayama et al. [16] concluded that nasal temperature can serve as an accurate indicator of a shift from a neutral to a negative emotional state in nonhuman primates. These studies relied mostly on the manual extraction of thermal data, as investigators manually located the area of interest (AOI) and extracted the temperature frame by frame. The manual method is a tiring and time-consuming task, considering that some thermal cameras record 20 to 30 frames per second. Hence, extracting the temperature manually for a 10 min thermal video, for instance, would be quite infeasible. Some recent studies used semi-automatic methods; however, they still depend on manual labor and a relatively subjective judgment to define the AOI [17]. Lu et al. [11] developed a Support Vector Machine (SVM) algorithm based on the geometric shape and contour features of pigs to identify the ROI, i.e., the ear base. The model was applied to thermal images and compared with the manual measuring method. The comparison showed that for the left and right ear base, respectively, 97% and 98% of the testing images had an error within 0.4 °C. Although these results are promising, the algorithm worked only on images selected by experts in which the ROI was visible; it was not able to extract temperature automatically without supervision.
Advances in artificial intelligence and computer vision have opened the door to processing substantial amounts of data and automating time-consuming tasks. However, there is still a limited number of thermal imaging studies in which the potential of artificial intelligence has been explored [17,18], especially in animal sciences. Cho et al. [19] proposed a deep learning model to automatically recognize people's psychological stress levels from their breathing patterns using a thermal camera. Kakileti et al. [20] explored different semantic segmentation architectures and found that encoder–decoder-based architectures (UNet, VNet) work better for segmenting breast cancer in thermal images. Mazur-Milecka and Ruminski [21] used UNet and VNet models for the semantic segmentation of thermal images of laboratory rats from their backgrounds while the rats were in close contact during social behavior tests.
Image segmentation models have been widely used in medical image diagnosis, starting with fully convolutional networks (FCNs) [22] as a pioneering approach in the field. Other models have since been developed, such as PSPNet [23], DeepLab [24], and Mask-RCNN [25], to improve segmentation performance. UNet is the most popular network architecture and has been widely applied in medical research [26]. In our study, we propose a system consisting of two integrated models to extract the temperature of ROIs (i.e., the ear base) from thermal footage. The first model evaluates whether the base of both ears, left and right, is completely visible in the frame. If the ROI is visible, the second model segments the base of the ears to subsequently extract the maximum temperature.

3. Model Architecture

The proposed system for extracting temperature consists of three stages, as shown in Figure 1. The first stage is a classification model; frames enter the model one by one as input, and the model decides whether the ROI is completely visible in the frame. If the condition is met, the frame is passed to the next stage to extract the ROI. If the condition is not met, the frame is discarded, and the same process is repeated for the next frame. The second stage is the segmentation stage, in which the ROI is detected and extracted from the animal's body. In the final stage, the maximum temperature is extracted and saved automatically, together with the frame number, in an external file for further analysis.

3.1. Stage 1: Classification Model

This stage is crucial to ensure the quality of the extracted temperature. The ROI considered in this study is the ear base of both the left and right ears (Figure 2). As the animal moves freely in the pen during the test (see Section 4: dataset), these regions are sometimes only partially visible or not visible at all in the frame. Skipping this stage and going directly to stage 2 may introduce errors in the extracted temperature, as the segmentation model will still detect a partially visible ROI and extract the maximum temperature from the visible part only, while the true maximum temperature may lie in the hidden part.
A common approach to training a deep neural network is to use transfer learning rather than training the network from scratch. Yosinski et al. [27] showed that, even for applications far removed from the base task of the pre-trained network, transfer learning works better than random initialization. The transfer learning approach was applied in training this classification model, using different architectures such as ResNet50, VGG16, and Inception networks. As the main concern was to extract the temperature only when the ROI was completely visible in the frame, the architecture that yielded the lowest false-positive rate was chosen.
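As an illustration of this transfer-learning setup, the sketch below builds one candidate classifier with a pre-trained ResNet50 backbone and a small binary head. It is a minimal example under assumed settings (the function name `build_classifier`, the frozen backbone, and the dropout rate are illustrative choices), not the authors' exact implementation.

```python
import tensorflow as tf

def build_classifier(input_shape=(320, 320, 3)):
    # Illustrative sketch: ResNet50 pre-trained on ImageNet as a feature
    # extractor, topped with a binary head for "ROI fully visible" vs. not.
    base = tf.keras.applications.ResNet50(
        include_top=False, weights="imagenet", input_shape=input_shape)
    base.trainable = False  # transfer learning: keep pre-trained weights fixed
    x = tf.keras.layers.GlobalAveragePooling2D()(base.output)
    x = tf.keras.layers.Dropout(0.3)(x)
    out = tf.keras.layers.Dense(1, activation="sigmoid")(x)
    return tf.keras.Model(base.input, out)
```

The same pattern applies to the VGG16 and Inception candidates by swapping in a different backbone class.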

3.2. Stage 2: Segmentation Model

Semantic segmentation, or image segmentation, segments an input image by assigning each pixel to a certain label; it is a classification process operated at the pixel level [28]. The architecture of the UNet network is composed of two parts: an encoder (down-sampling path) and a decoder (up-sampling path). The encoder captures the semantic information of the image, whereas the decoder localizes this information. This architecture reduces the number of training parameters, enabling better performance. The model employed to segment the ROI is a UNet network with a ResNet101 backbone as the encoder (Figure 3). He et al. [25] proposed the ResNet architecture in 2015 as a solution to the degradation problem in very deep networks. Many subsequent networks followed with improved capabilities, such as ResNet50 and ResNet101. ResNet was inspired by the VGG-19 model [29], with a global average pooling layer replacing the fully connected layer of the VGG model and "shortcut connections" added. The ResNet101 backbone consists of a sequence of a 7 × 7 convolutional layer, a max pooling layer, and 33 residual blocks. Each residual block contains three convolutional layers with rectified linear unit (ReLU) activation and batch normalization. The up-sampling path starts from a 1024-channel 24 × 32 feature map, which is processed by a 2 × 2 transposed convolution with a stride of 2. The up-sampled feature map, which has 512 channels and a size of 48 × 64, is concatenated with the corresponding feature map from the down-sampling path, which has been processed through a 1 × 1 convolution to produce a 512-channel output. This process is repeated until the feature map is recovered at a size of 768 × 1024. The final layer is the output layer with a softmax activation function and three channels for the three labels: the left ear base, the right ear base, and the background (everything else in the frame).
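The paper does not state how the network was built in code; one way to instantiate a UNet with a ResNet101 encoder, shown purely as a sketch, is via the open-source `segmentation_models` Keras package (an assumed tool, not the authors' stated implementation):

```python
import segmentation_models as sm

# Sketch only: UNet decoder on a ResNet101 encoder, three output channels
# (left ear base, right ear base, background) with softmax activation.
model = sm.Unet(
    backbone_name="resnet101",
    input_shape=(768, 1024, 3),
    classes=3,
    activation="softmax",
    encoder_weights="imagenet",
)
```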

3.3. Stage 3: Temperature Extraction

In the final stage, the maximum temperatures [11,30] were extracted from the segmented ROIs. For each side, the thermal data of pixels outside the segmented area were discarded by assigning them zero values, and the maximum temperature was then taken only from the thermal pixels lying within the boundary of the extracted ROI. The maximum temperatures of both the left and right ear bases were then saved, together with the frame number, in an external file.
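A minimal NumPy sketch of this masking step is given below; the function name `max_temperature` and the exact layout of the thermal array are assumptions made for illustration.

```python
import numpy as np

def max_temperature(thermal_frame, mask):
    """Maximum temperature and its pixel coordinates inside a segmented region.

    thermal_frame: 2-D array of per-pixel temperatures (e.g., parsed from the
    exported csv file); mask: boolean array of the same shape, True inside
    the segmented ear-base region."""
    masked = np.where(mask, thermal_frame, 0.0)  # zero out pixels outside the ROI
    idx = np.unravel_index(np.argmax(masked), masked.shape)
    return float(masked[idx]), idx
```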

4. Dataset

The thermal footage used for this research was captured during a frustration challenge test, which was part of a larger experiment to study resilience in pigs (see [31] for more details). The experiment was conducted at Carus, the animal research facility of Wageningen University and Research, Wageningen, the Netherlands. A total of 373 female pigs (TN70 × Tempo) were tested in the frustration challenge. For this challenge, each pig was taken out of its home pen and moved to a small pen (1.2 × 0.6 m) in a test room for 10 min. Here, the isolated animal was able to see, smell, and hear other pigs exploring and playing freely in a “play arena”. The inability to join the playing pigs may have induced a feeling of frustration. The experiment was approved by the Animal Care and Use Committee of Wageningen University and Research (DEC code: AVD1040020186245), and the established principles of laboratory animal care and use were followed, as well as the Dutch law on animal experiments.
A FLIR T1020 thermal imaging camera was mounted on a tripod at a distance of about 1 m between the pig’s head and the camera. The emissivity was set at 0.98. The ambient temperature and humidity of the test room were also set and checked regularly, and their settings were adapted when necessary. The camera had a resolution of 768 × 1024 pixels, a 40 mm focal-length lens, a thermal sensitivity of 0.02 °C at 25 °C, and a temperature measurement range of −40 to 2000 °C. The thermal camera filmed the isolated pig in order to monitor whether there was a change in temperature related to the induced negative emotional state.
A total of 373 thermal imaging videos were recorded during the test. To develop our model, 10 thermal videos were chosen to be processed with the FLIR ResearchIR Max 4.40 software in order to build the datasets. The 10 videos were converted into both a jpg format and a csv format; the jpg images were used to train the model, and the csv files were used to extract the thermal information.
For the classification model, a total of 12,784 images were selected from the 10 processed videos and assigned to one of two classes: the ROI is visible (5388 images), and the ROI is not or only partially visible (7396 images). Figure 4 shows examples of the two classes. The dataset was divided as follows: 9584 images for training, 1600 images for validation, and 1600 images for testing.
For the segmentation model, a new set of 577 images was selected to build the model dataset. The selected images were chosen to cover all possible positions of the ROI so that they could be recognized by the model. The images were segmented manually using the APEER online platform. APEER is a cloud-based platform designed for image analysis tasks, offering various tools for annotating and segmenting images. The base of the pig’s ear lacked a well-defined boundary, as it varied between frames due to the pig’s movements. However, the area of interest consistently exhibited a higher temperature compared to other parts of the pig’s body, making it distinguishable in thermal imaging. Thermal images, like any digital images, consist of pixels, where each pixel represents a specific temperature value. The FLIR ResearchIR software assigns unique colors to these values based on the selected color palette. In this study, the IRONBOW color palette was used, which assigns lighter colors to higher-temperature pixels. More information about thermal color representation palettes can be found in [32].
The 577 selected images were saved using this palette in RGB format. The lightest shades near the base of the ears, corresponding to the highest temperatures in this region, were identified as the boundary of the ROI. This selection was made by visually inspecting each image to locate these light regions. Switching off the red and green channels and keeping only the blue channel helped in the visual assessment. To ensure consistency in defining the ROI boundaries, a single operator performed the annotation process using the brush tool provided in APEER. This protocol helped minimize variability caused by subjective judgment. The IRONBOW-colored images were used only to aid the annotation process; the training of the segmentation model and all predictions were performed on grayscale images. Each annotated image has three labels: the left ear base, the right ear base, and the background. Due to the high cost of the annotation process in terms of time and effort, annotated data are often insufficient for training segmentation models. Data augmentation helps to expand the annotated dataset, improve model robustness, and reduce the possibility of overfitting during training [33,34]. To this end, the annotated images were flipped horizontally and vertically and saved as new files, giving a total of 1731 images. Other augmentations were performed on the fly during training, such as random rotation, shifting, and changes in contrast and brightness, as sketched below. The dataset was split into 1431 images for training, 200 images for validation, and 100 images for testing.
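The on-the-fly augmentations could be implemented in several ways; the sketch below uses TensorFlow image ops and is only an assumed example (random rotation and shifting, also used during training, are omitted here for brevity). Geometric flips are applied identically to the image and its mask so they stay aligned, while brightness/contrast jitter affects the image only.

```python
import tensorflow as tf

def augment(image, mask):
    # Shared geometric flips keep image and mask aligned.
    if tf.random.uniform([]) > 0.5:
        image = tf.image.flip_left_right(image)
        mask = tf.image.flip_left_right(mask)
    if tf.random.uniform([]) > 0.5:
        image = tf.image.flip_up_down(image)
        mask = tf.image.flip_up_down(mask)
    # Photometric jitter is applied to the image only.
    image = tf.image.random_brightness(image, max_delta=0.1)
    image = tf.image.random_contrast(image, lower=0.9, upper=1.1)
    return image, mask
```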

5. Model Implementation

For training and testing the proposed model, an HP workstation with 2× Intel Xeon E5-2678V3 CPUs, an NVIDIA Quadro RTX 5000 graphics card, and 128 GB of RAM (sourced from a computer hardware seller in the Netherlands) was used, along with a TensorFlow 2.10 and Python 3.9.0 environment. In this section, the implementation of each stage is explained.

5.1. Classification Model

The images were resized to 320 × 320 and used as input to train the model. Different network architectures were trained, and the network with the lowest false-positive rate was chosen. A summary of the hyperparameters of the model is shown in Table 1. The model was trained with the stochastic gradient descent (SGD) optimizer (momentum 0.9, learning rate 1 × 10−3), a batch size of 32, and 300 epochs, with a ReduceLROnPlateau callback (factor = 0.8, patience = 6) monitoring the validation loss. The binary cross-entropy loss function was used to calculate the loss of the model. The loss function is expressed as follows:
$$\mathrm{Loss} = -\frac{1}{N}\sum_{i=1}^{N}\left[\, y_i \log \hat{y}_i + (1 - y_i)\log(1 - \hat{y}_i)\,\right]$$
where N is the number of images, y_i is the true label of image i, and ŷ_i is its predicted label.
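In Keras terms, the training configuration above corresponds roughly to the following sketch, where `model` is a classifier such as the one sketched in Section 3.1 and `x_train`, `y_train`, `x_val`, and `y_val` are assumed NumPy arrays of images and binary labels (all names are illustrative):

```python
import tensorflow as tf

# model = build_classifier()  # e.g., the transfer-learning sketch in Section 3.1
model.compile(
    optimizer=tf.keras.optimizers.SGD(learning_rate=1e-3, momentum=0.9),
    loss="binary_crossentropy",
    metrics=["accuracy"],
)
reduce_lr = tf.keras.callbacks.ReduceLROnPlateau(
    monitor="val_loss", factor=0.8, patience=6)
model.fit(x_train, y_train,
          validation_data=(x_val, y_val),
          epochs=300, batch_size=32, callbacks=[reduce_lr])
```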
The metrics used to evaluate the performance of the model are the accuracy and the false-positive rate (FPR). The accuracy indicates the overall performance of the model; it is the percentage of correct classifications, calculated as follows:
$$\mathrm{Accuracy} = \frac{TP + TN}{TP + TN + FP + FN}$$
where TP (true positives) is the number of images correctly predicted as positive; TN (true negatives) is the number of images correctly predicted as negative; FP (false positives) is the number of images falsely predicted as positive; and FN (false negatives) is the number of images falsely predicted as negative. For this study, positive means that the ROI is fully and clearly visible, and negative means that the ROI is not or only partly visible.
The false-positive rate is the ratio of negative images misclassified as positive to the total number of actual negative images. It is calculated as follows:
$$FPR = \frac{FP}{TN + FP}$$
where FP and TN are defined as above.
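Both metrics follow directly from the confusion-matrix counts; the small helper below, given purely as an illustration, makes the two definitions concrete.

```python
def classification_metrics(tp, tn, fp, fn):
    """Accuracy and false-positive rate from confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    fpr = fp / (tn + fp)
    return accuracy, fpr

# Hypothetical counts (not from the paper): 790 TP, 794 TN, 4 FP, 12 FN
# would give an accuracy of 0.99 and an FPR of about 0.005.
```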

5.2. Segmentation Model

The input images were kept at their original size (1024 × 768), as the segmented pixels were used to retrieve the thermal information from the original thermal file, and any resizing would cause information loss. The network was trained using the stochastic gradient descent (SGD) optimizer with a batch size of 2. The initial learning rate was 1 × 10−4, and the learning rate was reduced if the network performance did not improve for 10 epochs. A summary of the hyperparameters of the model is shown in Table 2. The Jaccard loss was used as the loss function in training. The Jaccard index, also known as the Jaccard similarity coefficient, was introduced by Jaccard [35] and is one of the most frequently used loss measures in segmentation models [36]. It measures the similarity between two sets; in segmentation models, the loss function evaluates the dissimilarity between the ground truth and the predicted segmentation. The Jaccard loss is calculated as follows:
$$\mathrm{Jaccard\ loss} = 1 - IoU = 1 - \frac{\text{Area of Overlap}}{\text{Area of Union}}$$
where IoU is the Intersection over Union, i.e., the ratio of the intersection to the union of the predicted pixels ŷ_i and the ground-truth pixels y_i.
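A soft (differentiable) version of this loss for one-hot masks could look like the sketch below; the smoothing term is an assumed implementation detail added to avoid division by zero, not something specified in the paper.

```python
import tensorflow as tf

def jaccard_loss(y_true, y_pred, smooth=1e-6):
    """1 - IoU, computed per class on one-hot masks of shape (batch, H, W, classes)."""
    intersection = tf.reduce_sum(y_true * y_pred, axis=[1, 2])
    union = tf.reduce_sum(y_true + y_pred, axis=[1, 2]) - intersection
    iou = (intersection + smooth) / (union + smooth)
    return 1.0 - tf.reduce_mean(iou)
```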
The performance of the model was evaluated using Pixel Class Accuracy and mean Intersection over Union (mIoU). Pixel Class Accuracy (PCA) is the proportion of correctly predicted pixels per class, averaged over the classes, and is given by the following:
$$PCA = \frac{1}{N}\sum_{i=1}^{N}\frac{P_{c_i}}{P_{t_i}} = \frac{1}{N}\sum_{i=1}^{N}\frac{TP_i + TN_i}{TP_i + TN_i + FP_i + FN_i}$$
where P_c,i is the number of correctly predicted pixels in class i, P_t,i is the total number of pixels in that class, and N is the total number of classes. TP_i (true positives) denotes the number of pixels correctly predicted as class i, TN_i (true negatives) the number of pixels correctly predicted as not class i, FP_i (false positives) the number of pixels falsely predicted as class i, and FN_i (false negatives) the number of pixels falsely predicted as not class i.
Although Pixel Class Accuracy can give an indication of model performance, it is not sufficient for segmentation models because of the class imbalance problem: there is a dominant class (the background), and the classes that need to be predicted cover only a small portion of the image. Hence, a model could show high accuracy by predicting all pixels as the dominant class while in reality performing poorly on the other classes. Therefore, the mean Intersection over Union (mIoU) is crucial to assess the performance of segmentation models. The mIoU is calculated as follows:
$$mIoU = \frac{1}{N}\sum_{i=1}^{N}\frac{\text{Area of Overlap}_i}{\text{Area of Union}_i} = \frac{1}{N}\sum_{i=1}^{N}\frac{TP_i}{TP_i + FP_i + FN_i}$$
where Area of Overlap_i is the number of overlapping pixels between the prediction and the ground truth (TP_i) in class i, and Area of Union_i is the sum of the predicted and ground-truth pixels in the same class, comprising TP_i, FP_i, and FN_i [37]. N is the number of classes.
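Both evaluation metrics can be computed from integer label maps as in the following sketch; the function name and label encoding are illustrative assumptions, not the authors' code.

```python
import numpy as np

def pca_and_miou(y_true, y_pred, num_classes=3):
    """Mean per-class pixel accuracy (PCA) and mean IoU over integer label maps."""
    accs, ious = [], []
    for c in range(num_classes):
        t, p = (y_true == c), (y_pred == c)
        tp = np.logical_and(t, p).sum()
        tn = np.logical_and(~t, ~p).sum()
        fp = np.logical_and(~t, p).sum()
        fn = np.logical_and(t, ~p).sum()
        accs.append((tp + tn) / (tp + tn + fp + fn))
        ious.append(tp / max(tp + fp + fn, 1))  # guard against an absent class
    return float(np.mean(accs)), float(np.mean(ious))
```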

5.3. Temperature Extraction

The final stage of the model was to process the entire thermal video and extract the maximum temperature. This stage worked as the engine of the pipeline; it used the previous two models to extract and save the temperature. For each frame in the sequence of a thermal video, the thermal frame was converted to a jpg image and resized to 320 × 320. The classification model examined the visibility of the ROI. If the condition was met, the segmentation model segmented the ROI on both sides and forwarded it to the final stage to extract the temperature. The maximum temperatures of both sides, with their coordinates, were saved as records in a csv file along with the frame number. If the ROI was not completely visible in the frame, the frame was discarded, and its record was saved as a missing record. The next frame then went through the same process. At the end of the thermal video, the file was saved externally on a hard drive for further analysis. Each thermal video had a duration of over 10 min, and after processing by the model, more than 19,000 records were saved per video. Table 3 shows an example of the saved temperatures.
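Putting the three stages together, the per-frame loop could be sketched as follows. Here `classify` and `segment` are hypothetical wrappers around the stage-1 and stage-2 models, `max_temperature` is the helper sketched in Section 3.3, and the csv layout mirrors Table 3; none of these names are taken from the authors' code.

```python
import csv

def process_video(frames, thermal_frames, out_path):
    """Write one record per frame: temperatures and positions, or a missing record."""
    with open(out_path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["frame", "left_temp", "right_temp", "l_pos", "r_pos"])
        for i, (image, thermal) in enumerate(zip(frames, thermal_frames)):
            if not classify(image):                   # stage 1: is the ROI fully visible?
                writer.writerow([i, "", "", "", ""])  # discarded frame -> missing record
                continue
            left_mask, right_mask = segment(image)    # stage 2: segment both ear bases
            lt, lpos = max_temperature(thermal, left_mask)   # stage 3: extract temperatures
            rt, rpos = max_temperature(thermal, right_mask)
            writer.writerow([i, lt, rt, lpos, rpos])
```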
The temperature was also measured manually, and these manual records were compared to the measurements observed by the model (see Section 6 for more information).

6. Results

The performance of the different classification model architectures was examined on the test dataset; a comparison is shown in Table 4. Most architectures achieved a high overall accuracy; however, the Inception and ResNet50 networks achieved the lowest false-positive rates (FPRs), which was our main concern in this study. Model ensembling is a method that combines several individual models to achieve better generalization performance [38]. The Inception and ResNet50 models were therefore ensembled with equal weights, which achieved a slightly better performance, with an overall accuracy of 99% and an FPR of 0.5%.
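Equal-weight ensembling here amounts to averaging the predicted probabilities of the two classifiers before thresholding, as in this sketch (the model variables are assumed Keras models, and the names are illustrative):

```python
import numpy as np

def ensemble_predict(models, images, threshold=0.5):
    """Average the probabilities of several classifiers and threshold the result."""
    probs = np.mean([m.predict(images) for m in models], axis=0)
    return probs > threshold  # True = ROI fully visible

# e.g., ensemble_predict([inception_model, resnet50_model], test_images)
```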
For the segmentation model, the overall PCA was 92.3%, and for the left-side and right-side classes, the PCA was 88.7% and 88.2%, respectively. The overall mIoU was 87.1%. The IoU for the left side was 80.9%, and for the right side, it was 80.5%. Figure 5 shows four examples of the segmentation model output.
In addition, the model was tested by comparing its temperature output with manual temperature measurements. A set of 200 images was measured manually using FLIR ResearchIR, version 4.40. The temperature statistics of the left and right ear base areas were obtained by drawing ellipses around each ear base position, as shown in Figure 6. FLIR ResearchIR subsequently provided temperature statistics of the drawn Region of Interest, including the mean, minimum, and maximum temperatures.
For all 200 records, the manually measured temperatures and the model output temperatures were in full (100%) agreement. Figure 7 presents a comparison of selected records, showing the manual temperature observations alongside the model’s output temperatures.

7. Discussion and Conclusions

Thermal imaging as a non-invasive technique to monitor the health and welfare of animals has become increasingly popular. However, it is a labor-intensive technique; automation is therefore key to achieving a functioning thermal system. In this study, we described the development of an automatic system to process thermal videos and extract temperature automatically for further analysis. The model is designed to extract the external body temperature of a defined body part of pigs. The defined area was the ear base of both sides, left and right.
The emotions of animals are regarded as consisting of two dimensions, i.e., valence (whether an emotion is positive or negative) and arousal (the intensity of the emotion) [39,40,41]. Skin temperature can likely be used as an indicator of valence and arousal in both humans and animals [42,43,44]. When aroused by a stimulus, the activation of the sympathetic branch of the autonomic nervous system causes peripheral blood vessels to constrict in order to direct blood, and thereby energy and oxygen, to the core of the body where it is needed [45,46]. This leads to an initial drop, and subsequently, due to vasodilation, to a gradual increase in temperature in the periphery of the body, such as (parts of) the face [45,46]. Neuroimaging research, moreover, suggests that the two hemispheres of the brain might play different roles in processing positive vs. negative emotions, although their specific contributions remain debated [47,48,49,50]. Asymmetry in, for instance, ear skin temperature in response to an emotional stimulus may thus reflect lateralized brain activity and hence be a marker of valence [44]. In this context, the developed system could be used in future research to process thermal video footage of pigs in the frustration challenge test and extract the maximum temperature of the base of the ears every 10 s, in order to examine the hypothesis of asymmetry in ear temperature as a response to a negative stimulus.
The proposed system consists of three stages. The first uses a classification model to check the visibility of the ROI; its accuracy was 99%, with a false-positive rate of 0.5%. The second stage tracks and segments the ROI using a UNet network with a ResNet101 backbone as the encoder. The overall PCA of this segmentation model was 92.3%; for the left-side and right-side classes, the PCA was 88.7% and 88.2%, respectively. The overall mIoU was 87.1%, with an IoU of 80.9% for the left side and 80.5% for the right side. We consider the performance of the model good, especially for an ROI that does not have clearly defined boundaries. Future work could explore improving performance by evaluating alternative segmentation architectures, such as UNet with different backbone networks, or models like fully convolutional networks (FCNs) [22], SegNet [51], and DeepLabV3+ [52], as well as testing different protocols for defining the area of interest. Additionally, experimenting with alternative annotation strategies or incorporating more training data could further enhance model performance. The final stage extracts the maximum temperature and saves it in an external file. For this study, the ROIs were the left and right ear bases of the pigs.
The performance of the model was also examined by comparing the temperatures in a set of 200 thermal images measured both manually and by the model. The comparison showed full agreement between the manually observed records and the model’s records, showing that the model can reliably replace the manual method for obtaining the temperature from future thermal recordings.
The model was well able to track and locate the ROIs in videos recorded in relatively low-lighting conditions, which are similar to the conditions on farms. Thermal cameras indeed have the ability to record high-quality thermal footage in low-light conditions [53,54], which shows their potential for use in precision livestock farming.
Unlike in human research, the inability of animals to pose for the camera adds a challenge to any automated process applied to animals. The model developed in this study tackled this problem by first examining the visibility of the ROIs and observing the temperature only when the ROI was completely visible in the thermal frame. This also means that this automatic process can be used to monitor the temperature of the selected ROI of an animal over a long period of time without any human intervention.
Although the pig was able to move freely in this study, it was confined alone in a small pen as a negative stimulus. The model’s performance, therefore, still needs to be examined in conditions where pigs have more freedom to move and have close interactions with each other, which is the normal situation on a farm. Some modifications may be needed to accommodate the new filming situation, which can be investigated in future research.
The chosen area of interest in this study does not have a well-defined boundary, such as those found in more distinct features like the eyes or nose. It is subjectively defined, even for manual observation. To ensure consistency in defining the boundaries of the ROI, the annotation process was carried out by a single operator, following a detailed protocol described in the methodology. However, using multiple operators may introduce variability in the boundary definition, potentially impacting the accuracy of the segmentation model. Future research could evaluate the extent of this variability by involving multiple operators and examining its effect on the model’s performance.
The methodology presented here can, moreover, be applied in different fields in animal husbandry, such as health, by monitoring the temperature of the eyes, feet, and ears of cattle and pigs [55,56]. Furthermore, observing changes in the body surface temperatures of specific areas, such as the armpits, buttocks, chest, and groin, in animals requiring high physical performance (e.g., horses) can provide valuable insights into body temperature regulation during training. This information can be useful for assessing horse fitness [57,58]. A last example is that changes in vulva temperature can be used to identify estrus and monitor the transition toward ovulation [59,60,61]. Other examples can be found in [62,63].
In conclusion, the model developed in this study proved reliable for determining the temperature of the selected ROI in pigs without human intervention. With further research, this model can likely also be used, for instance, to monitor temperature changes in interacting pigs and in other research areas in animal science that make use of thermal imaging.

Author Contributions

R.B.: Conceptualization, Methodology, Software, Data Curation, Writing—Original Draft Preparation. I.R.: Supervision, Project Administration, Funding Acquisition. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the Wageningen University and Research Animal Sciences program Next Level Animal Sciences (NLAS)—Data and models. NLAS had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

Institutional Review Board Statement

The experiment was approved by the Animal Care and Use Committee of Wageningen University and Research (DEC code: AVD1040020186245), and the established principles of laboratory animal care and use were followed, as well as the Dutch law on animal experiments.

Informed Consent Statement

Not applicable.

Data Availability Statement

Dataset available on request from the authors.

Conflicts of Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

References

  1. Boileau, A.; Farish, M.; Turner, S.P.; Camerlink, I. Infrared Thermography of Agonistic Behaviour in Pigs. Physiol. Behav. 2019, 210, 112637. [Google Scholar] [CrossRef] [PubMed]
  2. Cilulko, J.; Janiszewski, P.; Bogdaszewski, M.; Szczygielska, E. Infrared Thermal Imaging in Studies of Wild Animals. Eur. J. Wildl Res. 2013, 59, 17–23. [Google Scholar] [CrossRef]
  3. Cruz, S.; Paulino, A.; Duraes, J.; Mendes, M. Real-Time Quality Control of Heat Sealed Bottles Using Thermal Images and Artificial Neural Network. J. Imaging 2021, 7, 24. [Google Scholar] [CrossRef] [PubMed]
  4. Okada, T.; Fukuhara, T.; Tanaka, S.; Taguchi, M.; Arai, T.; Senshu, H.; Sakatani, N.; Shimaki, Y.; Demura, H.; Ogawa, Y.; et al. Highly Porous Nature of a Primitive Asteroid Revealed by Thermal Imaging. Nature 2020, 579, 518–522. [Google Scholar] [CrossRef] [PubMed]
  5. Akula, A.; Ghosh, R.; Sardana, H.K.; Predeep, P.; Thakur, M.; Varma, M.K.R. Thermal Imaging And Its Application In Defence Systems. In Proceedings of the OPTICS 2011: International Conference on Light, Kerala, India, 23–25 May 2011; pp. 333–335. [Google Scholar]
  6. Glehr, M.; Stibor, A.; Sadoghi, P.; Schuster, C.; Quehenberger, F.; Gruber, G.; Leithner, A.; Windhager, R. Thermal Imaging as a Noninvasive Diagnostic Tool for Anterior Knee Pain Following Implantation of Artificial Knee Joints. Int. J. Thermodyn. 2011, 14, 71–78. [Google Scholar] [CrossRef]
  7. Mayr, H. Thermographic Evaluation After Knee Surgery the Thermal Image in Medicine and Biology; Ammer, K., Ring, E.F., Eds.; Uhlen: Wien, Austria, 1995. [Google Scholar]
  8. Romanò, C.L.; Romanò, D.; Dell’Oro, F.; Logoluso, N.; Drago, L. Healing of Surgical Site after Total Hip and Knee Replacements Show Similar Telethermographic Patterns. J. Orthopaed. Traumatol. 2011, 12, 81–86. [Google Scholar] [CrossRef]
  9. Lavers, C.; Franks, K.; Floyd, M.; Plowman, A. Application of Remote Thermal Imaging and Night Vision Technology to Improve Endangered Wildlife Resource Management with Minimal Animal Distress and Hazard to Humans. J. Phys. Conf. Ser. 2005, 15, 207–212. [Google Scholar] [CrossRef]
  10. Soerensen, D.D.; Pedersen, L.J. Infrared Skin Temperature Measurements for Monitoring Health in Pigs: A Review. Acta Vet. Scand. 2015, 57, 5. [Google Scholar] [CrossRef] [PubMed]
  11. Lu, M.; He, J.; Chen, C.; Okinda, C.; Shen, M.; Liu, L.; Yao, W.; Norton, T.; Berckmans, D. An Automatic Ear Base Temperature Extraction Method for Top View Piglet Thermal Image. Comput. Electron. Agric. 2018, 155, 339–347. [Google Scholar] [CrossRef]
  12. Dunbar, M.R.; Johnson, S.R.; Rhyan, J.C.; McCollum, M. Use of Infrared Thermography to Detect Thermographic Changes in Mule Deer (Odocoileus Hemionus) Experimentally Infected with Foot-and-Mouth Disease. J. Zoo Wildl. Med. 2009, 40, 296–301. [Google Scholar] [CrossRef]
  13. Avni-Magen, N.; Zaken, S.; Kaufman, E.; Kelmer, G. Use of Infrared Thermography in Early Diagnosis of Pathologies in Asian Elephants (Elephas maximus). Isr. J. Vet. Med. 2017, 72, 22–27. [Google Scholar]
  14. Hilsberg, S. Infrared-Thermography in Zoo Animals: New Experiences with This Method, Its Use in Pregnancy and Inflammation Diagnosis and Survey of Environmental Influences and Thermoregulation in Zoo Animals. In Proceedings of the Second Scientific Meeting, Chester, UK, 21–24 May 1998. [Google Scholar]
  15. Hilsberg-Merz, S. Infrared Thermography in Zoo and Wild Animals. In Zoo and Wild Animal Medicine; Elsevier: Amsterdam, The Netherlands, 2008; pp. 20–33. ISBN 978-1-4160-4047-7. [Google Scholar]
  16. Nakayama, K.; Goto, S.; Kuraoka, K.; Nakamura, K. Decrease in Nasal Temperature of Rhesus Monkeys (Macaca mulatta) in Negative Emotional State. Physiol. Behav. 2005, 84, 783–790. [Google Scholar] [CrossRef] [PubMed]
  17. Sonkusare, S.; Ahmedt-Aristizabal, D.; Aburn, M.J.; Nguyen, V.T.; Pang, T.; Frydman, S.; Denman, S.; Fookes, C.; Breakspear, M.; Guo, C.C. Detecting Changes in Facial Temperature Induced by a Sudden Auditory Stimulus Based on Deep Learning-Assisted Face Tracking. Sci. Rep. 2019, 9, 4729. [Google Scholar] [CrossRef]
  18. Neethirajan, S. Affective State Recognition in Livestock—Artificial Intelligence Approaches. Animals 2022, 12, 759. [Google Scholar] [CrossRef]
  19. Cho, Y.; Bianchi-Berthouze, N.; Julier, S.J. DeepBreath: Deep Learning of Breathing Patterns for Automatic Stress Recognition Using Low-Cost Thermal Imaging in Unconstrained Settings. In Proceedings of the 2017 Seventh International Conference on Affective Computing and Intelligent Interaction (ACII), San Antonio, TX, USA, 23–26 October 2017; pp. 456–463. [Google Scholar]
  20. Kakileti, S.T.; Dalmia, A.; Manjunath, G. Exploring Deep Learning Networks for Tumour Segmentation in Infrared Images. Quant. InfraRed Thermogr. J. 2020, 17, 153–168. [Google Scholar] [CrossRef]
  21. Mazur-Milecka, M.; Ruminski, J. Deep Learning Based Thermal Image Segmentation for Laboratory Animals Tracking. Quant. InfraRed Thermogr. J. 2021, 18, 159–176. [Google Scholar] [CrossRef]
  22. Long, J.; Shelhamer, E.; Darrell, T. Fully Convolutional Networks for Semantic Segmentation. In Proceedings of the 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Boston, MA, USA, 7–12 June 2015; pp. 3431–3440. [Google Scholar]
  23. Zhao, H.; Shi, J.; Qi, X.; Wang, X.; Jia, J. Pyramid Scene Parsing Network. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; pp. 6230–6239. [Google Scholar]
  24. Chen, L.-C.; Papandreou, G.; Kokkinos, I.; Murphy, K.; Yuille, A.L. Semantic Image Segmentation with Deep Convolutional Nets and Fully Connected CRFs. arXiv 2014, arXiv:1412.7062. [Google Scholar]
  25. He, K.; Gkioxari, G.; Dollar, P.; Girshick, R. Mask R-CNN. IEEE Trans. Pattern Anal. Mach. Intell. 2020, 42, 386–397. [Google Scholar] [CrossRef] [PubMed]
  26. Ronneberger, O.; Fischer, P.; Brox, T. U-Net: Convolutional Networks for Biomedical Image Segmentation. In Medical Image Computing and Computer-Assisted Intervention–MICCAI 2015, Proceedings of the 18th International Conference, Munich, Germany, 5–9 October 2015; Lecture Notes in Computer Science; Navab, N., Hornegger, J., Wells, W.M., Frangi, A.F., Eds.; Springer International Publishing: Cham, Switzerland, 2015; Volume 9351, pp. 234–241. ISBN 978-3-319-24573-7. [Google Scholar]
  27. Yosinski, J.; Clune, J.; Bengio, Y.; Lipson, H. How Transferable Are Features in Deep Neural Networks? arXiv 2014, arXiv:1411.1792. [Google Scholar] [CrossRef]
  28. Guo, Y.; Liu, Y.; Georgiou, T.; Lew, M.S. A Review of Semantic Segmentation Using Deep Neural Networks. Int. J. Multimed. Inf. Retr. 2018, 7, 87–93. [Google Scholar] [CrossRef]
  29. Simonyan, K.; Zisserman, A. Very Deep Convolutional Networks for Large-Scale Image Recognition. arXiv 2014, arXiv:1409.1556. [Google Scholar]
  30. Amorim Franchi, G.; Moscovice, L.R.; Telkänranta, H.; Pedersen, L.J. Variations in Salivary Oxytocin, Eye Caruncle Temperature and Behavior Indicate Anticipation and Valuing of Environmental Enrichment Material in Fattening Pigs. 2023. Available online: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4503958 (accessed on 19 January 2025).
  31. Luo, L.; Zande, L.E.V.D.; Marwijk, M.A.V.; Knol, E.F.; Rodenburg, T.B.; Bolhuis, J.E.; Parois, S.P. Impact of Enrichment and Repeated Mixing on Resilience in Pigs. Front. Vet. Sci. 2022, 9, 829060. [Google Scholar] [CrossRef]
  32. TELEDYNE FLIR. Print Your Perfect Palette. Available online: https://www.flir.eu/discover/ots/outdoor/your-perfect-palette/?srsltid=AfmBOoqnmKs15fGiPHUfZb1_tDWij6o6RI2uD5w3F_xikHMF1Uye9I44 (accessed on 17 January 2025).
  33. Taylor, L.; Nitschke, G. Improving Deep Learning with Generic Data Augmentation. In Proceedings of the 2018 IEEE Symposium Series on Computational Intelligence (SSCI), Bangalore, India, 18–21 November 2018; pp. 1542–1547. [Google Scholar]
  34. Wong, S.C.; Gatt, A.; Stamatescu, V.; McDonnell, M.D. Understanding Data Augmentation for Classification: When to Warp? In Proceedings of the 2016 International Conference on Digital Image Computing: Techniques and Applications (DICTA), Gold Coast, QLD, Australia, 30 November–2 December 2016; pp. 1–6. [Google Scholar]
  35. Jaccard, P. Distribution de La Flore Alpine Dans Le Bassin Des Dranses et Dans Quelques Regions Voisines. Bull. Soc. Vaudoise Sci. Nat. 1901, 37, 241–272. [Google Scholar]
  36. Yuan, Y.; Chao, M.; Lo, Y.-C. Automatic Skin Lesion Segmentation Using Deep Fully Convolutional Networks With Jaccard Distance. IEEE Trans. Med. Imaging 2017, 36, 1876–1886. [Google Scholar] [CrossRef] [PubMed]
  37. Taha, A.A.; Hanbury, A. Metrics for Evaluating 3D Medical Image Segmentation: Analysis, Selection, and Tool. BMC Med. Imaging 2015, 15, 29. [Google Scholar] [CrossRef] [PubMed]
  38. Ganaie, M.A.; Hu, M.; Malik, A.K.; Tanveer, M.; Suganthan, P.N. Ensemble Deep Learning: A Review. Eng. Appl. Artif. Intell. 2022, 115, 105151. [Google Scholar] [CrossRef]
  39. Russell, J.A. Core Affect and the Psychological Construction of Emotion. Psychol. Rev. 2003, 110, 145. [Google Scholar] [CrossRef]
  40. Mendl, M.; Burman, O.H.P.; Paul, E.S. An Integrative and Functional Framework for the Study of Animal Emotion and Mood. Proc. R. Soc. B. 2010, 277, 2895–2904. [Google Scholar] [CrossRef]
  41. Mendl, M.; Paul, E.S. Animal Affect and Decision-Making. Neurosci. Biobehav. Rev. 2020, 112, 144–163. [Google Scholar] [CrossRef]
  42. Genno, H.; Ishikawa, K.; Kanbara, O.; Kikumoto, M.; Fujiwara, Y.; Suzuki, R.; Osumi, M. Using Facial Skin Temperature to Objectively Evaluate Sensations. Int. J. Ind. Ergon. 1997, 19, 161–171. [Google Scholar] [CrossRef]
  43. Kosonogov, V.; De Zorzi, L.; Honoré, J.; Martínez-Velázquez, E.S.; Nandrino, J.-L.; Martinez-Selva, J.M.; Sequeira, H. Facial Thermal Variations: A New Marker of Emotional Arousal. PLoS ONE 2017, 12, e0183592. [Google Scholar] [CrossRef]
  44. Ramirez Montes De Oca, M.A.; Mendl, M.; Whay, H.R.; Held, S.D.; Lambton, S.L.; Telkänranta, H. An Exploration of Surface Temperature Asymmetries as Potential Markers of Affective States in Calves Experiencing or Observing Disbudding. Anim. Welf. 2024, 33, e45. [Google Scholar] [CrossRef] [PubMed]
  45. Telkanranta, H.; Paul, E.; Mendl, M. Measuring Animal Emotions with Infrared Thermography: How to Realise the Potential and Avoid the Pitfalls. In Proceedings of the Recent Advances in Animal Welfare Science VI, Newcastle, UK, 28 June 2018. [Google Scholar]
  46. Kremer, L.; Klein Holkenborg, S.E.J.; Reimert, I.; Bolhuis, J.E.; Webb, L.E. The Nuts and Bolts of Animal Emotion. Neurosci. Biobehav. Rev. 2020, 113, 273–286. [Google Scholar] [CrossRef] [PubMed]
  47. Leliveld, L.M.C.; Langbein, J.; Puppe, B. The Emergence of Emotional Lateralization: Evidence in Non-Human Vertebrates and Implications for Farm Animals. Appl. Anim. Behav. Sci. 2013, 145, 1–14. [Google Scholar] [CrossRef]
  48. Gainotti, G. Emotions and the Right Hemisphere: Can New Data Clarify Old Models? Neuroscientist 2019, 25, 258–270. [Google Scholar] [CrossRef] [PubMed]
  49. Harmon-Jones, E.; Gable, P.A.; Peterson, C.K. The Role of Asymmetric Frontal Cortical Activity in Emotion-Related Phenomena: A Review and Update. Biol. Psychol. 2010, 84, 451–462. [Google Scholar] [CrossRef]
  50. Goursot, C. Laterality in Pigs and Its Links with Personality, Emotions and Animal Welfare. Ph.D. Thesis, Universität Rostock, Rostock, Germany, 2020. [Google Scholar]
  51. Badrinarayanan, V.; Kendall, A.; Cipolla, R. SegNet: A Deep Convolutional Encoder-Decoder Architecture for Image Segmentation. IEEE Trans. Pattern Anal. Mach. Intell. 2016, 39, 2481–2495. [Google Scholar] [CrossRef] [PubMed]
  52. Chen, L.-C.; Zhu, Y.; Papandreou, G.; Schroff, F.; Adam, H. Encoder-Decoder with Atrous Separable Convolution for Semantic Image Segmentation. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018; pp. 801–818. [Google Scholar]
  53. Manullang, M.C.T.; Lin, Y.-H.; Lai, S.-J.; Chou, N.-K. Implementation of Thermal Camera for Non-Contact Physiological Measurement: A Systematic Review. Sensors 2021, 21, 7777. [Google Scholar] [CrossRef]
  54. Jiang, A.; Noguchi, R.; Ahamed, T. Tree Trunk Recognition in Orchard Autonomous Operations under Different Light Conditions Using a Thermal Camera and Faster R-CNN. Sensors 2022, 22, 2065. [Google Scholar] [CrossRef]
  55. Bashiruddin, J.B.; Mann, J.; Finch, R.; Zhang, Z.; Paton, D. Preliminary Study of the Use of Thermal Imaging to Assess Surface Temperatures during Foot-and-Mouth Disease Virus Infection in Cattle, Sheep and Pigs. In Proceedings of the 2006 Session of the Research Group of the Standing Technical Committee of the European Commission for the Control of Foot-and-Mouth Disease, Paphos, Cyprus, 17–20 October 2006; Food and Agriculture Organization: Rome, Italy, 2006; pp. 304–308. [Google Scholar]
  56. Rainwater-Lovett, K.; Pacheco, J.M.; Packer, C.; Rodriguez, L.L. Detection of Foot-and-Mouth Disease Virus Infected Cattle Using Infrared Thermography. Vet. J. 2009, 180, 317–324. [Google Scholar] [CrossRef]
  57. Menegassi, S.R.O.; Pereira, G.R.; Dias, E.A.; Koetz, C.; Lopes, F.G.; Bremm, C.; Pimentel, C.; Lopes, R.B.; Da Rocha, M.K.; Carvalho, H.R.; et al. The Uses of Infrared Thermography to Evaluate the Effects of Climatic Variables in Bull’s Reproduction. Int. J. Biometeorol. 2016, 60, 151–157. [Google Scholar] [CrossRef] [PubMed]
  58. Rizzo, M.; Arfuso, F.; Alberghina, D.; Giudice, E.; Gianesella, M.; Piccione, G. Monitoring Changes in Body Surface Temperature Associated with Treadmill Exercise in Dogs by Use of Infrared Methodology. J. Therm. Biol. 2017, 69, 64–68. [Google Scholar] [CrossRef] [PubMed]
  59. Scolari, S.; Evans, R.; Knox, R.; Tamassia, M.; Clark, S. 41 Determination of the Relationship Between Vulvar Skin Temperatures and Time of Ovulation in Swine Using Digital Infrared Thermography. Reprod. Fertil. Dev. 2010, 22, 178. [Google Scholar] [CrossRef]
  60. Scolari, S.; Clark, S.; Knox, R.; Tamassia, M. Vulvar Skin Temperature Changes Significantly during Estrus in Swine as Determined by Digital Infrared Thermography. JSHAP 2011, 19, 151–155. [Google Scholar] [CrossRef]
  61. Simões, V.G.; Lyazrhi, F.; Picard-Hagen, N.; Gayrard, V.; Martineau, G.-P.; Waret-Szkuta, A. Variations in the Vulvar Temperature of Sows during Proestrus and Estrus as Determined by Infrared Thermography and Its Relation to Ovulation. Theriogenology 2014, 82, 1080–1085. [Google Scholar] [CrossRef] [PubMed]
  62. Zheng, S.; Zhou, C.; Jiang, X.; Huang, J.; Xu, D. Progress on Infrared Imaging Technology in Animal Production: A Review. Sensors 2022, 22, 705. [Google Scholar] [CrossRef] [PubMed]
  63. Redaelli, V.; Zaninelli, M.; Martino, P.; Luzi, F.; Costa, L.N. A Precision Livestock Farming Technique from Breeding to Slaughter: Infrared Thermography in Pig Farming. Appl. Sci. 2024, 14, 5780. [Google Scholar] [CrossRef]
Figure 1. Flow chart of the complete model.
Figure 2. ROI is the ear base of both sides, left and right.
Figure 3. ResNet101-UNet architecture.
Figure 4. The upper row shows examples of unsuitable ROIs for extracting temperature; the bottom row shows examples where ROIs were appropriate for extracting temperature.
Figure 5. Examples of automatically extracted temperatures by the model.
Figure 6. The temperatures of the ear base of the left and right side were measured manually using FLIR ResearchIR. Ellipse 1, red in the image, was for the right side, and Ellipse 2, green in the image, was for the left side. Both ellipses were drawn manually by the observer. The statistics of the ellipses are shown in the table within the figure.
Figure 7. Comparison between manually observed temperature and temperatures observed by the model.
Table 1. Summary of the hyperparameters of the classification model.

Model setting
  Input size: 320 × 320 × 3
  Optimizer: SGD
  Learning rate: 1 × 10−3
  Momentum: 0.9
Training setting
  Loss function: Binary cross-entropy
  Batch size: 32
  Epochs: 300
  ReduceLROnPlateau: monitor = validation loss, patience = 6, factor = 0.8
Environment
  GPU: NVIDIA Quadro RTX 5000
  Platform: Python 3.9
  Toolbox: TensorFlow
Table 2. Summary of the hyperparameters of the segmentation model.

Model setting
  Input size: 768 × 1024 × 3
  Optimizer: SGD
  Learning rate: 1 × 10−4
  Momentum: 0.9
Training setting
  Loss function: Jaccard loss
  Batch size: 2
  Epochs: 300
  ReduceLROnPlateau: monitor = validation loss, patience = 10, factor = 0.8
Environment
  GPU: NVIDIA Quadro RTX 5000
  Platform: Python 3.9
  Toolbox: TensorFlow
Table 3. An example of extracted temperature.

Frame   Left Temp 1   Right Temp   L_pos 2       R_pos
0       39.83         39.48        (652, 667)    (724, 614)
1       39.78         39.45        (631, 673)    (718, 619)
2       39.56         39.38        (623, 679)    (709, 636)
3       39.53         39.24        (615, 690)    (694, 649)
4       39.55         39.20        (599, 698)    (682, 660)
5       39.39         39.20        (576, 712)    (665, 674)

1 The Left Temp column gives the extracted maximum temperature of the base of the left ear; the Right Temp column gives the same for the right ear. 2 The L_pos column shows the coordinates of the pixel with the maximum temperature on the left side; R_pos gives the corresponding coordinates for the right side.
Table 4. A comparison between different model architectures’ performance.

Model architecture   Accuracy   False-positive rate
ResNet-50            97.4%      1.62%
VGG-16               96.9%      2.12%
Inception            97.7%      1.52%
ResNet-101           95.8%      2.88%
Xception             96.8%      2.67%