EnsemblePigDet: Ensemble Deep Learning for Accurate Pig Detection
Abstract
1. Introduction
- For real-time deployment, a deep-learning-based pig detector should handle unseen data. This study proposes an ensemble-based pig detection method, presumably for the first time, to improve detection accuracy in overexposed regions (as an example of unseen data). Detecting pigs in such regions is very challenging because the pixel distribution of regions overexposed by strong sunlight through a window differs from that of other regions. Without using any training data from overexposed regions, image preprocessing for diversity and a model ensemble over the differently preprocessed images can robustly detect pigs in overexposed regions.
- Another practical issue in applying deep-learning-based techniques to pig detection is the annotation cost of large-scale data. Experimental results for pig detection with large-scale test data have not yet been reported because box-level annotation for such data is very expensive. Accuracy metrics for pig detection in a closed pig pen are therefore proposed to evaluate detection accuracy without box-level annotation. Presumably, this is the first report of large-scale pig detection in a pig pen with 216,000 test images, without any box-level annotation. The results also indicate that the detection accuracy on the 216,000 raw video frames is very similar to that on the 13,997 key frames. Thus, reducing the number of test images through key frame extraction effectively reduces both the evaluation cost (with very large test data) and the inference time (with the model ensemble).
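Because the proposed metric is count-based, it can be sketched in a few lines. The function name and input layout below are hypothetical, assuming (per the max_pigs criterion described above) that a frame counts as correct when the number of detected boxes equals the known number of pigs in the pen:

```python
def acc_max_pigs(detections_per_frame, no_pigs):
    """ACCmax_pigs sketch: the fraction of frames whose detected-box count
    equals the known number of pigs in the pen (no box-level annotation)."""
    correct = sum(1 for boxes in detections_per_frame if len(boxes) == no_pigs)
    return correct / len(detections_per_frame)

# Example: 3 frames from a pen holding 9 pigs; the detector finds all
# 9 pigs in two frames and only 8 in the other.
frames = [[None] * 9, [None] * 8, [None] * 9]
print(acc_max_pigs(frames, 9))  # → 0.6666666666666666
```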
2. Background
3. Proposed Method
3.1. Image Preprocessing
3.2. Model Ensemble with Two Models
Algorithm 1. Box merging algorithm.
Input: First boxes first_box, second boxes second_box, number of pigs no_pigs
Output: Merged boxes result_box
if (confidence sum of first_box ≥ confidence sum of second_box) and (size of first_box = no_pigs) do
    return first_box as result_box
else if (size of second_box = no_pigs) do
    return second_box as result_box
else do
    matched_boxes = 0
    sort first_box and second_box in descending order of confidence value
    for i = 1 to size of first_box do
        max_iou = largest IoU between first_box[i] and any unmatched second_box[j]
        if max_iou > iou_thresh do
            matched_boxes++
            result_box[matched_boxes] = first_box[i]
    if matched_boxes < no_pigs do
        for k = matched_boxes + 1 to no_pigs do
            add a remaining first_box or second_box into result_box
    return result_box
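Algorithm 1 can be sketched in Python as follows. The pseudocode does not fully specify the order in which remaining boxes are added in the final fill step, so the fill order below (leftover first-model boxes before second-model boxes) is an assumption for illustration:

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = (a[2] - a[0]) * (a[3] - a[1]) + (b[2] - b[0]) * (b[3] - b[1]) - inter
    return inter / union if inter else 0.0

def merge_boxes(first, second, no_pigs, iou_thresh=0.5):
    """Merge two models' detections, each a list of (box, confidence)."""
    conf_sum = lambda boxes: sum(c for _, c in boxes)
    # Fast paths: one model already found exactly no_pigs boxes.
    if conf_sum(first) >= conf_sum(second) and len(first) == no_pigs:
        return first
    if len(second) == no_pigs:
        return second
    # Otherwise keep first-model boxes that overlap an unmatched second-model box.
    first = sorted(first, key=lambda bc: bc[1], reverse=True)
    second = sorted(second, key=lambda bc: bc[1], reverse=True)
    result, used = [], set()
    for b1, c1 in first:
        ious = [(iou(b1, b2), j) for j, (b2, _) in enumerate(second) if j not in used]
        max_iou, best_j = max(ious, default=(0.0, -1))
        if max_iou > iou_thresh:
            used.add(best_j)
            result.append((b1, c1))
    # Fill up to no_pigs with the remaining (unmatched) boxes; the order
    # chosen here is an assumption, not taken from the paper.
    leftovers = [bc for bc in first if bc not in result]
    leftovers += [bc for j, bc in enumerate(second) if j not in used]
    result += leftovers[:max(0, no_pigs - len(result))]
    return result
```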
3.3. Key Frame Extraction and Accuracy Metrics
Algorithm 2. Key frame extraction algorithm.
Input: Current frame CF, Previous key frame PF
Output: Key frame
Initialize: Average bounding box size S, pixel difference D between CF and PF, number of pigs in the pen N, hyperparameter α
if (D exceeds the movement threshold determined by S, N, and α) do
    return CF
else do
    return 0
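A minimal sketch of the key frame test, under the assumption that the decision compares the pixel difference D against a threshold derived from S, N, and the hyperparameter α; the exact threshold form α·S·N below is illustrative, not taken from the paper:

```python
import numpy as np

def extract_key_frame(cf, pf, s, n, alpha=0.02):
    """Return cf as a new key frame if its pixel difference with the previous
    key frame pf exceeds a movement threshold; otherwise return None.
    s: average bounding box size, n: number of pigs in the pen.
    The threshold form alpha * s * n is an illustrative assumption."""
    d = np.abs(cf.astype(np.int32) - pf.astype(np.int32)).sum()
    return cf if d > alpha * s * n else None
```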
4. Experimental Results
4.1. Experimental Setup and Resources for the Experiment
4.2. Evaluation of Detection Performance
4.3. Discussion
- While this study addressed unseen data including strong sunlight from the same farm, developing a more robust model (through semi-supervised or self-supervised learning) on unseen data from other farms may be necessary in future research. In addition, the errors remaining in individual key frames could be resolved by exploiting the temporal information among key frames; this issue could also be addressed in future research.
- As shown in Table 9, the accuracy improvement of EnsemblePigDet strongly depended on the accuracy of the baseline model used. Because all 13,997 key frames were considered difficult images due to overexposed regions, the ACCmax_pigs of the light-weight model was significantly degraded. Note that EmbeddedPigDet [26] modified TinyYOLOv2 for embedded board implementations, and with EmbeddedPigDet, 13,962 of the 13,997 key frames typically produced one or two errors (i.e., missing and/or false pig errors) per key frame. That is, EmbeddedPigDet, which targets embedded board implementations, cannot be used for a hard scenario including strong sunlight, and the accuracy improvement of EnsemblePigDet based on EmbeddedPigDet was limited. Ensemble techniques for light-weight baseline models need to be studied further.
- In this study, even though the fast and accurate YOLOv4 was applied, the execution time of the ensemble model (i.e., the total time on a PC for processing one input image was 29.84 ms: 5.22 ms for two preprocessing executions, 24.24 ms for two YOLOv4 executions, and 0.38 ms for one postprocessing execution) was slower than that of the single models. However, when detection was applied only to the key frames that captured movements in the monitored pig pen, an average 20-fold reduction in computational complexity was verified (13,997 key frames extracted from 216,000 frames using the key frame extraction method). Therefore, with the RTX2080 Ti GPU, detection on the video composed of key frames was 17 times faster than on the raw video (i.e., 7 min were required to process the 13,997 key frames obtained from the two-hour raw video); thus, the proposed method with key frames could be executed in real time even on an embedded board.
5. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
- Banhazi, T.; Lehr, H.; Black, J.; Crabtree, H.; Schofield, P.; Tscharke, M.; Berckmans, D. Precision Livestock Farming: An International Review of Scientific and Commercial Aspects. Int. J. Agric. Biol. 2012, 5, 1–9.
- Neethirajan, S. Recent Advances in Wearable Sensors for Animal Health Management. Sens. Bio-Sens. Res. 2017, 12, 15–29.
- Tullo, E.; Fontana, I.; Guarino, M. Precision livestock farming: An overview of image and sound labelling. In Proceedings of the 6th European Conference on Precision Livestock Farming, Leuven, Belgium, 10–12 September 2013; pp. 30–38.
- Matthews, S.; Miller, A.; Clapp, J.; Plötz, T.; Kyriazakis, I. Early Detection of Health and Welfare Compromises through Automated Detection of Behavioural Changes in Pigs. Vet. J. 2016, 217, 43–51.
- Tscharke, M.; Banhazi, T. A Brief Review of the Application of Machine Vision in Livestock Behaviour Analysis. J. Agric. Inform. 2016, 7, 23–42.
- Korean Government. 4th Industrial Revolution and Agriculture; Korean Government: Seoul, Korea, 2016. (In Korean)
- Han, S.; Zhang, J.; Zhu, M.; Wu, J.; Kong, F. Review of automatic detection of pig behaviours by using Image Analysis. In Proceedings of the International Conference on AEECE, Chengdu, China, 26–28 May 2017; pp. 1–6.
- Nasirahmadi, A.; Hensel, O.; Edwards, S.; Sturm, B. A New Approach for Categorizing Pig Lying Behaviour based on a Delaunay Triangulation Method. Animal 2017, 11, 131–139.
- Schofield, C. Evaluation of Image Analysis as A Means of Estimating the Weight of Pigs. J. Agric. Eng. Res. 1990, 47, 287–296.
- Wouters, P.; Geers, R.; Parduyns, G.; Goossens, K.; Truyen, B.; Goedseels, V.; Van der Stuyft, E. Image-Analysis Parameters as Inputs for Automatic Environmental Temperature Control in Piglet Houses. Comput. Electron. Agric. 1990, 5, 233–246.
- Brunger, J.; Traulsen, I.; Koch, R. Model-based Detection of Pigs in Images under Sub-Optimal Conditions. Comput. Electron. Agric. 2018, 152, 59–63.
- Oczak, M.; Maschat, K.; Berckmans, D.; Vranken, E.; Baumgartner, J. Automatic Estimation of Number of Piglets in a Pen during Farrowing, using Image Analysis. Biosyst. Eng. 2016, 151, 81–89.
- Nasirahmadi, A.; Hensel, O.; Edwards, S.; Sturm, B. Automatic Detection of Mounting Behaviours among Pigs using Image Analysis. Comput. Electron. Agric. 2016, 124, 295–302.
- Kang, F.; Wang, C.; Li, J.; Zong, Z. A Multiobjective Piglet Image Segmentation Method based on an Improved Noninteractive GrabCut Algorithm. Adv. Multimed. 2018, 2018, 108876.
- Li, B.; Liu, L.; Shen, M.; Sun, Y.; Lu, M. Group-Housed Pig Detection in Video Surveillance of Overhead Views using Multi-Feature Template Matching. Biosyst. Eng. 2019, 181, 28–39.
- Kashiha, M.; Bahr, C.; Ott, S.; Moons, C.; Niewold, T.; Tuyttens, F.; Berckmans, D. Automatic Monitoring of Pig Locomotion using Image Analysis. Livest. Sci. 2014, 159, 141–148.
- Ahrendt, P.; Gregersen, T.; Karstoft, H. Development of a Real-Time Computer Vision System for Tracking Loose-Housed Pigs. Comput. Electron. Agric. 2011, 76, 169–174.
- Matthews, S.; Miller, A.; Plötz, T.; Kyriazakis, I. Automated Tracking to Measure Behavioural Changes in Pigs for Health and Welfare Monitoring. Sci. Rep. 2017, 7, 17582.
- Lu, M.; Xiong, Y.; Li, K.; Liu, L.; Yan, L.; Ding, Y.; Lin, X.; Yang, X.; Shen, M. An Automatic Splitting Method for the Adhesive Piglets Gray Scale Image based on the Ellipse Shape Feature. Comput. Electron. Agric. 2016, 120, 53–62.
- Yang, Q.; Xiao, D.; Lin, S. Feeding Behavior Recognition for Group-Housed Pigs with the Faster R-CNN. Comput. Electron. Agric. 2018, 155, 453–460.
- Nasirahmadi, A.; Sturm, B.; Olsson, A.; Jeppsson, K.; Muller, S.; Edwards, S.; Hensel, O. Automatic Scoring of Lateral and Sternal Lying Posture in Grouped Pigs Using Image Processing and Support Vector Machine. Comput. Electron. Agric. 2019, 156, 475–481.
- Psota, E.; Mittek, M.; Perez, L.; Schmidt, T.; Mote, B. Multi-Pig Part Detection and Association with a Fully-Convolutional Network. Sensors 2019, 19, 852.
- Sun, L.; Liu, Y.; Chen, S.; Luo, B.; Li, Y.; Liu, C. Pig Detection Algorithm based on Sliding Windows and PCA Convolution. IEEE Access 2019, 7, 44229–44238.
- Riekert, M.; Klein, A.; Adrion, F.; Hoffmann, C.; Gallmann, E. Automatically Detecting Pig Position and Posture by 2D Camera Imaging and Deep Learning. Comput. Electron. Agric. 2020, 174, 1–16.
- Lee, S.; Ahn, H.; Seo, J.; Chung, Y.; Park, D.; Pan, S. Practical Monitoring of Undergrown Pigs for IoT-Based Large-Scale Smart Farm. IEEE Access 2019, 7, 173796–173810.
- Nasirahmadi, A.; Sturm, B.; Edwards, S.; Jeppsson, K.H.; Olsson, A.C.; Müller, S.; Hensel, O. Deep Learning and Machine Vision Approaches for Posture Detection of Individual Pigs. Sensors 2019, 19, 3738.
- Brünger, J.; Gentz, M.; Traulsen, I.; Koch, R. Panoptic Segmentation of Individual Pigs for Posture Recognition. Sensors 2020, 20, 3710.
- Sivamani, S.; Choi, S.H.; Lee, D.H.; Park, J.; Chon, S. Automatic Posture Detection of Pigs on Real-Time using YOLO Framework. Int. J. Res. Trends Innov. 2020, 5, 81–88.
- Seo, J.; Ahn, H.; Kim, D.; Lee, S.; Chung, Y.; Park, D. EmbeddedPigDet: Fast and Accurate Pig Detection for Embedded Board Implementations. Appl. Sci. 2020, 10, 2878.
- Cowton, J.; Kyriazakis, I.; Bacardit, J. Automated Individual Pig Localisation, Tracking and Behaviour Metric Extraction using Deep Learning. IEEE Access 2020, 7, 108049–108060.
- Bochkovskiy, A.; Wang, C.; Liao, H. Yolov4: Optimal Speed and Accuracy of Object Detection. arXiv 2020, arXiv:2004.10934.
- Lin, T.; Maire, M.; Belongie, S.; Hays, J.; Perona, P.; Ramanan, D.; Dollar, P.; Zitnick, C. Microsoft COCO: Common Objects in Context. In Proceedings of the European Conference on Computer Vision, Zurich, Switzerland, 6–12 September 2014; pp. 740–755.
- Everingham, M.; Van Gool, L.; Williams, C.; Winn, J.; Zisserman, A. The Pascal Visual Object Classes (VOC) Challenge. Int. J. Comput. Vis. 2010, 88, 303–338.
- Deselaers, T.; Alexe, B.; Ferrari, V. Weakly supervised localization and learning with generic knowledge. Int. J. Comput. Vis. 2012, 100, 275–293.
- Zuiderveld, K. Contrast Limited Adaptive Histogram Equalization; Academic Press Inc.: Cambridge, MA, USA, 1994.
- Open Source Computer Vision, OpenCV. Available online: http://opencv.org (accessed on 1 April 2021).
- Intel. Intel RealSense D435. Available online: https://click.intel.com/intelr-realsensetm-depth-camera-d435.html (accessed on 28 February 2018).
- NVIDIA Jetson Xavier NX, NVIDIA. Available online: https://developer.nvidia.com/embedded/jetson-xavier-nx-devkit (accessed on 5 May 2021).
Management of Overexposed Region | Data Size | No. of Pigs in a Pen | No. of Test Images | Detection Technique | Reference
---|---|---|---|---|---
No | 640 × 480 | 22 | 270 | Image Processing | [8]
No | 720 × 540 | 12 | 500 | Image Processing | [11]
No | 1280 × 720 | 7–13 | Not Specified | Image Processing | [12]
No | 640 × 480 | 22, 23 | Not Specified | Image Processing | [13]
No | 1440 × 1440 | 9 | 200 | Image Processing | [14]
No | 1024 × 768 | 4 | 100 | Image Processing | [15]
No | 720 × 576 | 10 | Not Specified | Image Processing | [16]
No | Not Specified | 3 | Not Specified | Image Processing | [17]
No | 512 × 424 | 19 | Not Specified | Image Processing | [18]
No | Not Specified | 2–12 | 330 | Image Processing | [19]
No | 2560 × 1440 | 4 | 100 | Deep Learning | [20]
No | 960 × 720 | ~30 | 500 | Image Processing | [21]
No | 1920 × 1080 | Not Specified | 400 | Deep Learning | [22]
No | 64 × 64 | 6 | 500 | Deep Learning | [23]
No | 1280 × 720 | ~79 | 160 | Deep Learning | [24]
No | 1280 × 720 | 9 | 1000 | Image Processing + Deep Learning | [25]
No | 1280 × 720 | ~32 | 400 | Deep Learning | [26]
No | 1280 × 800 | 13 | 226 | Deep Learning | [27]
No | 720 × 480 | 2 | 1792 | Deep Learning | [28]
No | 1280 × 720 | 9 | 1000 | Image Processing + Deep Learning | [29]
No | 1920 × 1080 | 20 | 828 | Deep Learning | [30]
Yes | 1280 × 720 | 9 | 216,000 (13,997 Key Frames, 4193 Hard Frames) | Image Processing + Deep Learning (+ Accuracy Metrics) | Proposed Method
TilesGridSize | ClipLimit | Difference | Entropy |
---|---|---|---|
(2,2) | 0.2 | 51.14 | 5.48 |
(2,2) | 0.4 | 51.30 | 5.49 |
(2,2) | 0.6 (CLAHESFB) | 51.31 | 5.49 |
(2,2) | 0.8 | 51.30 | 5.49 |
(2,2) | 1.0 | 51.30 | 5.49 |
(4,4) | 0.2 | 21.44 | 5.57 |
(4,4) | 0.4 | 17.75 | 5.60 |
(4,4) | 0.6 | 18.13 | 5.60 |
(4,4) | 0.8 | 18.15 | 5.60 |
(4,4) | 1.0 | 18.15 | 5.60 |
(8,8) | 0.2 | 17.91 | 5.54 |
(8,8) | 0.4 | 9.27 | 5.60 |
(8,8) | 0.6 | 7.33 | 5.61 |
(8,8) | 0.8 | 7.45 | 5.61 |
(8,8) | 1.0 (CLAHEET) | 7.46 | 5.62 |
(16,16) | 0.2 | 13.41 | 5.45 |
(16,16) | 0.4 | 4.63 | 5.54 |
(16,16) | 0.6 | 0.78 | 5.57 |
(16,16) | 0.8 | 0.69 | 5.58 |
(16,16) | 1.0 | 1.24 | 5.59 |
(32,32) | 0.2 | 17.13 | 5.37 |
(32,32) | 0.4 | 7.82 | 5.47 |
(32,32) | 0.6 | 1.77 | 5.51 |
(32,32) | 0.8 | 1.81 | 5.54 |
(32,32) | 1.0 | 4.19 | 5.55 |
(64,64) | 0.2 | 18.68 | 5.25 |
(64,64) | 0.4 | 11.54 | 5.39 |
(64,64) | 0.6 | 6.15 | 5.44 |
(64,64) | 0.8 | 1.01 | 5.48 |
(64,64) | 1.0 | 2.85 | 5.50 |
Model | | # Error Frames (ACCmax_pigs), 216,000 Raw Video Frames | # Error Frames (ACCmax_pigs), 13,997 Key Frames
---|---|---|---
Single Model | Baseline YOLOv4 | 43,344 (79.93%) | 3976 (71.59%)
Single Model | Model A (proposed) | 20,248 (90.63%) | 1619 (88.43%)
Single Model | Model B (proposed) | 21,092 (90.24%) | 1844 (86.83%)
Ensemble Model | EnsemblePigDet (proposed) | 12,244 (94.33%) | 621 (95.56%)
Model | | # Error Frames (ACCmax_pigs), 13,997 Key Frames | # Error Frames (ACCmax_pigs), 4193 Hard Frames
---|---|---|---
Single Model | Baseline YOLOv4 | 3976 (71.59%) | 1906 (54.54%)
Single Model | Model A (proposed) | 1619 (88.43%) | 848 (79.78%)
Single Model | Model B (proposed) | 1844 (86.83%) | 778 (81.45%)
Ensemble Model | EnsemblePigDet (proposed) | 621 (95.56%) | 279 (93.34%)
Model | | # Error Frames (ACCmanual_inspection), 13,997 Key Frames | # Error Frames (ACCmanual_inspection), 4193 Hard Frames
---|---|---|---
Single Model | Baseline YOLOv4 | 3889 (72.21%) | 1866 (55.49%)
Single Model | Model A (proposed) | 1532 (89.05%) | 808 (80.72%)
Single Model | Model B (proposed) | 1757 (87.44%) | 738 (82.39%)
Ensemble Model | EnsemblePigDet (proposed) | 649 (95.36%) | 302 (92.79%)
Proposed Models | Training Data | # Error Frames (ACCmax_pigs), Test Data with Preprocessing A | # Error Frames (ACCmax_pigs), Test Data with Preprocessing B
---|---|---|---
Single Model | Preprocessing A | 4111 (70.06%) | 13,997 (0.00%)
Single Model | Preprocessing B | 6391 (54.34%) | 3287 (76.51%)
Single Model | Preprocessing A and Preprocessing B | 3741 (73.27%) | 4198 (70.00%)
Single Model | Preprocessing A and Raw Input (Model A) | 1619 (88.43%) | 10,743 (23.24%)
Single Model | Preprocessing B and Raw Input (Model B) | 3881 (72.27%) | 1844 (86.83%)
Single Model | Preprocessing A, Preprocessing B, and Raw Input | 3185 (77.24%) | 3891 (72.20%)
Ensemble Model | Preprocessing A + Preprocessing B | 2498 (82.15%) (single merged output) |
Ensemble Model | Preprocessing A and Raw Input + Preprocessing B and Raw Input (EnsemblePigDet) | 621 (95.56%) (single merged output) |
Confidence Settings | IoU Threshold Settings | # Error Frames with max_pigs (ACCmax_pigs)
---|---|---
Confidence1 = 0.3 and Confidence2 = 0.5 | Threshold1 = 0.5 and Threshold2 = 0.3 | 956 (93.17%)
Confidence1 = 0.3 and Confidence2 = 0.5 | Threshold1 = 0.7 and Threshold2 = 0.3 | 1162 (91.70%)
Confidence1 = 0.3 and Confidence2 = 0.5 | Threshold1 = 0.7 and Threshold2 = 0.5 | 1077 (92.31%)
Confidence1 = 0.3 and Confidence2 = 0.7 | Threshold1 = 0.5 and Threshold2 = 0.3 | 1173 (91.62%)
Confidence1 = 0.3 and Confidence2 = 0.7 | Threshold1 = 0.7 and Threshold2 = 0.3 | 3092 (77.91%)
Confidence1 = 0.3 and Confidence2 = 0.7 | Threshold1 = 0.7 and Threshold2 = 0.5 | 3090 (77.92%)
Confidence1 = 0.5 and Confidence2 = 0.7 | Threshold1 = 0.5 and Threshold2 = 0.3 | 1753 (87.48%)
Confidence1 = 0.5 and Confidence2 = 0.7 | Threshold1 = 0.7 and Threshold2 = 0.3 | 2283 (83.69%)
Confidence1 = 0.5 and Confidence2 = 0.7 | Threshold1 = 0.7 and Threshold2 = 0.5 | 2271 (83.78%)
Confidence Settings | IoU Threshold Settings | # Error Frames with max_pigs (ACCmax_pigs)
---|---|---
Confidence1 = 0.5 and Confidence2 = 0.3 | Threshold1 = 0.3 and Threshold2 = 0.5 | 832 (94.06%)
Confidence1 = 0.5 and Confidence2 = 0.3 | Threshold1 = 0.3 and Threshold2 = 0.7 | 637 (95.45%)
Confidence1 = 0.5 and Confidence2 = 0.3 | Threshold1 = 0.5 and Threshold2 = 0.7 | 621 (95.56%) (EnsemblePigDet)
Confidence1 = 0.7 and Confidence2 = 0.3 | Threshold1 = 0.3 and Threshold2 = 0.5 | 916 (93.46%)
Confidence1 = 0.7 and Confidence2 = 0.3 | Threshold1 = 0.3 and Threshold2 = 0.7 | 678 (95.16%)
Confidence1 = 0.7 and Confidence2 = 0.3 | Threshold1 = 0.5 and Threshold2 = 0.7 | 673 (95.19%)
Confidence1 = 0.7 and Confidence2 = 0.5 | Threshold1 = 0.3 and Threshold2 = 0.5 | 1312 (90.63%)
Confidence1 = 0.7 and Confidence2 = 0.5 | Threshold1 = 0.3 and Threshold2 = 0.7 | 1361 (90.28%)
Confidence1 = 0.7 and Confidence2 = 0.5 | Threshold1 = 0.5 and Threshold2 = 0.7 | 1353 (90.33%)
Model | | # Error Frames with max_pigs (ACCmax_pigs) | Execution Time, PC (RTX2080Ti) | Execution Time, Embedded Board (Xavier NX [38])
---|---|---|---|---
YOLOv4 [31] | Baseline | 3976 (71.59%) | 168 s (≈3 min) | 2661 s (≈44 min)
YOLOv4 [31] | EnsemblePigDet (proposed) | 621 (95.56%) | 417 s (≈7 min) | 5427 s (≈90 min)
TinyYOLOv4 [31] | Baseline | 4886 (65.09%) | 120 s (≈2 min) | 379 s (≈6 min)
TinyYOLOv4 [31] | EnsemblePigDet (proposed) † | 3652 (73.90%) | 231 s (≈4 min) | 762 s (≈13 min)
EmbeddedPigDet [26] * | Baseline | 13,962 (0.25%) | 42 s (≈1 min) | 265 s (≈5 min)
EmbeddedPigDet [26] * | EnsemblePigDet (proposed) † | 13,469 (3.77%) | 94 s (≈2 min) | 532 s (≈9 min)
© 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Ahn, H.; Son, S.; Kim, H.; Lee, S.; Chung, Y.; Park, D. EnsemblePigDet: Ensemble Deep Learning for Accurate Pig Detection. Appl. Sci. 2021, 11, 5577. https://doi.org/10.3390/app11125577