Individual Beef Cattle Identification Using Muzzle Images and Deep Learning Techniques
Simple Summary
Abstract
1. Introduction
| Cattle Type | Image Size (Pixels) | Image Type | Restrained | Cattle Counts | Images per Cattle | Total Images | Identification Method | Accuracy (%) | Processing Time (ms/Image) | Reference |
|---|---|---|---|---|---|---|---|---|---|---|
| Dairy | − | Printed | Y | 6 | − | − | Manual | − | − | [16] |
| − | − | Printed | Y | − | − | 200 | Manual | − | − | [17] |
| − | − | Printed | Y | 65 | − | − | Manual | − | − | [18] |
| Beef | 256 × 256 | Grayscale | Y | − | − | 43 | DIP | 46.5 | − | [31] |
| Beef | 320 × 240 | Printed | Y | 29 | 10 | 290 | ML | 98.9 | − | [12] |
| Beef | 200 × 200 | Grayscale | − | 8 | 10 | 80 | DIP | 90.0 | − | [32] |
| − | − | Grayscale | − | 15 | 7 | 105 | DIP | 93.3 | 37–879 | [15] |
| Beef | − | Printed | Y | 20 | 8 | 160 | DIP | 98.3 | − | [20] |
| − | − | Grayscale | − | 53 | 20 | 1060 | DIP | − | − | [19] |
| Beef | 300 × 400 | Grayscale | − | 31 | 7 | 217 | ML | 99.5 | − | [33] |
| − | − | RGB | − | 28 | 20 | 560 | ML | 100.0 | − | [25] |
| − | − | RGB | − | 52 | 20 | 1040 | ML | 96.0 | − | [24] |
| Beef | − | RGB | N | 14 | 5 | 70 | DIP | 100.0 | − | [21] |
| Beef | 300 × 400 | Grayscale | − | 31 | 7 | 217 | ML | 99.5 | − | [34] |
| Beef | 300 × 400 | Grayscale | − | 31 | 7 | 217 | ML | 99.5 | 48–1362 | [23] |
| Beef | − | RGB | − | 52 | 6 | 312 | ML | 96.4 | − | [35] |
| Dairy | 400 × 400 | RGB | − | 500 | 10 | 5000 | DIP | 93.9 | − | [36] |
| Dairy | 200 × 200 | RGB | − | 500 | 10 | 5000 | ML | 94.9 | − | [37] |
| Dairy | 200 × 200 | RGB | − | 500 | 10 | 5000 | DL | 98.9 | − | [27] |
| Dairy | 200 × 200 | RGB | − | 500 | 10 | 5000 | ML | 93.9 | − | [38] |
| Dairy | − | RGB | N | 15 | 7 | 105 | ML | 93.0 | 368–1193 | [30] |
| Beef | − | RGB | Y | 60 | 5–10 | 460 | DIP | 98.1 | − | [39] |
| Beef | − | RGB | − | 45 | 20 | 900 | ML | 96.5 | − | [40] |
| Beef | − | RGB | Y | 431 | − | 1600 | ML | 95.0 | − | [41] |
| Dairy | 200 × 200 | RGB | − | 400 | 10 | 4000 | DL | 98.9 | − | [28] |
| Beef | 1024 × 1024 | RGB | Y | 300 | − | 2900 | DL | 99.1 | − | [29] |
| Dairy | 64 × 64 | RGB | − | 186 | 5 | 930 | ML | 83.4 | − | [13] |

DIP = digital image processing; ML = machine learning; DL = deep learning; − = not reported.
2. Materials and Methods
2.1. Image Collection and Dataset Curation
2.2. Deep Learning Image Classification Models
2.3. General Model Evaluation and Development Strategies
2.4. Optimization for Class Imbalance
3. Results and Discussion
3.1. Examples of Validation Performance
3.2. Testing Performance of the Selected Deep Learning Image Classification Models
3.3. Optimization Performance for Class Imbalance
3.4. Identification Accuracy of Individual Cattle
4. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
Appendix A
Cattle ID | Image Counts | Weight | Cattle ID | Image Counts | Weight | Cattle ID | Image Counts | Weight | Cattle ID | Image Counts | Weight |
---|---|---|---|---|---|---|---|---|---|---|---|
0100 | 8 | 8.75 | 3812 | 12 | 5.83 | 4680 | 19 | 3.68 | 5143 | 9 | 7.78 |
0200 | 10 | 7.00 | 3814 | 13 | 5.38 | 4685 | 11 | 6.36 | 5153 | 5 | 14.00 |
0300 | 17 | 4.12 | 3819 | 19 | 3.68 | 4686 | 31 | 2.26 | 5164 | 32 | 2.19 |
0400 | 7 | 10.00 | 3832 | 42 | 1.67 | 4716 | 16 | 4.38 | 5165 | 10 | 7.00 |
0500 | 14 | 5.00 | 3842 | 14 | 5.00 | 4717 | 5 | 14.00 | 5170 | 40 | 1.75 |
0600 | 19 | 3.68 | 3844 | 15 | 4.67 | 4733 | 29 | 2.41 | 5171 | 20 | 3.50 |
0700 | 16 | 4.38 | 3847 | 21 | 3.33 | 4739 | 26 | 2.69 | 5197 | 14 | 5.00 |
0800 | 18 | 3.89 | 3852 | 29 | 2.41 | 4748 | 15 | 4.67 | 5207 | 18 | 3.89 |
0900 | 12 | 5.83 | 3856 | 16 | 4.38 | 4770 | 11 | 6.36 | 5208 | 4 | 17.50 |
1000 | 12 | 5.83 | 4208 | 18 | 3.89 | 4775 | 15 | 4.67 | 5215 | 48 | 1.46 |
1100 | 11 | 6.36 | 4259 | 6 | 11.67 | 4776 | 25 | 2.80 | 5224 | 28 | 2.50 |
1200 | 11 | 6.36 | 4323 | 19 | 3.68 | 4804 | 15 | 4.67 | 5234 | 8 | 8.75 |
1300 | 12 | 5.83 | 4326 | 10 | 7.00 | 4819 | 18 | 3.89 | 5235 | 24 | 2.92 |
1400 | 13 | 5.38 | 4330 | 28 | 2.50 | 4820 | 38 | 1.84 | 5249 | 24 | 2.92 |
1500 | 6 | 11.67 | 4339 | 20 | 3.50 | 4833 | 26 | 2.69 | 5273 | 18 | 3.89 |
1600 | 14 | 5.00 | 4347 | 19 | 3.68 | 4839 | 26 | 2.69 | 5275 | 14 | 5.00 |
1700 | 12 | 5.83 | 4363 | 21 | 3.33 | 4840 | 16 | 4.38 | 5282 | 6 | 11.67 |
1800 | 22 | 3.18 | 4369 | 16 | 4.38 | 4895 | 24 | 2.92 | 5283 | 14 | 5.00 |
1900 | 8 | 8.75 | 4381 | 24 | 2.92 | 4915 | 30 | 2.33 | 5297 | 32 | 2.19 |
2000 | 14 | 5.00 | 4385 | 23 | 3.04 | 4921 | 14 | 5.00 | 5298 | 25 | 2.80 |
2100 | 4 | 17.50 | 4399 | 7 | 10.00 | 4947 | 15 | 4.67 | 5307 | 10 | 7.00 |
2200 | 6 | 11.67 | 4421 | 32 | 2.19 | 4951 | 39 | 1.79 | 5314 | 13 | 5.38 |
2220 | 6 | 11.67 | 4422 | 22 | 3.18 | 4969 | 12 | 5.83 | 5325 | 36 | 1.94 |
2300 | 22 | 3.18 | 4451 | 7 | 10.00 | 4971 | 11 | 6.36 | 5355 | 4 | 17.50 |
2320 | 14 | 5.00 | 4454 | 26 | 2.69 | 4984 | 24 | 2.92 | 5359 | 10 | 7.00 |
2400 | 23 | 3.04 | 4456 | 29 | 2.41 | 4985 | 11 | 6.36 | 5360 | 35 | 2.00 |
2500 | 33 | 2.12 | 4479 | 25 | 2.80 | 4986 | 26 | 2.69 | 5362 | 18 | 3.89 |
2510 | 10 | 7.00 | 4488 | 11 | 6.36 | 4995 | 6 | 11.67 | 5373 | 27 | 2.59 |
2600 | 27 | 2.59 | 4499 | 29 | 2.41 | 5009 | 17 | 4.12 | 5374 | 27 | 2.59 |
2700 | 17 | 4.12 | 4537 | 12 | 5.83 | 5026 | 23 | 3.04 | 5403 | 26 | 2.69 |
2710 | 15 | 4.67 | 4539 | 18 | 3.89 | 5028 | 21 | 3.33 | 5404 | 22 | 3.18 |
2740 | 8 | 8.75 | 4545 | 29 | 2.41 | 5066 | 14 | 5.00 | 5407 | 40 | 1.75 |
2800 | 24 | 2.92 | 4549 | 4 | 17.50 | 5073 | 14 | 5.00 | 5408 | 18 | 3.89 |
2900 | 15 | 4.67 | 4551 | 28 | 2.50 | 5077 | 16 | 4.38 | 5410 | 20 | 3.50 |
2930 | 6 | 11.67 | 4568 | 23 | 3.04 | 5083 | 29 | 2.41 | 5411 | 31 | 2.26 |
3000 | 15 | 4.67 | 4607 | 34 | 2.06 | 5090 | 18 | 3.89 | 5425 | 26 | 2.69 |
3100 | 13 | 5.38 | 4613 | 70 | 1.00 | 5097 | 25 | 2.80 | 5427 | 13 | 5.38 |
3200 | 16 | 4.38 | 4614 | 25 | 2.80 | 5100 | 8 | 8.75 | 5432 | 19 | 3.68 |
3300 | 13 | 5.38 | 4649 | 34 | 2.06 | 5112 | 14 | 5.00 | 5477 | 9 | 7.78 |
3400 | 7 | 10.00 | 4668 | 21 | 3.33 | 5132 | 30 | 2.33 | 5507 | 25 | 2.80 |
3420 | 4 | 17.50 | 4678 | 19 | 3.68 | 5133 | 12 | 5.83 | 5508 | 25 | 2.80 |
3802 | 8 | 8.75 | 4679 | 16 | 4.38 | 5138 | 11 | 6.36 | 5509 | 22 | 3.18 |
5519 | 25 | 2.80 | 5781 | 27 | 2.59 | 6124 | 10 | 7.00 | 6295 | 10 | 7.00 |
5529 | 29 | 2.41 | 5784 | 16 | 4.38 | 6161 | 18 | 3.89 | 6313 | 34 | 2.06 |
5537 | 37 | 1.89 | 5803 | 15 | 4.67 | 6167 | 21 | 3.33 | 6331 | 32 | 2.19 |
5556 | 8 | 8.75 | 5804 | 26 | 2.69 | 6171 | 12 | 5.83 | 6333 | 54 | 1.30 |
5559 | 21 | 3.33 | 5806 | 14 | 5.00 | 6184 | 16 | 4.38 | 6442 | 13 | 5.38 |
5581 | 29 | 2.41 | 5809 | 24 | 2.92 | 6189 | 12 | 5.83 | 6446 | 8 | 8.75 |
5604 | 14 | 5.00 | 5815 | 14 | 5.00 | 6191 | 18 | 3.89 | 6458 | 10 | 7.00 |
5605 | 14 | 5.00 | 5816 | 9 | 7.78 | 6196 | 17 | 4.12 | 6479 | 15 | 4.67 |
5620 | 12 | 5.83 | 5836 | 22 | 3.18 | 6197 | 12 | 5.83 | 6499 | 9 | 7.78 |
5630 | 4 | 17.50 | 5844 | 30 | 2.33 | 6199 | 14 | 5.00 | 6505 | 14 | 5.00 |
5633 | 13 | 5.38 | 5886 | 25 | 2.80 | 6210 | 10 | 7.00 | 6506 | 18 | 3.89 |
5634 | 13 | 5.38 | 5925 | 4 | 17.50 | 6213 | 12 | 5.83 | 6530 | 10 | 7.00 |
5639 | 12 | 5.83 | 5932 | 38 | 1.84 | 6216 | 18 | 3.89 | 6606 | 12 | 5.83 |
5654 | 53 | 1.32 | 5953 | 30 | 2.33 | 6220 | 12 | 5.83 | 8050 | 4 | 17.50 |
5658 | 12 | 5.83 | 5971 | 45 | 1.56 | 6226 | 15 | 4.67 | 8094 | 10 | 7.00 |
5670 | 16 | 4.38 | 5986 | 5 | 14.00 | 6237 | 6 | 11.67 | 8095 | 8 | 8.75 |
5677 | 12 | 5.83 | 6011 | 32 | 2.19 | 6253 | 8 | 8.75 | 9021 | 10 | 7.00 |
5695 | 20 | 3.50 | 6012 | 19 | 3.68 | 6266 | 10 | 7.00 | 9029 | 29 | 2.41 |
5697 | 31 | 2.26 | 6017 | 12 | 5.83 | 6276 | 15 | 4.67 | 9634 | 31 | 2.26 |
5717 | 16 | 4.38 | 6022 | 8 | 8.75 | 6277 | 12 | 5.83 | 9635 | 8 | 8.75 |
5745 | 15 | 4.67 | 6038 | 22 | 3.18 | 6278 | 13 | 5.38 | 9736 | 18 | 3.89 |
5761 | 10 | 7.00 | 6066 | 50 | 1.40 | 6282 | 12 | 5.83 | 9742 | 19 | 3.68 |
5762 | 7 | 10.00 | 6071 | 26 | 2.69 | 6283 | 5 | 14.00 | 9773 | 10 | 7.00 |
5774 | 16 | 4.38 | 6084 | 36 | 1.94 | 6287 | 14 | 5.00 | 9798 | 42 | 1.67 |
5777 | 12 | 5.83 | 6098 | 20 | 3.50 | 6294 | 10 | 7.00 | 9801 | 10 | 7.00 |
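The Weight column in the table above follows a simple rule: each animal's weight is the maximum per-class image count (70, for animal 4613) divided by that class's own image count, so under-represented animals contribute proportionally more to a weighted loss. A minimal sketch of that computation (`class_weights` is an illustrative helper name; the dictionary below reuses a few IDs and counts from the table):

```python
def class_weights(image_counts):
    """Weight each class by max_count / class_count, as in the Appendix A table."""
    max_count = max(image_counts.values())
    return {cid: round(max_count / n, 2) for cid, n in image_counts.items()}

# A few cattle IDs and image counts taken from the Appendix A table.
counts = {"0100": 8, "4613": 70, "5215": 48, "2100": 4}
print(class_weights(counts))  # {'0100': 8.75, '4613': 1.0, '5215': 1.46, '2100': 17.5}
```

Applied to all 268 animals, this reproduces every Weight value listed above (e.g., 70/8 = 8.75, 70/48 ≈ 1.46).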
References
- Tilman, D.; Balzer, C.; Hill, J.; Befort, B.L. Global food demand and the sustainable intensification of agriculture. Proc. Natl. Acad. Sci. USA 2011, 108, 20260–20264.
- Terry, S.A.; Basarab, J.A.; Guan, L.L.; McAllister, T.A. Strategies to improve the efficiency of beef cattle production. Can. J. Anim. Sci. 2020, 101, 1–19.
- USDA Economic Research Service. Sector at a Glance. Available online: https://www.ers.usda.gov/topics/animal-products/cattle-beef/sector-at-a-glance/ (accessed on 15 January 2022).
- National Cattlemen’s Beef Association. Industry Statistics. Available online: https://www.ncba.org/producers/industry-statistics (accessed on 15 January 2022).
- Wagner, J.J.; Archibeque, S.L.; Feuz, D.M. The modern feedlot for finishing cattle. Annu. Rev. Anim. Biosci. 2014, 2, 535–554.
- USDA. Census of Agriculture, 2017. Report AC-17-A-51; USDA, National Agricultural Statistics Service: Washington, DC, USA, 2019.
- Drouillard, J.S. Current situation and future trends for beef production in the United States of America—A review. Asian-Australas. J. Anim. Sci. 2018, 31, 1007.
- Poudel, P.B.; Poudel, M.R.; Gautam, A.; Phuyal, S.; Tiwari, C.K.; Bashyal, N.; Bashyal, S. COVID-19 and its global impact on food and agriculture. J. Biol. Today’s World 2020, 9, 221–225.
- Beaver, A.; Proudfoot, K.L.; von Keyserlingk, M.A.G. Symposium review: Considerations for the future of dairy cattle housing: An animal welfare perspective. J. Dairy Sci. 2020, 103, 5746–5758.
- USDA. Livestock, Dairy, and Poultry Outlook, January 2021. Report LDP-M-319; USDA, Economic Research Service: Washington, DC, USA, 2021.
- Awad, A.I. From classical methods to animal biometrics: A review on cattle identification and tracking. Comput. Electron. Agric. 2016, 123, 423–435.
- Barry, B.; Gonzales-Barron, U.A.; McDonnell, K.; Butler, F.; Ward, S. Using muzzle pattern recognition as a biometric approach for cattle identification. Trans. ASABE 2007, 50, 1073–1080.
- Kaur, A.; Kumar, M.; Jindal, M.K. Shi-Tomasi corner detector for cattle identification from muzzle print image pattern. Ecol. Inform. 2022, 68, 101549.
- Smith, G.C.; Tatum, J.D.; Belk, K.E.; Scanga, J.A.; Grandin, T.; Sofos, J.N. Traceability from a US perspective. Meat Sci. 2005, 71, 174–193.
- Awad, A.I.; Zawbaa, H.M.; Mahmoud, H.A.; Nabi, E.H.H.A.; Fayed, R.H.; Hassanien, A.E. A robust cattle identification scheme using muzzle print images. In Proceedings of the Federated Conference on Computer Science and Information Systems, Kraków, Poland, 8–11 September 2013; pp. 529–534.
- Petersen, W. The identification of the bovine by means of nose-prints. J. Dairy Sci. 1922, 5, 249–258.
- Hirsch, M.; Graham, E.F.; Dracy, A.E. A classification for the identification of bovine noseprints. J. Dairy Sci. 1952, 35, 314–319.
- Dracy, A.E.; Graham, E.F.; Hirsch, M. Some similarities of monozygotic cattle noseprints. J. Dairy Sci. 1953, 36, 11–13.
- El-Bakry, H.M.; El-Hennawy, I.; El Hadad, H.M. Bovines muzzle identification using box-counting. Int. J. Comput. Sci. Inf. Secur. 2014, 12, 29.
- Noviyanto, A.; Arymurthy, A.M. Beef cattle identification based on muzzle pattern using a matching refinement technique in the SIFT method. Comput. Electron. Agric. 2013, 99, 77–84.
- Monteiro, M.; Cadavez, V.; Monteiro, F.C. Cattle identification based in biometric features of the muzzle. In Proceedings of RECPAD 2015, the 21st Edition of the Portuguese Conference on Pattern Recognition, Faro, Portugal, 9 September 2015; pp. 34–35.
- Li, G.; Huang, Y.; Chen, Z.; Chesser, G.D.; Purswell, J.L.; Linhoss, J.; Zhao, Y. Practices and applications of convolutional neural network-based computer vision systems in animal farming: A review. Sensors 2021, 21, 1492.
- Gaber, T.; Tharwat, A.; Hassanien, A.E.; Snasel, V. Biometric cattle identification approach based on Weber’s local descriptor and AdaBoost classifier. Comput. Electron. Agric. 2016, 122, 55–66.
- Mahmoud, H.A.; Hadad, H.M.R.E. Automatic cattle muzzle print classification system using multiclass support vector machine. Int. J. Image Min. 2015, 1, 126–140.
- Mahmoud, H.A.; Hadad, H.M.E.; Mousa, F.A.; Hassanien, A.E. Cattle classifications system using Fuzzy K-Nearest Neighbor Classifier. In Proceedings of the 2015 International Conference on Informatics, Electronics & Vision (ICIEV), Fukuoka, Japan, 15–18 June 2015; pp. 1–5.
- O’Shea, K.; Nash, R. An introduction to convolutional neural networks. arXiv 2015, arXiv:1511.08458v2.
- Kumar, S.; Pandey, A.; Satwik, K.S.R.; Kumar, S.; Singh, S.K.; Singh, A.K.; Mohan, A. Deep learning framework for recognition of cattle using muzzle point image pattern. Measurement 2018, 116, 1–17.
- Bello, R.-W.; Talib, A.Z.H.; Mohamed, A.S.A.B. Deep Belief Network approach for recognition of cow using cow nose image pattern. Walailak J. Sci. Technol. (WJST) 2021, 18, 8984.
- Shojaeipour, A.; Falzon, G.; Kwan, P.; Hadavi, N.; Cowley, F.C.; Paul, D. Automated muzzle detection and biometric identification via few-shot deep transfer learning of mixed breed cattle. Agronomy 2021, 11, 2365.
- Awad, A.I.; Hassaballah, M. Bag-of-visual-words for cattle identification from muzzle print images. Appl. Sci. 2019, 9, 4914.
- Minagawa, H.; Fujimura, T.; Ichiyanagi, M.; Tanaka, K. Identification of beef cattle by analyzing images of their muzzle patterns lifted on paper. In AFITA 2002: Asian Agricultural Information Technology & Management, Proceedings of the Third Asian Conference for Information Technology in Agriculture, Beijing, China, 26–28 October 2002; China Agricultural Scientech Press: Beijing, China, 2002; pp. 596–600.
- Noviyanto, A.; Arymurthy, A.M. Automatic cattle identification based on muzzle photo using speed-up robust features approach. In Proceedings of the 3rd European Conference of Computer Science (ECCS), Paris, France, 2–4 December 2012; p. 114.
- Tharwat, A.; Gaber, T.; Hassanien, A.E.; Hassanien, H.A.; Tolba, M.F. Cattle identification using muzzle print images based on texture features approach. In Proceedings of the Fifth International Conference on Innovations in Bio-Inspired Computing and Applications (IBICA), Ostrava, Czech Republic, 23–25 June 2014.
- Tharwat, A.; Gaber, T.; Hassanien, A.E. Two biometric approaches for cattle identification based on features and classifiers fusion. Int. J. Image Min. 2015, 1, 342–365.
- El-Henawy, I.; El Bakry, H.M.; El Hadad, H.M. A new muzzle classification model using decision tree classifier. Int. J. Electron. Inf. Eng. 2017, 6, 12–24.
- Kumar, S.; Singh, S.K.; Singh, A.K. Muzzle point pattern based techniques for individual cattle identification. IET Image Process. 2017, 11, 805–814.
- Kumar, S.; Singh, S.K. Automatic identification of cattle using muzzle point pattern: A hybrid feature extraction and classification paradigm. Multimed. Tools Appl. 2017, 76, 26551–26580.
- Kumar, S.; Singh, S.K.; Abidi, A.I.; Datta, D.; Sangaiah, A.K. Group sparse representation approach for recognition of cattle on muzzle point images. Int. J. Parallel Program. 2018, 46, 812–837.
- Nurtanio, I.; Areni, I.S.; Bugiwati, S.R.; Bustamin, A.; Rahmatullah, M. A portable cattle tagging based on muzzle pattern. Int. J. Interact. Mob. Technol. 2020, 14, 134–149.
- Sian, C.; Jiye, W.; Ru, Z.; Lizhi, Z. Cattle identification using muzzle print images based on feature fusion. IOP Conf. Ser. Mater. Sci. Eng. 2020, 853, 012051.
- Kusakunniran, W.; Wiratsudakul, A.; Chuachan, U.; Kanchanapreechakorn, S.; Imaromkul, T.; Suksriupatham, N.; Thongkanchorn, K. Biometric for cattle identification using muzzle patterns. Int. J. Pattern Recognit. Artif. Intell. 2019, 34, 2056007.
- Xiong, Y.; Li, G.; Erickson, G.E. Beef Cattle Muzzle/Noseprint Database for Individual Identification. 2022. Available online: https://zenodo.org/record/6324361 (accessed on 3 March 2022).
- Li, G.; Hui, X.; Lin, F.; Zhao, Y. Developing and evaluating poultry preening behavior detectors via mask region-based convolutional neural network. Animals 2020, 10, 1762.
- Krizhevsky, A. One weird trick for parallelizing convolutional neural networks. arXiv 2014, arXiv:1404.5997.
- Huang, G.; Liu, Z.; Van Der Maaten, L.; Weinberger, K.Q. Densely connected convolutional networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 4700–4708.
- Chen, Y.; Li, J.; Xiao, H.; Jin, X.; Yan, S.; Feng, J. Dual path networks. arXiv 2017, arXiv:1707.01629.
- Tan, M.; Le, Q. EfficientNet: Rethinking model scaling for convolutional neural networks. In Proceedings of the International Conference on Machine Learning, Long Beach, CA, USA, 10–15 June 2019; pp. 6105–6114.
- Szegedy, C.; Liu, W.; Jia, Y.; Sermanet, P.; Reed, S.; Anguelov, D.; Erhan, D.; Vanhoucke, V.; Rabinovich, A. Going deeper with convolutions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Boston, MA, USA, 7–12 June 2015; pp. 1–9.
- Szegedy, C.; Vanhoucke, V.; Ioffe, S.; Shlens, J.; Wojna, Z. Rethinking the Inception architecture for computer vision. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 2818–2826.
- Szegedy, C.; Ioffe, S.; Vanhoucke, V.; Alemi, A.A. Inception-v4, Inception-ResNet and the impact of residual connections on learning. In Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence, San Francisco, CA, USA, 4–9 February 2017; pp. 4278–4284.
- Chollet, F. Xception: Deep learning with depthwise separable convolutions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 1251–1258.
- Tan, M.; Chen, B.; Pang, R.; Vasudevan, V.; Sandler, M.; Howard, A.; Le, Q.V. MnasNet: Platform-aware neural architecture search for mobile. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 16–20 June 2019; pp. 2820–2828.
- Sandler, M.; Howard, A.; Zhu, M.; Zhmoginov, A.; Chen, L.-C. MobileNetV2: Inverted residuals and linear bottlenecks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 4510–4520.
- Howard, A.; Sandler, M.; Chu, G.; Chen, L.-C.; Chen, B.; Tan, M.; Wang, W.; Zhu, Y.; Pang, R.; Vasudevan, V. Searching for MobileNetV3. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Seoul, Korea, 27 October–2 November 2019; pp. 1314–1324.
- Radosavovic, I.; Kosaraju, R.P.; Girshick, R.; He, K.; Dollár, P. Designing network design spaces. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Virtual, 14–19 June 2020; pp. 10428–10436.
- He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778.
- Xie, S.; Girshick, R.; Dollár, P.; Tu, Z.; He, K. Aggregated residual transformations for deep neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 1492–1500.
- Ma, N.; Zhang, X.; Zheng, H.-T.; Sun, J. ShuffleNet V2: Practical guidelines for efficient CNN architecture design. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018; pp. 116–131.
- Iandola, F.N.; Han, S.; Moskewicz, M.W.; Ashraf, K.; Dally, W.J.; Keutzer, K. SqueezeNet: AlexNet-level accuracy with 50× fewer parameters and <0.5 MB model size. arXiv 2016, arXiv:1602.07360.
- Simonyan, K.; Zisserman, A. Very deep convolutional networks for large-scale image recognition. arXiv 2014, arXiv:1409.1556.
- Zagoruyko, S.; Komodakis, N. Wide residual networks. arXiv 2016, arXiv:1605.07146.
- Deng, J.; Dong, W.; Socher, R.; Li, L.J.; Kai, L.; Li, F.-F. ImageNet: A large-scale hierarchical image database. In Proceedings of the 2009 IEEE Conference on Computer Vision and Pattern Recognition, Miami, FL, USA, 20–25 June 2009; pp. 248–255.
- Koo, K.-M.; Cha, E.-Y. Image recognition performance enhancements using image normalization. Hum.-Cent. Comput. Inf. Sci. 2017, 7, 33.
- Phan, T.H.; Yamamoto, K. Resolving class imbalance in object detection with weighted cross entropy losses. arXiv 2020, arXiv:2006.01413.
| Model Name and Reference | Model Version | Highlighted Features | Total Parameters (Million) | Model Size (MB) |
|---|---|---|---|---|
| AlexNet [44] | AlexNet | Early demonstration of parallelized, distributed training across multiple GPUs. | 61.1 | 221.6 |
| DenseNet [45] | DenseNet121 | Connections between each layer and every other layer in a feed-forward fashion; numbers indicate networks with 121, 161, 169, and 201 layers, respectively. | 8.1 | 28.2 |
| | DenseNet161 | | 29.0 | 104.4 |
| | DenseNet169 | | 14.3 | 50.3 |
| | DenseNet201 | | 20.2 | 72.3 |
| DPN [46] | DPN68 | Dual-path architecture; feature re-usage; new-feature exploration; contains 68 layers. | 13.0 | 46.2 |
| EfficientNet [47] | EfficientNet_b0 | Model scaling that balances network depth, width, and resolution; neural architecture search; b0 to b7 correspond to input sizes of (256, 224), (256, 240), (288, 288), (320, 300), (384, 380), (489, 456), (561, 528), and (633, 600) pixels, respectively. | 5.3 | 16.9 |
| | EfficientNet_b1 | | 7.8 | 26.6 |
| | EfficientNet_b2 | | 9.1 | 31.3 |
| | EfficientNet_b3 | | 12.2 | 42.9 |
| | EfficientNet_b4 | | 19.3 | 69.5 |
| | EfficientNet_b5 | | 30.4 | 111.2 |
| | EfficientNet_b6 | | 43.0 | 159.0 |
| | EfficientNet_b7 | | 66.3 | 247.6 |
| Inception [48,49,50,51] | GoogleNet | Increased depth and width of the network while keeping the computational budget constant. | 13.0 | 22.6 |
| | InceptionV3 | Factorized convolutions and aggressive regularization. | 27.2 | 96.1 |
| | InceptionV4 | Combination of Inception architectures with residual connections. | 42.7 | 159.1 |
| | InceptionResNetV2 | | 55.8 | 209.5 |
| | Xception | Depth-wise separable convolutions; dubbed Xception. | 22.9 | 81.8 |
| MnasNet [52] | MnasNet_0.5 | Automated mobile neural architecture search that accounts for model latency on mobile phones; factorized hierarchical search space; 0.5 and 1.0 indicate networks with depth multipliers of 0.5 and 1.0, respectively. | 2.2 | 5.1 |
| | MnasNet_1.0 | | 4.4 | 13.4 |
| MobileNet [53,54] | MobileNetV2 | Inverted residual structure and lightweight depth-wise convolutions while maintaining representational power. | 3.5 | 10.0 |
| | MobileNetV3_Large | Hardware-aware network architecture search complemented by the NetAdapt algorithm; MobileNetV3_Large and MobileNetV3_Small target high- and low-resource use cases, respectively. | 5.5 | 17.6 |
| | MobileNetV3_Small | | 2.5 | 7.0 |
| RegNet [55] | RegNetY_400MF | Parametrized populations of networks, an elevated design-space level, a quantized linear function, and a wide range of flop regimes; RegNetX indicates networks with the X block (a standard residual bottleneck block), and RegNetY indicates networks with the X block plus Squeeze-and-Excitation; 400MF, 800MF, 1.6GF, 3.2GF, 8.0GF, 16GF, and 32GF represent regimes of 400 million to 32 billion floating-point operations, respectively. | 4.3 | 15.6 |
| | RegNetY_800MF | | 6.4 | 22.6 |
| | RegNetY_1.6GF | | 11.2 | 40.7 |
| | RegNetY_3.2GF | | 19.4 | 70.4 |
| | RegNetY_8.0GF | | 39.4 | 145.1 |
| | RegNetY_16GF | | 83.6 | 311.1 |
| | RegNetY_32GF | | 145.0 | 543.7 |
| | RegNetX_400MF | | 5.5 | 20.2 |
| | RegNetX_800MF | | 7.3 | 26.1 |
| | RegNetX_1.6GF | | 9.2 | 32.8 |
| | RegNetX_3.2GF | | 15.3 | 56.0 |
| | RegNetX_8.0GF | | 39.6 | 146.1 |
| | RegNetX_16GF | | 54.3 | 201.9 |
| | RegNetX_32GF | | 107.8 | 405 |
| ResNet [56] | ResNet18 | Residual learning framework with shortcut connections, mitigating vanishing features and achieving decent accuracy in deeper networks; 18, 34, 50, 101, and 152 indicate networks with that many layers. | 11.7 | 43.2 |
| | ResNet34 | | 21.8 | 81.9 |
| | ResNet50 | | 25.6 | 92.1 |
| | ResNet101 | | 44.5 | 164.8 |
| | ResNet152 | | 60.2 | 224.8 |
| ResNeXt [57] | ResNeXt50_32×4d | Highly modularized architecture aggregating a set of transformations with the same topology (cardinality); 50 and 101 refer to networks with 50 and 101 layers, respectively; 32 refers to 32 paths (cardinality) in the widthwise direction; 4d and 8d refer to bottleneck widths of 4 and 8 dimensions per path. | 25.0 | 90.1 |
| | ResNeXt101_32×8d | | 88.8 | 334 |
| ShuffleNet [58] | ShuffleNetV2_×0.5 | Efficiency guidelines based on direct metrics (speed on the target platform) rather than indirect FLOPs; ×0.5 and ×1.0 refer to networks with 0.5× and 1.0× output channels, respectively. | 1.4 | 2.5 |
| | ShuffleNetV2_×1.0 | | 2.3 | 6.0 |
| SqueezeNet [59] | SqueezeNet_1.0 | AlexNet-level accuracy with 50× fewer parameters and <0.5 MB model size; SqueezeNet_1.0 is the original network, while SqueezeNet_1.1 has 2.4× less computation and slightly fewer parameters than the original version. | 1.2 | 3.4 |
| | SqueezeNet_1.1 | | 1.2 | 3.3 |
| VGG [60] | VGG11 | Increased depth using an architecture with very small (3 × 3) convolution filters; 11, 13, 16, and 19 indicate networks with that many layers; BN denotes networks with batch normalization. | 132.9 | 495.4 |
| | VGG11_BN | | 132.9 | 495.5 |
| | VGG13 | | 133.0 | 496.1 |
| | VGG13_BN | | 133.0 | 496.2 |
| | VGG16 | | 138.4 | 516.4 |
| | VGG16_BN | | 138.4 | 516.5 |
| | VGG19 | | 143.7 | 536.6 |
| | VGG19_BN | | 143.7 | 536.7 |
| Wide ResNet [61] | Wide_ResNet50_2 | Decreased depth and increased width of residual networks with a bottleneck block; 50 and 101 refer to networks with 50 and 101 layers, respectively; the 2 denotes a widening factor of two, differentiating the network from ResNet. | 68.9 | 257.4 |
| | Wide_ResNet101_2 | | 126.9 | 479.1 |
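The Total Parameters and Model Size columns are roughly related by 4 bytes per parameter (32-bit floats): size ≈ parameters × 4 / 1024². A quick sanity check on two rows, as a minimal sketch (`approx_size_mb` is an illustrative name; the small mismatch against the listed sizes comes from rounded parameter counts and serialization overhead):

```python
def approx_size_mb(params_million):
    """Estimate an fp32 checkpoint's size: 4 bytes per parameter, in MiB."""
    return params_million * 1e6 * 4 / (1024 ** 2)

print(f"{approx_size_mb(11.7):.0f}")   # ResNet18: ~45 MB vs. 43.2 MB listed
print(f"{approx_size_mb(138.4):.0f}")  # VGG16: ~528 MB vs. 516.4 MB listed
```

This back-of-the-envelope check is useful when deciding whether a model fits the memory budget of an edge device before downloading weights.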
Model | Accuracy (%) | Processing Speed (ms/Image) | CI | Model | Accuracy (%) | Processing Speed (ms/Image) | CI |
---|---|---|---|---|---|---|---|
AlexNet | 96.5 | 36.0 | 7.8 | RegNetY_32GF | 94.7 | 564.0 | 22.6 |
DenseNet121 | 93.0 | 153.5 | 25.6 | RegNetX_400MF | 86.6 | 53.1 | 32.6 |
DenseNet161 | 94.7 | 278.6 | 21.6 | RegNetX_800MF | 84.6 | 70.0 | 36.2 |
DenseNet169 | 94.7 | 183.8 | 19.4 | RegNetX_1.6GF | 84.8 | 99.5 | 36.4 |
DenseNet201 | 94.6 | 224.4 | 21.6 | RegNetX_3.2GF | 86.6 | 142.0 | 35.2 |
DPN68 | 94.4 | 153.1 | 19.8 | RegNetX_8.0GF | 88.0 | 208.6 | 37.0 |
EfficientNet_b0 | 49.4 | 122.4 | 48.2 | RegNetX_16GF | 89.8 | 360.4 | 37.4 |
EfficientNet_b1 | 55.1 | 159.3 | 45.8 | RegNetX_32GF | 92.3 | 574.3 | 32.4 |
EfficientNet_b2 | 54.7 | 171.3 | 46.8 | ResNet18 | 90.5 | 60.3 | 27.6 |
EfficientNet_b3 | 60.0 | 221.3 | 44.6 | ResNet34 | 93.7 | 86.2 | 19.4 |
EfficientNet_b4 | 51.2 | 283.1 | 52.2 | ResNet50 | 91.3 | 153.0 | 29.2 |
EfficientNet_b5 | 51.0 | 425.6 | 54.2 | ResNet101 | 94.2 | 228.7 | 23.4 |
EfficientNet_b6 | 47.3 | 468.2 | 56.0 | ResNet152 | 93.7 | 319.1 | 26.8 |
EfficientNet_b7 | 54.1 | 678.2 | 53.4 | ResNeXt50_32×4d | 93.0 | 180.4 | 25.6 |
GoogleNet | 59.4 | 78.3 | 40.8 | ResNeXt101_32×8d | 96.1 | 419.6 | 18.8 |
InceptionV3 | 81.7 | 112.9 | 38.4 | ShuffleNetV2_×0.5 | 1.2 | 32.3 | 47.4 |
InceptionV4 | 80.6 | 187.0 | 42.0 | ShuffleNetV2_×1.0 | 1.3 | 43.3 | 47.2 |
InceptionResNetV2 | 66.9 | 244.7 | 44.8 | SqueezeNet_1.0 | 95.0 | 62.1 | 12.6 |
Xception | 58.3 | 207.0 | 45.6 | SqueezeNet_1.1 | 95.9 | 45.3 | 9.8 |
MnasNet_0.5 | 2.9 | 46.2 | 46.8 | VGG11 | 96.7 | 127.0 | 10.8 |
MnasNet_1.0 | 57.6 | 66.1 | 41.6 | VGG11_BN | 98.1 | 141.0 | 6.2 |
MobileNetV2 | 91.3 | 77.4 | 26.2 | VGG13 | 98.0 | 175.9 | 9.4 |
MobileNetV3_Large | 95.9 | 60.2 | 11.4 | VGG13_BN | 97.7 | 196.0 | 11.0 |
MobileNetV3_Small | 93.2 | 35.6 | 18.8 | VGG16 | 97.7 | 211.0 | 12.4 |
RegNetY_400MF | 90.7 | 59.6 | 26.4 | VGG16_BN | 98.4 | 239.1 | 9.2 |
RegNetY_800MF | 86.5 | 75.2 | 34.8 | VGG19 | 97.1 | 248.0 | 14.6 |
RegNetY_1.6GF | 88.8 | 103.8 | 32.6 | VGG19_BN | 98.1 | 276.6 | 11.8 |
RegNetY_3.2GF | 91.6 | 150.5 | 27.4 | Wide_ResNet50_2 | 89.6 | 243.7 | 36.6 |
RegNetY_8.0GF | 92.1 | 269.8 | 30.8 | Wide_ResNet101_2 | 90.4 | 404.4 | 37.0 |
RegNetY_16GF | 93.6 | 370.3 | 28.0 | | | |
| | Accuracy | Processing Speed |
|---|---|---|
| Total parameters | 0.389 | 0.517 |
| Model size | 0.391 | 0.521 |
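The values above read as (presumably Pearson) correlation coefficients between model complexity (total parameters, model size) and testing accuracy or processing speed. For reference, the coefficient can be computed with a short pure-Python routine (`pearson_r` is an illustrative name; the toy data below are made up for demonstration, not taken from the paper):

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Perfectly linear toy data gives a coefficient of about 1.0.
print(pearson_r([1, 2, 3, 4], [10, 20, 30, 40]))  # ~1.0
```

The moderate positive coefficients in the table (≈0.39 for accuracy, ≈0.52 for speed) suggest larger models tend to be somewhat more accurate but clearly slower.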
| Model | Cross-Entropy Accuracy (%) | Cross-Entropy Processing Speed (ms/Image) | Weighted Cross-Entropy Accuracy (%) | Weighted Cross-Entropy Processing Speed (ms/Image) | Data Augmentation Accuracy (%) | Data Augmentation Processing Speed (ms/Image) | Model Loading Time (ms) |
|---|---|---|---|---|---|---|---|
AlexNet | 96.5 | 36.0 | 95.8 | 36.3 | 95.7 | 29.7 | 95.7 |
DenseNet161 | 93.0 | 153.5 | 97.3 | 286.2 | 98.3 | 139.1 | 133.0 |
DenseNet169 | 94.7 | 278.6 | 97.6 | 176.1 | 97.9 | 90.2 | 807.2 |
DenseNet201 | 94.7 | 183.8 | 97.1 | 221.5 | 98.2 | 110.5 | 963.3 |
DPN68 | 94.6 | 224.4 | 97.8 | 151.8 | 98.6 | 80.5 | 1183.2 |
MobileNetV3_Large | 94.4 | 153.1 | 97.4 | 61.6 | 95.2 | 39.8 | 261.2 |
MobileNetV3_Small | 96.5 | 36.0 | 95.8 | 35.9 | 86.6 | 28.3 | 186.3 |
RegNetY_32GF | 94.7 | 564.0 | 97.1 | 553.7 | 95.1 | 297.5 | 244.3 |
ResNet34 | 93.7 | 86.2 | 97.0 | 88.3 | 97.6 | 54.7 | 767.4 |
ResNeXt101_32×8d | 96.1 | 419.6 | 98.0 | 419.1 | 98.5 | 210.7 | 2539.9 |
SqueezeNet_1.0 | 95.0 | 62.1 | 92.6 | 62.0 | 78.3 | 39.6 | 120.0 |
SqueezeNet_1.1 | 95.9 | 45.3 | 94.1 | 44.7 | 93.9 | 32.4 | 127.7 |
VGG11 | 96.7 | 127.0 | 96.5 | 128.2 | 97.2 | 77.8 | 3391.3 |
VGG11_BN | 98.1 | 141.0 | 98.2 | 141.8 | 98.0 | 83.3 | 3237.8 |
VGG13 | 98.0 | 175.9 | 95.6 | 176.2 | 98.2 | 99.3 | 3227.4 |
VGG13_BN | 97.7 | 196.0 | 97.9 | 199.6 | 98.5 | 109.5 | 3279.0 |
VGG16 | 97.7 | 211.0 | 96.9 | 213.4 | 97.4 | 117.9 | 3435.4 |
VGG16_BN | 98.4 | 239.1 | 97.7 | 238.5 | 98.7 | 125.2 | 3414.0 |
VGG19 | 97.1 | 248.0 | 95.7 | 249.5 | 97.8 | 137.3 | 3525.2 |
VGG19_BN | 98.1 | 276.6 | 98.5 | 274.8 | 97.8 | 159.6 | 3554.4 |
Average ± SD | 96.1 ± 1.6 | 192.9 ± 129.8 | 96.7 ± 1.5 | 188.0 ± 131.5 | 95.9 ± 5.0 | 103.2 ± 66.5 | 1724.7 ± 1494.2 |
Development Strategy | Number of Cattle with 0% Identification Accuracy | Number of Cattle with 100% Identification Accuracy | Accuracy (%, Excluding 100% and 0%) |
---|---|---|---|
Cross-entropy | 4 | 248 | 96.2 ± 15.1 |
Weighted cross-entropy | 4 | 254 | 97.5 ± 13.3 |
Data augmentation | 3 | 255 | 97.7 ± 12.3 |
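Weighted cross-entropy, one of the strategies compared above, scales each sample's loss by the weight of its true class (the per-animal weights in Appendix A), so misclassifying an under-represented animal is penalized more heavily. A minimal pure-Python sketch of the per-sample loss (`weighted_ce` is an illustrative name; deep learning frameworks implement the same idea via a per-class weight vector passed to the loss function):

```python
import math

def weighted_ce(probs, target, weights):
    """Per-sample weighted cross-entropy: -w[target] * log(p[target])."""
    return -weights[target] * math.log(probs[target])

# Same predicted probability (0.5) for the true class, but the
# under-represented class (weight 17.5) incurs a much larger loss
# than the best-represented class (weight 1.0).
p = [0.5, 0.5]
print(weighted_ce(p, 0, weights=[17.5, 1.0]))  # 17.5 * ln 2 ≈ 12.13
print(weighted_ce(p, 1, weights=[17.5, 1.0]))  # 1.0 * ln 2 ≈ 0.69
```

Averaging such weighted losses over a batch pushes the optimizer to fit rare animals, which is consistent with the drop from 4 to 3–4 animals at 0% accuracy reported above.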
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations. |
© 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Li, G.; Erickson, G.E.; Xiong, Y. Individual Beef Cattle Identification Using Muzzle Images and Deep Learning Techniques. Animals 2022, 12, 1453. https://doi.org/10.3390/ani12111453