A Method for Obtaining 3D Point Cloud Data by Combining 2D Image Segmentation and Depth Information of Pigs
Simple Summary
Abstract
1. Introduction
2. Materials and Methods
2.1. Data Acquisition and Preprocessing
2.2. Fast 3D Detection of Pigs Based on YOLOv5s
2.2.1. Improved Backbone with Res2Net
2.2.2. Acquisition of 3D Information on Pigs
Algorithm 1: Obtain the 3D point cloud coordinates of the detected object in the 2D image
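The paper's Algorithm 1 pseudocode is not reproduced here, but its core operation, lifting the pixels covered by a 2D segmentation mask into 3D camera coordinates through the aligned depth map and the pinhole intrinsics, can be sketched as follows. This is an illustrative sketch, not the authors' code: the function name, the intrinsic parameters, and the millimetre-to-metre `depth_scale` are assumptions.

```python
import numpy as np

def mask_depth_to_point_cloud(mask, depth, fx, fy, cx, cy, depth_scale=0.001):
    """Back-project mask pixels into 3D camera-frame points with the
    pinhole model:
        Z = depth(v, u) * depth_scale
        X = (u - cx) * Z / fx
        Y = (v - cy) * Z / fy
    Pixels with zero depth (invalid measurements) are discarded."""
    v, u = np.nonzero(mask)                     # pixel coords inside the mask
    z = depth[v, u].astype(np.float64) * depth_scale
    valid = z > 0                               # drop missing depth readings
    u, v, z = u[valid], v[valid], z[valid]
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=1)          # (N, 3) point cloud
```

In a pipeline like the one described, this function would be applied once per detected pig, using the instance mask produced by the 2D segmentation network and the depth frame aligned to the colour image.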
3. Experimental Setup
3.1. Experimental Platform
3.2. Experimental Dataset Construction
3.3. Network Training Parameters
3.4. Comparison with State-of-the-Art Methods
3.5. Performance Evaluation
4. Results
4.1. Two-Dimensional Pig Image Detection and Segmentation Results
4.2. Three-Dimensional Point Cloud Based on Pig Segmentation Mask and Depth Information
5. Discussion
5.1. Advantages and Limitations
5.2. Generalization of the Model
6. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
- de Paulo Farias, D.; dos Santos Gomes, M.G. COVID-19 outbreak: What should be done to avoid food shortages? Trends Food Sci. Technol. 2020, 102, 291. [Google Scholar] [CrossRef]
- Qiao, Y.; Truman, M.; Sukkarieh, S. Cattle segmentation and contour extraction based on Mask R-CNN for precision livestock farming. Comput. Electron. Agric. 2019, 165, 104958. [Google Scholar] [CrossRef]
- Luo, X.; Yang, R.; Xu, Q. Spatial mismatch evolution of global population and food and its influencing factors. J. Nat. Resour. 2021, 36, 1381–1397. [Google Scholar] [CrossRef]
- Woonwong, Y.; Do Tien, D.; Thanawongnuwech, R. The future of the pig industry after the introduction of African swine fever into Asia. Anim. Front. 2020, 10, 30–37. [Google Scholar] [CrossRef]
- OECD; Food and Agriculture Organization of the United Nations. OECD-FAO Agricultural Outlook 2022–2031; OECD Publishing: Paris, France, 2022; 363p. [Google Scholar] [CrossRef]
- Qiao, Y.; Kong, H.; Clark, C.; Lomax, S.; Su, D.; Eiffert, S.; Sukkarieh, S. Intelligent perception for cattle monitoring: A review for cattle identification, body condition score evaluation, and weight estimation. Comput. Electron. Agric. 2021, 185, 106143. [Google Scholar] [CrossRef]
- Benjamin, M.; Yik, S. Precision Livestock Farming in Swine Welfare: A Review for Swine Practitioners. Animals 2019, 9, 133. [Google Scholar] [CrossRef] [Green Version]
- Wang, S.; Jiang, H.; Qiao, Y.; Jiang, S.; Lin, H.; Sun, Q. The Research Progress of Vision-Based Artificial Intelligence in Smart Pig Farming. Sensors 2022, 22, 6541. [Google Scholar] [CrossRef]
- Zhang, L.; Gray, H.; Ye, X.; Collins, L.; Allinson, N. Automatic Individual Pig Detection and Tracking in Pig Farms. Sensors 2019, 19, 1188. [Google Scholar] [CrossRef] [Green Version]
- Guo, Q.; Sun, Y.; Min, L.; van Putten, A.; Knol, E.; Visser, B.; Rodenburg, T.; Bolhuis, L.; Bijma, P.; de With, P.H.N. Video-based Detection and Tracking with Improved Re-Identification Association for Pigs and Laying Hens in Farms. In Proceedings of the 17th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications, Online, 6–8 February 2022; Farinella, G., Radeva, P., Bouatouch, K., Eds.; SciTePress: Setúbal, Portugal, 2022. [Google Scholar] [CrossRef]
- Cowton, J.; Kyriazakis, I.; Bacardit, J. Automated Individual Pig Localisation, Tracking and Behaviour Metric Extraction Using Deep Learning. IEEE Access 2019, 7, 108049–108060. [Google Scholar] [CrossRef]
- Qiao, Y.; Chai, L.; He, D.; Su, D. (Eds.) Advances in Sensors, Big Data and Machine Learning in Intelligent Animal Farming; MDPI: Basel, Switzerland, 2022. [Google Scholar]
- Yang, Q.; Xiao, D.; Lin, S. Feeding behavior recognition for group-housed pigs with the Faster R-CNN. Comput. Electron. Agric. 2018, 155, 453–460. [Google Scholar] [CrossRef]
- Riekert, M.; Klein, A.; Adrion, F.; Hoffmann, C.; Gallmann, E. Automatically detecting pig position and posture by 2D camera imaging and deep learning. Comput. Electron. Agric. 2020, 174, 105391. [Google Scholar] [CrossRef]
- Yin, D.; Tang, W.; Chen, P.; Yang, B. Pig Target Detection from Image Based on Improved YOLO V3. In Advances in Artificial Intelligence and Security, Proceedings of the 7th International Conference, ICAIS 2021, Dublin, Ireland, 19–23 July 2021; Sun, X., Zhang, X., Xia, Z., Bertino, E., Eds.; Springer: Cham, Switzerland, 2021; pp. 94–104. [Google Scholar] [CrossRef]
- Ahn, H.; Son, S.; Kim, H.; Lee, S.; Chung, Y.; Park, D. EnsemblePigDet: Ensemble Deep Learning for Accurate Pig Detection. Appl. Sci. 2021, 11, 5577. [Google Scholar] [CrossRef]
- Wutke, M.; Heinrich, F.; Das, P.P.; Lange, A.; Gentz, M.; Traulsen, I.; Warns, F.K.; Schmitt, A.O.; Gültas, M. Detecting Animal Contacts—A Deep Learning-Based Pig Detection and Tracking Approach for the Quantification of Social Contacts. Sensors 2021, 21, 7512. [Google Scholar] [CrossRef]
- Bo, Z.; Atif, O.; Lee, J.; Park, D.; Chung, Y. Gan-based video denoising with attention mechanism for field-applicable pig detection system. Sensors 2022, 22, 3917. [Google Scholar] [CrossRef]
- Kim, J.; Suh, Y.; Lee, J.; Chae, H.; Ahn, H.; Chung, Y.; Park, D. EmbeddedPigCount: Pig Counting with Video Object Detection and Tracking on an Embedded Board. Sensors 2022, 22, 2689. [Google Scholar] [CrossRef]
- Xiao, D.; Lin, S.; Liu, Y.; Yang, Q.; Wu, H. Group-housed pigs and their body parts detection with Cascade Faster R-CNN. Int. J. Agric. Biol. Eng. 2022, 15, 203–209. [Google Scholar] [CrossRef]
- Tu, S.; Yuan, W.; Liang, Y.; Wang, F.; Wan, H. Automatic detection and segmentation for group-housed pigs based on PigMS R-CNN. Sensors 2021, 21, 3251. [Google Scholar] [CrossRef]
- Hu, Z.; Yang, H.; Lou, T. Dual attention-guided feature pyramid network for instance segmentation of group pigs. Comput. Electron. Agric. 2021, 186, 106140. [Google Scholar] [CrossRef]
- Zhai, X.; Tian, J.; Li, J. Instance segmentation method of adherent targets in pig images based on improved mask R-CNN. In Proceedings of the 2021 33rd Chinese Control and Decision Conference (CCDC), Kunming, China, 22–24 May 2021; pp. 368–373. [Google Scholar]
- Liu, C.; Su, J.; Wang, L.; Lu, S.; Li, L. LA-DeepLab V3+: A Novel Counting network for pigs. Agriculture 2022, 12, 284. [Google Scholar] [CrossRef]
- Lu, J.; Wang, W.; Zhao, K.; Wang, H. Recognition and segmentation of individual pigs based on Swin Transformer. Anim. Genet. 2022, 53, 794–802. [Google Scholar] [CrossRef]
- Gan, H.; Ou, M.; Li, C.; Wang, X.; Guo, J.; Mao, A.; Ceballos, M.C.; Parsons, T.D.; Liu, K.; Xue, Y. Automated detection and analysis of piglet suckling behaviour using high-accuracy amodal instance segmentation. Comput. Electron. Agric. 2022, 199, 107162. [Google Scholar] [CrossRef]
- Zhong, Z. A novel visible and infrared image fusion method based on convolutional neural network for pig-body feature detection. Multimed. Tools Appl. 2022, 81, 2757–2775. [Google Scholar] [CrossRef]
- Song, Y.; Jin, T.; Dai, Y.; Song, Y.; Zhou, X. Through-wall human pose reconstruction via UWB MIMO radar and 3D CNN. Remote Sens. 2021, 13, 241. [Google Scholar] [CrossRef]
- Zhao, H.; Yang, D.; Yu, J. 3D target detection using dual domain attention and SIFT operator in indoor scenes. Vis. Comput. 2022, 38, 3765–3774. [Google Scholar] [CrossRef]
- Liang, Z.; Chen, C.; Chen, Y.; Chen, G.; Liu, G. Point cloud classification and segmentation network based on double local features fusion of high-dimensional and low-dimensional. Laser Infrared 2022, 52, 1557–1564. [Google Scholar]
- Camuffo, E.; Mari, D.; Milani, S. Recent Advancements in Learning Algorithms for Point Clouds: An Updated Overview. Sensors 2022, 22, 1357. [Google Scholar] [CrossRef]
- Su, H.; Maji, S.; Kalogerakis, E.; Learned-Miller, E. Multi-view convolutional neural networks for 3d shape recognition. In Proceedings of the IEEE International Conference on Computer Vision, Santiago, Chile, 13–16 December 2015; pp. 945–953. [Google Scholar]
- Feng, Y.; Zhang, Z.; Zhao, X.; Ji, R.; Gao, Y. GVCNN: Group-view convolutional neural networks for 3d shape recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 264–272. [Google Scholar]
- Wu, B.; Wan, A.; Yue, X.; Keutzer, K. SqueezeSeg: Convolutional neural nets with recurrent CRF for real-time road-object segmentation from 3D LiDAR point cloud. In Proceedings of the 2018 IEEE International Conference on Robotics and Automation (ICRA), Brisbane, QLD, Australia, 21–25 May 2018; pp. 1887–1893. [Google Scholar]
- Maturana, D.; Scherer, S. Voxnet: A 3D convolutional neural network for real-time object recognition. In Proceedings of the 2015 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Hamburg, Germany, 28 September–2 October 2015; pp. 922–928. [Google Scholar]
- Tchapmi, L.; Choy, C.; Armeni, I.; Gwak, J.; Savarese, S. Segcloud: Semantic segmentation of 3D point clouds. In Proceedings of the 2017 International Conference on 3D Vision (3DV), Qingdao, China, 10–12 October 2017; pp. 537–547. [Google Scholar]
- Shi, S.; Wang, X.; Li, H. PointRCNN: 3D object proposal generation and detection from point cloud. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 16–20 June 2019; pp. 770–779. [Google Scholar]
- Yang, Z.; Sun, Y.; Liu, S.; Shen, X.; Jia, J. STD: Sparse-to-dense 3D object detector for point cloud. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Seoul, Republic of Korea, 27 October–2 November 2019; pp. 1951–1960. [Google Scholar]
- Xie, J.; Zheng, Z.; Gao, R.; Wang, W.; Zhu, S.C.; Wu, Y.N. Generative VoxelNet: Learning energy-based models for 3D shape synthesis and analysis. IEEE Trans. Pattern Anal. Mach. Intell. 2020, 44, 2468–2484. [Google Scholar] [CrossRef]
- Qi, C.R.; Su, H.; Mo, K.; Guibas, L.J. PointNet: Deep learning on point sets for 3D classification and segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 652–660. [Google Scholar]
- Qi, C.R.; Yi, L.; Su, H.; Guibas, L.J. Pointnet++: Deep hierarchical feature learning on point sets in a metric space. Adv. Neural Inf. Process. Syst. 2017, 30, 1–10. [Google Scholar]
- Zhao, H.; Jiang, L.; Fu, C.W.; Jia, J. PointWeb: Enhancing local neighborhood features for point cloud processing. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 5565–5573. [Google Scholar]
- Wu, W.; Qi, Z.; Fuxin, L. PointConv: Deep convolutional networks on 3D point clouds. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 9621–9630. [Google Scholar]
- Wang, Y.; Sun, Y.; Liu, Z.; Sarma, S.E.; Bronstein, M.M.; Solomon, J.M. Dynamic graph cnn for learning on point clouds. ACM Trans. Graph. 2019, 38, 1–12. [Google Scholar] [CrossRef] [Green Version]
- Lu, Q.; Chen, C.; Xie, W.; Luo, Y. PointNGCNN: Deep convolutional networks on 3D point clouds with neighborhood graph filters. Comput. Graph. 2020, 86, 42–51. [Google Scholar] [CrossRef]
- Lan, H.; Chen, H.; Zhang, P. Point Cloud Classification and Segmentation Model Based on Graph Convolution and 3D Direction Convolution. Comput. Eng. Appl. 2023, 59, 182–191. [Google Scholar]
- Zhang, K.; Zhu, Y.; Wang, X.; Zhang, Y.; Zhong, R. Three-Dimensional Point Cloud Semantic Segmentation Network Based on Spatial Graph Convolution Network. Laser Optoelectron. Prog. 2023, 60, 387–395. [Google Scholar]
- Chen, C.; Fragonara, L.Z.; Tsourdos, A. GAPNet: Graph attention based point neural network for exploiting local feature of point cloud. arXiv 2019, arXiv:1905.08705. [Google Scholar]
- Yang, J.; Dang, J. Semantic segmentation of 3D point cloud based on contextual attention CNN. J. Commun. 2020, 7, 195–203. [Google Scholar]
- Hu, Q.; Yang, B.; Xie, L.; Rosa, S.; Guo, Y.; Wang, Z.; Trigoni, N.; Markham, A. RandLA-Net: Efficient semantic segmentation of large-scale point clouds. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Online, 14–19 June 2020; pp. 11108–11117. [Google Scholar]
- Wu, J.; Cui, Y.; Zhao, X.; Chen, R.; Xu, G. SSA-PointNet++: A Space Self-Attention CNN for the Semantic Segmentation of 3D Point Cloud. J. Comput.-Aided Des. Comput. Graph. 2022, 34, 437–448. [Google Scholar]
- Vanian, V.; Zamanakos, G.; Pratikakis, I. Improving performance of deep learning models for 3D point cloud semantic segmentation via attention mechanisms. Comput. Graph. 2022, 106, 277–287. [Google Scholar] [CrossRef]
- Liu, H.; Huo, G.; Li, Q.; Guan, X.; Tseng, M.L. Multiscale lightweight 3D segmentation algorithm with attention mechanism: Brain tumor image segmentation. Expert Syst. Appl. 2023, 214, 119166. [Google Scholar] [CrossRef]
- Ma, Y.; Chen, X.; Zhang, W.; Zheng, L.; Gao, W.; Wang, M. An Approach of On-barn Pig Weight Estimation via 3D Computer Vision by Kinect V2. In Proceedings of the 2020 American Society of Agricultural and Biological Engineers (ASABE) Annual International Virtual Meeting, Online, 13–15 July 2020; p. 1. [Google Scholar]
- Li, G.; Liu, X.; Ma, Y.; Wang, B.; Zheng, L.; Wang, M. Body size measurement and live body weight estimation for pigs based on back surface point clouds. Biosyst. Eng. 2022, 218, 10–22. [Google Scholar] [CrossRef]
- Yu, H.; Lee, K.; Morota, G. Forecasting dynamic body weight of nonrestrained pigs from images using an RGB-D sensor camera. Transl. Anim. Sci. 2021, 5, txab006. [Google Scholar] [CrossRef]
- Zhang, J.; Zhuang, Y.; Ji, H.; Teng, G. Pig weight and body size estimation using a multiple output regression convolutional neural network: A fast and fully automatic method. Sensors 2021, 21, 3218. [Google Scholar] [CrossRef]
- Nguyen, A.H.; Holt, J.P.; Knauer, M.T.; Abner, V.A.; Lobaton, E.J.; Young, S.N. Towards rapid weight assessment of finishing pigs using a handheld, mobile RGB-D camera. Biosyst. Eng. 2023, 226, 155–168. [Google Scholar] [CrossRef]
- Liu, T.; Teng, G.; Zhang, S.; Li, Z.; Guo, P. Reconstruction and application of 3D pig body model based on point cloud data. Trans. Chin. Soc. Agric. Mach. 2014, 45, 291–295. [Google Scholar]
- He, H.; Qiao, Y.; Li, X.; Chen, C.; Zhang, X. Automatic weight measurement of pigs based on 3D images and regression network. Comput. Electron. Agric. 2021, 187, 106299. [Google Scholar] [CrossRef]
- Yi, Y.; Peng, Y.; Qi, J. Research on image similarity retrieval algorithm based on dual hash. Inf. Commun. Technol. 2019, 13, 33–38. [Google Scholar]
- Jocher, G.; Chaurasia, A.; Stoken, A.; Borovec, J.; NanoCode012; Kwon, Y.; TaoXie; Fang, J.; imyhxy; Michael, K.; et al. ultralytics/yolov5: V6.1—TensorRT, TensorFlow Edge TPU and OpenVINO Export and Inference. Zenodo 2022. [Google Scholar] [CrossRef]
- Gao, S.H.; Cheng, M.M.; Zhao, K.; Zhang, X.Y.; Yang, M.H.; Torr, P. Res2Net: A new multi-scale backbone architecture. IEEE Trans. Pattern Anal. Mach. Intell. 2021, 43, 652–662. [Google Scholar] [CrossRef] [Green Version]
- Labelme, Version 5.0.1; Massachusetts Institute of Technology: Cambridge, MA, USA, 2023. Available online: http://labelme.csail.mit.edu (accessed on 7 June 2023).
- Liu, W.; Anguelov, D.; Erhan, D.; Szegedy, C.; Reed, S.; Fu, C.Y.; Berg, A.C. SSD: Single shot multibox detector. In Computer Vision—ECCV 2016, Proceedings of the 14th European Conference, Amsterdam, The Netherlands, 11–14 October 2016; Leibe, B., Matas, J., Sebe, N., Welling, M., Eds.; Springer: Cham, Switzerland, 2016; Volume 9905, pp. 21–37. [Google Scholar] [CrossRef] [Green Version]
- Ren, S.; He, K.; Girshick, R.; Sun, J. Faster r-cnn: Towards real-time object detection with region proposal networks. IEEE Trans. Pattern Anal. Mach. Intell. 2017, 39, 1137–1149. [Google Scholar] [CrossRef] [Green Version]
- Lin, T.Y.; Goyal, P.; Girshick, R.; He, K.; Dollár, P. Focal loss for dense object detection. IEEE Trans. Pattern Anal. Mach. Intell. 2020, 42, 318–327. [Google Scholar] [CrossRef] [Green Version]
- He, K.; Gkioxari, G.; Dollár, P.; Girshick, R. Mask r-cnn. In Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017; pp. 2961–2969. [Google Scholar]
- Long, J.; Shelhamer, E.; Darrell, T. Fully convolutional networks for semantic segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Boston, MA, USA, 7–12 June 2015; pp. 3431–3440. [Google Scholar]
- Chen, L.C.; Papandreou, G.; Schroff, F.; Adam, H. Rethinking atrous convolution for semantic image segmentation. arXiv 2017, arXiv:1706.05587. [Google Scholar]
- Simonyan, K.; Zisserman, A. Very deep convolutional networks for large-scale image recognition. arXiv 2014, arXiv:1409.1556. [Google Scholar] [CrossRef]
| Hardware | Type |
|---|---|
| CPU | 11th Gen Intel Core i7-11700 @ 2.50 GHz |
| GPU | NVIDIA GeForce GTX 1080 Ti |
| Memory | 16.0 GB |
| Video memory | 11.0 GB |
| Hard disk | 4 TB |
| Parameter | Value | Remarks |
|---|---|---|
| Image size | 640 | Images were resized to 640 × 640 during training. |
| Batch size | 32 | Largest batch size that fit in the available GPU memory. |
| Training epochs | 100 | Sufficient for the network to be well trained. |
| Optimiser | SGD | Chosen to suit the limited GPU memory. |
| Learning rate | 0.04 | Learning rate used during model training. |
| Initial weights | Random | No pretrained weights were used. |
| Predict_conf_thres | 0.9 | Confidence threshold for predicted boxes. |
| Hyperparameter | hyp.scratch-low.yaml | Hyperparameter file in YOLOv5s. |
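For readers reproducing this setup, the parameters in the table map onto YOLOv5 v6.1 training flags roughly as below. This is a hedged configuration sketch, not the authors' actual command: the dataset YAML name is hypothetical, and the learning rate of 0.04 would be set as `lr0` inside the hyperparameter file rather than on the command line.

```shell
# Sketch of a YOLOv5 v6.1 training invocation matching the table above.
# `pigs.yaml` is a hypothetical dataset config; lr0=0.04 is edited into
# the hyp file, since YOLOv5 reads the learning rate from there.
python train.py \
    --img 640 \
    --batch 32 \
    --epochs 100 \
    --optimizer SGD \
    --weights '' \
    --hyp data/hyps/hyp.scratch-low.yaml \
    --data pigs.yaml
```

The confidence threshold of 0.9 applies at inference time (e.g. `detect.py --conf-thres 0.9`), not during training.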
| Authors, Year | Method | Dataset | mAP@0.5-D | mAP@0.5:0.95-D | mAP@0.5-S | mAP@0.5:0.95-S | Global ACC |
|---|---|---|---|---|---|---|---|
| Ren et al., 2015 [66] | Faster R-CNN | Our dataset | 94.5 | 69.0 | — | — | — |
| Liu et al., 2016 [65] | SSD | Our dataset | 84.7 | 54.7 | — | — | — |
| Lin et al., 2017 [67] | RetinaNet | Our dataset | 95.6 | 76.7 | — | — | — |
| Ren et al., 2015 [66] | Faster R-CNN | UK dataset | 99.0 | 86.0 | — | — | — |
| Liu et al., 2016 [65] | SSD | UK dataset | 98.2 | 70.0 | — | — | — |
| Lin et al., 2017 [67] | RetinaNet | UK dataset | 96.3 | 70.7 | — | — | — |
| Long et al., 2015 [69] | FCN | Our dataset | — | — | — | 79.7 | 94.6 |
| Chen et al., 2017 [70] | DeepLabv3 | Our dataset | — | — | — | 77.1 | 93.0 |
| He et al., 2017 [68] | Mask R-CNN | Our dataset | 93.7 | 73.0 | 92.6 | 78.0 | — |
| Long et al., 2015 [69] | FCN | UK dataset | — | — | — | 89.3 | 97.1 |
| Chen et al., 2017 [70] | DeepLabv3 | UK dataset | — | — | — | 87.8 | 96.2 |
| He et al., 2017 [68] | Mask R-CNN | UK dataset | 95.7 | 84.1 | 93.3 | 82.3 | — |
| Jocher et al., 2022 [62] | YOLOv5s | Our dataset | 99.4 | 85.3 | 99.4 | 81.6 | — |
| Our work | YOLOv5s_Res2Net | Our dataset | 99.5 | 89.6 | 99.5 | 84.8 | — |
| Jocher et al., 2022 [62] | YOLOv5s | UK dataset | 99.3 | 90.5 | 99.3 | 87.2 | — |
| Our work | YOLOv5s_Res2Net | UK dataset | 99.4 | 93.4 | 99.4 | 89.4 | — |
© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Wang, S.; Jiang, H.; Qiao, Y.; Jiang, S. A Method for Obtaining 3D Point Cloud Data by Combining 2D Image Segmentation and Depth Information of Pigs. Animals 2023, 13, 2472. https://doi.org/10.3390/ani13152472