Iterative Removal of G-PCC Attribute Compression Artifacts Based on a Graph Neural Network
Abstract
1. Introduction
- (1) In view of the loss of high-frequency information that attribute compression may cause, a bi-branch attention module is designed to capture the basic structural information while mining and enhancing the high-frequency features of PCs. By efficiently fusing the extracted features, the module not only ensures that the features complement one another but also promotes their synergy, further improving the comprehensiveness and accuracy of the feature representation;
- (2) Given that the attributes of neighboring points in a compressed PC should exhibit a certain degree of spatial consistency, a feature constraint mechanism based on global features is proposed. By regulating local regions with global features, it effectively prevents abrupt attribute changes or inconsistencies during reconstruction, thereby significantly enhancing the attribute continuity of PCs;
- (3) This paper proposes an iterative removal method for G-PCC attribute artifacts based on a GNN; to the best of our knowledge, this is the first time that iterative optimization has been introduced into PC attribute artifact removal (a minimal sketch of the overall idea follows this list). The method not only improves adaptability to complex artifacts but also significantly improves the visual rendering quality of compressed PCs.
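To make these contributions more concrete, the following is a minimal PyTorch sketch of the overall idea: a bi-branch block that separates low- and high-frequency attribute components and fuses them with an attention-style gate, a global feature that regulates the per-point offset prediction, and an iterative refinement loop. All module names, channel sizes, the k-NN graph construction, and the number of iterations are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

def knn_indices(xyz, k):
    # Indices of the k nearest neighbors of each point (self included), (N, 3) -> (N, k)
    dist = torch.cdist(xyz, xyz)                   # (N, N) pairwise Euclidean distances
    return dist.topk(k, largest=False).indices

class BiBranchAttention(nn.Module):
    """Illustrative bi-branch block: one branch keeps the base (low-frequency) structure,
    the other emphasizes the high-frequency residual (point feature minus local mean)."""
    def __init__(self, c_in, c_out):
        super().__init__()
        self.base = nn.Linear(c_in, c_out)
        self.high = nn.Linear(c_in, c_out)
        self.gate = nn.Sequential(nn.Linear(2 * c_out, c_out), nn.Sigmoid())

    def forward(self, feat, idx):
        local_mean = feat[idx].mean(dim=1)         # (N, C) low-frequency component
        high_freq = feat - local_mean              # (N, C) high-frequency residual
        b, h = self.base(feat), self.high(high_freq)
        g = self.gate(torch.cat([b, h], dim=-1))   # attention-style fusion gate
        return g * b + (1 - g) * h

class OffsetEstimator(nn.Module):
    """Predicts a per-point attribute offset, regulated by a pooled global feature."""
    def __init__(self, c_feat=64):
        super().__init__()
        self.block = BiBranchAttention(3, c_feat)
        self.head = nn.Sequential(nn.Linear(2 * c_feat, c_feat), nn.ReLU(), nn.Linear(c_feat, 3))

    def forward(self, attr, idx):
        f = self.block(attr, idx)                  # (N, C) local features from attributes
        g = f.max(dim=0, keepdim=True).values      # (1, C) global feature via max-pooling
        f = torch.cat([f, g.expand_as(f)], dim=-1) # global feature constrains each point
        return self.head(f)                        # (N, 3) predicted attribute offset

def iterative_refine(xyz, attr, model, k=16, n_iter=3):
    """Iteratively add the estimated offset to the decoded attributes."""
    idx = knn_indices(xyz, k)
    for _ in range(n_iter):
        attr = attr + model(attr, idx)             # residual update at each iteration
    return attr.clamp(0.0, 1.0)

# Toy usage on random data (geometry and RGB attributes normalized to [0, 1]).
xyz, attr = torch.rand(1024, 3), torch.rand(1024, 3)
refined = iterative_refine(xyz, attr, OffsetEstimator())
print(refined.shape)  # torch.Size([1024, 3])
```

In this sketch the offset is re-estimated from the already-refined attributes at every pass, which is one simple way to realize iterative artifact removal; the paper's actual iteration schedule, network depth, and training loss are those described in Sections 3 and 4.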
2. Related Work
2.1. Point Cloud Attribute Compression
2.2. Point Cloud Attribute Compression Artifact Removal
2.3. Point Cloud Graph Neural Networks
3. Methodology
3.1. Point Cloud Chunking
3.2. Feature Extraction Module
3.3. Point Cloud Attribute Offset Estimation Module
3.4. Iterative Removal of Artifacts
4. Experiments
4.1. Experiment Setup
4.2. Quantitative Quality Evaluation
4.3. Qualitative Quality Evaluation
4.4. Extended Application in Terms of RAHT
4.5. Spatial, Temporal, and Computational Complexity
4.6. Ablation Experiments
5. Conclusions
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
References
- He, Y.; Li, B.; Ruan, J.; Yu, A.; Hou, B. ZUST Campus: A Lightweight and Practical LiDAR SLAM Dataset for Autonomous Driving Scenarios. Electronics 2024, 13, 1341.
- Gamelin, G.; Chellali, A.; Cheikh, S.; Ricca, A.; Dumas, C.; Otmane, S. Point-cloud avatars to improve spatial communication in immersive collaborative virtual environments. Pers. Ubiquitous Comput. 2021, 25, 467–484.
- Sun, X.; Song, S.; Miao, Z.; Tang, P.; Ai, L. LiDAR Point Clouds Semantic Segmentation in Autonomous Driving Based on Asymmetrical Convolution. Electronics 2023, 12, 4926.
- Wang, Q.; Kim, M.K. Applications of 3D point cloud data in the construction industry: A fifteen-year review from 2004 to 2018. Adv. Eng. Inform. 2019, 39, 306–319.
- Gao, P.; Zhang, L.; Lei, L.; Xiang, W. Point Cloud Compression Based on Joint Optimization of Graph Transform and Entropy Coding for Efficient Data Broadcasting. IEEE Trans. Broadcast. 2023, 69, 727–739.
- Li, D.; Ma, K.; Wang, J.; Li, G. Hierarchical Prior-Based Super Resolution for Point Cloud Geometry Compression. IEEE Trans. Image Process. 2024, 33, 1965–1976.
- Graziosi, D.; Nakagami, O.; Kuma, S.; Zaghetto, A.; Suzuki, T.; Tabatabai, A. An overview of ongoing point cloud compression standardization activities: Video-based (V-PCC) and geometry-based (G-PCC). APSIPA Trans. Signal Inf. Process. 2020, 9, e13.
- MPEG 3DG. V-PCC Codec Description. In Document ISO/IEC JTC 1/SC 29/WG 11 MPEG, N19526; ISO/IEC: Newark, DE, USA, 2020.
- MPEG 3DG. G-PCC Codec Description. In Document ISO/IEC JTC 1/SC 29/WG 7 MPEG, N00271; ISO/IEC: Newark, DE, USA, 2020.
- Sullivan, G.J.; Ohm, J.R.; Han, W.J.; Wiegand, T. Overview of the high efficiency video coding (HEVC) standard. IEEE Trans. Circuits Syst. Video Technol. 2012, 22, 1649–1668.
- Bross, B.; Chen, J.; Ohm, J.R.; Sullivan, G.J.; Wang, Y.K. Developments in international video coding standardization after AVC, with an overview of versatile video coding (VVC). Proc. IEEE 2021, 109, 1463–1493.
- Huang, Y.; Peng, J.; Kuo, C.C.J.; Gopi, M. A generic scheme for progressive point cloud coding. IEEE Trans. Vis. Comput. Graph. 2008, 14, 440–453.
- Jackins, C.L.; Tanimoto, S.L. Oct-trees and their use in representing three-dimensional objects. Comput. Graph. Image Process. 1980, 14, 249–270.
- Schnabel, R.; Klein, R. Octree-based Point-Cloud Compression. PBG@SIGGRAPH 2006, 3, 111–121.
- Anis, A.; Chou, P.A.; Ortega, A. Compression of dynamic 3D point clouds using subdivisional meshes and graph wavelet transforms. In Proceedings of the 2016 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Shanghai, China, 20–25 March 2016; pp. 6360–6364.
- Pavez, E.; Chou, P.A.; De Queiroz, R.L.; Ortega, A. Dynamic polygon clouds: Representation and compression for VR/AR. APSIPA Trans. Signal Inf. Process. 2018, 7, e15.
- Mammou, K.; Tourapis, A.M.; Singer, D.; Su, Y. Video-based and hierarchical approaches point cloud compression. In Document ISO/IEC JTC1/SC29/WG11 m41649; ISO/IEC: Macau, China, 2017.
- Mammou, K.; Tourapis, A.; Kim, J.; Robinet, F.; Valentin, V.; Su, Y. Lifting scheme for lossy attribute encoding in TMC1. In Document ISO/IEC JTC1/SC29/WG11 m42640; ISO/IEC: San Diego, CA, USA, 2018.
- De Queiroz, R.L.; Chou, P.A. Compression of 3D point clouds using a region-adaptive hierarchical transform. IEEE Trans. Image Process. 2016, 25, 3947–3956.
- PCC Content Database. Available online: https://mpeg-pcc.org/index.php/pcc-content-database (accessed on 9 April 2023).
- Karczewicz, M.; Hu, N.; Taquet, J.; Chen, C.-Y.; Misra, K.; Andersson, K.; Yin, P.; Lu, T.; François, E.; Chen, J. VVC in-loop filters. IEEE Trans. Circuits Syst. Video Technol. 2021, 31, 3907–3925.
- Ma, D.; Zhang, F.; Bull, D.R. MFRNet: A new CNN architecture for post-processing and in-loop filtering. IEEE J. Sel. Top. Signal Process. 2020, 15, 378–387.
- Nasiri, F.; Hamidouche, W.; Morin, L.; Dhollande, N.; Cocherel, G. A CNN-based prediction-aware quality enhancement framework for VVC. IEEE Open J. Signal Process. 2021, 2, 466–483.
- Tsai, C.Y.; Chen, C.Y.; Yamakage, T.; Chong, I.S.; Huang, Y.-W.; Fu, C.-M.; Itoh, T.; Watanabe, T.; Chujoh, T.; Karczewicz, M.; et al. Adaptive loop filtering for video coding. IEEE J. Sel. Top. Signal Process. 2013, 7, 934–945.
- Fu, C.M.; Alshina, E.; Alshin, A.; Huang, Y.W.; Chen, C.-Y.; Tsai, C.-Y.; Hsu, C.-W.; Lei, S.-M.; Park, J.-H.; Han, W.-J. Sample adaptive offset in the HEVC standard. IEEE Trans. Circuits Syst. Video Technol. 2012, 22, 1755–1764.
- Norkin, A.; Bjontegaard, G.; Fuldseth, A.; Narroschke, M.; Ikeda, M.; Andersson, K.; Zhou, M.; Van der Auwera, G. HEVC deblocking filter. IEEE Trans. Circuits Syst. Video Technol. 2012, 22, 1746–1754.
- Dong, C.; Deng, Y.; Loy, C.C.; Tang, X. Compression artifacts reduction by a deep convolutional network. In Proceedings of the IEEE International Conference on Computer Vision, Santiago, Chile, 7–13 December 2015; pp. 576–584.
- Wang, Z.; Ma, C.; Liao, R.L.; Ye, Y. Multi-density convolutional neural network for in-loop filter in video coding. In Proceedings of the 2021 Data Compression Conference (DCC), Snowbird, UT, USA, 23–26 March 2021; pp. 23–32.
- Lin, K.; Jia, C.; Zhang, X.; Wang, S.; Ma, S.; Gao, W. NR-CNN: Nested-residual guided CNN in-loop filtering for video coding. ACM Trans. Multimed. Comput. Commun. Appl. (TOMM) 2022, 18, 1–22.
- Pan, Z.; Yi, X.; Zhang, Y.; Jeon, B.; Kwong, S. Efficient in-loop filtering based on enhanced deep convolutional neural networks for HEVC. IEEE Trans. Image Process. 2020, 29, 5352–5366.
- Jia, W.; Li, L.; Li, Z.; Zhang, X.; Liu, S. Residual-guided in-loop filter using convolution neural network. ACM Trans. Multimed. Comput. Commun. Appl. (TOMM) 2021, 17, 1–19.
- Zhang, Y.; Shen, T.; Ji, X.; Zhang, Y.; Xiong, R.; Dai, Q. Residual highway convolutional neural networks for in-loop filtering in HEVC. IEEE Trans. Image Process. 2018, 27, 3827–3841.
- Wang, D.; Xia, S.; Yang, W.; Liu, J. Combining progressive rethinking and collaborative learning: A deep framework for in-loop filtering. IEEE Trans. Image Process. 2021, 30, 4198–4211.
- Kong, L.; Ding, D.; Liu, F.; Mukherjee, D.; Joshi, U.; Chen, Y. Guided CNN restoration with explicitly signaled linear combination. In Proceedings of the 2020 IEEE International Conference on Image Processing (ICIP), Abu Dhabi, United Arab Emirates, 25–28 October 2020; pp. 3379–3383.
- Quach, M.; Valenzise, G.; Dufaux, F. Folding-based compression of point cloud attributes. In Proceedings of the 2020 IEEE International Conference on Image Processing (ICIP), Abu Dhabi, United Arab Emirates, 25–28 October 2020; pp. 3309–3313.
- Wang, J.; Ding, D.; Li, Z.; Ma, Z. Multiscale point cloud geometry compression. In Proceedings of the 2021 Data Compression Conference (DCC), Snowbird, UT, USA, 23–26 March 2021; pp. 73–82.
- Guarda, A.F.; Rodrigues, N.M.; Pereira, F. Adaptive deep learning-based point cloud geometry coding. IEEE J. Sel. Top. Signal Process. 2020, 15, 415–430.
- Quach, M.; Valenzise, G.; Dufaux, F. Learning convolutional transforms for lossy point cloud geometry compression. In Proceedings of the 2019 IEEE International Conference on Image Processing (ICIP), Taipei, Taiwan, 22–29 September 2019; pp. 4320–4324.
- Qi, C.R.; Su, H.; Mo, K.; Guibas, L.J. PointNet: Deep learning on point sets for 3D classification and segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 652–660.
- Qi, C.R.; Yi, L.; Su, H.; Guibas, L.J. PointNet++: Deep hierarchical feature learning on point sets in a metric space. Adv. Neural Inf. Process. Syst. 2017, 30, 2.
- Wang, Y.; Sun, Y.; Liu, Z.; Sarma, S.E.; Bronstein, M.M.; Solomon, J.M. Dynamic graph CNN for learning on point clouds. ACM Trans. Graph. (TOG) 2019, 38, 1–12.
- Chen, C.; Fragonara, L.Z.; Tsourdos, A. GAPointNet: Graph attention based point neural network for exploiting local feature of point cloud. Neurocomputing 2021, 438, 122–132.
- Wang, L.; Huang, Y.; Hou, Y.; Zhang, S.; Shan, J. Graph attention convolution for point cloud semantic segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 10296–10305.
- Shi, W.; Rajkumar, R. Point-GNN: Graph neural network for 3D object detection in a point cloud. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020; pp. 1711–1719.
- Te, G.; Hu, W.; Zheng, A.; Guo, Z. RGCNN: Regularized graph CNN for point cloud segmentation. In Proceedings of the 26th ACM International Conference on Multimedia, Seoul, Republic of Korea, 22–26 October 2018; pp. 746–754.
- Chen, S.; Duan, C.; Yang, Y.; Li, D.; Feng, C.; Tian, D. Deep unsupervised learning of 3D point clouds via graph topology inference and filtering. IEEE Trans. Image Process. 2019, 29, 3183–3198.
- Liang, Z.; Yang, M.; Deng, L.; Wang, C.; Wang, B. Hierarchical depthwise graph convolutional neural network for 3D semantic segmentation of point clouds. In Proceedings of the 2019 International Conference on Robotics and Automation (ICRA), Montreal, QC, Canada, 20–24 May 2019; pp. 8152–8158.
- Zhang, C.; Florencio, D.; Loop, C. Point cloud attribute compression with graph transform. In Proceedings of the 2014 IEEE International Conference on Image Processing (ICIP), Paris, France, 27–30 October 2014; pp. 2066–2070.
- Hammond, D.K.; Vandergheynst, P.; Gribonval, R. Wavelets on graphs via spectral graph theory. Appl. Comput. Harmon. Anal. 2011, 30, 129–150.
- Haar, A. Zur Theorie der Orthogonalen Funktionensysteme; Georg-August-Universitat: Gottingen, Germany, 1909.
- Schwarz, S.; Preda, M.; Baroncini, V.; Budagavi, M.; Cesar, P.; Chou, P.A.; Cohen, R.A.; Krivokuća, M.; Lasserre, S.; Li, Z.; et al. Emerging MPEG standards for point cloud compression. IEEE J. Emerg. Sel. Top. Circuits Syst. 2018, 9, 133–148.
- Sheng, X.; Li, L.; Liu, D.; Xiong, Z.; Li, Z.; Wu, F. Deep-PCAC: An end-to-end deep lossy compression framework for point cloud attributes. IEEE Trans. Multimed. 2021, 24, 2617–2632.
- He, Y.; Ren, X.; Tang, D.; Zhang, Y.; Xue, X.; Fu, Y. Density-preserving deep point cloud compression. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, 18–24 June 2022; pp. 2333–2342.
- Wang, J.; Ma, Z. Sparse tensor-based point cloud attribute compression. In Proceedings of the 2022 IEEE 5th International Conference on Multimedia Information Processing and Retrieval (MIPR), Virtual, 2–4 August 2022; pp. 59–64.
- Fang, G.; Hu, Q.; Wang, H.; Xu, Y.; Guo, Y. 3DAC: Learning attribute compression for point clouds. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, 18–24 June 2022; pp. 14819–14828.
- Sheng, X.; Li, L.; Liu, D.; Xiong, Z. Attribute artifacts removal for geometry-based point cloud compression. IEEE Trans. Image Process. 2022, 31, 3399–3413.
- Ding, D.; Zhang, J.; Wang, J.; Ma, Z. CARNet: Compression Artifact Reduction for Point Cloud Attribute. arXiv 2022, arXiv:2209.08276.
- Xing, J.; Yuan, H.; Hamzaoui, R.; Liu, H.; Hou, J. GQE-Net: A graph-based quality enhancement network for point cloud color attribute. IEEE Trans. Image Process. 2023, 32, 6303–6317.
- Zhang, K.; Hao, M.; Wang, J.; Chen, X.; Leng, Y.; de Silva, C.W.; Fu, C. Linked dynamic graph CNN: Learning through point cloud by linking hierarchical features. In Proceedings of the 2021 27th International Conference on Mechatronics and Machine Vision in Practice (M2VIP), Shanghai, China, 26–28 November 2021; pp. 7–12.
- Wei, M.; Wei, Z.; Zhou, H.; Hu, F.; Si, H.; Chen, Z.; Zhu, Z.; Qiu, J.; Yan, X.; Guo, Y.; et al. AGConv: Adaptive graph convolution on 3D point clouds. IEEE Trans. Pattern Anal. Mach. Intell. 2023, 45, 9374–9392.
- PCC DataSets. Available online: http://mpegfs.int-evry.fr/MPEG/PCC/DataSets/pointCloud/CfP/datasets (accessed on 9 April 2023).
- Schwarz, S.; Martin, C.G.; Flynn, D.; Budagavi, M. Common test conditions for point cloud compression. In Document ISO/IEC JTC1/SC29/WG11 w17766; Ljubljana, Slovenia, 2018.
- Meynet, G.; Nehmé, Y.; Digne, J.; Lavoué, G. PCQM: A full-reference quality metric for colored 3D point clouds. In Proceedings of the 2020 Twelfth International Conference on Quality of Multimedia Experience (QoMEX), Virtual, 26–28 May 2020; pp. 1–6.
- JPEG Pleno Database. Available online: https://plenodb.jpeg.org (accessed on 9 April 2023).
| Categories | Method | Comments |
|---|---|---|
| Traditional methods | Zhang et al. [48] | Graph transform applied to attribute compression; high computational complexity and cost |
| | Queiroz et al. [50] | Maintains efficient compression performance while reducing computational complexity |
| | Mammou et al. [17] | Predictive coding applied to sparse PCs; significant compression performance |
| Deep learning-based methods | Sheng et al. [52] | Pioneering end-to-end learning framework dedicated to PC attribute compression |
| | He et al. [53] | Preserves PC density; point-wise attribute compression |
| | Wang et al. [54] | Incorporates sparse convolution to improve efficiency |
| | Fang et al. [55] | Efficient compression with reduced storage requirements |
| Point Cloud | PCQM (×10²)↓ G-PCC [9] | PCQM (×10²)↓ MS-GAT [56] | PCQM (×10²)↓ Ours | RGB-PSNR (dB)↑ G-PCC [9] | RGB-PSNR (dB)↑ MS-GAT [56] | RGB-PSNR (dB)↑ Ours |
|---|---|---|---|---|---|---|
| Andrew | 1.3638 | 1.3337 | 1.3264 | 24.6359 | 24.7908 | 24.8025 |
| David | 1.0940 | 1.1285 | 1.0889 | 30.6753 | 31.0564 | 31.0753 |
| Phil | 1.4518 | 1.4269 | 1.3811 | 25.4156 | 25.5728 | 25.6495 |
| Ricardo | 0.5745 | 0.6167 | 0.5694 | 32.1032 | 32.5871 | 32.7163 |
| Sarah | 0.5901 | 0.5570 | 0.5560 | 32.1174 | 32.4206 | 32.4897 |
| Longdress | 0.9948 | 0.9350 | 0.9361 | 24.2499 | 24.4443 | 24.4359 |
| Red and black | 0.9398 | 0.8994 | 0.9231 | 28.2514 | 28.5280 | 28.4774 |
| Soldier | 1.0493 | 0.9885 | 1.0085 | 27.5076 | 27.8499 | 27.7555 |
| Dancer | 0.6532 | 0.6221 | 0.6391 | 30.8330 | 31.5893 | 31.3574 |
| Model | 0.5469 | 0.5001 | 0.5213 | 29.1102 | 29.6597 | 29.4921 |
| Average | 0.9265 | 0.9008 | 0.8950 | 28.4899 | 28.8089 | 28.8252 |
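The RGB-PSNR values reported above are peak signal-to-noise ratios computed over per-point color attributes. The snippet below is a minimal sketch of one plausible way to compute this metric, assuming the reference and processed clouds share the same point order and 8-bit colors (peak value 255); the exact evaluation protocol (e.g., per-channel averaging or nearest-neighbor point matching) may differ from the one used in the paper.

```python
import numpy as np

def rgb_psnr(ref_rgb, rec_rgb, peak=255.0):
    """PSNR over per-point RGB attributes of two point clouds with identical
    point correspondence (a simplifying assumption for this sketch)."""
    ref = ref_rgb.astype(np.float64)
    rec = rec_rgb.astype(np.float64)
    mse = np.mean((ref - rec) ** 2)          # averaged over all points and all 3 channels
    return 10.0 * np.log10(peak ** 2 / mse)  # in dB; diverges if mse == 0

# Toy usage: 8-bit colors of a 1000-point cloud before and after artifact removal.
rng = np.random.default_rng(0)
ref = rng.integers(0, 256, size=(1000, 3))
rec = np.clip(ref + rng.normal(0.0, 4.0, size=(1000, 3)), 0, 255)
print(f"RGB-PSNR: {rgb_psnr(ref, rec):.2f} dB")
```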
| Point Cloud | PCQM (×10²)↓ G-PCC [9] | PCQM (×10²)↓ Ours | RGB-PSNR (dB)↑ G-PCC [9] | RGB-PSNR (dB)↑ Ours |
|---|---|---|---|---|
| Andrew | 2.0138 | 2.0114 | 23.6869 | 23.7308 |
| David | 1.5752 | 1.5562 | 28.8023 | 28.9123 |
| Phil | 2.3690 | 2.3358 | 23.8219 | 23.9428 |
| Ricardo | 0.9131 | 0.9101 | 30.1297 | 30.1850 |
| Sarah | 0.9536 | 0.9258 | 29.6014 | 29.7508 |
| Longdress | 1.7362 | 1.7273 | 22.4465 | 22.5071 |
| Red and black | 1.5020 | 1.5095 | 26.4601 | 26.4987 |
| Soldier | 1.8023 | 1.7938 | 25.7579 | 25.7886 |
| Dancer | 0.9789 | 0.9767 | 29.1226 | 29.1746 |
| Model | 0.8411 | 0.8408 | 27.2609 | 27.3099 |
| Average | 1.4685 | 1.4587 | 26.7090 | 26.7801 |
| Point Cloud | MS-GAT [56] Part | MS-GAT [56] Y | MS-GAT [56] U | MS-GAT [56] V | MS-GAT [56] Comb | MS-GAT [56] Overall | Ours Part | Ours RGB | Ours Comb | Ours Overall | Ours FLOPs |
|---|---|---|---|---|---|---|---|---|---|---|---|
| David | 0.19 | 22.38 | 21.87 | 22.22 | 0.03 | 66.69 | 0.14 | 6.20 | 0.05 | 6.38 | 2.81T |
| Phil | 0.25 | 25.01 | 24.48 | 24.70 | 0.03 | 74.47 | 0.25 | 5.29 | 0.13 | 5.66 | 3.16T |
| Ricardo | 0.07 | 14.90 | 14.15 | 14.49 | 0.02 | 43.63 | 0.07 | 3.39 | 0.04 | 3.51 | 1.82T |
| Sarah | 0.14 | 20.53 | 19.90 | 20.05 | 0.02 | 60.64 | 0.15 | 4.73 | 0.03 | 4.92 | 2.57T |
| Longdress | 1.75 | 50.51 | 50.16 | 50.13 | 0.05 | 152.6 | 1.87 | 10.89 | 0.09 | 12.84 | 6.50T |
| Dancer | 23.46 | 162.96 | 162.85 | 163.02 | 0.18 | 512.47 | 23.88 | 33.77 | 0.24 | 57.89 | 21.19T |
| Andrew | 0.12 | 19.05 | 18.51 | 18.55 | 0.02 | 56.25 | 0.14 | 5.21 | 0.02 | 5.36 | 2.40T |
| Model | 23.36 | 162.21 | 164.22 | 161.53 | 0.18 | 511.5 | 23.27 | 42.83 | 0.15 | 66.25 | 20.86T |
| Red and black | 1.56 | 48.00 | 48.12 | 47.96 | 0.05 | 145.69 | 1.60 | 10.12 | 0.11 | 11.84 | 6.20T |
| Soldier | 3.76 | 70.06 | 69.32 | 69.36 | 0.07 | 212.57 | 3.94 | 14.47 | 0.12 | 18.54 | 9.00T |
| Average | 5.47 | 59.56 | 59.36 | 59.20 | 0.07 | 183.65 | 5.53 | 13.69 | 0.10 | 19.32 | 7.65T |
| Point Cloud | PCQM (×10²)↓ I | PCQM (×10²)↓ II | PCQM (×10²)↓ III | RGB-PSNR (dB)↑ I | RGB-PSNR (dB)↑ II | RGB-PSNR (dB)↑ III |
|---|---|---|---|---|---|---|
| David | 0.3997 | 0.4008 | 0.4012 | 37.2630 | 37.2494 | 37.2468 |
| Phil | 0.2214 | 0.2221 | 0.2219 | 32.1643 | 32.1614 | 32.1631 |
| Ricardo | 0.2021 | 0.2031 | 0.2038 | 39.2426 | 39.2188 | 39.2093 |
| Sarah | 0.1443 | 0.1447 | 0.1454 | 39.2129 | 39.1802 | 39.1735 |
| Longdress | 0.1344 | 0.1346 | 0.1347 | 31.8037 | 31.7999 | 31.7973 |
| Dancer | 0.1795 | 0.1807 | 0.1804 | 37.6484 | 37.5588 | 37.5884 |
| Andrew | 0.2807 | 0.2813 | 0.2819 | 30.6925 | 30.6884 | 30.6735 |
| Model | 0.1543 | 0.1560 | 0.1557 | 36.1274 | 35.9864 | 36.0986 |
| Red and black | 0.2023 | 0.2037 | 0.2043 | 35.0430 | 34.9648 | 34.8639 |
| Soldier | 0.1338 | 0.1350 | 0.1343 | 35.1473 | 35.1202 | 35.1364 |
| Average | 0.2053 | 0.2062 | 0.2064 | 35.4345 | 35.3928 | 35.3951 |
| Point Cloud | PCQM (×10²)↓ K = 8 | PCQM (×10²)↓ K = 12 | PCQM (×10²)↓ K = 16 | PCQM (×10²)↓ K = 20 | RGB-PSNR (dB)↑ K = 8 | RGB-PSNR (dB)↑ K = 12 | RGB-PSNR (dB)↑ K = 16 | RGB-PSNR (dB)↑ K = 20 |
|---|---|---|---|---|---|---|---|---|
| David | 1.0797 | 1.0852 | 1.0889 | 1.0758 | 31.0714 | 31.0623 | 31.0753 | 31.0723 |
| Phil | 1.3734 | 1.3897 | 1.3811 | 1.3812 | 25.6354 | 25.6528 | 25.6495 | 25.6459 |
| Ricardo | 0.5553 | 0.5651 | 0.5694 | 0.5580 | 32.6992 | 32.7124 | 32.7163 | 32.6945 |
| Sarah | 0.5645 | 0.5686 | 0.5560 | 0.5649 | 32.3384 | 32.3559 | 32.4897 | 32.3547 |
| Longdress | 0.9380 | 0.9490 | 0.9361 | 0.9397 | 24.4270 | 24.4421 | 24.4359 | 24.4511 |
| Dancer | 0.6340 | 0.6243 | 0.6391 | 0.6252 | 31.3192 | 31.3833 | 31.3574 | 31.3495 |
| Andrew | 1.3271 | 1.3293 | 1.3264 | 1.3319 | 24.7983 | 24.8188 | 24.8025 | 24.8132 |
| Model | 0.5165 | 0.5087 | 0.5213 | 0.5127 | 29.4699 | 29.5164 | 29.4921 | 29.4921 |
| Red and black | 0.9179 | 0.9244 | 0.9231 | 0.9061 | 28.5342 | 28.5148 | 28.4774 | 28.5174 |
| Soldier | 1.0034 | 1.0034 | 1.0085 | 0.9983 | 27.7540 | 27.7813 | 27.7555 | 27.7802 |
| Average | 0.8910 | 0.8948 | 0.8950 | 0.8894 | 28.8047 | 28.8240 | 28.8252 | 28.8171 |
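The ablation above varies the neighborhood size K used to build the k-NN graph on which the GNN operates. The sketch below shows a generic construction of such a graph with a SciPy KD-tree and purely Euclidean neighbors; the function name and the edge-list format are assumptions for illustration, and the authors' exact graph definition may differ.

```python
import numpy as np
from scipy.spatial import cKDTree

def build_knn_graph(xyz, k=16):
    """Directed k-NN edge list (source, target) over point positions."""
    tree = cKDTree(xyz)
    _, idx = tree.query(xyz, k=k + 1)          # k+1 because the nearest neighbor is the point itself
    src = np.repeat(np.arange(len(xyz)), k)
    dst = idx[:, 1:].reshape(-1)               # drop the self-neighbor in column 0
    return np.stack([src, dst])                # shape (2, N*k), ready for a GNN library

xyz = np.random.rand(2048, 3)
for k in (8, 12, 16, 20):                      # the K values compared in the table above
    edges = build_knn_graph(xyz, k)
    print(k, edges.shape)                      # (2, 2048*k)
```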
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
He, Z.; Yang, W.; Li, L.; Bai, R. Iterative Removal of G-PCC Attribute Compression Artifacts Based on a Graph Neural Network. Electronics 2024, 13, 3768. https://doi.org/10.3390/electronics13183768