A Model Development Approach Based on Point Cloud Reconstruction and Mapping Texture Enhancement
Abstract
1. Introduction
2. Related Work
2.1. Three-Dimensional Reconstruction
2.2. Surface Reconstruction
2.3. Style Transfer
2.4. Texture Mapping
3. Methodology
3.1. Dataset
3.2. Point Cloud Data Acquisition
3.3. Surface Reconstruction
3.4. Style-Transfer-Based Texture Enhancement
3.5. Texture Mapping
- Step one. Applying the projector function to points in space yields a set of parameter-space values; this transforms 3D points into texture coordinates. The relationship between points in the world coordinate system and points in the pixel coordinate system is the standard projection equation, reproduced after this list.
- Step two. Before the texture is accessed with these new values, corresponder functions convert the parameter-space values to texture space. The image appears at a position on the object’s surface where the uv values lie in the canonical range [0, 1); texture coordinates outside this range are resolved according to the corresponder function (for example, by wrapping, mirroring, or clamping).
- Step three. These texture-space locations are used to retrieve the corresponding color values from the texture. Built-in bilinear and trilinear interpolation are used to sample the texture between discrete texel locations (see the sampling sketch after this list).
- Step four. The value transform function transforms the retrieved results, and the new values are finally used to change surface properties, such as the material color or the shading normal.
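For step one, the world-to-pixel relationship referred to above is the standard pinhole camera model, consistent with the calibration procedure in Appendix A; the notation below is a reconstruction of that standard model rather than the paper’s own:

```latex
s\begin{bmatrix} u \\ v \\ 1 \end{bmatrix}
= K\,[\,R \mid t\,]
\begin{bmatrix} X_w \\ Y_w \\ Z_w \\ 1 \end{bmatrix},
\qquad
K = \begin{bmatrix} f_x & 0 & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{bmatrix},
```

where (X_w, Y_w, Z_w) is a point in world coordinates, (u, v) its pixel coordinates, s a projective scale factor, K the intrinsic matrix, and [R | t] the extrinsic rotation and translation.

For steps two and three, the following Python/NumPy sketch illustrates a repeat-mode corresponder and bilinear sampling. It is illustrative only; the function names `wrap_uv` and `sample_bilinear` are hypothetical, not from the paper:

```python
import numpy as np

def wrap_uv(uv):
    """Corresponder function: wrap uv values outside [0, 1) back into range (repeat mode)."""
    return uv - np.floor(uv)

def sample_bilinear(texture, uv):
    """Bilinearly interpolate an (H, W, C) texture at one (u, v) location."""
    h, w = texture.shape[:2]
    u, v = wrap_uv(np.asarray(uv, dtype=np.float64))
    # Normalized uv -> continuous texel coordinates (texel centers at half-integers).
    x, y = u * w - 0.5, v * h - 0.5
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    fx, fy = x - x0, y - y0
    # Indices of the four surrounding texels, wrapped at the borders.
    x0w, x1w = x0 % w, (x0 + 1) % w
    y0w, y1w = y0 % h, (y0 + 1) % h
    # Weighted average of the four surrounding texels.
    return ((1 - fx) * (1 - fy) * texture[y0w, x0w]
            + fx * (1 - fy) * texture[y0w, x1w]
            + (1 - fx) * fy * texture[y1w, x0w]
            + fx * fy * texture[y1w, x1w])

# Example: a uv outside [0, 1) is wrapped to (0.3, 0.8) before sampling.
tex = np.zeros((4, 4, 3)); tex[::2, ::2] = 1.0; tex[1::2, 1::2] = 1.0
print(sample_bilinear(tex, (1.3, -0.2)))
```

Trilinear interpolation applies the same weighting across two adjacent mipmap levels and blends the results; mipmapping is omitted here for brevity.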
4. Results and Discussion
4.1. SFM-Based Point Cloud Data Acquisition
4.1.1. Feature Point Detection
4.1.2. Feature Point Matching
4.1.3. Triangulation of Feature Points
4.1.4. Point Cloud Data
4.2. Surface Reconstruction
4.3. Texture Enhancement
4.4. Texture Mapping
5. Conclusions and Future Work
5.1. Conclusions
5.2. Future Work
Author Contributions
Funding
Data Availability Statement
Acknowledgments
Conflicts of Interest
Abbreviations
Abbreviation | Meaning |
---|---|
LiDAR | Laser Scanning/Light Detection and Ranging |
TOF | Time of Flight |
ALS | Airborne Laser Scanning |
MLS | Mobile Laser Scanning |
TLS | Terrestrial Laser Scanning |
SFM | Structure from Motion |
MVS | Multi-View Stereo |
BA | Bundle Adjustment |
CNN | Convolutional Neural Network |
D-CV | Depth-based Cost Volume |
P-CV | Pose-based Cost Volume |
CVP-DC | Cost Voxel Pyramid Depth Completion |
SDF | Signed Distance Function |
MLP | Multi-Layer Perceptron |
EC-Net | Edge-aware Point Set Consolidation Network |
ICP | Iterative Closest Point |
IMQ | Inverse Multiquadric |
GAN | Generative Adversarial Network |
NeRF | Neural Radiance Fields |
SIFT | Scale-Invariant Feature Transform |
DoG | Difference of Gaussians |
KAZE | KAZE Features |
SURF | Speeded-Up Robust Features |
KNN | K-Nearest Neighbor Search |
NPSAC | N Adjacent Points Sample Consensus |
PROSAC | Progressive Sample Consensus |
RANSAC | Random Sample Consensus |
VGG | Visual Geometry Group |
PSNR | Peak Signal-to-Noise Ratio |
SSIM | Structural Similarity Index Measure |
Appendix A. Camera Calibration
References
- Tao, F.; Xiao, B.; Qi, Q.; Cheng, J.; Ji, P. Digital twin modeling. J. Manuf. Syst. 2022, 64, 372–389.
- Gong, H.; Su, D.; Zeng, S.; Chen, X. Advancements in digital twin modeling for underground spaces and lightweight geometric modeling technologies. Autom. Constr. 2024, 165, 105578.
- Wu, H.; Ji, P.; Ma, H.; Xing, L. A comprehensive review of digital twin from the perspective of total process: Data, models, networks and applications. Sensors 2023, 23, 8306.
- Elaksher, A.; Ali, T.; Alharthy, A. A quantitative assessment of LiDAR data accuracy. Remote Sens. 2023, 15, 442.
- Piedra-Cascón, W.; Meyer, M.J.; Methani, M.M.; Revilla-León, M. Accuracy (trueness and precision) of a dual-structured light facial scanner and interexaminer reliability. J. Prosthet. Dent. 2020, 124, 567–574.
- Frangez, V.; Salido-Monzú, D.; Wieser, A. Assessment and improvement of distance measurement accuracy for time-of-flight cameras. IEEE Trans. Instrum. Meas. 2022, 71, 1003511.
- Bi, S.; Gu, Y.; Zou, J.; Wang, L.; Zhai, C.; Gong, M. High precision optical tracking system based on near infrared trinocular stereo vision. Sensors 2021, 21, 2528.
- Wang, Y.; Funk, N.; Ramezani, M.; Papatheodorou, S.; Popović, M.; Camurri, M.; Leutenegger, S.; Fallon, M. Elastic and efficient LiDAR reconstruction for large-scale exploration tasks. In Proceedings of the 2021 IEEE International Conference on Robotics and Automation (ICRA), Xi’an, China, 30 May–5 June 2021; IEEE: New York, NY, USA, 2021; pp. 5035–5041.
- Zhang, J.; Zhang, F.; Kuang, S.; Zhang, L. NeRF-LiDAR: Generating realistic LiDAR point clouds with neural radiance fields. In Proceedings of the AAAI Conference on Artificial Intelligence, Vancouver, BC, Canada, 20–27 February 2024; Volume 38, pp. 7178–7186.
- Wang, Z. Review of real-time three-dimensional shape measurement techniques. Measurement 2020, 156, 107624.
- Wang, Z.; Zhou, Q.; Shuang, Y. Three-dimensional reconstruction with single-shot structured light dot pattern and analytic solutions. Measurement 2020, 151, 107114.
- Liu, H.; Cao, C.; Ye, H.; Cui, H.; Gao, W.; Wang, X.; Shen, S. Lightweight Structured Line Map Based Visual Localization. IEEE Robot. Autom. Lett. 2024, 9, 5182–5189.
- Cao, D.; Liu, W.; Liu, S.; Chen, J.; Liu, W.; Ge, J.; Deng, Z. Simultaneous calibration of hand-eye and kinematics for industrial robot using line-structured light sensor. Measurement 2023, 221, 113508.
- Liang, Z.; Chang, H.; Wang, Q.; Wang, D.; Zhang, Y. 3D reconstruction of weld pool surface in pulsed GMAW by passive biprism stereo vision. IEEE Robot. Autom. Lett. 2019, 4, 3091–3097.
- Li, Y.; Wang, Z. RGB line pattern-based stereo vision matching for single-shot 3-D measurement. IEEE Trans. Instrum. Meas. 2020, 70, 5004413.
- Jing, J.; Li, J.; Xiong, P.; Liu, J.; Liu, S.; Guo, Y.; Deng, X.; Xu, M.; Jiang, L.; Sigal, L. Uncertainty guided adaptive warping for robust and efficient stereo matching. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Paris, France, 2–6 October 2023; pp. 3318–3327.
- Hu, Y.; Chen, Q.; Feng, S.; Tao, T.; Asundi, A.; Zuo, C. A new microscopic telecentric stereo vision system-calibration, rectification, and three-dimensional reconstruction. Opt. Lasers Eng. 2019, 113, 14–22.
- Berra, E.; Peppa, M. Advances and challenges of UAV SFM MVS photogrammetry and remote sensing: Short review. In Proceedings of the 2020 IEEE Latin American GRSS & ISPRS Remote Sensing Conference (LAGIRS), Santiago, Chile, 21–26 March 2020; IEEE: New York, NY, USA, 2020; pp. 533–538.
- Gao, L.; Zhao, Y.; Han, J.; Liu, H. Research on multi-view 3D reconstruction technology based on SFM. Sensors 2022, 22, 4366.
- Wang, J.; Rupprecht, C.; Novotny, D. PoseDiffusion: Solving pose estimation via diffusion-aided bundle adjustment. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Paris, France, 2–6 October 2023; pp. 9773–9783.
- Pan, L.; Baráth, D.; Pollefeys, M.; Schönberger, J.L. Global Structure-from-Motion Revisited. In Proceedings of the European Conference on Computer Vision (ECCV), Milan, Italy, 29 September–4 October 2024.
- Barath, D.; Mishkin, D.; Eichhardt, I.; Shipachev, I.; Matas, J. Efficient initial pose-graph generation for global SfM. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA, 20–25 June 2021; pp. 14546–14555.
- Liu, S.; Jiang, S.; Liu, Y.; Xue, W.; Guo, B. Efficient SfM for Large-Scale UAV Images Based on Graph-Indexed BoW and Parallel-Constructed BA Optimization. Remote Sens. 2022, 14, 5619.
- Bond, Y.L.; Ledwell, S.; Osornio, E.; Cruz, A.C. Efficient Scene Reconstruction for Unmanned Aerial Vehicles. In Proceedings of the 2023 Fifth International Conference on Transdisciplinary AI (TransAI), Laguna Hills, CA, USA, 25–27 September 2023; IEEE: New York, NY, USA, 2023; pp. 266–269.
- Barath, D.; Noskova, J.; Eichhardt, I.; Matas, J. Pose-graph via Adaptive Image Re-ordering. In Proceedings of the BMVC, London, UK, 21–24 November 2022; p. 127.
- Radenović, F.; Tolias, G.; Chum, O. Fine-tuning CNN image retrieval with no human annotation. IEEE Trans. Pattern Anal. Mach. Intell. 2018, 41, 1655–1668.
- Wei, X.; Zhang, Y.; Li, Z.; Fu, Y.; Xue, X. DeepSFM: Structure from motion via deep bundle adjustment. In Proceedings of the Computer Vision–ECCV 2020: 16th European Conference, Glasgow, UK, 23–28 August 2020; Springer: Berlin/Heidelberg, Germany, 2020; pp. 230–247.
- Schonberger, J.L.; Frahm, J.M. Structure-from-motion revisited. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 4104–4113.
- Li, Z.; Luo, S.; Zeng, W.; Guo, S.; Zhuo, J.; Zhou, L.; Ma, Z.; Zhang, Z. 3D reconstruction system for foot arch detecting based on OpenMVG and OpenMVS. In Proceedings of the 2022 5th International Conference on Pattern Recognition and Artificial Intelligence (PRAI), Chengdu, China, 19–21 August 2022; IEEE: New York, NY, USA, 2022; pp. 1017–1022.
- Lyra, V.G.d.M.; Pinto, A.H.; Lima, G.C.; Lima, J.P.; Teichrieb, V.; Quintino, J.P.; da Silva, F.Q.; Santos, A.L.; Pinho, H. Development of an efficient 3D reconstruction solution from permissive open-source code. In Proceedings of the 2020 22nd Symposium on Virtual and Augmented Reality (SVR), Virtual, 7–10 November 2020; IEEE: New York, NY, USA, 2020; pp. 232–241.
- Wu, C. Towards linear-time incremental structure from motion. In Proceedings of the 2013 International Conference on 3D Vision-3DV, Seattle, WA, USA, 29 June–1 July 2013; IEEE: New York, NY, USA, 2013; pp. 127–134.
- Zhou, L.; Sun, G.; Li, Y.; Li, W.; Su, Z. Point cloud denoising review: From classical to deep learning-based approaches. Graph. Model. 2022, 121, 101140.
- Huang, Z.; Wen, Y.; Wang, Z.; Ren, J.; Jia, K. Surface reconstruction from point clouds: A survey and a benchmark. IEEE Trans. Pattern Anal. Mach. Intell. 2024, 46, 9727–9748.
- Azinović, D.; Martin-Brualla, R.; Goldman, D.B.; Nießner, M.; Thies, J. Neural RGB-D surface reconstruction. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, 18–24 June 2022; pp. 6290–6301.
- You, C.C.; Lim, S.P.; Lim, S.C.; San Tan, J.; Lee, C.K.; Khaw, Y.M.J. A survey on surface reconstruction techniques for structured and unstructured data. In Proceedings of the 2020 IEEE Conference on Open Systems (ICOS), Penang, Malaysia, 17–19 November 2020; IEEE: New York, NY, USA, 2020; pp. 37–42.
- Wu, Y.; Hu, X.; Zhang, Y.; Gong, M.; Ma, W.; Miao, Q. SACF-Net: Skip-attention based correspondence filtering network for point cloud registration. IEEE Trans. Circuits Syst. Video Technol. 2023, 33, 3585–3595.
- Lu, D.; Lu, X.; Sun, Y.; Wang, J. Deep feature-preserving normal estimation for point cloud filtering. Comput. Aided Des. 2020, 125, 102860.
- Zhang, S.; Cui, S.; Ding, Z. Hypergraph spectral analysis and processing in 3D point cloud. IEEE Trans. Image Process. 2020, 30, 1193–1206.
- Ren, D.; Ma, Z.; Chen, Y.; Peng, W.; Liu, X.; Zhang, Y.; Guo, Y. Spiking PointNet: Spiking neural networks for point clouds. arXiv 2024, arXiv:2310.06232.
- Hao, H.; Jincheng, Y.; Ling, Y.; Gengyuan, C.; Sumin, Z.; Huan, Z. An improved PointNet++ point cloud segmentation model applied to automatic measurement method of pig body size. Comput. Electron. Agric. 2023, 205, 107560.
- Hermosilla, P.; Ritschel, T.; Ropinski, T. Total denoising: Unsupervised learning of 3D point cloud cleaning. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Seoul, Republic of Korea, 27 October–2 November 2019; pp. 52–60.
- Liu, X.Y.; Wang, H.; Chen, C.; Wang, Q.; Zhou, X.; Wang, Y. Implicit surface reconstruction with radial basis functions via PDEs. Eng. Anal. Bound. Elem. 2020, 110, 95–103.
- Dai, P.; Xu, J.; Xie, W.; Liu, X.; Wang, H.; Xu, W. High-quality surface reconstruction using Gaussian surfels. In Proceedings of the ACM SIGGRAPH 2024 Conference Papers, Denver, CO, USA, 27 July–1 August 2024; pp. 1–11.
- Comi, M.; Lin, Y.; Church, A.; Tonioni, A.; Aitchison, L.; Lepora, N.F. TouchSDF: A DeepSDF approach for 3D shape reconstruction using vision-based tactile sensing. IEEE Robot. Autom. Lett. 2024, 9, 5719–5726.
- Gatys, L.; Ecker, A.; Bethge, M. Texture synthesis and the controlled generation of natural stimuli using convolutional neural networks. arXiv 2015, arXiv:1505.07376.
- Zhang, Y.; Huang, N.; Tang, F.; Huang, H.; Ma, C.; Dong, W.; Xu, C. Inversion-based style transfer with diffusion models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Vancouver, BC, Canada, 17–24 June 2023; pp. 10146–10156.
- Lin, C.T.; Huang, S.W.; Wu, Y.Y.; Lai, S.H. GAN-based day-to-night image style transfer for nighttime vehicle detection. IEEE Trans. Intell. Transp. Syst. 2020, 22, 951–963.
- Gatys, L.A.; Ecker, A.S.; Bethge, M.; Hertzmann, A.; Shechtman, E. Controlling perceptual factors in neural style transfer. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 3985–3993.
- Tang, H.; Liu, S.; Lin, T.; Huang, S.; Li, F.; He, D.; Wang, X. Master: Meta style transformer for controllable zero-shot and few-shot artistic style transfer. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Vancouver, BC, Canada, 17–24 June 2023; pp. 18329–18338.
- Zhang, C.; Xu, X.; Wang, L.; Dai, Z.; Yang, J. S2WAT: Image style transfer via hierarchical vision transformer using strips window attention. In Proceedings of the AAAI Conference on Artificial Intelligence, Vancouver, BC, Canada, 20–27 February 2024; Volume 38, pp. 7024–7032.
- Zhang, Z.; Sun, J.; Li, G.; Zhao, L.; Zhang, Q.; Lan, Z.; Yin, H.; Xing, W.; Lin, H.; Zuo, Z. Rethink arbitrary style transfer with transformer and contrastive learning. Comput. Vis. Image Underst. 2024, 241, 103951.
- Zhang, C.; Dai, Z.; Cao, P.; Yang, J. Edge enhanced image style transfer via transformers. In Proceedings of the 2023 ACM International Conference on Multimedia Retrieval, Thessaloniki, Greece, 12–15 June 2023; pp. 105–114.
- Zhu, M.; He, X.; Wang, N.; Wang, X.; Gao, X. All-to-key attention for arbitrary style transfer. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Paris, France, 1–6 October 2023; pp. 23109–23119.
- Mildenhall, B.; Srinivasan, P.P.; Tancik, M.; Barron, J.T.; Ramamoorthi, R.; Ng, R. NeRF: Representing scenes as neural radiance fields for view synthesis. Commun. ACM 2021, 65, 99–106.
- Bi, S.; Xu, Z.; Sunkavalli, K.; Hašan, M.; Hold-Geoffroy, Y.; Kriegman, D.; Ramamoorthi, R. Deep reflectance volumes: Relightable reconstructions from multi-view photometric images. In Proceedings of the Computer Vision–ECCV 2020: 16th European Conference, Glasgow, UK, 23–28 August 2020; Springer: Berlin/Heidelberg, Germany, 2020; pp. 294–311.
- Thies, J.; Zollhöfer, M.; Nießner, M. Deferred neural rendering: Image synthesis using neural textures. ACM Trans. Graph. 2019, 38, 1–12.
- Xiang, F.; Xu, Z.; Hasan, M.; Hold-Geoffroy, Y.; Sunkavalli, K.; Su, H. NeuTex: Neural texture mapping for volumetric neural rendering. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA, 20–25 June 2021; pp. 7119–7128.
- Gupta, S.; Thakur, K.; Kumar, M. 2D-human face recognition using SIFT and SURF descriptors of face’s feature regions. Vis. Comput. 2021, 37, 447–456.
- Abaspur Kazerouni, I.; Dooly, G.; Toal, D. Underwater image enhancement and mosaicking system based on A-KAZE feature matching. J. Mar. Sci. Eng. 2020, 8, 449.
- Yong, A.; Hong, Z. SIFT matching method based on K nearest neighbor support feature points. In Proceedings of the 2016 IEEE International Conference on Signal and Image Processing (ICSIP), Beijing, China, 13–15 August 2016; IEEE: New York, NY, USA, 2016; pp. 64–68.
- Özyeşil, O.; Voroninski, V.; Basri, R.; Singer, A. A survey of structure from motion. Acta Numer. 2017, 26, 305–364.
- Sreeram, V.; Agathoklis, P. On the properties of Gram matrix. IEEE Trans. Circuits Syst. I Fundam. Theory Appl. 1994, 41, 234–237.
- Zhang, Z. A flexible new technique for camera calibration. IEEE Trans. Pattern Anal. Mach. Intell. 2000, 22, 1330–1334.
Hardware | Cost (GBP) | Accuracy (mm) | Measurement Range (m) | Outdoor Work |
---|---|---|---|---|
LiDAR scanner | 600+ | 1–3 [4] | 200+ | Unaffected |
Structured light camera | 200–4000 | 0.01–0.32 [5] | 0.3–10 [5] | Highly affected |
ToF camera | 400–30,000 | 0.5–2.2 [6] | 0.5–6.0 [6] | Minimally affected |
Stereo vision camera | 200+ | 0.05–1 [7] | 10–100 [7] | Unaffected |
Photogrammetry methods | N/A | 1–10 | 10–100 | Unaffected |
Content Weight | Style Weight | Content Layer | Style Layer | Optimizer | Learning Rate | Epochs | Iterations |
---|---|---|---|---|---|---|---|
1 | 1000 |  |  | Adam | 0.03 | 20 | 100 |
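These settings plug directly into a Gatys-style optimization loop (Gatys et al., cited in the references). The sketch below is a minimal PyTorch illustration, not the authors’ implementation: the VGG-19 layer indices are common defaults standing in for the table’s empty Content Layer/Style Layer cells, and the epoch/iteration split is simplified to a single 100-step loop.

```python
import torch
import torch.nn.functional as F
from torchvision.models import vgg19

# Layer indices into vgg19().features. These are common Gatys-style defaults
# (relu4_2 for content; relu1_1..relu5_1 for style) and are an assumption here,
# since the table's Content Layer / Style Layer cells are empty.
CONTENT_LAYERS = {22}               # relu4_2
STYLE_LAYERS = {1, 6, 11, 20, 29}   # relu1_1, relu2_1, relu3_1, relu4_1, relu5_1
CONTENT_WEIGHT, STYLE_WEIGHT = 1.0, 1000.0  # weights from the table above

def gram(feat):
    """Size-normalized Gram matrix of a (1, C, H, W) feature map."""
    _, c, h, w = feat.shape
    f = feat.view(c, h * w)
    return f @ f.t() / (c * h * w)

def features(x, net, wanted):
    """Collect activations at the wanted layer indices of a sequential network."""
    out = {}
    for i, layer in enumerate(net):
        x = layer(x)
        if i in wanted:
            out[i] = x
    return out

def style_transfer(content_img, style_img, steps=100, lr=0.03):
    """Optimize an image so its VGG features match the content and style targets.
    Inputs are (1, 3, H, W) tensors, assumed ImageNet-normalized."""
    net = vgg19(weights="DEFAULT").features.eval()
    for p in net.parameters():
        p.requires_grad_(False)
    c_targets = features(content_img, net, CONTENT_LAYERS)
    s_targets = {i: gram(f) for i, f in features(style_img, net, STYLE_LAYERS).items()}
    x = content_img.clone().requires_grad_(True)
    opt = torch.optim.Adam([x], lr=lr)  # optimizer and learning rate from the table
    for _ in range(steps):              # 100 iterations, as in the table
        opt.zero_grad()
        feats = features(x, net, CONTENT_LAYERS | STYLE_LAYERS)
        c_loss = sum(F.mse_loss(feats[i], c_targets[i]) for i in CONTENT_LAYERS)
        s_loss = sum(F.mse_loss(gram(feats[i]), s_targets[i]) for i in STYLE_LAYERS)
        (CONTENT_WEIGHT * c_loss + STYLE_WEIGHT * s_loss).backward()
        opt.step()
    return x.detach()
```

The 1:1000 content-to-style weight ratio in the table is what makes the optimization favor the style image’s texture statistics while only loosely preserving the content layout.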
Scene | Rate (0.65) | Quantity (0.65) | Rate (0.70) | Quantity (0.70) | Rate (0.75) | Quantity (0.75) | Rate (0.80) | Quantity (0.80) | Rate (0.85) | Quantity (0.85) |
---|---|---|---|---|---|---|---|---|---|---|
CNC1 | 0.0596 | 95 | 0.0734 | 117 | 0.0891 | 142 | 0.1350 | 215 | 0.1965 | 313 |
CNC2 | 0.0519 | 93 | 0.0664 | 119 | 0.0859 | 154 | 0.1233 | 221 | 0.2015 | 361 |
ROBOTS | 0.0109 | 16 | 0.0157 | 23 | 0.0321 | 47 | 0.0560 | 82 | 0.1051 | 154 |
STATUE | 0.0548 | 91 | 0.0759 | 126 | 0.1048 | 174 | 0.1452 | 241 | 0.2133 | 325 |
FOUNTAIN | 0.1247 | 328 | 0.1433 | 377 | 0.1699 | 447 | 0.1984 | 522 | 0.2535 | 667 |
CASTLE | 0.1777 | 513 | 0.2040 | 589 | 0.2442 | 705 | 0.2910 | 841 | 0.3543 | 1023 |
Scene | Rate (0.65) | Quantity (0.65) | Rate (0.70) | Quantity (0.70) | Rate (0.75) | Quantity (0.75) | Rate (0.80) | Quantity (0.80) | Rate (0.85) | Quantity (0.85) |
---|---|---|---|---|---|---|---|---|---|---|
CNC1 | 0.0902 | 326 | 0.0999 | 361 | 0.1151 | 416 | 0.1300 | 470 | 0.1494 | 540 |
CNC2 | 0.0506 | 347 | 0.0573 | 393 | 0.0642 | 440 | 0.0713 | 489 | 0.0791 | 542 |
ROBOTS | 0.0288 | 93 | 0.0349 | 113 | 0.0470 | 152 | 0.0591 | 191 | 0.0724 | 234 |
STATUE | 0.0617 | 103 | 0.0707 | 118 | 0.0779 | 130 | 0.0845 | 141 | 0.0911 | 152 |
FOUNTAIN | 0.1173 | 160 | 0.1254 | 171 | 0.1334 | 182 | 0.1400 | 191 | 0.1481 | 202 |
CASTLE | 0.1696 | 573 | 0.1835 | 620 | 0.1995 | 674 | 0.2175 | 735 | 0.2362 | 798 |
Scene | Rate (0.65) | Quantity (0.65) | Rate (0.70) | Quantity (0.70) | Rate (0.75) | Quantity (0.75) | Rate (0.80) | Quantity (0.80) | Rate (0.85) | Quantity (0.85) |
---|---|---|---|---|---|---|---|---|---|---|
CNC1 | 0.0552 | 2470 | 0.0652 | 2918 | 0.0764 | 3418 | 0.0890 | 3981 | 0.1030 | 4609 |
CNC2 | 0.0367 | 2151 | 0.0432 | 2543 | 0.0510 | 2993 | 0.0595 | 3492 | 0.0697 | 4090 |
ROBOTS | 0.0225 | 1185 | 0.0298 | 1566 | 0.0380 | 1996 | 0.0485 | 2549 | 0.0604 | 3177 |
STATUE | 0.0533 | 738 | 0.0599 | 800 | 0.0652 | 870 | 0.0710 | 947 | 0.0767 | 1024 |
FOUNTAIN | 0.1685 | 5716 | 0.1712 | 5806 | 0.1736 | 5889 | 0.1759 | 5967 | 0.1784 | 6051 |
CASTLE | 0.1866 | 5174 | 0.1947 | 5379 | 0.2020 | 5601 | 0.2100 | 5823 | 0.2170 | 6017 |
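The rate/quantity pairs in the three tables above are what Lowe’s ratio test over KNN descriptor matches yields as the acceptance threshold is swept from 0.65 to 0.85. A minimal OpenCV sketch of that procedure follows; it assumes SIFT descriptors with brute-force L2 matching, and the rate’s denominator (all candidate matches) is an assumed definition, since the tables do not state theirs:

```python
import cv2

def ratio_test_matches(img1_path, img2_path, thresholds=(0.65, 0.70, 0.75, 0.80, 0.85)):
    """Match SIFT descriptors between two images and report Lowe ratio-test results."""
    img1 = cv2.imread(img1_path, cv2.IMREAD_GRAYSCALE)
    img2 = cv2.imread(img2_path, cv2.IMREAD_GRAYSCALE)
    sift = cv2.SIFT_create()
    _, des1 = sift.detectAndCompute(img1, None)
    _, des2 = sift.detectAndCompute(img2, None)
    # KNN search: the two nearest neighbors in descriptor space for each query descriptor.
    knn = cv2.BFMatcher(cv2.NORM_L2).knnMatch(des1, des2, k=2)
    for t in thresholds:
        # Accept a match only if its best neighbor is clearly closer than the second best.
        good = [p[0] for p in knn if len(p) == 2 and p[0].distance < t * p[1].distance]
        # Rate = accepted matches / candidate matches (an assumed denominator).
        print(f"threshold {t:.2f}: rate {len(good) / len(knn):.4f}, quantity {len(good)}")
```

Raising the threshold admits more (but less distinctive) matches, which is why both rate and quantity grow monotonically across each row; outlier rejection is then typically delegated to RANSAC during pose estimation.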
Model | Vertices | Faces | Time Consumption | Equipment | Equipment Cost |
---|---|---|---|---|---|
Robot | 95,556 | 115,736 | 29 min | Leica RTC360 | 92,800 GBP |
CNC1 | 175,175 | 204,458 | 40 min | Leica RTC360 | 92,800 GBP |
CNC2 | 195,788 | 210,396 | 34 min | Leica RTC360 | 92,800 GBP |
Model | Vertices | Faces | Time Consumption | Equipment | Equipment Cost |
---|---|---|---|---|---|
Robot | 172,875 | 145,702 | 33 min | iPhone 13 Pro | 949 GBP |
CNC1 | 216,893 | 234,135 | 37 min | iPhone 13 Pro | 949 GBP |
CNC2 | 221,040 | 200,205 | 38 min | iPhone 13 Pro | 949 GBP |
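For context on how vertex and face counts like those above are produced, the sketch below runs Poisson surface reconstruction on a point cloud with Open3D and reports the same statistics. Poisson reconstruction and its parameters here are illustrative assumptions, not necessarily the method used in Section 3.3:

```python
import open3d as o3d

def reconstruct_and_report(ply_path, depth=9):
    """Poisson-reconstruct a mesh from a point cloud and print vertex/face counts."""
    pcd = o3d.io.read_point_cloud(ply_path)
    # Poisson reconstruction needs oriented normals; estimate them from local neighborhoods.
    pcd.estimate_normals(
        search_param=o3d.geometry.KDTreeSearchParamHybrid(radius=0.1, max_nn=30))
    mesh, _densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(
        pcd, depth=depth)
    print(f"vertices: {len(mesh.vertices)}, faces: {len(mesh.triangles)}")
    return mesh
```

The octree depth parameter trades mesh resolution against runtime, which is one reason vertex/face counts and time consumption vary across capture setups as in the two tables.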
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).