3D Structure from 2D Dimensional Images Using Structure from Motion Algorithms
Abstract
1. Introduction
2. The Historical Emara Palace in Najran
3. Materials and Methods
3.1. Digital Camera Used
3.2. Site Preparing and Imaging
3.3. SfM Workflow
4. Three-Dimensional Point Clouds Based on Image Data and Existing Software
4.1. Imagery Acquisition for Generation of Point Cloud
4.2. Software Packages
- Agisoft Metashape commercial software was used to process the overlapping captured images [29]. The general workflow is as follows: import of photographs; alignment (creation of a sparse point cloud for image orientation using the SIFT technique and bundle block adjustment); generation of the dense point cloud; and building of a mesh and texture.
- VisualSFM, developed by Wu (2013), http://ccwu.me/vsfm/ (accessed on 2 March 2022), is an open-source software package that processes the data on a local PC.
- Regard3D (https://www.regard3d.org/ (accessed on 3 March 2022)) is a free and open-source SfM software package.
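All three packages implement the same core SfM geometry: once camera poses are estimated, each pair of matched keypoints is triangulated into a 3D point of the sparse cloud. A minimal sketch of linear (DLT) triangulation with NumPy follows; the camera matrices and the 3D point are invented purely for illustration, not taken from the project data.

```python
import numpy as np

def triangulate_dlt(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one point seen in two views.

    P1, P2 : 3x4 camera projection matrices.
    x1, x2 : (u, v) pixel coordinates of the matched keypoint.
    Returns the 3D point in Euclidean coordinates.
    """
    # Each view contributes two linear constraints u*(p3.X) - (p1.X) = 0
    # and v*(p3.X) - (p2.X) = 0 on the homogeneous point X.
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # Null-space solution: right singular vector of the smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]

# Two toy cameras: identity pose, and a 1-unit baseline along x.
K = np.array([[800.0, 0, 320], [0, 800, 240], [0, 0, 1]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])

X_true = np.array([0.2, -0.1, 4.0])                # known synthetic 3D point
x1 = P1 @ np.append(X_true, 1); x1 = x1[:2] / x1[2]  # project into view 1
x2 = P2 @ np.append(X_true, 1); x2 = x2[:2] / x2[2]  # project into view 2

X_est = triangulate_dlt(P1, P2, x1, x2)
print(np.allclose(X_est, X_true, atol=1e-6))
```

In a full pipeline this step runs after feature matching and pose estimation, and the bundle adjustment then refines all points and camera parameters jointly by minimizing reprojection error.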
4.3. Results and Evaluation
4.3.1. Performance and Completeness
4.3.2. Accuracy
4.3.3. Direct Cloud-to-Cloud Comparison
5. Discussion
6. Conclusions
Funding
Informed Consent Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
- Peña-Villasenín, S.; Gil-Docampo, M.; Ortiz-Sanz, J. 3-D Modeling of Historic Façades Using SFM Photogrammetry: Metric Documentation of Different Building Types of a Historic Center. Int. J. Arch. Herit. 2017, 11, 871–890.
- Mancini, F.; Pirotti, F. Innovations in photogrammetry and remote sensing: Modern sensors, new processing strategies and frontiers in applications. Sensors 2021, 21, 2420.
- Sansoni, G.; Trebeschi, M.; Docchio, F. State-of-the-art and applications of 3D imaging sensors in industry, cultural heritage, medicine, and criminal investigation. Sensors 2009, 9, 568–601.
- Karagianni, A. Terrestrial laser scanning and satellite data in cultural heritage building documentation. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2021, 46, 361–366.
- Elkhrachy, I. Modeling and Visualization of Three Dimensional Objects Using Low-Cost Terrestrial Photogrammetry. Int. J. Arch. Herit. 2019, 14, 1456–1467.
- Moyano, J.E.; Nieto-Julián, J.E.; Lenin, L.M.; Bruno, S. Operability of Point Cloud Data in an Architectural Heritage Information Model. Int. J. Arch. Herit. 2021, 1–20.
- Kersten, T.P.; Mechelke, K.; Maziull, L. 3D model of Al Zubarah Fortress in Qatar—terrestrial laser scanning vs. dense image matching. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2015, 40, 1–8.
- Grussenmeyer, P.; Landes, T.; Voegtle, T.; Ringle, K. Comparison Methods of Terrestrial Laser Scanning, Photogrammetry and Tacheometry Data for Recording of Cultural Heritage Buildings. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2008, 37, 213–218. Available online: https://www.scopus.com/inward/record.uri?eid=2-s2.0-84981192332&partnerID=40&md5=49320dead7a8eec25ae4c19010b5c0b9 (accessed on 5 March 2022).
- Yastikli, N. Documentation of cultural heritage using digital photogrammetry and laser scanning. J. Cult. Herit. 2007, 8, 423–427.
- Lerma, J.L.; Navarro, S.; Cabrelles, M.; Villaverde, V. Terrestrial laser scanning and close range photogrammetry for 3D archaeological documentation: The Upper Palaeolithic Cave of Parpalló as a case study. J. Archaeol. Sci. 2010, 37, 499–507.
- Atkinson, K.B. Introduction to Modern Photogrammetry. Photogramm. Rec. 2003, 18, 329–330.
- Granshaw, S.I. Close Range Photogrammetry: Principles, Methods and Applications. Photogramm. Rec. 2010, 25, 203–204.
- Iglhaut, J.; Cabo, C.; Puliti, S.; Piermattei, L.; O’Connor, J.; Rosette, J. Structure from Motion Photogrammetry in Forestry: A Review. Curr. For. Rep. 2019, 5, 155–168.
- Micheletti, N.; Chandler, J.H.; Lane, S.N. Investigating the geomorphological potential of freely available and accessible structure-from-motion photogrammetry using a smartphone. Earth Surf. Process. Landf. 2014, 40, 473–486.
- Westoby, M.; Brasington, J.; Glasser, N.F.; Hambrey, M.J.; Reynolds, J.M. ‘Structure-from-Motion’ photogrammetry: A low-cost, effective tool for geoscience applications. Geomorphology 2012, 179, 300–314.
- Bemis, S.; Micklethwaite, S.; Turner, D.; James, M.R.; Akciz, S.; Thiele, S.T.; Bangash, H.A. Ground-based and UAV-based photogrammetry: A multi-scale, high-resolution mapping tool for structural geology and paleoseismology. J. Struct. Geol. 2014, 69, 163–178.
- Wang, S.; Clark, R.; Wen, H.; Trigoni, N. DeepVO: Towards end-to-end visual odometry with deep recurrent convolutional neural networks. In Proceedings of the 2017 IEEE International Conference on Robotics and Automation (ICRA), Singapore, 29 May–3 June 2017; pp. 2043–2050.
- Zhou, T.; Brown, M.; Snavely, N.; Lowe, D.G. Unsupervised learning of depth and ego-motion from video. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 1851–1858.
- Klodt, M.; Vedaldi, A. Supervising the new with the old: Learning SfM from SfM. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018; pp. 698–713.
- Liu, J.; Ding, H.; Shahroudy, A.; Duan, L.-Y.; Jiang, X.; Wang, G.; Kot, A.C. Feature boosting network for 3D pose estimation. IEEE Trans. Pattern Anal. Mach. Intell. 2019, 42, 494–501.
- Wei, X.; Zhang, Y.; Li, Z.; Fu, Y.; Xue, X. DeepSFM: Structure from Motion via Deep Bundle Adjustment. In European Conference on Computer Vision; Springer: Cham, Switzerland, 2020; pp. 230–247.
- Ulvi, A. Documentation, Three-Dimensional (3D) Modelling and visualization of cultural heritage by using Unmanned Aerial Vehicle (UAV) photogrammetry and terrestrial laser scanners. Int. J. Remote Sens. 2021, 42, 1994–2021.
- Esposito, S.; Fallavollita, P.; Wahbeh, W.; Nardinocchi, C.; Balsi, M. Performance evaluation of UAV photogrammetric 3D reconstruction. In Proceedings of the International Geoscience and Remote Sensing Symposium (IGARSS), Quebec City, QC, Canada, 13–18 July 2014; pp. 4788–4791.
- Altman, S.; Xiao, W.; Grayson, B. Evaluation of Low-Cost Terrestrial Photogrammetry for 3D Reconstruction of Complex Buildings. ISPRS Ann. Photogramm. Remote Sens. Spat. Inf. Sci. 2017, 4, 199–206.
- Kingsland, K. Comparative analysis of digital photogrammetry software for cultural heritage. Digit. Appl. Archaeol. Cult. Herit. 2020, 18, e00157.
- Niederheiser, R.; Mokroš, M.; Lange, J.; Petschko, H.; Prasicek, G.; Elberink, S.O. Deriving 3D point clouds from terrestrial photographs—Comparison of different sensors and software. ISPRS Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2016, 41, 685–692.
- Alidoost, F.; Arefi, H. Comparison of UAS-based photogrammetry software for 3D point cloud generation: A survey over a historical site. ISPRS Ann. Photogramm. Remote Sens. Spat. Inf. Sci. 2017, 4, 55–61.
- Remondino, F.; Rizzi, A. Reality-based 3D documentation of natural and cultural heritage sites—Techniques, problems, and examples. Appl. Geomat. 2010, 2, 85–100.
- Agisoft LLC. Agisoft PhotoScan User Manual: Professional Edition, Version 1.2; User Manuals; 2016; p. 97. Available online: http://www.agisoft.com/downloads/user-manuals/ (accessed on 15 March 2022).
- Tomasi, C.; Kanade, T. Shape and motion from image streams: A factorization method. Proc. Natl. Acad. Sci. USA 1993, 90, 9795–9802.
- Remondino, F.; Fraser, C. Digital camera calibration methods: Considerations and comparisons. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2006, 36, 266–272.
- Lowe, D.G. Distinctive Image Features from Scale-Invariant Keypoints. Int. J. Comput. Vis. 2004, 60, 91–110.
- Li, Q.; Wang, G.; Liu, J.; Chen, S. Robust Scale-Invariant Feature Matching for Remote Sensing Image Registration. IEEE Geosci. Remote Sens. Lett. 2009, 6, 287–291.
- Bay, H.; Tuytelaars, T.; Van Gool, L. SURF: Speeded up robust features. In Lecture Notes in Computer Science (including Subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics); Springer: Berlin/Heidelberg, Germany, 2006; Volume 3951, pp. 404–417.
- Herbert, B.; Andreas, E.; Tinne, T.; Luc, V.G. Speeded-up robust features (SURF). Comput. Vis. Image Underst. 2008, 110, 346–359.
- Schonberger, J.L.; Frahm, J.-M. Structure-from-motion revisited. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 4104–4113.
- Stewart, J. Calculus, Concepts and Contexts; Cengage Learning: Boston, MA, USA, 1998.
- Fischler, M.A.; Bolles, R.C. Random sample consensus: A paradigm for model fitting with applications to image analysis and automated cartography. Commun. ACM 1981, 24, 381–395.
- Kim, T.; Im, Y.-J. Automatic satellite image registration by combination of matching and random sample consensus. IEEE Trans. Geosci. Remote Sens. 2003, 41, 1111–1117.
- Furukawa, Y.; Ponce, J. Accurate, dense, and robust multiview stereopsis. IEEE Trans. Pattern Anal. Mach. Intell. 2009, 32, 1362–1376.
- Agarwal, S.; Furukawa, Y.; Snavely, N.; Simon, I.; Curless, B.; Seitz, S.M.; Szeliski, R. Building Rome in a day. Commun. ACM 2011, 54, 105–112.
- Vergauwen, M.; Van Gool, L. Web-based 3D reconstruction service. Mach. Vis. Appl. 2006, 17, 411–426.
- Bolles, R.C.; Baker, H.H.; Marimont, D.H. Epipolar-plane image analysis: An approach to determining structure from motion. Int. J. Comput. Vis. 1987, 1, 7–55.
- Fonstad, M.A.; Dietrich, J.T.; Courville, B.C.; Jensen, J.L.; Carbonneau, P.E. Topographic structure from motion: A new development in photogrammetric measurement. Earth Surf. Process. Landf. 2012, 38, 421–430.
- Ullman, S. The interpretation of structure from motion. Proc. R. Soc. B 1979, 203, 405–426.
- Akpo, H.A.; Atindogbé, G.; Obiakara, M.C.; Adjinanoukon, A.B.; Gbedolo, M.; Fonton, N.H. Accuracy of common stem volume formulae using terrestrial photogrammetric point clouds: A case study with savanna trees in Benin. J. For. Res. 2021, 32, 2415–2422.
- Dwyer, R.A. A faster divide-and-conquer algorithm for constructing Delaunay triangulations. Algorithmica 1987, 2, 137–151.
- Girardeau-Montaut, D.; Roux, M.; Marc, R.; Thibault, G. Change detection on points cloud data acquired with a ground laser scanner. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2005, 36, W19.
- Ranzuglia, G.; Callieri, M.; Dellepiane, M.; Cignoni, P.; Scopigno, R. MeshLab as a complete tool for the integration of photos and color with high resolution 3D geometry data. CAA 2012 Conf. Proc. 2013, 2, 406–416.
- Cignoni, P.; Rocchini, C.; Scopigno, R. Metro: Measuring Error on Simplified Surfaces. Comput. Graph. Forum 1998, 17, 167–174.
- Marapane, S.; Trivedi, M. Multi-primitive hierarchical (MPH) stereo analysis. IEEE Trans. Pattern Anal. Mach. Intell. 1994, 16, 227–240.
- Sun, Y.; Zhao, L.; Huang, S.; Yan, L.; Dissanayake, G. Line matching based on planar homography for stereo aerial images. ISPRS J. Photogramm. Remote Sens. 2015, 104, 1–17.
| Camera Parameter | Value |
|---|---|
| Camera model | Canon EOS Rebel T3i/600D |
| Sensor size | 22.3 mm × 14.9 mm |
| Pixel dimensions | 5184 × 3456 |
| Megapixels | 18.7 |
| Pixel size | 4.30 µm |
| Specification | Parameter | Value |
|---|---|---|
| Angle measurement | Display resolution | 0.1″ |
| | Accuracy | 1″ |
| Distance measurement | Reflectorless | 2 mm + 2 ppm |
| | Prism | 1.5 mm + 2 ppm |
| Telescope | Magnification | 30× |
| | Field of view | 1°30′ (1.66 gons); 2.7 m at 100 m |
| Software Product | Feature Matching | Bundle Adjustment | Completeness (Images) | Completeness (Points) |
|---|---|---|---|---|
| Agisoft Metashape | 36.12 | 77 | 58/98 | 47,133 |
| VisualSFM | 52.17 | 14.03 | 55/98 | 37,158 |
| Regard3D | 244.6 | 18.5 | 57/98 | 45,029 |
Nr. | Target Label | Error (m) | X Error (m) | Y Error (m) | Z Error (m) |
---|---|---|---|---|---|
1 | T1 | 0.01206 | −0.00419 | −0.00535 | 0.009961 |
2 | T3 | 0.011339 | 0.002024 | 0.001014 | −0.01111 |
3 | T11 | 0.002511 | 0.001865 | −0.00144 | −0.00087 |
4 | T13 | 0.001467 | 0.000207 | 0.001451 | −6.3 × 10⁻⁵ |
5 | T15 | 0.003757 | 0.001207 | 0.003447 | −0.00088 |
6 | T17 | 0.005028 | 0.002652 | 0.003855 | 0.001841 |
7 | T23 | 0.000654 | −6.8 × 10⁻⁵ | −0.00039 | 0.000522 |
8 | T25 | 0.005267 | 0.003245 | −0.00316 | −0.00269 |
RMSE | 0.007 m |
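The tabulated RMSE can be reproduced directly from the per-target total errors. A quick check in Python, with the values copied from the table above:

```python
import math

# Per-target total errors (m) for the checkpoints in the table above.
errors = [0.01206, 0.011339, 0.002511, 0.001467,
          0.003757, 0.005028, 0.000654, 0.005267]

# Root-mean-square error over the eight checkpoints.
rmse = math.sqrt(sum(e * e for e in errors) / len(errors))
print(round(rmse, 3))  # 0.007, matching the reported RMSE
```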
Nr. | Target Label | Error (m) | X Error (m) | Y Error (m) | Z Error (m) |
---|---|---|---|---|---|
1 | T2 | 0.008603 | 0.000777 | 0.004934 | −0.007 |
2 | T4 | 0.009766 | 0.0003 | −0.00155 | 0.009637 |
3 | T10 | 0.002518 | −0.00105 | 0.000635 | −0.0022 |
4 | T12 | 0.00461 | −0.00307 | −0.00313 | 0.001428 |
5 | T14 | 0.003192 | −0.0016 | −0.00261 | −0.00089 |
6 | T16 | 0.002553 | −0.00018 | −0.0023 | 0.001102 |
7 | T22 | 0.003538 | −0.00227 | 0.002307 | 0.001436 |
8 | T24 | 0.001872 | 0.000028 | 0.001862 | −0.00018 |
RMSE | 0.005 m |
Target Label | Agisoft Metashape Error (m) | VisualSFM Error (m) | Regard3D Error (m) |
---|---|---|---|
T1 | 0.00429 | 0.72945 | 0.03489 |
T2 | 0.00615 | 0.13299 | 0.02914 |
T3 | 0.00350 | 0.72811 | 0.02797 |
T4 | 0.00287 | 0.57834 | 0.04189 |
T5 | 0.00050 | 0.71424 | 0.02268 |
RMSE | 0.004 m | 0.620 m | 0.032 m |
RMSE without GCPs | 0.79 (pixels) | Not provided | Not provided |
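A direct cloud-to-cloud (C2C) comparison such as the one in Section 4.3.3 computes, for every point of the compared cloud, the distance to its nearest neighbour in the reference cloud. The following is a minimal brute-force sketch with NumPy on synthetic data; production tools such as CloudCompare accelerate the same search with an octree, and the point clouds here are randomly generated, not the palace data.

```python
import numpy as np

def cloud_to_cloud_distances(cloud, reference):
    """For each point in `cloud`, the Euclidean distance to its nearest
    neighbour in `reference` (brute force, O(n*m) memory)."""
    # Pairwise differences: shape (n, m, 3) via broadcasting.
    diff = cloud[:, None, :] - reference[None, :, :]
    dists = np.sqrt((diff ** 2).sum(axis=2))
    return dists.min(axis=1)

rng = np.random.default_rng(0)
ref = rng.uniform(0, 1, size=(500, 3))              # synthetic reference cloud
test = ref[:100] + rng.normal(0, 0.002, (100, 3))   # jittered copy, ~2 mm noise

d = cloud_to_cloud_distances(test, ref)
print(d.mean())  # small mean C2C distance, on the order of the added noise
```

Summary statistics of these per-point distances (mean, standard deviation, or an RMSE as in the tables above) then quantify how well two reconstructions, or a reconstruction and a reference survey, agree.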
© 2022 by the author. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Elkhrachy, I. 3D Structure from 2D Dimensional Images Using Structure from Motion Algorithms. Sustainability 2022, 14, 5399. https://doi.org/10.3390/su14095399