An Efficient and Robust Hybrid SfM Method for Large-Scale Scenes
Abstract
1. Introduction
- (1) Image clustering with compact spatial distribution: To avoid clustering results in which subclusters are only weakly connected internally, a multifactor joint scene-partitioning measure is constructed that combines the number of image pairs with homologous points, the overlap area between images, and the number of common neighbors of each image pair;
- (2) Image expansion considering connectivity: To improve image expansion efficiency while preserving as much connectivity among subclusters as possible, a completeness-rate-guided image expansion algorithm with balanced preallocation among subclusters is proposed;
- (3) Robust subcluster merging: To avoid the large error accumulation caused by long subcluster merging paths, a multilevel merging rule that considers subcluster connectivity is proposed. To preserve accuracy after merging, the better of the two candidate internal camera parameter sets of the subclusters to be merged, evaluated from a global perspective, is selected as the internal camera parameters of the merged cluster block.
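The multifactor joint measure in contribution (1) can be sketched as a single edge weight for normalized-cut clustering of the image match graph. This is a hedged illustration, not the paper's exact formulation: the normalization scheme, the function name, and the weights `alpha`, `beta`, and `gamma` are all assumptions.

```python
def edge_weight(n_matches, overlap_area, n_common_neighbors,
                max_matches, max_area, max_neighbors,
                alpha=1.0, beta=1.0, gamma=1.0):
    """Illustrative multifactor edge weight between two images.

    Combines the three factors named in contribution (1): homologous
    (matched) points between the pair, image overlap area, and the
    number of common neighbors in the image match graph. Each factor is
    normalized to [0, 1] by its maximum over the whole graph; alpha,
    beta, and gamma are hypothetical tuning weights.
    """
    f_match = n_matches / max_matches
    f_overlap = overlap_area / max_area
    f_neighbor = n_common_neighbors / max_neighbors
    return alpha * f_match + beta * f_overlap + gamma * f_neighbor
```

Such weights would then populate the affinity matrix consumed by a normalized-cut solver (as in Shi and Malik's method) to obtain spatially compact subclusters.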
2. Related Works
2.1. Existing Flat Hybrid SfM Method
2.2. Deficiencies of Existing Methods
3. Methodology
3.1. Scene Partitioning
3.1.1. Image Clustering Algorithm Based on Multifactor Joint Measure
3.1.2. Subcluster Expansion Algorithm Considering Partition Connectivity
3.2. Local 3D Sparse Reconstruction of Subclusters
- Global SfM does not require frequent global bundle adjustment, so it is more efficient for the local sparse reconstruction of subclusters.
- Because there is no risk of the drift that incremental SfM suffers during local reconstruction, subclusters can be made as large as possible (hundreds or even thousands of images), which reduces image redundancy between subclusters and the number of merging operations.
- Since the local SfM of each subcluster uses global SfM, drift and error accumulation are unlikely. Therefore, only one global bundle adjustment is required during subcluster merging, which improves merging efficiency.
3.3. Multilevel Subcluster Merging Considering Subcluster Connectivity
- The case where neither of the two subclusters has been merged. First, the feature points on the images shared by the current cluster pair are collected, and their corresponding 3D structure points are used to estimate an alignment transformation (translation, rotation, and scaling). The camera internal parameters, the images with their external parameters, and the 3D structure points of the two subclusters are then merged into a new cluster block. Finally, to resolve the internal parameter inconsistency, each candidate set of internal parameters shared by the two subclusters is combined with the 3D structure points of the cluster block for spatial resection, the reprojection error is computed, and the set with the smaller reprojection error is adopted as the internal camera parameters of the cluster block.
- The case where a neighboring subcluster has already been merged. First, the cluster block to be merged with the subcluster is identified. Then, the feature points on the images shared by the current subcluster and the cluster block are collected, their corresponding 3D structure points are used to estimate an alignment transformation (translation, rotation, and scaling), and the camera internal parameters, the images with their external parameters, and the 3D structure points of the subcluster are merged into the cluster block. Finally, to resolve the internal parameter inconsistency, each candidate set of internal parameters is combined with the 3D structure points of the cluster block for spatial resection, the reprojection error is computed, and the set with the smaller reprojection error is adopted as the internal camera parameters of the cluster block.
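The alignment and intrinsics-selection steps above can be sketched as follows. This is an illustrative reconstruction under common assumptions (Umeyama's closed-form similarity estimation and a pinhole reprojection model), not the authors' exact implementation; the function names are hypothetical.

```python
import numpy as np

def similarity_transform(src, dst):
    """Closed-form similarity estimation (Umeyama, 1991): find scale s,
    rotation R, and translation t such that dst ~ s * R @ src + t, from
    corresponding 3D structure points shared by two subclusters."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    xs, xd = src - mu_s, dst - mu_d
    cov = xd.T @ xs / len(src)                 # 3x3 cross-covariance
    U, D, Vt = np.linalg.svd(cov)
    S = np.eye(3)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:
        S[2, 2] = -1.0                         # guard against reflections
    R = U @ S @ Vt
    s = np.trace(np.diag(D) @ S) / (xs ** 2).sum(axis=1).mean()
    t = mu_d - s * R @ mu_s
    return s, R, t

def mean_reproj_error(K, R, t, points3d, points2d):
    """Mean pixel reprojection error of 3D points under one camera;
    used to decide which candidate intrinsics K to keep for the merged
    cluster block."""
    proj = (K @ (R @ points3d.T + t[:, None])).T
    proj = proj[:, :2] / proj[:, 2:3]          # perspective division
    return np.linalg.norm(proj - points2d, axis=1).mean()
```

With the pair's candidate intrinsics `K_a` and `K_b`, the set giving the smaller `mean_reproj_error` over the merged structure points (the resection check described in the text) would be adopted for the cluster block.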
4. Experiment and Results
4.1. Experimental Data and Environment
4.2. Experimental Results of Subcluster Partitioning
4.2.1. Image Clustering Compactness Verification
4.2.2. Subcluster Expansion Connectivity and Efficiency Verification
4.3. Experimental Results of Subcluster Merging Robustness
4.4. Experimental Results of Accuracy and Time Comparison
4.5. Large-Scale Aerial Image Dataset
5. Discussion
5.1. Comparative Analysis of Subcluster Partitioning
5.1.1. Image Clustering Compactness Analysis
5.1.2. Subcluster Expansion Connectivity and Efficiency Analysis
5.2. Comparative Analysis of Subcluster Merging Robustness
5.3. Comparative Analysis of Accuracy and Time
6. Conclusions
- Image clustering in the subcluster partitioning stage. The multifactor joint normalized-cut weight proposed in this paper improves the spatial compactness of subcluster images, especially when partitioning multiview images with high overlap and large tilt angles. The clustering results of this method do not contain weakly connected or spatially discontinuous image pairs.
- Image expansion in the subcluster partitioning stage. The completeness-guided image expansion algorithm proposed in this paper enhances inter-subcluster connectivity and balance, and its expansion efficiency is high, especially on large-scale datasets.
- Subcluster merging stage. A multilevel subcluster merging rule that considers subcluster connectivity is proposed. Compared with existing state-of-the-art methods, the proposed method successfully merges subclusters on both public datasets and oblique photographic datasets and performs better.
- Accuracy and time. Compared with four advanced methods (Colmap, OpenMVG, 3Df, and GraphSfM), the proposed method performs best in reconstruction success rate, accuracy, and time.
- Large-scale aerial image datasets. Under a cluster parallel computing framework, the proposed method performs well in both accuracy and time.
Author Contributions
Funding
Conflicts of Interest
References
Dataset | Number of Images | Image Size
---|---|---
Gerrard-Hall | 100 | 5616 × 3744
South-Building | 128 | 3072 × 2304
Person-Hall | 330 | 5616 × 3744
Echillais-Church | 353 | 5616 × 3744
Graham-Hall | 1273 | 5616 × 3744
Dataset | Number of Images | GSD (cm) | Area (km²) | Ortho Image Size | Oblique Image Size
---|---|---|---|---|---
Area 1 | 162 | 4.08 | 0.075 | 5472 × 3648 | 5472 × 3648
Area 2 | 637 | 3.37 | 0.30 | 5472 × 3648 | 5472 × 3648
Area 3 | 2776 | 2.26 | 1.24 | 6000 × 4000 | 6000 × 4000
Area 4 | 5265 | 1.30 | 0.32 | 6000 × 4000 | 6000 × 4000
Area 5 | 8221 | 3.37 | 6.42 | 5472 × 3648 | 5472 × 3648
Area 6 | 11,795 | 6.50 | 40.30 | 11,674 × 7514 | 8900 × 6650
Area 7 | 34,364 | 3.00 | 10.62 | 7952 × 5304 | 7952 × 5304
Area 8 | 39,040 | 2.00 | 2.30 | 6000 × 4000 | 6000 × 4000
Area 9 | 48,335 | 1.20 | 2.93 | 6000 × 4000 | 6000 × 4000
Area 10 | 10,500 | 1.20 | 0.58 | 6000 × 4000 | 6000 × 4000
Dataset | Number of Images | Subcluster Size | Traditional Method (ms) | The Proposed Method (ms) |
---|---|---|---|---
Area 3 | 2776 | 500 | 2713 | 43 |
Area 6 | 11,795 | 2000 | 48,589 | 67 |
Area 7 | 34,364 | 2000 | 938,433 | 439 |
Area 8 | 39,040 | 2000 | 772,184 | 355 |
Area 9 | 48,335 | 2000 | 5,678,460 | 1340 |
Method | Metric | Echillais-Church | Graham-Hall | Area 3
---|---|---|---|---
Colmap | Nc | 288 | 1260 | 2688
 | Np | 232,244 | 452,965 | 1,588,456
 | Err | 0.31 | 0.57 | 1.34
OpenMVG | Nc | 342 | 1213 | 2694
 | Np | 634,240 | 981,723 | 1,467,779
 | Err | 0.48 | 0.52 | 0.72
3Df | Nc | 352 | 886 | 2713
 | Np | 102,129 | 149,796 | 954,076
 | Err | 0.31 | 0.36 | 0.35
GraphSfM | Nc | 352 | 855 | 1627
 | Np | 181,424 | 1,189,188 | 1,308,133
 | Err | 0.39 | 0.36 | 0.45
Ours | Nc | 352 | 1265 | 2697
 | Np | 22,576 | 49,893 | 191,845
 | Err | 0.29 | 0.31 | 0.33
Method | Metric | Gerrard-Hall | South-Building | Person-Hall | Echillais-Church | Graham-Hall
---|---|---|---|---|---|---
Colmap | Nc | 100 | 128 | 330 | 288 | 1260
 | Np | 56,426 | 86,972 | 179,168 | 232,244 | 452,965
 | Err | 0.48 | 0.41 | 0.48 | 0.31 | 0.57
OpenMVG | Nc | 99 | 128 | 329 | 343 | ---
 | Np | 33,280 | 86,465 | 788,726 | 634,240 | ---
 | Err | 0.47 | 0.30 | 0.45 | 0.48 | ---
3Df | Nc | 100 | 128 | --- | --- | ---
 | Np | 20,235 | 25,065 | --- | --- | ---
 | Err | 0.28 | 0.31 | --- | --- | ---
GraphSfM | Nc | 100 | 128 | 328 | 352 | ---
 | Np | 54,914 | 57,311 | 130,432 | 181,424 | ---
 | Err | 0.33 | 0.27 | 0.33 | 0.39 | ---
Ours | Nc | 100 | 128 | 330 | 353 | 1259
 | Np | 4811 | 6759 | 19,783 | 22,576 | 49,893
 | Err | 0.24 | 0.27 | 0.28 | 0.29 | 0.31
Method | Metric | Area 1 | Area 2 | Area 3 | Area 4 | Area 10
---|---|---|---|---|---|---
Colmap | Nc | 162 | 637 | 2688 | *** | ***
 | Np | 72,655 | 448,023 | 1,588,456 | *** | ***
 | Err | 0.49 | 0.53 | 0.94 | *** | ***
OpenMVG | Nc | 162 | 637 | 2694 | 5265 | 10,499
 | Np | 68,409 | 448,206 | 1,467,779 | 2,270,331 | 4,684,135
 | Err | 0.70 | 0.75 | 0.72 | 0.69 | 0.74
3Df | Nc | 162 | 637 | 2713 | 5250 | 10,497
 | Np | 54,801 | 220,078 | 954,076 | 2,219,910 | 2,885,045
 | Err | 0.26 | 0.35 | 0.35 | 0.34 | 0.73
GraphSfM | Nc | 162 | 637 | --- | --- | 10,493
 | Np | 47,634 | 302,216 | --- | --- | 5,744,895
 | Err | 0.50 | 0.54 | --- | --- | 0.45
Ours | Nc | 162 | 637 | 2713 | 5265 | 10,499
 | Np | 14,179 | 40,376 | 191,845 | 440,580 | 882,847
 | Err | 0.22 | 0.30 | 0.33 | 0.33 | 0.34
Method | Gerrard-Hall | South-Building | Person-Hall | Echillais-Church | Graham-Hall
---|---|---|---|---|---
Colmap | 183 | 298 | 981 | 2257 | 19,384
OpenMVG | 9 | 30 | 1366 | 241 | ---
 | 25 | 97 | 3878 | 867 | ---
 | 34 | 127 | 5244 | 1108 | ---
3Df | 12 | 19 | --- | --- | ---
 | - | - | --- | --- | ---
 | 1 | 1 | --- | --- | ---
 | 13 | 20 | --- | --- | ---
GraphSfM | 271 | 214 | 729 | 680 | ---
 | 4 | 3 | 8 | 1 | ---
 | 275 | 217 | 747 | 741 | ---
Ours | 101 | 181 | 3220 | 1235 | 4814
 | 1 | 1 | 48 | 59 | 34
 | 2 | 14 | 19 | 40 | 630
 | 103 | 196 | 3287 | 1334 | 5478
Method | Area 1 | Area 2 | Area 3 | Area 4 | Area 10
---|---|---|---|---|---
Colmap | 341 | 5451 | 7,563,501 | *** | ***
OpenMVG | 20 | 343 | 1466 | 7639 | 41,367
 | 30 | 896 | 2446 | 18,083 | 44,627
 | 50 | 1239 | 3912 | 25,722 | 85,994
3Df | 23 | 402 | 1112 | 1809 | 22,143
 | - | - | 106 | 2136 | 2694
 | 1 | 134 | 6 | 548 | 1912
 | 24 | 536 | 1224 | 4493 | 27,531
GraphSfM | 179 | 2372 | --- | --- | 180,570
 | 5 | 73 | --- | --- | 17,320
 | 184 | 2445 | --- | --- | 197,890
Ours | 88 | 918 | 1408 | 2092 | 20,666
 | 1 | 3 | 90 | 34 | 494
 | 4 | 23 | 227 | 1245 | 623
 | 93 | 944 | 1725 | 3371 | 21,783
Data | N | Nc | Np | Err | T |
---|---|---|---|---|---
Area 6 | 11,795 | 11,326 | 890,934 | 0.33 | 178 |
Area 7 | 34,435 | 33,961 | 2,854,957 | 0.35 | 628 |
Area 8 | 39,040 | 38,819 | 3,354,213 | 0.34 | 1066 |
Area 9 | 48,335 | 48,146 | 3,957,959 | 0.35 | 1359 |
© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Liu, Z.; Qv, W.; Cai, H.; Guan, H.; Zhang, S. An Efficient and Robust Hybrid SfM Method for Large-Scale Scenes. Remote Sens. 2023, 15, 769. https://doi.org/10.3390/rs15030769