Overview of Underwater 3D Reconstruction Technology Based on Optical Images
Abstract
1. Introduction
- (1) We use the CiteSpace software to visually analyze the papers published on underwater 3D reconstruction over the past two decades, which conveniently and intuitively displays the research content and research hotspots in this field.
- (2) We discuss the challenges that image-based reconstruction faces in the underwater environment and the solutions proposed by current researchers.
- (3) We systematically introduce the main optical methods currently in wide use for the 3D reconstruction of underwater images, including structure from motion, structured light, photometric stereo, stereo vision and underwater photogrammetry, and review the classic approaches with which researchers have applied them. Moreover, because sonar is widely used in underwater 3D reconstruction, this paper also introduces and summarizes underwater 3D reconstruction methods based on acoustic images and optical–acoustic image fusion.
2. Development Status of Underwater 3D Reconstruction
Analysis of the Development of Underwater 3D Reconstruction Based on the Literature
3. Challenges Posed by the Underwater Environment
- (1) The underwater environment is complex, and the underwater scenes that can be reached are limited, so it is difficult to deploy systems and operate equipment [32].
- (2) Data collection is difficult, requiring divers or specialized equipment, and the demands on the collection personnel are high [33].
- (3) The optical properties of the water body and insufficient light lead to dark and blurred images [34]. Light absorption can darken and blur the borders of an image, producing a vignette-like effect.
- (4) When a camera in air captures underwater objects, refraction occurs between the sensor and the object at the air–glass and glass–water interfaces of the housing because of the difference in density between the media. This alters the camera's intrinsic parameters and degrades algorithm performance when processing the images [35], so a specific calibration is required [36].
- (5) When photons propagate in an aqueous medium, they are affected by particles in the water, which can scatter or completely absorb them, attenuating the signal that finally reaches the image sensor [37]. The red, green and blue wavelengths are attenuated at different rates, and the effect is immediately apparent in a raw underwater image: the red channel attenuates the most and the blue channel the least, producing the characteristic blue-green cast [38]. A numerical sketch illustrating this attenuation and the refraction described in item (4) follows this list.
- (6) Images taken in shallow water (less than 10 m deep) may be severely affected by sunlight flicker, which causes intense illumination variations as sunlight refracts at the shifting air–water interface. This flickering can rapidly change the appearance of the scene, making feature extraction and matching for basic image-processing functions more difficult [39].
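To make items (4) and (5) concrete, the following minimal sketch (Python with NumPy; the refractive indices are standard textbook values, and the per-channel attenuation coefficients are illustrative placeholders rather than measurements from the cited works) traces a ray across an air–glass–water flat port with Snell's law and applies Beer–Lambert attenuation to each color channel:

```python
# A minimal sketch of flat-port refraction and wavelength-dependent attenuation.
# The attenuation coefficients below are illustrative placeholders, not measured values.
import numpy as np

def refract_angle(theta_in_deg, n_in, n_out):
    """Snell's law: n_in * sin(theta_in) = n_out * sin(theta_out)."""
    s = n_in * np.sin(np.radians(theta_in_deg)) / n_out
    return np.degrees(np.arcsin(s))

# A ray leaving the in-air camera crosses the glass port and enters the water.
theta_air = 30.0                                      # incidence angle in air (degrees)
theta_glass = refract_angle(theta_air, 1.0, 1.5)      # air -> glass
theta_water = refract_angle(theta_glass, 1.5, 1.33)   # glass -> water
print(f"air {theta_air:.1f} deg -> glass {theta_glass:.1f} deg -> water {theta_water:.1f} deg")

# Beer-Lambert attenuation over a 5 m path; red is absorbed fastest, blue least.
attenuation = {"red": 0.60, "green": 0.15, "blue": 0.07}   # 1/m, illustrative only
path_length = 5.0
for channel, c in attenuation.items():
    print(f"{channel}: fraction surviving {path_length} m = {np.exp(-c * path_length):.3f}")
```

The bending of the ray at each interface is what changes the camera's apparent intrinsic parameters, and the unequal per-channel attenuation is what produces the blue-green cast noted in item (5).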
3.1. Underwater Image Degradation
3.1.1. Reflection or Refraction Effects
3.1.2. Absorption or Scattering Effects
3.2. Underwater Camera Calibration
- (1) Developing new calibration methods with a refraction-correction capability. Gu et al. [69] proposed an innovative and effective approach to medium-driven underwater camera calibration that can precisely calibrate underwater camera parameters, such as the orientation and location of the transparent glass port. To better construct the geometric constraints and compute initial values of the underwater camera parameters, the calibration data are obtained from the optical-path variations created by refraction between the different media. They also proposed a quaternion-based parameter-optimization method aimed at improving the calibration accuracy of underwater camera systems.
- (2) Improving existing algorithms to reduce the refraction error. For example, Du et al. [70] established a real underwater camera calibration image dataset to improve the accuracy of underwater camera calibration. The outcomes of conventional calibration methods are optimized with a slime mold optimization algorithm that combines best-neighborhood perturbation and reverse-learning techniques. The precision and effectiveness of the proposed algorithm are verified against the seagull optimization algorithm (SOA) and the particle swarm optimization (PSO) algorithm. A simplified calibration-refinement sketch follows this list.
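As a rough illustration of strategy (2), the sketch below (a simplification under stated assumptions, not the method of [69] or [70]) refines a camera's effective in-water intrinsics by minimizing checkerboard reprojection error with a general-purpose least-squares solver; the cited works instead use metaheuristics (slime mold, SOA, PSO) or full refractive models, and the synthetic target data here are placeholders:

```python
# A minimal calibration-refinement sketch (not the method of [69] or [70]):
# refine effective in-water intrinsics by minimizing reprojection error.
# A flat port roughly inflates the apparent focal length by the refractive
# index of water (~1.33); full refractive models trace rays through the housing.
import numpy as np
from scipy.optimize import least_squares

def project(params, pts3d):
    """Pinhole projection with effective (in-water) intrinsics fx, fy, cx, cy."""
    fx, fy, cx, cy = params
    x = pts3d[:, 0] / pts3d[:, 2]
    y = pts3d[:, 1] / pts3d[:, 2]
    return np.stack([fx * x + cx, fy * y + cy], axis=1)

def residuals(params, pts3d, pts2d):
    return (project(params, pts3d) - pts2d).ravel()

# Synthetic planar grid 1.5 m in front of the camera, imaged through water.
gx, gy = np.meshgrid(np.linspace(-0.2, 0.2, 6), np.linspace(-0.15, 0.15, 5))
pts3d = np.stack([gx.ravel(), gy.ravel(), np.full(gx.size, 1.5)], axis=1)
truth = np.array([800.0 * 1.33, 800.0 * 1.33, 320.0, 240.0])  # refraction-inflated focal length
pts2d = project(truth, pts3d) + np.random.normal(0.0, 0.3, (pts3d.shape[0], 2))

guess = np.array([800.0, 800.0, 310.0, 230.0])  # in-air calibration as the starting point
fit = least_squares(residuals, guess, args=(pts3d, pts2d))
print("refined underwater intrinsics:", np.round(fit.x, 1))
```

In this toy setup the refined in-water focal length comes out roughly 1.33 times the in-air value, which is why dedicated refractive models or in-water re-calibration are preferred over reusing in-air parameters.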
4. Optical Methods
4.1. Structure from Motion
4.2. Photometric Stereo
4.3. Structured Light
4.4. Stereo Vision
4.5. Underwater Photogrammetry
5. Acoustic Image Methods
5.1. Sonar
5.2. Optical–Acoustic Method Fusion
6. Conclusions and Prospect
6.1. Conclusions
6.2. Prospect
- (1) Improving reconstruction accuracy and efficiency. Image-based underwater 3D reconstruction can currently achieve high accuracy, but its efficiency and accuracy in large-scale underwater scenes still need to be improved. Future research can pursue this by optimizing algorithms, improving sensor technology and increasing computing speed. For example, sensor technology can be advanced by improving sensor resolution, sensitivity and frequency, while high-performance computing platforms and optimized algorithms can accelerate computation and thereby improve the efficiency of underwater three-dimensional reconstruction.
- (2) Solving the multimodal fusion problem. Image-based underwater 3D reconstruction has achieved good results, but owing to the special underwater environment, no single imaging system can meet all underwater 3D reconstruction needs across different ranges and resolutions. Although researchers have applied homogeneous and heterogeneous sensor fusion to underwater three-dimensional reconstruction, the degree and effect of fusion have not yet reached an ideal state, and further research is needed in this area.
- (3) Improving real-time reconstruction. Real-time underwater three-dimensional reconstruction is an important direction for future research. Because of the high computational complexity of image-based 3D reconstruction, real-time reconstruction is currently difficult to achieve. It is hoped that future research will reduce this complexity so that image-based 3D reconstruction can be applied in real time. Real-time underwater 3D reconstruction can provide timelier and more accurate data support for applications such as underwater robots, underwater detection, and underwater search and rescue, and therefore has significant application value.
- (4) Developing algorithms for evaluation indicators. At present, there are few algorithms for evaluating reconstruction results; their development is relatively slow, and the overall research is not yet mature. Future research on evaluation algorithms should pay more attention to combining global and local assessment, as well as visual accuracy and geometric accuracy, in order to evaluate 3D reconstruction results more comprehensively. A simple geometric-accuracy sketch follows this list.
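As one example of the geometric-accuracy side of item (4), the sketch below (an illustration under assumptions, not an established underwater benchmark) scores a reconstructed point cloud against a reference model with the symmetric Chamfer distance; a complete evaluation scheme would combine such a global score with local, per-region statistics and a visual-quality criterion:

```python
# A minimal geometric-accuracy sketch: symmetric Chamfer distance between a
# reconstructed point cloud and a reference model (units follow the input data).
import numpy as np
from scipy.spatial import cKDTree

def chamfer_distance(reconstructed, reference):
    """Mean nearest-neighbour distance, averaged over both directions."""
    d_rec_to_ref, _ = cKDTree(reference).query(reconstructed)
    d_ref_to_rec, _ = cKDTree(reconstructed).query(reference)
    return 0.5 * (d_rec_to_ref.mean() + d_ref_to_rec.mean())

# Toy example: a noisy, slightly shifted copy of a reference surface patch.
rng = np.random.default_rng(0)
reference = rng.uniform(0.0, 1.0, size=(2000, 3))
reconstructed = reference + rng.normal(0.0, 0.01, size=reference.shape) + np.array([0.02, 0.0, 0.0])
print(f"Chamfer distance: {chamfer_distance(reconstructed, reference):.4f}")
```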
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
Abbreviations
AUV | Autonomous Underwater Vehicle |
CNNs | Convolutional Neural Networks |
CTAR | Cube-Type Artificial Reef |
EKF | Extended Kalman Filter |
EoR | Ellipse of Refraction |
ERH | Enhancement–Registration–Homogenization |
FLMS | Forward-Looking Multibeam Sonar |
GPS | Global Positioning System |
ICP | Iterative Closest Point |
IMU | Inertial Measurement Unit |
IS | Imaging Sonar |
LTS | Least Trimmed Squares |
LTS-RA | Least Trimmed Square Rotation Averaging |
MBS | Multibeam Sonar |
MSIS | Mechanical Scanning Imaging Sonar |
MUMC | Minimum Uncertainty Maximum Consensus |
PMVS | Patches-based Multi-View Stereo |
PSO | Particle Swarm Optimization |
RANSAC | Random Sample Consensus |
RD | Refractive Depth |
ROS | Robot Operating System |
ROV | Remotely Operated Vehicle |
RPCA | Robust Principal Component Analysis |
RSfM | Refractive Structure from Motion |
VIO | Visual–Inertial Odometry |
SAD | Sum of Absolute Differences |
SAM | Smoothing And Mapping |
SBL | Short Baseline |
SBS | Single-Beam Sonar |
SGM | Semi-Global Matching |
SfM | Structure from Motion |
SIFT | Scale-Invariant Feature Transform |
SL | Structured Light |
SLAM | Simultaneous Localization and Mapping |
SOA | Seagull Optimization Algorithm |
SSS | Side-Scan Sonar |
SURF | Speeded-Up Robust Features |
SV | Stereo Vision |
SVP | Single View Point |
References
- Blais, F. Review of 20 years of range sensor development. J. Electron. Imaging 2004, 13, 231–243. [Google Scholar] [CrossRef]
- Malamas, E.N.; Petrakis, E.G.; Zervakis, M.; Petit, L.; Legat, J.D. A survey on industrial vision systems, applications and tools. Image Vis. Comput. 2003, 21, 171–188. [Google Scholar] [CrossRef]
- Massot-Campos, M.; Oliver-Codina, G. Optical sensors and methods for underwater 3D reconstruction. Sensors 2015, 15, 31525–31557. [Google Scholar] [CrossRef] [PubMed]
- Qi, Z.; Zou, Z.; Chen, H.; Shi, Z. 3D Reconstruction of Remote Sensing Mountain Areas with TSDF-Based Neural Networks. Remote Sens. 2022, 14, 4333. [Google Scholar]
- Cui, B.; Tao, W.; Zhao, H. High-Precision 3D Reconstruction for Small-to-Medium-Sized Objects Utilizing Line-Structured Light Scanning: A Review. Remote Sens. 2021, 13, 4457. [Google Scholar]
- Lo, Y.; Huang, H.; Ge, S.; Wang, Z.; Zhang, C.; Fan, L. Comparison of 3D Reconstruction Methods: Image-Based and Laser-Scanning-Based. In Proceedings of the International Symposium on Advancement of Construction Management and Real Estate, Chongqing, China, 29 November–2 December 2019; pp. 1257–1266. [Google Scholar]
- Shortis, M. Calibration techniques for accurate measurements by underwater camera systems. Sensors 2015, 15, 30810–30826. [Google Scholar] [CrossRef]
- Xi, Q.; Rauschenbach, T.; Daoliang, L. Review of underwater machine vision technology and its applications. Mar. Technol. Soc. J. 2017, 51, 75–97. [Google Scholar] [CrossRef]
- Castillón, M.; Palomer, A.; Forest, J.; Ridao, P. State of the art of underwater active optical 3D scanners. Sensors 2019, 19, 5161. [Google Scholar]
- Sahoo, A.; Dwivedy, S.K.; Robi, P. Advancements in the field of autonomous underwater vehicle. Ocean. Eng. 2019, 181, 145–160. [Google Scholar] [CrossRef]
- Chen, C.; Ibekwe-SanJuan, F.; Hou, J. The structure and dynamics of cocitation clusters: A multiple-perspective cocitation analysis. J. Am. Soc. Inf. Sci. Technol. 2010, 61, 1386–1409. [Google Scholar] [CrossRef]
- Chen, C.; Dubin, R.; Kim, M.C. Emerging trends and new developments in regenerative medicine: A scientometric update (2000–2014). Expert Opin. Biol. Ther. 2014, 14, 1295–1317. [Google Scholar] [CrossRef]
- Chen, C. Science mapping: A systematic review of the literature. J. Data Inf. Sci. 2017, 2, 1–40. [Google Scholar] [CrossRef]
- Chen, C. Cascading citation expansion. arXiv 2018, arXiv:1806.00089. [Google Scholar]
- Chen, B.; Xia, M.; Qian, M.; Huang, J. MANet: A multi-level aggregation network for semantic segmentation of high-resolution remote sensing images. Int. J. Remote Sens. 2022, 43, 5874–5894. [Google Scholar] [CrossRef]
- Song, L.; Xia, M.; Weng, L.; Lin, H.; Qian, M.; Chen, B. Axial Cross Attention Meets CNN: Bibranch Fusion Network for Change Detection. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2023, 16, 32–43. [Google Scholar] [CrossRef]
- Lu, C.; Xia, M.; Lin, H. Multi-scale strip pooling feature aggregation network for cloud and cloud shadow segmentation. Neural Comput. Appl. 2022, 34, 6149–6162. [Google Scholar] [CrossRef]
- Qu, Y.; Xia, M.; Zhang, Y. Strip pooling channel spatial attention network for the segmentation of cloud and cloud shadow. Comput. Geosci. 2021, 157, 104940. [Google Scholar] [CrossRef]
- Hu, K.; Weng, C.; Shen, C.; Wang, T.; Weng, L.; Xia, M. A multi-stage underwater image aesthetic enhancement algorithm based on a generative adversarial network. Eng. Appl. Artif. Intell. 2023, 123, 106196. [Google Scholar] [CrossRef]
- Lu, C.; Xia, M.; Qian, M.; Chen, B. Dual-Branch Network for Cloud and Cloud Shadow Segmentation. IEEE Trans. Geosci. Remote Sens. 2022, 60, 1–12. [Google Scholar] [CrossRef]
- Shuai Zhang, L.W. STPGTN–A Multi-Branch Parameters Identification Method Considering Spatial Constraints and Transient Measurement Data. Comput. Model. Eng. Sci. 2023, 136, 2635–2654. [Google Scholar] [CrossRef]
- Hu, K.; Ding, Y.; Jin, J.; Weng, L.; Xia, M. Skeleton Motion Recognition Based on Multi-Scale Deep Spatio-Temporal Features. Appl. Sci. 2022, 12, 1028. [Google Scholar] [CrossRef]
- Wang, Z.; Xia, M.; Lu, M.; Pan, L.; Liu, J. Parameter Identification in Power Transmission Systems Based on Graph Convolution Network. IEEE Trans. Power Deliv. 2022, 37, 3155–3163. [Google Scholar] [CrossRef]
- Beall, C.; Lawrence, B.J.; Ila, V.; Dellaert, F. 3D reconstruction of underwater structures. In Proceedings of the 2010 IEEE/RSJ International Conference on Intelligent Robots and Systems IEEE, Taipei, Taiwan, 18–22 October 2010; pp. 4418–4423. [Google Scholar]
- Bruno, F.; Bianco, G.; Muzzupappa, M.; Barone, S.; Razionale, A.V. Experimentation of structured light and stereo vision for underwater 3D reconstruction. ISPRS J. Photogramm. Remote Sens. 2011, 66, 508–518. [Google Scholar] [CrossRef]
- Bianco, G.; Gallo, A.; Bruno, F.; Muzzupappa, M. A comparative analysis between active and passive techniques for underwater 3D reconstruction of close-range objects. Sensors 2013, 13, 11007–11031. [Google Scholar] [CrossRef] [PubMed]
- Jordt, A.; Köser, K.; Koch, R. Refractive 3D reconstruction on underwater images. Methods Oceanogr. 2016, 15, 90–113. [Google Scholar] [CrossRef]
- Kang, L.; Wu, L.; Wei, Y.; Lao, S.; Yang, Y.H. Two-view underwater 3D reconstruction for cameras with unknown poses under flat refractive interfaces. Pattern Recognit. 2017, 69, 251–269. [Google Scholar] [CrossRef]
- Chadebecq, F.; Vasconcelos, F.; Lacher, R.; Maneas, E.; Desjardins, A.; Ourselin, S.; Vercauteren, T.; Stoyanov, D. Refractive two-view reconstruction for underwater 3d vision. Int. J. Comput. Vis. 2020, 128, 1101–1117. [Google Scholar] [CrossRef]
- Song, H.; Chang, L.; Chen, Z.; Ren, P. Enhancement-registration-homogenization (ERH): A comprehensive underwater visual reconstruction paradigm. IEEE Trans. Pattern Anal. Mach. Intell. 2021, 44, 6953–6967. [Google Scholar] [CrossRef]
- Su, Z.; Pan, J.; Lu, L.; Dai, M.; He, X.; Zhang, D. Refractive three-dimensional reconstruction for underwater stereo digital image correlation. Opt. Express 2021, 29, 12131–12144. [Google Scholar] [CrossRef]
- Drap, P.; Seinturier, J.; Scaradozzi, D.; Gambogi, P.; Long, L.; Gauch, F. Photogrammetry for virtual exploration of underwater archeological sites. In Proceedings of the 21st International Symposium CIPA, Athens, Greece, 1–6 October 2007; p. 1e6. [Google Scholar]
- Gawlik, N. 3D Modelling of Underwater Archaeological Artefacts. Master’s Thesis, Institutt for Bygg, Anlegg Og Transport, Trondheim, Norway, 2014. [Google Scholar]
- Pope, R.M.; Fry, E.S. Absorption spectrum (380–700 nm) of pure water. II. Integrating cavity measurements. Appl. Opt. 1997, 36, 8710–8723. [Google Scholar] [CrossRef]
- Schechner, Y.Y.; Karpel, N. Clear underwater vision. In Proceedings of the 2004 IEEE Computer Society Conference on Computer Vision and Pattern Recognition IEEE, Washington, DC, USA, 27 June–2 July 2004; Volume 1, p. I. [Google Scholar]
- Jordt-Sedlazeck, A.; Koch, R. Refractive calibration of underwater cameras. In Proceedings of the European Conference on Computer Vision, Florence, Italy, 7–13 October 2012; pp. 846–859. [Google Scholar]
- Skinner, K.A.; Iscar, E.; Johnson-Roberson, M. Automatic color correction for 3D reconstruction of underwater scenes. In Proceedings of the 2017 IEEE International Conference on Robotics and Automation (ICRA) IEEE, Singapore, 29 June 2017; pp. 5140–5147. [Google Scholar]
- Hu, K.; Jin, J.; Zheng, F.; Weng, L.; Ding, Y. Overview of behavior recognition based on deep learning. Artif. Intell. Rev. 2022, 56, 1833–1865. [Google Scholar] [CrossRef]
- Agrafiotis, P.; Skarlatos, D.; Forbes, T.; Poullis, C.; Skamantzari, M.; Georgopoulos, A. Underwater Photogrammetry in Very Shallow Waters: Main Challenges and Caustics Effect Removal; International Society for Photogrammetry and Remote Sensing: Hannover, Germany, 2018. [Google Scholar]
- Trabes, E.; Jordan, M.A. Self-tuning of a sunlight-deflickering filter for moving scenes underwater. In Proceedings of the 2015 XVI Workshop on Information Processing and Control (RPIC) IEEE, Cordoba, Argentina, 6–9 October 2015; pp. 1–6. [Google Scholar]
- Gracias, N.; Negahdaripour, S.; Neumann, L.; Prados, R.; Garcia, R. A motion compensated filtering approach to remove sunlight flicker in shallow water images. In Proceedings of the OCEANS IEEE, Quebec City, QC, Canada, 15–18 September 2008; pp. 1–7. [Google Scholar]
- Shihavuddin, A.; Gracias, N.; Garcia, R. Online Sunflicker Removal using Dynamic Texture Prediction. In Proceedings of VISAPP, Girona, Spain, 24–26 February 2012; Science and Technology Publications: Setubal, Portugal, 2012; pp. 161–167. [Google Scholar]
- Schechner, Y.Y.; Karpel, N. Attenuating natural flicker patterns. In Proceedings of the Oceans’ 04 MTS/IEEE Techno-Ocean’04 (IEEE Cat. No. 04CH37600) IEEE, Kobe, Japan, 9–12 November 2004; Volume 3, pp. 1262–1268. [Google Scholar]
- Swirski, Y.; Schechner, Y.Y. 3Deflicker from motion. In Proceedings of the IEEE International Conference on Computational Photography (ICCP) IEEE, Cambridge, MA, USA, 19–21 April 2013; pp. 1–9. [Google Scholar]
- Forbes, T.; Goldsmith, M.; Mudur, S.; Poullis, C. DeepCaustics: Classification and removal of caustics from underwater imagery. IEEE J. Ocean. Eng. 2018, 44, 728–738. [Google Scholar] [CrossRef]
- Hu, K.; Wu, J.; Li, Y.; Lu, M.; Weng, L.; Xia, M. FedGCN: Federated Learning-Based Graph Convolutional Networks for Non-Euclidean Spatial Data. Mathematics 2022, 10, 1000. [Google Scholar] [CrossRef]
- Zhang, C.; Weng, L.; Ding, L.; Xia, M.; Lin, H. CRSNet: Cloud and Cloud Shadow Refinement Segmentation Networks for Remote Sensing Imagery. Remote Sens. 2023, 15, 1664. [Google Scholar] [CrossRef]
- Ma, Z.; Xia, M.; Lin, H.; Qian, M.; Zhang, Y. FENet: Feature enhancement network for land cover classification. Int. J. Remote Sens. 2023, 44, 1702–1725. [Google Scholar] [CrossRef]
- Hu, K.; Li, M.; Xia, M.; Lin, H. Multi-Scale Feature Aggregation Network for Water Area Segmentation. Remote Sens. 2022, 14, 206. [Google Scholar] [CrossRef]
- Hu, K.; Zhang, Y.; Weng, C.; Wang, P.; Deng, Z.; Liu, Y. An underwater image enhancement algorithm based on generative adversarial network and natural image quality evaluation index. J. Mar. Sci. Eng. 2021, 9, 691. [Google Scholar] [CrossRef]
- Li, Y.; Lin, Q.; Zhang, Z.; Zhang, L.; Chen, D.; Shuang, F. MFNet: Multi-level feature extraction and fusion network for large-scale point cloud classification. Remote Sens. 2022, 14, 5707. [Google Scholar] [CrossRef]
- Agrafiotis, P.; Drakonakis, G.I.; Georgopoulos, A.; Skarlatos, D. The Effect of Underwater Imagery Radiometry on 3D Reconstruction and Orthoimagery; International Society for Photogrammetry and Remote Sensing: Hannover, Germany, 2017. [Google Scholar]
- Jian, M.; Liu, X.; Luo, H.; Lu, X.; Yu, H.; Dong, J. Underwater image processing and analysis: A review. Signal Process. Image Commun. 2021, 91, 116088. [Google Scholar] [CrossRef]
- Ghani, A.S.A.; Isa, N.A.M. Underwater image quality enhancement through Rayleigh-stretching and averaging image planes. Int. J. Nav. Archit. Ocean. Eng. 2014, 6, 840–866. [Google Scholar] [CrossRef]
- Mangeruga, M.; Cozza, M.; Bruno, F. Evaluation of underwater image enhancement algorithms under different environmental conditions. J. Mar. Sci. Eng. 2018, 6, 10. [Google Scholar] [CrossRef]
- Mangeruga, M.; Bruno, F.; Cozza, M.; Agrafiotis, P.; Skarlatos, D. Guidelines for underwater image enhancement based on benchmarking of different methods. Remote Sens. 2018, 10, 1652. [Google Scholar] [CrossRef]
- Hu, K.; Zhang, Y.; Lu, F.; Deng, Z.; Liu, Y. An underwater image enhancement algorithm based on MSR parameter optimization. J. Mar. Sci. Eng. 2020, 8, 741. [Google Scholar] [CrossRef]
- Li, C.; Guo, C.; Ren, W.; Cong, R.; Hou, J.; Kwong, S.; Tao, D. An underwater image enhancement benchmark dataset and beyond. IEEE Trans. Image Process. 2019, 29, 4376–4389. [Google Scholar] [CrossRef]
- Gao, J.; Weng, L.; Xia, M.; Lin, H. MLNet: Multichannel feature fusion lozenge network for land segmentation. J. Appl. Remote Sens. 2022, 16, 1–19. [Google Scholar] [CrossRef]
- Miao, S.; Xia, M.; Qian, M.; Zhang, Y.; Liu, J.; Lin, H. Cloud/shadow segmentation based on multi-level feature enhanced network for remote sensing imagery. Int. J. Remote Sens. 2022, 43, 5940–5960. [Google Scholar] [CrossRef]
- Ma, Z.; Xia, M.; Weng, L.; Lin, H. Local Feature Search Network for Building and Water Segmentation of Remote Sensing Image. Sustainability 2023, 15, 3034. [Google Scholar] [CrossRef]
- Hu, K.; Zhang, E.; Xia, M.; Weng, L.; Lin, H. MCANet: A Multi-Branch Network for Cloud/Snow Segmentation in High-Resolution Remote Sensing Images. Remote Sens. 2023, 15, 1055. [Google Scholar] [CrossRef]
- Chen, J.; Xia, M.; Wang, D.; Lin, H. Double Branch Parallel Network for Segmentation of Buildings and Waters in Remote Sensing Images. Remote Sens. 2023, 15, 1536. [Google Scholar] [CrossRef]
- McCarthy, J.K.; Benjamin, J.; Winton, T.; van Duivenvoorde, W. 3D Recording and Interpretation for Maritime Archaeology. Underw. Technol. 2020, 37, 65–66. [Google Scholar] [CrossRef]
- Pedersen, M.; Hein Bengtson, S.; Gade, R.; Madsen, N.; Moeslund, T.B. Camera calibration for underwater 3D reconstruction based on ray tracing using Snell’s law. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, Salt Lake City, UT, USA, 18–22 June 2018; pp. 1410–1417. [Google Scholar]
- Kwon, Y.H. Object plane deformation due to refraction in two-dimensional underwater motion analysis. J. Appl. Biomech. 1999, 15, 396–403. [Google Scholar] [CrossRef]
- Treibitz, T.; Schechner, Y.; Kunz, C.; Singh, H. Flat refractive geometry. IEEE Trans. Pattern Anal. Mach. Intell. 2011, 34, 51–65. [Google Scholar] [CrossRef]
- Menna, F.; Nocerino, E.; Troisi, S.; Remondino, F. A photogrammetric approach to survey floating and semi-submerged objects. In Proceedings of the Videometrics, Range Imaging, and Applications XII and Automated Visual Inspection SPIE, Munich, Germany, 23 May 2013; Volume 8791, pp. 117–131. [Google Scholar]
- Gu, C.; Cong, Y.; Sun, G.; Gao, Y.; Tang, X.; Zhang, T.; Fan, B. MedUCC: Medium-Driven Underwater Camera Calibration for Refractive 3-D Reconstruction. IEEE Trans. Syst. Man Cybern. Syst. 2021, 52, 5937–5948. [Google Scholar] [CrossRef]
- Du, S.; Zhu, Y.; Wang, J.; Yu, J.; Guo, J. Underwater Camera Calibration Method Based on Improved Slime Mold Algorithm. Sustainability 2022, 14, 5752. [Google Scholar] [CrossRef]
- Shortis, M. Camera calibration techniques for accurate measurement underwater. In 3D Recording and Interpretation for Maritime Archaeology; Springer: Berlin/Heidelberg, Germany, 2019; pp. 11–27. [Google Scholar]
- Sedlazeck, A.; Koch, R. Perspective and non-perspective camera models in underwater imaging—Overview and error analysis. In Proceedings of the 15th International Conference on Theoretical Foundations of Computer Vision: Outdoor and Large-Scale Real-World Scene Analysis, Dagstuhl Castle, Germany, 26 June 2011; Volume 7474, pp. 212–242. [Google Scholar]
- Constantinou, C.C.; Loizou, S.G.; Georgiades, G.P.; Potyagaylo, S.; Skarlatos, D. Adaptive calibration of an underwater robot vision system based on hemispherical optics. In Proceedings of the 2014 IEEE/OES Autonomous Underwater Vehicles (AUV) IEEE, San Diego, CA, USA, 6–9 October 2014; pp. 1–5. [Google Scholar]
- Ma, X.; Feng, J.; Guan, H.; Liu, G. Prediction of chlorophyll content in different light areas of apple tree canopies based on the color characteristics of 3D reconstruction. Remote Sens. 2018, 10, 429. [Google Scholar] [CrossRef]
- Longuet-Higgins, H.C. A computer algorithm for reconstructing a scene from two projections. Nature 1981, 293, 133–135. [Google Scholar] [CrossRef]
- Hu, K.; Lu, F.; Lu, M.; Deng, Z.; Liu, Y. A marine object detection algorithm based on SSD and feature enhancement. Complexity 2020, 2020, 5476142. [Google Scholar] [CrossRef]
- Bay, H.; Tuytelaars, T.; Gool, L.V. Surf: Speeded up robust features. In Proceedings of the European Conference on Computer Vision, Graz, Austria, 1 January 2006; pp. 404–417. [Google Scholar]
- Ng, P.C.; Henikoff, S. SIFT: Predicting amino acid changes that affect protein function. Nucleic Acids Res. 2003, 31, 3812–3814. [Google Scholar] [CrossRef]
- Meline, A.; Triboulet, J.; Jouvencel, B. Comparative study of two 3D reconstruction methods for underwater archaeology. In Proceedings of the 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems IEEE, Vilamoura-Algarve, Portugal, 7–12 October 2012; pp. 740–745. [Google Scholar]
- Moulon, P.; Monasse, P.; Marlet, R. Global fusion of relative motions for robust, accurate and scalable structure from motion. In Proceedings of the IEEE International Conference on Computer Vision, Sydney, Australia, 1–8 December 2013; pp. 3248–3255. [Google Scholar]
- Snavely, N.; Seitz, S.M.; Szeliski, R. Photo tourism: Exploring photo collections in 3D. Acm Trans. Graph. 2006, 25, 835–846. [Google Scholar] [CrossRef]
- Gao, X.; Hu, L.; Cui, H.; Shen, S.; Hu, Z. Accurate and efficient ground-to-aerial model alignment. Pattern Recognit. 2018, 76, 288–302. [Google Scholar] [CrossRef]
- Triggs, B.; Zisserman, A.; Szeliski, R. Vision Algorithms: Theory and Practice. In Proceedings of the International Workshop on Vision Algorithms, Corfu, Greece, 21–22 September 1999; Springer: Berlin/Heidelberg, Germany, 2000. [Google Scholar]
- Wu, C. Towards linear-time incremental structure from motion. In Proceedings of the 2013 International Conference on 3D Vision-3DV 2013 IEEE, Tokyo, Japan, 29 October–1 November 2013; pp. 127–134. [Google Scholar]
- Moulon, P.; Monasse, P.; Perrot, R.; Marlet, R. Openmvg: Open multiple view geometry. In Proceedings of the International Workshop on Reproducible Research in Pattern Recognition, Cancun, Mexico, 4 December 2016; pp. 60–74. [Google Scholar]
- Hartley, R.; Trumpf, J.; Dai, Y.; Li, H. Rotation averaging. Int. J. Comput. Vis. 2013, 103, 267–305. [Google Scholar] [CrossRef]
- Wilson, K.; Snavely, N. Robust global translations with 1dsfm. In Proceedings of the European Conference on Computer Vision, Zurich, Switzerland, 6–12 September 2014; pp. 61–75. [Google Scholar]
- Liu, S.; Jiang, S.; Liu, Y.; Xue, W.; Guo, B. Efficient SfM for Large-Scale UAV Images Based on Graph-Indexed BoW and Parallel-Constructed BA Optimization. Remote Sens. 2022, 14, 5619. [Google Scholar] [CrossRef]
- Wen, Z.; Fraser, D.; Lambert, A.; Li, H. Reconstruction of underwater image by bispectrum. In Proceedings of the 2007 IEEE International Conference on Image Processing IEEE, San Antonio, TX, USA, 16–19 September 2007; Volume 3, p. 545. [Google Scholar]
- Sedlazeck, A.; Koser, K.; Koch, R. 3D reconstruction based on underwater video from rov kiel 6000 considering underwater imaging conditions. In Proceedings of the OCEANS 2009-Europe IEEE, Scotland, UK, 11–14 May 2009; pp. 1–10. [Google Scholar]
- Fischler, M.A.; Bolles, R.C. Random sample consensus: A paradigm for model fitting with applications to image analysis and automated cartography. Commun. ACM 1981, 24, 381–395. [Google Scholar] [CrossRef]
- Pizarro, O.; Eustice, R.M.; Singh, H. Large area 3-D reconstructions from underwater optical surveys. IEEE J. Ocean. Eng. 2009, 34, 150–169. [Google Scholar] [CrossRef]
- Xu, X.; Che, R.; Nian, R.; He, B.; Chen, M.; Lendasse, A. Underwater 3D object reconstruction with multiple views in video stream via structure from motion. In Proceedings of the OCEANS 2016-Shanghai IEEE, ShangHai, China, 10–13 April 2016; pp. 1–5. [Google Scholar]
- Chen, Y.; Li, Q.; Gong, S.; Liu, J.; Guan, W. UV3D: Underwater Video Stream 3D Reconstruction Based on Efficient Global SFM. Appl. Sci. 2022, 12, 5918. [Google Scholar] [CrossRef]
- Jordt-Sedlazeck, A.; Koch, R. Refractive structure-from-motion on underwater images. In Proceedings of the IEEE International Conference on Computer Vision, Sydney, Australia, 1–8 December 2013; pp. 57–64. [Google Scholar]
- Triggs, B.; McLauchlan, P.F.; Hartley, R.I.; Fitzgibbon, A.W. Bundle adjustment—A modern synthesis. In Proceedings of the International Workshop on Vision Algorithms, Corfu, Greece, 21–22 September 1999; pp. 298–372. [Google Scholar]
- Kang, L.; Wu, L.; Yang, Y.H. Two-view underwater structure and motion for cameras under flat refractive interfaces. In Proceedings of the European Conference on Computer Vision, Ferrara, Italy, 7–13 October 2012; pp. 303–316. [Google Scholar]
- Parvathi, V.; Victor, J.C. Multiview 3D reconstruction of underwater scenes acquired with a single refractive layer using structure from motion. In Proceedings of the 2018 Twenty Fourth National Conference on Communications (NCC) IEEE, Hyderabad, India, 25–28 February 2018; pp. 1–6. [Google Scholar]
- Chadebecq, F.; Vasconcelos, F.; Dwyer, G.; Lacher, R.; Ourselin, S.; Vercauteren, T.; Stoyanov, D. Refractive structure-from-motion through a flat refractive interface. In Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017; pp. 5315–5323. [Google Scholar]
- Qiao, X.; Yamashita, A.; Asama, H. 3D Reconstruction for Underwater Investigation at Fukushima Daiichi Nuclear Power Station Using Refractive Structure from Motion. In Proceedings of the International Topical Workshop on Fukushima Decommissioning Research, Fukushima, Japan, 24–26 May 2019; pp. 1–4. [Google Scholar]
- Ichimaru, K.; Taguchi, Y.; Kawasaki, H. Unified underwater structure-from-motion. In Proceedings of the 2019 International Conference on 3D Vision (3DV) IEEE, Quebec City, QC, Canada, 16–19 September 2019; pp. 524–532. [Google Scholar]
- Jeon, I.; Lee, I. 3D Reconstruction of unstable underwater environment with SFM using SLAM. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2020, 43, 1–6. [Google Scholar] [CrossRef]
- Jaffe, J.S. Underwater optical imaging: The past, the present, and the prospects. IEEE J. Ocean. Eng. 2014, 40, 683–700. [Google Scholar] [CrossRef]
- Woodham, R.J. Photometric method for determining surface orientation from multiple images. Opt. Eng. 1980, 19, 139–144. [Google Scholar] [CrossRef]
- Narasimhan, S.G.; Nayar, S.K. Structured light methods for underwater imaging: Light stripe scanning and photometric stereo. In Proceedings of the OCEANS 2005 MTS/IEEE, Washington, DC, USA, 19–22 September 2005; pp. 2610–2617. [Google Scholar]
- Wu, L.; Ganesh, A.; Shi, B.; Matsushita, Y.; Wang, Y.; Ma, Y. Robust photometric stereo via low-rank matrix completion and recovery. In Proceedings of the Asian Conference on Computer Vision, Queenstown, New Zealand, 8–12 November 2010; pp. 703–717. [Google Scholar]
- Tsiotsios, C.; Angelopoulou, M.E.; Kim, T.K.; Davison, A.J. Backscatter compensated photometric stereo with 3 sources. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Columbus, OH, USA, 23–28 June 2014; pp. 2251–2258. [Google Scholar]
- Wu, Z.; Liu, W.; Wang, J.; Wang, X. A Height Correction Algorithm Applied in Underwater Photometric Stereo Reconstruction. In Proceedings of the 2018 IEEE International Conference on Signal Processing, Communications and Computing (ICSPCC) IEEE, Hangzhou, China, 5–8 August 2018; pp. 1–6. [Google Scholar]
- Murez, Z.; Treibitz, T.; Ramamoorthi, R.; Kriegman, D. Photometric stereo in a scattering medium. In Proceedings of the IEEE International Conference on Computer Vision, Santiago, Chile, 7–13 December 2015; pp. 3415–3423. [Google Scholar]
- Jiao, H.; Luo, Y.; Wang, N.; Qi, L.; Dong, J.; Lei, H. Underwater multi-spectral photometric stereo reconstruction from a single RGBD image. In Proceedings of the 2016 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA) IEEE, Macau, China, 13–16 December 2016; pp. 1–4. [Google Scholar]
- Telem, G.; Filin, S. Photogrammetric modeling of underwater environments. ISPRS J. Photogramm. Remote Sens. 2010, 65, 433–444. [Google Scholar] [CrossRef]
- Kolagani, N.; Fox, J.S.; Blidberg, D.R. Photometric stereo using point light sources. In Proceedings of the 1992 IEEE International Conference on Robotics and Automation IEEE Computer Society, Nice, France, 12–14 May 1992; pp. 1759–1760. [Google Scholar]
- Mecca, R.; Wetzler, A.; Bruckstein, A.M.; Kimmel, R. Near field photometric stereo with point light sources. SIAM J. Imaging Sci. 2014, 7, 2732–2770. [Google Scholar] [CrossRef]
- Fan, H.; Qi, L.; Wang, N.; Dong, J.; Chen, Y.; Yu, H. Deviation correction method for close-range photometric stereo with nonuniform illumination. Opt. Eng. 2017, 56, 103102. [Google Scholar] [CrossRef]
- Angelopoulou, M.E.; Petrou, M. Evaluating the effect of diffuse light on photometric stereo reconstruction. Mach. Vis. Appl. 2014, 25, 199–210. [Google Scholar] [CrossRef]
- Fan, H.; Qi, L.; Chen, C.; Rao, Y.; Kong, L.; Dong, J.; Yu, H. Underwater optical 3-d reconstruction of photometric stereo considering light refraction and attenuation. IEEE J. Ocean. Eng. 2021, 47, 46–58. [Google Scholar] [CrossRef]
- Li, X.; Fan, H.; Qi, L.; Chen, Y.; Dong, J.; Dong, X. Combining encoded structured light and photometric stereo for underwater 3D reconstruction. In Proceedings of the 2017 IEEE SmartWorld, Ubiquitous Intelligence & Computing, Advanced & Trusted Computed, Scalable Computing & Communications, Cloud & Big Data Computing, Internet of People and Smart City Innovation (SmartWorld/SCALCOM/UIC/ATC/CBDCom/IOP/SCI) IEEE, Melbourne, Australia, 4–8 August 2017; pp. 1–6. [Google Scholar]
- Salvi, J.; Fernandez, S.; Pribanic, T.; Llado, X. A state of the art in structured light patterns for surface profilometry. Pattern Recognit. 2010, 43, 2666–2680. [Google Scholar] [CrossRef]
- Salvi, J.; Pages, J.; Batlle, J. Pattern codification strategies in structured light systems. Pattern Recognit. 2004, 37, 827–849. [Google Scholar] [CrossRef]
- Zhang, S. Recent progresses on real-time 3D shape measurement using digital fringe projection techniques. Opt. Lasers Eng. 2010, 48, 149–158. [Google Scholar] [CrossRef]
- Zhang, Q.; Wang, Q.; Hou, Z.; Liu, Y.; Su, X. Three-dimensional shape measurement for an underwater object based on two-dimensional grating pattern projection. Opt. Laser Technol. 2011, 43, 801–805. [Google Scholar] [CrossRef]
- Törnblom, N. Underwater 3D Surface Scanning Using Structured Light. 2010. Available online: http://www.diva-portal.org/smash/get/diva2:378911/FULLTEXT01.pdf (accessed on 18 September 2015).
- Massot-Campos, M.; Oliver-Codina, G.; Kemal, H.; Petillot, Y.; Bonin-Font, F. Structured light and stereo vision for underwater 3D reconstruction. In Proceedings of the OCEANS 2015-Genova IEEE, Genova, Italy, 18–21 May 2015; pp. 1–6. [Google Scholar]
- Tang, Y.; Zhang, Z.; Wang, X. Estimation of the Scale of Artificial Reef Sets on the Basis of Underwater 3D Reconstruction. J. Ocean. Univ. China 2021, 20, 1195–1206. [Google Scholar] [CrossRef]
- Sarafraz, A.; Haus, B.K. A structured light method for underwater surface reconstruction. ISPRS J. Photogramm. Remote Sens. 2016, 114, 40–52. [Google Scholar] [CrossRef]
- Fox, J.S. Structured light imaging in turbid water. In Proceedings of the Underwater Imaging SPIE, San Diego, CA, USA, 1–3 November 1988; Volume 980, pp. 66–71. [Google Scholar]
- Ouyang, B.; Dalgleish, F.; Negahdaripour, S.; Vuorenkoski, A. Experimental study of underwater stereo via pattern projection. In Proceedings of the 2012 Oceans IEEE, Hampton, VA, USA, 14–19 October 2012; pp. 1–7. [Google Scholar]
- Wang, Y.; Negahdaripour, S.; Aykin, M.D. Calibration and 3D reconstruction of underwater objects with non-single-view projection model by structured light stereo imaging. Appl. Opt. 2016, 55, 6564–6575. [Google Scholar] [CrossRef]
- Massone, Q.; Druon, S.; Triboulet, J. An original 3D reconstruction method using a conical light and a camera in underwater caves. In Proceedings of the 2021 4th International Conference on Control and Computer Vision, Guangzhou, China, 25–28 June 2021; pp. 126–134. [Google Scholar]
- Seitz, S.M.; Curless, B.; Diebel, J.; Scharstein, D.; Szeliski, R. A comparison and evaluation of multi-view stereo reconstruction algorithms. In Proceedings of the 2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR’06) IEEE, New York, NY, USA, 17–22 June 2006; Volume 1, pp. 519–528. [Google Scholar]
- Hartley, R.; Zisserman, A. Multiple View Geometry in Computer Vision; Cambridge University Press: Cambridge, UK, 2003. [Google Scholar]
- Kumar, N.S.; Kumar, R. Design & development of autonomous system to build 3D model for underwater objects using stereo vision technique. In Proceedings of the 2011 Annual IEEE India Conference IEEE, Hyderabad, India, 16–18 December 2011; pp. 1–4. [Google Scholar]
- Atallah, M.J. Faster image template matching in the sum of the absolute value of differences measure. IEEE Trans. Image Process. 2001, 10, 659–663. [Google Scholar] [CrossRef] [PubMed]
- Rahman, T.; Anderson, J.; Winger, P.; Krouglicof, N. Calibration of an underwater stereoscopic vision system. In Proceedings of the 2013 OCEANS-San Diego IEEE, San Diego, CA, USA, 23–26 September 2013; pp. 1–6. [Google Scholar]
- Rahman, T.; Krouglicof, N. An efficient camera calibration technique offering robustness and accuracy over a wide range of lens distortion. IEEE Trans. Image Process. 2011, 21, 626–637. [Google Scholar] [CrossRef] [PubMed]
- Heikkila, J. Geometric camera calibration using circular control points. IEEE Trans. Pattern Anal. Mach. Intell. 2000, 22, 1066–1077. [Google Scholar] [CrossRef]
- Oleari, F.; Kallasi, F.; Rizzini, D.L.; Aleotti, J.; Caselli, S. An underwater stereo vision system: From design to deployment and dataset acquisition. In Proceedings of the OCEANS 2015-Genova IEEE, Genova, Italy, 18–21 May 2015; pp. 1–6. [Google Scholar]
- Deng, Z.; Sun, Z. Binocular camera calibration for underwater stereo matching. Proc. J. Physics Conf. Ser. 2020, 1550, 032047. [Google Scholar] [CrossRef]
- Chen, W.; Shang, G.; Ji, A.; Zhou, C.; Wang, X.; Xu, C.; Li, Z.; Hu, K. An overview on visual slam: From tradition to semantic. Remote Sens. 2022, 14, 3010. [Google Scholar] [CrossRef]
- Bonin-Font, F.; Cosic, A.; Negre, P.L.; Solbach, M.; Oliver, G. Stereo SLAM for robust dense 3D reconstruction of underwater environments. In Proceedings of the OCEANS 2015-Genova IEEE, Genova, Italy, 18–21 May 2015; pp. 1–6. [Google Scholar]
- Zhang, H.; Lin, Y.; Teng, F.; Hong, W. A Probabilistic Approach for Stereo 3D Point Cloud Reconstruction from Airborne Single-Channel Multi-Aspect SAR Image Sequences. Remote Sens. 2022, 14, 5715. [Google Scholar] [CrossRef]
- Servos, J.; Smart, M.; Waslander, S.L. Underwater stereo SLAM with refraction correction. In Proceedings of the 2013 IEEE/RSJ International Conference on Intelligent Robots and Systems IEEE, Tokyo, Japan, 3–7 November 2013; pp. 3350–3355. [Google Scholar]
- Andono, P.N.; Yuniarno, E.M.; Hariadi, M.; Venus, V. 3D reconstruction of under water coral reef images using low cost multi-view cameras. In Proceedings of the 2012 International Conference on Multimedia Computing and Systems IEEE, Florence, Italy, 10–12 May 2012; pp. 803–808. [Google Scholar]
- Wu, Y.; Nian, R.; He, B. 3D reconstruction model of underwater environment in stereo vision system. In Proceedings of the 2013 OCEANS-San Diego IEEE, San Diego, CA, USA, 23–27 September 2013; pp. 1–4. [Google Scholar]
- Zheng, B.; Zheng, H.; Zhao, L.; Gu, Y.; Sun, L.; Sun, Y. Underwater 3D target positioning by inhomogeneous illumination based on binocular stereo vision. In Proceedings of the 2012 Oceans-Yeosu IEEE, Yeosu, Republic of Korea, 21–24 May 2012; pp. 1–4. [Google Scholar]
- Zhang, Z.; Faugeras, O. 3D Dynamic Scene Analysis: A Stereo Based Approach; Springer: Berlin/Heidelberg, Germany, 2012; Volume 27. [Google Scholar]
- Huo, G.; Wu, Z.; Li, J.; Li, S. Underwater target detection and 3D reconstruction system based on binocular vision. Sensors 2018, 18, 3570. [Google Scholar] [CrossRef]
- Wang, C.; Zhang, Q.; Lin, S.; Li, W.; Wang, X.; Bai, Y.; Tian, Q. Research and experiment of an underwater stereo vision system. In Proceedings of the OCEANS 2019-Marseille IEEE, Marseille, France, 17–20 June 2019; pp. 1–5. [Google Scholar]
- Luhmann, T.; Robson, S.; Kyle, S.; Boehm, J. Close-range photogrammetry and 3D imaging. In Close-Range Photogrammetry and 3D Imaging; De Gruyter: Berlin, Germany, 2019. [Google Scholar]
- Förstner, W. Uncertainty and projective geometry. In Handbook of Geometric Computing; Springer: Berlin/Heidelberg, Germany, 2005; pp. 493–534. [Google Scholar]
- Abdo, D.; Seager, J.; Harvey, E.; McDonald, J.; Kendrick, G.; Shortis, M. Efficiently measuring complex sessile epibenthic organisms using a novel photogrammetric technique. J. Exp. Mar. Biol. Ecol. 2006, 339, 120–133. [Google Scholar] [CrossRef]
- Menna, F.; Nocerino, E.; Remondino, F. Photogrammetric modelling of submerged structures: Influence of underwater environment and lens ports on three-dimensional (3D) measurements. In Latest Developments in Reality-Based 3D Surveying and Modelling; MDPI: Basel, Switzerland, 2018; pp. 279–303. [Google Scholar]
- Menna, F.; Nocerino, E.; Nawaf, M.M.; Seinturier, J.; Torresani, A.; Drap, P.; Remondino, F.; Chemisky, B. Towards real-time underwater photogrammetry for subsea metrology applications. In Proceedings of the OCEANS 2019-Marseille IEEE, Marseille, France, 17–20 June 2019; pp. 1–10. [Google Scholar]
- Zhukovsky, M. Photogrammetric techniques for 3-D underwater record of the antique time ship from phanagoria. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2013, 40, 717–721. [Google Scholar] [CrossRef]
- Nornes, S.M.; Ludvigsen, M.; Ødegard, Ø.; SØrensen, A.J. Underwater photogrammetric mapping of an intact standing steel wreck with ROV. IFAC-PapersOnLine 2015, 48, 206–211. [Google Scholar] [CrossRef]
- Guo, T.; Capra, A.; Troyer, M.; Grün, A.; Brooks, A.J.; Hench, J.L.; Schmitt, R.J.; Holbrook, S.J.; Dubbini, M. Accuracy assessment of underwater photogrammetric three dimensional modelling for coral reefs. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2016, 41, 821–828. [Google Scholar] [CrossRef]
- Balletti, C.; Beltrame, C.; Costa, E.; Guerra, F.; Vernier, P. 3D reconstruction of marble shipwreck cargoes based on underwater multi-image photogrammetry. Digit. Appl. Archaeol. Cult. Herit. 2016, 3, 1–8. [Google Scholar] [CrossRef]
- Mohammadloo, T.H.; Geen, M.S.; Sewada, J.; Snellen, M.G.; Simons, D. Assessing the Performance of the Phase Difference Bathymetric Sonar Depth Uncertainty Prediction Model. Remote Sens. 2022, 14, 2011. [Google Scholar] [CrossRef]
- Pathak, K.; Birk, A.; Vaskevicius, N. Plane-based registration of sonar data for underwater 3D mapping. In Proceedings of the 2010 IEEE/RSJ International Conference on Intelligent Robots and Systems IEEE, Osaka, Japan, 18–22 October 2010; pp. 4880–4885. [Google Scholar]
- Pathak, K.; Birk, A.; Vaškevičius, N.; Poppinga, J. Fast registration based on noisy planes with unknown correspondences for 3-D mapping. IEEE Trans. Robot. 2010, 26, 424–441. [Google Scholar] [CrossRef]
- Guo, Y. 3D underwater topography rebuilding based on single beam sonar. In Proceedings of the 2013 IEEE International Conference on Signal Processing, Communication and Computing (ICSPCC 2013) IEEE, Hainan, China, 5–8 August 2013; pp. 1–5. [Google Scholar]
- Langer, D.; Hebert, M. Building qualitative elevation maps from side scan sonar data for autonomous underwater navigation. In Proceedings of the IEEE International Conference on Robotics and Automation, Sacramento, CA, USA, 9–11 April 1991; Volume 3, pp. 2478–2483. [Google Scholar]
- Zerr, B.; Stage, B. Three-dimensional reconstruction of underwater objects from a sequence of sonar images. In Proceedings of the 3rd IEEE International Conference on Image Processing IEEE, Santa Ana, CA, USA, 16–19 September 1996; Volume 3, pp. 927–930. [Google Scholar]
- Bikonis, K.; Moszynski, M.; Lubniewski, Z. Application of shape from shading technique for side scan sonar images. Pol. Marit. Res. 2013, 20, 39–44. [Google Scholar] [CrossRef]
- Wang, J.; Han, J.; Du, P.; Jing, D.; Chen, J.; Qu, F. Three-dimensional reconstruction of underwater objects from side-scan sonar images. In Proceedings of the OCEANS 2017-Aberdeen IEEE, Aberdeen, Scotland, 19–22 June 2017; pp. 1–6. [Google Scholar]
- Brahim, N.; Guériot, D.; Daniel, S.; Solaiman, B. 3D reconstruction of underwater scenes using DIDSON acoustic sonar image sequences through evolutionary algorithms. In Proceedings of the OCEANS 2011 IEEE, Santander, Spain, 6–9 June 2011; pp. 1–6. [Google Scholar]
- Song, Y.E.; Choi, S.J. Underwater 3D reconstruction for underwater construction robot based on 2D multibeam imaging sonar. J. Ocean. Eng. Technol. 2016, 30, 227–233. [Google Scholar] [CrossRef]
- Song, Y.; Choi, S.; Shin, C.; Shin, Y.; Cho, K.; Jung, H. 3D reconstruction of underwater scene for marine bioprospecting using remotely operated underwater vehicle (ROV). J. Mech. Sci. Technol. 2018, 32, 5541–5550. [Google Scholar] [CrossRef]
- Kwon, S.; Park, J.; Kim, J. 3D reconstruction of underwater objects using a wide-beam imaging sonar. In Proceedings of the 2017 IEEE Underwater Technology (UT) IEEE, Busan, Republic of Korea, 21–24 February 2017; pp. 1–4. [Google Scholar]
- Justo, B.; dos Santos, M.M.; Drews, P.L.J.; Arigony, J.; Vieira, A.W. 3D surfaces reconstruction and volume changes in underwater environments using msis sonar. In Proceedings of the Latin American Robotics Symposium (LARS), Brazilian Symposium on Robotics (SBR) and Workshop on Robotics in Education (WRE) IEEE, Rio Grande, Brazil, 23–25 October 2019; pp. 115–120. [Google Scholar]
- Guerneve, T.; Subr, K.; Petillot, Y. Three-dimensional reconstruction of underwater objects using wide-aperture imaging SONAR. J. Field Robot. 2018, 35, 890–905. [Google Scholar] [CrossRef]
- McConnell, J.; Martin, J.D.; Englot, B. Fusing concurrent orthogonal wide-aperture sonar images for dense underwater 3D reconstruction. In Proceedings of the 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) IEEE, Coimbra, Portugal, 25–29 October 2020; pp. 1653–1660. [Google Scholar]
- Joe, H.; Kim, J.; Yu, S.C. 3D reconstruction using two sonar devices in a Monte-Carlo approach for AUV application. Int. J. Control. Autom. Syst. 2020, 18, 587–596. [Google Scholar] [CrossRef]
- Kim, B.; Kim, J.; Lee, M.; Sung, M.; Yu, S.C. Active planning of AUVs for 3D reconstruction of underwater object using imaging sonar. In Proceedings of the 2018 IEEE/OES Autonomous Underwater Vehicle Workshop (AUV) IEEE, Clemson, MI, USA, 6–9 November 2018; pp. 1–6. [Google Scholar]
- Li, Z.; Qi, B.; Li, C. 3D Sonar Image Reconstruction Based on Multilayered Mesh Search and Triangular Connection. In Proceedings of the 2018 10th International Conference on Intelligent Human-Machine Systems and Cybernetics (IHMSC) IEEE, Hangzhou, China, 25–26 August 2018; Volume 2, pp. 60–63. [Google Scholar]
- Mai, N.T.; Woo, H.; Ji, Y.; Tamura, Y.; Yamashita, A.; Asama, H. 3-D reconstruction of underwater object based on extended Kalman filter by using acoustic camera images. IFAC-PapersOnLine 2017, 50, 1043–1049. [Google Scholar]
- Mai, N.T.; Woo, H.; Ji, Y.; Tamura, Y.; Yamashita, A.; Asama, H. 3D reconstruction of line features using multi-view acoustic images in underwater environment. In Proceedings of the 2017 IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems (MFI) IEEE, Daegu, Republic of Korea, 16–18 November 2017; pp. 312–317. [Google Scholar]
- Kiryati, N.; Eldar, Y.; Bruckstein, A.M. A probabilistic Hough transform. Pattern Recognit. 1991, 24, 303–316. [Google Scholar] [CrossRef]
- Hurtós, N.; Cufí, X.; Salvi, J. Calibration of optical camera coupled to acoustic multibeam for underwater 3D scene reconstruction. In Proceedings of the OCEANS’10 IEEE, Sydney, Australia, 24–27 May 2010; pp. 1–7. [Google Scholar]
- Negahdaripour, S.; Sekkati, H.; Pirsiavash, H. Opti-acoustic stereo imaging, system calibration and 3-D reconstruction. In Proceedings of the 2007 IEEE Conference on Computer Vision and Pattern Recognition IEEE, Minneapolis, MN, USA, 17–22 June 2007; pp. 1–8. [Google Scholar]
- Negahdaripour, S. On 3-D reconstruction from stereo FS sonar imaging. In Proceedings of the OCEANS 2010 MTS/IEEE, Seattle, WA, USA, 20–23 September 2010; pp. 1–6. [Google Scholar]
- Babaee, M.; Negahdaripour, S. 3-D object modeling from occluding contours in opti-acoustic stereo images. In Proceedings of the 2013 OCEANS, San Diego, CA, USA, 23–27 September 2013; pp. 1–8. [Google Scholar]
- Inglis, G.; Roman, C. Sonar constrained stereo correspondence for three-dimensional seafloor reconstruction. In Proceedings of the OCEANS’10 IEEE, Sydney, Australia, 24–27 May 2010; pp. 1–10. [Google Scholar]
- Zhang, Q.; Pless, R. Extrinsic calibration of a camera and laser range finder (Improves camera calibration). In Proceedings of the 2004 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Tokyo, Japan, 28 September–2 October 2004; Volume 3, pp. 2301–2306. [Google Scholar]
- Kunz, C.; Singh, H. Map building fusing acoustic and visual information using autonomous underwater vehicles. J. Field Robot. 2013, 30, 763–783. [Google Scholar] [CrossRef]
- Teague, J.; Scott, T. Underwater photogrammetry and 3D reconstruction of submerged objects in shallow environments by ROV and underwater GPS. J. Mar. Sci. Res. Technol. 2017, 1, 5. [Google Scholar]
- Mattei, G.; Troisi, S.; Aucelli, P.P.; Pappone, G.; Peluso, F.; Stefanile, M. Multiscale reconstruction of natural and archaeological underwater landscape by optical and acoustic sensors. In Proceedings of the 2018 IEEE International Workshop on Metrology for the Sea, Learning to Measure Sea Health Parameters (MetroSea), Bari, Italy, 8–10 October 2018; pp. 46–49. [Google Scholar]
- Wei, X.; Sun, C.; Lyu, M.; Song, Q.; Li, Y. ConstDet: Control Semantics-Based Detection for GPS Spoofing Attacks on UAVs. Remote Sens. 2022, 14, 5587. [Google Scholar] [CrossRef]
- Kim, J.; Sung, M.; Yu, S.C. Development of simulator for autonomous underwater vehicles utilizing underwater acoustic and optical sensing emulators. In Proceedings of the 2018 18th International Conference on Control, Automation and Systems (ICCAS) IEEE, Bari, Italy, 8–10 October 2018; pp. 416–419. [Google Scholar]
- Aykin, M.D.; Negahdaripour, S. Forward-look 2-D sonar image formation and 3-D reconstruction. In Proceedings of the 2013 OCEANS, San Diego, CA, USA, 23–27 September 2013; pp. 1–10. [Google Scholar]
- Rahman, S.; Li, A.Q.; Rekleitis, I. Contour based reconstruction of underwater structures using sonar, visual, inertial, and depth sensor. In Proceedings of the 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) IEEE, Macau, China, 4–8 November 2019; pp. 8054–8059. [Google Scholar]
- Leutenegger, S.; Lynen, S.; Bosse, M.; Siegwart, R.; Furgale, P. Keyframe-based visual–inertial odometry using nonlinear optimization. Int. J. Robot. Res. 2015, 34, 314–334. [Google Scholar] [CrossRef]
- Mur-Artal, R.; Tardós, J.D. Visual-inertial monocular SLAM with map reuse. IEEE Robot. Autom. Lett. 2017, 2, 796–803. [Google Scholar] [CrossRef]
- Yang, X.; Jiang, G. A Practical 3D Reconstruction Method for Weak Texture Scenes. Remote Sens. 2021, 13, 3103. [Google Scholar] [CrossRef]
References | Contribution |
---|---|
Beall, C. [24] | A large-scale sparse reconstruction technique |
Bruno, F. [25] | A projection of SL patterns based on an SV system |
Bianco, G. [26] | Integrated the 3D point clouds collected by active and passive methods, making use of the advantages of each technique |
Jordt, A. [27] | Compensated for refraction through a geometric model formed from the images |
Kang, L. [28] | A simplified refractive camera model |
Chadebecq, F. [29] | A novel RSfM framework |
Song, H. [30] | A comprehensive underwater visual reconstruction paradigm (ERH) |
Su, Z. [31] | A flexible and accurate stereo-DIC (digital image correlation) reconstruction method |
References | Feature | Matching Method | Contribution |
---|---|---|---|
Sedlazeck [90] | Corner | KLT tracker | The system can adapt to the underwater photographic environment, including a specific background and floating-particle filtering, allowing a sparse set of 3D points and a reliable estimation of camera poses.
Pizarro [92] | Harris | Affine invariant region | The authors proposed a complete seabed 3D reconstruction system for processing optical images obtained from underwater vehicles. |
Xu [93] | SIFT | SIFT and RANSAC | For continuous video streams, the authors created a novel underwater 3D object reconstruction model. |
Chen [94] | Keyframes | KNN-match | The authors proposed a faster rotation-averaging method, LTS-RA method, based on the LTS and L1RA methods. |
Jordt-Sedlazeck [95] | — | KLT Tracker | The authors proposed a novel error function that can be calculated fast and even permits the analytic derivation of the error function’s required Jacobian matrices. |
Kang [28,97] | — | — | In the case of known rotation, the authors showed that optimal underwater SfM under the L∞-norm can be provably solved based on two new concepts, the EoR and the RD of a scene point.
Jordt [27] | SIFT | SIFT and RANSAC | This work was the first to propose, build and estimate a complete scalable 3D reconstruction system that can be employed with deep-sea flat-port cameras. |
Parvathi [98] | SIFT | SIFT | The authors proposed a refractive reconstruction model for underwater images taken from the water surface. The system does not require the use of professional underwater cameras. |
Chadebecq [29,99] | SIFT | SIFT | The authors formulated a new four-view constraint enforcing camera-pose consistency along a video, which leads to a novel RSfM framework.
Qiao [100] | — | — | A camera-system modelling approach based on ray tracing was proposed, together with a new camera-housing calibration based on back-projection error to achieve accurate modelling.
Ichimaru [101] | SURF | SURF | The authors provided unified reconstruction methods for several situations, including a single static camera and moving refractive interface, a single moving camera and static refractive interface, and a single moving camera and moving refractive interface. |
Jeon [102] | SIFT | SIFT | The authors compared results on two AQUALOC datasets in terms of point-cloud count, SfM processing time, number of matched images, total images and average reprojection error, and then suggested the use of visual SLAM to handle the localization of the vehicle system and the mapping of the surrounding environment.
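The pipelines summarized in this table share a common two-view front end: feature detection, ratio-test matching, robust relative-pose estimation with RANSAC and triangulation. The sketch below shows that generic front end using OpenCV (the image paths and calibration matrix K are placeholders, refraction is assumed to be already compensated, and it is not a re-implementation of any cited method):

```python
# A minimal two-view SfM front end: SIFT features, ratio-test matching,
# RANSAC essential-matrix estimation, pose recovery and triangulation.
# Image paths and intrinsics are placeholders; refraction is assumed corrected.
import cv2
import numpy as np

K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])  # assumed underwater-calibrated intrinsics

img1 = cv2.imread("frame_000.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("frame_001.png", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

# Lowe ratio test on k-nearest-neighbour matches.
matcher = cv2.BFMatcher()
matches = [m for m, n in matcher.knnMatch(des1, des2, k=2) if m.distance < 0.75 * n.distance]
pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

# Robust relative pose via the essential matrix, then linear triangulation.
E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, prob=0.999, threshold=1.0)
_, R, t, pose_mask = cv2.recoverPose(E, pts1, pts2, K, mask=mask)

P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([R, t])
pts4d = cv2.triangulatePoints(P1, P2, pts1.T, pts2.T)
points3d = (pts4d[:3] / pts4d[3]).T
print(f"{len(matches)} matches, {int((pose_mask > 0).sum())} pose inliers, {points3d.shape[0]} points")
```

A full system would chain many such pairs, merge them with rotation averaging or incremental registration, and refine everything with bundle adjustment, as the cited works do.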
References | Major Problem | Contribution |
---|---|---|
Narasimhan [105] | Scattering Effects | The physical representation of surface appearance submerged in a scattering medium was derived, and the number of light sources necessary to perform photometric stereo was also determined.
Wu L [106] | Scattering Effects | The authors presented a novel method for effectively solving photometric stereo problems. By simultaneously correcting corrupted and missing measurements, the strategy takes advantage of powerful convex optimization techniques that are guaranteed to recover the correct low-rank matrix.
Tsiotsios [107] | Backscattering Effects | By effectively compensating for the backscattering component, the authors established a linear formulation of photometric stereo that can restore an accurate normal map with only three lights.
Wu Z [108] | Gradient Error | Based on the height distribution in the surrounding area, the authors introduced a height-correction technique used in underwater photometric stereo reconstruction. The height error was fitted using a 2D quadratic function, and the error was subtracted from the rebuilt height. |
Murez [109] | Scattering Effects | The authors demonstrated through in-depth simulations that single-scattered light from a source can be modeled as a point light source with a single direction.
Jiao [110] | Backscattering Effects | A new multispectral photometric stereo method was proposed. This method used simple linear iterative clustering segmentation to solve the problem of multi-color scene reconstruction. |
Fan [114] | Nonuniform Illumination | The authors proposed a post-processing technique to fix the divergence brought on by uneven lighting. The process uses calibration data from the object or a flat plane to refine the surface contour. |
Fan [116] | Refraction Effects | The combination of underwater photometric stereo and underwater laser triangulation was proposed by the authors as a novel approach. It was used to overcome the large shape-recovery defects and enhance underwater photometric stereo performance. |
Li [117] | Lack of constraints among multiple disconnected patches | A hybrid approach was put forth to rectify photometric stereo aberrations using depth data generated by encoded structured light systems. By recovering high-frequency details while avoiding, or at least reducing, low-frequency biases, this approach maintains high-precision normal information. |
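The underwater photometric stereo methods above all extend the classical Lambertian formulation, in which per-pixel normals and albedo are recovered by least squares from images taken under known light directions. The sketch below shows only that in-air baseline on tiny synthetic placeholder data; handling of scattering, backscatter and nonuniform illumination, which is the actual contribution of the cited works, is omitted.

```python
# Minimal sketch of classical Lambertian photometric stereo: solve I = L @ G
# per pixel, where G = albedo * normal. L holds one lighting direction per
# row; I holds the stacked per-pixel intensities. Data here are synthetic.
import numpy as np

def photometric_stereo(I, L):
    """I: (num_lights, num_pixels) intensities, L: (num_lights, 3) directions.
    Returns per-pixel unit normals (num_pixels, 3) and albedo (num_pixels,)."""
    G, *_ = np.linalg.lstsq(L, I, rcond=None)       # G: (3, num_pixels)
    albedo = np.linalg.norm(G, axis=0)               # rho = |G|
    normals = G / np.maximum(albedo, 1e-8)           # n = G / rho
    return normals.T, albedo

# Tiny synthetic example: one pixel with a known normal, three lights.
L = np.array([[0.0, 0.0, 1.0],
              [0.7, 0.0, 0.7],
              [0.0, 0.7, 0.7]])
true_n = np.array([0.0, 0.0, 1.0])
I = (L @ true_n).reshape(3, 1)      # Lambertian shading with unit albedo
n, rho = photometric_stereo(I, L)
print(n, rho)                        # recovers the normal and the albedo
```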
References | Color | Pattern | Contribution |
---|---|---|---|
Zhang [121] | Grayscale | Sinusoidal Fringe | A useful technique for calculating the three-dimensional geometry of an underwater object was proposed, employing phase-tracking and ray-tracing techniques. |
Törnblom [122] | White | Binary pattern | The authors designed and built an underwater 3D scanner based on structured light and compared it with scanners based on stereo vision and line-scanning lasers. |
Massot-Campos [123] | Green | Lawn-mowing pattern | SV and SL were compared in a typical underwater setting with well-known dimensions and objects. The findings show that stereo-based reconstruction is best suited to long, high-altitude surveys, always reliant on sufficient texture and light, whereas structured-light reconstruction is better suited to short, close-distance surveys in which precise dimensions of an object or structure are required. |
Bruno [25] | White | Binary pattern | The geometric shape of the water surface and the geometric shape of items under the surface can both be estimated concurrently using a new SL approach for 3D imaging. The technique just needs one image, making it possible to use it for both static and dynamic scenarios. |
Sarafraz [125] | Red, Green, Blue | Pseudorandom pattern | A new structured-light method for 3D imaging was developed that can simultaneously estimate both the geometric shape of the water surface and the geometric shape of underwater objects. The method requires only a single image and thus can be applied to dynamic as well as static scenes. |
Fox [126] | White | Light pattern | SL using a single scanning light stripe was originally proposed to combat backscatter and enable 3D underwater object reconstruction. |
Narasimhan [105] | White | Light-plane sweep | Two representative methods, namely, the light-stripe distance-scanning method and light-scattering stereo method, were comprehensively analyzed. A physical model of the surface appearance immersed in a scattering medium was also derived. |
Wang [128] | Multiple colors | Colored dot pattern | The authors calibrated their projector–camera system based on the proposed non-SVP model to represent the projection geometry. Additionally, they provided a framework for multiresolution object reconstruction that uses projected dot patterns with various spacings to enable pattern recognition under various turbidity conditions. |
Massone [129] | — | Light pattern | The authors proposed a new structured-light method, which was based on projecting light patterns onto a scene taken by a camera. They used a simple conical submersible lamp as a light projector and created a specific calibration method to estimate the cone geometry relative to the camera. |
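At the core of most single-stripe or fringe-projection structured-light systems is the intersection of a camera ray with a calibrated light plane; the refractive variants in the table additionally trace the ray through the housing interface. The following is a minimal in-air sketch of that ray-plane triangulation, with an assumed intrinsic matrix and plane parameters standing in for a real calibration.

```python
# Minimal sketch of ray-plane triangulation for single-stripe structured
# light: a pixel on the detected stripe defines a camera ray, which is
# intersected with the calibrated light plane. Intrinsics and plane
# parameters below are placeholders, not calibrated values.
import numpy as np

K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])            # assumed camera intrinsics

# Light plane in camera coordinates: n . X + d = 0 (from calibration).
plane_n = np.array([0.0, -0.5, 1.0])
plane_n = plane_n / np.linalg.norm(plane_n)
plane_d = -0.8

def triangulate_stripe_pixel(u, v):
    """Intersect the back-projected ray of pixel (u, v) with the light plane."""
    ray = np.linalg.inv(K) @ np.array([u, v, 1.0])  # ray direction, camera at origin
    t = -plane_d / (plane_n @ ray)                  # solve n.(t*ray) + d = 0 for t
    return t * ray                                  # 3D point in the camera frame

point = triangulate_stripe_pixel(350.0, 260.0)
print("Reconstructed point (m):", point)
```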
References | Feature | Matching Method | Contribution |
---|---|---|---|
Rahman [134] | — | — | The authors studied the difference between terrestrial and underwater camera calibration and proposed a calibration method for underwater stereo vision systems. |
Oleari [137] | — | SAD | This paper outlined the hardware configuration of an underwater SV system for the detection and localization of objects lying on the seafloor, supporting cooperative object-transportation tasks. |
Bonin-Font [140] | — | SLAM | The authors compared the performance of two classical visual SLAM technologies employed in mobile robots: one based on EKF and the other on graph optimization using bundle adjustment. |
Servos [142] | — | ICP | This paper presented a method for underwater stereo positioning and mapping. The method produces precise reconstructions of underwater environments by correcting the refraction-related visual distortion. |
Beall [24] | SURF | SURF and SAM | A method was put forth for the large-scale sparse reconstruction of underwater structures. The new method uses stereo image pairs to detect salient features, compute 3D points and estimate the camera pose trajectory. |
Nurtantio [143] | SIFT | SIFT | A low-cost multi-view camera system with a stereo camera was proposed in this paper. A pair of stereo images was obtained from the stereo camera. |
Wu [144] | — | — | The authors developed the underwater 3D reconstruction model and enhanced the quality of the environment understanding in the SV system. |
Zheng [145] | Edge and corners | SIFT | The authors proposed a method for positioning underwater 3D targets under inhomogeneous illumination based on binocular SV. Backscattering from the inhomogeneous light field can be effectively reduced, and the system can measure both the target distance and its width precisely. |
Huo [147] | — | SGM | An underwater object-identification and 3D reconstruction system based on binocular vision was proposed, using two optical sensors as the system's visual input. |
Wang [148] | Corners | SLAM | The primary contribution of this paper is the creation of a new underwater stereo-vision system for AUV SLAM, manipulation, surveying and other ocean applications. |
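Several of the binocular systems above compute a dense disparity map with block matching or semi-global matching (SGM) on rectified image pairs and then back-project it to 3D. The snippet below is a minimal OpenCV sketch of that step; the image paths, matcher settings and reprojection matrix Q are assumed placeholders, and a real underwater system would first apply refraction-aware calibration and image enhancement.

```python
# Minimal sketch of dense binocular reconstruction with semi-global matching:
# compute a disparity map on a rectified stereo pair and back-project it to 3D
# with the rectification matrix Q. All parameters are illustrative.
import cv2
import numpy as np

left = cv2.imread("left_rectified.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right_rectified.png", cv2.IMREAD_GRAYSCALE)

sgbm = cv2.StereoSGBM_create(minDisparity=0,
                             numDisparities=128,   # must be divisible by 16
                             blockSize=5,
                             P1=8 * 5 * 5,
                             P2=32 * 5 * 5,
                             uniquenessRatio=10)
disparity = sgbm.compute(left, right).astype(np.float32) / 16.0  # fixed point -> pixels

# Q normally comes from cv2.stereoRectify; assumed values here (0.12 m baseline).
Q = np.array([[1.0, 0.0, 0.0, -320.0],
              [0.0, 1.0, 0.0, -240.0],
              [0.0, 0.0, 0.0, 800.0],
              [0.0, 0.0, 1.0 / 0.12, 0.0]])
points_3d = cv2.reprojectImageTo3D(disparity, Q)
valid = disparity > 0
print("Reconstructed", int(valid.sum()), "points")
```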
References | Sonar Type | Contribution |
---|---|---|
Pathak [159] | MBS | A surface-patch-based 3D mapping approach for real underwater scenes was proposed, based on the 6DOF registration of sonar data. |
Guo [161] | SBS | The authors used SBS to reconstruct the 3D underwater topography of an experimental pool. Based on the processed 3D point cloud, a covering approach was devised to construct an underwater model; the technique rests on the fact that a plastic tablecloth takes the shape of the table it covers. |
Wang [165] | SSS | The authors proposed an approach to reconstructing 3D features of underwater objects from SSS images. The sonar images were divided into three regions: echo, shadow and background. The 2D intensity map was estimated according to the echo, and the depth map was calculated according to the shadow information. Using the transformation model, the two maps were combined to obtain 3D point cloud images of underwater objects. |
Brahim [166] | IS | This paper proposed a technique for reconstructing the underwater environment from two acoustic camera images of the same scene taken from different viewpoints. |
Song [167,168] | IS | An approach for the 3D reconstruction of underwater structures using 2D multibeam IS was proposed. The physical relationship between the sonar image and the scene terrain was employed to recover elevation information, addressing its absence in sonar images. |
Kwon [169] | IS | A 3D reconstruction scheme using wide-beam IS was proposed. An octree-based occupancy grid map was used, and a sensor model accounting for the sensing characteristics of IS was built for reconstruction. |
Justo [170] | MSIS | A system was presented in which the spatial variation of underwater surfaces is estimated through 3D reconstruction using MSIS. |
Guerneve [171] | IS | To achieve 3D reconstruction from IS of any aperture, two reconstruction techniques were presented. The first offers an elegant linear solution to the issue using blind deconvolution and spatially variable kernels. The second method uses nonlinear formulas and a straightforward algorithm to approximate reconstruction. |
McConnell [172] | IS | This paper presented a new method to resolve the height ambiguity associated with forward-looking multibeam IS observations and the difficulties it creates for 3D reconstruction. |
Joe [173] | FLMS | A sequential approach was proposed to extract 3D data for mapping via sensor fusion of two sonar devices. The approach exploits geometric constraints and complementary characteristics between the two devices, such as different sound-beam angles and data-acquisition modes. |
Kim [174] | IS | The authors proposed a multi-view scanning method that can select the unit vector of the next path by maximizing the reflected area of the beam and orthogonality with the previous path, so as to perform multiple scanning efficiently and save time. |
Li [175] | IS | A new sonar image-reconstruction technique was proposed. In order to effectively rebuild the surface of sonar objects, the method first employs an adaptive threshold to perform a 2 × 2 grid block search for non-empty sonar data points, and then searches for a 3 × 3 grid block centered on the empty point to reduce acoustic noise. |
Mai [176,177] | IS | A novel technique was proposed to retrieve 3D information on submerged objects. In the proposed approach, lines on underwater objects are extracted and tracked using acoustic cameras, a new generation of sonar sensors, and serve as visual features for the image-processing algorithms. |
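A recurring difficulty in the sonar-based methods above is that a forward-looking imaging sonar measures range and bearing but not elevation, so a single return is consistent with an arc of 3D points spanning the vertical beam aperture. The short sketch below only illustrates this ambiguity with assumed aperture and measurement values; it does not implement any of the resolution strategies cited in the table.

```python
# Illustration of the elevation ambiguity in forward-looking imaging sonar:
# a return gives range r and bearing theta, but the elevation angle phi is
# only known to lie within the vertical beam aperture, so one measurement
# maps to an arc of candidate 3D points. All numbers are illustrative.
import numpy as np

def candidate_points(r, theta, aperture_deg=14.0, samples=7):
    """Return candidate 3D points (sonar frame) for one (range, bearing) return."""
    phis = np.radians(np.linspace(-aperture_deg / 2, aperture_deg / 2, samples))
    x = r * np.cos(phis) * np.cos(theta)   # forward
    y = r * np.cos(phis) * np.sin(theta)   # starboard
    z = r * np.sin(phis)                   # the unknown elevation component
    return np.stack([x, y, z], axis=1)

arc = candidate_points(r=5.0, theta=np.radians(10.0))
print(arc)   # one sonar return is consistent with this whole arc of points
```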
References | Sonar Type | Contribution |
---|---|---|
Negahdaripour [180,181] | IS | The authors investigated how to determine 3D point locations from two images taken from two arbitrary camera positions. Numerous linear closed-form solutions were put forth, investigated and compared in terms of accuracy and degeneracy. |
Babaee [182] | IS | A multimodal stereo imaging approach was proposed, using coincident optical and sonar cameras. Furthermore, the difficulty of establishing intricate opti-acoustic correspondences was avoided by employing the 2D occluding contours of 3D objects in the edge images as structural features. |
Inglis [183] | MBS | A technique was created to constrain the frequently error-prone stereo-correspondence problem to a small part of the image, corresponding to the estimated range along the epipolar line computed from the co-registered MBS microtopography. The method can be applied to stereo-correspondence techniques based on both sparse features and dense regions. |
Hurtos [179] | MBS | An efficient method for solving the calibration problem between MBS and camera systems was proposed. |
Kunz [185] | MBS | In this paper, an abstract pose graph was used to address the difficulties of positioning and sensor calibration. The pose graph captured the relationship between the estimated trajectory of the robot moving through the water and the measurements made by the navigation and mapping sensors in a flexible sparse framework, enabling rapid optimization of both the trajectory and the map. |
Teague [186] | Acoustic transponders | A reconstruction approach employing an existing low-cost ROV as the platform was discussed. Used as the basis for underwater photogrammetry, such platforms offer speed and stability compared with conventional diver-based surveys. |
Mattei [187] | SSS | Geophysical and photogrammetric sensors were integrated into the USV to enable precision mapping of seafloor morphology and a 3D reconstruction of archaeological remains, allowing for the reconstruction of underwater landscapes of high cultural value. |
Kim [189] | DIDSON | A dynamic model and sensor model for a virtual underwater simulator were proposed. The proposed simulator was created using an ROS interface so that it may be quickly linked with both current and future ROS plug-ins. |
Rahman [191] | Acoustic sensor | The proposed method utilized the well-defined edges between well-lit areas and darkness to provide additional features, resulting in a denser 3D point cloud than the usual point clouds from a visual odometry system. |
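A step common to the opti-acoustic systems above is registering sonar-derived 3D points with the optical camera through an extrinsic calibration and a pinhole projection, so that acoustic range information can constrain or validate the optical reconstruction. The sketch below assumes illustrative values for the rotation, translation and intrinsics; it is not the calibration procedure of any cited work.

```python
# Minimal sketch of opti-acoustic registration: sonar-derived 3D points are
# transformed into the optical camera frame with an extrinsic calibration
# (R, t) and projected with the pinhole model. R, t, K and the points are
# placeholders, not values from any cited system.
import numpy as np

K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])             # assumed camera intrinsics
R = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, -1.0],
              [1.0, 0.0, 0.0]])             # assumed rotation: sonar x-forward/y-starboard/z-up
                                            # into camera x-right/y-down/z-forward
t = np.array([0.0, 0.20, 0.05])             # assumed sonar-to-camera offset (m)

sonar_points = np.array([[3.0, 0.5, -0.2],
                         [4.2, -0.3, 0.1]])  # example points in the sonar frame

cam_points = sonar_points @ R.T + t          # transform into the camera frame
pixels_h = cam_points @ K.T                  # project with the pinhole model
pixels = pixels_h[:, :2] / pixels_h[:, 2:3]  # normalize homogeneous coordinates
print(pixels)                                # where the acoustic points land in the image
```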