An Overview on Image-Based and Scanner-Based 3D Modeling Technologies
Abstract
1. Introduction
2. Image-Based 3D Modeling
- Depth map(s). A depth map is a 2D representation of an image that stores, for each pixel, its depth, i.e., its distance from the point of capture (projection center). It is visualized as an image by converting the depth values into intensity values.
- Dense point cloud. It is a set of points in space with known 3D coordinates in a defined reference system.
- 3D model. The most common representation of 3D geometry is a polygonal model (polygon mesh), consisting of a set of vertices, edges, and polygons (usually triangles, but possibly quadrilaterals or, rarely, polygons with more than four vertices) that describe the 3D surface of the scene. Texture mapping to the model is also common. While a polygon mesh represents the surface of an object, a polyhedral mesh represents (in addition to the surface) the volume occupied by an object, e.g., a tetrahedral model or a parallelepiped model. Voxels are the structural elements of a parallelepiped model; a voxel is the smallest cube-shaped distinct part of a volume, constituting a cell of a 3D grid.
2.1. Multi-View Stereo
- Data collection stage. It includes capturing of overlapping images (terrestrial images and/or aerial images from a manned or unmanned aircraft) and topographical measurements, if required.
- Image orientation stage. It concerns the calculation of the exterior orientation and (optionally) interior orientation of the images and the generation of a sparse point cloud, which consists of the 3D coordinates of the tie points, i.e., homologous feature points. It is usually performed by applying methods of (a) detection of overlapping images, (b) image matching and feature tracking, and (c) structure from motion (SfM). SfM methods fall into three main categories: incremental, global, and hierarchical [4,5,6]. Incremental SfM methods are the most commonly used ones. They introduce images incrementally into the orientation and sparse reconstruction process, orienting one image at each iteration (e.g., [7,8,9,10,11,12]). Global SfM methods simultaneously calculate the 3D coordinates of a sparse point cloud and the exterior orientation of the images in a single bundle adjustment solution through a factorization method or a motion averaging method (e.g., [13,14,15,16,17,18]). Hierarchical SfM methods divide the problem of orientation and sparse reconstruction into smaller subproblems, which are combined in a hierarchical manner (e.g., [19,20,21,22]). Whether the sparse point cloud is used in the subsequent steps of the 3D reconstruction depends on the applied method; however, all 3D reconstruction algorithms rely on the combination of image matching and SfM.
- Depth map generation stage. It is performed via dense image matching for a subset of the overlapping image pairs of known interior and exterior orientation (or for all overlapping image pairs) and produces a set of depth maps for the reference images. Dense image matching methods fall into two main categories: local methods and global methods [23,24,25]. In local methods, the calculation of the disparity of each pixel of the reference image depends solely on the intensity values within a specified window. They follow the simplest way of producing the disparity map, as, for each pixel, they select the disparity corresponding to the largest or smallest (depending on the selected similarity measure) aggregated matching cost. In global methods, the problem of computing the disparity map is formulated as the minimization of a global energy function, usually defined for all pixels of the reference image, which additionally introduces a disparity smoothness constraint for the entire image. Moreover, combinations of the above categories have been introduced, such as the semi-global matching method [26] and its variants, which aim to reduce the computational complexity of global methods. Dense image matching is not applied by all MVS algorithms.
- Dense point cloud generation stage. It is usually performed by applying some method of merging the depth maps created in stage 3 (e.g., [27,28]), or by applying some method of densification of the sparse point cloud created in stage 2 (e.g., [29]). This step is not followed by all 3D reconstruction algorithms.
- 3D surface generation stage. It concerns the production of a polygonal (usually triangular) mesh model. Several methods have been developed for producing a 3D surface from a point cloud (derived in stage 4) [30]. Some indicative 3D meshing methods are the following: (a) methods based on Delaunay triangulation, that is, on the construction of a graph that connects points to each other, forming triangles (in 2D space) or tetrahedra (in 3D space) whose circumcircles (or circumspheres) do not contain any points in their interior; (b) methods based on the Voronoi diagram, the dual graph of the Delaunay triangulation, which creates for each point a region consisting of all locations that are closer to that point than to any other point; (c) methods based on the convex hull, i.e., the smallest convex polygon (in 2D space) or polyhedron (in 3D space) that includes all points of the cloud, having some of them as vertices; and (d) methods based on alpha shapes (α-shapes), i.e., a family of piecewise-linear shapes associated with a point set, which generalizes the convex hull. The above methods produce a triangular model using all or most of the points, which additionally have to be accompanied by normal vectors. Another commonly used method for reconstructing the 3D scene geometry from a point cloud is the Poisson reconstruction method [31]. If the point cloud is not accompanied by normal vectors, these can be calculated, e.g., by fitting a local plane at each point. In addition, in the case of noisy point clouds, the resulting surface often needs further processing. However, 3D surface generation is not solely based on a 3D dense point cloud. It can also be performed using the depth maps (stage 3), without converting them into a 3D point cloud. The process of creating a 3D surface based on depth maps allows the use of all the information contained in the original images and does not rely on the, often filtered and merged, point cloud. It is also faster, as it skips the step of merging the depth maps into a single point cloud.
- Texture mapping stage. This stage, which is usually the last one of a multi-image 3D reconstruction process, generates texture and maps it to the 3D model using the images of known interior and exterior orientation. The usual procedure for generating a texture map involves projecting each polygon of the mesh to one or more images, in which it is visible, and finding the optimal image(s) for rendering texture to each polygon [32]. An indicative process for creating a texture map, making the assumptions of triangular mesh and selection of texture for each triangle from a single image, is outlined in the following: (a) projection of each triangle in the images in which it should be visible (regardless of whether it is occluded) and (b) selection of the optimal image for texture mapping to each triangle, based on various criteria, e.g., occlusions, resolution of the part of the image in which each triangle is projected, viewing angle, and relevance to neighboring pixels. The simplest method of creating the texture map is to incrementally store (“copy”) connected parts of the same image that are used to texture the triangles of the mesh. Finally, each vertex of each triangle is assigned texture coordinates from the texture map, corresponding to row and column numbers of the 2D texture map.
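The local matching strategy described in the depth map generation stage can be sketched in a few lines: a winner-takes-all disparity search that, for each pixel of the reference image, selects the disparity with the smallest aggregated SSD cost over a fixed window, with no smoothness constraint. This is a didactic sketch for rectified image pairs, not a production matcher; the window size, cost measure, and toy images are illustrative choices.

```python
import numpy as np

def ssd_disparity(left, right, max_disp, half_win=1):
    """Local dense matching: winner-takes-all disparity via SSD cost.

    For each pixel of the (rectified) reference image `left`, the disparity
    minimizing the sum of squared differences over a square window is
    chosen -- the defining trait of local methods.
    """
    h, w = left.shape
    pad = half_win
    L = np.pad(left.astype(float), pad, mode="edge")
    R = np.pad(right.astype(float), pad, mode="edge")
    disp = np.zeros((h, w), dtype=int)
    for y in range(h):
        for x in range(w):
            win_l = L[y:y + 2 * pad + 1, x:x + 2 * pad + 1]
            costs = []
            for d in range(min(max_disp, x) + 1):
                win_r = R[y:y + 2 * pad + 1, x - d:x - d + 2 * pad + 1]
                costs.append(np.sum((win_l - win_r) ** 2))
            disp[y, x] = int(np.argmin(costs))
    return disp

# Synthetic rectified pair: the right image is the left shifted by 2 pixels,
# so the true disparity of interior pixels is 2.
left = (np.arange(48).reshape(6, 8) * 7) % 11
right = np.empty_like(left)
right[:, :6] = left[:, 2:]
right[:, 6:] = left[:, 7:8]  # repeat the edge column
disp = ssd_disparity(left, right, max_disp=4)
```

Global methods would instead minimize an energy over the whole disparity image; semi-global matching approximates that by aggregating costs along 1D paths.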
2.2. Two-Image Reconstruction
- Feature extraction and image matching.
- Calculation of the relative orientation parameters of the stereo-pair (using at least five correspondences).
- Absolute orientation of the stereo-pair (using at least three ground control points, if available and if the final 3D model is intended to be georeferenced).
- Generation of depth map.
- Generation of dense point cloud.
- Generation of 3D mesh.
- Texture mapping on the 3D mesh.
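The steps above culminate in reconstructing 3D points from an oriented stereo-pair. A standard building block is linear (DLT) triangulation of a pair of homologous points from two images of known orientation; the sketch below uses synthetic projection matrices and an illustrative 3D point, not data from the text.

```python
import numpy as np

def triangulate_dlt(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one point from a stereo-pair.

    P1, P2: 3x4 projection matrices of the two oriented images.
    x1, x2: homologous image points (u, v) in the two images.
    Solves A X = 0 in the least-squares sense via SVD.
    """
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]

# Synthetic stereo-pair (illustrative numbers): an identity camera and a
# camera translated by 1 along X, both with unit focal length.
P1 = np.hstack((np.eye(3), np.zeros((3, 1))))
P2 = np.hstack((np.eye(3), np.array([[-1.0], [0.0], [0.0]])))
X_true = np.array([0.5, 0.2, 4.0])
x1 = P1 @ np.append(X_true, 1.0)
x1 = x1[:2] / x1[2]
x2 = P2 @ np.append(X_true, 1.0)
x2 = x2[:2] / x2[2]
X_est = triangulate_dlt(P1, P2, x1, x2)
```

With noise-free correspondences the SVD recovers the point exactly; with real matches, the least-squares solution absorbs measurement noise.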
2.3. Conventional Photogrammetric Procedure
2.4. Shading-Based Shape Recovery
2.5. Usage of a Stereo-Camera
2.6. Usage of Satellite Imagery
2.7. Discussion
3. Scanner-Based 3D Modeling
3.1. A Taxonomy of Scanners
- Non-contact scanners. They scan the object without touching it. They may be further distinguished into two main subcategories: scanners based on the reflection of waves from the object being scanned (reflection-based scanners) and scanners based on the transmission of rays through the material being scanned (transmission-based scanners).
- Reflection-based scanners. They produce a 3D representation of the external surface of the object being scanned. Optical scanners and non-optical scanners belong to this category. Optical scanners rely on the reflection of optical radiation. These are, basically, laser scanners, and their basic principles are mentioned in Section 3.1.1. Non-optical scanners include sonar and radar systems, which are presented in Section 3.1.2.
- Transmission-based scanners. Scanners of this type produce a 3D representation of the internal structure of the target being scanned. Computed tomography scanners and magnetic resonance imaging scanners, which are mainly used for medical purposes, belong to this category. Computed tomography scanners emit high-energy X-rays and measure the amount of radiation that passes through the object/patient being scanned. The basic principles of operation of this type of scanner are summarized in Section 3.1.3. Magnetic resonance imaging scanners use a strong magnetic field and radio-frequency waves to create a 3D representation of the target. Their basic principles are presented in Section 3.1.4.
- Contact scanners. They touch the surface of the object in order to scan it, producing 3D models of targets through physical contact with them. They may be further distinguished into two main categories, depending on whether or not they cause any damage/alteration/destruction of the object being scanned, as described below [50].
- Non-destructive scanners. They do not cause any damage/alteration/destruction to the object being scanned. This category includes 3D ultrasound scanners, which touch the target (patient body in medical ultrasound scanners or other material in industrial ultrasound scanners) for the 3D representation of its internal parts/material (Section 3.1.5) and the coordinate measuring machines (CMMs), which can be either fixed or portable (Section 3.1.6).
- Destructive scanners. Scanners of this type produce volumetric data by successively removing thin layers of material from the object of study. Examples of scanners of this type are given in Section 3.1.7.
3.1.1. Laser Scanners
- Triangulation scanners. They send two laser beams, which intersect at the object of interest. The rays can come either from different sources or from the same source, through splitting the original ray. Triangulation scanners using a single-camera setup include a mechanical base, to the ends of which the following are attached, with known geometry: (a) a transmitter, which sends a laser beam at a defined angle to the object of interest and (b) a camera, which locates the point of intersection of the beam with the object of interest (spot of the laser beam). The transmitter, the camera, and the spot of the laser beam form a triangle, from which the 3D information of the position of the reflection point is derived. The emission angle changes with a predetermined angular step. Typically, the object is scanned via a single scan line to speed up the process. There is also a dual-camera setup with slight variations.
- Time-of-flight scanners. They measure the time (t) required for the laser pulse to travel from the emitter to the target and back, and calculate the emitter-target distance (d) from this time, given the speed (u) of electromagnetic radiation (d = u·t/2). Thus, errors in distance calculation depend on the accuracy of time measurement. They are relatively slow scanners with a range of hundreds of meters or a few kilometers.
- Phase-shift scanners. They use a continuous laser beam instead of discrete pulses. The emitted laser beam hits the target, and a part of it is reflected along the emission path back to the receiver. They measure the phase shift between the sent and the received waveforms. Phase-shift scanners are fast but have a limited range. Due to the limited possibilities of emitting strong continuous laser radiation, they are used almost exclusively in terrestrial applications (distances up to 100 m).
- Structured light scanners. They use a technology similar to the triangulation method. They project a pattern onto an object with the help of laser beams and study the deformations caused by the object shape, using a camera (or cameras). An important advantage of these scanners is their speed and the consequent ability to calculate the 3D position in space of many points at a time, rather than just one point.
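The ranging principles above reduce to simple geometry. The sketch below implements two-way time-of-flight ranging (d = u·t/2) and single-camera triangulation via the law of sines; the baseline length and angles are illustrative inputs, not values from the text.

```python
import math

C = 299_792_458.0  # speed of electromagnetic radiation in vacuum (m/s)

def tof_distance(t):
    """Time-of-flight ranging: the pulse travels to the target and back,
    so the emitter-target distance is d = c * t / 2."""
    return C * t / 2.0

def triangulate_spot(baseline, alpha, beta):
    """Laser triangulation with a known baseline between transmitter and
    camera: alpha and beta are the interior angles (radians) at the two
    ends of the baseline towards the laser spot. Returns the (x, y)
    position of the spot, with the transmitter at the origin and the
    baseline along the x-axis."""
    gamma = math.pi - alpha - beta          # angle at the laser spot
    r = baseline * math.sin(beta) / math.sin(gamma)  # transmitter-to-spot
    return (r * math.cos(alpha), r * math.sin(alpha))

# Example: a pulse returning after 1 microsecond corresponds to ~150 m.
d = tof_distance(1e-6)
```

The triangle geometry also makes the error behaviour visible: time-of-flight accuracy hinges on timing precision, while triangulation accuracy hinges on the baseline and the angle measurements.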
3.1.2. Non-Optical Scanners
- Real aperture radar systems, in which the aperture of the antenna is its actual physical size, and
- Synthetic aperture radar (SAR) systems, in which the motion of the flying platform (airborne/satellite) on which the SAR system is installed is exploited to synthetically increase the effective length of the antenna, and multiple pulse returns are obtained for the same targets, producing images of higher resolution. A frequent use of SAR systems is the production of a digital terrain model through the technique of interferometry, which requires at least two SAR images depicting the same scene, taken either at different times or from different positions [58].
3.1.3. Computed Tomography Scanners
3.1.4. Magnetic Resonance Imaging Scanners
3.1.5. Ultrasound Scanners
- Scanners with a 2D array of transducers. Ultrasound scanners of this type produce an acoustic beam in two dimensions to obtain volumetric data through scanning. The elements of the 2D array of transducers produce a divergent beam in a pyramidal shape, and the received echo is processed to produce a 3D representation.
- Ultrasound scanners with mechanical 3D probes. They have a linear array of transducers within a hand-held instrument. The linear array of transducers can be rotated, tilted, or translated within the probe in a motorized way, under computer control. Thus, the motion mechanisms of the transducer array can be divided into three categories: linear motion, tilt motion, and rotation. In the linear motion of the transducer array, parallel 2D images are acquired for 3D reconstruction. In tilt motion, the transducer takes different tilts to capture the images, with a tilt axis on the surface of the transducer array. In rotational motion, a mechanism rotates the transducer around the central axis of the probe.
- Ultrasound scanners with mechanical localizers. As in ultrasound scanners with mechanical 3D probes, the localizers are motorized. However, while, in scanners with mechanical 3D probes, the scanning mechanism is built into a handheld instrument along with a dedicated 1D linear transducer, a mechanical localizer consists of an external component that holds a conventional 1D transducer to capture a series of consecutive 2D images. The scan path is predetermined so that the relative positions and orientations of the 2D ultrasound images are accurately recorded by the computer system, allowing real-time 3D reconstruction. Motion mechanisms can be separated, as in the case of ultrasound scanners with mechanical 3D probes, into three categories: linear motion, tilt motion, and rotational motion.
- Freehand ultrasound scanners. Ultrasound scanners of this type allow the area of interest to be scanned in different directions and positions, allowing the operator to select the optimal positions and orientations for obtaining the ultrasound images by manually tilting and moving the transducer. The orientation of the transducer is recorded for each tomographic image.
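Common to the mechanically tracked and freehand variants is that each 2D image carries a recorded position and orientation, from which its pixels can be placed in a common 3D frame for volume reconstruction. A minimal sketch, assuming the recorded pose is given as a 4x4 rigid transform and the pixel spacing is known (both hypothetical inputs):

```python
import numpy as np

def slice_pixels_to_3d(pose, pixel_spacing, rows, cols):
    """Map the pixels of one tracked 2D ultrasound image into the common
    3D frame. `pose` is the recorded 4x4 rigid transform of the image
    plane; pixels lie in that plane at z = 0, scaled by `pixel_spacing`
    (e.g., mm per pixel). Returns an array of shape (rows, cols, 3)."""
    v, u = np.meshgrid(np.arange(rows), np.arange(cols), indexing="ij")
    pts = np.stack([u * pixel_spacing, v * pixel_spacing,
                    np.zeros_like(u, dtype=float),
                    np.ones_like(u, dtype=float)], axis=-1)
    return (pts @ pose.T)[..., :3]

# Illustrative pose: the image origin translated to (1, 2, 3) mm.
pose = np.eye(4)
pose[:3, 3] = [1.0, 2.0, 3.0]
pts = slice_pixels_to_3d(pose, pixel_spacing=0.5, rows=2, cols=2)
```

Stacking the transformed pixels of all recorded slices and resampling them onto a regular voxel grid yields the 3D representation.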
3.1.6. Coordinate Measuring Machines
- Bridge-type CMMs. This is the most common type of CMM. A bridge, on which the Z-axis lies, moves on the base of the machine. The measuring head is located on the Z-axis and can be moved along this axis (up and down), along the X-axis (i.e., the axis along the bridge), and along the Y-axis (perpendicular to the axis of the bridge) by moving the entire bridge over the CMM base.
- Cantilever-type CMMs. In this kind of CMM, the measuring head is attached to one side of a rigid base. They are used for measuring smaller objects (e.g., parts of objects) than those measured by bridge-type CMMs. They provide a high level of accuracy.
- Horizontal arm CMMs. They provide lower accuracy of measurements than bridge-type CMMs and cantilever-type CMMs. They are particularly useful for measurements of larger objects and objects that involve measurements in hard-to-reach places (e.g., for automotive use, to scan cars and their internal parts).
- Gantry CMMs. The structure of this kind of CMM is similar to the structure of bridge-type CMMs, but they are much larger than the latter. The bridge is raised on pillars. They provide a high level of accuracy and are used for measuring large volume objects (e.g., for use in the aeronautical industry).
3.1.7. Destructive Scanners
- Serial block-face scanning electron microscopy (SBEM) scanners. Instruments of this type use a microtome (i.e., a special cutting device that produces extremely thin sections) inside a scanning electron microscope, i.e., a microscope used to examine the microstructure of objects that uses a high-energy electron beam to create an image of the study object on a computer screen. The microtome cuts the object of interest, and, through the microscope, the sections are visible. The process is repeated until the entire object is digitized, and thus completely destroyed. Scanners of this type provide precision of the order of a few nanometers [78].
- Knife-edge scanning microscopy (KESM) scanners. Instruments of this type combine the sectioning of the study object and the visualization of the section in one step. They use an arm with a diamond knife to cut the object [79].
- Micro-optical serial tomography (MOST) scanners. Instruments of this type consist of a microtome, a light microscope, and an image recorder, and perform imaging and sectioning simultaneously [80].
- Focused-ion-beam scanning electron microscopy (FIBSEM) scanners. In instruments of this type, a scanning electron microscope equipped with a focused beam of gallium ions is used. The gallium ions gradually impinge on the object of interest, causing the surface atoms of the object to be ejected and its surface to become amorphous. The instrument's backscattered-electron detector is used to image the successively exposed surfaces, creating a large series of images that can be combined for 3D representation of the object of interest [81].
3.2. From Scans to 3D Models
3.2.1. Point Clouds to 3D Models
3.2.2. Tomographic Images to 3D Models
- Intensity-based methods, such as thresholding, edge detection, and active contours.
- Geometry-based methods, such as region growing and clustering.
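Both families can be illustrated on a synthetic tomographic slice: thresholding keeps pixels whose intensity lies inside a band, while region growing expands a region from a seed pixel under an intensity tolerance. The toy image, intensity band, and tolerance below are illustrative assumptions.

```python
import numpy as np
from collections import deque

def threshold_segment(slice_img, tmin, tmax):
    """Intensity-based segmentation: keep pixels whose intensity lies
    inside [tmin, tmax] (e.g., the Hounsfield range of bone in CT)."""
    return (slice_img >= tmin) & (slice_img <= tmax)

def region_grow(slice_img, seed, tol):
    """Region growing: grow a region from a seed pixel, adding
    4-connected neighbours whose intensity differs from the seed value
    by at most `tol`."""
    h, w = slice_img.shape
    seed_val = float(slice_img[seed])
    mask = np.zeros((h, w), dtype=bool)
    mask[seed] = True
    queue = deque([seed])
    while queue:
        y, x = queue.popleft()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if 0 <= ny < h and 0 <= nx < w and not mask[ny, nx]:
                if abs(float(slice_img[ny, nx]) - seed_val) <= tol:
                    mask[ny, nx] = True
                    queue.append((ny, nx))
    return mask

# Synthetic 5x5 "slice": a bright 2x2 structure on a dark background.
img = np.zeros((5, 5))
img[1:3, 1:3] = 100.0
```

Segmenting each slice this way and stacking the resulting masks is the basis for extracting a 3D surface or volume from a tomographic image series.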
3.3. Discussion
4. Applications
4.1. Medical and Dental Applications
4.2. Applications in the Computer Graphics Industry
4.3. Applications in the Field of Cultural Heritage
4.4. Applications in the Fields of Safety and Rescue
4.5. Reverse Engineering Applications in the Manufacturing Industry
- New product design. In some new product design applications, the design starts from a physical (existing) prototype object. Especially for objects with freeform surfaces, it is easier to produce their 3D polygonal model through a reverse engineering process and subsequently build the CAD (computer-aided design) model based on the polygonal model.
- Modification of an existing product. Existing product designs are often iteratively modified. However, the CAD 3D model for a product after modification may not be available, and its 3D model may have to be created from scratch through a reverse engineering technique.
- Loss of digital 3D product designs. In some cases, the 3D model of a product, or part of it, is no longer available or has been destroyed (e.g., car/aircraft/ship parts that have been retired).
- Product verification. In some applications it is useful to generate the 3D model of a product using overlapping images or scans of it, or part of it, and subsequently compare it with the 3D CAD model of its design to identify any deviations.
- Quality control and inspection. 3D modeling of parts/machines/vehicles and other products using scanners is conducted to detect cracks or other types of defects in the object under consideration for quality control.
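Product verification as described above amounts to measuring deviations between the scanned geometry and the design geometry. A minimal sketch, using brute-force nearest neighbours on tiny illustrative clouds (a real pipeline would first align the scan to the CAD model and use spatial indexing such as a k-d tree):

```python
import numpy as np

def cloud_deviations(scanned, reference):
    """Product verification sketch: for every scanned point, the distance
    to its nearest neighbour in the reference (CAD-sampled) cloud.
    Brute force is fine for small clouds only."""
    # Pairwise distances, shape (len(scanned), len(reference)).
    diff = scanned[:, None, :] - reference[None, :, :]
    dists = np.linalg.norm(diff, axis=2)
    return dists.min(axis=1)

# Illustrative data: four reference points and a scanned copy in which one
# point deviates by 0.05 units (e.g., a manufacturing defect).
reference = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.], [1., 1., 0.]])
scanned = reference.copy()
scanned[2] += [0.0, 0.0, 0.05]
dev = cloud_deviations(scanned, reference)
```

Thresholding the resulting deviations flags the regions where the product departs from its design.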
5. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
- Goesele, M.; Curless, B.; Seitz, S.M. Multi-view stereo revisited. In Proceedings of the 2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR’06), New York, NY, USA, 17–22 June 2006; pp. 2402–2409.
- Furukawa, Y.; Hernández, C. Multi-view stereo: A tutorial. Found. Trends Comput. Graph. Vis. 2015, 9, 1–148.
- Strecha, C.; Von Hansen, W.; Van Gool, L.; Fua, P.; Thoennessen, U. On benchmarking camera calibration and multi-view stereo for high resolution imagery. In Proceedings of the 2008 IEEE Conference on Computer Vision and Pattern Recognition, Anchorage, AK, USA, 23–28 June 2008.
- Locher, A.; Havlena, M.; Van Gool, L. Progressive structure from motion. In Proceedings of the 2018 European Conference on Computer Vision, Munich, Germany, 8–14 September 2018; pp. 20–35.
- Verykokou, S.; Ioannidis, C. A photogrammetry-based structure from motion algorithm using robust iterative bundle adjustment techniques. ISPRS Ann. Photogramm. Remote Sens. Spat. Inf. Sci. 2018, IV-4/W6, 73–80.
- Chen, Y.; Shen, S.; Chen, Y.; Wang, G. Graph-based parallel large scale structure from motion. Pattern Recognit. 2020, 107, 107537.
- Snavely, N.; Seitz, S.M.; Szeliski, R. Photo tourism: Exploring photo collections in 3D. ACM Trans. Graph. 2006, 25, 835–846.
- Frahm, J.-M.; Fite-Georgel, P.; Gallup, D.; Johnson, T.; Raguram, R.; Wu, C.; Jen, Y.-H.; Dunn, E.; Clipp, B.; Lazebnik, S.; et al. Building Rome on a cloudless day. In Computer Vision—ECCV 2010. ECCV 2010. Lecture Notes in Computer Science; Daniilidis, K., Maragos, P., Paragios, N., Eds.; Springer: Berlin/Heidelberg, Germany, 2010; Volume 6314, pp. 368–381.
- Agarwal, S.; Furukawa, Y.; Snavely, N.; Simon, I.; Curless, B.; Seitz, S.M.; Szeliski, R. Building Rome in a day. Commun. ACM 2011, 54, 105–112.
- Wu, C. Towards linear-time incremental structure from motion. In Proceedings of the 2013 International Conference on 3D Vision, Seattle, WA, USA, 29 June–1 July 2013; pp. 127–134.
- Shah, R.; Deshpande, A.; Narayanan, P.J. Multistage SFM: Revisiting incremental structure from motion. In Proceedings of the 2nd International Conference on 3D Vision (3DV), Tokyo, Japan, 8–11 December 2014; pp. 417–424.
- Verykokou, S.; Ioannidis, C. Exterior orientation estimation of oblique aerial images using SfM-based robust bundle adjustment. Int. J. Remote Sens. 2020, 41, 7233–7270.
- Arie-Nachimson, M.; Kovalsky, S.Z.; Kemelmacher-Shlizerman, I.; Singer, A.; Basri, R. Global motion estimation from point matches. In Proceedings of the Second International Conference on 3D Imaging, Modeling, Processing, Visualization and Transmission, Zurich, Switzerland, 13–15 October 2012; pp. 81–88.
- Jiang, N.; Cui, Z.; Tan, P. A global linear method for camera pose registration. In Proceedings of the 2013 IEEE International Conference on Computer Vision (ICCV), Sydney, Australia, 1–8 December 2013; pp. 481–488.
- Moulon, P.; Monasse, P.; Marlet, R. Global fusion of relative motions for robust, accurate and scalable structure from motion. In Proceedings of the 2013 IEEE International Conference on Computer Vision (ICCV), Sydney, Australia, 1–8 December 2013; pp. 3248–3255.
- Wilson, K.; Snavely, N. Robust global translation with 1DSfM. In Computer Vision—ECCV 2014. ECCV 2014. Lecture Notes in Computer Science; Fleet, D., Pajdla, T., Schiele, B., Tuytelaars, T., Eds.; Springer: Cham, Switzerland, 2014; Volume 8691.
- Cui, Z.; Tan, P. Global structure-from-motion by similarity averaging. In Proceedings of the 2015 IEEE International Conference on Computer Vision (ICCV), Santiago, Chile, 7–13 December 2015; pp. 864–872.
- Zhu, S.; Zhang, R.; Zhou, L.; Shen, T.; Fang, T.; Tan, P.; Quan, L. Very large-scale global SfM by distributed motion averaging. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 4568–4577.
- Farenzena, M.; Fusiello, A.; Gherardi, R. Structure-and-motion pipeline on a hierarchical cluster tree. In Proceedings of the 2009 IEEE 12th International Conference on Computer Vision Workshops, ICCV Workshops, Kyoto, Japan, 27 September–4 October 2009; pp. 1489–1496.
- Gherardi, R.; Farenzena, M.; Fusiello, A. Improving the efficiency of hierarchical structure-and-motion. In Proceedings of the 2010 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), San Francisco, CA, USA, 13–18 June 2010; pp. 1594–1600.
- Ni, K.; Dellaert, F. HyperSfM. In Proceedings of the Second International Conference on 3D Imaging, Modeling, Processing, Visualization and Transmission (3DIMPVT), Zurich, Switzerland, 13–15 October 2012; pp. 144–151.
- Xu, B.; Zhang, L.; Liu, Y.; Ai, H.; Wang, B.; Sun, Y.; Fan, Z. Robust hierarchical structure from motion for large-scale unstructured image sets. ISPRS J. Photogramm. Remote Sens. 2021, 181, 367–384.
- Scharstein, D.; Szeliski, R. A taxonomy and evaluation of dense two-frame stereo correspondence algorithms. Int. J. Comput. Vis. 2002, 47, 7–42.
- Szeliski, R. Computer Vision—Algorithms and Applications; Springer: Heidelberg, Germany, 2011.
- Remondino, F.; Spera, M.G.; Nocerino, E.; Menna, F.; Nex, F. State of the art in high density image matching. Photogramm. Rec. 2014, 29, 144–166.
- Hirschmuller, H. Stereo processing by semiglobal matching and mutual information. IEEE Trans. Pattern Anal. Mach. Intell. 2008, 30, 328–341.
- Merrell, P.; Akbarzadeh, A.; Wang, L.; Mordohai, P.; Frahm, J.M.; Yang, R.; Nister, D.; Pollefeys, M. Real-time visibility-based fusion of depth maps. In Proceedings of the 2007 IEEE 11th International Conference on Computer Vision, Rio de Janeiro, Brazil, 14–21 October 2007.
- Li, J.; Li, E.; Chen, Y.; Xu, L.; Zhang, Y. Bundled depth-map merging for multi-view stereo. In Proceedings of the 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, San Francisco, CA, USA, 13–18 June 2010; pp. 2769–2776.
- Furukawa, Y.; Ponce, J. Accurate, dense, and robust multiview stereopsis. IEEE Trans. Pattern Anal. Mach. Intell. 2010, 32, 1362–1376.
- Lim, S.P.; Haron, H. Surface reconstruction techniques: A review. Artif. Intell. Rev. 2014, 42, 59–78.
- Kazhdan, M.; Bolitho, M.; Hoppe, H. Poisson surface reconstruction. In Proceedings of the Fourth Eurographics Symposium on Geometry Processing, Cagliari, Sardinia, Italy, 26–28 June 2006.
- Frueh, C.; Sammon, R.; Zakhor, A. Automated texture mapping of 3D city models with oblique aerial imagery. In Proceedings of the 2nd International Symposium on 3D Data Processing, Visualization and Transmission (3DPVT 2004), Thessaloniki, Greece, 6–9 September 2004; pp. 396–403.
- Nex, F.; Gerke, M.; Remondino, F.; Przybilla, H.J.; Bäumker, M.; Zurhorst, A. ISPRS benchmark for multi-platform photogrammetry. ISPRS Ann. Photogramm. Remote Sens. Spat. Inf. Sci. 2015, 2, 135. [Google Scholar] [CrossRef] [Green Version]
- Srivastava, A.K.; de la Tocnaye, J.D.B.; Dupont, L. Liquid crystal active glasses for 3D cinema. J. Disp. Technol. 2010, 6, 522–530. [Google Scholar] [CrossRef]
- McAllister, D.F. Display technology: Stereo & 3D display technologies. Encycl. Imaging Sci. Technol. 2002, 2, 1327–1344. [Google Scholar]
- Woods, A.J.; Harris, C.R. Comparing levels of crosstalk with red/cyan, blue/yellow, and green/magenta anaglyph 3D glasses. In Stereoscopic Displays and Applications XXI; SPIE: Boston, MA, USA, 2010; Volume 7524, pp. 235–246. [Google Scholar]
- Klette, R.; Kozera, R.; Schlüns, K. Shape from Shading and Photometric Stereo Methods; Computer Science Department of the University of Auckland: Auckland, New Zealand, 1998. [Google Scholar]
- Woodham, R.J. Photometric method for determining surface orientation from multiple images. Opt. Eng. 1980, 19, 139–144. [Google Scholar] [CrossRef]
- Ikehata, S. CNN-PS: CNN-Based Photometric Stereo for General Non-convex Surfaces. In Computer Vision—ECCV 2018. ECCV 2018. Lecture Notes in Computer Science; Ferrari, V., Hebert, M., Sminchisescu, C., Weiss, Y., Eds.; Springer: Cham, Switzerland, 2018; Volume 11219. [Google Scholar]
- Ju, Y.; Shi, B.; Jian, M.; Qi, L.; Dong, J.; Lam, K.M. NormAttention-PSN: A High-frequency Region Enhanced Photometric Stereo Network with Normalized Attention. Int. J. Comput. Vis. 2022, 130, 3014–3034. [Google Scholar] [CrossRef]
- Liu, Y.; Ju, Y.; Jian, M.; Gao, F.; Rao, Y.; Hu, Y.; Dong, J. A deep-shallow and global–local multi-feature fusion network for photometric stereo. Image Vis. Comput. 2022, 118, 104368. [Google Scholar] [CrossRef]
- Horn, B.K.P.; Brooks, M.J. Shape from Shading; MIT Press: Cambridge, MA, USA, 1989. [Google Scholar]
- Shunyi, Z.; Ruirui, W.; Changjun, C.; Zuxun, Z. 3D measurement and modeling based on stereo-camera. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2008, XXXVII(B5), 57–62. [Google Scholar]
- Poli, D.; Toutin, T. Review of developments in geometric modelling for high resolution satellite pushbroom sensors. Photogramm. Rec. 2012, 27, 58–73. [Google Scholar] [CrossRef]
- Galantucci, L.M.; Percoco, G.; Angelelli, G.; Lopez, C.; Introna, F.; Liuzzi, C.; De Donno, A. Reverse engineering techniques applied to a human skull, for CAD 3D reconstruction and physical replication by rapid prototyping. J. Med. Eng. Technol. 2006, 30, 102–111. [Google Scholar] [CrossRef] [PubMed]
- Bi, Z.M.; Wang, L. Advances in 3D data acquisition and processing for industrial applications. Robot. Comput. -Integr. Manuf. 2010, 26, 403–413. [Google Scholar] [CrossRef]
- Mikó, B.; Czövek, I.; Horváth, Á. Investigation of accuracy of 3D scanning. In Proceedings of the MultiScience-XXXI, MicroCAD International Multidisciplinary Scientific Conference, Miskolc, Hungary, 20–21 April 2017. [Google Scholar]
- Perez-Cortes, J.C.; Perez, A.J.; Saez-Barona, S.; Guardiola, J.L.; Salvador, I. A System for In-Line 3D Inspection without Hidden Surfaces. Sensors 2018, 18, 2993. [Google Scholar] [CrossRef] [Green Version]
- Cui, B.; Tao, W.; Zhao, H. High-Precision 3D Reconstruction for Small-to-Medium-Sized Objects Utilizing Line-Structured Light Scanning: A Review. Remote Sens. 2021, 13, 4457. [Google Scholar] [CrossRef]
- Farahani, N.; Braun, A.; Jutt, D.; Huffman, T.; Reder, N.; Liu, Z.; Yagi, Y.; Pantanowitz, L. Three-dimensional imaging and scanning: Current and future applications for pathology. J. Pathol. Inform. 2017, 8, 36. [Google Scholar] [CrossRef] [PubMed]
- Shan, J.; Toth, C.K. Topographic Laser Ranging and Scanning: Principles and Processing; CRC Press: Boca Raton, FL, USA, 2018. [Google Scholar]
- Opitz, R.S. An overview of airborne and terrestrial laser scanning in archaeology. In Interpreting Archaeological Topography: 3D Data, Visualisation and Observation; Oxbow Books: Oxford, UK, 2013; pp. 13–31. [Google Scholar]
- Muralikrishnan, B. Performance evaluation of terrestrial laser scanners—A review. Meas. Sci. Technol. 2021, 32, 072001. [Google Scholar] [CrossRef]
- Altuntaş, C. Triangulation and time-of-flight based 3D digitisation techniques of cultural heritage structures. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2021, XLIII-B2-2021, 825–830. [Google Scholar] [CrossRef]
- Lakshmi, M.K.; Rao, S.K.; Subrahmanyam, K. Pervasive underwater passive target tracking for the computation of standard deviation solution in a 3D environment. Int. J. Intell. Comput. Cybern. 2021, 14, 580–597. [Google Scholar] [CrossRef]
- Wei, Z.; Duan, Z.; Han, Y. Target Tracking with Asynchronous Multi-rate Active and Passive Sonars. In Proceedings of the 2021 International Conference on Control, Automation and Information Sciences (ICCAIS), Xi’an, China, 14–17 October 2021; pp. 717–722. [Google Scholar]
- Bishop, O. Understand Electronics; Elsevier: Amsterdam, The Netherlands, 2001. [Google Scholar]
- Imperatore, P.; Pepe, A.; Sansosti, E. High performance computing in satellite SAR interferometry: A critical perspective. Remote Sens. 2021, 13, 4756. [Google Scholar] [CrossRef]
- Angelopoulos, C.; Scarfe, W.C.; Farman, A.G. A comparison of maxillofacial CBCT and medical CT. Atlas Oral Maxillofac. Surg. Clin. North Am. 2012, 20, 1–17. [Google Scholar] [CrossRef]
- Bushberg, J.T.; Seibert, J.A.; Leidholdt Jr, E.M.; Boone, J.M. The Essential Physics of Medical Imaging; Lippincott Williams & Wilkins: Philadelphia, PA, USA, 2002. [Google Scholar]
- Seeram, E. Computed tomography: A technical review. Radiol. Technol. 2018, 89, 279CT–302CT. [Google Scholar] [PubMed]
- Sera, T. Computed tomography. In Transparency in Biology; Soga, K., Umezawa, M., Okubo, K., Eds.; Springer: Singapore, 2021; pp. 167–187. [Google Scholar]
- Scarfe, W.C.; Farman, A.G. What is cone-beam CT and how does it work? Dent. Clin. N. Am. 2008, 52, 707–730. [Google Scholar] [CrossRef] [PubMed]
- Alamri, H.M.; Sadrameli, M.; Alshalhoob, M.A.; Alshehri, M.A. Applications of CBCT in dental practice: A review of the literature. Gen. Dent. 2012, 60, 390–400. [Google Scholar] [PubMed]
- Hashimoto, K.; Kawashima, S.; Kameoka, S.; Akiyama, Y.; Honjoya, T.; Ejima, K.; Sawada, K. Comparison of image validity between cone beam computed tomography for dental use and multidetector row helical computed tomography. Dentomaxillofac. Radiol. 2007, 36, 465–471. [Google Scholar] [CrossRef]
- Schulze, D.; Heiland, M.; Thurmann, H.; Adam, G. Radiation exposure during midfacial imaging using 4- and 16-slice computed tomography, cone beam computed tomography systems and conventional radiography. Dentomaxillofac. Radiol. 2004, 33, 83–86. [Google Scholar] [CrossRef]
- Haleem, A.; Javaid, M. 3D scanning applications in medical field: A literature-based review. Clin. Epidemiol. Glob. Health 2019, 7, 199–210. [Google Scholar] [CrossRef] [Green Version]
- Carovac, A.; Smajlovic, F.; Junuzovic, D. Application of ultrasound in medicine. Acta Inform. Med. 2011, 19, 168–171. [Google Scholar] [CrossRef] [Green Version]
- Sprawls, P. Physical Principles of Medical Imaging, 3rd ed.; Medical Physics Pub: Madison, WI, USA, 1995. [Google Scholar]
- Fenster, A.; Downey, D.B.; Cardinal, H.N. Three-dimensional ultrasound imaging. Phys. Med. Biol. 2001, 46, R67. [Google Scholar] [CrossRef]
- Huang, Q.; Zeng, Z. A review on real-time 3D ultrasound imaging technology. BioMed Res. Int. 2017, 2017, 6027029. [Google Scholar] [CrossRef] [Green Version]
- Cawley, P. Non-destructive testing—Current capabilities and future directions. Proc. Inst. Mech. Eng. Part L J. Mater. Des. Appl. 2001, 215, 213–223. [Google Scholar]
- Hsu, D.K. Non-destructive evaluation (NDE) of aerospace composites: Ultrasonic techniques. In Non-Destructive Evaluation (NDE) of Polymer Matrix Composites; Woodhead Publishing: Sawston, UK, 2013; pp. 397–422. [Google Scholar]
- Ye, G.; Neal, B.; Boot, A.; Kappatos, V.; Selcuk, C.; Gan, T.H. Development of an ultrasonic NDT system for automated in-situ inspection of wind turbine blades. In Proceedings of the EWSHM-7th European Workshop on Structural Health Monitoring, Nantes, France, 8–11 July 2014. [Google Scholar]
- Desai, S.; Bidanda, B. Reverse engineering: A review & evaluation of contact based systems. Rapid Prototyp. 2006, 6, 107–131. [Google Scholar]
- Kamrani, A.K.; Nasr, E.A. Rapid Prototyping: Theory and Practice; Springer Science & Business Media: New York, NY, USA, 2006. [Google Scholar]
- Puertas, I.; Pérez, C.L.; Salcedo, D.; León, J.; Luri, R.; Fuertes, J.P. Precision study of a coordinate measuring machine using several contact probes. Procedia Eng. 2013, 63, 547–555. [Google Scholar] [CrossRef] [Green Version]
- Denk, W.; Horstmann, H. Serial block-face scanning electron microscopy to reconstruct three-dimensional tissue nanostructure. PLoS Biol. 2004, 2, e329. [Google Scholar] [CrossRef]
- Pesavento, M.J.; Miller, C.; Pelton, K.; Maloof, M.; Monteith, C.E.; Vemuri, V.; Klimen, M. Knife-edge scanning microscopy for bright-field multi-cubic centimeter analysis of microvasculature. Microsc. Today 2017, 25, 14–21. [Google Scholar] [CrossRef]
- Li, A.; Gong, H.; Zhang, B.; Wang, Q.; Yan, C.; Wu, J.; Liu, Q.; Zeng, S.; Luo, Q. Micro-optical sectioning tomography to obtain a high-resolution atlas of the mouse brain. Science 2010, 330, 1404–1408. [Google Scholar] [CrossRef] [PubMed] [Green Version]
- Bushby, A.J.; P’ng, K.M.; Young, R.D.; Pinali, C.; Knupp, C.; Quantock, A.J. Imaging three-dimensional tissue architectures by focused ion beam scanning electron microscopy. Nat. Protoc. 2011, 6, 845–858. [Google Scholar] [CrossRef]
- Besl, P.J.; McKay, N.D. Method for registration of 3-D shapes. In Sensor Fusion IV: Control Paradigms and Data Structures; SPIE: Boston, MA, USA, 1992; Volume 1611, pp. 586–606. [Google Scholar]
- Lo Giudice, A.; Ronsivalle, V.; Grippaudo, C.; Lucchese, A.; Muraglie, S.; Lagravère, M.O.; Isola, G. One step before 3D printing—Evaluation of imaging software accuracy for 3-dimensional analysis of the mandible: A comparative study using a surface-to-surface matching technique. Materials 2020, 13, 2798. [Google Scholar] [CrossRef]
- Canny, J. A computational approach to edge detection. IEEE Trans. Pattern Anal. Mach. Intell. 1986, 6, 679–698. [Google Scholar] [CrossRef]
- Bakoš, M. Active contours and their utilization at image segmentation. In Proceedings of the 5th Slovakian-Hungarian Joint Symposium on Applied Machine Intelligence and Informatics, Poprad, Slovakia, 25–26 January 2007. [Google Scholar]
- Brice, C.R.; Fennema, C.L. Scene analysis using regions. Artif. Intell. 1970, 1, 205–226. [Google Scholar] [CrossRef]
- Huotilainen, E.; Jaanimets, R.; Valášek, J.; Marcián, P.; Salmi, M.; Tuomi, J.; Makitie, A.; Wolff, J. Inaccuracies in additive manufactured medical skull models caused by the DICOM to STL conversion process. J. Cranio-Maxillofac. Surg. 2014, 42, e259–e265. [Google Scholar] [CrossRef]
- Seal, A.; Das, A.; Sen, P. Watershed: An image segmentation approach. Int. J. Comput. Sci. Inf. Technol. 2015, 6, 2295–2297. [Google Scholar]
- Müller, A.; Krishnan, K.G.; Uhl, E.; Mast, G. The application of rapid prototyping techniques in cranial reconstruction and preoperative planning in neurosurgery. J. Craniofac. Surg. 2003, 14, 899–914. [Google Scholar] [CrossRef] [PubMed]
- Wagner, J.D.; Baack, B.; Brown, G.A.; Kelly, J. Rapid 3-dimensional prototyping for surgical repair of maxillofacial fractures: A technical note. J. Oral Maxillofac. Surg. 2004, 62, 898–901. [Google Scholar] [CrossRef]
- Guarino, J.; Tennyson, S.; McCain, G.; Bond, L.; Shea, K.; King, H. Rapid prototyping technology for surgeries of the pediatric spine and pelvis: Benefits analysis. J. Pediatr. Orthop. 2007, 27, 955–960. [Google Scholar] [CrossRef] [PubMed]
- Rengier, F.; Mehndiratta, A.; Von Tengg-Kobligk, H.; Zechmann, C.M.; Unterhinninghofen, R.; Kauczor, H.U.; Giesel, F.L. 3D printing based on imaging data: Review of medical applications. Int. J. Comput. Assist. Radiol. Surg. 2010, 5, 335–341. [Google Scholar] [CrossRef]
- Mavili, M.E.; Canter, H.I.; Saglam-Aydinatay, B.; Kamaci, S.; Kocadereli, I. Use of three-dimensional medical modeling methods for precise planning of orthognathic surgery. J. Craniofac. Surg. 2007, 18, 740–747. [Google Scholar] [CrossRef] [Green Version]
- Verykokou, S.; Ioannidis, C.; Angelopoulos, C. Evaluation of 3D Modeling Workflows Using Dental CBCT Data for Periodontal Regenerative Treatment. J. Pers. Med. 2022, 12, 1355. [Google Scholar] [CrossRef] [PubMed]
- Harrysson, O.L.; Hosni, Y.A.; Nayfeh, J.F. Custom-designed orthopedic implants evaluated using finite element analysis of patient-specific computed tomography data: Femoral-component case study. BMC Musculoskelet. Disord. 2007, 8, 91. [Google Scholar] [CrossRef] [PubMed] [Green Version]
- Mitsouras, D.; Liacouras, P.; Imanzadeh, A.; Giannopoulos, A.A.; Cai, T.; Kumamaru, K.K.; George, E.; Wake, N.; Caterson, E.J.; Pomahac, B.; et al. Medical 3D printing for the radiologist. Radiographics 2015, 35, 1965–1988. [Google Scholar] [CrossRef]
- Shah, N.P.; Khanna, A.; Pai, A.R.; Sheth, V.H.; Raut, S.R. An evaluation of virtually planned and 3D-printed stereolithographic surgical guides from CBCT and digital scans: An in vitro study. J. Prosthet. Dent. 2022, 128, 436–442. [Google Scholar] [CrossRef]
- Klak, M.; Bryniarski, T.; Kowalska, P.; Gomolka, M.; Tymicki, G.; Kosowska, K.; Cywoniuk, P.; Dobrzanski, T.; Turowski, P.; Wszola, M. Novel strategies in artificial organ development: What is the future of medicine? Micromachines 2020, 11, 646. [Google Scholar] [CrossRef]
- Tong, J.; Zhou, J.; Liu, L.; Pan, Z.; Yan, H. Scanning 3D full human bodies using kinects. IEEE Trans. Vis. Comput. Graph. 2012, 18, 643–650. [Google Scholar] [CrossRef] [PubMed] [Green Version]
- Huang, J.; Dai, A.; Guibas, L.J.; Nießner, M. 3Dlite: Towards commodity 3D scanning for content creation. ACM Trans. Graph. 2017, 36, 203. [Google Scholar] [CrossRef]
- Statham, N. Use of photogrammetry in video games: A historical overview. Games Cult. 2020, 15, 289–307. [Google Scholar] [CrossRef]
- Skublewska-Paszkowska, M.; Milosz, M.; Powroznik, P.; Lukasik, E. 3D technologies for intangible cultural heritage preservation—Literature review for selected databases. Herit. Sci. 2022, 10, 3. [Google Scholar] [CrossRef]
- Doulamis, N.; Doulamis, A.; Ioannidis, C.; Klein, M.; Ioannides, M. Modelling of static and moving objects: Digitizing tangible and intangible cultural heritage. In Mixed Reality and Gamification for Cultural Heritage; Springer: Cham, Switzerland, 2017; pp. 567–589. [Google Scholar]
- Rallis, I.; Voulodimos, A.; Bakalos, N.; Protopapadakis, E.; Doulamis, N.; Doulamis, A. Machine learning for intangible cultural heritage: A review of techniques on dance analysis. Vis. Comput. Cult. Herit. 2020, 103–119. [Google Scholar] [CrossRef]
- Goenetxea, J.; Unzueta, L.; Linaza, M.T.; Rodriguez, M.; O’Connor, N.; Moran, K. Capturing the sporting heroes of our past by extracting 3D movements from legacy video content. In Digital Heritage. Progress in Cultural Heritage: Documentation, Preservation, and Protection. EuroMed 2014; Ioannides, M., Magnenat-Thalmann, N., Fink, E., Žarnić, R., Yen, A.Y., Quak, E., Eds.; Springer: Cham, Switzerland, 2014; Volume 8740, pp. 48–58. [Google Scholar]
- Jeong, E.; Yu, J. Ego-centric recording framework for Korean traditional crafts motion. In Digital Heritage. Progress in Cultural Heritage: Documentation, Preservation, and Protection. EuroMed 2018; Springer: Cham, Switzerland, 2018; Volume 11197, pp. 118–125. [Google Scholar]
- Partarakis, N.; Zabulis, X.; Chatziantoniou, A.; Patsiouras, N.; Adami, I. An approach to the creation and presentation of reference gesture datasets, for the preservation of traditional crafts. Appl. Sci. 2020, 10, 7325. [Google Scholar] [CrossRef]
- Menna, F.; Agrafiotis, P.; Georgopoulos, A. State of the art and applications in archaeological underwater 3D recording and mapping. J. Cult. Herit. 2018, 33, 231–248. [Google Scholar] [CrossRef]
- Verykokou, S.; Doulamis, A.; Athanasiou, G.; Ioannidis, C.; Amditis, A. UAV-based 3D modelling of disaster scenes for Urban Search and Rescue. In Proceedings of the 2016 IEEE International Conference on Imaging Systems and Techniques (IST), Chania, Crete Island, Greece, 4–6 October 2016; pp. 106–111. [Google Scholar]
- Verykokou, S.; Ioannidis, C.; Athanasiou, G.; Doulamis, N.; Amditis, A. 3D reconstruction of disaster scenes for urban search and rescue. Multimed. Tools Appl. 2018, 77, 9691–9717. [Google Scholar] [CrossRef]
- Lauterbach, H.A.; Koch, C.B.; Hess, R.; Eck, D.; Schilling, K.; Nüchter, A. The Eins3D project—Instantaneous UAV-based 3D mapping for Search and Rescue applications. In Proceedings of the 2019 IEEE International Symposium on Safety, Security, and Rescue Robotics (SSRR), Würzburg, Germany, 1–2 September 2019. [Google Scholar]
- Barazzetti, L.; Sala, R.; Scaioni, M.; Cattaneo, C.; Gibelli, D.; Giussani, A.; Poppa, P.; Roncoroni, F.; Vandone, A. 3D scanning and imaging for quick documentation of crime and accident scenes. In Sensors, and Command, Control, Communications, and Intelligence (C3I) Technologies for Homeland Security and Homeland Defense XI; SPIE: Boston, MA, USA, 2012; Volume 8359, pp. 208–221. [Google Scholar]
- Becker, S.; Spranger, M.; Heinke, F.; Grunert, S.; Labudde, D. A comprehensive framework for high resolution image-based 3D modeling and documentation of crime scenes and disaster sites. Int. J. Adv. Syst. Meas. 2018, 11, 1–12. [Google Scholar]
- Tredinnick, R.; Smith, S.; Ponto, K. A cost-benefit analysis of 3D scanning technology for crime scene investigation. Forensic Sci. Int. Rep. 2019, 1, 100025. [Google Scholar] [CrossRef]
- Geng, Z.; Bidanda, B. Review of reverse engineering systems–current state of the art. Virtual Phys. Prototyp. 2017, 12, 161–172. [Google Scholar] [CrossRef]
| Data | Methods |
|---|---|
| Images | Multi-view stereo |
| | Two-image reconstruction |
| | Conventional photogrammetric procedure |
| | Shading-based shape recovery |
| | Usage of a stereo-camera |
| | Usage of satellite imagery |
| Scans | Point clouds to 3D models |
| | Tomographic images to 3D models |
© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Verykokou, S.; Ioannidis, C. An Overview on Image-Based and Scanner-Based 3D Modeling Technologies. Sensors 2023, 23, 596. https://doi.org/10.3390/s23020596