Geometry Reconstruction from Images (2nd Edition)

A special issue of Journal of Imaging (ISSN 2313-433X). This special issue belongs to the section "Visualization and Computer Graphics".

Deadline for manuscript submissions: 28 February 2025

Special Issue Editor

Dr. Daniel Meneveaux
Guest Editor
XLIM Institute, UMR CNRS 7252, University of Poitiers, 86073 Poitiers, France
Interests: computer graphics; lighting simulation; reflectance models; image-based rendering

Special Issue Information

Dear Colleagues,

Our Special Issue on "Geometry Reconstruction from Images" was a success, with 10 papers published (https://www.mdpi.com/journal/jimaging/special_issues/geometry_reconstruction). The Journal of Imaging is therefore proposing a second issue in the same area. Recovering 3D content from images has been a tremendous source of research advances, with many different focuses, target applications, needs, and scientific starting points. A wide range of approaches are now employed in industry for purposes such as quality control in engineering production, video-based security, and 3D modeling for games and movies. Yet reconstructing a representation of a scene observed through a camera remains challenging in general, and the specific question of producing a (static or dynamic) geometric model has driven decades of research and is still a very active scientific domain. Sensors continue to evolve, bringing ever more accuracy and resolution, and new opportunities for reconstructing objects' shapes and detailed geometric variations.

This new call is dedicated to, but not limited to, 3D reconstruction from videos, multispectral images, or time-of-flight sensors. We wish to encourage original contributions that focus on the power of imaging methods to recover geometric representations of objects or parts of objects. Contributions may follow various approaches, including shape-from-X, deep learning, photometric stereo, NeRF-based methods, etc.

We hope this new call will be of interest to many authors.

Dr. Daniel Meneveaux
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Journal of Imaging is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1800 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • geometry from images and videos
  • reconstruction from 3D images
  • reconstruction from time of flight sensors
  • reconstruction from multispectral images
  • reconstruction from lightfield cameras
  • multiview reconstruction
  • photometric stereo
  • epipolar geometry
  • space carving and coloring
  • differential geometry
  • deep learning based reconstruction
  • medical imaging
  • radar, satellites
  • cultural heritage
  • virtual and augmented reality

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (8 papers)


Research

13 pages, 543 KiB  
Article
Fitting Geometric Shapes to Fuzzy Point Cloud Data
by Vincent B. Verhoeven, Pasi Raumonen and Markku Åkerblom
J. Imaging 2025, 11(1), 7; https://doi.org/10.3390/jimaging11010007 - 3 Jan 2025
Abstract
This article describes procedures and thoughts regarding the reconstruction of geometry, given data and its uncertainty. The data are considered as a continuous fuzzy point cloud instead of a discrete point cloud. Shape fitting is commonly performed by minimizing the discrete Euclidean distance; however, we propose the novel approach of using the expected Mahalanobis distance. The primary benefit is that it takes both the different magnitude and orientation of uncertainty for each data point into account. We illustrate the approach with laser scanning data of a cylinder and compare its performance with that of the conventional least squares method with and without random sample consensus (RANSAC). Our proposed method fits the geometry more accurately, albeit generally with greater uncertainty, and shows promise for geometry reconstruction with laser-scanned data.
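As a rough illustration of the core idea, the sketch below fits a sphere by minimizing squared Mahalanobis residuals, whitening each point-to-surface residual by a per-point inverse covariance. It is a simplified stand-in for the paper's expected Mahalanobis distance over fuzzy points, not the authors' code; the sphere model, covariances, and noise level are all illustrative assumptions.

```python
# Sketch: shape fitting under a Mahalanobis metric (illustrative, not the paper's code).
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(0)
true_c, true_r = np.array([1.0, -0.5, 2.0]), 3.0
dirs = rng.normal(size=(200, 3))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
pts = true_c + true_r * dirs + rng.normal(scale=0.05, size=(200, 3))

# Per-point inverse covariances (isotropic here for brevity); anisotropic
# covariances are where the Mahalanobis metric pays off over Euclidean.
cov_inv = np.stack([np.eye(3) / 0.05**2] * len(pts))
L = np.linalg.cholesky(cov_inv)  # cov_inv = L @ L.T per point

def residuals(params):
    c, r = params[:3], params[3]
    d = pts - c
    closest = c + r * d / np.linalg.norm(d, axis=1, keepdims=True)
    e = pts - closest  # Euclidean point-to-surface residual vectors
    # Whiten each residual: ||L.T @ e|| is the Mahalanobis norm of e.
    return np.einsum('nji,nj->ni', L, e).ravel()

fit = least_squares(residuals, x0=np.array([0.0, 0.0, 0.0, 1.0]))
print(np.round(fit.x, 2))  # close to [1.0, -0.5, 2.0, 3.0]
```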

18 pages, 36094 KiB  
Article
Arbitrary Optics for Gaussian Splatting Using Space Warping
by Jakob Nazarenus, Simin Kou, Fang-Lue Zhang and Reinhard Koch
J. Imaging 2024, 10(12), 330; https://doi.org/10.3390/jimaging10120330 - 22 Dec 2024
Abstract
Due to recent advances in 3D reconstruction from RGB images, it is now possible to create photorealistic representations of real-world scenes that only require minutes to be reconstructed and can be rendered in real time. In particular, 3D Gaussian splatting shows promising results, outperforming preceding reconstruction methods while simultaneously reducing the overall computational requirements. The main success of 3D Gaussian splatting relies on the efficient use of a differentiable rasterizer to render the Gaussian scene representation. One major drawback of this method is its underlying pinhole camera model. In this paper, we propose an extension of the existing method that removes this constraint and enables scene reconstructions using arbitrary camera optics such as highly distorting fisheye lenses. Our method achieves this by applying a differentiable warping function to the Gaussian scene representation. Additionally, we reduce overfitting in outdoor scenes by utilizing a learnable skybox, reducing the presence of floating artifacts within the reconstructed scene. Based on synthetic and real-world image datasets, we show that our method is capable of creating an accurate scene reconstruction from highly distorted images and rendering photorealistic images from such reconstructions.
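To give a flavor of what a differentiable lens model looks like, here is a minimal PyTorch sketch of an equidistant fisheye projection. The projection model and focal length are assumptions for illustration; this is not the paper's actual warping function applied to Gaussians.

```python
# Sketch: a differentiable fisheye projection (illustrative assumption,
# equidistant model r = f * theta; not the paper's warp).
import torch

def fisheye_project(xyz: torch.Tensor, f: float) -> torch.Tensor:
    """Map camera-space 3D points to fisheye image-plane offsets."""
    x, y, z = xyz.unbind(-1)
    theta = torch.atan2(torch.sqrt(x**2 + y**2), z)  # angle from optical axis
    phi = torch.atan2(y, x)                          # azimuth around the axis
    r = f * theta                                    # equidistant fisheye model
    return torch.stack((r * torch.cos(phi), r * torch.sin(phi)), dim=-1)

pts = torch.randn(1000, 3, requires_grad=True)
uv = fisheye_project(pts, f=300.0)
uv.sum().backward()  # gradients flow through the warp, so it can sit inside an optimizer
```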

18 pages, 6875 KiB  
Article
A Mathematical Model for Wind Velocity Field Reconstruction and Visualization Taking into Account the Topography Influence
by Guzel Khayretdinova and Christian Gout
J. Imaging 2024, 10(11), 285; https://doi.org/10.3390/jimaging10110285 - 7 Nov 2024
Abstract
In this paper, we propose a global modelling approach for vector field approximation from a given finite set of vectors (corresponding to a wind velocity field or marine currents). The modelling minimizes, over a Hilbert space, an energy functional that includes a fidelity criterion to the data and a smoothing term. We discretize the continuous problem using a finite element method. We then take into account the topographic effects on the wind velocity field, and we also propose visualization using a free library, which constitutes an added value compared to other vector field approximation models.
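The general recipe, a fidelity term plus a smoothing term minimized over a discretization, can be sketched in one dimension as a single sparse linear solve. The 1D setting, second-difference smoother, regularization weight, and synthetic samples below are illustrative simplifications of the paper's finite-element formulation.

```python
# Sketch: fidelity + smoothness energy minimization, 1D analogue (illustrative).
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 200
x = np.linspace(0.0, 1.0, n)
mask = np.zeros(n, dtype=bool)
mask[::20] = True                     # sparse "measurement" sites
data = np.sin(2 * np.pi * x)          # synthetic velocity samples

# Minimize  sum_sites (v - data)^2 + lam * ||D2 v||^2  over the whole grid,
# where D2 is the second-difference operator (the smoothing term).
D2 = sp.diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(n, n), format='csr')
F = sp.diags(mask.astype(float))      # fidelity applies only where data exist
lam = 1e-4
A = (F + lam * D2.T @ D2).tocsc()     # normal equations of the energy
v = spla.spsolve(A, F @ data)         # smooth field matching the samples
```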

12 pages, 2392 KiB  
Communication
Multi-Head Attention Refiner for Multi-View 3D Reconstruction
by Kyunghee Lee, Ihjoon Cho, Boseung Yang and Unsang Park
J. Imaging 2024, 10(11), 268; https://doi.org/10.3390/jimaging10110268 - 24 Oct 2024
Abstract
Traditional 3D reconstruction models have consistently faced the challenge of balancing high recall of object edges with maintaining high precision. In this paper, we introduce a post-processing method, the Multi-Head Attention Refiner (MA-R), designed to address this issue by integrating a multi-head attention mechanism into a U-Net-style refiner module. Our method demonstrates improved capability in capturing intricate image details, leading to significant enhancements in boundary predictions and recall rates. In our experiments, the proposed approach notably improves the reconstruction performance of Pix2Vox++ when multiple images are used as the input. Specifically, with 20-view images, our method achieves an IoU score of 0.730, a 1.1% improvement over the 0.719 of Pix2Vox++, and a 2.1% improvement in F-Score, achieving 0.483 compared to the 0.462 of Pix2Vox++. These results underscore the robustness of our approach in enhancing both precision and recall in 3D reconstruction tasks involving multiple views.
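To give a concrete sense of the mechanism, the PyTorch sketch below applies multi-head self-attention to the bottleneck features of a U-Net-style refiner. The module name, channel count, and placement are assumptions for illustration, not the MA-R implementation.

```python
# Sketch: multi-head self-attention over U-Net bottleneck features (illustrative).
import torch
import torch.nn as nn

class AttentionBottleneck(nn.Module):
    def __init__(self, channels: int = 256, heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(channels, heads, batch_first=True)
        self.norm = nn.LayerNorm(channels)

    def forward(self, feat: torch.Tensor) -> torch.Tensor:
        # feat: (B, C, D, H, W) volumetric bottleneck features
        b, c, *spatial = feat.shape
        tokens = feat.flatten(2).transpose(1, 2)     # (B, N, C) token sequence
        out, _ = self.attn(tokens, tokens, tokens)   # self-attention over voxels
        tokens = self.norm(tokens + out)             # residual connection + norm
        return tokens.transpose(1, 2).reshape(b, c, *spatial)

x = torch.randn(2, 256, 4, 4, 4)
print(AttentionBottleneck()(x).shape)  # torch.Size([2, 256, 4, 4, 4])
```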

20 pages, 4626 KiB  
Article
Three-Dimensional Reconstruction of Indoor Scenes Based on Implicit Neural Representation
by Zhaoji Lin, Yutao Huang and Li Yao
J. Imaging 2024, 10(9), 231; https://doi.org/10.3390/jimaging10090231 - 16 Sep 2024
Abstract
Reconstructing 3D indoor scenes from 2D images has always been an important task in computer vision and graphics applications. For indoor scenes, traditional 3D reconstruction methods suffer from missing surface details, poor reconstruction of large plane textures and unevenly illuminated areas, and many wrongly reconstructed floating-debris noises in the reconstructed models. This paper proposes a 3D reconstruction method for indoor scenes that combines neural radiance field (NeRF) and signed distance function (SDF) implicit representations. The volume density of the NeRF is used to provide geometric information for the SDF field, and the learning of geometric shapes and surfaces is strengthened by adding an adaptive normal prior to the optimization process. The method not only preserves the high-quality geometric information of the NeRF, but also uses the SDF to generate an explicit mesh with a smooth surface, significantly improving the reconstruction quality of large plane textures and unevenly illuminated areas in indoor scenes. At the same time, a new regularization term is designed to constrain the weight distribution, making it an ideal unimodal compact distribution, thereby alleviating the problem of uneven density distribution and removing floating debris from the final model. Experiments show that the 3D reconstruction results of this method on the ScanNet, Hypersim, and Replica datasets outperform state-of-the-art methods.
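One standard way to couple an SDF with NeRF-style volume rendering is to map signed distance to volume density. The sketch below uses a Laplace-CDF mapping in the style of VolSDF as an illustrative example; it is not the exact coupling scheme of this paper.

```python
# Sketch: SDF-to-density mapping for volume rendering (VolSDF-style, illustrative).
import torch

def sdf_to_density(sdf: torch.Tensor, beta: float = 0.1) -> torch.Tensor:
    """Laplace-CDF mapping: density ~ 1/beta inside the surface, ~0 outside."""
    alpha = 1.0 / beta
    return alpha * torch.where(
        sdf >= 0,
        0.5 * torch.exp(-sdf / beta),        # outside: decays with distance
        1.0 - 0.5 * torch.exp(sdf / beta),   # inside: saturates at alpha
    )

s = torch.linspace(-0.5, 0.5, 5)
print(sdf_to_density(s))  # high density for negative (inside) SDF values
```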

18 pages, 10168 KiB  
Article
Single-Image-Based 3D Reconstruction of Endoscopic Images
by Bilal Ahmad, Pål Anders Floor, Ivar Farup and Casper Find Andersen
J. Imaging 2024, 10(4), 82; https://doi.org/10.3390/jimaging10040082 - 28 Mar 2024
Cited by 2
Abstract
A wireless capsule endoscope (WCE) is a medical device designed for the examination of the human gastrointestinal (GI) tract. Three-dimensional models based on WCE images can assist in diagnostics by effectively detecting pathology. These 3D models provide gastroenterologists with improved visualization, particularly in areas of specific interest. However, the constraints of WCE, such as its lack of controllability and its dependence on expensive, often unavailable equipment, pose significant challenges to conducting comprehensive experiments aimed at evaluating the quality of 3D reconstruction from WCE images. In this paper, we employ a single-image-based 3D reconstruction method on an artificial colon captured with an endoscope that behaves like a WCE. The shape-from-shading (SFS) algorithm can reconstruct a 3D shape from a single image and has therefore been employed to reconstruct the 3D shapes of the colon images. The camera of the endoscope has also been subjected to comprehensive geometric and radiometric calibration. Experiments are conducted on well-defined primitive objects to assess the method's robustness and accuracy. This evaluation involves comparing the reconstructed 3D shapes of the primitives with ground-truth data, quantified through measurements of root-mean-square error and maximum error. Afterward, the same methodology is applied to recover the geometry of the colon. The results demonstrate that our approach is capable of reconstructing the geometry of the colon captured with a camera with an unknown imaging pipeline and significant noise in the images. The same procedure is applied to WCE images, and preliminary results illustrate the applicability of our method for reconstructing 3D models from WCE images.
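For intuition, shape from shading inverts a shading model such as the Lambertian one sketched below: given a candidate depth map and a light direction, it predicts the image the surface would produce, and SFS searches for the depth whose prediction best matches the observation. The depth map and light direction here are synthetic examples, not data from the paper.

```python
# Sketch: the Lambertian forward model that shape from shading inverts (illustrative).
import numpy as np

def lambertian_shading(z: np.ndarray, light: np.ndarray) -> np.ndarray:
    """Render the shading image n . l for a depth map z and unit light l."""
    zy, zx = np.gradient(z)                      # depth gradients (dz/dy, dz/dx)
    n = np.dstack((-zx, -zy, np.ones_like(z)))   # unnormalized surface normals
    n /= np.linalg.norm(n, axis=2, keepdims=True)
    return np.clip(n @ light, 0.0, None)         # clamp back-facing points to zero

# Synthetic cone-like depth map and an oblique light direction.
z = np.fromfunction(lambda y, x: np.hypot(x - 64, y - 64) / 64.0, (128, 128))
l = np.array([0.3, 0.3, 0.9]); l /= np.linalg.norm(l)
img = lambertian_shading(z, l)
```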

21 pages, 10758 KiB  
Article
Neural Radiance Field-Inspired Depth Map Refinement for Accurate Multi-View Stereo
by Shintaro Ito, Kanta Miura, Koichi Ito and Takafumi Aoki
J. Imaging 2024, 10(3), 68; https://doi.org/10.3390/jimaging10030068 - 8 Mar 2024
Abstract
In this paper, we propose a method to refine the depth maps obtained by Multi-View Stereo (MVS) through iterative optimization of the Neural Radiance Field (NeRF). MVS accurately estimates the depths on object surfaces, and NeRF accurately estimates the depths at object boundaries. The key ideas of the proposed method are to combine MVS and NeRF to utilize the advantages of both in depth map estimation and to use NeRF for depth map refinement. We also introduce a Huber loss into the NeRF optimization to improve the accuracy of the depth map refinement, where the Huber loss reduces the estimation error in the radiance fields by placing constraints on errors larger than a threshold. Through a set of experiments using the Redwood-3dscan dataset and the DTU dataset, which are public datasets consisting of multi-view images, we demonstrate the effectiveness of the proposed method compared to conventional methods: COLMAP, NeRF, and DS-NeRF.
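The Huber loss itself is simple to state: quadratic below a threshold, linear above it, so large errors contribute bounded gradients. A minimal sketch follows; the threshold and residuals are illustrative, not the paper's settings.

```python
# Sketch: Huber loss on per-ray color residuals (illustrative threshold and data).
import torch

def huber(residual: torch.Tensor, delta: float = 0.1) -> torch.Tensor:
    abs_r = residual.abs()
    quad = 0.5 * residual**2              # quadratic for small errors
    lin = delta * (abs_r - 0.5 * delta)   # linear beyond the threshold
    return torch.where(abs_r <= delta, quad, lin).mean()

rendered, target = torch.rand(1024, 3), torch.rand(1024, 3)
loss = huber(rendered - target)  # outlier rays are down-weighted
```

PyTorch also ships an equivalent as torch.nn.functional.huber_loss.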

15 pages, 2033 KiB  
Article
Fast Data Generation for Training Deep-Learning 3D Reconstruction Approaches for Camera Arrays
by Théo Barrios, Stéphanie Prévost and Céline Loscos
J. Imaging 2024, 10(1), 7; https://doi.org/10.3390/jimaging10010007 - 27 Dec 2023
Abstract
In the last decade, many neural network algorithms have been proposed to solve depth reconstruction. Our focus is on reconstruction from images captured by multi-camera arrays: grids of vertically and horizontally aligned, uniformly spaced cameras. Training these networks using supervised learning requires data with ground truth, and existing datasets simulate specific configurations, for example, a fixed-size camera array or a fixed spacing between cameras. When the distance between cameras is small, the array is said to have a short baseline; light-field cameras, with a baseline of less than a centimeter, fall into this category. Conversely, an array with large spacing between cameras is said to have a wide baseline. In this paper, we present a purely virtual data generator to create large training datasets: the generator can adapt to any camera array configuration, with parameters such as the size (number of cameras) and the distance between two cameras. The generator creates virtual scenes by randomly selecting objects and textures and following user-defined parameters like the disparity range or image parameters (resolution, color space). Generated data are used only for the learning phase. They are unrealistic but present concrete challenges for disparity reconstruction, such as thin elements and the random assignment of textures to objects to avoid color bias. Our experiments focus on the wide-baseline configuration, which requires more datasets. We validate the generator by testing the generated datasets with known deep-learning approaches as well as depth reconstruction algorithms; these validation experiments proved successful.
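As a small illustration of the kind of configuration parameters involved, the sketch below lays out a uniformly spaced camera grid from an array size and a baseline. The function and its signature are hypothetical, written for illustration; they are not the generator's API.

```python
# Sketch: laying out a uniformly spaced camera-array grid (hypothetical helper).
import numpy as np

def camera_array_positions(rows: int, cols: int, baseline: float) -> np.ndarray:
    """Centers of a rows x cols camera grid spaced `baseline` apart (meters)."""
    ys, xs = np.meshgrid(np.arange(rows), np.arange(cols), indexing='ij')
    grid = np.stack((xs, ys, np.zeros_like(xs)), axis=-1).astype(float)
    grid[..., :2] *= baseline                          # apply camera spacing
    grid[..., :2] -= grid[..., :2].mean(axis=(0, 1))   # center the rig on the origin
    return grid.reshape(-1, 3)

wide = camera_array_positions(4, 4, baseline=0.5)       # wide-baseline rig
narrow = camera_array_positions(8, 8, baseline=0.005)   # light-field-like rig
```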
