Article

Reconstructing 3D De-Blurred Structures from Limited Angles of View through Turbid Media Using Deep Learning

by Ngoc An Dang Nguyen 1,2,†, Hoang Nhut Huynh 1,2,†, Trung Nghia Tran 1,2,* and Koichi Shimizu 3,4

1 Laboratory of Laser Technology, Faculty of Applied Science, Ho Chi Minh City University of Technology (HCMUT), 268 Ly Thuong Kiet Street, District 10, Ho Chi Minh City 72409, Vietnam
2 Vietnam National University, Linh Trung Ward, Thu Duc, Ho Chi Minh City 71308, Vietnam
3 School of Optoelectronic Engineering, Xidian University, Xi’an 710071, China
4 Information, Production and Systems Research Center, Waseda University, Kitakyushu 808-0135, Japan
* Author to whom correspondence should be addressed.
† These authors contributed equally to this work.
Appl. Sci. 2024, 14(5), 1689; https://doi.org/10.3390/app14051689
Submission received: 17 January 2024 / Revised: 14 February 2024 / Accepted: 18 February 2024 / Published: 20 February 2024

Abstract:
Recent studies in transillumination imaging toward an optical computed tomography device for small animals and human body parts have used deep learning networks to suppress the scattering effect, estimate the depth of light-absorbing structures, and reconstruct three-dimensional images of de-blurred structures. However, these methods still have limitations: they require prior knowledge of the structure, handle only simple structures, lose effectiveness at depths beyond about 15 mm, and need separate deep learning networks for de-blurring and depth estimation. Furthermore, current techniques cannot handle multiple structures distributed at different depths next to each other in the same image. To overcome these limitations, this study proposes a pixel-by-pixel scanning technique combined with deep learning networks (Attention Res-UNet for scattering suppression and DenseNet-169 for depth estimation) to estimate, at each pixel, the existence of a structure and its relative depth. The efficacy of the proposed method was evaluated in experiments with a complex model in a tissue-equivalent phantom and with a mouse, achieving a reconstruction error of 2.18% relative to the dimensions of the ground truth when using the fully convolutional network. Furthermore, the depth matrix obtained from the convolutional neural network (DenseNet-169) could be used to reconstruct the absorbing structures with a binary thresholding method, which produced a reconstruction error of 6.82%. Consequently, a single convolutional neural network (DenseNet-169) suffices for both depth estimation and explicit image reconstruction, reducing time and computational resources. With depth information at each pixel, a 3D image of the de-blurred structures can be reconstructed even from a single blurred image. These results confirm the feasibility and robustness of the proposed pixel-by-pixel scanning technique for restoring the internal structure of the body, including intricate networks such as blood vessels and abnormal tissues.

1. Introduction

The use of light for biomedical imaging dates back to pioneering studies by researchers such as T.B. Curling (1843), R. Bright (1831), and M. Cutler (1929) [1,2,3,4]. Subsequent advances in science and technology have produced various light sources (lasers and LEDs) and image acquisition sensors, driving the widespread adoption of light in medicine and the life sciences. Existing imaging modalities are limited by ionizing radiation, contrast agents, restrictions on metal implants, sophisticated systems, and high costs. Alternative optical imaging techniques with simple, radiation-free, and affordable designs are therefore crucial. Transillumination imaging requires only a simple system consisting of light sources, a camera as a detector, and a computer for control and image processing. The use of transillumination (diaphanography) to monitor the pathology of human organs has attracted renewed interest in recent years, driven by advances in light-source and sensor technology and by new theoretical, experimental, and clinical results [3,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24]. However, this method faces great challenges: the strong scattering of light in biological tissues and time-consuming image processing [24].
Figure 1 shows the transillumination mode and the epi-transillumination mode of transillumination imaging. In transillumination mode, the light source is placed on the opposite side of the object from the recording device (typically a camera). The epi-illumination mode can also be considered a mode of transillumination imaging, with the light source and the recording device positioned on the same side of the object under appropriate lighting conditions. When the lighting conditions are adjusted so that the light diffuses well in the turbid medium, the distribution of the absorbing structure can be acquired at the surface of the body.
A transillumination image is the blurred shadow of the absorbing structures in a turbid medium, which can be treated as a collection of point absorbers. As the depth of the absorbing structure increases, the image exhibits progressively more pronounced blurring. In addition, acquiring light signals through thick body parts is challenging, a difficulty also reflected in the breast-light and blood-vessel-finder devices currently on the market [22,23]. Consequently, de-blurring the scattering in observed images has remained difficult until recently. To realize transillumination imaging, many studies have sought to reduce scattering [24,25,26,27,28,29]. K. Shimizu et al. derived a depth-dependent point spread function (PSF) that characterizes the scattering of a point light source in biological tissue [24]. The depth-dependent PSF is given by Equation (1) [24]:
PSF(ρ) = C (μs′ + μa + κd + 1/√(ρ² + d²)) · (d/√(ρ² + d²)) · exp[−κd √(ρ² + d²)] / (ρ² + d²)  (1)
where κd² = 3μa(μs′ + μa); C is a constant with respect to ρ and d; and μs′, μa, and d denote the reduced scattering coefficient, the absorption coefficient, and the depth of the structure, respectively.
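To make Equation (1) concrete, the following NumPy sketch evaluates the PSF on a square pixel grid. The functional form follows the reconstruction above; the default coefficients (μs′ = 1.00/mm, μa = 0.01/mm) and the 10 mm/120 pixels scale are taken from the phantom experiments reported later, while the kernel normalization (absorbing the constant C) is an illustrative choice, not prescribed by the text.

```python
import numpy as np

def depth_dependent_psf(size, d, mu_s=1.0, mu_a=0.01, pixel_mm=10.0 / 120.0):
    """Evaluate the depth-dependent PSF of Eq. (1) on a size x size pixel grid.

    size     : kernel width in pixels (odd number keeps the peak centered)
    d        : depth of the point source below the surface [mm]
    mu_s     : reduced scattering coefficient mu_s' [1/mm]
    mu_a     : absorption coefficient [1/mm]
    pixel_mm : physical size of one pixel [mm]
    """
    kappa = np.sqrt(3.0 * mu_a * (mu_s + mu_a))   # kappa_d^2 = 3 mu_a (mu_s' + mu_a)
    half = size // 2
    axis = (np.arange(size) - half) * pixel_mm    # lateral coordinates [mm]
    xx, yy = np.meshgrid(axis, axis)
    r = np.sqrt(xx**2 + yy**2 + d**2)             # distance from the point source [mm]
    psf = (mu_s + mu_a + kappa + 1.0 / r) * (d / r) * np.exp(-kappa * r) / r**2
    return psf / psf.sum()                        # absorb the constant C via normalization
```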
At the observed surface, the detected light can be decomposed into two components: direct light, which undergoes scattering and absorption within the surrounding medium, and diffused light. Back-scatter (back-reflection) imaging places the light source and the recording device on the same side of the object; by adjusting the illumination conditions appropriately, the distribution of the absorbing structure near the body surface can be obtained.
Tran et al. successfully applied this PSF to imaging absorbing structures in biological tissue, assuming a uniform distribution of light in the plane containing the absorbing structures [25]. On this basis, they demonstrated optical computed tomography with transillumination imaging and reconstructed internal structures in small animals [25]. However, the scattering suppression in that study depends on deconvolution with the Lucy–Richardson algorithm, so the restored image depends on the chosen number of iterations. The use of the depth-dependent PSF in conjunction with deep learning to suppress scattering is one of the most remarkable subsequent advances. Van et al. developed a scattering suppression technique and an estimation of the depth of a structure in a turbid medium using deep learning [27]. They succeeded in suppressing scattering and estimating depth using convolutional neural network (CNN) and fully convolutional network (FCN) models. With this technique, de-blurred images, depth information, and the three-dimensional (3D) shape of a simple or single absorbing structure were estimated [27]. The blood flow in the reconstructed 3D vessels could then be estimated using a depth-dependent contrast model [29].
Shimizu et al. recently proposed techniques to reconstruct a 3D structure in a turbid medium from a single blurred 2D image obtained with near-infrared transillumination imaging [30]. One technique uses 1D information, namely the intensity profile across the light-absorbing image. Profiles under different conditions are calculated by convolution with the depth-dependent PSF and stored in databanks as lookup tables connecting the contrast and spread of a profile to the absorber depth; a one-to-one correspondence was found from the contrast and spread to the absorber depth and thickness. Another technique uses 2D information from the transillumination image of a volumetric absorber. A blurred 2D image is deconvolved with the depth-dependent PSF, producing many images focused on different parts; the depth of each image part can then be estimated by searching the deconvolved images for the part with the best focus. Both techniques are time-consuming because of the convolution and deconvolution processes, and they can only be applied to simple structures.
The results of previous studies still show limitations at depths around 15.0 mm, concerning the efficiency of scattering suppression, the shape of the reconstructed structures, the estimated depth, and the applicability to complex structures [25,26,28,29,30]. These problems stem from the complexity of the absorbing structure, the heterogeneity of biological tissue, the training data, and the neural network model itself. Dang et al. proposed the Attention Res-UNet model for de-blurring, adding an attention gate and residual blocks to the common U-Net structure; a correlation of more than 89% was achieved between the de-blurred image and the original structure image [31]. They also proposed depth estimation using the DenseNet-169 model with high accuracy at depths beyond 20.0 mm [31].
The complexity of the light-absorbing structure also remains unresolved. The current solution is to subdivide the image into several parts, each containing only one simple structural part whose relative spatial location lies roughly in a single plane [27,29,30]. Current techniques also cannot handle multiple structures distributed at different depths next to each other in the same image.
This paper presents a new method, the pixel-by-pixel scan matrix method, that uses deep learning to de-blur and estimate depth information of absorbing structures in a turbid medium. With de-blurred two-dimensional (2D) images at different angles serving as projection images, the 3D de-blurred absorbing structures and cross-sectional images can be reconstructed using the filtered back-projection method. The method also restores a “clear” image of the light-absorbing structure, so that only one convolutional neural network is needed for depth estimation and explicit image reconstruction. The 2D de-blurred image and its per-pixel depth information then allow a 3D view of the absorbing structures to be reconstructed from a limited range of acquisition angles, even from a single 2D image.

2. Materials and Methods

2.1. Data Preparation

Deep learning requires many training pairs to ensure optimal accuracy and performance, and collecting such data experimentally is a challenge for optical imaging techniques; acquiring a sufficient quantity of training pairs presents practical difficulties. However, this problem has been solved and validated experimentally using the depth-dependent PSF, which can produce blurred scattering image data from existing clear images of absorbing structures [25,26,27,28,29,30].
Figure 2 shows a schematic diagram of the light intensity distribution observed on the surface of the scattering medium. In transillumination imaging of light-absorbing structures, homogeneous light irradiates the scattering medium from outside. The scattered light passes the absorbing structure and projects a shadow onto the surface of the scattering object. As shown in Figure 2, the scattering medium is treated as an infinitely wide slab; the orange lines show the light distributions of a light source with the same size as the absorbing object. In reality, because the observed image is finite, part of the light distribution is cut off. Thus, the depth-dependent light-source point spread function (PSF), originally derived for a light source, cannot be applied directly to transillumination images. To overcome this problem, the light distribution in the original clear image is inverted: the absorption distribution in the clear image becomes a light distribution in the inverted image, to which the light-source PSF can be applied correctly. In this study, the image noise captured by the camera was neglected. The depth-dependent PSF was used to convolve clear images of the absorbing structures under the given illumination conditions to generate the desired blurred images. The simulated transillumination image, obtained by convolving the clear structure image with the light-source PSF, can therefore be written as Equation (2):
y = 1 − [h ∗ (1 − x)]  (2)
where x, y, and h represent, respectively, the original structure image, the simulated image, and the depth-dependent light-source PSF, and ∗ denotes the convolution operation. The pixel values of x and y are normalized to the range 0 to 1. This makes building a transillumination image dataset for deep learning feasible. The effectiveness of this approach was rigorously evaluated and validated through tissue-equivalent phantom and small-animal experiments [27,31]. The dataset in this study was created using the method described in previous studies [27,31].
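A training pair can then be generated from a clear structure image as in the following sketch of Equation (2). It reuses the depth_dependent_psf function from the sketch above; the kernel size is an arbitrary illustrative choice.

```python
import numpy as np
from scipy.signal import fftconvolve

def simulate_transillumination(x, d, kernel_size=127, **psf_kwargs):
    """Eq. (2): y = 1 - [h * (1 - x)], with * denoting 2D convolution.

    x : clear structure image with values in [0, 1]
        (absorbing structure dark, background bright)
    d : depth of the structure [mm]
    """
    h = depth_dependent_psf(kernel_size, d, **psf_kwargs)   # sketch after Eq. (1)
    blurred = fftconvolve(1.0 - x, h, mode="same")          # invert, then blur
    return np.clip(1.0 - blurred, 0.0, 1.0)                 # invert back to [0, 1]

# Usage: build (blurred, clear) training pairs over the 0.1-100.0 mm depth range.
# pairs = [(simulate_transillumination(x, d), x) for d in np.arange(0.1, 100.1, 0.1)]
```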
For the de-blurring model, the dataset consists of 204,000 pairs of images, collected from four subdatasets, each simulating objects at a different magnification (0.1, 0.5, 1.0, and 2.0 times the original size) over a depth range from 0.1 to 100.0 mm with a step size of 0.1 mm. Each subdataset comprises 51,000 pairs of images. Each pair is augmented by rotation from 0° to 360° in steps of 20°, so the dataset used for training totals 3,672,000 image pairs. In this study, the Attention Res-UNet model [31,32] is used; its output is a de-blurred image of the light-absorbing structure.
For the depth estimation model, the dataset comprises pairs of blurred images and the corresponding depth labels. The blurred images are generated by convolving the original images with the PSF. As with the de-blurring model, data augmentation is applied, but only to the data with a magnification of 1.0. In this study, the CNN model (DenseNet-169) [31,33] is used; it outputs the depth corresponding to the part of the light-absorbing structure in the image.

2.2. Pixel-by-Pixel Scan Matrix Method

Reconstructing 3D structures from the combined outputs of the FCN (Attention Res-UNet) and CNN (DenseNet-169) models represents a significant advance in image processing and analysis. This approach leverages deep learning to address the scattering in transillumination images of absorbing structures in a turbid medium. As mentioned above, when processing an image of complex structures in a scattering medium, or of a network structure such as blood vessels, current solutions subdivide the observed image into many separate parts [27,29], each containing a single light-absorbing structure. This limits the processing of complex images, such as blood vessels, in which many structures appear in the same image area. Figure 3 illustrates the operating principle of the pixel-by-pixel scan matrix method. The gray matrix of B × B pixels represents the blurred image under consideration, and the green matrix of S × S pixels serves as the scanning matrix. Zero padding generates an augmented image of (2S + B − 2) × (2S + B − 2) pixels, providing additional data points for analysis. During computation, the scanning matrix moves laterally from left to right and then from the top to the bottom of the zero-padded image. Each window position of the scanning matrix is evaluated with the FCN/CNN model, and the estimated value is logged into a list for each pixel it covers. Upon completion of the scan, the most frequently estimated value is selected as the definitive value of each pixel, as expressed in Equation (3). This method thus provides a comprehensive pixel-wise analysis that improves the clarity and accuracy of biomedical imaging, particularly in applications requiring fine resolution and exact detail.
Mode = {xᵢ | frequency(xᵢ) = max frequency}  (3)
Figure 4 shows examples of the estimated pixel value for the de-blurring mode and the depth estimation mode. The de-blurring process for a 3 × 3 blurred image, depicted in yellow and processed with a 2 × 2 green kernel matrix as the output of the de-blurring model, involves four sliding steps, labeled I, II, III, and IV, as shown in Figure 4A. For the pixel at coordinates (2, 2), the de-blurring sequence yields values of 0, 1, 1, and 1 at the respective steps. This follows from the binary nature of the training mask data, which confines pixel intensities to 0 or 1; consequently, the de-blurring model’s output is also binary. According to Equation (3), which returns the most frequent value in the set, the pixel at (2, 2) is assigned the value 1. The procedure is then replicated systematically for the remaining pixels of the 3 × 3 matrix. The fully convolutional network (FCN), specifically an Attention Res-UNet, performs the de-blurring and restoration of the image. With a matrix size of 256 × 256 pixels, the FCN model analyzes the blurred image pixel by pixel with a one-pixel step size. This approach minimizes the effects of scattering, producing a clear and sharp 2D image that enhances the visibility of the absorbing structures. However, the sliding process may leave pixel deficits at the image edges. To ensure uniform processing of all image regions by the FCN model, the zero-padding technique pads the image edges with zeros, extending the image dimensions so that de-blurring is comprehensive without compromising accuracy.
In the depth estimation process, using the same 3 × 3 blurred image processed with the identical 2 × 2 green kernel matrix as the output of a depth estimation model, the methodology likewise includes four sliding steps (I, II, III, and IV), as illustrated in Figure 4B. For the pixel at coordinates (2, 2), the computed depth estimates at these steps are 5.0, 5.1, 5.2, and 5.0 mm, respectively. These values derive from the training data of the depth estimation model, which includes blurred images with depth labels from 0.1 to 100.0 mm in 0.1 mm increments, so the model predicts a set of discrete depth values. Following Equation (3), the depth of pixel (2, 2) is determined to be 5.0 mm. The same step-by-step process is applied to the other pixels of the 3 × 3 matrix. The CNN model is responsible for depth estimation, a critical part of the 3D reconstruction. It operates on an estimation matrix of 224 × 224 pixels and, like the FCN model, slides through the image with a one-pixel step size, analyzing the depth of each pixel to estimate the spatial distribution and characteristics of the absorbing structures. A sketch of this scan-and-vote procedure follows.
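The sketch below illustrates the procedure for the FCN case. Here, model_predict is a hypothetical stand-in for the trained network (an Attention Res-UNet returning an S × S patch estimate), and the naive per-pixel voting loops are written for clarity; a practical implementation would batch the patches through the network.

```python
import numpy as np
from collections import defaultdict

def pixel_by_pixel_scan(blurred, model_predict, S=256):
    """Pixel-by-pixel scan matrix method with mode voting (Eq. (3)).

    blurred       : (B, B) blurred transillumination image
    model_predict : hypothetical stand-in for the trained network; maps an
                    (S, S) patch to an (S, S) estimate (FCN de-blurring case)
    S             : scanning matrix size (256 for the FCN, 224 for the CNN)
    """
    B = blurred.shape[0]
    pad = S - 1
    padded = np.pad(blurred, pad)                  # (2S + B - 2) pixels per side
    votes = defaultdict(list)                      # (row, col) -> estimated values
    for top in range(B + pad):                     # one-pixel sliding steps
        for left in range(B + pad):
            out = model_predict(padded[top:top + S, left:left + S])
            for i in range(S):
                for j in range(S):
                    r, c = top + i - pad, left + j - pad   # original coordinates
                    if 0 <= r < B and 0 <= c < B:
                        votes[(r, c)].append(round(float(out[i, j]), 1))
    result = np.zeros((B, B))
    for (r, c), vals in votes.items():             # Eq. (3): keep the mode
        values, counts = np.unique(vals, return_counts=True)
        result[r, c] = values[np.argmax(counts)]
    return result
```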

2.3. 3D De-Blurred Structures from Limited Angle of View

Figure 5 shows the methodology underpinning the 3D reconstruction of de-blurred structures derived from de-blurred images. As delineated in Figure 5, if images spanning a complete 360° rotation can be acquired, a corresponding set of 360 de-blurred images can be produced. These de-blurred images act as projection sources. Building on this foundation, the well-established filtered back-projection (FBP) technique generates cross-sectional views and an encompassing 3D representation of the absorbing structures. Notably, the technique retains its efficacy even when the available viewing angles are constrained.
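As a generic illustration of this step (the authors’ own FBP implementation is not specified), the following sketch reconstructs the volume slice by slice from a stack of de-blurred projections using scikit-image’s iradon.

```python
import numpy as np
from skimage.transform import iradon  # filtered back-projection

def reconstruct_volume(projections):
    """FBP slice-by-slice from de-blurred projection images.

    projections : array of shape (n_views, H, W), one de-blurred
                  2D image per rotation angle (e.g., n_views = 360)
    """
    n_views, height, _ = projections.shape
    theta = np.linspace(0.0, 360.0, n_views, endpoint=False)  # angles in degrees
    slices = []
    for row in range(height):
        sinogram = projections[:, row, :].T   # iradon expects (detector, angle)
        slices.append(iradon(sinogram, theta=theta, filter_name="ramp"))
    return np.stack(slices)                   # (H, W', W') reconstructed volume
```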
However, with only one blurred image or a few viewing angles, 3D reconstruction with the filtered back-projection method is not possible. In this case, given the depth information matrix, a 3D de-blurred view of the absorbing structures can still be produced for one or multiple viewing angles, as shown in Figure 6.
Scattering de-blurring: The initial phase transforms the blurred image into a sharp and clear 2D representation. Using the FCN (Attention Res-UNet) model, a systematic de-blurring process is applied to the image [31,32]: the model processes the blurred image through a 256 × 256 matrix using the pixel-by-pixel scan method, and zero padding ensures comprehensive processing of the entire image, including its edges. Furthermore, the CNN model, combined with the pixel-by-pixel scan method, can perform concurrent depth estimation and scattering de-blurring, as indicated by the red arrow in Figure 5 and Figure 6. This route uses the depth matrix derived from the depth estimation model: a threshold-based approach sets the pixel intensity to 1 if the depth matrix value is below a predetermined threshold and to 0 if it exceeds the threshold, thereby reconstructing a 2D image from the depth matrix. Using the 2D image and its associated depth map, a 3D image can then be reconstructed from the original 2D representation.
Depth estimation: Once the 2D image is clarified, the spatial depth of each pixel must be determined. The CNN (DenseNet-169) model designed for this task employs a 224 × 224 pixel estimation matrix to analyze the de-blurred image and ascertain the depth at each pixel [31,34]. This process yields an extensive depth map that captures the spatial positions of the absorbing structures.
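The two operations above reduce to a short sketch: binary thresholding of the depth matrix to recover a 2D structure image, and lifting that image to 3D points with the per-pixel depths. The pixel pitch is taken from the experiments; the function names are illustrative.

```python
import numpy as np

PIXEL_MM = 10.0 / 120.0  # image scale used in the experiments

def binarize_depth(depth_matrix, threshold_mm):
    """Threshold-based 2D reconstruction from the CNN depth matrix:
    1 where the estimated depth is below the threshold, 0 elsewhere."""
    return (depth_matrix < threshold_mm).astype(np.float32)

def lift_to_3d(structure_2d, depth_matrix, pixel_mm=PIXEL_MM):
    """Lift a de-blurred 2D structure image to a 3D point cloud
    using the per-pixel depth estimates."""
    rows, cols = np.nonzero(structure_2d)
    return np.column_stack([
        cols * pixel_mm,              # lateral x [mm]
        rows * pixel_mm,              # lateral y [mm]
        depth_matrix[rows, cols],     # estimated depth z [mm]
    ])                                # (N, 3) points for 3D rendering
```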

3. Experiment with the Complex Structures in the Tissue-Equivalent Phantom

The feasibility and effectiveness of the proposed method were examined in an experiment with complex structures in a tissue-equivalent phantom. Figure 7 presents a schematic of the experimental system for obtaining transillumination images. The phantom was irradiated with 800 nm near-infrared (NIR) laser light passed through a beam expander and a diffuser for homogeneous illumination. Images were captured over all 360 degrees using a CMOS camera placed on the opposite side of the phantom. The image observed through the scattering medium is markedly blurred compared with the image observed through a clear medium.
Figure 8 presents the normalized intensity profiles at the 150th pixel row for four images (Figure 8a–d), each scaled to 10 mm/120 pixels. In Figure 8a, the observed image depicts an absorbing structure in a clear medium at the 0-degree orientation, providing a baseline for comparison with a width d = 8.17 mm. Figure 8b shows the same structure in a scattering medium, highlighting the impact of scattering on the apparent width and contrast of the object; the contrast is 0.7485. The effectiveness of scattering suppression via PSF deconvolution, as proposed in previous research, is demonstrated in Figure 8c, yielding a contrast of 0.9375 and a reconstructed object width d = 9.16 mm, an error of 12.20%. Using the proposed technique, Figure 8d achieves a perfect contrast of 1.00; this is attributable to the de-blurring model and the pixel-by-pixel scanning method producing binary values (0 and 1). The object width is d = 8.83 mm, an error of 8.08%.
Figure 9 presents the normalized intensity profiles at the 350th pixel row for four images (Figure 9a–d), each scaled to 10 mm/120 pixels. In Figure 9a, the observed image depicts an absorbing structure in a clear medium at the 0-degree orientation, providing a baseline for comparison. Figure 9b shows the same structure in a scattering medium, highlighting the impact of scattering on the object’s apparent width and contrast; the contrast is 0.6738. Scattering suppression via PSF deconvolution, as proposed in previous research, yields a contrast of 0.8667 (Figure 9c). Employing the proposed technique, Figure 9d achieves a contrast of 1.00.
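The contrast and width values quoted above can be computed from a normalized profile as sketched below. The paper does not restate its contrast or width definitions, so a Michelson-type contrast and a half-depth width are assumed here purely for illustration.

```python
import numpy as np

PIXEL_MM = 10.0 / 120.0

def profile_contrast(profile):
    """Michelson-type contrast of a normalized intensity profile;
    a binary de-blurred profile (values 0 and 1) gives exactly 1.00."""
    i_max, i_min = profile.max(), profile.min()
    return (i_max - i_min) / (i_max + i_min)

def object_width_mm(profile, pixel_mm=PIXEL_MM):
    """Width of the absorbing dip, measured at half of its depth."""
    half_level = profile.max() - 0.5 * (profile.max() - profile.min())
    return np.count_nonzero(profile < half_level) * pixel_mm
```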
In this study, the 3D image reconstruction process takes 88.0 s per image on average. This performance is achieved on a computational setup comprising an NVIDIA Tesla V100 GPU with 12.0 GB of GPU memory and a 16-core Intel Xeon processor. These hardware specifications, while not cutting edge, were chosen to demonstrate the behavior of the proposed method on commonly available equipment, making it accessible for broader medical research and diagnostic applications. This ensures that the method is practical not only in technical performance but also in its adaptability to a variety of real-world settings.
The contrast improvement ratio (CIR) [35] serves as a metric to evaluate the effectiveness of the image processing techniques across 360 rotation angles of the object, as depicted in Figure 10. The orange line represents the CIR of the deconvolution method employing the PSF, the purple line the CIR of the proposed method, and the red line the percentage improvement between the two. A notable observation is the similar trend of both methods, particularly their lowest CIR values at the 90- and 270-degree rotation angles. This similarity can be attributed to several factors. First, these angles typically correspond to the longest path lengths through the object, leading to increased scattering and reduced contrast. Second, the alignment of certain features within the object’s structure at these angles may amplify the scattering effects, further diminishing contrast. The consistency of this trend across both methods suggests that these dips in CIR arise from the inherent geometry and optical properties of the object rather than from limitations of the image processing techniques.
The percentage improvement metric allows the following observations. Initial large difference at the first angle: a substantial difference in CIR at the first angle indicates that one method significantly outperforms the other under the specific imaging conditions set by the initial orientation of the object relative to the imaging apparatus. Decrease to a minimum at the 90th angle: the minimum CIR at the 90th angle suggests reduced effectiveness for both methods, likely due to structural or optical characteristics of the object that increase scattering or reduce contrast at this orientation. Increase to a maximum at the 180th angle: the peak in CIR at the 180th angle indicates optimal performance for both methods, likely under more favorable conditions for contrast enhancement. Decrease to a minimum at the 270th angle and increase towards the 360th angle: this pattern highlights the influence of the object’s orientation and imaging conditions on the performance of both methods. The cyclic nature of the pattern implies that certain angles consistently present challenges or advantages for contrast enhancement.
Figure 11 shows cross-sectional images at the mid-height of the upper object, each scaled to 10 mm/120 pixels. In Figure 11a, the image obtained in a clear medium reveals an object width of 11.69 mm. Figure 11b shows the image in a scattering medium, where the scattering effects obscure the object’s dimensions. Figure 11c demonstrates the application of the erasing template technique, yielding a reconstructed object width of 12.86 mm, corresponding to an error of δ = 10.01%. This error suggests that, while the technique is beneficial for enhancing image clarity, it may alter the perceived dimensions of the object. Finally, Figure 11d depicts the result of the proposed technique, with a reconstructed object width of 12.39 mm and a reduced error of δ = 5.98%. This reduced error indicates a higher fidelity in preserving the object’s true dimensions while effectively mitigating scattering effects.
Similarly, Figure 12 shows cross-sectional images at the mid-height of the lower object, each scaled to 10 mm/120 pixels. In Figure 12a, captured in a clear medium, the width of the object measures 10.89 mm. Figure 12b, taken in a scattering medium, illustrates how scattering effects can significantly obscure the dimensions of the object. The erasing template technique, shown in Figure 12c, yields a reconstructed object width of 11.71 mm, with an associated error of δ = 7.53%; while the technique enhances image clarity, it also slightly distorts the object’s perceived size. Notably, the proposed technique, as seen in Figure 12d, achieves a more accurate reconstruction, producing an object width of 11.35 mm and a significantly lower error of δ = 4.22%. This demonstrates the technique’s higher accuracy in maintaining the object’s true dimensions despite the scattering effects.
Figure 13 shows the results of the filtered back-projection method using the 360-degree dataset with two threshold levels, common to all the figures. The internal structure, barely visible in Figure 13b, becomes visible with the proposed technique (Figure 13d).
Figure 14 shows the results of the proposed method for a single view at the 302-degree orientation. The 3D view of the de-blurred internal structures became visible with the proposed method.

4. Experiment with a Mouse

Figure 15 shows the experimental apparatus, with a living female mouse (Slc:ICR, 20 weeks old, 38.0 g) as the subject. Anesthesia was administered by intraperitoneal pentobarbital injection, ensuring immobilization and comfort of the mouse throughout the experiment. The mouse was then securely placed in a cylindrical holder made of transparent acrylic resin to allow unobstructed observation and light penetration. Illumination was provided by an 800 nm laser propagated through a beam expander and a diffuser to establish uniform illumination on one side of the holder, while a CMOS camera positioned on the opposite side captured the transilluminated images. Comprehensive acquisition of transillumination images was enabled by a rotating system that turned the holder to obtain diverse perspectives. This methodology permits the reconstruction of 3D images via the FBP algorithm, contingent on the successful acquisition of the requisite projection images.
Figure 16a shows an ultrasound image of the kidney region of the mouse, in which the horizontal dimension of the left kidney measures 9.20 mm. Figure 16b shows a cross-sectional view reconstructed from the 360-degree transillumination images, with the kidney lying in the horizontal plane; however, in these observed images, internal organs such as the kidney are barely discernible and difficult to distinguish. Figure 16c presents the cross-sectional image reconstructed from the deconvolved images using the erasing template technique described in a previous study [25]. In this reconstruction, the left kidney is distinguishable, with a measured width of 10.06 mm and an error of δ = 9.35%. Using the proposed technique, Figure 16d reveals a de-blurred image that significantly improves the cross-sectional view; the reconstructed left kidney has a width of 9.00 mm, with a reduced error of δ = 2.18%, demonstrating the technique’s efficacy in enhancing image clarity and precision.
The stack of cross-sectional images was arranged vertically to create a 3D image. Figure 17 shows the results at different levels with conventional thresholds applied. In Figure 17a, the internal structure is barely discernible, whereas the previous technique (Figure 17b) and the proposed technique (Figure 17c) provide much greater visibility, enabling the identification of high-absorption organs such as the kidneys and the lower sections of the liver.
Figure 18 shows two stages of the 3D reconstruction imaging. Figure 18a shows the output of the scattering de-blurring process, highlighting the significant reduction in blur: scattering effects have been effectively suppressed, revealing clearer details of the absorbing structures. Figure 18b illustrates the result of the 3D reconstruction, combining scattering de-blurring and depth estimation. The result is a 3D reconstructed image that provides an insightful and comprehensive representation of the internal light-absorbing structures; the added depth estimation contributes the spatial dimension, enhancing the ability to visualize structures in a three-dimensional context.
This experimental investigation confirmed the practicality of achieving 3D imaging of the internal light-absorbing structure of a small animal.
Figure 19 validates the efficacy of the CNN depth estimation model in producing clear images when using a threshold of 0.1 mm. The correlation coefficient between the de-blurred images generated by the FCN and the CNN is 0.9134. For the width measurements, all images maintain a consistent scale of 10 mm/120 pixels. Focusing on the object in the lower right corner at the 350th pixel row, the actual width of the object is 8.50 mm (Figure 19a). With the PSF deconvolution approach (Figure 19c), the width is 9.17 mm (an error of 7.88%). The de-blurring method with the FCN model (Figure 19d) yields a width of 8.25 mm (an error of 2.94%), while image reconstruction from the depth matrix via the CNN model (Figure 19e) results in a width of 7.92 mm (an error of 6.82%). These results indicate the feasibility of both FCN-based de-blurring and image reconstruction from the depth matrix via the CNN model for scattering de-blurring.
Inherent limitations accompany the thresholding method used to reconstruct images from the CNN depth matrix, presenting a nuanced balance of benefits and challenges. On the positive side, using a single CNN model for image reconstruction yields tangible reductions in cost and computational resources while maintaining commendable accuracy and precision. However, a notable drawback arises from imposing a threshold on the reconstruction: pixels whose depth values fall beyond the chosen threshold are forced to the background, so the result depends on the threshold choice, which diminishes the generality of the reconstruction.
Furthermore, when adequate depth information is available and a clear image is achieved after de-blurring, it becomes possible to perform a 3D reconstruction of the light-absorbing structures from a single 2D image.

5. Conclusions

This research addressed the challenging tasks of de-blurring caused by scattering, restoring complex absorbing structures, and estimating the depths of complex structures present in transillumination images of biological tissue. The key contribution of this work is a pixel-by-pixel scanning method that incorporates deep learning models to provide structural information and depth values for a given blurred image of absorbing structures with multiple depth levels. This novel approach associates a depth with each pixel and thereby estimates the depths of the absorbing structures across the whole image. Additionally, when a full viewing angle is available, this study demonstrated the ability to reconstruct complete 3D structures, providing a comprehensive understanding of the structures within the imaged medium.
Integrating the U-Net and CNN models in the reconstruction process has yielded remarkable results. Combining the clear, de-blurred 2D image from the U-Net model with the per-pixel depth estimates from the CNN model yields a comprehensive 3D representation of the absorbing structures within the turbid medium. This multidimensional insight provides valuable information for researchers and experts and deepens our understanding of the complex nature of absorbing structures within turbid media and related domains.
Although this approach leverages the capabilities of deep learning models, the challenges of data size and computational power must be acknowledged: creating large training datasets and running computationally intensive operations require careful consideration and optimization. The expanded capability of scattering suppression and depth estimation for absorbing structures in turbid media using deep learning, combined with the pixel-by-pixel scanning method, represents a significant achievement. This technique can be applied to advance medical imaging and related fields. Using the strengths of the U-Net and CNN models and the novel depth estimation process, researchers gain a powerful tool for reconstructing 3D structures from 2D images, or even from a single 2D image.

Author Contributions

Conceptualization, T.N.T. and K.S.; methodology, T.N.T.; software, H.N.H.; validation, N.A.D.N.; analysis, N.A.D.N.; investigation, T.N.T. and H.N.H.; resources, T.N.T.; data curation, N.A.D.N. and H.N.H.; writing—original draft preparation, H.N.H.; writing—review and editing, N.A.D.N. and T.N.T.; visualization, H.N.H.; supervision, T.N.T. and K.S.; project administration, T.N.T.; funding acquisition, T.N.T. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

The study was conducted in accordance with the Declaration of Helsinki and was approved by the Ethics Committee of Hokkaido University (protocol code: 2010-02, date of approval: 28 July 2010). The animal study protocol was approved by the Institutional Review Board of Hokkaido University (protocol code: 08-0127, date of approval: 18 March 2008). A part of the research was supported by a Grant-in-Aid for Scientific Research from the Japan Society for the Promotion of Science.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data underlying the results presented in this paper are not publicly available but may be obtained from the authors upon reasonable request.

Acknowledgments

We acknowledge Ho Chi Minh City University of Technology (HCMUT), VNU-HCM for supporting this study. The authors express their gratitude to Hokkaido University for supporting this study. The experimental data reused in this study were obtained by Tran Trung Nghia at the Graduate School of Information Science and Technology, Hokkaido University, during their stay for the doctoral degree under the guidance of Koichi Shimizu, Nobuki Kudo, Yuji Kato, and Takeshi Namita. The animal experiments in this study were carried out according to the guidelines and with the approval of the review committee for animal experiments at Hokkaido University.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Cutler, M. Transillumination as an aid in the diagnosis of breast lesions. Surg. Gynecol. Obstet. 1929, 48, 721–729. [Google Scholar] [CrossRef]
  2. Key, H.; Jackson, P.; Wells, P. New approaches to transillumination imaging. J. Biomed. Eng. 1988, 10, 113–118. [Google Scholar] [CrossRef]
  3. Grosenick, D.; Rinneberg, H.; Cubeddu, R.; Taroni, P. Review of optical breast imaging and spectroscopy. J. Biomed. Opt. 2016, 21, 091311. [Google Scholar] [CrossRef]
  4. Schutta, H.S. Richard Bright’s observations on diseases of the nervous system due to inflammation. J. Hist. Neurosci. 2018, 27, 165–185. [Google Scholar] [CrossRef] [PubMed]
  5. Gandjbakhche, A.H.; Bonner, R.F.; Nossal, R.; Weiss, G.H. Absorptivity contrast in transillumination imaging of tissue abnormalities. Appl. Opt. 1996, 35, 1767–1774. [Google Scholar] [CrossRef]
  6. Siegel, A.M.; Marota, J.J.A.; Boas, D.A. Design and evaluation of a continuous-wave diffuse optical tomography system. Opt. Express 1999, 4, 287–298. [Google Scholar] [CrossRef] [PubMed]
  7. Cerussi, A.E.; Berger, A.J.; Bevilacqua, F.; Shah, N.; Jakubowski, D.; Butler, J.; Holcombe, R.F.; Tromberg, B.J. Sources of Absorption and Scattering Contrast for Near-Infrared Optical Mammography. Acad. Radiol. 2001, 8, 211–218. [Google Scholar] [CrossRef] [PubMed]
  8. Schmitz, C.H.; Klemer, D.P.; Hardin, R.; Katz, M.S.; Pei, Y.; Graber, H.L.; Levin, M.B.; Levina, R.D.; Franco, N.A.; Solomon, W.B.; et al. Design and implementation of dynamic near-infrared optical tomographic imaging instrumentation for simultaneous dual-breast measurements. Appl. Opt. 2005, 44, 2140–2153. [Google Scholar] [CrossRef]
  9. Li, C.; Zhao, H.; Anderson, B.; Jiang, H. Multispectral breast imaging using a ten-wavelength, source/detector channels silicon photodiode-based diffuse optical tomography system. Med. Phys. 2006, 33, 627–636. [Google Scholar] [CrossRef] [PubMed]
  10. D’Alessandro, B.; Dhawan, A.P. Depth-Dependent Hemoglobin Analysis From Multispectral Transillumination Images. IEEE Trans. Biomed. Eng. 2010, 57, 2568–2571. [Google Scholar] [CrossRef] [PubMed]
  11. D’Alessandro, B.; Dhawan, A.P. Transillumination Imaging for Blood Oxygen Saturation Estimation of Skin Lesions. IEEE Trans. Biomed. Eng. 2012, 59, 2660–2667. [Google Scholar] [CrossRef]
  12. Gonzalez, J.; Roman, M.; Hall, M.; Godavarty, A. Gen-2 Hand-Held Optical Imager towards Cancer Imaging: Reflectance and Transillumination Phantom Studies. Sensors 2012, 12, 1885–1897. [Google Scholar] [CrossRef]
  13. Chiao, F.B.; Resta-Flarer, F.; Lesser, J.; Ng, J.; Ganz, A.; Pino-Luey, D.; Bennett, H.; Perkins, C.J.; Witek, B. Vein visualization: Patient characteristic factors and efficacy of a new infrared vein finder technology. BJA Br. J. Anaesth. 2013, 110, 966–971. [Google Scholar] [CrossRef]
  14. Wang, F.; Behrooz, A.; Morris, M. High-contrast subcutaneous vein detection and localization using multispectral imaging. J. Biomed. Opt. 2013, 18, 050504. [Google Scholar] [CrossRef]
  15. Chandra, F.; Wahyudianto, A.; Yasin, M. Design of vein finder with multi tuning wavelength using RGB LED. J. Phys. Conf. Ser. 2017, 853, 012019. [Google Scholar] [CrossRef]
  16. Racovita, A.; Morar, A.; Balan, O.; Moldoveanu, F.; Moldoveanu, A. Near Infrared Imaging for Tissue Analysis. In Proceedings of the 2017 21st International Conference on Control Systems and Computer Science (CSCS), Bucharest, Romania, 29–31 May 2017; pp. 294–300. [Google Scholar] [CrossRef]
  17. Strojnik, M.; Scholl, M.K.; Kirk, M.S. Image formation in trans-illumination interferometry. In Proceedings of the Optical Instrument Science, Technology, and Applications, Frankfurt, Germany, 14–17 May 2018; Haverkamp, N., Youngworth, R.N., Eds.; International Society for Optics and Photonics; SPIE: Frankfurt, Germany, 2018; Volume 10695, p. 1069508. [Google Scholar] [CrossRef]
  18. Merlo, S.; Bello, V.; Bodo, E.; Pizzurro, S. A VCSEL-Based NIR Transillumination System for Morpho-Functional Imaging. Sensors 2019, 19, 851. [Google Scholar] [CrossRef] [PubMed]
  19. Bello, V.; Bodo, E.; Pizzurro, S.; Merlo, S. In Vivo Recognition of Vascular Structures by Near-Infrared Transillumination. Proceedings 2020, 42, 24. [Google Scholar] [CrossRef]
  20. Marcos-Vidal, A.; Ripoll, J. Recent advances in optical tomography in low scattering media. Opt. Lasers Eng. 2020, 135, 106191. [Google Scholar] [CrossRef]
  21. Yang, S.; Cheng, D.; Wang, J.; Qin, H.; Liu, Y. Non-Contact Heart Rate Detection Based on Hand Vein Transillumination Imaging. Appl. Sci. 2021, 11, 8470. [Google Scholar] [CrossRef]
  22. Lutowski, Z.; Bujnowski, S.; Marciniak, B.; Kloska, S.; Marciniak, A.; Lech, P. A Novel Method of Vein Detection with the Use of Digital Image Correlation. Entropy 2021, 23, 401. [Google Scholar] [CrossRef]
  23. Mai, H.T.; Ngo, D.Q.; Nguyen, H.P.T.; La, D.D. Fabrication of a Reflective Optical Imaging Device for Early Detection of Breast Cancer. Bioengineering 2023, 10, 1272. [Google Scholar] [CrossRef]
  24. Shimizu, K. Near-Infrared Transillumination for Macroscopic Functional Imaging of Animal Bodies. Biology 2023, 12, 1362. [Google Scholar] [CrossRef]
  25. Tran, T.N.; Yamamoto, K.; Namita, T.; Kato, Y.; Shimizu, K. Three-dimensional transillumination image reconstruction for small animal with new scattering suppression technique. Biomed. Opt. Express 2014, 5, 1321–1335. [Google Scholar] [CrossRef]
  26. Yamaoki, T.; Hamada, H.; Matoba, O. Experimental verification of reconstructed absorbers embedded in scattering media by optical power ratio distribution. Appl. Opt. 2016, 55, 6874–6879. [Google Scholar] [CrossRef]
  27. Van, T.N.P.; Tran, T.N.; Inujima, H.; Shimizu, K. Three-dimensional imaging through turbid media using deep learning: NIR transillumination imaging of animal bodies. Biomed. Opt. Express 2021, 12, 2873–2887. [Google Scholar] [CrossRef]
  28. Lai, X.; Li, Q.; Chen, Z.; Shao, X.; Pu, J. Reconstructing images of two adjacent objects passing through scattering medium via deep learning. Opt. Express 2021, 29, 43280–43291. [Google Scholar] [CrossRef]
  29. Chen, R.; Tong, S.; Miao, P. Deep-learning-based 3D blood flow reconstruction in transmissive laser speckle imaging. Opt. Lett. 2023, 48, 2913–2916. [Google Scholar] [CrossRef]
  30. Shimizu, K.; Xian, S.; Guo, J. Reconstructing a Deblurred 3D Structure in a Turbid Medium from a Single Blurred 2D Image—For Near-Infrared Transillumination Imaging of a Human Body. Sensors 2022, 22, 5747. [Google Scholar] [CrossRef] [PubMed]
  31. Dang Nguyen, N.A.; Huynh, H.N.; Tran, T.N. Improvement of the Performance of Scattering Suppression and Absorbing Structure Depth Estimation on Transillumination Image by Deep Learning. Appl. Sci. 2023, 13, 10047. [Google Scholar] [CrossRef]
  32. Maji, D.; Sigedar, P.; Singh, M. Attention Res-UNet with Guided Decoder for semantic segmentation of brain tumors. Biomed. Signal Process. Control 2022, 71, 103077. [Google Scholar] [CrossRef]
  33. Huang, G.; Liu, Z.; Van Der Maaten, L.; Weinberger, K.Q. Densely connected convolutional networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 2261–2269. [Google Scholar] [CrossRef]
  34. Goh, C.; Subramaniam, R.; Saad, N.; Ali, S.; Meriaudeau, F. Subcutaneous veins depth measurement using diffuse reflectance images. Opt. Express 2017, 25, 25741–25759. [Google Scholar] [CrossRef] [PubMed]
  35. Wang, Y.P.; Wu, Q.; Castleman, K.R.; Xiong, Z. Chromosome image enhancement using multiscale differential operators. IEEE Trans. Med. Imaging 2003, 22, 685–693. [Google Scholar] [CrossRef] [PubMed]
Figure 1. The transillumination mode and the epi-transillumination mode of transillumination imaging. The red arrow represents the direction of the light source, and the blue arrow represents the direction of the light reaching the camera during image acquisition.
Figure 2. Geometry of the PSF as the light distribution observed at the surface of the scattering medium in transillumination imaging. The orange circles denote the point light sources in both cases.
Figure 3. Principle of the pixel-by-pixel scan matrix method. The green matrix is the scanning matrix of (S − 1) × (S − 1) pixels, the gray matrix is the transmitted image matrix of B × B pixels, and the blue arrow shows the scanning direction of the process.
Figure 4. Examples of the estimated pixel value in the de-blurred mode (A) and the depth estimation mode (B). The labels I, II, III, and IV represent the steps of the pixel scanning process, and the numbers in the center represent the estimated values.
Figure 5. Principle of the reconstruction of 3D de-blurred structures from a blurred image. The red arrow shows the scattering de-blurring method using the CNN model.
Figure 6. Principle of the reconstruction of 3D de-blurred structures from a single 2D blurred image.
Figure 7. Experimental setup with the complex structures in the tissue-equivalent phantom.
Figure 8. Scattering suppression in transillumination imaging at the 0-deg orientation with an image size of 530 × 530 pixels (μs′ = 1.00/mm, μa = 0.01/mm): (a) observed image in a clear medium, (b) observed image in a scattering medium, (c) result using the PSF deconvolution technique, (d) result using the proposed technique, and (e) the intensity profiles of (a–d) at the 150th pixel row.
Figure 9. Scattering suppression in transillumination imaging at the 90-deg orientation with an image size of 530 × 530 pixels (μs′ = 1.00/mm, μa = 0.01/mm): (a) observed image in a clear medium, (b) observed image in a scattering medium, (c) result using the PSF deconvolution technique, (d) result using the proposed technique, and (e) the intensity profiles of (a–d) at the 350th pixel row.
Figure 10. The CIR over 360 de-blurring angles for the deconvolution method (orange) and the proposed method (purple), with the percentage improvement between the two methods (red).
Figure 11. Cross-sectional images at the height of the upper object in Figure 8: (a) from observed images in a clear medium, (b) from observed images in a scattering medium, (c) by the erasing template technique, and (d) by the proposed technique.
Figure 12. Cross-sectional images at the height of the lower object in Figure 8: (a) from observed images in a clear medium, (b) from observed images in a scattering medium, (c) by the erasing template technique, and (d) by the proposed technique.
Figure 13. Three-dimensional images reconstructed from transillumination images: (a) from the observed image in a clear medium, (b) from the observed image in a scattering medium, (c) result using the erasing template technique, and (d) result using the proposed technique.
Figure 14. Three-dimensional reconstructions from a single blurred transillumination image at the 302-deg orientation with an image size of 530 × 530 pixels (μs′ = 1.00/mm, μa = 0.01/mm): (a) reconstruction from the observed image in a clear medium; (b) reconstruction from the observed image in a scattering medium; (c) result using the proposed technique; (d) 3D view from a specific angle with the depth color scale in millimeters (mm).
Figure 15. Experimental setup for live-animal testing. The red arrow shows the direction of the light reaching the camera after passing through the mouse’s body.
Figure 16. Cross-sectional images reconstructed from 360-degree transillumination images of a mouse: (a) ultrasonic image, (b) observed images, (c) deconvolved images, and (d) the proposed technique.
Figure 17. Three-dimensional images reconstructed from 360-degree transillumination images of a mouse: (a) observed images, (b) deconvolved images, and (c) the proposed technique.
Figure 18. Three-dimensional image reconstructed using the proposed technique from a single blurred image with the depth color scale (mm): (a) the scattering de-blurring result and (b) the result of the 3D reconstruction.
Figure 19. Evaluating the feasibility of scatter de-blurring using the CNN model: (a) from the observed image in a clear medium, (b) from the observed image in a scattering medium, (c) result using the deconvolution technique, (d) result using the FCN model, and (e) result using the CNN model with a depth threshold of 0.1 mm.