Article

Reconstruction of Motion Images from Single Two-Dimensional Motion-Blurred Computed Tomographic Image of Aortic Valves Using In Silico Deep Learning: Proof of Concept

School of Engineering, University of Tokyo, Tokyo 113-0033, Japan
* Author to whom correspondence should be addressed.
Appl. Sci. 2022, 12(18), 9044; https://doi.org/10.3390/app12189044
Submission received: 29 July 2022 / Revised: 2 September 2022 / Accepted: 5 September 2022 / Published: 8 September 2022
(This article belongs to the Special Issue Deep Convolutional Neural Networks)

Abstract

The visualization of motion is important in the diagnosis and treatment of aortic valve disease, but it is difficult to achieve with computed tomography (CT) because of motion blur. Existing research focuses on suppressing or removing motion blur. The purpose of this study is to prove the feasibility of inferring motion images from the motion information contained in a motion-blurred CT image. To verify the concept, an in silico learning method is proposed that infers 60 motion images from a single two-dimensional (2D) motion-blurred CT image. A dataset of motion-blurred CT images and motion images was generated using motion and CT simulators to train a deep neural network. The trained model was evaluated using two image similarity evaluation metrics, the structural similarity index measure (0.97 ± 0.01) and the peak signal-to-noise ratio (36.0 ± 1.3 dB), as well as three motion feature evaluation metrics: the maximum opening distance error between endpoints (0.7 ± 0.6 mm), the maximum-swept area velocity error between adjacent images (393.3 ± 423.3 mm2/s), and the opening time error (5.5 ± 5.5 ms). According to the results, the trained model can successfully infer 60 motion images from a single motion-blurred CT image. This study demonstrates the feasibility of inferring motion images from a motion-blurred CT image under simulated conditions.

1. Introduction

Aortic valve disease is the most common type of heart disease and a leading cause of cardiovascular morbidity and mortality [1]. It is a condition in which the valve between the left ventricle and the aorta does not open or close properly, thereby reducing or blocking blood flow or allowing blood to flow backward. Aortic valve disease can be treated with two types of surgery: aortic valve repair and aortic valve replacement. Repair surgery restores the aortic valve without removing the damaged valve, whereas replacement surgery removes the damaged valve and replaces it with an artificial one. Compared to replacement surgery, repair surgery usually preserves a large amount of original tissue and has a lower complication rate [2]. However, several difficulties remain in repair surgery. The feasibility of repair surgery primarily depends on the ability to reconstruct diseased valves, and its success depends on the degree of leaflet damage [3,4]. Additionally, universal application of repair surgeries has yet to be achieved because of the lack of surgical expertise and experience [5]. To promote the application of repair surgeries, a more detailed understanding of the anatomic structure and dynamic function of aortic valves is required, to accurately determine the degree of leaflet damage, appropriately select the surgical procedure, and develop better surgical plans. Since the anatomic structure and dynamic function are closely related to motion, evaluating the motion of aortic valves is critical to achieving this goal.
Three main types of imaging techniques are available to assess aortic valves: echocardiography, cardiac magnetic resonance imaging (MRI), and cardiac computed tomography (CT). However, no matter which method is used, dynamic imaging of the aortic valve is still difficult because of its rapid movement [6]. Echocardiography is limited by its low spatial resolution and low frame rate (<20 fps) [7]. MRI and CT have a higher spatial resolution than echocardiography. Even with ECG-gated technology, MRI (<25 fps) and CT (<38 fps) [8] cannot meet the needs of dynamic imaging of aortic valves (>100 fps). Among these three imaging techniques, CT has emerged as a routinely used tool in the diagnosis of aortic valve diseases because of its high spatial resolution [9]. The temporal resolution of CT is limited by the scan time and angular range required for reconstruction. Therefore, when conducting cardiac CT, the rapid movements of aortic valves can cause motion blur. Due to the motion blur, the current CT diagnosis of aortic valves is limited to the analysis of the structure in the diastolic phase and does not use motion information. Motion blur remains a major challenge in cardiac CT diagnosis [10].
Many studies have been conducted to determine how to improve the quality of motion-blurred CT images. These techniques involve three main types: (1) physical methods, (2) image processing methods, and (3) machine learning methods. Physical methods are based on understanding the physics of motion blur and improving image quality using physical theory, such as decreasing the motion, increasing the number of X-rays, and ECG gating [11]. These methods can slightly improve image quality, but they are limited by physical conditions and cannot handle fast motions. Image processing methods include image denoising and motion compensation [12]. Image denoising uses spatial or frequency domain filtering to remove noise, such as linear filters, nonlinear filters [13], total variation methods [14], and wavelet-based thresholding [15]. Motion compensation uses knowledge of the motion to achieve image reconstruction [16], such as an anatomy-guided registration algorithm [17] and a second-pass approach for aortic valve motion compensation [18]. Image processing methods perform better than physical methods, but they are not suitable for situations where the features of motion blur are not obvious in the spatial or frequency domains or where the motion cannot be estimated in advance. Machine learning methods are data-driven approaches for removing or suppressing motion blur, such as using a neural style-transfer method to suppress artifacts [19] or using a motion vector prediction convolutional neural network to compensate for motion blur [20]. These methods depend less on prior knowledge of the motion and can reconstruct images more accurately. Although many methods have been proposed to improve the image quality of motion-blurred CT images, it is still difficult to analyze the rapid motion of aortic valves using them, because they only improve the image quality of a single blurred image and do not provide motion images for motion analysis. To realize motion analysis with CT imaging, it is necessary to solve the ill-posed problem of inferring motion images from a single motion-blurred CT image.
Machine learning is well known for its ability to solve ill-posed problems. For example, a deep neural network (DNN) in optical imaging has successfully extracted a video sequence comprising seven frames from a single motion-blurred image [21]. However, there is currently no relevant research on inferring motion images from motion blur in CT imaging. We consider that each projection datum used to reconstruct a CT cross-sectional image carries information about the valve motion at the instant it was acquired. Our central hypothesis is that information about dynamic valve motion, including the opening time and maximum opening distance of the valve leaflets, can be retrieved from a single motion-blurred CT image. Since the relationship between the motion information embedded in the projection data and the resultant motion-blurred CT image is complex, we assume that a DNN can relate the motion-blurred CT image to the corresponding time series of valve motion.
This study aims to prove that motion images can be inferred from motion blur in CT images of aortic valves. In this case, motion blur is no longer considered an undesirable phenomenon to be removed, but a valuable source of information on the dynamics of aortic valves. The in silico learning method [22] was used to prove the proposed concept, because it allows for discovering and demonstrating new concepts with a combination of an in silico dataset and deep learning. In our hypothesis, the motion information in motion-blurred CT images is critical for achieving the goal. Therefore, to test the hypothesis, we employed simple numerical simulation experiments with a 2D CT simulator and valve models, where a CT image was reconstructed from multiple projection data captured from different instantaneous motion images. When a 2D CT scan is performed from the direction perpendicular to the aortic annulus, the motion of aortic valves can be approximated as 2D motion. Furthermore, to show that motion blur itself enables the inference of motion images, only the motion blur in the CT images was considered in the simulation. With these simplifications, we propose an in silico learning scheme that combines a simulation dataset and deep learning to infer 60 motion images from a single motion-blurred CT image. To the best of our knowledge, this is the first time that this problem has been proposed and studied in the CT imaging field.

2. Methods

To demonstrate the concept that motion information can be obtained from motion blur in the CT imaging field, we propose an in silico learning scheme to infer 60 motion images from a single motion-blurred CT image. An overview of the proposed scheme is shown in Figure 1. The proposed method consists of two parts: (a) a data generation process and (b) a DNN training process. The data generation process is shown in Figure 1a. First, various simulation parameters were input into the motion simulator to generate different motion videos. The generated motions differed in motion features, such as the velocity and maximum opening distance of the aortic valves, obtained by varying the geometric parameters and material properties of the valves. In the Vid2im module, we converted the motion videos to images, added background tissue to these images, assigned grayscale values to each component, and finally converted these images to a specified spatial resolution to make video stream data. Data augmentation, such as translation and rotation, increased the number of video stream data. Finally, the video stream data were input into the CT simulator, which simulated the projection and reconstruction processes and generated motion-blurred CT images. Then, the 60 motion images to be inferred were selected at equal time intervals from the video stream data used in CT imaging. A dataset containing pairs of motion-blurred CT images and motion images was generated in this manner. The DNN training process is shown in Figure 1b. A DNN was trained on the generated dataset; its input is a single motion-blurred CT image, and its output is 60 motion images. The details of the components are described below.

2.1. Data Generation Process

To generate a dataset with reasonable movement of the aortic valves in the motion images and rational motion blur patterns in the motion-blurred CT images, 2D motion and 2D CT simulators were used. Since the movement of aortic valves is a two-way fluid-structure interaction (2-way FSI) problem, the motion simulator was built on the ANSYS (2021R1, ANSYS) platform based on 2-way FSI [23]. In 2-way FSI, blood flow is affected by valve displacement, and the valve is affected by the force generated by blood flow. The CT simulator was based on a ray-driven simulation method [24] implemented on the MATLAB (R2022a, MathWorks) platform. A sample of the data is shown in Figure 2a. The detailed data generation process is presented below.

2.1.1. Motion Simulator

To generate reasonable movement of the aortic valves, a 2D motion simulator was designed on the ANSYS Workbench platform based on 2-way FSI theory. ANSYS FLUENT and ANSYS MECHANICAL APDL were used to solve the fluid and structure fields, respectively, and the two fields were coupled using the system coupling module. Although the motion simulator cannot perfectly simulate the real physical process, it can reproduce the complex motion variation that depends on FSI.
The geometries of the numerical model were developed based on geometric studies of aortic valves [25,26]. The geometry of the aortic valves is shown in Figure 2b, and the geometric parameters are shown in Table 1. These geometric parameters were sampled from a normal distribution with the known mean and standard deviation of each parameter in Table 1.
In the structure field, the material properties of the aortic valves were defined. For simplicity, a linear elastic material model was used. The values of these parameters were selected based on FSI studies in this area; the detailed parameters are shown in Table 2. These material parameters were sampled randomly within the known range of each parameter in Table 2.
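As a concrete illustration, the parameter sampling could look like the following Python sketch; the parameter names and numerical values are illustrative assumptions, and the actual means, standard deviations, and ranges are those listed in Tables 1 and 2.

```python
# Illustrative sketch of the parameter sampling (hypothetical names/values;
# the real means, standard deviations, and ranges are given in Tables 1 and 2).
import numpy as np

rng = np.random.default_rng(seed=0)

# Geometric parameters: sampled from normal distributions (Table 1).
leaflet_thickness_mm = rng.normal(loc=0.6, scale=0.1)   # assumed mean/std
annulus_diameter_mm = rng.normal(loc=23.0, scale=2.0)   # assumed mean/std

# Material parameters: sampled randomly within known ranges (Table 2).
youngs_modulus_mpa = rng.uniform(low=1.0, high=10.0)    # assumed range
poisson_ratio = rng.uniform(low=0.45, high=0.49)        # assumed range
```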
In the fluid field, the parameters of blood flow were defined. For simplicity, the blood flow was assumed to be incompressible and Newtonian, with a constant blood density of 1060 kg/m3 and a constant dynamic viscosity of 0.0035 Pa·s. For turbulence modeling, the k-ε model was used [27]. The inlet was defined as a velocity inlet. Based on Doppler measurements [28], a patient-specific inlet velocity was used (Figure 2c).
The main causes of aortic valve diseases are changes in geometry and material properties [29]. By varying the geometric parameters and material properties, 80 motion videos with different velocities and deformations were generated. In the Vid2im module, we extracted images frame by frame from the motion videos to generate video stream data. The images in the video stream data are grayscale images with a size of 512 × 512 and a spatial resolution of 0.5 mm/pixel. An image in the video stream data is shown in Figure 2d. Background tissue was added to the video stream data in the Vid2im module to make the data more realistic. The pixel value of each component was determined using its CT value (Table 3), and the pixel values in the contact regions among the aortic valve, blood, and background tissue were determined by the proportion of each component. The midpoint of the aortic valves was defined as the midpoint of the aortic annulus (Figure 2d). In the data augmentation process, we translated the midpoint of the aortic valves in the range of −5 to 5 mm from the center of the images and rotated the aortic valves in the range of 0°–360° around their midpoint, to augment the number of video stream data (a simplified sketch of this step follows below). Finally, 8000 video stream data were generated from the 80 motion videos, with each motion video corresponding to 100 video stream data. The duration of each video stream datum is 305 ms (Figure 2g), and each contains 320 images of the closed phase, 160 images of the rapid opening phase, and 320 images of the slow closing phase. Here, we did not include motions of the rapid closing phase [30] and only used part of the motion to demonstrate the proposed concept that motion images of aortic valves can be inferred from motion-blurred CT images.
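The following Python sketch illustrates the augmentation step under stated assumptions: it rotates about the image center rather than the valve midpoint, and the helper name is our own.

```python
# Simplified sketch of the translation/rotation augmentation of one frame.
# Assumption: rotation here is about the image center; the paper rotates
# around the midpoint of the aortic valves.
import numpy as np
from scipy.ndimage import rotate, shift

PIXEL_MM = 0.5  # spatial resolution of the video stream data (mm/pixel)

def augment_frame(frame, dx_mm, dy_mm, angle_deg):
    """Translate a grayscale frame by (dx, dy) in mm, then rotate by angle_deg."""
    translated = shift(frame, (dy_mm / PIXEL_MM, dx_mm / PIXEL_MM), order=1)
    return rotate(translated, angle_deg, reshape=False, order=1)

rng = np.random.default_rng(0)
dx, dy = rng.uniform(-5, 5, size=2)   # translation range used in the paper (mm)
angle = rng.uniform(0, 360)           # rotation range used in the paper (deg)
augmented = augment_frame(np.zeros((512, 512)), dx, dy, angle)
```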

2.1.2. CT Simulator

A 2D equiangular fan-beam CT simulator was used to generate rational motion blur patterns in the motion-blurred CT images. CT imaging is a complex physical process in which X-rays from the source pass through an object and are collected by the detector. In this study, we only focused on the motion blur caused by the motion of the aortic valves; the complex physical process can therefore be well approximated by taking collections of line integrals and combining them in appropriate ways [31]. The geometry of the 2D equiangular fan-beam CT is shown in Figure 2e. The X-ray source rotation started at −90° relative to the imaging plane. The parameters of the CT simulator are shown in Table 4.
We used a ray-driven CT simulation method, which traces a line from the focal point through the image to the center of the detector cell of interest. CT values along each X-ray were calculated by interpolating adjacent pixels. In the projection process of the CT simulator, the projection data are given by the following:

$$p(r) = \int_{s} \mu(x, y)\, ds$$

where $\mu(x, y)$ is the attenuation coefficient at the point (x, y), s is the ray path from the source to the detector, and the projection datum $p(r)$ is the line integral along the X-ray. In the reconstruction process, the filtered back projection (FBP) [32] and half-scan reconstruction [33] algorithms were used to reconstruct motion-blurred CT images from the projection data. The back-projection data are given by the following:

$$b(x, y) = \int p(s, \theta)\, d\theta$$

where $b(x, y)$ is the back-projection data at the point (x, y) and $\theta$ is the rotation angle of the X-rays.
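To make the imaging step concrete, the sketch below shows how motion blur arises when each projection angle samples a different motion frame. It uses scikit-image's parallel-beam radon/iradon as a simplified stand-in for the equiangular fan-beam, ray-driven simulator described above; the function and variable names are our own.

```python
# Conceptual sketch: one projection per motion frame, then FBP mixes the
# frames into a single motion-blurred image (parallel-beam approximation).
import numpy as np
from skimage.transform import radon, iradon

def motion_blurred_ct(frames, angles_deg):
    """frames: (n_views, H, H) square motion images, one per projection angle."""
    sinogram = np.stack(
        [radon(f, theta=[a], circle=True)[:, 0] for f, a in zip(frames, angles_deg)],
        axis=1,  # -> (n_detector_bins, n_views)
    )
    # Filtered back projection over the same angles reconstructs the image.
    return iradon(sinogram, theta=angles_deg, circle=True, filter_name="ramp")

# Stand-in motion frames, zeroed outside the reconstruction circle.
H = 128
yy, xx = np.mgrid[:H, :H]
disk = (yy - H / 2) ** 2 + (xx - H / 2) ** 2 <= (H / 2 - 2) ** 2
frames = np.random.rand(180, H, H) * disk
angles = np.linspace(0.0, 180.0, 180, endpoint=False)
blurred = motion_blurred_ct(frames, angles)
```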
A single motion-blurred CT image requires about 183 ms of video stream data (Figure 2g). Considering that a real CT scan cannot start at the precise moment the aortic valve starts to open, we randomly selected the scan start moment when generating the motion-blurred CT images. The random scan start was restricted to the closed phase of the aortic valves, because motion images in the rapid opening phase are particularly important for understanding the dynamics of aortic valves. Images from the video stream data were placed in the CT imaging plane of the simulator, with the center of the images kept identical to the center of the CT simulator (Figure 2f). After inputting the video stream data into the CT simulator, we acquired motion-blurred CT images. Additionally, motion images were taken at even intervals from the video stream data used in CT imaging. Finally, we selected a 128 × 128 region around the center of the images to make the dataset. In this way, a dataset (N = 8000) containing motion-blurred CT images and their corresponding 60 motion images was generated. The dataset was further divided into a training dataset (N = 6400), a validation dataset (N = 800), and a test dataset (N = 800).

2.2. DNN Training Process

2.2.1. Network Architecture

As explained, we assume that a DNN can relate the motion-blurred CT image information to the corresponding time series of the valve motion information. In fact, several different architectures were used to confirm the feasibility of this idea, and the results show that a DNN can represent the relationship between a motion-blurred CT image and the corresponding valve motion information embedded in the projection data (a comparison of four different models is provided in the Supplementary Materials). We adopted the architecture proposed in this paper because of its better performance. Figure 3 shows the detailed network architecture of our proposed model. The model was built on a 2D U-Net [34], a powerful technique for locating objects or boundaries in images at the per-pixel level and encoding the input image into feature representations at multiple levels. Moreover, it has the advantages of a concise architecture, fast convergence in training, and high representation power. Although it can fully extract the features of motion-blurred CT images, its 2D convolutions limit its ability to extract temporal features, so several modifications were made. Although the 2D U-Net was designed for segmentation problems, we adapted it to our problem by changing the activation function to a sigmoid function and employing a mean square error (MSE) loss function. Additionally, we replaced the decoder of the 2D U-Net with the decoder of the three-dimensional (3D) U-Net [35]. This 5-layer 3D decoder can learn temporal features from spatial features and enables information exchange between temporal and spatial features. Moreover, a special skip connection module was used to enable information exchange between the 2D encoder and the 3D decoder. In the skip connection module, the 2D features extracted in the encoder are copied along the depth dimension to match the size of the 3D features in the 3D decoder; a 3D convolution is then performed to select useful features in each dimension. Based on these changes, the proposed model can infer 60 motion images from one motion-blurred CT image. The input to the network is one motion-blurred CT image of size 1 × 128 × 128 (channel × width × height), whereas the output is 60 motion images of size 1 × 60 × 128 × 128 (channel × depth × width × height), in which the time series is treated as the depth dimension.
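A minimal PyTorch sketch of the special skip connection described above is given below: a 2D encoder feature map is copied along a new depth (time) axis to match the 3D decoder feature, and a 3D convolution then mixes the two. The layer sizes and class name are illustrative assumptions, not the paper's exact configuration.

```python
# Sketch of the 2D-encoder-to-3D-decoder skip connection (assumed sizes).
import torch
import torch.nn as nn

class Skip2Dto3D(nn.Module):
    def __init__(self, channels, depth):
        super().__init__()
        self.depth = depth
        # 3D convolution that selects useful features in each dimension.
        self.conv = nn.Conv3d(2 * channels, channels, kernel_size=3, padding=1)

    def forward(self, enc2d, dec3d):
        # enc2d: (B, C, H, W) -> copy to (B, C, D, H, W) to match the decoder.
        enc3d = enc2d.unsqueeze(2).expand(-1, -1, self.depth, -1, -1)
        # Concatenate along channels and mix with the 3D convolution.
        return self.conv(torch.cat([enc3d, dec3d], dim=1))

skip = Skip2Dto3D(channels=64, depth=60)
out = skip(torch.randn(1, 64, 32, 32), torch.randn(1, 64, 60, 32, 32))
print(out.shape)  # torch.Size([1, 64, 60, 32, 32])
```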

2.2.2. Loss Function

The MSE is the most commonly used regression loss function, which is defined as the following:
$$\mathrm{MSE} = \frac{1}{mn} \sum_{i=1}^{m} \sum_{j=1}^{n} \left( Y_{ij} - Y_{ij}^{p} \right)^{2}$$

where m is the number of training samples, n is the number of motion images per sample (n = 60 in this study), $Y_{ij}$ is the ground truth motion image, and $Y_{ij}^{p}$ is the predicted motion image.

2.2.3. Implementation Detail

The network was trained in PyTorch using three RTX 2080 Ti GPUs (NVIDIA) with a batch size of six. We used the Adam optimizer for its good convergence and fast running time, with a learning rate of 10−4; the hyperparameters β1 and β2 were 0.9 and 0.999, respectively, and the weight decay was 10−6. The learning rate was determined by trial and error, selecting the value that gave the best loss without compromising training speed. We tried learning rates of 10−2, 10−3, and 10−4, whose final validation losses were 5.6 × 10−4, 3.2 × 10−4, and 2.3 × 10−4, respectively. The learning rate of 10−4 was chosen because it yielded the lowest validation loss.
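A minimal sketch of this training configuration in PyTorch follows; the model is a placeholder, and only the optimizer and loss settings come from the text above.

```python
# Sketch of the reported optimizer/loss setup (model is a placeholder).
import torch

model = torch.nn.Conv3d(1, 1, kernel_size=3, padding=1)  # stand-in for the U-Net
optimizer = torch.optim.Adam(
    model.parameters(),
    lr=1e-4,             # chosen by trial and error among 1e-2, 1e-3, 1e-4
    betas=(0.9, 0.999),  # beta1, beta2 from the text
    weight_decay=1e-6,
)
criterion = torch.nn.MSELoss()  # the regression loss of Section 2.2.2
```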

2.3. Evaluation Metric

To better evaluate the performance of our model, we used not only image similarity evaluation metrics but also motion feature evaluation metrics. In terms of image similarity metrics, the structural similarity index measure (SSIM) and peak signal-to-noise ratio (PSNR) were used. In terms of motion feature evaluation metrics, the maximum opening distance error between endpoints (MDE), the maximum-swept area velocity error between adjacent images (MVE), and the opening time error (OTE) were introduced. The MDE and MVE evaluate the intensity of the motion, whereas the OTE evaluates the temporal order of the motion.

2.3.1. Image Similarity Evaluation Metrics

The SSIM metric is a perception-based metric for measuring the similarity between two images that considers image degradation as a perceived change in structural information and incorporates important perceptual phenomena, such as luminance and contrast. SSIM is defined as the following:
$$\mathrm{SSIM}(x, y) = \frac{(2\mu_x \mu_y + c_1)(2\sigma_{xy} + c_2)}{(\mu_x^2 + \mu_y^2 + c_1)(\sigma_x^2 + \sigma_y^2 + c_2)}$$

where $\mu_x$ and $\mu_y$ are the means of x and y, respectively; $\sigma_x^2$ and $\sigma_y^2$ are the variances of x and y, respectively; $\sigma_{xy}$ is the covariance of x and y; and $c_1$ and $c_2$ are two variables that stabilize the division when the denominator is weak.
The PSNR is the most widely used objective image quality metric for evaluating the similarity between two images. The PSNR is defined as the following:

$$\mathrm{PSNR} = 10 \cdot \log_{10} \left( \frac{\mathit{MAX}_I^2}{\mathrm{MSE}} \right)$$

where $\mathit{MAX}_I$ is the maximum possible pixel value of the images and MSE is the mean squared error between the two images.
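A minimal sketch of computing both metrics with scikit-image follows; the paper does not state its implementation, so this is just one standard choice.

```python
# Sketch: SSIM and PSNR between a ground truth and a predicted image.
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def image_metrics(gt, pred):
    data_range = float(gt.max() - gt.min())
    ssim = structural_similarity(gt, pred, data_range=data_range)
    psnr = peak_signal_noise_ratio(gt, pred, data_range=data_range)
    return ssim, psnr

gt = np.random.rand(128, 128)
pred = gt + 0.01 * np.random.randn(128, 128)
print(image_metrics(gt, pred))
```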

2.3.2. Motion Feature Evaluation Metrics

To evaluate the MDE, MVE, and OTE of the motion, we used skeleton images to avoid the influence of the aortic valve thickness on the calculation of the opening distance and swept area velocity. Each original image was segmented using a marker-based segmentation method [36], and the segmented image was skeletonized using an image thinning method [37] (Figure 4a). We manually extracted the skeleton images for the failed cases.
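A simplified sketch of this skeleton extraction is shown below; a plain threshold stands in for the marker-based watershed segmentation of [36], and scikit-image's thinning-based skeletonization stands in for [37].

```python
# Sketch: segment a frame (crude threshold as a stand-in for marker-based
# watershed) and thin the mask to a one-pixel-wide skeleton.
import numpy as np
from skimage.morphology import skeletonize

def valve_skeleton(frame, valve_threshold):
    mask = frame > valve_threshold  # stand-in for marker-based segmentation
    return skeletonize(mask)        # image thinning to a 1-px skeleton

skel = valve_skeleton(np.random.rand(128, 128), valve_threshold=0.9)
```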
The MDE evaluates the error in the maximum opening distance between the endpoints of the ground truth and predicted motion images. The opening distance between the endpoints was calculated using the coordinates obtained from endpoint extraction (Figure 4b), and the maximum opening distance was then determined over the distances of all motion images. The MDE is defined as the following:

$$\varepsilon_{\mathrm{MDE}} = \frac{1}{N} \sum_{N} \left| d_{o\_pre} - d_{o\_g} \right|$$

where N is the number of test samples (N = 800), $d_{o\_pre}$ is the maximum opening distance of the endpoints of the predicted motion images, and $d_{o\_g}$ is that of the ground truth motion images.
The MVE evaluates the error in the area velocity swept by the aortic valve between the ground truth and the prediction. The swept area velocity between adjacent images is defined as the area swept out by the skeleton of the aortic valve per unit time (Figure 4c):

$$v_{A} = \frac{dA}{dt}$$

where dA is the area swept out by the skeleton of the aortic valve during the time dt. The MVE is defined as the following:

$$\varepsilon_{\mathrm{MVE}} = \frac{1}{N} \sum_{N} \left| v_{m\_pre} - v_{m\_g} \right|$$

where $v_{m\_pre}$ is the maximum-swept area velocity of the predicted motion images and $v_{m\_g}$ is that of the ground truth motion images.
The OTE evaluates the error in the opening time between the ground truth and predicted motion images. The aortic valve opening time was defined as the moment at which the endpoint distance became 1 mm larger than in the initial state. Since the spatial resolution of the CT images is 0.5 mm/pixel, 1 mm approximates the distance change when each endpoint of the aortic valves moves one pixel away from its initial location. The OTE is defined as the following:

$$\varepsilon_{\mathrm{OTE}} = \frac{1}{N} \sum_{N} \left| t_{o\_pre} - t_{o\_g} \right|$$

where $t_{o\_pre}$ is the opening time of the predicted motion images and $t_{o\_g}$ is the opening time of the ground truth motion images.
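The sketch below illustrates how these three measurements might be computed on skeleton images; the endpoint coordinates, the pixel-count proxy for the swept area, and the frame spacing are illustrative assumptions rather than the paper's exact procedure.

```python
# Sketch of the three motion-feature measurements (illustrative assumptions).
import numpy as np

PIXEL_MM = 0.5            # mm/pixel
FRAME_DT_MS = 183.3 / 60  # approximate spacing of the 60 motion images (ms)

def endpoint_distance_mm(p_left, p_right):
    """Opening distance (mm) between the two leaflet endpoint coordinates."""
    return float(np.linalg.norm(np.asarray(p_left) - np.asarray(p_right))) * PIXEL_MM

def swept_area_velocity(skel_a, skel_b):
    """Crude pixel-count proxy for the swept area (mm^2) per unit time (s)."""
    swept_px = np.logical_xor(skel_a, skel_b).sum()
    return swept_px * PIXEL_MM ** 2 / (FRAME_DT_MS / 1000.0)

def opening_time_ms(distances_mm):
    """First time the endpoint distance exceeds the initial state by 1 mm.
    Assumes the valve opens within the sequence."""
    idx = int(np.argmax(np.asarray(distances_mm) > distances_mm[0] + 1.0))
    return idx * FRAME_DT_MS
```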

3. Results

To evaluate the trained model, a test dataset containing 800 pairs of motion-blurred CT images and their corresponding 60 motion images was used. It was generated from eight different motion videos that were excluded from the training dataset. Two examples of the performance of the trained model are shown in Figure 5. We selected several images as representatives of each phase to qualitatively analyze the performance of our model. Overlapped images were used to demonstrate the structural similarity between the prediction and the ground truth; they were produced by overlapping color images converted from the ground truth and predicted images. In the overlapped images, correctly predicted pixels turn yellow or remain black, whereas mispredicted pixels remain green or red. A typical example with good performance is shown in Figure 5a: there are few green and red pixels in the overlapped images, indicating that the structures of the predicted images are close to the ground truth images. A typical example with bad performance is shown in Figure 5b: the predicted images in the closed and slow closing phases are close to the ground truth images, but the predicted images during the rapid opening phase differ from the ground truth images, with significant branches of green and red pixels in the overlapped images. To further examine the distribution of correct and failed predictions, 50 pairs of motion-blurred CT images and their corresponding 60 motion images were randomly selected from the test dataset, and we inspected the overlapped images. Correctly predicted motion images were treated as correct predictions, while mispredicted motion images were treated as failed predictions (Table 5). A considerable performance drop can be seen in the rapid opening phase.
To evaluate the trained model quantitatively, we used two image similarity evaluation metrics and three motion feature evaluation metrics. The details are presented below.

3.1. Results of Standard Image Quality Metrics

Table 6 shows the SSIM and PSNR of the trained model. There are 48,000 motion images in total, and we recorded the number of images in the different phases in advance: 15,983 images in the closed phase, 16,000 images in the rapid opening phase, and 16,017 images in the slow closing phase. We evaluated the accuracy over all images and over the images in the closed, rapid opening, and slow closing phases of the aortic valves (Table 6). As shown in the table, the SSIM and PSNR of all images and of the images in different phases are close. Moreover, images in the closed phase have the highest accuracy, images in the slow closing phase the second highest, and images in the rapid opening phase the lowest.

3.2. Results of Motion Feature Evaluation Metrics

Table 7 shows the results of the motion feature metrics: MDE, MVE, and OTE. The geometric parameters and material properties of the aortic valves with the eight different motions in the test dataset are shown in Table 8 and Table 9, respectively, and their maximum opening distances and maximum-swept area velocities are shown in Table 10. Since the CT scan start moments differ, the maximum opening distance and maximum-swept area velocity differ slightly in each pair of motion images of a single motion. For simplicity, we used the mean maximum opening distance and the mean maximum-swept area velocity as the ground truth.
To visualize the performance of our trained model, the data distribution maps of the maximum opening distance, maximum-swept area velocity, and opening time are shown in Figure 6a, Figure 6b, and Figure 6c, respectively. Ideally, these points should lie on the blue line. The black points are outliers that differ significantly from the ground truth; they account for approximately 6.6% of the total.

4. Discussion

In this study, the proposed concept of inferring aortic valve motion images from a single motion-blurred CT image was demonstrated using in silico learning under certain conditions. After training with the simulated dataset, the performance of the trained model was evaluated using two image similarity evaluation metrics and three motion feature evaluation metrics. The results of the two image similarity evaluation metrics (SSIM, PSNR) are shown in Table 6. By analyzing and comparing these results, we found that the trained model can learn motion images from motion blur. The accuracy differed between phases, and a correlation between velocity and accuracy was observed: the higher the velocity, the lower the accuracy. We further explored the velocity limitation of the trained model, which works under the condition that the swept area velocity is below 1400 mm2/s. The results of the three motion feature evaluation metrics (MDE, MVE, and OTE) are shown in Table 7. After analyzing these results, we found that the trained model can infer the opening time with high accuracy and also has some ability to predict the maximum opening distance and maximum-swept area velocity. Furthermore, these motion features may be used to roughly determine the material properties of aortic valves.
Despite the velocity limitation, this study shows that the trained DNN has learned the motion images from a single motion-blurred CT image of aortic valves. The main difference between the proposed and traditional methods is the way motion blur is treated. In the proposed method, the motion information hidden in the motion blur is learned by a DNN to infer motion images, whereas traditional methods primarily use the spatial or transform domain distribution of the motion blur to suppress or remove it. For example, a wavelet-based thresholding method [15] removed noise by finding thresholds in the wavelet transform domain. Some motion compensation methods use additional information, but only to separate targets from the motion blur; the motion information in the blur itself is not used. For example, an anatomy-guided registration algorithm [17] used anatomic structure to compensate for regions of interest and remove extra noise. This difference in method leads to a difference in results: our method can reconstruct the entire time series representing the valve motion from a single motion-blurred CT image, whereas conventional methods recover an image without temporal information from a single motion-blurred CT image. Motion images can provide far more information than a single image, such as motion features. This difference in the amount of information in the results indicates that the proposed method is fundamentally different from the traditional methods.

4.1. Interpretation of Image Similarity Evaluation Metrics

To understand the improvement in image quality from the motion-blurred CT images to the predicted images, we compared their SSIM and PSNR values. The SSIM and PSNR between the motion-blurred CT images and the ground truth motion images are 0.41 ± 0.02 and 21.6 ± 0.9 dB, respectively, whereas those between the predicted motion images and the ground truth motion images are 0.97 ± 0.01 and 36.0 ± 1.3 dB, respectively (Table 6). From these values, we can conclude that the image quality of the predicted motion images is much higher than that of the motion-blurred images. This improvement indicates that the trained DNN has learned motion information from the motion blur.
Furthermore, by comparing the results of the image similarity metrics in different phases, we found that the accuracy varied depending on the phase. The most significant difference among phases is velocity: aortic valves in the rapid opening phase move faster than in other phases. Therefore, velocity may affect the accuracy. The opening distance profile, the swept area velocity profile, and the corresponding SSIM accuracy profile of an example of predicted motion and ground truth are shown in Figure 7. The figure shows that the predicted swept area velocity profile roughly fits the ground truth and that the SSIM accuracy tends to decrease as the swept area velocity increases.
The reason for these phenomena can be traced back to the CT imaging process: the higher the velocity, the less significant the blur left by the motion image at that moment in the motion-blurred CT image. Motion-blurred CT images are reconstructed from projection data of different motion images taken at different projection angles and times. The pixels in the motion-blurred CT image are related to the corresponding pixels in the motion images. Due to the motion of the aortic valves, these pixels in the motion images switch between the material of blood and that of the aortic valves, resulting in blur in the motion-blurred CT images. For a pixel to appear more like the aortic valve, the corresponding pixel must belong to the aortic valve in multiple motion images. When a motion image has a high velocity, it shares fewer such pixels with adjacent motion images, making it difficult to determine whether these pixels in the motion-blurred CT image belong to the aortic valve. Consequently, the blur left by a fast-moving motion image is insignificant.
Since velocity influences the accuracy, we further explored the velocity limitation of the trained model by quantitatively evaluating the relationship between the swept area velocity and the SSIM (Table 11). Based on empirical judgment, predicted images with an SSIM above 0.964 were very close to the ground truth images. Therefore, our trained model can handle swept area velocities below 1400 mm2/s.

4.2. Interpretation of Motion Feature Evaluation Metrics

To evaluate the ability of the trained model to predict motion features, note that the average maximum opening distance of the test dataset is 12.4 mm, the average maximum-swept area velocity is approximately 1888.5 mm2/s, and the time for the CT simulator to generate one CT image is 183.3 ms. Based on the results in Table 7, the MDE is approximately 5.7% of the average maximum opening distance, the MVE is approximately 20.8% of the average maximum-swept area velocity, and the OTE is approximately 3.0% of the time for the CT simulator to generate one CT image. From these results, the trained DNN can infer the opening time with high accuracy, which confirms that the motion blur in CT images carries temporal information about the motion. Furthermore, the trained DNN has some ability to infer the opening distance and maximum-swept area velocity. The poorer accuracy in terms of maximum-swept area velocity stems from the relationship between velocity and accuracy and from the velocity limitation of the trained DNN. Although errors remain, considering the elusive motion information in the motion blur and the difficulty of dynamic imaging, these results are acceptable and a significant improvement over traditional methods, which only extract one sharp image from a motion-blurred CT image. In addition, by analyzing the data distributions of the maximum opening distance and maximum-swept area velocity, we found that the trained model has some ability to distinguish different motions. The average maximum opening distance and maximum-swept area velocity of the predictions were close to the ground truth (Table 10), and the eight motions can be distinguished based on their mean values. However, a large deviation in the data distribution is shown in Figure 6a,b, indicating that the trained model requires the mean value of multiple samples to distinguish different motions.
Our trained model thus has some ability to acquire an accurate maximum opening distance, maximum-swept area velocity, and even the swept area velocity profile. Since the motions were varied by changing the geometric parameters and material properties, we further explored the relationship between these parameters and the motion features. Geometric parameters affect the turbulence around the aortic valves, whereas material properties affect the bending stiffness and ductility of the valves. The geometric parameters can be easily measured from CT images of aortic valves in the closed phase, but it is difficult to evaluate the material properties. The detailed motion features obtained with the proposed method, however, offer a new possibility for evaluating the material properties of aortic valves: inferring material properties from detailed motion features. There is a definite physical relationship between motion and material properties, and inferring material properties from detailed motion features can be considered an ill-posed problem that can be addressed by approaches such as deep learning. Similar studies have been performed in other fields: Davis et al. [38] proposed a method to infer material properties from small motions in video, and Schmidt et al. [39] integrated shape, motion, and optical cues to infer the stiffness of unfamiliar objects. In light of these studies, we conclude that the material properties of aortic valves can be evaluated using the detailed motion features obtained with the proposed method.

4.3. Model Generalization Evaluation

The trained model was evaluated using a test dataset. However, the measured accuracy depends on the particular test dataset, which can make the results unreliable; therefore, the generalizability of the DNN model should be evaluated. K-fold cross-validation [40] addresses this problem by dividing the dataset into k subsets and ensuring that each subset is used as the test dataset in a different fold, which ensures that our results do not depend on a particular split into training and test datasets.
The dataset was split into four folds (Figure 8). In the first iteration, the first fold was used to test the model, and the rest were used for training. In the second iteration, the second fold served as the test dataset, while the rest served as the training dataset. This process was repeated until each of the four folds had been used as the test set, as sketched below.
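A minimal sketch of this split with scikit-learn follows; dataset indices stand in for the actual image pairs.

```python
# Sketch of the 4-fold cross-validation split (indices stand in for data).
import numpy as np
from sklearn.model_selection import KFold

indices = np.arange(8000)                 # one index per CT/motion-image pair
kfold = KFold(n_splits=4, shuffle=False)  # four folds, as in Figure 8

for fold, (train_idx, test_idx) in enumerate(kfold.split(indices)):
    # Train on train_idx, then evaluate SSIM/PSNR on test_idx (Table 12).
    print(f"fold {fold}: train={len(train_idx)}, test={len(test_idx)}")
```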
After k-fold cross-validation, the SSIM and PSNR were calculated. As shown in Table 12, the accuracy is similar across all four folds, indicating the generalizability of the proposed DNN architecture. It also means that our algorithm is consistent, and we can be confident that training it on datasets generated with other simulation parameters would lead to similar performance.

4.4. Failure Cases

Figure 5b shows that, for some cases, the model failed to predict images in the rapid opening phase. The probability distribution of the failed cases is shown in Table 5. After analyzing the predicted motion images, we found that the failed cases were concentrated in the data generated from motion 3 and in the rapid opening phase of the aortic valve. In motion 3, the aortic valve thickness is 0.24 mm, and the velocity is high because the valve is thin. The intensity of the motion blur is related to the thickness of the aortic valve: since the spatial resolution of the CT images is 0.5 mm/pixel, such a thin valve produces only low-intensity motion blur in the CT images, which could be the cause of this failure. Furthermore, the velocity in the rapid opening phase is higher than in the other phases, and, as previously stated, the higher the velocity, the lower the accuracy. Therefore, another possible reason for this failure is the complexity of inferring motion images in the rapid opening phase. To improve the performance, it is necessary to increase the amount of training data and use a more complex DNN architecture.

4.5. Study Limitations

Our study has proven the feasibility of the proposed concept in 2D under simulation conditions. However, it has potential limitations. Firstly, our study is limited to 2D. The 3D case will be more complicated because of the spatial motion as well as the arrangement of the CT volume: it may be difficult to determine whether the motion blur is caused by spatial or planar motion, and the arrangement of the CT volume increases the complexity of the imaging process. Secondly, our study was conducted under simulation conditions, with no real data involved. In our simulation, only the motion blur generated by a specific reconstruction algorithm was considered. Real data, however, contain several noise types, such as beam hardening and hardware-based noise, which may add useful information or affect the recognition of motion blur. Furthermore, there are many CT reconstruction algorithms, such as iterative and DNN-based reconstruction algorithms, and different algorithms produce slightly different motion blur patterns. It is also difficult for a simulation to perfectly represent real physical processes: our simulation relies on simplified models, such as the fixed direction of CT imaging, the linear elastic material properties of the aortic valves, and the incompressible Newtonian model of blood flow. In addition, the DNN model and the loss function we used are not optimized, so further improvements are required in future studies.
Despite these limitations, we considered the simple simulation condition sufficient to prove the proposed concept. Although it cannot perfectly reproduce the complex real-world situation, our study included the necessary variables and complexity of the actual problem. Even in this simple in silico experimental setting, the CT simulator included the projection and reconstruction processes of CT imaging, and the motion simulator included two-way fluid-structure interaction. To verify how the proposed method deals with varieties of valve motion, the material and geometric properties of the aortic valves were varied in the motion simulator, and the CT imaging start times were varied in the CT simulator. Therefore, we considered the generated dataset sufficient to prove the proposed concept.

5. Conclusions

In this study, we have validated the concept that motion images can be inferred from motion blur in CT images of aortic valves under simulated conditions. The proposed method differs from previous methods that suppress or remove motion blur, and it will aid in the accurate diagnosis of aortic valves and the appropriate use of motion blur in the CT imaging field. Sixty motion images were successfully inferred from a single motion-blurred 2D CT image. The predicted images were evaluated with two image similarity evaluation metrics, SSIM (0.97 ± 0.01) and PSNR (36.0 ± 1.3 dB), and three motion feature evaluation metrics, the MDE between endpoints (0.7 ± 0.6 mm), the MVE between adjacent images (393.3 ± 423.3 mm2/s), and the OTE (5.5 ± 5.5 ms). The preliminary in silico learning results show a promising future for using deep learning techniques to learn motion information from motion-blurred CT images. We also found that the higher the velocity, the lower the accuracy. Furthermore, the predicted motion images may be useful in estimating the material properties of aortic valves.

Supplementary Materials

The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/app12189044/s1, Figure S1: The architecture of the discriminator; Figure S2: The performance of different architectures; Table S1: The running time for different architectures; Table S2: SSIM of different architectures; Table S3: PSNR of different architectures. References [41,42] are cited in the Supplementary Materials.

Author Contributions

Conceptualization, I.S. and N.T.; methodology, Y.L.; software, Y.L.; validation, Y.L., I.S. and N.T.; formal analysis, N.T.; investigation, Y.L.; resources, I.S.; data curation, Y.L.; writing—original draft preparation, Y.L.; writing—review and editing, Y.L., I.S. and N.T.; visualization, Y.L.; supervision, I.S.; project administration, I.S.; funding acquisition, I.S. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by JST COI (Grant Number: JPMJCE1304) and Grants-in-Aid for Young Scientists (Grant Numbers: 18K18357, 21K18036) from the Japan Society for the Promotion of Science.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Vahanian, A.; Alfieri, O.; Andreotti, F.; Antunes, M.J.; Barón-Esquivias, G.; Baumgartner, H.; Borger, M.; Carrel, T.P.; De Bonis, M.; Evangelista, A.; et al. Guidelines on the management of valvular heart disease (version 2012): The Joint Task Force on the Management of Valvular Heart Disease of the European Society of Cardiology (ESC) and the European Association for Cardio-Thoracic Surgery (EACTS). Eur. Heart J. 2012, 33, 2451–2496. [Google Scholar] [CrossRef] [PubMed]
  2. Maganti, K.; Rigolin, V.H.; Sarano, M.E.; Bonow, R.O. Valvular heart disease: Diagnosis and management. Mayo Clin. Proc. 2010, 85, 483–500. [Google Scholar] [CrossRef] [PubMed]
  3. Aicher, D.; Fries, R.; Rodionycheva, S.; Schmidt, K.; Langer, F.; Schäfers, H. Aortic valve repair leads to a low incidence of valve-related complications. Eur. J. Cardio-Thorac. Surg. 2010, 37, 127–132. [Google Scholar] [CrossRef] [PubMed]
  4. Aicher, D.; Schäfers, H.J. Aortic valve repair—Current status, indications, and outcomes. Semin. Thorac. Cardiovasc. Surg. 2012, 24, 195–201. [Google Scholar] [CrossRef]
  5. Boodhwani, M.; El Khoury, G. Aortic valve repair: Indications and outcomes. Curr. Cardiol. Rep. 2014, 16, 490. [Google Scholar] [CrossRef]
  6. Lin, E.; Alessio, A. What are the basic concepts of temporal, contrast, and spatial resolution in cardiac CT? J. Cardiovasc. Comput. Tomogr. 2009, 3, 403–408. [Google Scholar] [CrossRef]
  7. Schlosshan, D.; Aggarwal, G.; Mathur, G.; Allan, R.; Cranney, G. Real-time 3D transesophageal echocardiography for the evaluation of rheumatic mitral stenosis. JACC Cardiovasc. Imaging 2011, 4, 580–588. [Google Scholar] [CrossRef]
  8. Bennett, C.J.; Maleszewski, J.J.; Araoz, P.A. CT and MR imaging of the aortic valve: Radiologic-pathologic correlation. Radiographics 2012, 32, 1399–1420. [Google Scholar] [CrossRef]
  9. Fan, B.; Tomii, N.; Tsukihara, H.; Maeda, E.; Yamauchi, H.; Nawata, K.; Harano, A.; Takagi, S.; Sakuma, I.; Ono, M. Attention-guided decoder in dilated residual network for accurate aortic valve segmentation in 3D CT scans. In Machine Learning and Medical Engineering for Cardiovascular Health and Intravascular Imaging and Computer Assisted Stenting; Springer: Cham, Switzerland, 2019; pp. 121–129. [Google Scholar]
  10. Lee, D.; Choi, J.; Kim, H.; Cho, M.; Lee, K.Y. Validation of a novel cardiac motion correction algorithm for x-ray computed tomography: From phantom experiments to initial clinical experience. PLoS ONE 2020, 15, e0239511. [Google Scholar] [CrossRef]
  11. Kalisz, K.; Buethe, J.; Saboo, S.S.; Abbara, S.; Haliburton, S.; Rajiah, P. Artifacts at cardiac CT: Physics and solutions. Radiographics 2016, 36, 2064–2083. [Google Scholar] [CrossRef]
  12. Diwakar, M.; Kumar, M. A review on CT image noise and its denoising. Biomed. Signal Process. Control 2018, 42, 73–88. [Google Scholar] [CrossRef]
  13. Zohair, A.A.; Shamil, A.A.; Sulong, G. Latest methods of image enhancement and restoration for computed tomography: A concise review. Appl. Med. Inform. 2015, 36, 1–12. [Google Scholar]
  14. Al-Ameen, Z.; Sulong, G. Attenuating noise from computed tomography medical images using a coefficients-driven total variation denoising algorithm. Int. J. Imaging Syst. Technol. 2014, 24, 350–358. [Google Scholar] [CrossRef]
  15. Silva, J.S.; Silva, A.; Santos, B.S. Image denoising methods for tumor discrimination in high-resolution computed tomography. J. Digit. Imaging 2011, 24, 464–469. [Google Scholar] [CrossRef]
  16. Van Stevendaal, U.; Von Berg, J.; Lorenz, C.; Grass, M. A motion-compensated scheme for helical cone-beam reconstruction in cardiac CT angiography. Med. Phys. 2008, 35, 3239–3251. [Google Scholar] [CrossRef]
  17. Doris, M.K.; Rubeaux, M.; Pawade, T.; Otaki, Y.; Xie, Y.; Li, D.; Tamarappoo, B.K.; Newby, D.E.; Berman, D.S.; Dweck, M.R.; et al. Motion-corrected imaging of the aortic valve with 18F-NaF PET/CT and PET/MRI: A feasibility study. J. Nucl. Med. 2017, 58, 1811–1814. [Google Scholar] [CrossRef]
  18. Elss, T.; Bippus, R.; Schmitt, H.; Ivanc, T.; Morlock, M.; Grass, M. Motion compensated reconstruction of the aortic valve for computed tomography. In Proceedings of the Medical Imaging 2018: Physics of Medical Imaging, Houston, TX, USA, 13 June 2018. [Google Scholar] [CrossRef]
  19. Jung, S.; Lee, S.; Jeon, B.; Jang, Y.; Chang, H. Deep learning based coronary artery motion artifact compensation using style-transfer synthesis in CT images. In International Workshop on Simulation and Synthesis in Medical Imaging; Springer: Cham, Switzerland, 2018; pp. 100–110. [Google Scholar]
  20. De Albuquerque, V.H.C.; Rodrigues, D.D.A.; Ivo, R.F.; Peixoto, S.A.; Han, T.; Wu, W.; Filho, P.P.R. Fast fully automatic heart fat segmentation in computed tomography datasets. Comput. Med. Imaging Graph. 2020, 80, 101674. [Google Scholar] [CrossRef]
  21. Jin, M.; Meishvili, G.; Favaro, P. Learning to extract a video sequence from a single motion-blurred image. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018. [Google Scholar]
  22. Auslander, N.; Wolf, Y.I.; Koonin, E.V. In silico learning of tumor evolution through mutational time series. Proc. Natl. Acad. Sci. 2019, 116, 9501–9510. [Google Scholar] [CrossRef] [PubMed]
  23. Benra, F.K.; Dohmen, H.J.; Pei, J.; Schuster, S.; Wan, B. A comparison of one-way and two-way coupling methods for numerical analysis of fluid-structure interactions. J. Appl. Math. 2011, 2011, 853560. [Google Scholar] [CrossRef]
  24. Joseph, P.M. An improved algorithm for reprojecting rays through pixel images. J. Comput. Assist. Tomogr. 1983, 7, 1136. [Google Scholar] [CrossRef]
  25. Sahasakul, Y.; Edwards, W.D.; Naessens, J.M.; Tajik, A.J. Age-related changes in aortic and mitral valve thickness: Implications for two-dimensional echocardiography based on an autopsy study of 200 normal human hearts. Am. J. Cardiol. 1988, 62, 424–430. [Google Scholar] [CrossRef]
  26. Berrebi, A.; Monin, J.L.; Lansac, E. Systematic echocardiographic assessment of aortic regurgitation—What should the surgeon know for aortic valve repair? Ann. Cardiothorac. Surg. 2019, 8, 331. [Google Scholar] [CrossRef]
  27. Meslem, A.; Bode, F.; Croitoru, C.; Nastase, I. Comparison of turbulence models in simulating jet flow from a cross-shaped orifice. Eur. J. Mech. B Fluids 2014, 44, 100–120. [Google Scholar] [CrossRef]
  28. Amindari, A.; Saltik, L.; Kirkkopru, K.; Yacoub, M.; Yalcin, H.C. Assessment of calcified aortic valve leaflet deformations and blood flow dynamics using fluid-structure interaction modeling. Inform. Med. Unlocked 2017, 9, 191–199. [Google Scholar] [CrossRef]
  29. Nishimura, R.A. Aortic valve disease. Circulation 2002, 106, 770–772. [Google Scholar] [CrossRef]
  30. Leyh, R.G.; Schmidtke, C.; Sievers, H.H.; Yacoub, M.H. Opening and closing characteristics of the aortic valve after different types of valve-preserving surgery. Circulation 1999, 100, 2153–2160. [Google Scholar] [CrossRef]
  31. De Man, B.; Basu, S. Distance-driven projection and backprojection in three dimensions. Phys. Med. Biol. 2004, 49, 2463. [Google Scholar] [CrossRef]
  32. Dennerlein, F.; Noo, F.; Hornegger, J.; Lauritsch, G. Fan-beam filtered-backprojection reconstruction without backprojection weight. Phys. Med. Biol. 2007, 52, 3227. [Google Scholar] [CrossRef]
  33. Parker, D.L. Optimal short scan convolution reconstruction for fan beam CT. Med. Phys. 1982, 9, 254–257. [Google Scholar] [CrossRef]
  34. Ronneberger, O.; Fischer, P.; Brox, T. U-net: Convolutional networks for biomedical image segmentation. In Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention, Munich, Germany, 5–9 October 2015. [Google Scholar]
  35. Çiçek, Ö.; Abdulkadir, A.; Lienkamp, S.S.; Brox, T.; Ronneberger, O. 3D U-Net: Learning dense volumetric segmentation from sparse annotation. In Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention, Athens, Greece, 17–21 October 2016. [Google Scholar]
  36. Roerdink, J.B.T.M.; Meijster, A. The watershed transform: Definitions, algorithms and parallelization strategies. Fundam. Inform. 2000, 41, 187–228. [Google Scholar] [CrossRef]
  37. Zhang, T.Y.; Suen, C.Y. A fast parallel algorithm for thinning digital patterns. Commun. ACM 1984, 27, 236–239. [Google Scholar] [CrossRef]
  38. Davis, A.; Bouman, K.L.; Chen, J.G.; Rubinstein, M.; Durand, F.; Freeman, W.T. Visual vibrometry: Estimating material properties from small motion in video. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Boston, MA, USA, 7–12 June 2015. [Google Scholar]
  39. Schmidt, F.; Paulun, V.C.; van Assen, J.J.R.; Fleming, R.W. Inferring the stiffness of unfamiliar objects from optical, shape, and motion cues. J. Vis. 2017, 17, 18. [Google Scholar] [CrossRef]
  40. Burman, P. A comparative study of ordinary cross-validation, v-fold cross-validation and the repeated learning-testing methods. Biometrika 1989, 76, 503–514. [Google Scholar] [CrossRef]
  41. Xie, H.; Yao, H.; Sun, X.; Zhou, S.; Zhang, S. Pix2vox: Context-aware 3d reconstruction from single and multi-view images. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Seoul, Korea, 27 October–2 November 2019. [Google Scholar]
  42. Goodfellow, I.; Pouget-Abadie, J.; Mirza, M.; Xu, B.; Warde-Farley, D.; Ozair, S.; Courville, A.; Bengio, Y. Generative adversarial nets. Adv. Neural Inf. Process. Syst. 2014, 27, 1–9. [Google Scholar] [CrossRef]
Figure 1. Overview of the proposed method. (a) Data generation process. First, various simulation parameters were input into the motion simulator to generate different motion videos. The videos were then converted to video stream data by the Vid2im module, and the amount of video stream data was increased by data augmentation. Finally, motion-blurred CT images were generated by the CT simulator from the augmented video stream data, and motion images were sampled evenly from the video stream data used in CT imaging. (b) DNN training process. The DNN was trained on a dataset of motion-blurred CT images and their corresponding motion images.
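The last step of the pipeline in Figure 1a, sampling 60 motion images evenly from the video stream used during one simulated CT acquisition, can be sketched as follows. This is an illustration only, not the authors' code; the frame count and image size in the usage line are hypothetical.

```python
import numpy as np

def sample_motion_images(frames, n_out=60):
    """Evenly sample n_out motion images from the video stream frames
    that were used during one simulated CT acquisition (Figure 1a)."""
    idx = np.linspace(0, len(frames) - 1, num=n_out).astype(int)
    return [frames[i] for i in idx]

# Usage with hypothetical data: one acquisition spanning 480 frames of 256x256 pixels.
frames = [np.zeros((256, 256), dtype=np.uint8) for _ in range(480)]
assert len(sample_motion_images(frames)) == 60
```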
Figure 2. Data generation process. (a) A sample of the data. (b) Geometry of the aortic valves. (c) Inlet velocity profile used in the two-way fluid–structure interaction (2-way FSI) simulation. (d) An image from the video stream data. (e) Geometry of the equiangular fan-beam CT. (f) Relationship between the position of the images and the CT simulator. (g) Relative timing of the CT measurement with respect to the aortic valve motion. The scan start moment was selected randomly to generate motion-blurred CT images.
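The random scan-start selection of Figure 2g amounts to drawing a random acquisition window from the motion cycle. A minimal sketch is given below; the total frame count and the number of frames spanned by one scan are placeholders, not values from the paper.

```python
import numpy as np

def pick_scan_window(n_frames_total, n_frames_per_scan, rng):
    """Randomly choose the start of the CT acquisition relative to the valve
    motion (Figure 2g) and return the frame indices covered by one scan."""
    start = rng.integers(0, n_frames_total - n_frames_per_scan + 1)
    return np.arange(start, start + n_frames_per_scan)

rng = np.random.default_rng(0)
window = pick_scan_window(n_frames_total=1000, n_frames_per_scan=480, rng=rng)
```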
Figure 3. Architecture of the proposed network. The meaning of each symbol is given at the bottom right of the figure.
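The full architecture is specified in Figure 3 and is not reproduced here. For orientation only, the following is a toy U-Net-style encoder-decoder in PyTorch that maps a single-channel motion-blurred CT image to 60 output channels, one per inferred motion image; the layer widths and depth are our assumptions, not the authors' network.

```python
import torch
import torch.nn as nn

def block(c_in, c_out):
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(c_out, c_out, 3, padding=1), nn.ReLU(inplace=True))

class BlurToMotionNet(nn.Module):
    """Toy encoder-decoder: one blurred CT image in, 60 motion images out."""
    def __init__(self):
        super().__init__()
        self.enc1, self.enc2 = block(1, 32), block(32, 64)
        self.pool = nn.MaxPool2d(2)
        self.up = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec1 = block(64, 32)
        self.head = nn.Conv2d(32, 60, 1)  # one channel per inferred motion image

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        d1 = self.dec1(torch.cat([self.up(e2), e1], dim=1))
        return self.head(d1)

y = BlurToMotionNet()(torch.randn(1, 1, 256, 256))  # -> (1, 60, 256, 256)
```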
Figure 4. Details of the motion feature extraction. (a) Skeletonization process. (b) Endpoint distance extraction; the red circles mark the endpoints of the aortic valve. (c) Swept area velocity calculation.
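A minimal sketch of the feature extraction in Figure 4, assuming standard morphological tools: skeletonize the leaflet mask, detect endpoints as skeleton pixels with exactly one skeleton neighbour, and approximate the area swept between adjacent motion images by the XOR of the two masks. The XOR approximation and the helper names are ours, not necessarily the authors' exact definitions.

```python
import numpy as np
from scipy.ndimage import convolve
from skimage.morphology import skeletonize

def leaflet_endpoints(leaflet_mask):
    """Skeletonize a binary leaflet mask (Figure 4a) and return endpoint
    coordinates: skeleton pixels with exactly one neighbour (Figure 4b)."""
    skel = skeletonize(leaflet_mask.astype(bool))
    # 3x3 neighbourhood sum minus the centre pixel = number of skeleton neighbours
    neighbours = convolve(skel.astype(int), np.ones((3, 3), dtype=int),
                          mode='constant') - skel
    return np.argwhere(skel & (neighbours == 1))

def swept_area_velocity(mask_t0, mask_t1, dt_s, pixel_area_mm2=0.25):
    """Approximate the swept area velocity (Figure 4c) between two adjacent
    motion images as the XOR area of the masks divided by the frame interval.
    With the 0.5 mm/pixel resolution of Table 4, one pixel covers 0.25 mm^2."""
    swept_pixels = np.logical_xor(mask_t0, mask_t1).sum()
    return swept_pixels * pixel_area_mm2 / dt_s  # mm^2/s
```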
Figure 5. Two examples of the prediction results of the trained model. The image on the left is a motion-blurred CT image; representative images were selected from each motion phase. The first row shows ground-truth images and their color-converted versions, the second row shows predicted images and their color-converted versions, and the third row shows partially enlarged images and overlapped images. In the overlapped images, correctly predicted pixels turn yellow or remain black, whereas mispredicted pixels remain red or green. (a) An example of good performance of the trained model. (b) An example of poor performance; the model failed on images in the rapid opening phase.
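One plausible way to reproduce the colour coding described in the caption is to place the ground truth in the red channel and the prediction in the green channel; this is a sketch of the visualization, not necessarily how the figure was produced.

```python
import numpy as np

def overlap_image(gt_mask, pred_mask):
    """Overlap view as in Figure 5: correct foreground turns yellow
    (red + green), correct background stays black, and errors remain
    red (missed pixels) or green (spurious pixels)."""
    rgb = np.zeros(gt_mask.shape + (3,), dtype=np.uint8)
    rgb[..., 0] = np.where(gt_mask, 255, 0)    # ground truth -> red channel
    rgb[..., 1] = np.where(pred_mask, 255, 0)  # prediction   -> green channel
    return rgb
```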
Figure 6. Results of the motion feature evaluation. (a) Distribution of the maximum opening distance (mm). (b) Distribution of the maximum-swept area velocity (mm²/s). (c) Distribution of the opening time (ms). The black points are outliers that differ significantly from the ground truth; they account for approximately 6.6% of the total.
Figure 7. An example of the relationship between velocity and accuracy. (a) Opening distance profile of the endpoints. (b) Swept area velocity profile. (c) SSIM profile.
Figure 8. Four-fold cross-validation.
Table 1. Geometric parameters of aortic valves (mean ± std).

Parameter | Size (mm)
Thickness | 0.42 ± 0.12
eH (effective height) | 9.5 ± 1.4
Ad (aorto-ventricular diameter) | 21.0 ± 2.8
Sd (sinus Valsalva diameter) | 28.5 ± 3.5
STJ (sinotubular junction) | 25.0 ± 3.7
Table 2. Material properties of aortic valves.

Parameter | Range for random selection
Density (kg/m³) | 1000, 1020, 1056, 1060, 1100, 1250
Young's modulus (MPa) | 2–8
Poisson ratio | 0.49, 0.4, 0.3, 0.45
Table 3. CT values and normalized pixel values of each component in the motion image.

Component | CT value (HU) | Pixel value
Vessels and aortic valve | 118 | 92
Blood flow | 330 | 255
Background tissue | 52 | 38
Table 4. Parameters of the fan-beam CT simulator.

Parameter | Value
Fan angle | 60°
Rotation angle increment | 0.5°
Fan-beam spacing | 0.25°
Time required for one image | 183 ms
Spatial resolution | 0.5 mm/pixel
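These settings allow a back-of-envelope consistency check. Assuming the simulator performs a short scan covering 180° plus the fan angle (the short-scan assumption is ours and is not stated in Table 4), one image requires 480 projections, about 0.38 ms per projection:

```python
# Back-of-envelope check of the Table 4 settings, assuming a short scan
# covering 180 deg + fan angle (an assumption, not stated in Table 4).
fan_angle_deg = 60.0
rotation_increment_deg = 0.5
time_per_image_ms = 183.0

scan_range_deg = 180.0 + fan_angle_deg                        # 240 deg
n_projections = int(scan_range_deg / rotation_increment_deg)  # 480 views
time_per_view_ms = time_per_image_ms / n_projections          # ~0.381 ms/view
print(n_projections, round(time_per_view_ms, 3))
```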
Table 5. The distribution of correct predictions and failed predictions.

Probability | Images in the closed phase (N = 989) | Images in the rapid opening phase (N = 1000) | Images in the slow closing phase (N = 1011)
Correct prediction | 92.3% | 68.6% | 83.3%
Failed prediction | 7.7% | 31.4% | 16.7%
Table 6. Evaluation results of SSIM and PSNR (N: the number of images, mean ± std).

Images in different phases | SSIM | PSNR (dB)
All images (N = 48,000) | 0.97 ± 0.01 | 36.0 ± 1.3
Images in the closed phase (N = 15,983) | 0.98 ± 0.01 | 36.4 ± 1.1
Images in the rapid opening phase (N = 16,000) | 0.97 ± 0.01 | 35.5 ± 1.5
Images in the slow closing phase (N = 16,017) | 0.97 ± 0.01 | 35.9 ± 1.1
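SSIM and PSNR as reported here can be computed with standard library routines. A minimal sketch, assuming 8-bit grayscale images (data_range = 255); the synthetic images in the usage lines are stand-ins:

```python
import numpy as np
from skimage.metrics import structural_similarity, peak_signal_noise_ratio

def evaluate_pair(gt, pred):
    """SSIM and PSNR between a ground-truth motion image and its prediction,
    assuming 8-bit grayscale images."""
    return (structural_similarity(gt, pred, data_range=255),
            peak_signal_noise_ratio(gt, pred, data_range=255))

# Usage with synthetic stand-in images.
rng = np.random.default_rng(0)
gt = rng.integers(0, 256, size=(128, 128)).astype(np.uint8)
pred = np.clip(gt.astype(int) + rng.integers(-2, 3, size=gt.shape),
               0, 255).astype(np.uint8)
ssim, psnr = evaluate_pair(gt, pred)
```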
Table 7. Evaluation results of MDE, MVE, and OTE (N: the number of motion images, mean ± std).

Evaluation metric | Value
MDE, maximum opening distance error (N = 800) | 0.7 ± 0.6 mm
MVE, maximum-swept area velocity error (N = 800) | 393.3 ± 423.3 mm²/s
OTE, opening time error (N = 800) | 5.5 ± 5.5 ms
Table 8. Geometric parameters of aortic valves with eight motions in the test dataset.

Motion No. | Thickness (mm) | eH (mm) | Ad (mm) | STJ (mm) | Sd (mm)
1 | 0.39 | 8.4 | 17.7 | 20.6 | 26.7
2 | 0.54 | 9.5 | 20.2 | 24.6 | 29.7
3 | 0.24 | 9.2 | 19.4 | 26.3 | 29.6
4 | 0.54 | 9.8 | 21.3 | 25.1 | 30.3
5 | 0.39 | 10.1 | 21.4 | 24.5 | 29.8
6 | 0.60 | 9.8 | 21.2 | 24.6 | 29.9
7 | 0.46 | 8.6 | 18.7 | 19.4 | 25.9
8 | 0.35 | 8.7 | 18.4 | 25.5 | 29.9
Table 9. Material properties of aortic valves with eight motions in the test dataset.

Motion No. | Density (kg/m³) | Young's modulus (MPa) | Poisson ratio
1 | 1250 | 4.0 | 0.40
2 | 1100 | 3.8 | 0.45
3 | 1020 | 5.4 | 0.49
4 | 1250 | 6.8 | 0.45
5 | 1000 | 4.4 | 0.49
6 | 1000 | 6.1 | 0.49
7 | 1000 | 7.5 | 0.49
8 | 1060 | 2.7 | 0.45
Table 10. Maximum opening distance and maximum-swept area velocity of aortic valves with eight motions (dmax: maximum opening distance of ground truth; dmax_p: maximum opening distance of prediction; Vmax: maximum-swept area velocity of ground truth; Vmax_p: maximum-swept area velocity of prediction; mean ± std).

Motion No. | dmax (mm) | dmax_p (mm) | Vmax (10³ mm²/s) | Vmax_p (10³ mm²/s)
1 | 10.9 | 10.6 ± 0.6 | 1.6 | 1.4 ± 0.3
2 | 12.0 | 11.7 ± 0.5 | 1.8 | 1.7 ± 0.2
3 | 15.2 | 13.9 ± 0.8 | 2.4 | 1.9 ± 0.4
4 | 12.2 | 11.8 ± 0.5 | 1.8 | 1.8 ± 0.2
5 | 13.9 | 13.5 ± 0.6 | 2.4 | 2.1 ± 0.4
6 | 11.8 | 11.3 ± 0.7 | 1.7 | 1.7 ± 0.3
7 | 10.0 | 9.6 ± 0.7 | 1.3 | 1.2 ± 0.3
8 | 13.1 | 12.6 ± 0.7 | 2.1 | 1.7 ± 0.3
Table 11. The swept area velocity and SSIM.

Swept area velocity (mm²/s) | SSIM
0–200 | 0.974
200–400 | 0.972
400–600 | 0.971
600–800 | 0.970
800–1000 | 0.968
1000–1200 | 0.966
1200–1400 | 0.965
1400–1600 | 0.963
1600–1800 | 0.962
1800–2000 | 0.962
2000–2200 | 0.962
2200–2400 | 0.962
Table 12. Results of four-fold cross-validation (mean ± std).

Fold | SSIM | PSNR (dB)
Fold 1 | 0.988 ± 0.005 | 36.534 ± 1.464
Fold 2 | 0.990 ± 0.004 | 37.267 ± 1.396
Fold 3 | 0.989 ± 0.003 | 36.921 ± 1.286
Fold 4 | 0.990 ± 0.003 | 37.486 ± 1.219
Average | 0.989 ± 0.004 | 37.052 ± 1.341
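A minimal sketch of the four-fold split behind Figure 8 and Table 12, using scikit-learn's KFold; the dataset size and the shuffling seed are placeholders, not values from the paper.

```python
import numpy as np
from sklearn.model_selection import KFold

# Hypothetical dataset of 800 samples (the count is a placeholder); each fold
# serves once as the validation set while training uses the other three.
indices = np.arange(800)
for fold, (train_idx, val_idx) in enumerate(
        KFold(n_splits=4, shuffle=True, random_state=0).split(indices), start=1):
    print(f"Fold {fold}: train on {len(train_idx)}, validate on {len(val_idx)}")
```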