Article

Multi-Domain Neumann Network with Sensitivity Maps for Parallel MRI Reconstruction

Jun-Hyeok Lee, Junghwa Kang, Se-Hong Oh and Dong Hye Ye
1 Department of Biomedical Engineering, Hankuk University of Foreign Studies, Yongin 17035, Korea
2 Department of Electrical and Computer Engineering, Marquette University, Milwaukee, WI 53323, USA
* Authors to whom correspondence should be addressed.
Sensors 2022, 22(10), 3943; https://doi.org/10.3390/s22103943
Submission received: 13 April 2022 / Revised: 18 May 2022 / Accepted: 19 May 2022 / Published: 23 May 2022

Abstract

MRI is an imaging technology that non-invasively obtains high-quality medical images for diagnosis. However, MRI has the major disadvantage of long scan times, which cause patient discomfort and image artifacts. To reduce the long scan time, the parallel MRI method, which reconstructs a high-fidelity MR image from under-sampled multi-coil k-space data, is widely used. In this study, we propose a method to reconstruct a high-fidelity MR image from under-sampled multi-coil k-space data using deep learning. The proposed multi-domain Neumann network with sensitivity maps (MDNNSM) is based on the Neumann network and uses a forward model that includes coil sensitivity maps for parallel MRI reconstruction. The MDNNSM consists of three main structures: the CNN-based sensitivity reconstruction block estimates coil sensitivity maps from multi-coil under-sampled k-space data; the recursive MR image reconstruction block reconstructs the MR image; and the skip connection accumulates each output and produces the final result. Experiments using the fastMRI T1-weighted brain image dataset were conducted at acceleration factors of 2, 4, and 8. Qualitative and quantitative experimental results show that the proposed MDNNSM reconstructs MR images more accurately than other methods, including the generalized autocalibrating partially parallel acquisitions (GRAPPA) method and the original Neumann network.

1. Introduction

Magnetic resonance imaging (MRI) is one of the most widely used medical imaging technologies. It is non-invasive and involves no radiation exposure, unlike X-ray and computed tomography (CT), so it is harmless to the human body. MRI uses the principle of nuclear magnetic resonance (NMR) to image the inside of the human body: strong magnetic fields and radio-frequency electromagnetic waves excite the hydrogen nuclei (protons) in the body, which then relax and emit a signal. Since the proton density differs between tissues, the intensity of the emitted signal varies with scan parameters such as the repetition time and the echo time. Therefore, by setting the scan parameters appropriately, contrasts suitable for a precise diagnosis can be acquired. The MR signal is acquired in the frequency domain and transformed to the spatial domain. Specifically, the MR signal collected by the radio frequency (RF) antenna is complex-valued and is sampled in k-space, which contains spatial-frequency and phase information. After the k-space data are acquired, they are reconstructed into an MR image in the spatial domain using the Fourier transform [1]. The conventional MRI acquisition process has physical speed limits because the k-space data must be acquired sequentially, so it requires a long scan time. In addition, the longitudinal relaxation time (T1) increases with the external magnetic field strength, which further lengthens the scan. Long scan times make patients uncomfortable, and artifacts are generated by patient motion or by the uncontrollable flow of water in the body (e.g., blood flow) during the scan. One way to reduce the long MR scan time is to acquire the k-space data by under-sampling. Under-sampling reduces the scan time because only part of the k-space is acquired, but it also causes aliasing artifacts due to the insufficient sampling rate.
Various methods have been proposed over the years to reconstruct an artifact-free MR image from under-sampled data. One of them is parallel imaging (PI) [2], which increases efficiency and accuracy in reconstruction. PI reconstructs under-sampled MR signals using the coil sensitivities of multiple receiver RF coils to generate an artifact-free MR image. Multi-receiver RF coils have different sensitivity profiles depending on their spatial location, so measuring the MR signal with multiple receiver coils is equivalent to performing additional sensitivity encoding. Multi-coil MR data with different spatial sensitivity profiles help the reconstruction process of mapping under-sampled k-space data to fully sampled MR images. The key requirement in PI is to effectively remove the aliasing artifacts caused by violating the Nyquist criterion. Multiple methods have been proposed to reconstruct an artifact-free MR image from multi-coil under-sampled k-space data [3,4,5,6,7]. One approach reconstructs the MR image using coil sensitivity maps in the image domain [3]; the coil sensitivity maps are either obtained in advance or calculated from the acquired k-space data. Another approach interpolates the missing data of each coil in the k-space domain and then combines the multi-coil data [5]. These PI methods are popular and widely used. However, it is challenging for conventional reconstruction methods to remove aliasing artifacts at high acceleration factors such as 4 or 8.
In recent years, deep learning has been applied to image restoration tasks such as denoising [8,9,10], super-resolution [11,12,13,14], and inpainting [15,16,17], and it has been shown to work effectively. Subsequently, model-based deep learning showed excellent performance by explicitly modeling the image degradation operator as a forward model and solving the resulting inverse problem to estimate the clean image [18,19,20]. In particular, the Neumann network outperformed standard unrolled network architectures, such as model-based reconstruction with deep learned priors (MoDL), by directly incorporating the forward model into the network optimization [19]. Deep learning-based methods also show great promise in parallel MRI reconstruction when a high acceleration factor is used [21,22,23,24,25,26,27,28,29]. In addition, model-based deep learning, which formulates parallel MRI reconstruction as an inverse problem and implements a forward model using prior knowledge including coil sensitivity maps, shows excellent performance [30,31,32,33]. The coil sensitivity maps used in model-based deep learning for parallel MRI reconstruction are obtained in advance or calculated from the auto-calibration signal (ACS) lines of the MR data, using an estimation method such as ESPIRiT [7]. Unfortunately, additional MR scanning to obtain coil sensitivity maps increases the scan time, and estimation methods such as ESPIRiT produce low-accuracy sensitivity maps when the acceleration factor is high or when the ACS lines are few. Therefore, it is desirable to estimate not only the MR image but also the coil sensitivity maps with deep learning [31,32].
In this study, we propose a multi-domain Neumann network with sensitivity maps (MDNNSM) that takes into account both image-domain and k-space-domain denoising by combining the Neumann network's regularization block with coil sensitivity maps. The MDNNSM consists of the data consistency block, regularization block, and skip connections of the standard Neumann network, plus an added sensitivity map reconstruction block. The new regularization block consists of two convolutional neural networks (CNNs) that reconstruct an MR image in both the image and frequency domains. The added sensitivity map reconstruction block maps multi-coil under-sampled k-space data to coil sensitivity maps, which are used in the forward model of parallel MRI reconstruction. By integrating k-space-domain regularization and coil sensitivity map estimation into the Neumann network, we achieved a significant improvement in reconstruction quality compared with the standard Neumann network [19].

2. Related Work

2.1. Parallel MRI Reconstruction Formulation

When acquiring MR signals, multi-receiver RF coils measure the signals in the frequency domain, called k-space. The MR image can be obtained by applying an inverse Fourier transform (IFT) to the acquired k-space data. Letting y denote the measured under-sampled k-space data and x the MR image to be reconstructed, the MRI acquisition process can be formulated as follows:
y = F x + ϵ,   (1)
where F is the Fourier transform (FT) and ϵ is the measurement noise. In parallel MRI, the MR scanner has multiple receiver RF coils, and each coil has a different sensitivity map depending on its location. Each coil acquires k-space data in which the MR signal of x is weighted by that coil's sensitivity map. Therefore, the MR signal acquisition in parallel MRI can be expressed as follows:
A = M F S,   (2)
y = A x + ϵ,   (3)
where A is the forward operator consisting of the FT F, the coil sensitivity maps S, and the under-sampling mask operator M. Treating Equation (3) as an inverse problem, x can be obtained from the measured y. Unfortunately, since parallel MRI reconstruction is an ill-posed problem, there is no closed-form solution, so the optimal solution is instead found by minimizing the following regularized least-squares problem:
x = argmin_x (1/2) ‖A x − y‖_2^2 + λ R(x),   (4)
where λ is a regularization weight and R(·) is a regularization term. The regularizer R limits the degrees of freedom of the solution by using prior knowledge about the MR image to be reconstructed. Classically, in the field of image reconstruction, total variation [34,35], the L1-norm of wavelet coefficients [36], etc., are used as regularizers. In compressed sensing (CS) MRI [37], the L1-norm in the wavelet domain is used as a regularization function to exploit image sparsity. Equation (4) can be solved by a gradient descent or a conjugate gradient algorithm. If it is solved using gradient descent, x is updated at the t-th step as follows:
x_t = x_{t−1} − λ (A^H (A x_{t−1} − y) + ∇R(x_{t−1})).   (5)
Equation (5) calculates the approximate solution iteratively.
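For concreteness, the sketch below (written in Python with PyTorch, not the authors' code) shows how the parallel MRI forward operator A = M F S of Equations (2) and (3) and one gradient-descent step of Equation (5) could be implemented; the sensitivity maps S, the sampling mask, and the regularizer gradient grad_R are assumed inputs.

```python
import torch

# Sketch (not the authors' code) of the forward model A = M F S from Equations (2)-(3)
# and one gradient-descent step of Equation (5). S: coil sensitivity maps (coils, H, W),
# mask: binary sampling mask (H, W), grad_R: gradient of the regularizer (assumed callable).

def A(x, S, mask):
    # image -> under-sampled multi-coil k-space
    return mask * torch.fft.fft2(S * x, dim=(-2, -1))

def A_adjoint(k, S, mask):
    # adjoint A^H: multi-coil k-space -> coil-combined image
    return (S.conj() * torch.fft.ifft2(mask * k, dim=(-2, -1))).sum(dim=0)

def gradient_step(x, y, S, mask, grad_R, lam=0.1):
    data_grad = A_adjoint(A(x, S, mask) - y, S, mask)   # gradient of the data-fidelity term
    return x - lam * (data_grad + grad_R(x))            # update of Equation (5)
```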

2.2. Deep Learning for Parallel MRI Reconstruction

In the past few years, many deep learning-based parallel MRI reconstruction methods have been proposed [21,22,23,24,25,26,27,28,29]. Model-based deep learning architectures have recently gained popularity and achieved state-of-the-art performance [30,31,32,33]. These model-based methods implement the regularizer R in Equation (5) as a deep neural network and reconstruct an MR image using an unrolled optimization:
R(x) = CNN(x).   (6)
CNN-based regularization with a deep structure and nonlinear functions is also called data-driven regularization and can approximate the prior better than classical regularization. In addition, since the CNN learns the prior from the training data, the more data available, the more helpful the prior is for image reconstruction. For the model-based method, the coil sensitivity maps S are used and thus need to be obtained or estimated. When coil sensitivity maps are obtained through additional MR scans, the scan time increases and the advantage of parallel imaging decreases. Another option is to estimate the coil sensitivity maps from the acquired multi-coil k-space data. However, with conventional methods such as ESPIRiT, estimating accurate sensitivity maps becomes less likely as the acceleration factor increases. Recently, methods that reconstruct not only MR images but also sensitivity maps using deep learning have shown great promise [31,32].
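As an illustration of Equation (6), the following is a minimal stand-in for a learned regularizer. The paper itself uses a U-Net [39]; this small plain CNN, operating on 2-channel (real/imaginary) images, is only a sketch of the idea.

```python
import torch.nn as nn

# Minimal stand-in for the learned regularizer R(x) = CNN(x) of Equation (6).
# The paper uses a U-Net [39]; this small plain CNN is only an illustrative sketch.
class SimpleRegularizer(nn.Module):
    def __init__(self, channels=2):          # 2 channels: real and imaginary parts
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, 32, kernel_size=3, padding=1),
            nn.LeakyReLU(0.2),
            nn.Conv2d(32, 32, kernel_size=3, padding=1),
            nn.LeakyReLU(0.2),
            nn.Conv2d(32, channels, kernel_size=3, padding=1),
        )

    def forward(self, x):
        # returns the learned correction term used in the unrolled update
        return self.net(x)
```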

2.3. Neumann Network

A Neumann network [19] is a deep neural network proposed to solve linear inverse problems in image processing. It introduces a Neumann series expansion of the inverse problem and estimates the solution by expanding it as follows. The normal form of Equation (3) with the regularizer R is:
x = (A^H A + R)^{−1} A^H y.   (7)
Using Neumann series expansion, Equation (7) can be expanded as:
x = Σ_{j=0}^{∞} (I − λ A^H A − λ R)^j (λ A^H y),   (8)
where λ represents trainable parameters. By truncating the series at N, we obtain:
x = Σ_{j=0}^{N} (I − λ A^H A − λ R)^j (λ A^H y),   (9)
where the regularizer R is a trainable neural network. Equation (9) can be written in recursive form:
x_0 = λ A^H y,   (10)
x_j = x_{j−1} − λ A^H A x_{j−1} − R(x_{j−1}).   (11)
Then:
x̂ = Σ_{j=0}^{N} x_j.   (12)
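A compact sketch of the truncated Neumann iteration in Equations (10)–(12) follows; A, AH, and R are assumed callables for the forward operator, its adjoint, and the learned regularizer, and the loop length N plays the role of the truncation order.

```python
import torch

def neumann_reconstruction(y, A, AH, R, lam=1.0, N=6):
    """Truncated Neumann series of Equations (10)-(12) (a sketch, not the authors' code).

    A / AH: forward operator and its adjoint (e.g., as sketched in Section 2.1);
    R: the learned regularizer; N: the truncation order.
    """
    x = lam * AH(y)                    # x_0 = lambda * A^H y   (Equation (10))
    x_hat = x.clone()                  # running sum over the skip connections
    for _ in range(N):
        x = x - lam * AH(A(x)) - R(x)  # Equation (11)
        x_hat = x_hat + x              # Equation (12)
    return x_hat
```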

3. Multi-Domain Neumann Network with Sensitivity Maps

Unlike the standard Neumann network, the proposed multi-domain Neumann network with sensitivity maps (MDNNSM) incorporates coil sensitivity maps corresponding to the multi-coil k-space data into the forward model for parallel MRI reconstruction. In this study, the coil sensitivity maps are estimated with a deep learning-based method instead of a conventional method such as ESPIRiT. Figure 1 illustrates the overall architecture of the MDNNSM. The network consists of a sensitivity map estimation block, a data consistency block, a regularization block, and skip connections. The sensitivity map estimation block estimates coil sensitivity maps from multi-coil k-space data using a CNN. The data consistency and regularization blocks reconstruct an MR image using the forward model and a CNN. The skip connections accumulate the output of each iteration and produce the final output of the network.

3.1. Sensitivity Maps Estimation

Since we use the forward model of parallel MRI reconstruction in Equation (3) for the Neumann network, we estimate coil sensitivity maps from the multi-coil k-space data that are input to the model. We use a CNN to estimate the coil sensitivity maps, which can be written as:
S = CNN_C(F^{−1}(M_ACS y)),   (13)
where M_ACS is the ACS-line mask operator and CNN_C is the CNN used to estimate the coil sensitivity maps. The multi-coil k-space data are zero-filled everywhere except the low-frequency ACS lines, transformed into the image domain by the IFT, and then used as the input of CNN_C [31]. The coil sensitivity maps estimated in this way are used in the forward model operation.
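The following sketch illustrates Equation (13); cnn_c is a placeholder for CNN_C, and the final normalization of the maps is an assumption of this sketch, made so that the coil-combine operator behaves as a left inverse.

```python
import torch

def estimate_sensitivity_maps(kspace, acs_mask, cnn_c):
    """Sketch of Equation (13). kspace: complex (coils, H, W); acs_mask: binary (H, W);
    cnn_c: placeholder for CNN_C, assumed to map coil images to refined coil images."""
    acs_kspace = kspace * acs_mask                          # keep only the ACS lines
    coil_imgs = torch.fft.ifft2(acs_kspace, dim=(-2, -1))   # low-resolution coil images (IFT)
    maps = cnn_c(coil_imgs)                                 # CNN-estimated sensitivity maps
    # normalization to sum_i |S_i|^2 = 1 is an assumption of this sketch
    rss = torch.sqrt((maps.abs() ** 2).sum(dim=0, keepdim=True)).clamp_min(1e-12)
    return maps / rss
```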

3.2. MR Image Reconstruction

Based on Equation (11), an MR image is reconstructed using a data consistency block with the forward model and a CNN-based regularization block. Figure 2 illustrates the details of the CNN-based regularization block, which reconstructs an MR image with regularization operating in parallel in both the image domain and the frequency domain. With this, the CNN-based regularization R of Equation (11) is formulated as follows:
R(x) = CNN_I(x) + F^{−1}(CNN_F(F x)),   (14)
where CNN_I and CNN_F represent the CNN-based regularizations that reconstruct the MR image in the image domain and the frequency domain, respectively. To reconstruct the MR image in the frequency domain, the FT is applied to the MR image and used as the input of CNN_F; the IFT is then applied to the output.
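A sketch of the multi-domain regularizer in Equation (14) follows; cnn_i and cnn_f stand for the image-domain and k-space-domain U-Nets and are assumed here to accept complex-valued tensors directly.

```python
import torch

def multi_domain_regularizer(x, cnn_i, cnn_f):
    """Sketch of Equation (14): R(x) = CNN_I(x) + F^{-1}(CNN_F(F x)).

    cnn_i and cnn_f are placeholders for the image-domain and k-space-domain U-Nets,
    assumed here to operate on complex-valued tensors."""
    image_branch = cnn_i(x)                                   # denoising in the image domain
    k = torch.fft.fft2(x, dim=(-2, -1))                       # FT of the current image estimate
    kspace_branch = torch.fft.ifft2(cnn_f(k), dim=(-2, -1))   # interpolation in k-space, then IFT
    return image_branch + kspace_branch
```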

3.3. K-Space Domain Accumulation

The standard Neumann network derives the final output by adding the initial value and the iteration outputs in the image domain [38]. Because the proposed MDNNSM handles data in the k-space domain, Equations (10)–(12) are rewritten by applying F S to both sides, so that the MR image is mapped to multi-coil k-space data:
k_0 = λ M y,   (15)
k_j = k_{j−1} − λ M k_{j−1} − F S R(S^H F^{−1} k_{j−1}),  j = 1, …, N,   (16)
k̂ = Σ_{j=0}^{N} k_j.   (17)
Finally, since the final output is multi-coil k-space data, it is transformed into the image domain with the IFT and combined into an MR image with the root sum of squares (RSS):
x̂ = ( Σ_{i=1}^{N_c} |F^{−1} k̂_i|^2 )^{1/2},   (18)
where k̂_i is the k-space data of the i-th coil in the final output and N_c is the number of coils.
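Putting Equations (15)–(18) together, a sketch of the k-space-domain recursion and the final RSS combine is shown below; the reduce/expand operators use the adjoint S^H in place of S^{−1}, which is valid when the estimated maps are normalized (an assumption of this sketch).

```python
import torch

def mdnnsm_kspace_recursion(y, mask, S, R, lam=1.0, N=6):
    """Sketch of Equations (15)-(18): k-space-domain Neumann recursion and RSS combine.

    y: under-sampled multi-coil k-space (coils, H, W); mask: sampling mask (H, W);
    S: estimated sensitivity maps; R: the multi-domain regularizer. The reduce/expand
    operators below use S^H and assume normalized maps (sum_i |S_i|^2 = 1)."""
    def reduce(k):                       # multi-coil k-space -> coil-combined image
        return (S.conj() * torch.fft.ifft2(k, dim=(-2, -1))).sum(dim=0)

    def expand(x):                       # coil-combined image -> multi-coil k-space
        return torch.fft.fft2(S * x, dim=(-2, -1))

    k = lam * mask * y                                   # Equation (15)
    k_hat = k.clone()
    for _ in range(N):
        k = k - lam * mask * k - expand(R(reduce(k)))    # Equation (16)
        k_hat = k_hat + k                                # Equation (17)
    coil_imgs = torch.fft.ifft2(k_hat, dim=(-2, -1))
    return torch.sqrt((coil_imgs.abs() ** 2).sum(dim=0))  # Equation (18): RSS combine
```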

4. Experiments

We compare our MDNNSM with the zero-filled method, the GRAPPA algorithm [5], U-Net [39], and the Neumann network [19]. Zero-filled MR images are used as the input of U-Net. The Neumann network reconstructs an MR image from multi-coil k-space data without considering the sensitivity maps, using CNN-based regularization in the image domain only. The reference image and the reconstructed images are normalized from 0 to 4095, and a difference map depicting the difference between the reference image and the reconstructed image is visualized. We evaluate the MR images reconstructed by each method quantitatively using the normalized mean squared error (NMSE) and the structural similarity index (SSIM).
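For reference, the NMSE can be computed as below (the standard fastMRI-style definition is assumed); SSIM is computed with the usual windowed formulation of [41].

```python
import numpy as np

def nmse(reference, reconstruction):
    """Normalized mean squared error; the standard fastMRI-style definition is assumed."""
    return np.linalg.norm(reconstruction - reference) ** 2 / np.linalg.norm(reference) ** 2
```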

4.1. Implementation Details

We split the complex-valued k-space data into two channels, a real channel and an imaginary channel, and concatenate them in the channel dimension so that they can be treated as real values. For example, 16-coil complex-valued data are treated as 32-channel data. Additionally, since the ground-truth target is a real-valued MR image, we calculate the loss between the target and the magnitude of the complex-valued final output.
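A sketch of this real/imaginary channel splitting (assuming a complex tensor of shape (coils, H, W)):

```python
import torch

def complex_to_channels(kspace):
    """Split complex multi-coil k-space (coils, H, W) into real-valued channels.

    Returns a (2 * coils, H, W) tensor, e.g., 16 coils -> 32 channels, as described above."""
    return torch.cat([kspace.real, kspace.imag], dim=0)
```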
In the MDNNSM, we use CNN_C for the sensitivity map reconstruction and CNN_I and CNN_F for the MR image reconstruction. These three CNNs (CNN_C, CNN_I, and CNN_F) are implemented with the same U-Net architecture [39]. The U-Net consists of 2D convolution layers, leaky rectified linear units with a coefficient of 0.2 for negative values, and instance normalization [40]. We set the number of iteration blocks to 6 in the MDNNSM, and λ is initialized to 1 for all blocks. We use the SSIM loss function [41] and the ADAM optimizer [42] to train our network, with a learning rate of 0.0001 for 50 epochs.
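A training-loop sketch matching this setup follows; the model, SSIM loss, and data loader are placeholders, not the authors' implementation.

```python
import torch

def train_mdnnsm(model, ssim_loss, train_loader, epochs=50, lr=1e-4):
    """Training-loop sketch for the setup described above (Adam, lr = 1e-4, 50 epochs).

    model, ssim_loss, and train_loader are placeholders for the authors' network,
    SSIM-based loss [41], and fastMRI data loader; none of them are defined here."""
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        for kspace, mask, target in train_loader:
            optimizer.zero_grad()
            output = model(kspace, mask)       # reconstructed magnitude image
            loss = ssim_loss(output, target)   # SSIM loss between output and target
            loss.backward()
            optimizer.step()
```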
We used T1-weighted MR images of the NYU fastMRI brain dataset [43], which were obtained from four different MR scanners. The T1-weighted data comprise 498 scans (7782 axial slices) for training and 169 scans (2646 axial slices) for validation. The data used in our study have the following parameters: magnetic field strength = (1.5, 3) T, number of receiver coils = (2, 4, 6, 8, 12, 14, 16, 18, 20, 24), matrix size = ((640 × 260), (640 × 272), (640 × 290), (640 × 320), (640 × 332)), resolution = ((0.69 × 0.69), (0.69 × 0.72), (0.72 × 0.72), (0.75 × 0.75)) mm², and slice thickness = (5, 7.5) mm. We used all 7782 slices for training; however, we did not use the entire validation dataset, using only 1542 slices and excluding data with ringing artifacts (as shown in Figure 3). To generate the multi-coil under-sampled k-space data used for training and validation, the multi-coil fully sampled k-space data were multiplied pixel-wise by a sampling mask. Equi-spaced sampling masks for regular under-sampling were used, as shown in Figure 4. The acceleration factors and ACS ratios for sampling are (2, 10%), (4, 8%), and (8, 4%).
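A sketch of how such an equi-spaced mask with a fully sampled ACS block could be generated (the exact line offsets used in the paper are not specified, so the placement here is an assumption):

```python
import numpy as np

def equispaced_mask(num_cols, acceleration, acs_fraction):
    """Equi-spaced phase-encoding mask with a fully sampled center (ACS) block.

    Example: acceleration=4, acs_fraction=0.08. The exact line offsets used in the
    paper are not specified, so the placement here is an assumption."""
    mask = np.zeros(num_cols, dtype=bool)
    mask[::acceleration] = True                     # regular under-sampling
    num_acs = int(round(num_cols * acs_fraction))
    start = (num_cols - num_acs) // 2
    mask[start:start + num_acs] = True              # fully sampled ACS lines in the center
    return mask
```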

4.2. Results

In Figure 5, we present the fully sampled reference image and parallel MRI reconstruction images by various FFT-based and deep learning-based methods, with an acceleration factor of 2. The zero-filled image has severe aliasing artifacts due to the effect of under-sampling, making it difficult to observe the anatomy of the brain. When comparing reconstructed MR images of GRAPPA, U-Net, the Neumann network, and the MDNNSM, aliasing artifacts are removed, and the quality of reconstruction results is comparable to the fully sampled reference image.
Figure 6 shows the fully-sampled reference image and the T1-weighted MR images reconstructed at an acceleration factor of 4 by each method. In the zero-filled image, there are more severe aliasing artifacts than in the result of Figure 5, due to the influence of the higher acceleration factor. For the same reason, the GRAPPA method also reconstructs the MR image with speckle noise. The U-Net method removes artifacts and noise well, but the image is blurred and details are lost. The Neumann network and the MDNNSM, which are model-based methods, have less blurring compared to the previous methods and reconstruct details well. In the difference map, the image reconstruction by the model-based method has a lower value, showing that it is reconstructed more similarly to the reference image. This indicates that incorporating a forward model into deep neural networks helps to improve fast MRI reconstruction.
Figure 7 shows the reconstructed MR images of each method at an acceleration factor of 8. The analysis results of Figure 7 are similar to those of Figure 6. However, the quality of the reconstructed MR image is worse than the results of Figure 5 and Figure 6, due to the influence of a higher acceleration factor. In particular, MDNNSM significantly outperforms the standard Neumann network, as highlighted in the difference image. This shows the benefit of using CNN-based sensitivity map reconstruction.
We also report quantitative evaluation scores for parallel MRI reconstruction with acceleration factors of 2, 4, and 8 in Table 1. Our proposed MDNNSM produces significantly lower NMSE and higher SSIM than other reconstruction methods, including the original Neumann network.

4.3. The Amount of Data

We compare the performance of the U-Net, Neumann network, and MDNNSM methods according to the number of patient subjects used for training. Figure 8 shows the SSIM scores of the three methods at an acceleration factor of 8 when the number of training subjects is 100, 300, and 498. Regardless of the amount of training data, the methods rank, from best to worst, as MDNNSM, Neumann network, and U-Net. Comparing the MDNNSM and U-Net, the SSIM gain of the MDNNSM is 0.0211, 0.0192, and 0.0182 for 100, 300, and 498 training subjects, respectively. This indicates that the MDNNSM, which uses the forward model, is more robust than U-Net when trained with less data.

4.4. Ablation Studies

To evaluate the efficacy of the network architecture, we evaluated variants of the MDNNSM that differ in the following three respects:
1. Sensitivity map estimation: a comparison of the performance depending on whether the sensitivity maps are estimated by ESPIRiT or by CNN_C;
2. Accumulating domain: a comparison of the performance depending on the domain in which data accumulate in the skip connections, either the image domain or the frequency domain;
3. Sharing network parameters: a comparison of the performance depending on whether the U-Net parameters of the CNN-based regularization block are shared across iterations.

4.4.1. Sensitivity Maps Estimation

Table 2 shows the quantitative performance comparison of MR image reconstruction according to the sensitivity map estimation method. The MR image reconstruction performance of the MDNNSM is better when using sensitivity maps estimated by CNN_C than when using sensitivity maps estimated by the ESPIRiT method. At all acceleration factors (2, 4, and 8), the NMSE was lower and the SSIM was higher when sensitivity maps estimated by CNN_C were used.

4.4.2. Accumulating Domain

Table 3 quantitatively shows the performance when data were accumulated in the image domain versus the k-space domain in the skip connections of the MDNNSM. At acceleration factors of 2 and 4, accumulating data in the k-space domain yielded lower NMSE and higher SSIM scores than accumulating data in the image domain. At a factor of 8, the image-domain results scored a slightly lower NMSE than the k-space-domain results; however, the SSIM of the k-space-domain results was higher than that of the image domain.

4.4.3. Sharing the Network Parameters

Table 4 shows a quantitative comparison of the performance with and without parameter sharing in the MDNNSM. Sharing the parameters means sharing the CNN-based regularization block across the iterations of the MDNNSM; if the parameters are shared, the input of every iteration is reconstructed by a CNN with the same weights. At all acceleration factors (2, 4, and 8), the MDNNSM without parameter sharing performed better, and the performance difference between the two cases increased as the acceleration factor increased.

5. Discussion

In this study, we introduced a method for parallel MRI reconstruction from under-sampled multi-coil k-space data. An MDNNSM, based on the Neumann network and with two added elements, was implemented.
First, a CNN was added to estimate the coil sensitivity maps used in the parallel MRI forward model operation. One way to obtain coil sensitivity maps without deep learning is to measure standard coil sensitivity maps from fully sampled scan data. Such maps have the advantage of a highly accurate sensitivity profile, but because fully sampled data must be acquired, the advantage of accelerated MRI that measures MR signals by under-sampling disappears. Another way is to derive coil sensitivity maps from the ACS lines, as in the ESPIRiT method. For these self-calibrated coil sensitivity maps, the accuracy of the sensitivity profile tends to decrease as the number of ACS lines decreases, and a low-accuracy sensitivity profile may lower the quality of the reconstructed MR image. Therefore, we designed not only the MR image reconstruction but also the coil sensitivity map estimation with a CNN to increase the accuracy of the sensitivity profile. Consequently, we automated the process of reconstructing an MR image from multi-coil under-sampled k-space data, and, as shown in Table 2, the resulting reconstructed MR images were better than those reconstructed using ESPIRiT-estimated maps.
Second, in the CNN-based regularization, multi-domain regularization was used to reconstruct in both the image and frequency domains. With multi-domain regularization, aliasing artifacts were removed in the image domain and interpolation was performed by estimating the missing data in the frequency domain. At all three acceleration factors (2, 4, and 8), the MDNNSM outperformed the original Neumann network and other state-of-the-art methods: the aliasing artifacts were reduced and detailed structures were reconstructed.
The MDNNSM reconstructs an MR image by directly incorporating the forward model into the network optimization. Instead of reconstructing an MR image without an explicit formula, we took an MRI-specific approach using the forward model of parallel MRI. With this approach, we were able to apply prior knowledge of parallel MRI acquisition to the deep neural network and improve the quality of the reconstructed MR image. Moreover, by applying the forward model with the estimated sensitivity maps, the multi-coil data could be mapped to a single coil-combined image. Accordingly, not only was the reconstruction accuracy increased, but the size of the processed data was also reduced, which improved memory efficiency.
In the current study, only T1-weighted brain images from the fastMRI dataset were used. MR images can have various contrasts other than T1 weighting, depending on their purpose. Therefore, it is necessary to evaluate the MDNNSM on a wider variety of data in future studies; its reconstruction performance will be evaluated on other MR images, such as T2-weighted and fluid-attenuated inversion recovery (FLAIR) images. In addition, since the accuracy of the estimated sensitivity maps strongly affects the MR image reconstruction performance, it is necessary to study and apply methods for estimating the sensitivity maps with higher accuracy.

6. Conclusions

We proposed the MDNNSM for parallel MRI reconstruction by incorporating multi-domain regularization and coil sensitivity map estimation into a Neumann network. Experimental results demonstrated that our MDNNSM achieves superior reconstruction quality compared with other FFT-based and deep learning-based parallel MRI reconstruction methods, including the original Neumann network. This means that, when parallel MRI reconstruction is performed, using a forward model with sensitivity maps increases the accuracy of the reconstructed MR image.

Author Contributions

Conceptualization, S.-H.O. and D.H.Y.; data curation, J.-H.L. and J.K.; funding acquisition, S.-H.O. and D.H.Y.; investigation, J.-H.L. and D.H.Y.; methodology, J.-H.L. and D.H.Y.; software, J.-H.L.; writing—original draft, J.-H.L.; writing—review and editing, S.-H.O. and D.H.Y. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the Ministry of Science and ICT (MSIT), Korea, under the High-Potential Individuals Global Training Program (2021-0-01553), supervised by the IITP (Institute for Information & Communications Technology Planning & Evaluation), and by the National Research Foundation of Korea (NRF) grant funded by the Korean government (MSIT) (NRF-2020R1A2C4001623).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data used in the preparation of this article were obtained from the NYU fastMRI Initiative database (https://fastmri.med.nyu.edu/) (accessed on 13 April 2022) [43,44]. As such, NYU fastMRI investigators provided data but did not participate in the analysis or writing of this report. A listing of NYU fastMRI investigators, subject to updates, can be found at: https://fastmri.med.nyu.edu/ (accessed on 13 April 2022). The primary goal of fastMRI is to test whether machine learning can aid in the reconstruction of medical images.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Paschal, C.B.; Morris, H.D. K-space in the clinic. J. Magn. Reson. Imaging 2004, 19, 145–159.
2. Heidemann, R.M.; Özsarlak, Ö.; Parizel, P.M.; Michiels, J.; Kiefer, B.; Jellus, V.; Müller, M.; Breuer, F.; Blaimer, M.; Griswold, M.A.; et al. A brief review of parallel magnetic resonance imaging. Eur. Radiol. 2003, 13, 2323–2337.
3. Pruessmann, K.P.; Weiger, M.; Scheidegger, M.B.; Boesiger, P. SENSE: Sensitivity encoding for fast MRI. Magn. Reson. Med. 1999, 42, 952–962.
4. Ying, L.; Sheng, J. Joint image reconstruction and sensitivity estimation in SENSE (JSENSE). Magn. Reson. Med. 2007, 57, 1196–1202.
5. Griswold, M.A.; Jakob, P.M.; Heidemann, R.M.; Nittka, M.; Jellus, V.; Wang, J.; Kiefer, B.; Haase, A. Generalized autocalibrating partially parallel acquisitions (GRAPPA). Magn. Reson. Med. 2002, 47, 1202–1210.
6. Lustig, M.; Pauly, J.M. SPIRiT: Iterative self-consistent parallel imaging reconstruction from arbitrary k-space. Magn. Reson. Med. 2010, 64, 457–471.
7. Uecker, M.; Lai, P.; Murphy, M.J.; Virtue, P.; Elad, M.; Pauly, J.M.; Vasanawala, S.S.; Lustig, M. ESPIRiT—An eigenvalue approach to autocalibrating parallel MRI: Where SENSE meets GRAPPA. Magn. Reson. Med. 2014, 71, 990–1001.
8. Zhang, K.; Zuo, W.; Chen, Y.; Meng, D.; Zhang, L. Beyond a gaussian denoiser: Residual learning of deep cnn for image denoising. IEEE Trans. Image Process. 2017, 26, 3142–3155.
9. Zhang, K.; Zuo, W.; Gu, S.; Zhang, L. Learning deep CNN denoiser prior for image restoration. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 3929–3938.
10. Tian, C.; Xu, Y.; Li, Z.; Zuo, W.; Fei, L.; Liu, H. Attention-guided CNN for image denoising. Neural Netw. 2020, 124, 117–129.
11. Dong, C.; Loy, C.C.; He, K.; Tang, X. Image super-resolution using deep convolutional networks. IEEE Trans. Pattern Anal. Mach. Intell. 2015, 38, 295–307.
12. Lai, W.S.; Huang, J.B.; Ahuja, N.; Yang, M.H. Deep laplacian pyramid networks for fast and accurate super-resolution. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 624–632.
13. Yamanaka, J.; Kuwashima, S.; Kurita, T. Fast and accurate image super resolution by deep CNN with skip connection and network in network. In Proceedings of the International Conference on Neural Information Processing, Guangzhou, China, 14–18 November 2017; Springer: Berlin/Heidelberg, Germany, 2017; pp. 217–225.
14. Zhang, Y.; Li, K.; Li, K.; Wang, L.; Zhong, B.; Fu, Y. Image super-resolution using very deep residual channel attention networks. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018; pp. 286–301.
15. Yan, Z.; Li, X.; Li, M.; Zuo, W.; Shan, S. Shift-net: Image inpainting via deep feature rearrangement. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018; pp. 1–17.
16. Zeng, Y.; Fu, J.; Chao, H.; Guo, B. Learning pyramid-context encoder network for high-quality image inpainting. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 1486–1494.
17. Pathak, D.; Krahenbuhl, P.; Donahue, J.; Darrell, T.; Efros, A.A. Context encoders: Feature learning by inpainting. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 2536–2544.
18. Aggarwal, H.K.; Mani, M.P.; Jacob, M. MoDL: Model-based deep learning architecture for inverse problems. IEEE Trans. Med. Imaging 2018, 38, 394–405.
19. Gilton, D.; Ongie, G.; Willett, R. Neumann networks for linear inverse problems in imaging. IEEE Trans. Comput. Imaging 2019, 6, 328–343.
20. Diamond, S.; Sitzmann, V.; Heide, F.; Wetzstein, G. Unrolled optimization with deep priors. arXiv 2017, arXiv:1705.08041.
21. Schlemper, J.; Caballero, J.; Hajnal, J.V.; Price, A.N.; Rueckert, D. A deep cascade of convolutional neural networks for dynamic MR image reconstruction. IEEE Trans. Med. Imaging 2017, 37, 491–503.
22. Eo, T.; Shin, H.; Jun, Y.; Kim, T.; Hwang, D. Accelerating Cartesian MRI by domain-transform manifold learning in phase-encoding direction. Med. Image Anal. 2020, 63, 101689.
23. Wang, S.; Su, Z.; Ying, L.; Peng, X.; Zhu, S.; Liang, F.; Feng, D.; Liang, D. Accelerating magnetic resonance imaging via deep learning. In Proceedings of the 2016 IEEE 13th International Symposium on Biomedical Imaging (ISBI), Prague, Czech Republic, 13–16 April 2016; IEEE: Piscataway, NJ, USA, 2016; pp. 514–517.
24. Du, T.; Zhang, H.; Li, Y.; Pickup, S.; Rosen, M.; Zhou, R.; Song, H.K.; Fan, Y. Adaptive convolutional neural networks for accelerating magnetic resonance imaging via k-space data interpolation. Med. Image Anal. 2021, 72, 102098.
25. Lee, D.; Yoo, J.; Tak, S.; Ye, J.C. Deep residual learning for accelerated MRI using magnitude and phase networks. IEEE Trans. Biomed. Eng. 2018, 65, 1985–1995.
26. Tavaf, N.; Torfi, A.; Ugurbil, K.; Van de Moortele, P.F. GRAPPA-GANs for Parallel MRI Reconstruction. arXiv 2021, arXiv:2101.03135.
27. Eo, T.; Jun, Y.; Kim, T.; Jang, J.; Lee, H.J.; Hwang, D. KIKI-net: Cross-domain convolutional neural networks for reconstructing undersampled magnetic resonance images. Magn. Reson. Med. 2018, 80, 2188–2201.
28. Han, Y.; Sunwoo, L.; Ye, J.C. k-space deep learning for accelerated MRI. IEEE Trans. Med. Imaging 2019, 39, 377–386.
29. Sriram, A.; Zbontar, J.; Murrell, T.; Zitnick, C.L.; Defazio, A.; Sodickson, D.K. GrappaNet: Combining parallel imaging with deep learning for multi-coil MRI reconstruction. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020; pp. 14315–14322.
30. Hammernik, K.; Klatzer, T.; Kobler, E.; Recht, M.P.; Sodickson, D.K.; Pock, T.; Knoll, F. Learning a variational network for reconstruction of accelerated MRI data. Magn. Reson. Med. 2018, 79, 3055–3071.
31. Sriram, A.; Zbontar, J.; Murrell, T.; Defazio, A.; Zitnick, C.L.; Yakubova, N.; Knoll, F.; Johnson, P. End-to-end variational networks for accelerated MRI reconstruction. In Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention, Lima, Peru, 4–8 October 2020; Springer: Berlin/Heidelberg, Germany, 2020; pp. 64–73.
32. Jun, Y.; Shin, H.; Eo, T.; Hwang, D. Joint deep model-based MR image and coil sensitivity reconstruction network (joint-ICNet) for fast MRI. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA, 20–25 June 2021; pp. 5270–5279.
33. Putzky, P.; Karkalousos, D.; Teuwen, J.; Miriakov, N.; Bakker, B.; Caan, M.; Welling, M. i-RIM applied to the fastMRI challenge. arXiv 2019, arXiv:1910.08952.
34. Block, K.T.; Uecker, M.; Frahm, J. Undersampled radial MRI with multiple coils. Iterative image reconstruction using a total variation constraint. Magn. Reson. Med. 2007, 57, 1086–1098.
35. Knoll, F.; Bredies, K.; Pock, T.; Stollberger, R. Second order total generalized variation (TGV) for MRI. Magn. Reson. Med. 2011, 65, 480–491.
36. Donoho, D.L. Compressed sensing. IEEE Trans. Inf. Theory 2006, 52, 1289–1306.
37. Lustig, M.; Donoho, D.L.; Santos, J.M.; Pauly, J.M. Compressed sensing MRI. IEEE Signal Process. Mag. 2008, 25, 72–82.
38. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778.
39. Ronneberger, O.; Fischer, P.; Brox, T. U-net: Convolutional networks for biomedical image segmentation. In Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention, Munich, Germany, 5–9 October 2015; Springer: Berlin/Heidelberg, Germany, 2015; pp. 234–241.
40. Ulyanov, D.; Vedaldi, A.; Lempitsky, V. Instance normalization: The missing ingredient for fast stylization. arXiv 2016, arXiv:1607.08022.
41. Wang, Z.; Bovik, A.C.; Sheikh, H.R.; Simoncelli, E.P. Image quality assessment: From error visibility to structural similarity. IEEE Trans. Image Process. 2004, 13, 600–612.
42. Kingma, D.P.; Ba, J. Adam: A method for stochastic optimization. arXiv 2014, arXiv:1412.6980.
43. Zbontar, J.; Knoll, F.; Sriram, A.; Murrell, T.; Huang, Z.; Muckley, M.J.; Defazio, A.; Stern, R.; Johnson, P.; Bruno, M.; et al. fastMRI: An open dataset and benchmarks for accelerated MRI. arXiv 2018, arXiv:1811.08839.
44. Knoll, F.; Zbontar, J.; Sriram, A.; Muckley, M.J.; Bruno, M.; Defazio, A.; Parente, M.; Geras, K.J.; Katsnelson, J.; Chandarana, H.; et al. fastMRI: A publicly available raw k-space and DICOM dataset of knee images for accelerated MR image reconstruction using machine learning. Radiol. Artif. Intell. 2020, 2, e190007.
Figure 1. The overall architecture of the multi-domain Neumann network with sensitivity maps. The CNN-based sensitivity map reconstruction block reconstructs coil sensitivity maps from multi-coil under-sampled k-space. The block marked with (I − λM) is a data consistency block. The regularization block R reconstructs an MR image. The outputs of each iteration are accumulated in skip connections to become the final output.
Figure 2. CNN-based regularization block. An MR image is reconstructed in parallel in the image domain and the k-space domain with two U-Nets and then added.
Figure 3. Examples of the NYU fastMRI brain dataset. (a) Artifact-free T1-weighted MR image; (b) T1-weighted MR image with ringing artifacts.
Figure 4. Examples of equi-spaced under-sampling masks with acceleration factors and ACS ratios of (a) (2, 10%), (b) (4, 8%), and (c) (8, 4%), respectively.
Figure 5. The top row shows fully sampled and reconstructed T1-weighted images using the zero-filled, GRAPPA, U-Net, Neumann network, and MDNNSM methods, with an acceleration factor of 2 and an ACS rate of 10%. The middle row shows detailed views of the red box area of the top row. The bottom row shows the difference between the reference and reconstructed images. The MDNNSM showed the lowest NMSE and the highest SSIM.
Figure 6. The top row shows fully sampled and reconstructed T1-weighted images using the zero-filled, GRAPPA, U-Net, Neumann network, and MDNNSM methods, with an acceleration factor of 4 and an ACS rate of 8%. The middle row shows detailed views of the red box area of the top row. The bottom row shows the difference between the reference and reconstructed images. The MDNNSM showed the lowest NMSE and the highest SSIM.
Figure 7. The top row shows fully sampled and reconstructed T1-weighted images using the zero-filled, GRAPPA, U-Net, Neumann network, and MDNNSM methods, with an acceleration factor of 8 and an ACS rate of 4%. The middle row shows detailed views of the red box area of the top row. The bottom row shows the difference between the reference and reconstructed images. The MDNNSM showed the lowest NMSE and the highest SSIM.
Figure 8. SSIM score according to the amount of training data at an acceleration factor of 8.
Table 1. Quantitative evaluation of reconstructed T1-weighted images with acceleration factors of 2, 4, and 8.
Model            | 2X NMSE | 2X SSIM | 4X NMSE | 4X SSIM | 8X NMSE | 8X SSIM
Zero-filled      | 0.0133  | 0.9084  | 0.0360  | 0.8079  | 0.0913  | 0.7003
GRAPPA           | 0.0049  | 0.9247  | 0.0584  | 0.6823  | 0.0939  | 0.5848
U-Net            | 0.0020  | 0.9696  | 0.0045  | 0.9501  | 0.0102  | 0.9259
Neumann network  | 0.0013  | 0.9737  | 0.0028  | 0.9579  | 0.0069  | 0.9362
MDNNSM           | 0.0012  | 0.9747  | 0.0023  | 0.9612  | 0.0051  | 0.9441
Table 2. Quantitative evaluation of reconstructed T1-weighted images depending on the sensitivity map estimation method.
Model                | 2X NMSE | 2X SSIM | 4X NMSE | 4X SSIM | 8X NMSE | 8X SSIM
MDNNSM with ESPIRiT  | 0.0014  | 0.9711  | 0.0030  | 0.9514  | 0.0068  | 0.9268
MDNNSM with CNN_C    | 0.0012  | 0.9747  | 0.0023  | 0.9612  | 0.0051  | 0.9441
Table 3. Quantitative evaluation of reconstructed T1-weighted images depending on the accumulating domain.
Model                      | 2X NMSE | 2X SSIM | 4X NMSE | 4X SSIM | 8X NMSE | 8X SSIM
MDNNSM, image-domain sum   | 0.0014  | 0.9710  | 0.0024  | 0.9599  | 0.0050  | 0.9439
MDNNSM, k-space-domain sum | 0.0012  | 0.9747  | 0.0023  | 0.9612  | 0.0051  | 0.9441
Table 4. Quantitative evaluation of reconstructed T1-weighted images depending on the shared parameters.
Model                            | 2X NMSE | 2X SSIM | 4X NMSE | 4X SSIM | 8X NMSE | 8X SSIM
MDNNSM with parameter sharing    | 0.0013  | 0.9745  | 0.0025  | 0.9600  | 0.0060  | 0.9404
MDNNSM without parameter sharing | 0.0012  | 0.9747  | 0.0023  | 0.9612  | 0.0051  | 0.9441
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
