Single-Input Multi-Output U-Net for Automated 2D Foetal Brain Segmentation of MR Images
Abstract
1. Introduction
- (1) The U-Net architecture combines both location and contextual information, but it does not combine information collectively (e.g., by making a prediction from several different prediction maps); it relies on a single prediction map derived from the previous layers. In machine learning, many studies [6,7,8] have shown that ensemble learning reduces the variance of predictions and the generalisation error, often producing better results (see the sketch after this list).
- (2) By contrast, the HED network does combine local and global information collectively (e.g., features from different side outputs are fused), but it does not store spatial and contextual information together, which is essential in semantic segmentation. As a result, in our experience, the HED network is less accurate in cases where the boundary of the foetal brain is less visible.
- (3) The majority of ensemble-based networks in the literature either (i) are multi-stream architectures, where several independent networks are placed into a single architecture to produce a single prediction map, or (ii) involve the separate training of several independent networks to produce different prediction maps that are averaged to produce a final prediction. SIMOU-Net bypasses the need to train several different networks independently in order to combine their predictions.
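As a minimal illustration of the ensemble principle invoked above, the sketch below averages several per-pixel probability maps before thresholding. It is our own sketch, not code from the paper; the function name, array shapes and the 0.5 threshold are illustrative assumptions.

```python
import numpy as np

def fuse_prediction_maps(prediction_maps, threshold=0.5):
    """Average several per-pixel probability maps and threshold the result.

    prediction_maps: list of 2D float arrays in [0, 1], one per side output
    (or per independently trained network). Averaging in probability space
    reduces the variance of the individual predictions, as in classical
    ensemble learning.
    """
    stacked = np.stack(prediction_maps, axis=0)   # shape: (n_maps, H, W)
    mean_map = stacked.mean(axis=0)               # per-pixel average
    return (mean_map >= threshold).astype(np.uint8)

# Illustrative usage with three random "prediction maps".
rng = np.random.default_rng(0)
maps = [rng.random((256, 256)) for _ in range(3)]
mask = fuse_prediction_maps(maps)
```

Averaging before thresholding, rather than voting on already-binarised masks, is what allows the variance reduction described above to act at the pixel level.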
- (1) We propose SIMOU-Net, an ensemble deep U-Net inspired by the U-Net and HED architectures, which collectively combines multi-scale, multi-level local and global information together with spatial information (a toy sketch of this idea follows this list).
- (2) To the best of our knowledge, this is the largest cross-validated foetal brain segmentation study in the literature, covering 254 cases (normal and abnormal) with over 15,000 images in total.
- (3) To further evaluate the robustness and generalisability of the proposed method, we train our network on normal cases and test it on abnormal cases, which can be extremely challenging due to differences in shape/geometry. To the best of our knowledge, this is the first study in the literature to perform this kind of evaluation.
- (4) Most foetal brain segmentation studies in the literature evaluate their results on a 2D slice-by-slice basis. From a neuro-radiological point of view, however, volumetric information is one of the key elements in abnormality detection. We therefore further evaluate our method by comparing the foetal brain volumes generated automatically with those obtained manually, for both normal and abnormal cases (a volume-computation sketch also follows this list).
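To make the single-input multi-output idea in contribution (1) concrete, the following is a toy Keras sketch of an encoder-decoder that emits HED-style side outputs at several decoder scales plus a fused (averaged) output. It is a minimal illustration under our own assumptions about framework, depths and layer widths, not the published SIMOU-Net architecture.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

def tiny_simou_like(input_shape=(256, 256, 1)):
    """Toy single-input multi-output encoder-decoder (not the published SIMOU-Net).

    Each decoder level emits a sigmoid side output at full resolution; the side
    outputs are also fused by averaging into an extra output, so local and global
    information is combined collectively, in the spirit of HED-style deep
    supervision applied to a U-Net decoder.
    """
    inp = layers.Input(shape=input_shape)

    # Encoder: two downsampling stages.
    e1 = layers.Conv2D(16, 3, padding="same", activation="relu")(inp)
    p1 = layers.MaxPooling2D(2)(e1)
    e2 = layers.Conv2D(32, 3, padding="same", activation="relu")(p1)
    p2 = layers.MaxPooling2D(2)(e2)

    b = layers.Conv2D(64, 3, padding="same", activation="relu")(p2)

    # Decoder with skip connections, preserving spatial information.
    d2 = layers.Concatenate()([layers.UpSampling2D(2)(b), e2])
    d2 = layers.Conv2D(32, 3, padding="same", activation="relu")(d2)
    d1 = layers.Concatenate()([layers.UpSampling2D(2)(d2), e1])
    d1 = layers.Conv2D(16, 3, padding="same", activation="relu")(d1)

    # Side outputs: one per decoder scale, upsampled to input resolution.
    side2 = layers.UpSampling2D(2)(layers.Conv2D(1, 1, activation="sigmoid")(d2))
    side1 = layers.Conv2D(1, 1, activation="sigmoid")(d1)

    # Fused output: average of the side outputs (the ensemble-style prediction).
    fused = layers.Average()([side1, side2])

    return Model(inp, [side1, side2, fused])

model = tiny_simou_like()
model.compile(optimizer="adam", loss="binary_crossentropy")
```

Supervising every side output as well as the fused map is what removes the need to train several networks independently: one forward pass yields an ensemble of predictions.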
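Similarly, for the volume comparison in contribution (4), a brain volume can be derived from a stack of 2D segmentation masks given the acquisition geometry. The sketch below assumes a known in-plane pixel spacing and slice thickness; the values in the usage example are illustrative, not taken from the study protocol.

```python
import numpy as np

def brain_volume_cm3(masks, pixel_spacing_mm, slice_thickness_mm):
    """Estimate brain volume from a stack of 2D binary masks.

    masks: array of shape (n_slices, H, W) with 1 inside the brain.
    Each segmented pixel contributes pixel_area * slice_thickness of volume.
    """
    pixel_area_mm2 = pixel_spacing_mm[0] * pixel_spacing_mm[1]
    voxel_volume_mm3 = pixel_area_mm2 * slice_thickness_mm
    return masks.sum() * voxel_volume_mm3 / 1000.0  # mm^3 -> cm^3

# Illustrative usage: 30 slices, 1.0 x 1.0 mm pixels, 4 mm slice thickness.
masks = np.zeros((30, 256, 256), dtype=np.uint8)
masks[10:20, 100:150, 100:150] = 1
print(f"{brain_volume_cm3(masks, (1.0, 1.0), 4.0):.1f} cm^3")
```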
2. Related Work
- (1) Template-based methods rely heavily on the alignment of the query images to atlases, or make strong assumptions about orientation and geometry [18]. As a result, when the foetal brain is structurally abnormal, this approach often produces inaccurate results due to differences in geometry/shape.
- (2) Most of the methods developed require initial detection of the eyes or head to aid localisation of the brain. However, in many images the eyes are not visible, and the skull can be obscured, particularly towards the bottom and top of the brain. Furthermore, since this initial step feeds all subsequent steps, it is essential for it to achieve 100% localisation accuracy, which is often difficult, or time consuming in the case of manual region placement.
- (3) The processing steps that generate initial candidates typically produce many false positives. Therefore, subsequent steps require a robust post-processing method for false-positive reduction.
- (4) The 2D level-set and region-growing methods often perform poorly when the brain boundary is unclear or obscured, a frequent occurrence in foetal MRI, especially in the bottom and top slices. While these methods may work well on the central slices, in some cases the brain and skull boundaries overlap, resulting in over-segmentation. Furthermore, these methods generally do not work well on low-contrast images.
3. Materials and Methods
3.1. Network Architecture
3.2. Training, Validation and Testing
3.3. Data Description, Pre-Processing and Experimental Setup
4. Results and Discussion
4.1. Normal Cases
4.2. Abnormal Cases
4.3. Foetal Brain Volume Estimation
4.4. Visual Comparison with Other Network Architectures
4.5. Segmentation of Challenging Cases
4.6. Ensemble Learning on the Original U-Net and Att-U-Net
4.7. Inter-Observer Evaluation
4.8. Advantages, Disadvantages and Limitations
5. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
- Griffiths, P.D.; Bradburn, M.; Campbell, M.; Cooper, C.L.; Graham, R.; Jarvis, D.; Kilby, M.D.; Mason, G.; Mooney, C.; Robson, S.C.; et al. Use of MRI in the diagnosis of fetal brain abnormalities in utero (MERIDIAN): A multicentre, prospective cohort study. Lancet 2017, 389, 538–546.
- Griffiths, P.D.; Jarvis, D.; McQuillan, H.; Williams, F.; Paley, M.; Armitage, P. MRI of the foetal brain using a rapid 3D steady-state sequence. Br. J. Radiol. 2013, 86, 20130168.
- Makropoulos, A.; Counsell, S.J.; Rueckert, D. A review on automatic fetal and neonatal brain MRI segmentation. NeuroImage 2018, 170, 231–248.
- Ronneberger, O.; Fischer, P.; Brox, T. U-Net: Convolutional Networks for Biomedical Image Segmentation. In Medical Image Computing and Computer-Assisted Intervention (MICCAI); Springer: Munich, Germany, 2015; Volume 9351, pp. 234–241.
- Xie, S.; Tu, Z. Holistically-nested edge detection. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), Santiago, Chile, 7–13 December 2015; pp. 1395–1403.
- Rampun, A.; Zheng, L.; Malcolm, P.; Tiddeman, B.; Zwiggelaar, R. Computer-aided detection of prostate cancer in T2-weighted MRI within the peripheral zone. Phys. Med. Biol. 2016, 61, 4796–4825.
- Avnimelech, R.; Intrator, N. Boosted Mixture of Experts: An Ensemble Learning Scheme. Neural Comput. 1999, 11, 483–497.
- Rokach, L. Taxonomy for characterizing ensemble methods in classification tasks: A review and annotated bibliography. Comput. Stat. Data Anal. 2009, 53, 4046–4072.
- Anquez, J.; Angelini, E.D.; Bloch, I. Automatic segmentation of head structures on fetal MRI. In Proceedings of the IEEE International Symposium on Biomedical Imaging: From Nano to Macro, Boston, MA, USA, 28 June–1 July 2009; pp. 109–112.
- Taleb, Y.; Schweitzer, M.; Studholme, C.; Koob, M.; Dietemann, J.L.; Rousseau, F. Automatic Template-Based Brain Extraction in Fetal MR Images; HAL: Lyon, France, 2013.
- Alansary, A.; Lee, M.; Keraudren, K.; Kainz, B.; Malamateniou, C.; Rutherford, M.; Hajnal, J.V.; Glocker, B.; Rueckert, D. Automatic Brain Localization in Fetal MRI Using Superpixel Graphs. In Proceedings of the Machine Learning Meets Medical Imaging, Lille, France, 11 July 2015; pp. 13–22.
- Achanta, R.; Shaji, A.; Smith, K.; Lucchi, A.; Fua, P.; Süsstrunk, S. SLIC superpixels compared to state-of-the-art superpixel methods. IEEE Trans. Pattern Anal. Mach. Intell. 2012, 34, 2274–2281.
- Link, D.; Braginsky, M.B.; Joskowicz, L.; Sira, L.B.; Harel, S.; Many, A.; Tarrasch, R.; Malinger, G.; Artzi, M.; Kapoor, C.; et al. Automatic Measurement of Fetal Brain Development from Magnetic Resonance Imaging: New Reference Data. Fetal Diagn. Ther. 2018, 43, 113–122.
- Attallah, O.; Sharkas, M.A.; Gadelkarim, H. Fetal brain abnormality classification from MRI images of different gestational age. Brain Sci. 2019, 9, 231.
- Ison, M.; Dittrich, E.; Donner, R.; Kasprian, G.; Prayer, D.; Langs, G. Fully automated brain extraction and orientation in raw fetal MRI. In Proceedings of the MICCAI Workshop on Paediatric and Perinatal Imaging 2012 (PaPI 2012), Nice, France, 1 October 2012; pp. 17–24.
- Keraudren, K.; Kuklisova-Murgasova, M.; Kyriakopoulou, V.; Malamateniou, C.; Rutherford, M.A.; Kainz, B.; Hajnal, J.V.; Rueckert, D. Automated fetal brain segmentation from 2D MRI slices for motion correction. NeuroImage 2014, 101, 633–643.
- Kainz, B.; Keraudren, K.; Kyriakopoulou, V.; Rutherford, M.; Hajnal, J.V.; Rueckert, D. Fast fully automatic brain detection in fetal MRI using dense rotation invariant image descriptors. In Proceedings of the IEEE 11th International Symposium on Biomedical Imaging (ISBI), Beijing, China, 29 April–2 May 2014; pp. 1230–1233.
- Salehi, S.S.M.; Hashemi, S.R.; Velasco-Annis, C.; Ouaalam, A.; Estroff, J.A.; Erdogmus, D.; Warfield, S.K.; Gholipour, A. Real-time automatic fetal brain extraction in fetal MRI by deep learning. In Proceedings of the IEEE 15th International Symposium on Biomedical Imaging (ISBI), Washington, DC, USA, 4–7 April 2018; pp. 720–724.
- Rampun, A.; López-Linares, K.; Morrow, P.J.; Scotney, B.W.; Wang, H.; Ocaña, I.G.; Maclair, G.; Zwiggelaar, R.; Ballester, M.A.G.; Macía, I. Breast pectoral muscle segmentation in mammograms using a modified holistically-nested edge detection network. Med. Image Anal. 2019, 57, 1–17.
- Hamidinekoo, A.; Denton, E.; Rampun, A.; Honnor, K.; Zwiggelaar, R. Deep Learning in Mammography and Breast Histology, an Overview and Future Trends. Med. Image Anal. 2018, 47, 45–67.
- Litjens, G.; Kooi, T.; Bejnordi, B.E.; Setio, A.A.A.; Ciompi, F.; Ghafoorian, M.; van der Laak, J.A.W.M.; van Ginneken, B.; Sánchez, C.I. A survey on deep learning in medical image analysis. Med. Image Anal. 2017, 42, 60–88.
- Mohseni Salehi, S.S.; Erdogmus, D.; Gholipour, A. Auto-Context Convolutional Neural Network (Auto-Net) for Brain Extraction in Magnetic Resonance Imaging. IEEE Trans. Med. Imaging 2017, 36, 2319–2330.
- Rajchl, M.; Lee, M.C.H.; Oktay, O.; Kamnitsas, K.; Passerat-Palmbach, J.; Bai, W.; Damodaram, M.; Rutherford, M.; Hajnal, J.; Kainz, B.; et al. DeepCut: Object Segmentation From Bounding Box Annotations Using Convolutional Neural Networks. IEEE Trans. Med. Imaging 2017, 36, 674–683.
- Rother, C.; Kolmogorov, V.; Blake, A. GrabCut: Interactive foreground extraction using iterated graph cuts. ACM Trans. Graph. 2004, 23, 309–314.
- Khalili, N.; Lessmann, N.; Turk, E.; Claessens, N.; de Heus, R.; Kolk, T.; Viergever, M.A.; Benders, M.J.N.L.; Isgum, I. Automatic brain tissue segmentation in fetal MRI using convolutional neural networks. Magn. Reson. Imaging 2019, 64, 77–89.
- Ebner, M.; Wang, G.; Li, W.; Aertsen, M.; Patel, P.A.; Aughwane, R.; Melbourne, A.; Doel, T.; David, A.L.; Deprest, J.; et al. An Automated Localization, Segmentation and Reconstruction Framework for Fetal Brain MRI. In Medical Image Computing and Computer Assisted Intervention (MICCAI); Springer: Granada, Spain, 2018; Volume 11070, pp. 313–320.
- Lou, J.; Li, D.; Bui, T.D.; Zhao, F.; Sun, L.; Li, G.; Shen, D. Automatic Fetal Brain Extraction Using Multi-stage U-Net with Deep Supervision. Mach. Learn. Med. Imaging 2019, 592–600.
- Dou, Q.; Chen, H.; Jin, Y.; Yu, L.; Qin, J.; Heng, P.-A. 3D Deeply Supervised Network for Automatic Liver Segmentation from CT Volumes. In Medical Image Computing and Computer-Assisted Intervention (MICCAI); Springer: Athens, Greece, 2016; Volume 9901, pp. 149–157.
- Zhou, Z.; Siddiquee, M.M.R.; Tajbakhsh, N.; Liang, J. UNet++: A nested U-Net architecture for medical image segmentation. In Deep Learning in Medical Image Analysis and Multimodal Learning for Clinical Decision Support; Springer: Berlin/Heidelberg, Germany, 2018; pp. 3–11.
- Hinton, G. Neural Networks for Machine Learning, Lecture 6a: Overview of Mini-Batch Gradient Descent. Available online: https://www.cs.toronto.edu/~tijmen/csc321/slides/lecture_slides_lec6.pdf (accessed on 4 July 2019).
- Hinton, G.E. Learning to represent visual input. Philos. Trans. R. Soc. B Biol. Sci. 2010, 365, 177–184.
- Jarvis, D.A.; Finney, C.R.; Griffiths, P.D. Normative volume measurements of the fetal intra-cranial compartments using 3D volume in utero MR imaging. Eur. Radiol. 2019, 29, 3488–3495.
- Badrinarayanan, V.; Kendall, A.; Cipolla, R. SegNet: A Deep Convolutional Encoder-Decoder Architecture for Image Segmentation. IEEE Trans. Pattern Anal. Mach. Intell. 2017, 39, 2481–2495.
- Schlemper, J.; Oktay, O.; Schaap, M.; Heinrich, M.; Kainz, B.; Glocker, B.; Rueckert, D. Attention gated networks: Learning to leverage salient regions in medical images. Med. Image Anal. 2019, 53, 197–207.
Segmentation performance for the normal cases (metric definitions are sketched after the tables):

| Network | Jaccard (J, %) | Dice (D, %) | Sensitivity (Sen, %) | MAVD (cm³) | HD (mm) |
|---|---|---|---|---|---|
| **SIMOU-Net** | | | | | |
| P{d,ux} | 86.9 ± 7.8 | 93.8 ± 6.2 | 95.9 ± 6.5 * | 4.5 ± 2.9 | 7.7 ± 3.7 |
| P{d,u} | 86.8 ± 6.9 | 93.6 ± 6.3 | 95.7 ± 6.5 * | 4.5 ± 2.9 | 7.7 ± 3.7 |
| P{d} | 84.3 ± 8.2 * | 88.1 ± 7.3 * | 92.1 ± 7.9 * | 5.1 ± 3.5 | 7.9 ± 3.9 |
| P{u} | 86.9 ± 7.8 | 93.0 ± 6.2 | 95.9 ± 6.5 * | 4.5 ± 2.9 | 7.8 ± 3.7 |
| P{f} | 86.8 ± 7.4 | 93.2 ± 6.2 | 95.8 ± 6.4 * | 4.5 ± 2.9 | 7.8 ± 3.7 |
| PMAX | 86.1 ± 7.9 * | 91.2 ± 6.6 * | 95.8 ± 6.2 * | 4.7 ± 3.1 | 8.1 ± 4.1 |
| PMED | 87.2 ± 6.9 | 93.3 ± 6.1 | 95.9 ± 6.2 * | 4.4 ± 2.9 | 7.6 ± 3.7 |
| PAVG | 88.9 ± 6.9 | 94.2 ± 5.9 | 97.5 ± 6.2 | 4.2 ± 2.7 | 7.5 ± 3.6 |
| U-Net | 79.2 ± 8.1 * | 86.8 ± 7.4 * | 92.4 ± 7.9 * | 6.3 ± 5.7 | 8.6 ± 4.8 |
| wU-Net | 74.9 ± 10.6 * | 83.5 ± 9.3 * | 92.8 ± 7.6 * | 7.6 ± 6.4 * | 8.9 ± 4.8 |
| U-Net+++ | 81.3 ± 7.5 * | 90.6 ± 6.8 * | 95.9 ± 6.4 * | 5.9 ± 5.6 | 8.1 ± 4.2 |
| U-Net++ | 80.9 ± 7.6 * | 89.7 ± 6.6 * | 95.7 ± 8.3 * | 6.1 ± 5.5 | 8.4 ± 4.3 |
| HED | 76.9 ± 10.9 * | 85.6 ± 9.9 * | 84.7 ± 9.9 * | 7.2 ± 6.4 * | 10.1 ± 5.6 * |
| SegNet | 81.9 ± 9.2 * | 87.9 ± 9.6 * | 90.7 ± 7.6 * | 5.5 ± 5.5 | 8.4 ± 4.7 |
| Att-U-Net | 83.5 ± 7.8 * | 90.9 ± 6.3 * | 95.1 ± 6.4 * | 5.1 ± 3.5 | 8.1 ± 4.0 |
| P-Net(S) | 85.4 ± 7.9 * | 90.1 ± 7.1 * | 94.2 ± 8.5 * | 5.1 ± 3.5 | 8.2 ± 4.3 |
| DS-U-Net | 86.9 ± 7.8 | 92.2 ± 6.5 | 96.1 ± 8.4 * | 4.5 ± 2.9 | 7.6 ± 3.6 |
| U-Net2 | 80.3 ± 7.9 * | 88.1 ± 6.7 * | 92.9 ± 9.1 * | 6.1 ± 5.5 | 8.4 ± 4.3 |
Segmentation performance for the abnormal cases:

| Network | Jaccard (J, %) | Dice (D, %) | Sensitivity (Sen, %) | MAVD (cm³) | HD (mm) |
|---|---|---|---|---|---|
| **SIMOU-Net** | | | | | |
| P{d,ux} | 83.5 ± 6.8 | 89.2 ± 6.4 | 92.3 ± 5.9 | 7.2 ± 12.1 | 8.7 ± 12.4 |
| P{d,u} | 83.6 ± 6.8 | 89.4 ± 6.4 | 92.4 ± 5.9 | 7.2 ± 12.1 | 8.7 ± 12.4 |
| P{d} | 75.1 ± 6.9 * | 80.1 ± 6.2 * | 85.3 ± 5.5 * | 7.9 ± 12.7 | 8.9 ± 12.6 |
| P{u} | 81.1 ± 6.4 * | 81.2 ± 6.0 * | 88.3 ± 5.3 * | 7.5 ± 12.3 | 8.8 ± 12.5 |
| P{f} | 83.4 ± 6.1 | 89.1 ± 6.4 | 92.2 ± 5.9 | 7.2 ± 12.1 | 8.7 ± 12.4 |
| PMAX | 77.1 ± 6.3 * | 84.2 ± 6.6 * | 89.2 ± 6.2 | 7.7 ± 12.5 | 8.8 ± 12.6 |
| PMED | 83.7 ± 6.6 | 89.3 ± 6.4 | 92.4 ± 7.1 | 7.2 ± 12.1 | 8.7 ± 12.4 |
| PAVG | 85.7 ± 6.6 | 91.2 ± 6.8 | 93.2 ± 6.2 | 7.1 ± 12.1 | 8.5 ± 12.4 |
| U-Net | 76.2 ± 8.2 * | 82.8 ± 7.2 * | 84.4 ± 7.1 * | 7.8 ± 12.5 | 9.0 ± 12.9 |
| wU-Net | 71.1 ± 10.1 * | 79.6 ± 9.3 * | 80.8 ± 8.6 * | 8.1 ± 12.7 | 9.3 ± 13.2 |
| U-Net+++ | 76.3 ± 8.5 * | 82.9 ± 7.2 * | 84.7 ± 7.2 * | 7.7 ± 12.5 | 8.8 ± 12.6 |
| U-Net++ | 76.9 ± 8.4 * | 83.2 ± 6.9 * | 85.7 ± 7.1 * | 7.7 ± 12.5 | 10.1 ± 12.8 |
| HED | 70.1 ± 12.7 * | 75.2 ± 10.2 * | 79.6 ± 9.9 * | 8.2 ± 12.9 | 11.5 ± 13.9 |
| SegNet | 78.2 ± 10.6 * | 84.4 ± 9.1 * | 87.9 ± 9.6 | 7.7 ± 12.5 | 8.9 ± 12.7 |
| Att-U-Net | 79.5 ± 7.8 * | 85.9 ± 7.3 | 87.1 ± 7.1 * | 7.6 ± 12.3 | 8.8 ± 12.7 |
| P-Net(S) | 83.9 ± 6.9 | 89.7 ± 6.5 | 91.3 ± 6.5 | 7.2 ± 12.1 | 8.8 ± 12.4 |
| DS-U-Net | 85.5 ± 6.6 | 90.9 ± 6.5 | 92.9 ± 6.4 | 7.1 ± 12.1 | 8.5 ± 12.6 |
| U-Net2 | 78.5 ± 7.9 * | 84.1 ± 6.9 * | 85.9 ± 7.6 * | 7.7 ± 12.5 | 9.0 ± 12.7 |
Ensemble learning applied to the original U-Net and Att-U-Net:

| Network | Jaccard (J, %) | Dice (D, %) | Sensitivity (Sen, %) | MAVD (cm³) | HD (mm) |
|---|---|---|---|---|---|
| **U-Net** | | | | | |
| P{d,ux} | 80.2 ± 7.9 * | 87.2 ± 7.1 | 92.2 ± 5.3 | 6.3 ± 5.7 | 8.6 ± 4.9 |
| P{d,u} | 80.5 ± 7.5 | 88.9 ± 7.1 | 92.9 ± 5.3 | 6.2 ± 5.7 | 8.5 ± 4.7 |
| P{d} | 70.2 ± 9.4 * | 76.2 ± 9.0 * | 79.6 ± 7.9 * | 6.7 ± 5.9 | 9.5 ± 5.2 |
| P{u} | 79.0 ± 8.1 * | 86.1 ± 7.6 * | 91.1 ± 6.6 | 6.3 ± 5.7 | 8.6 ± 4.9 |
| P{f} | 79.2 ± 8.1 * | 86.8 ± 7.4 | 92.4 ± 6.4 | 6.3 ± 5.7 | 8.6 ± 4.8 |
| PMAX | 73.1 ± 9.3 * | 79.1 ± 7.9 * | 81.1 ± 7.2 | 6.5 ± 5.8 | 8.8 ± 5.1 |
| PMED | 82.7 ± 7.6 | 88.5 ± 6.4 | 92.8 ± 5.8 | 6.0 ± 5.7 | 7.8 ± 4.8 |
| PAVG | 83.2 ± 7.6 | 89.2 ± 6.2 | 93.7 ± 5.7 | 6.0 ± 5.2 | 7.8 ± 4.5 |
| **Att-U-Net** | | | | | |
| P{d,ux} | 84.9 ± 6.9 | 90.4 ± 6.2 * | 94.1 ± 5.3 | 5.6 ± 3.5 | 8.1 ± 4.2 |
| P{d,u} | 84.5 ± 6.8 | 90.2 ± 6.1 * | 93.8 ± 5.4 | 5.6 ± 3.5 | 8.1 ± 4.2 |
| P{d} | 74.3 ± 8.8 * | 79.9 ± 8.7 * | 82.6 ± 7.1 | 6.3 ± 4.2 | 9.4 ± 5.2 |
| P{u} | 81.1 ± 7.9 * | 86.8 ± 7.3 * | 93.3 ± 6.3 | 6.1 ± 3.8 | 9.2 ± 4.9 |
| P{f} | 83.5 ± 7.8 | 90.9 ± 6.3 | 95.1 ± 6.4 | 5.5 ± 3.5 | 8.1 ± 4.0 |
| PMAX | 73.9 ± 9.3 * | 80.6 ± 7.6 * | 84.2 ± 7.0 | 6.5 ± 4.1 | 10.1 ± 6.1 |
| PMED | 84.3 ± 7.2 | 90.2 ± 6.1 * | 94.4 ± 5.3 | 5.6 ± 3.5 | 8.3 ± 4.4 |
| PAVG | 85.6 ± 7.2 | 92.5 ± 6.6 | 95.2 ± 5.2 | 4.5 ± 2.9 | 7.8 ± 4.3 |
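For reference, the overlap metrics tabulated above follow their standard definitions; the sketch below is a minimal implementation of those formulas under our own naming assumptions. MAVD (an absolute volume difference, in cm³) and HD (the Hausdorff distance, in mm) are omitted for brevity.

```python
import numpy as np

def overlap_metrics(pred, gt, eps=1e-8):
    """Jaccard (J), Dice (D) and sensitivity (Sen), in percent,
    for a pair of binary masks, matching the table column headers."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    tp = np.logical_and(pred, gt).sum()      # true positives
    fp = np.logical_and(pred, ~gt).sum()     # false positives
    fn = np.logical_and(~pred, gt).sum()     # false negatives
    j = tp / (tp + fp + fn + eps)            # Jaccard index
    d = 2 * tp / (2 * tp + fp + fn + eps)    # Dice coefficient
    sen = tp / (tp + fn + eps)               # sensitivity (recall)
    return 100 * j, 100 * d, 100 * sen
```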