ResBCDU-Net: A Deep Learning Framework for Lung CT Image Segmentation
Abstract
1. Introduction
- Applying novel, extensive preprocessing techniques to improve the quality of the raw images.
- Proposing a new method for extracting ground truths corresponding to the input images.
- Employing a new deep learning-based algorithm for proper segmentation of lungs.
2. Related Works
2.1. Threshold-Based Methods
2.2. Edge-Detection Methods
2.3. Region Growing Methods
2.4. Deformable Boundary Models
2.5. Learning-Based Models
3. Proposed Method
3.1. DICOM Image Reading
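As a small illustration of this step, raw DICOM pixel values can be mapped to Hounsfield units (HU) using the `RescaleSlope` and `RescaleIntercept` tags. The helper below is a sketch with typical CT defaults (slope 1, intercept −1024), not the exact routine used in the paper; reading the files themselves would additionally require a DICOM library such as pydicom.

```python
import numpy as np

def to_hounsfield(raw_pixels, slope=1.0, intercept=-1024.0):
    """Convert raw DICOM pixel values to Hounsfield units via the
    RescaleSlope / RescaleIntercept tags (typical CT defaults assumed)."""
    hu = raw_pixels.astype(np.float64) * slope + intercept
    # Out-of-scan padding often ends up far below air after conversion;
    # clamp everything below -1000 HU (air) to -1000.
    hu[hu < -1000] = -1000
    return hu
```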
3.2. Ground Truth (GT) Extraction
2. Removing the blobs connected to the CT image border: To classify the images correctly, the regions connected to the image border are removed, as shown in Figure 4c.
3. Labelling the image: Pixel neighbourhoods with the same intensity level can be considered a connected region. When this process is applied to the entire image, several connected regions are formed. Figure 4a shows the labelled connected regions of the integer array of the images.
4. Keeping the two labels with the largest areas: As shown in Figure 5b, the labels with the two largest areas (both lungs) are kept, whereas tissues with areas smaller than the expected lungs are removed.
5. Applying an erosion operation (with a disk of radius 2): This operation is applied to the image at this step to separate the pulmonary nodules attached to the lung wall from the blood vessels. The erosion operator shrinks the bright areas of the image and enlarges the dark areas, as shown in Figure 6a.
6. Applying a closing operation (with a disk of radius 10) [15]: The aim of this operator is to keep the nodules connected to the lung wall. It removes small dark spots from the image and bridges small bright gaps. The image obtained by applying this operator is shown in Figure 6b.
7. Filling in the small holes within the binary mask: In some cases, due to imperfections in the thresholding-based binary conversion, series of black pixels belonging to the background appear inside the binary image. These areas, known as holes, may contain useful lung tissue; therefore, we recover them by filling the holes, as shown in Figure 6c.
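The steps above can be sketched with standard morphological tools. The following is a minimal reconstruction with `scipy.ndimage`, assuming a binary slice as input; helper names (`clear_border`, `extract_lung_mask`) and the square-grid disk approximation are our own illustration, not the authors' code.

```python
import numpy as np
from scipy import ndimage as ndi

def disk(radius):
    """Disk-shaped structuring element of the given radius."""
    y, x = np.ogrid[-radius:radius + 1, -radius:radius + 1]
    return x**2 + y**2 <= radius**2

def clear_border(mask):
    """Step 2: remove connected components touching the image border."""
    labels, _ = ndi.label(mask)
    border_labels = np.unique(np.concatenate([
        labels[0, :], labels[-1, :], labels[:, 0], labels[:, -1]]))
    return np.isin(labels, border_labels, invert=True) & mask

def extract_lung_mask(binary):
    """Steps 2-7 of the ground-truth extraction pipeline (sketch)."""
    mask = clear_border(binary)                            # step 2
    labels, n = ndi.label(mask)                            # step 3
    if n > 2:                                              # step 4: keep two largest areas
        areas = ndi.sum(mask, labels, index=np.arange(1, n + 1))
        keep = np.argsort(areas)[-2:] + 1
        mask = np.isin(labels, keep)
    mask = ndi.binary_erosion(mask, structure=disk(2))     # step 5
    mask = ndi.binary_closing(mask, structure=disk(10))    # step 6
    mask = ndi.binary_fill_holes(mask)                     # step 7
    return mask
```

On a synthetic slice with two large "lung" blobs, a small noise blob, and a blob touching the border, the function keeps only the two lungs.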
3.3. Data Preparation
a) Image binarization: In this process, a binary image with two grey levels, i.e., black and white, is created. The lung region appears black, with the value zero. Figure 8 shows the binarization process of a CT image.
b) Dilation morphological operation: Morphological operations, typically applied to binary images, are used to extract and describe the geometry of objects in the image [49,50]. After the binarization process described above, some white regions may remain around the lungs as unwanted noise; morphological operations can remove these regions. Moreover, small black holes may remain inside the lung region, caused by the binarization process; these holes should also be removed using morphological operations.
c) Edge detection: As already stated, an edge-detection filter determines the vertices of an object and the boundaries between objects and the background in the image. This process can also be used to enhance the image and reduce blur. An important advantage of the Canny technique is that it suppresses image noise before extracting edges and then applies hysteresis thresholding to select the true edges. Motivated by these advantages, we applied the Canny method to detect the edges in the source images. Figure 10 shows the result of the edge detection process.
3.4. Lung Segmentation Using Deep Learning
- Encoding path: In ResBCDU-Net, the encoder is replaced with a pre-trained ResNet-34 network. As in BCDU-Net, the last layer of this path adopts a densely connected convolutions mechanism. Hence, unlike the residual blocks in this path, which combine features through summation before passing them on, the last layer concatenates the features. In other words, the features learned in each block are passed on to the next block. This strategy helps the network avoid learning redundant features. Figure 13 shows the difference between residual blocks and dense blocks.
- Decoding path: In the decoding path, two sets of feature maps are concatenated: the feature maps from the corresponding layer of the encoding path and those from the previous up-sampling layer. In this network, batch normalization is applied to the output of each up-sampling step, before the two sets of feature maps are processed. The resulting output is then given to a BConvLSTM layer. A standard ConvLSTM processes only forward dependencies; however, it is important not to lose the information concealed in either direction of a sequence, and analysing both the forward and backward passes has been shown to improve predictive network performance [54]. BConvLSTM therefore runs two standard ConvLSTMs, one forward and one backward, with two separate sets of parameters. This layer can decide on the present input by examining the data dependencies in both directions. Figure 14 illustrates our proposed network schematically.
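The two design points above, summation vs. concatenation in the encoder and bidirectional scanning in the decoder, can be illustrated with shape-level stubs. These are conceptual sketches only: `conv_stub` and `lstm_step_stub` stand in for real convolutional and ConvLSTM layers (a true ConvLSTM cell also carries gates and a cell state), and the names are our own.

```python
import numpy as np

def conv_stub(x):
    """Stand-in for a convolutional layer: any shape-preserving map."""
    return np.tanh(x)

# --- Encoding path: residual vs. dense feature combination ---

def residual_block(x):
    """Residual connection: features are combined by summation,
    so the channel count stays the same."""
    return conv_stub(x) + x

def dense_block(x):
    """Dense connection: features are concatenated along the channel
    axis, so learned features are passed on rather than merged."""
    return np.concatenate([x, conv_stub(x)], axis=-1)

# --- Decoding path: bidirectional sequence processing ---

def lstm_step_stub(h, x):
    """Stand-in for one ConvLSTM step applied to hidden state h and input x."""
    return np.tanh(h + x)

def bidirectional_scan(seq):
    """Scan a sequence in both directions with two separate state chains
    and concatenate the per-step states, mirroring BConvLSTM (sketch)."""
    h_f = np.zeros_like(seq[0])
    forward = []
    for x in seq:                      # forward dependencies
        h_f = lstm_step_stub(h_f, x)
        forward.append(h_f)
    h_b = np.zeros_like(seq[0])
    backward = []
    for x in reversed(seq):            # backward dependencies
        h_b = lstm_step_stub(h_b, x)
        backward.append(h_b)
    backward.reverse()
    # one output per time step, with both directions concatenated
    return [np.concatenate([f, b], axis=-1)
            for f, b in zip(forward, backward)]
```

Note how the dense block doubles the channel count while the residual block preserves it, and how the bidirectional output carries twice the channels of a single-direction scan.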
4. Experimental Results
4.1. Evaluation Metrics
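The metrics reported in the tables below follow the standard pixel-wise definitions. A minimal sketch (our own helper, assuming binary prediction and ground-truth masks):

```python
import numpy as np

def segmentation_metrics(pred, gt):
    """Pixel-wise precision, recall, F1, accuracy, and Dice coefficient
    for a binary segmentation (sketch)."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    tp = np.sum(pred & gt)      # true positives
    fp = np.sum(pred & ~gt)     # false positives
    fn = np.sum(~pred & gt)     # false negatives
    tn = np.sum(~pred & ~gt)    # true negatives
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    dice = 2 * tp / (2 * tp + fp + fn)   # Dice similarity coefficient
    return dict(precision=precision, recall=recall, f1=f1,
                accuracy=accuracy, dice=dice)
```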
4.2. Results
- Class 1: Pixels that fall within the lung area are labelled by ‘0’.
- Class 2: Pixels related to the non-lung class are represented by the label ‘1’.
- Using the ResNet-34 structure in the encoder section of the U-Net network considerably improves the results, particularly the recall metric.
- The BCDU-Net model generally performs better than the ResNet structure in the contracting path of the U-Net.
- Using ResNet within BCDU-Net achieves a better Dice similarity score (DSC) than when these networks are used individually.
- Using images under our designed channels improves the quantitative results on all the evaluation criteria in comparison to using the default channels.
- The high recall of our proposed model (with the three new channels) stems from the small number of false negatives shown in the confusion matrix.
4.3. Ablation Study
5. Conclusions
6. Future Works
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
Abbreviations
| Abbreviation | Meaning |
|---|---|
| WHO | World Health Organization |
| CT | Computed Tomography |
| MRI | Magnetic Resonance Imaging |
| CAD | Computer-Aided Diagnosis |
| BCDU-Net | Bi-directional ConvLSTM U-Net with Densely connected convolutions |
| FCN | Fully Convolutional Neural Network |
| CNN | Convolutional Neural Network |
| BConvLSTM | Bidirectional Convolutional LSTM |
| LIDC | Lung Image Database Consortium |
| IDRI | Image Database Resource Initiative |
| XML | Extensible Markup Language |
| DICOM | Digital Imaging and Communications in Medicine |
| HU | Hounsfield Unit |
| ROC | Receiver Operating Characteristic |
| AUC | Area Under the ROC Curve |
References
1. Hossain, M.R.I.; Imran, A.; Kabir, M.H. Automatic lung tumor detection based on GLCM features. In Asian Conference on Computer Vision; Springer: Cham, Switzerland, 2014; pp. 109–121.
2. Sun, S.; Christian, B.; Reinhard, B. Automated 3-D segmentation of lungs with lung cancer in CT data using a novel robust active shape model approach. IEEE Trans. Med. Imaging 2011, 31, 449–460.
3. American Cancer Society’s Publication, Cancer Facts & Figures 2020. Available online: https://www.cancer.org/research/cancer-facts-statistics/all-cancer-facts-figures/cancer-facts-figures-2020.html (accessed on 2 November 2020).
4. Wang, Y.; Guo, Q.; Zhu, Y. Medical image segmentation based on deformable models and its applications. In Deformable Models; Springer: New York, NY, USA, 2007; pp. 209–260.
5. Neeraj, S.; Aggarwal, L.M. Automated medical image segmentation techniques. J. Med. Phys. Assoc. Med. Phys. India 2010, 35, 3–14.
6. Asuntha, A.; Singh, N.; Srinivasan, A. PSO, genetic optimization and SVM algorithm used for lung cancer detection. J. Chem. Pharm. Res. 2016, 8, 351–359.
7. Jeyavathana, R.; Balasubramanian, D.; Pandian, A.A. A survey: Analysis on preprocessing and segmentation techniques for medical images. Int. J. Res. Sci. Innov. 2016, 3, 113–120.
8. Panwar, H.; Gupta, P.K.; Siddiqui, M.K.; Morales-Menendez, R.; Singh, V. Application of deep learning for fast detection of COVID-19 in X-Rays using nCOVnet. Chaos Solitons Fractals 2020, 138, 109944.
9. Amine, A.; Modzelewski, R.; Li, H.; Su, R. Multi-task deep learning based CT imaging analysis for COVID-19 pneumonia: Classification and segmentation. Comput. Biol. Med. 2020, 126, 1–10.
10. Wang, X.; Deng, X.; Fu, Q.; Zhou, Q.; Feng, J.; Ma, H.; Liu, W.; Zheng, C. A Weakly-supervised Framework for COVID-19 Classification and Lesion Localization from Chest CT. IEEE Trans. Med. Imaging 2020, 39, 2615–2625.
11. Hira, S.; Bai, A.; Hira, S. An automatic approach based on CNN architecture to detect Covid-19 disease from chest X-ray images. Appl. Intell. 2020.
12. Cheng, J.; Chen, W.; Cao, Y.; Xu, Z.; Zhang, X.; Deng, L.; Zheng, C.; Zhou, J.; Shi, H.; Feng, J. Development and Evaluation of an AI System for COVID-19 Diagnosis. medRxiv 2020.
13. Pathak, Y.; Shukla, P.K.; Tiwari, A.; Stalin, S.; Singh, S.; Shukla, P.K. Deep Transfer Learning based Classification Model for COVID-19 Disease. IRBM 2020.
14. Rizwan, H.I.; Neubert, J. Deep learning approaches to biomedical image segmentation. Inform. Med. Unlocked 2020, 18, 1–12.
15. Memon, N.A.; Mirza, A.M.; Gilani, S.A.M. Segmentation of lungs from CT scan images for early diagnosis of lung cancer. Proc. World Acad. Sci. Eng. Technol. 2006, 14, 228–233.
16. Omid, T.; Alirezaie, J.; Babyn, P. Lung segmentation in pulmonary CT images using wavelet transform. In Proceedings of the 2007 IEEE International Conference on Acoustics, Speech and Signal Processing-ICASSP’07, Honolulu, HI, USA, 15–20 April 2007; pp. 448–453.
17. Sasidhar, B.; Ramesh Babu, D.R.; Ravi Shankar, M.; Bhaskar Rao, N. Automated segmentation of lung regions using morphological operators in CT scan. Int. J. Sci. Eng. Res. 2013, 4, 114–118.
18. Keita, N.; Shimizu, A.; Kobatake, H.; Yakami, M.; Fujimoto, K.; Togashi, K. Multi-shape graph cuts with neighbor prior constraints and its application to lung segmentation from a chest CT volume. Med. Image Anal. 2013, 17, 62–77.
19. Geetanjali, J.; Kaur, S. A Review on Various Edge Detection Techniques in Distorted Images. Int. J. Adv. Res. Comput. Sci. Softw. Eng. 2017, 7, 942–945.
20. Shin, M.C.; Goldgof, D.B.; Bowyer, K.W.; Nikiforou, S. Comparison of edge detection algorithms using a structure from motion task. IEEE Trans. Syst. Man Cybern. Part B Cybern. 2001, 31, 589–601.
21. Paola, C.; Casiraghi, E.; Artioli, D. A fully automated method for lung nodule detection from postero-anterior chest radiographs. IEEE Trans. Med. Imaging 2006, 25, 1588–1603.
22. Ana Maria, M.; da Silva, J.A.; Campilho, A. Automatic delimitation of lung fields on chest radiographs. In Proceedings of the 2004 2nd IEEE International Symposium on Biomedical Imaging: Nano to Macro (IEEE Cat No. 04EX821), Arlington, VA, USA, 18 April 2004; pp. 1287–1290.
23. Hu, X.; Alperin, N.; Levin, D.N.; Tan, K.K.; Mengeot, M. Visualization of MR angiographic data with segmentation and volume-rendering techniques. J. Magn. Reson. Imaging 1991, 1, 539–546.
24. Tang, J.; Millington, S.; Acton, S.T.; Crandall, J.; Hurwitz, S. Surface extraction and thickness measurement of the articular cartilage from MR images using directional gradient vector flow snakes. IEEE Trans. Biomed. Eng. 2006, 53, 896–907.
25. Cline, H.E.; Dumoulin, C.L.; Hart, H.R., Jr.; Lorensen, W.E.; Ludke, S. 3D reconstruction of the brain from magnetic resonance images using a connectivity algorithm. Magn. Reson. Imaging 1987, 5, 345–352.
26. Nihad, M.; Grgic, M.; Huseinagic, H.; Males, M.; Skejic, E.; Smajlovic, M. Automatic CT image segmentation of the lungs with region growing algorithm. In Proceedings of the 18th International Conference on Systems, Signals and Image Processing-IWSSIP, Bratislava, Slovakia, 16–18 June 2011; pp. 395–400.
27. da Silva Felix, H.J.; Cortez, P.C.; Holanda, M.A.; Costa, R.C.S. Automatic Segmentation and Measurement of the Lungs in healthy persons and in patients with Chronic Obstructive Pulmonary Disease in CT Images. In Proceedings of the IV Latin American Congress on Biomedical Engineering 2007, Bioengineering Solutions for Latin America Health, Margarita Island, Venezuela, 24–28 September 2007; pp. 370–373.
28. Kass, M.; Witkin, A.; Terzopoulos, D. Snakes: Active contour models. Int. J. Comput. Vis. 1988, 1, 321–331.
29. Yoshinori, I.; Kim, H.; Ishikawa, S.; Katsuragawa, S.; Ishida, T.; Nakamura, K.; Yamamoto, A. Automatic segmentation of lung areas based on SNAKES and extraction of abnormal areas. In Proceedings of the 17th IEEE International Conference on Tools with Artificial Intelligence (ICTAI’05), Hong Kong, China, 14–16 November 2005; pp. 5–10.
30. Shi, Y.; Qi, F.; Xue, Z.; Chen, L.; Ito, K.; Matsuo, H.; Shen, D. Segmenting lung fields in serial chest radiographs using both population-based and patient-specific shape statistics. IEEE Trans. Med. Imaging 2008, 27, 481–494.
31. Cheng, J.; Liu, J.; Xu, Y.; Yin, F.; Wong, D.W.K.; Tan, N.-M.; Tao, D.; Cheng, C.-Y.; Aung, T.; Wong, T.Y. Superpixel classification based optic disc and optic cup segmentation for glaucoma screening. IEEE Trans. Med. Imaging 2013, 32, 1019–1032.
32. Titinunt, K.; Han, X.-H.; Chen, Y.-W. Liver segmentation using superpixel-based graph cuts and restricted regions of shape constrains. In Proceedings of the 2015 IEEE International Conference on Image Processing (ICIP), Quebec City, QC, Canada, 27–30 September 2015; pp. 3368–3371.
33. Chen, X.; Yao, L.; Zhou, T.; Dong, J.; Zhang, Y. Momentum contrastive learning for few-shot COVID-19 diagnosis from chest CT images. arXiv 2020, arXiv:2006.13276.
34. Zhou, K.; Gu, Z.; Liu, W.; Luo, W.; Cheng, J.; Gao, S.; Liu, J. Multi-cell multi-task convolutional neural networks for diabetic retinopathy grading. In Proceedings of the 2018 40th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Honolulu, HI, USA, 18–21 July 2018; pp. 2724–2727.
35. Dan, C.; Giusti, A.; Gambardella, L.M.; Schmidhuber, J. Deep neural networks segment neuronal membranes in electron microscopy images. In Proceedings of the Advances in Neural Information Processing Systems, Lake Tahoe, NV, USA, 3–6 December 2012; pp. 2843–2851.
36. Jonathan, L.; Shelhamer, E.; Darrell, T. Fully convolutional networks for semantic segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Boston, MA, USA, 7–12 June 2015; pp. 3431–3440.
37. Olaf, R.; Fischer, P.; Brox, T. U-net: Convolutional networks for biomedical image segmentation. In Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention, Munich, Germany, 5–9 October 2015; pp. 234–241.
38. Alom, M.Z.; Yakopcic, C.; Taha, T.M.; Asari, V.K. Nuclei Segmentation with Recurrent Residual Convolutional Neural Networks based U-Net (R2U-Net). In Proceedings of the NAECON 2018—IEEE National Aerospace and Electronics Conference, Dayton, OH, USA, 23–26 July 2018; pp. 228–233.
39. Fausto, M.; Navab, N.; Seyed-Ahmad, A. V-net: Fully convolutional neural networks for volumetric medical image segmentation. In Proceedings of the 2016 Fourth International Conference on 3D Vision (3DV), Stanford, CA, USA, 25–28 October 2016; pp. 565–571.
40. Özgün, C.; Abdulkadir, A.; Lienkamp, S.S.; Brox, T.; Ronneberger, O. 3D U-Net: Learning dense volumetric segmentation from sparse annotation. In Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention, Athens, Greece, 17–21 October 2016; pp. 424–432.
41. Zhou, X.; Ito, T.; Takayama, R.; Wang, S.; Hara, T.; Fujita, H. Three-dimensional CT image segmentation by combining 2D fully convolutional network with 3D majority voting. In Proceedings of the Deep Learning and Data Labeling for Medical Applications, Athens, Greece, 21 October 2016; pp. 111–120.
42. Ozan, O.; Schlemper, J.; Folgoc, L.L.; Lee, M.; Heinrich, M.; Misawa, K.; Mori, K.; McDonagh, S.; Hammerla, N.Y.; et al. Attention u-net: Learning where to look for the pancreas. In Proceedings of the 1st Conference on Medical Imaging with Deep Learning (MIDL 2018), Amsterdam, The Netherlands, 4–6 July 2018.
43. Ozsahin, I.; Sekeroglu, B.; Musa, M.S.; Mustapha, M.T.; Ozsahi, D.U. Review on Diagnosis of COVID-19 from Chest CT Images Using Artificial Intelligence. Comput. Math. Methods Med. 2020.
44. Stephen, L.; Chong, L.H.; Edwin, K.P.; Xu, T.; Wang, X. Automated Pavement Crack Segmentation Using U-Net-Based Convolutional Neural Network. IEEE Access 2020, 8, 114892–114899.
45. Reza, A.; Asadi-Aghbolaghi, M.; Fathy, M.; Escalera, S. Bi-directional ConvLSTM U-net with Densley connected convolutions. In Proceedings of the IEEE International Conference on Computer Vision Workshops, Seoul, Korea, 22 April 2019; pp. 1–10.
46. Song, H.; Wang, W.; Zhao, S.; Shen, J.; Lam, K.-M. Pyramid dilated deeper convlstm for video salient object detection. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018; pp. 715–731.
47. Christian, S.; Ioffe, S.; Vanhoucke, V.; Alemi, A.A. Inception-v4, inception-resnet and the impact of residual connections on learning. In Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence, San Francisco, CA, USA, 4–9 February 2017; pp. 4278–4284.
48. LIDC-IDRI. Available online: https://wiki.cancerimagingarchive.net/display/Public/LIDC-IDRI (accessed on 10 September 2020).
49. Vanitha, U.; Prabhu Deepak, P.; PonNageswaran, N.; Sathappan, R. Tumor detection in brain using morphological image processing. J. Appl. Sci. Eng. Methodol. 2015, 1, 131–136.
50. Megha, G. Morphological image processing. Int. J. Creat. Res. Thoughts 2011, 2, 161–165.
51. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778.
52. Gao, H.; Sun, Y.; Liu, Z.; Sedra, D.; Weinberger, K.Q. Deep networks with stochastic depth. In Proceedings of the 14th European Conference on Computer Vision, Amsterdam, The Netherlands, 11–14 October 2016; pp. 646–661.
53. Gao, H.; Zhuang, L.; Van Der Maaten, L.; Weinberger, K.Q. Densely Connected Convolutional Networks. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; pp. 2261–2269.
54. Sayda, E. Deep Stacked Residual Neural Network and Bidirectional LSTM for Speed Prediction on Real-life Traffic Data. In Proceedings of the 24th European Conference on Artificial Intelligence—ECAI 2020, Santiago de Compostela, Spain, 12 June 2020.
55. Dice, L.R. Measures of the amount of ecologic association between species. Ecology 1945, 26, 297–302.
| Methods | Precision | Recall | F1-Score | Accuracy (%) | Dice Coefficient |
|---|---|---|---|---|---|
| U-Net [37] | 96.11 | 96.34 | 96.22 | 95.18 | 95.02 |
| RU-Net [38] | 95.52 | 97.21 | 96.35 | 97.15 | 94.93 |
| ResNet34-Unet [44] | 97.32 | 98.35 | 97.83 | 96.73 | 95.28 |
| BCDU-Net [45] | 99.02 | 98.03 | 98.52 | 97.21 | 96.32 |
| Proposed Method | 99.12 | 97.01 | 98.05 | 97.58 | 97.15 |

| Channel Type in CT Images | Precision | Recall | F1-Score | Accuracy (%) | Dice Coefficient |
|---|---|---|---|---|---|
| Default | 99.12 | 97.01 | 98.05 | 97.58 | 97.15 |
| Proposed | 99.93 | 97.45 | 98.67 | 97.83 | 97.31 |

| Method | Precision | Recall | F1-Score | Accuracy (%) | Dice Coefficient |
|---|---|---|---|---|---|
| Without Densely Connected Convolutions and BConvLSTM | 97.02 | 94.32 | 95.55 | 96.21 | 96.19 |
| Ours (With Densely Connected Convolutions and BConvLSTM) | 99.93 | 97.45 | 98.67 | 97.83 | 97.31 |
© 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
Jalali, Y.; Fateh, M.; Rezvani, M.; Abolghasemi, V.; Anisi, M.H. ResBCDU-Net: A Deep Learning Framework for Lung CT Image Segmentation. Sensors 2021, 21, 268. https://doi.org/10.3390/s21010268