Article

Efficient Pre-Processing and Segmentation for Lung Cancer Detection Using Fused CT Images

1 Department of Electrical Engineering, International Islamic University, Islamabad 44000, Pakistan
2 Department of Radiology and Biomedical Imaging, New Haven, CT 06519, USA
3 Department of Electrical Engineering, COMSATS University, Abbottabad 22060, Pakistan
* Author to whom correspondence should be addressed.
† These authors contributed equally to this work.
Electronics 2022, 11(1), 34; https://doi.org/10.3390/electronics11010034
Submission received: 12 October 2021 / Revised: 2 December 2021 / Accepted: 10 December 2021 / Published: 23 December 2021
(This article belongs to the Topic Medical Image Analysis)

Abstract: Over the last two decades, radiologists have been using multi-view images to detect tumors. Computed Tomography (CT) imaging is considered one of the most reliable imaging techniques. Many medical-image-processing techniques have been developed to diagnose lung cancer at early or later stages through CT images; however, improving the accuracy and sensitivity of these algorithms remains a major challenge. In this paper, we propose an algorithm based on image fusion for lung segmentation to optimize lung cancer diagnosis. The image fusion technique was developed through Laplacian Pyramid (LP) decomposition along with Adaptive Sparse Representation (ASR). The suggested fusion technique decomposes the medical images into different scales using the LP, and the ASR method is then used to fuse the four decomposed layers. For evaluation purposes, the Lung Image Database Consortium and Image Database Resource Initiative (LIDC-IDRI) dataset was used. The results showed that the Dice Similarity Coefficient (DSC) index of our proposed method was 0.9929, which is better than recently published results. Furthermore, the values of other evaluation parameters, such as the sensitivity, specificity, and accuracy, were 89%, 98%, and 99%, respectively, which are also competitive with recently published results.

1. Introduction

Cancer is one of the most dangerous classes of disease and is spreading across the globe. Lung cancer is one of the main causes of death, and the presence of cancer poses a high risk of complications and mortality. The underlying causes of cancer are not wholly known, which contributes to the frequent occurrence of the disease. According to the World Health Organization fact sheet, cancer is ranked as the leading cause of death across the globe. Cancer caused approximately 10 million deaths in 2020 alone, while more than 70% of these deaths occurred in low- and middle-income countries. Lung cancer is among the most commonly occurring cancers, with 2.21 million cases identified, leading to the death of 1.80 million people. However, early detection of lung cancer can significantly help decrease the death toll and save many lives. The advancement of technology has considerably helped cancer diagnosis with commonly used techniques such as Magnetic Resonance Imaging (MRI), CT scans, X-rays, Positron Emission Tomography (PET), lung biopsy, and High-Resolution Computed Tomography (HRCT). The advancement of CT technology has caused a remarkable expansion in the amount of information in clinical CT. The development of Computer-Aided Diagnosis (CAD) systems for lung segmentation and fusion depends on computer vision and medical imaging technology.
Lung parenchyma segmentation is used as a pre-processing phase in lung CT image processing, in particular for lung disease. The pre-processing stages have a direct impact on the subsequent image analysis. Consequently, quicker and more exact segmentation strategies for lung CT images are an interesting issue worthy of exploration, with practical importance and clinical worth. Numerous lung segmentation strategies have been investigated, and the conventional approaches include threshold- and region-growing-based techniques [1]. However, the findings are not especially encouraging, and the process is convoluted and repetitive; as a result, it is still a largely unexplored territory. Threshold-based segmentation is quick, yet not accurate, because the density values of the lung boundaries are similar to those of the trachea and bronchus regions. Deep learning [2] is a basic region-based image-segmentation technique that can isolate the interstitial lung boundaries rapidly and effectively. This technique, however, is time consuming, and the resulting model is sensitive to boundaries. Most current lung segmentation frameworks use mixed systems that combine edge-based strategies with region growing and other extraction procedures. In addition, various studies address the segmentation of lung parenchyma with lung disease. Pu et al. [3] proposed an automated segmentation method based on a two-dimensional adaptive border marching algorithm to handle juxtapleural nodules (lesions adjoining the chest wall and mediastinum). Senthil et al. in 2019 proposed various evolutionary algorithms for lung segmentation [4]. Four algorithms were applied to the pre-processed images with enhanced quality. MATLAB was used to verify the results for 20 sample lung images, and it was found that the Guaranteed Convergence Particle Swarm Optimization (GCPSO) improved the accuracy. Moreover, in 2020, Akter et al. [5] worked on lung cancer detection using enhanced segmentation accuracy. That study developed an algorithm that uses median values measured along each row and column, in addition to maximum and minimum values, and found that this approach improved the segmentation accuracy of these images. The sensitivity, specificity, precision, and accuracy of the proposed methodology were significantly high, with a lower false positive rate.
Image fusion is the combination of relevant data from different input images into one clarifying fused image. Therefore, image fusion is the mixing and integration of desired information from a set of registered images to obtain clear and alteration-free features. Image fusion methods can be divided into two classes depending on their domain: Spatial-Domain- (SD) and Transform-Domain- (TD) based image fusion methods. Although image fusion can be realized through both spatial-domain-based and transform-domain-based algorithms, the former is less robust and more vulnerable to noise, while the latter can also apply different fusion rules purposely to improve the fusion effect, besides overcoming the above hindrances [6]. Diverse medical imaging modalities exist, each with its own exceptional qualities. This also helps further processing techniques and adds a valuable source of data.
Multi-scale image fusion is a well-known and commonly used technique, in which multiple medical images are fused. Pyramid Fusion (PF), the Discrete Wavelet Transform (DWT), the Curvelet Transform (CVT), the Non-Subsampled Contourlet Transform (NSCT), and several other multi-scale image fusion algorithms are currently in use. The above methods are based on the principle of extracting potentially useful knowledge from the transformed coefficients. After fusing the derived details by measuring the parameters, detailed relevant information along with the fused image is acquired. A natural signal sparsity analysis technique, Sparse Representation (SR), has been used to emulate the capacity of the human visual system in image processing. SR theory is now commonly used in image-processing applications such as image super-resolution, denoising, and fusion. In recent years, SR theory has received much attention in the image-processing field, especially in the context of image fusion. It is well known that the dictionary construction of classical SR algorithms can be performed in one of two ways: analytical methodologies (such as wavelet decomposition) or learning-based methodologies, such as K-Singular-Value Decomposition (K-SVD) and Dictionary Learning with Group Sparsity and Graph Regularization (DL-GSGR) [7].
Traditional SR algorithms that use a fixed dictionary, on the other hand, have a number of drawbacks in the image fusion process. Liu et al. [8] suggested ASR for both image fusion and denoising, which can adaptively create a compact dictionary for the fusion of images. Aishwarya et al. [9] applied the modified spatial frequency to image fusion and introduced the basic concept of an adaptive selection dictionary to SR. In 2015, Singh and Khare [10] investigated an image fusion method for multi-view medical images based on two redundant wavelet transforms (the Redundant Wavelet Transform (RWT) and the Redundant Discrete Wavelet Transform (R-DWT)). In their proposed method, they found that quality image fusion can be produced through the shift-invariance of the R-DWT. Numerous multimodal MRI, CT, and PET medical images have been used for experiments, and the results were analyzed using mutual information and strength metrics [11]. Pyramid transformation is a technique that can be used to accomplish the fusion of multi-view images. This technique was mostly used in computer vision, image compression, and image segmentation when it was first proposed [12]. Presently, the pyramid transform is extensively used to combine multi-view clinical images. The union LP method was proposed by Du et al. [13] to extract many important features, which helped to enhance the outline structure and color contrast of the fused images. To fuse the images captured by a microscope, Kou et al. [14] suggested Region Mosaicking on Laplacian Pyramids (RMLP), but it was found to be sensitive to noise. Then, an LP algorithm including joint averaging was suggested, which effectively improved the output by preserving the rich background details of the image. Li and Zhao in 2020 [15] worked on a novel multi-modal medical image fusion algorithm. In their study, CT and MR images were first decomposed into low- and high-frequency sub-bands using the Non-Subsampled Contourlet Transform (NSCT) of multi-scale geometric transformation; second, the local area standard deviation method was selected for the fusion of the low-frequency sub-band, while an adaptive pulse-coupled neural network model was constructed and used for the fusion of the high-frequency sub-band. The fusion results of their algorithm significantly enhanced the image fusion accuracy, with advantages in both visual effects and objective assessment indices, providing a more accurate basis for the clinical diagnosis and treatment of diseases. Moreover, Soliman et al. worked on accurate lung segmentation of CT chest images by adaptive appearance-guided shape modeling and reported high Dice Similarity Coefficient (DSC), Bidirectional Hausdorff Distance (BHD), and Percentage Volume Difference (PVD) accuracy of their lung segmentation framework on multiple in vivo 3D CT image datasets [16].
Khan et al. in 2020 also worked on an integrated design of contrast-based classical feature fusion and selection [17]. Firstly, a gamma correction max intensity weight approach improves the contrast of the original CT images. Secondly, multiple texture, point, and geometric features are extracted from the contrast-enhanced images, and then a serial canonical correlation-based fusion is performed. Finally, an entropy-based approach is used to substitute zero values and negative features, followed by weighted Neighborhood Component Analysis (NCA) for feature selection. The maximum accuracy was achieved on the Lung Data Science Bowl (LDSB) 2017 dataset. Similarly, in 2021, Azam et al. proposed multimodal medical image registration and fusion for quality enhancement [18]. The proposed approach was validated using the CT and MRI imaging modalities of the Harvard dataset. For the statistical comparison of the proposed system, quality evaluation metrics such as the Mutual Information (MI), Normalized Cross-Correlation (NCC), and Feature Mutual Information (FMI) were computed. The suggested technique yielded more precise outcomes, higher image quality, and useful data for medical diagnosis.
Recently, the American Cancer Society noted that there is a high likelihood of more severe COVID-19 in cancer patients, recommending that patients and their caregivers take special precautions to reduce the risk of contracting the disease. This new type of coronavirus is SARS-CoV-2, a betacoronavirus and a primary cause of Acute Respiratory Syndrome (ARS). In this regard, lung cancer is closely linked with ARS because it belongs to a disease group based on the progression and expansion of abnormal cells within the human body. American scientists have analyzed the course of COVID-19 in patients with cancer. Therefore, the diagnostic and examination features are of particular importance, since these include not only determining the causative agent of an infectious disease, but also the main indicators determining the severity of the clinical picture, the prognosis, the nature, and the amount of medical care.
In this paper, we propose a lung image segmentation and fusion method. The segmentation method optimizes the computational time of CT image segmentation with the help of a very effective and well-known method, the adaptive global threshold. The proposed algorithm also incorporates morphological operations and masking, which have proven very helpful in CT image segmentation. This enabled us to reduce the computational time with improved accuracy in complicated scenarios while eliminating the need for post-processing tasks. The lung image fusion method is based on the LP and ASR [19] methods of image fusion, resulting in a better outcome and a better method of medical image fusion for the treatment of lung cancer [20]. We used LP decomposition of the multi-view clinical CT images to increase the speed of constructing the sub-dictionaries using the ASR method.
The remainder of this paper is organized as follows: The background of the theory is given in Section 2. The materials and techniques are introduced in Section 3. Section 4 gives the experimental results. Section 5 provides the conclusions.

2. Background of the Theory

In this section, we review various image fusion methods for multi-view images.

2.1. Sparse Representation Method

Several SR-based fusion approaches have been studied in recent years [21]. According to Zhu et al. [22], image patches were generated using a sampling approach and classified by a clustering algorithm, and then, a dictionary was constructed using the K-SVD methodology. A medical image fusion scheme based on discriminative low-rank sparse dictionary learning was proposed by Li et al. [23]. Convolutional-sparsity-based morphological component analysis was introduced by Liu et al. in 2019 [24] as a sparse representation model for pixel-level medical image fusion.
In SR methods, a small number of dictionary atoms are used to linearly represent natural signals. Since SR can represent natural images sparsely and effectively, it has been broadly utilized in different fields recently. Nevertheless, the use of SR in fusion methods differs significantly from that in other areas. As a result, an over-complete dictionary is expected to sparsely represent the signal $y$ [25]. The SR can be depicted as follows:
$$y = E\alpha$$
where $E = [e_1, e_2, e_3, \ldots, e_M] \in \mathbb{R}^{N \times M}$ ($N < M$), with $e_i$ as a dictionary atom, is the SR matrix (dictionary), and $\alpha = [\alpha_1, \alpha_2, \alpha_3, \ldots, \alpha_M]^T$ is the vector of sparse coefficients. $E$ is over-complete; as a result, Equation (1) has an infinite number of solutions. The purpose of this procedure is to find the solution vector that contains mostly zero values, i.e., the sparsest solution. Normally, the largest $\ell_1$-norm rule is chosen to fuse the $\{\alpha_i\}$, and $\{\alpha_i\}$ is solved by the equation:
$$\min_{\alpha} \|y - E\alpha\|_F^2 + \lambda \|\alpha\|_1$$
where $\lambda$ plays a significant role in the sparsity: when $\lambda$ is large, the sparse reconstruction error will be large; if $\lambda$ is small, the final error will be smaller.
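As an illustration of the sparse coding step in the equation above, the following sketch recovers a sparse coefficient vector for a synthetic signal under an $\ell_1$ penalty; the random dictionary, the signal, and the value of $\lambda$ are illustrative assumptions only and are not taken from the proposed method.

```python
# A minimal sketch of l1-regularized sparse coding: recover a sparse
# coefficient vector alpha such that y ~ E @ alpha.
# The dictionary E here is random for illustration only (assumption); in the
# paper it would be a learned over-complete dictionary (N < M).
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
N, M = 64, 256                      # signal length and dictionary size (N < M)
E = rng.standard_normal((N, M))
E /= np.linalg.norm(E, axis=0)      # unit-norm atoms

alpha_true = np.zeros(M)
alpha_true[rng.choice(M, 5, replace=False)] = rng.standard_normal(5)
y = E @ alpha_true                  # synthetic sparse signal

# lambda (called alpha in scikit-learn) trades sparsity against residual error
solver = Lasso(alpha=0.01, fit_intercept=False, max_iter=5000)
solver.fit(E, y)
alpha_hat = solver.coef_
print("non-zeros recovered:", np.count_nonzero(np.abs(alpha_hat) > 1e-3))
print("residual:", np.linalg.norm(y - E @ alpha_hat))
```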

2.2. Image-Decomposition-Based Fusion Methods

A novel multi-component fusion method has been presented to generate superior fused images by efficiently exploiting the morphological diversity features of the images [26]. Maqsood and Javed [27] proposed a two-scale Image Decomposition (ID) and sparse representation method for the integration of multi-modal medical images in 2020.

2.3. Deep-Learning-Based Fusion Methods

Several DL-based fusion approaches for multi-modality image fusion have recently been developed. Guo et al. [28] studied the use of a deep network for creating an initial decision map in a CNN for multi-focus image fusion. Li et al. [29] developed a DL architecture for multi-modality image fusion in 2018, which included encoder and decoder networks. Zhang et al. [30] proposed the general Image Fusion framework based on a Convolutional Neural Network (IFCNN) in 2020, which is a broad multi-modality image fusion framework based on CNNs. The performance of these DL-based fusion algorithms has been proven to be competitive. For the merging of images with different resolutions, Ma et al. [31] proposed a Dual-Discriminator conditional Generative Adversarial Network (DDcGAN) in 2020.

2.4. Rolling Guidance Filtering

The Rolling Guidance Filtering (RGF) algorithm, an edge-preserving smoothing filter, was presented by Zhang et al. [32] in 2014. Rolling guidance is implemented in RGF in an iterative way, with rapid convergence characteristics. Unlike other edge-preserving filters, RGF can fully control detail smoothing under a given scale measure. Small structure removal and edge recovery are the two main steps in RGF.

2.5. Dictionary Learning

Aishwarya and Thangammal [9] suggested a multi-modal medical image fusion adaptive dictionary learning algorithm. Useful information blocks were isolated for dictionary learning by removing zero information blocks and estimating the remaining image patches with a Modified Spatial Frequency (MSF).
The creation of an over-complete dictionary has a major impact on SR. There are two basic approaches to creating an over-complete dictionary. The first is pre-setting a transformation matrix, for example, the contourlet transform or the DCT. This method yields a dictionary that is fundamentally fixed. Although the multi-source images have various attributes, using a single fixed sparse dictionary to fuse the images could result in poor performance. The second approach creates a dictionary based on training methods such as the PCA and K-SVD strategies. This generates a dictionary from the source image's structure, allowing the trained atoms to represent the original image more sparsely. As a result, the dictionary produced by the latter method has better performance and efficiency, making it more appropriate for clinical image fusion. We now describe how the dictionary atoms are trained. Let $\{x_i\}_{i=1}^{e}$ be the database samples obtained through a fixed-size window of size $n \times n$, where $e$ represents the number of samples and $n$ the window size. The window performs random sampling from a collection of multi-view clinical images. The dictionary learning model for $E$ can be defined as follows:
$$\min_{E,\,\alpha_i} \sum_{i=0}^{M} \|\alpha_i\|_0 \quad \text{s.t.} \quad \|x_i - E\alpha_i\|_2 < \varepsilon$$
where ε > 0 is the tolerance factor and M is the total number of multi-view clinical images.
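The following is a minimal sketch of patch-based dictionary learning in the spirit of the model above, assuming a synthetic CT slice; scikit-learn's MiniBatchDictionaryLearning is used here as a stand-in for the K-SVD strategy mentioned earlier, and the patch and dictionary sizes are illustrative.

```python
# A hedged sketch of patch-based dictionary learning: sample n x n patches
# from a CT slice with a random sliding window and learn an over-complete
# dictionary E whose atoms sparsely represent the patches.
import numpy as np
from sklearn.feature_extraction.image import extract_patches_2d
from sklearn.decomposition import MiniBatchDictionaryLearning

rng = np.random.default_rng(1)
ct_slice = rng.random((128, 128))            # placeholder for a real CT slice
n = 8                                        # patch side length (window n x n)

patches = extract_patches_2d(ct_slice, (n, n), max_patches=2000, random_state=1)
X = patches.reshape(len(patches), -1)        # each row: one vectorized patch
X -= X.mean(axis=1, keepdims=True)           # zero-mean patches, as in the paper

learner = MiniBatchDictionaryLearning(n_components=4 * n * n,  # over-complete
                                      alpha=1.0, random_state=1)
E = learner.fit(X).components_               # rows are dictionary atoms
print("dictionary shape (atoms x patch dim):", E.shape)
```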

2.6. Laplacian Pyramid Method

Liu et al. [20] proposed a deep-learning technique for medical image fusion. The strategy uses the Laplacian Pyramid to reconstruct the image in the fusion process after generating a weighted map of the source image using the deep network. Chen et al. [19] defined the Laplacian Pyramid to describe the high-frequency detail information lost through the convolution and down-sampling operations in the Gaussian Pyramid (GP) method.
The LP technique is used to decompose an input image into a sequence of multi-scale, multi-layer, pyramid-shaped output images [33]. This technique is used to break down medical images so that useful information can be distinguished within the clinical images. The LP method decomposes an image into a pyramid of images of progressively lower resolution. The upper layers contain low-resolution images, while the lower layers contain high-resolution images, with each lower image being four times the size of the image above it. The resulting decomposed images have a clean, layered appearance. In the LP technique, the difference between two layers of the Gaussian pyramid is used as a pyramid layer, so that different layers process information at different frequencies. The first step in the LP decomposition process is Gaussian pyramid decomposition, which loses some high-frequency data due to the convolution and down-sampling operations. The following are the steps involved in image decomposition:
The input images (multi-view medical images) are used to create the initial Gaussian pyramid. A 5 × 5 2D separable Gaussian filter $\omega(m,n)$ is used to convolve the source images and build $P_l$ from bottom to top by down-sampling, where $P_l$ is the Gaussian pyramid, $l$ is the current layer, and $W_l$ and $C_l$ are the numbers of rows and columns in the $l$-th layer:
$$P_l(i,j) = 4\sum_{m=-2}^{2}\sum_{n=-2}^{2}\omega(m,n)\,P_{l-1}(2i+m,\,2j+n), \quad 0 < l \le L,\ 0 \le i < W_l,\ 0 \le j < C_l$$
The Gaussian pyramid obtained in the previous step is used to construct the corresponding LP. The up-sampled and Gaussian-convolved $(l+1)$th layer $P_{l+1}^*$ is subtracted from the $l$th layer $P_l$, and the difference is the LP's $l$th layer $L_l$. From the bottom layer to the top layer, the LP is constructed as follows:
$$P_l^*(i,j) = 4\sum_{m=-2}^{2}\sum_{n=-2}^{2}\omega(m,n)\,P_l\!\left(\frac{i-m}{2},\,\frac{j-n}{2}\right), \quad 1 \le l \le N,\ 0 \le i < W_l,\ 0 \le j < C_l$$
$$L_l = \begin{cases} P_l - P_{l+1}^* & 0 \le l < N \\ P_l & l = N \end{cases}$$
where:
$$P_l\!\left(\frac{i-m}{2},\,\frac{j-n}{2}\right) = \begin{cases} P_l\!\left(\frac{i-m}{2},\,\frac{j-n}{2}\right) & \text{if } \frac{i-m}{2} \text{ and } \frac{j-n}{2} \text{ are integers} \\ 0 & \text{otherwise} \end{cases}$$
The corresponding Gaussian pyramid for the fused LP can be restored layer by layer from top to bottom, resulting in the source image $P_0$; this implies that the interpolation (expansion) step is applied first. The inverse LP transform is defined as follows:
$$P_l = \begin{cases} LP_l & l = N \\ LP_l + P_{l+1}^* & 0 \le l < N \end{cases}$$
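A minimal sketch of the LP decomposition and reconstruction described above is given below, assuming OpenCV's pyrDown/pyrUp for the Gaussian reduce and expand operations and a synthetic input image; four layers are used, as in the proposed method.

```python
# A minimal sketch of Laplacian Pyramid decomposition and reconstruction.
# The input image here is synthetic (assumption); in practice it would be
# the segmented lung CT slice.
import numpy as np
import cv2

def build_laplacian_pyramid(img, levels=4):
    gp = [img.astype(np.float32)]
    for _ in range(levels - 1):
        gp.append(cv2.pyrDown(gp[-1]))                   # Gaussian reduce
    lp = []
    for l in range(levels - 1):
        h, w = gp[l].shape[:2]
        expanded = cv2.pyrUp(gp[l + 1], dstsize=(w, h))  # Gaussian expand
        lp.append(gp[l] - expanded)                      # L_l = P_l - P*_{l+1}
    lp.append(gp[-1])                                    # top layer: L_N = P_N
    return lp

def reconstruct_from_laplacian_pyramid(lp):
    img = lp[-1]
    for l in range(len(lp) - 2, -1, -1):
        h, w = lp[l].shape[:2]
        img = cv2.pyrUp(img, dstsize=(w, h)) + lp[l]     # P_l = LP_l + P*_{l+1}
    return img

image = np.random.rand(256, 256).astype(np.float32)
pyramid = build_laplacian_pyramid(image, levels=4)
restored = reconstruct_from_laplacian_pyramid(pyramid)
print("max reconstruction error:", float(np.abs(restored - image).max()))
```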

3. Materials and Methods

3.1. Image-Segmentation Method

Lung parenchyma segmentation is significantly helpful in locating and analyzing the nearby lesions, but it requires certain methodologies and frameworks. In the CAD system of lung nodules based on CT image sequences, lung parenchyma segmentation is an important pre-processing stage. We used an optimal thresholding method to reduce the complexity of lung segmentation in the quest to improve the computational time along with the accuracy. The approach was applied with the help of experimentation on several CT images taken from the LIDC-IDRI. The flowchart of the proposed segmented method is given in Figure 1. All the steps of the proposed segmentation technique are also summarised in Algorithm 1.
Let $A(x,y)$ be the input CT image of the lungs. The adaptive global threshold was used to perform the segmentation of the lungs through intensity thresholding of the lung region in the CT image. The value of the threshold was picked from the CT image histogram to produce the output:
$$A_\delta(x,y) = \begin{cases} 1 & \text{if } A(x,y) \ge \sigma \\ 0 & \text{if } A(x,y) < \sigma \end{cases}$$
where $\sigma$ is the specific global threshold value applied to the original input image $A(x,y)$. After applying thresholding to $A(x,y)$, we obtain the resultant image, represented by $A_\delta(x,y)$.
Now, we take the image complement to clear the border, as shown below:
$$A_\alpha(x,y) = C - A_\delta(x,y)$$
where $C$ represents an image with all pixel values equal to 1. $A_\alpha(x,y)$ is the output of this image complement and clear-border stage.
Now, a morphological closing operation is performed on $A_\alpha(x,y)$ by using the mask $B$ to obtain $A_\beta(x,y)$:
$$A_\beta(x,y) = A_\alpha(x,y) \bullet B$$
Now, taking the complement of $A_\beta(x,y)$:
$$A_\gamma(x,y) = C - A_\beta(x,y)$$
Now, the binary image $A_\delta(x,y)$ from Equation (8) is multiplied by the image $A_\gamma(x,y)$ from Equation (11):
$$A_\tau(x,y) = A_\gamma(x,y) \cdot A_\delta(x,y)$$
Morphological closing is applied to $A_\tau(x,y)$ from Equation (12) by using the mask $B$ (structuring element):
$$A_\theta(x,y) = A_\tau(x,y) \bullet B$$
In the next step, the morphological opening operation is applied to $A_\theta(x,y)$, which is calculated in Equation (13), by using the structuring element $B$, as shown below:
$$A_\omega(x,y) = A_\theta(x,y) \circ B$$
In the last step, the output segmented image $\mu(x,y)$ is generated by multiplying $A_\omega(x,y)$ from Equation (14) with $A_\alpha(x,y)$ from Equation (9), as shown below:
$$\mu(x,y) = A_\omega(x,y) \cdot A_\alpha(x,y)$$
Algorithm 1 Proposed segmentation algorithm.
Input: Input image A(x,y)
Output: Segmented image μ(x,y)
Initialization: Thresholding value σ; structuring element/mask B
1: procedure
2:   [M, N] ← size(A(x,y))
3:   C ← ones(M, N)
4:   Structuring element B
5:   if A(x,y) ≥ σ then
6:     A_δ(x,y) ← 1
7:   else
8:     A_δ(x,y) ← 0
9:   end if
10:  Image complement: A_α(x,y) ← C − A_δ(x,y)
11:  Closing operation: A_β(x,y) ← A_α(x,y) • B
12:  Image complement: A_γ(x,y) ← C − A_β(x,y)
13:  Multiplication: A_τ(x,y) ← A_γ(x,y) · A_δ(x,y)
14:  Closing operation: A_θ(x,y) ← A_τ(x,y) • B
15:  Opening operation: A_ω(x,y) ← A_θ(x,y) ∘ B
16:  μ(x,y) ← A_ω(x,y) · A_α(x,y)
17: end procedure
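A hedged sketch of Algorithm 1 follows, assuming a grayscale CT slice scaled to [0, 1]; the threshold value, the disk-shaped structuring element, and its radius are illustrative choices, since the exact mask used in the paper is not specified here.

```python
# A hedged sketch of Algorithm 1 on a pre-loaded grayscale CT slice in [0, 1].
# The 2/3 grey level mentioned in the Discussion is used as the default sigma.
import numpy as np
from skimage.morphology import binary_closing, binary_opening, disk

def segment_lungs(A, sigma=2.0 / 3.0, radius=4):
    B = disk(radius)                         # structuring element / mask B
    A_delta = (A >= sigma)                   # Eq. (8): global thresholding
    A_alpha = ~A_delta                       # Eq. (9): image complement
    A_beta = binary_closing(A_alpha, B)      # Eq. (10): morphological closing
    A_gamma = ~A_beta                        # Eq. (11): image complement
    A_tau = A_gamma & A_delta                # Eq. (12): element-wise product
    A_theta = binary_closing(A_tau, B)       # Eq. (13): closing
    A_omega = binary_opening(A_theta, B)     # Eq. (14): opening
    return A_omega & A_alpha                 # Eq. (15): final segmented mask

ct_slice = np.random.rand(512, 512)          # placeholder for a LIDC-IDRI slice
mask = segment_lungs(ct_slice)
print("segmented pixels:", int(mask.sum()))
```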

3.2. Image Fusion Method

In this section, the proposed image fusion algorithm is presented. The proposed method has three steps, as shown in Figure 2: decomposition of the source segmented image, hierarchical fusion, and reconstruction of the image. The complete proposed fusion method is also summarised in Algorithm 2. The method of LP decomposition is used to decompose each multi-view medical image into four layers in the initial step. The next step is to build a dictionary for each layer, which is then fused using the ASR method in sequence. In the last step, the inverse LP transform is used to obtain the reconstructed resultant image.
Algorithm 2 Proposed fusion algorithm.
Input: Input image μ(x,y)
Output: Fused image I_F
1: procedure
2:   P_0 ← μ(x,y)
3:   for 1 ≤ l ≤ 3 do
4:     P_l ← ↓(P_{l−1})
5:     P_l* ← ↑(P_l)
6:     if 0 < l < 4 then
7:       LP_l ← P_l − P_l*
8:     else
9:       LP_L ← P_l
10:    end if
11:  end for
12:  Sub-dictionaries {E_0, E_1, E_2, ..., E_K}
13:  Sliding window size n × n
14:  Number of patches: e ← (M − n + 1) × (N − n + 1)
15:  Set of patches: {s_i^1, s_i^2, ..., s_i^J}_{i=1}^{e}
16:  {v_1^i, v_2^i, ..., v_J^i}_{i=1}^{e} ← Sorting({s_i^1, s_i^2, ..., s_i^J}_{i=1}^{e})
17:  Zero-mean vector: v̂_j^i ← v_j^i − v̄_j^i · 1
18:  Gradient orientation histogram: {θ_0, θ_1, θ_2, ..., θ_K}
19:  θ_max ← max{θ_0, θ_1, θ_2, ..., θ_K}
20:  k* ← argmax{θ_k | k = 1, ..., K}
21:  min ||α_j^i||_0 subject to ||v̂_j^i − E_{k^i} α_j^i||_2 < nCσ + ε
22:  α_F^i ← α_{j*}^i
23:  j* ← argmax_j {||α_j^i||_1, j = 1, 2, ..., J}
24:  v̄_F^i ← v̄_{j*}
25:  v_F^i ← E_{k^i} α_F^i + v̄_F^i · 1 (fused result of the layer)
26:  P_F^{l−1} ← P_F^{l*} + LP_F^{l−1} (0 < l < 4)
27:  I_F ← P_F^0
28: end procedure

3.2.1. Decomposition of the Segmented Source Image

To obtain the features of the segmented source images $\mu(x,y)$ at various scales, the LP decomposition technique was applied. To begin, we need the Gaussian pyramid of an image of size $M \times N$. The source image is on the $P_0$ layer. To obtain the $P_1$ layer ($0.5M \times 0.5N$), the image $\mu(x,y)$ from layer $P_0$ was down-sampled with the help of the Gaussian kernel function. By repeating the above steps, the LP decomposition was formed. The three-stage decomposition of an LP is shown in Figure 3. The decomposition of the $(l-1)$th layer $P_{l-1}$ into the $l$th layer $P_l$ can be expressed as follows:
$$P_l = \downarrow(P_{l-1})$$
where $\downarrow(\cdot)$ denotes Gaussian filtering followed by down-sampling.
The next step in making the LP is up-sampling each layer of the Gaussian pyramid. The image to be enlarged is considered to be $m \times n$ in dimension. An inverse Gaussian pyramid operation $\uparrow(\cdot)$ is used to expand the image into a $2m \times 2n$ image, which can be interpreted as:
$$P_l^* = \uparrow(P_l)$$
Now, the reconstruction of $\mu(x,y)$ can be done from the Laplacian Pyramid layers as shown below:
$$P_l = \begin{cases} P_{l+1}^* + LP_l & \text{for } 0 \le l < L \\ LP_l & \text{for } l = L \end{cases}, \qquad \mu(x,y) = P_0$$

3.2.2. ASR Method

After decomposition, the ASR method was used to fuse the corresponding groups of layers ($LP_0$ to $LP_3$) of the two source images [34]. As shown in Figure 4, the most critical step in ASR is to select and compose the adaptive dictionary. The segmented source images are represented by $\{\mu_1, \mu_2, \ldots, \mu_J\}$, and all have the same $M \times N$ size. Medical images meet the ASR model's requirement that the source images must be of the same size. As a result, ASR is an excellent option for fusing multi-view images.
The corresponding layers of the LPs of the two images were used to create a new LP of the fused image using the learned sub-dictionaries $\{E_0, E_1, E_2, \ldots, E_K\}$. The sub-dictionaries $\{E_0, E_1, E_2, \ldots, E_K\}$ were generated through the following five steps:
• For each input image $\mu_j$, a sliding window of size $n \times n$ was used to extract all patches with a step length of one pixel, from top to bottom and left to right. It was assumed that $\{s_i^1, s_i^2, \ldots, s_i^J\}_{i=1}^{e}$ is the set of patches of $\{\mu_1, \mu_2, \mu_3, \ldots, \mu_J\}$ at the $i$th position, where $e = (M-n+1) \times (N-n+1)$ is the number of patches sampled from each input image;
• The column vectors $\{v_1^i, v_2^i, \ldots, v_J^i\}$ were obtained by rearranging the patches $\{s_i^1, s_i^2, \ldots, s_i^J\}$, and each column vector $v_j^i$ was made zero-mean by subtracting its mean value $\bar{v}_j^i$ from each entry:
    $$\hat{v}_j^i = v_j^i - \bar{v}_j^i \cdot \mathbf{1}$$
    where $\mathbf{1}$ is the $n \times 1$ all-ones vector;
• From the set $\{\hat{v}_1^i, \hat{v}_2^i, \ldots, \hat{v}_J^i\}$, the vector $\hat{v}_m^i$ with the greatest variance was chosen. Then, using $\hat{v}_m^i$, a gradient orientation histogram was generated, and one sub-dictionary was chosen from $E = \{E_0, E_1, \ldots, E_K\}$, which has a total of $K+1$ sub-dictionaries. The gradient orientation histogram can be written as:
    $$\theta = \{\theta_0, \theta_1, \ldots, \theta_K\}$$
    $E_{k^i}$ is defined as the adaptive sub-dictionary, with $k^i$ being the index of the sub-dictionary to which the patch at position $i$ should be assigned. The procedure for selecting $k^i$ is shown below:
    $$k^i = \begin{cases} 0 & \text{if } \dfrac{\theta_{\max}}{\sum_{k=1}^{K}\theta_k} < \dfrac{2}{K} \\ k^* & \text{otherwise} \end{cases}$$
    where $\theta_{\max}$ is:
    $$\theta_{\max} = \max\{\theta_0, \theta_1, \ldots, \theta_K\}$$
    and the index of $\theta_{\max}$ is:
    $$k^* = \arg\max_k \{\theta_k \mid k = 1, \ldots, K\};$$
• The dictionary chosen for SR fusion was $E_{k^i}$. The sparse vectors $\alpha_F^i$ were obtained from $\{\alpha_1^i, \alpha_2^i, \ldots, \alpha_J^i\}$ after sparse-coding the vectors $\hat{v}_j^i$ extracted from the $LP_0$ of both source images:
    $$\min_{\alpha_j^i} \|\alpha_j^i\|_0 \quad \text{s.t.} \quad \|\hat{v}_j^i - E_{k^i}\alpha_j^i\|_2 < nC\sigma + \epsilon$$
    where $C > 0$ is a constant and $\epsilon > 0$ is the error tolerance. The steps of this method are shown in Figure 5.
    The Max-L1 fusion rule was used for the fusion of the sparse vectors $\{\alpha_1^i, \alpha_2^i, \ldots, \alpha_J^i\}$:
    $$\alpha_F^i = \alpha_{j^*}^i, \quad j = 1, 2, \ldots, J$$
    where:
    $$j^* = \arg\max_j \{\|\alpha_j^i\|_1,\ j = 1, 2, \ldots, J\}$$
    The merged mean value $\bar{v}_F^i$ is set to:
    $$\bar{v}_F^i = \bar{v}_{j^*}^i$$
    Finally, the fused result $v_F^i$ of $\{v_1^i, v_2^i, \ldots, v_J^i\}$ is estimated by:
    $$v_F^i = E_{k^i}\alpha_F^i + \bar{v}_F^i \cdot \mathbf{1};$$
• Steps 2 to 4 are repeated for all source image patches $\{s_i^1, s_i^2, \ldots, s_i^J\}_{i=1}^{e}$ to obtain the fused results $\{v_F^i\}_{i=1}^{e}$ of $LP_0$. To fuse the remaining three layers of the pyramid, the step of selecting the sub-dictionary $E_{k^i}$ is repeated. Finally, we are able to build the fused LP image $LP_F$ (a simplified sketch of this Max-L1 patch fusion is given after this list).
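A simplified sketch of the Max-L1 patch fusion described in the steps above follows; it uses a single pre-learned dictionary instead of the adaptive sub-dictionary selection by gradient orientation histogram, and the dictionary, patches, and sparsity level are illustrative assumptions.

```python
# A simplified sketch of the Max-L1 fusion rule: at each patch position, the
# zero-mean patch from every source layer is sparse-coded against a (single,
# pre-learned) dictionary E, and the code with the largest l1-norm is kept.
import numpy as np
from sklearn.decomposition import sparse_encode

def fuse_patches_max_l1(patches_per_source, E, n_nonzero=5):
    """patches_per_source: list of (num_patches, n*n) arrays, one per source."""
    fused = np.empty_like(patches_per_source[0])
    means = [p.mean(axis=1, keepdims=True) for p in patches_per_source]
    codes = [sparse_encode(p - m, E, algorithm="omp", n_nonzero_coefs=n_nonzero)
             for p, m in zip(patches_per_source, means)]
    l1 = np.stack([np.abs(c).sum(axis=1) for c in codes])   # shape (J, num_patches)
    j_star = l1.argmax(axis=0)                               # Max-L1 rule
    for i, j in enumerate(j_star):
        fused[i] = codes[j][i] @ E + means[j][i]             # reconstruct + add mean
    return fused

rng = np.random.default_rng(2)
E = rng.standard_normal((128, 64))                           # 128 atoms for 8x8 patches
E /= np.linalg.norm(E, axis=1, keepdims=True)
sources = [rng.random((100, 64)) for _ in range(2)]          # two source layers
print("fused patch block shape:", fuse_patches_max_l1(sources, E).shape)
```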

3.2.3. Image Reconstruction and Fusion

The inverse LP transform is represented in the form of the equation given below:
$$P_F^{l-1} = (P_F^{l})^{*} + LP_F^{l-1}, \quad 0 < l < 4$$
where $P_F^{l} = LP_F^{l}$ at the top layer, and $P_F^{l}$ is the Gaussian pyramid of the $l$th layer retrieved from $LP_F^{l}$. According to Equation (27), the corresponding Gaussian pyramid is obtained by recursion from the top layer of the LP, and then the fused image $I_F$ is acquired.

4. Results

In this section, the experimental results of the proposed technique are presented and evaluated by comparing with other recently published results of other proposed techniques/methods.

4.1. Dataset

The LIDC-IDRI dataset of lung CT images was used to evaluate the performance of the proposed algorithm. The Cancer Imaging Archive (TCIA) hosts the LIDC, which is freely accessible on the TCIA website [35]. This dataset was created through the collaboration of seven academic centers and eight medical imaging organizations. Each of the four expert radiologists independently assessed his/her own marks, as well as the anonymized marks of the three other radiologists, before rendering a final decision. We considered 4682 CT slices of 61 different patients from this dataset, which contains nodules of a size of 3–30 mm. Each patient has 60–120 slices. The dataset is in DICOM format, containing 512 × 512, 16-bit images with 4096 gray-level values in HU. The pixel spacing ranges from 0.78 mm to 1 mm, whereas the reconstruction interval ranges from 1 mm to 3 mm. We implemented our algorithm in MATLAB R2019a.
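For readers reproducing the setup, the following sketch loads one DICOM slice and converts it to Hounsfield Units, assuming the pydicom package and a hypothetical local file path; it is not part of the proposed algorithm.

```python
# A hedged sketch of loading one LIDC-IDRI slice and converting it to HU.
# The file path below is illustrative only (assumption), not from the paper.
import numpy as np
import pydicom

ds = pydicom.dcmread("LIDC-IDRI-0001/slice_0001.dcm")    # hypothetical path
raw = ds.pixel_array.astype(np.float32)                   # 512 x 512, 16-bit

# RescaleSlope / RescaleIntercept map stored values to Hounsfield Units
hu = raw * float(ds.RescaleSlope) + float(ds.RescaleIntercept)
print("HU range:", hu.min(), hu.max())
```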

4.2. Image Segmentation

The first part of the proposed algorithm is image segmentation. For the evaluation of our proposed technique for lung segmentation, the DSC index was used to estimate the consistency between the original segmentation and our calculated results. The Dice coefficient is calculated using the formula:
$$d = \frac{2\,|O_{img} \cap F_{img}|}{|O_{img}| + |F_{img}|}$$
where $O_{img}$ is the original (reference) segmentation, while $F_{img}$ is the segmented result. The results of image segmentation are shown in Figure 6 and Figure 7. The DSC index was used as an evaluation parameter for the image segmentation. The DSC index value of the proposed method was 0.9929, which is better than the published result of 0.9874 [36].
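A minimal sketch of the DSC computation, assuming two binary masks of equal size (the reference segmentation and the computed result), is shown below.

```python
# A minimal sketch of the Dice Similarity Coefficient on two binary masks.
import numpy as np

def dice_coefficient(reference_mask, predicted_mask):
    reference_mask = reference_mask.astype(bool)
    predicted_mask = predicted_mask.astype(bool)
    intersection = np.logical_and(reference_mask, predicted_mask).sum()
    return 2.0 * intersection / (reference_mask.sum() + predicted_mask.sum())

a = np.zeros((64, 64), dtype=bool); a[16:48, 16:48] = True
b = np.zeros((64, 64), dtype=bool); b[20:52, 16:48] = True
print(f"DSC = {dice_coefficient(a, b):.4f}")
```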
In Figure 7, three cases are presented to show the results. The first column displays the original images taken for lung segmentation, while the second column displays the outcomes with a thick boundary of the selected region. The third column presents the final results of the segmentation. In Figure 8, the segmentation results of the proposed method are compared with the results of recently published techniques. Figure 9 and Table 1 provide a precise comparison of the conventional methods with the proposed method for quantification. The DSC index value of the proposed method was 0.9929, better than the other listed results.
Table 2 compares the overall performance of the proposed technique with existing techniques. In this table, three parameters, sensitivity, specificity, and accuracy, are used for evaluation purposes. The quantitative results showed that the proposed technique outperformed the U-Net [37], AWEU-Net [38], 2D U-Net [9], 2D Seg U Det [39], 3D FCN [40], 3D nodule R-CNN [41], 2D AE [42], 2D CNN [43], 2D LGAN [44], and 2D encoder–decoder [45]. The accuracy of the proposed method was 99%, which is much better than the other listed methods, as shown in Table 2. The sensitivity of the proposed method was 89%, higher than all listed methods except the published results of the AWEU-Net and the 2D encoder–decoder; however, their other parameter values were lower than those of the proposed method.

4.3. Image Fusion Results

This section describes the results of the proposed fusion method. A comparative experiment was performed with single-patient multi-view diagnostic CT images of the lungs to check the feasibility of the proposed procedure. Six indices were used to test the fusion results. The contrast was measured using the Average Pixel Intensity (API). The arithmetic square root of the variance is the Standard Deviation (SD), which represents the degree of dispersion. The total amount of information in the image is represented by the entropy (H) [46]. The resolution of the fusion effects is measured by the Average Gradient (AG). The Mutual Information (MI) reflects the energy transferred from the input image to the fused output image [47]. The Spatial Frequency (SF) was used to analyze the total level of fused output image information. In addition, edge retention ($Q^{AB/F}$) [48] refers to how much of the input image edge information is preserved in the final result, the total loss of image information was determined using $L^{AB/F}$, and the level of noise and other related artifacts was calculated using $N_m^{AB/F}$ [49].
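The following sketch computes a few of these indices (API, SD, H, AG, and SF) using their common textbook definitions on a placeholder image; the paper's exact formulations and the remaining metrics (MI, $Q^{AB/F}$, $L^{AB/F}$, $N_m^{AB/F}$) are not reproduced here.

```python
# A sketch of several fusion quality indices with standard definitions.
import numpy as np

def fusion_metrics(img):
    img = img.astype(np.float64)
    api = img.mean()                                    # Average Pixel Intensity
    sd = img.std()                                      # Standard Deviation
    hist, _ = np.histogram(img, bins=256, range=(img.min(), img.max() + 1e-9))
    p = hist / hist.sum()
    h = -np.sum(p[p > 0] * np.log2(p[p > 0]))           # entropy H
    gy, gx = np.gradient(img)
    ag = np.mean(np.sqrt((gx ** 2 + gy ** 2) / 2.0))    # Average Gradient
    rf = np.sqrt(np.mean(np.diff(img, axis=1) ** 2))    # row frequency
    cf = np.sqrt(np.mean(np.diff(img, axis=0) ** 2))    # column frequency
    sf = np.sqrt(rf ** 2 + cf ** 2)                     # Spatial Frequency
    return {"API": api, "SD": sd, "H": h, "AG": ag, "SF": sf}

fused = np.random.rand(256, 256) * 255                  # placeholder fused image
print(fusion_metrics(fused))
```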
Figure 10, Figure 11 and Figure 12 present the fusion results obtained using various fusion methods. In general, the proposed approach produced a fused image that retained the edges and information as well. Figure 10 shows the source multi-view images. The first column shows the different multi-view source CT images. The second, third, and fourth columns show the different layers of the Gaussian pyramid. The proposed approach was applied to the source images at different levels.
The SR method easily created a block effect, as seen in Figure 11. The ASR method failed to eliminate the block effect, and the gradient contrast was poor, resulting in a fusion result with a blurred texture and structure. Blurry edges, low contrast, and lack of structure in the fused lung images would have a huge impact on the doctor's treatment accuracy. The proposed method, on the other hand, can produce better fusion results and is consistent with the human visual system, as shown in Figure 12. As a result, the proposed method achieved the highest level of medical image fusion efficiency and can be used for medical care.
To evaluate the experimental results, six statistical indicators, SF, MI, API, SD, AG, and H, were used. The higher the value of each indicator, the better the quality of the fused image. Since the values of the API, SD, and SF were too high, we divided them by ten to make the observation easier. $Q^{AB/F}$, $L^{AB/F}$, and $N_m^{AB/F}$ are the fusion efficiency metrics Q, L, and N, respectively (Table 3). $Q^{AB/F}$ is likewise better when higher, whereas the $L^{AB/F}$ and $N_m^{AB/F}$ values should be lower.
The proposed method consistently had better results with respect to the API, SD, and MI, indicating that the suggested technique has a good capacity to maintain details. Because of the block effect, the SR method outperformed the proposed method in terms of the AG and SF. As shown in Figure 12, the resultant images acquired by using the SR method contained several artifacts and became smooth due to the loss of internal information in the fused image. The proposed approach had the best $L^{AB/F}$ and $Q^{AB/F}$ ratings, meaning that it kept the most information from the source images while still preserving the edges and structure. This shows that our method is effective in general. According to the analysis of the fusion results, the suggested technique had a better overall performance than the other fusion techniques. A doctor must closely examine the fused CT image before making a diagnosis. As a result, when evaluating multi-view medical image fusion, not only the suitability of the evaluation indices, but also whether the indices are in compliance with the human visual system must be addressed. The results showed that the proposed approach produced the highest-quality fused image with no distortion while attempting to recreate the fused image with all the information and structure preserved.

4.4. Discussion

In view of the global pandemic and its effect on lung cancer patients, early diagnosis using segmentation of lung CT images has received greater attention from clinical analysts and research scholars, and many algorithms have been proposed to achieve precision and accuracy. Taking this into consideration, a novel method based on the adaptive global threshold was proposed and examined from three different aspects: the DSC, the accuracy, and a time-based analysis. First, the DSC results are reported in Table 1, which can be further validated by Figure 9. In order to evaluate the proposed method, the results were compared with those of recently published methods and the manual segmentation made by experts. From Figure 8, it can be observed that the proposed method provides accurate lung segmentation results. The proposed method extracts the lung region accurately, as it uses a modified algorithm and mathematical morphological operations. In Figure 6, the specific threshold value $\sigma$ was applied to the original input image. The grey-level threshold was 2/3 in our experimentation. With this value, the segmented lung boundary is clean and clear, and the accuracy meets the requirements.
Next, the fusion was performed, which improved the classification parameters, as given in Table 2 and Figure 12. The fusion accuracy achieved by the proposed method was 99%. From the review of the existing methods, we found that it is very hard to compare the results with previously published work because of the non-uniform performance metrics and different evaluation criteria, including the datasets and types of nodules considered. Despite this constraint, we made a performance comparison of our proposed system with the other lung CAD systems, as shown in Table 2. It can be seen that our proposed system showed better performance compared to the other systems regarding the sensitivity, specificity, and accuracy. The other performance indicators, i.e., the API, SD, AG, H, MI, SF, $Q^{AB/F}$, $L^{AB/F}$, and $N_m^{AB/F}$, are shown in Table 3. It is clear that the proposed method had the optimum performance on the API, SD, and MI, which shows that the proposed method has the ability to retain detailed information. The values of $N_m^{AB/F}$ in Table 3 show that the images obtained by the SR method had some artifacts and that its fused image was too smooth due to the loss of many internal details. The smaller values of $L^{AB/F}$ and $N_m^{AB/F}$ for the proposed method indicate that its image had only a minor loss of information and few artifacts in the fusion process. From the analysis of the fusion results, it can be concluded that the proposed method has overall better performance than the other fusion methods.
The fusion approach has a significant disadvantage in terms of computational time, as fusing many features increases the overall classification time; this can be reduced by the selection process, and in the proposed method it was minimized to 1.22 s. An analysis of the comparison between the computing time and the final segmented results revealed that our proposed adaptive-global-threshold method is more efficient in lung segmentation. The proposed approach also improves the contrast and brightness of the fused images. The output of the experiments showed that the suggested technique can significantly preserve detail information within a range, provide a clear view of the input image data, and ensure that no additional objects or information are added during the fusion process. In particular, the proposed method preserves information regarding the edges and structure of all CT image slices. The proposed method was applied on a single dataset, which is a limitation of this study.

5. Conclusions and Future Work

Lung segmentation has gained much attention in the past due to its effectiveness in lung CT image processing and the clinical analysis of lung disease, and various segmentation methods have been suggested. A robust lung segmentation method is also required to support computer-aided lung lesion treatment planning and quantitative evaluation of the lung cancer treatment response. The improved global threshold approach has seen significant development in the field of computer vision and image processing, prompting us to study its utility in lung CT image segmentation. As a result, selecting the appropriate collection of characteristics can improve the system's overall accuracy by increasing the sensitivity and decreasing false positives. To evaluate the system's effectiveness, we also used fusion methods (LP and ASR); the findings clearly revealed that these methods reduce image noise and enhance the image quality while reducing the time complexity.
Our proposed method produced satisfactory results, but it still has room for improvement. First, the fusion rule of the detail layer requires further research. Secondly, the system should be evaluated on large and different datasets to achieve greater robustness.

Author Contributions

The technique was developed by I.N. and I.U.H. The simulation was done by I.N. All work was done under the supervision of I.U.H. and M.M.K. M.B.Q., H.U. and S.B. helped in paper writing and review. All authors have read and agreed to the published version of the manuscript.

Funding

This research work was not funded by any institution or organization.

Acknowledgments

First of all, we pay our deepest gratitude to the LIDC-IDRI for sharing the valuable datasets comprising pearls of wisdom that helped us achieve our research objectives comprehensively. Secondly, we are immensely grateful to our teachers, family, and friends who supported us throughout the process of the research.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
CT  Computed Tomography
LP  Laplacian Pyramid
ASR  Adaptive Sparse Representation
LIDC  Lung Image Database Consortium
PET  Positron Emission Tomography
HRCT  High-Resolution Computed Tomography
CAD  Computer-Aided Diagnosis
GCPSO  Guaranteed Convergence Particle Swarm Optimization
SD  Spatial Domain
TD  Transform Domain
PF  Pyramid Fusion
DWT  Discrete Wavelet Transform
CVT  Curvelet Transform
NSCT  Non-Subsampled Contourlet Transform
SR  Sparse Representation
SVD  Singular-Value Decomposition
DL-GSGR  Dictionary Learning with Group Sparsity and Graph Regularization
RWT  Redundant Wavelet Transform
R-DWT  Redundant Discrete Wavelet Transform
RMLP  Region Mosaicking on Laplacian Pyramids
NCA  Neighborhood Component Analysis
LDSB  Lung Data Science Bowl
MI  Mutual Information
NCC  Normalized Cross-Correlation
FMI  Feature Mutual Information
PCA  Principal Component Analysis
LIDC-IDRI  Lung Image Database Consortium and Image Database Resource Initiative
ROI  Region Of Interest
DICOM  Digital Imaging and Communications in Medicine
RGB  Red Green Blue
DSC  Dice Similarity Coefficient
RD  Region Detection
LSWI  Level Set Without Initialization
RM  Re-initialization Methods
API  Average Pixel Intensity
SD  Standard Deviation
AG  Average Gradient
SF  Spatial Frequency
BiSe-Net  Bilateral Segmentation Network
ESP-Net  Efficient Spatial Pyramid Network
GDRLSE  Generalized Distance Regulated Level Set Evolution
RASM  Robust Active Shape Model
MSGC  Multi-Scale Grid Clustering
GMM  Gaussian Mixture Model

References

  1. Vijaya, G.; Suhasini, A. An adaptive preprocessing of lung CT images with various filters for better enhancement. Acad. J. Cancer Res. 2014, 7, 179–184. [Google Scholar]
  2. Litjens, G.; Kooi, T.; Bejnordi, B.E.; Setio, A.A.A.; Ciompi, F.; Ghafoorian, M.; Van Der Laak, J.A.; Van Ginneken, B.; Sánchez, C.I. A survey on deep learning in medical image analysis. Med. Image Anal. 2017, 42, 60–88. [Google Scholar] [CrossRef] [Green Version]
  3. Pu, J.; Roos, J.; Chin, A.Y.; Napel, S.; Rubin, G.D.; Paik, D.S. Adaptive border marching algorithm: Automatic lung segmentation on chest CT images. Comput. Med. Imaging Graph. 2008, 32, 452–462. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  4. Senthil Kumar, K.; Venkatalakshmi, K.; Karthikeyan, K. Lung cancer detection using image segmentation by means of various evolutionary algorithms. Comput. Math. Methods Med. 2019, 2019, 4909846. [Google Scholar] [CrossRef] [Green Version]
  5. Akter, O.; Moni, M.A.; Islam, M.M.; Quinn, J.M.; Kamal, A. Lung cancer detection using enhanced segmentation accuracy. Appl. Intell. 2020, 50, 1–14. [Google Scholar] [CrossRef]
  6. Du, J.; Li, W.; Lu, K.; Xiao, B. An overview of multi-modal medical image fusion. Neurocomputing 2016, 215, 3–20. [Google Scholar] [CrossRef]
  7. Li, S.; Yin, H.; Fang, L. Group-sparse representation with dictionary learning for medical image denoising and fusion. IEEE Trans. Biomed. Eng. 2012, 59, 3450–3459. [Google Scholar] [CrossRef]
  8. Liu, H.; Cao, H.; Song, E.; Ma, G.; Xu, X.; Jin, R.; Jin, Y.; Hung, C.C. A cascaded dual-pathway residual network for lung nodule segmentation in CT images. Phys. Medica 2019, 63, 112–121. [Google Scholar] [CrossRef] [Green Version]
  9. Aishwarya, N.; Bennila Thangammal, C. A novel multimodal medical image fusion using sparse representation and modified spatial frequency. Int. J. Imaging Syst. Technol. 2018, 28, 175–185. [Google Scholar] [CrossRef]
  10. Kaur, R.; Kaur, E.G. Medical image fusion using redundant wavelet based ICA co-variance analysis. Int. J. Eng. Comp. Sci. 2015, 4, 28. [Google Scholar] [CrossRef]
  11. Liu, X.; Mei, W.; Du, H. Detail-enhanced multimodality medical image fusion based on gradient minimization smoothing filter and shearing filter. Med. Biol. Eng. Comput. 2018, 56, 1565–1578. [Google Scholar] [CrossRef] [PubMed]
  12. Matsopoulos, G.; Marshall, S.; Brunt, J. Multiresolution morphological fusion of MR and CT images of the human brain. IEE Proc.-Vis. Image Signal Process. 1994, 141, 137–142. [Google Scholar] [CrossRef]
  13. Du, J.; Li, W.; Xiao, B.; Nawaz, Q. Union Laplacian Pyramid with multiple features for medical image fusion. Neurocomputing 2016, 194, 326–339. [Google Scholar] [CrossRef]
  14. Kou, L.; Zhang, L.; Zhang, K.; Sun, J.; Han, Q.; Jin, Z. A multi-focus image fusion method via region mosaicking on Laplacian Pyramids. PLoS ONE 2018, 13, e0191085. [Google Scholar] [CrossRef] [PubMed]
  15. Li, X.; Zhao, J. A novel multi-modal medical image fusion algorithm. J. Ambient. Intell. Humaniz. Comput. 2021, 12, 1995–2002. [Google Scholar] [CrossRef]
  16. Soliman, A.; Khalifa, F.; Elnakib, A.; Abou El-Ghar, M.; Dunlap, N.; Wang, B.; Gimel’farb, G.; Keynton, R.; El-Baz, A. Accurate lung segmentation on CT chest images by adaptive appearance-guided shape modeling. IEEE Trans. Med. Imaging 2016, 36, 263–276. [Google Scholar] [CrossRef] [PubMed]
  17. Khan, M.A.; Rubab, S.; Kashif, A.; Sharif, M.I.; Muhammad, N.; Shah, J.H.; Zhang, Y.D.; Satapathy, S.C. Lungs cancer classification from CT images: An integrated design of contrast based classical features fusion and selection. Pattern Recognit. Lett. 2020, 129, 77–85. [Google Scholar] [CrossRef]
  18. Azam, M.A.; Khan, K.B.; Ahmad, M.; Mazzara, M. Multimodal Medical Image Registration and Fusion for Quality Enhancement. Cmc-Comput. Mater. Contin. 2021, 68, 821–840. [Google Scholar] [CrossRef]
  19. Chen, T.; Ma, X.; Ying, X.; Wang, W.; Yuan, C.; Lu, W.; Chen, D.Z.; Wu, J. Multi-modal fusion learning for cervical dysplasia diagnosis. In Proceedings of the 2019 IEEE 16th International Symposium on Biomedical Imaging (ISBI 2019), Venice, Italy, 8–11 April 2019; pp. 1505–1509. [Google Scholar]
  20. Liu, Y.; Chen, X.; Cheng, J.; Peng, H. A medical image fusion method based on convolutional neural networks. In Proceedings of the 2017 20th International Conference on Information Fusion (Fusion), Xi’an, China, 10–13 July 2017; pp. 1–7. [Google Scholar]
  21. Ma, X.; Hu, S.; Liu, S.; Fang, J.; Xu, S. Multi-focus image fusion based on joint sparse representation and optimum theory. Signal Process. Image Commun. 2019, 78, 125–134. [Google Scholar] [CrossRef]
  22. Zhu, Z.; Chai, Y.; Yin, H.; Li, Y.; Liu, Z. A novel dictionary learning approach for multi-modality medical image fusion. Neurocomputing 2016, 214, 471–482. [Google Scholar] [CrossRef]
  23. Li, H.; He, X.; Tao, D.; Tang, Y.; Wang, R. Joint medical image fusion, denoising and enhancement via discriminative low-rank sparse dictionaries learning. Pattern Recognit. 2018, 79, 130–146. [Google Scholar] [CrossRef]
  24. Liu, Y.; Chen, X.; Ward, R.K.; Wang, Z.J. Medical image fusion via convolutional sparsity based morphological component analysis. IEEE Signal Process. Lett. 2019, 26, 485–489. [Google Scholar] [CrossRef]
  25. Jiang, W.; Yang, X.; Wu, W.; Liu, K.; Ahmad, A.; Sangaiah, A.K.; Jeon, G. Medical images fusion by using weighted least squares filter and sparse representation. Comput. Electr. Eng. 2018, 67, 252–266. [Google Scholar] [CrossRef]
  26. Xu, Z. Medical image fusion using multi-level local extrema. Inf. Fusion 2014, 19, 38–48. [Google Scholar] [CrossRef]
  27. Maqsood, S.; Javed, U. Multi-modal medical image fusion based on two-scale image decomposition and sparse representation. Biomed. Signal Process. Control 2020, 57, 101810. [Google Scholar] [CrossRef]
  28. Guo, X.; Nie, R.; Cao, J.; Zhou, D.; Qian, W. Fully convolutional network-based multifocus image fusion. Neural Comput. 2018, 30, 1775–1800. [Google Scholar] [CrossRef]
  29. Li, H.; Wu, X.J. DenseFuse: A fusion approach to infrared and visible images. IEEE Trans. Image Process. 2018, 28, 2614–2623. [Google Scholar] [CrossRef] [Green Version]
  30. Zhang, Y.; Liu, Y.; Sun, P.; Yan, H.; Zhao, X.; Zhang, L. IFCNN: A general image fusion framework based on convolutional neural network. Inf. Fusion 2020, 54, 99–118. [Google Scholar] [CrossRef]
  31. Ma, J.; Xu, H.; Jiang, J.; Mei, X.; Zhang, X.P. DDcGAN: A dual-discriminator conditional generative adversarial network for multi-resolution image fusion. IEEE Trans. Image Process. 2020, 29, 4980–4995. [Google Scholar] [CrossRef]
  32. Zhang, Q.; Shen, X.; Xu, L.; Jia, J. Rolling guidance filter. In European Conference on Computer Vision; Springer: Berlin/Heidelberg, Germany, 2014; pp. 815–830. [Google Scholar]
  33. Mao, R.; Fu, X.S.; Niu, P.J.; Wang, H.Q.; Pan, J.; Li, S.S.; Liu, L. Multi-directional laplacian Pyramid image fusion algorithm. In Proceedings of the 2018 3rd International Conference on Mechanical, Control and Computer Engineering (ICMCCE), Huhhot, China, 14–16 September 2018; pp. 568–572. [Google Scholar]
  34. Liu, Y.; Wang, Z. Simultaneous image fusion and denoising with adaptive sparse representation. IET Image Process. 2015, 9, 347–357. [Google Scholar] [CrossRef] [Green Version]
  35. Armato, S.G., III; McLennan, G.; Bidaut, L.; McNitt-Gray, M.F.; Meyer, C.R.; Reeves, A.P.; Zhao, B.; Aberle, D.R.; Henschke, C.I.; Hoffman, E.A.; et al. The lung image database consortium (LIDC) and image database resource initiative (IDRI): A completed reference database of lung nodules on CT scans. Med. Phys. 2011, 38, 915–931. [Google Scholar] [CrossRef] [PubMed]
  36. Hollaus, F.; Diem, M.; Sablatnig, R. MultiSpectral image binarization using GMMs. In Proceedings of the 2018 16th International Conference on Frontiers in Handwriting Recognition (ICFHR), Niagara Falls, NY, USA, 5–8 August 2018; pp. 570–575. [Google Scholar]
  37. Skourt, B.A.; El Hassani, A.; Majda, A. Lung CT image segmentation using deep neural networks. Procedia Comput. Sci. 2018, 127, 109–113. [Google Scholar] [CrossRef]
  38. Banu, S.F.; Sarker, M.; Kamal, M.; Abdel-Nasser, M.; Puig, D.; A Raswan, H. AWEU-Net: An Attention-Aware Weight Excitation U-Net for Lung Nodule Segmentation. Appl. Sci. 2021, 11, 10132. [Google Scholar] [CrossRef]
39. Rocha, J.; Cunha, A.; Mendonça, A.M. Conventional filtering versus U-Net based models for pulmonary nodule segmentation in CT images. J. Med. Syst. 2020, 44, 1–8.
40. Mukherjee, S.; Huang, X.; Bhagalia, R.R. Lung nodule segmentation using deep learned prior based graph cut. In Proceedings of the 2017 IEEE 14th International Symposium on Biomedical Imaging (ISBI 2017), Melbourne, VIC, Australia, 18–21 April 2017; pp. 1205–1208.
41. Wang, W.; Lu, Y.; Wu, B.; Chen, T.; Chen, D.Z.; Wu, J. Deep active self-paced learning for accurate pulmonary nodule segmentation. In International Conference on Medical Image Computing and Computer-Assisted Intervention; Springer: Berlin/Heidelberg, Germany, 2018; pp. 723–731.
42. Zhang, G.; Guo, M.; Gong, Z.; Bi, J.; Kim, Y.; Guo, W. Pulmonary nodules segmentation method based on auto-encoder. In Proceedings of the 10th International Conference on Digital Image Processing (ICDIP 2018), Shanghai, China, 11–14 May 2018; Volume 10806, p. 108062P.
43. Feng, X.; Yang, J.; Laine, A.F.; Angelini, E.D. Discriminative localization in CNNs for weakly-supervised segmentation of pulmonary nodules. In International Conference on Medical Image Computing and Computer-Assisted Intervention; Springer: Berlin/Heidelberg, Germany, 2017; pp. 568–576.
44. Tan, J.; Jing, L.; Huo, Y.; Li, L.; Akin, O.; Tian, Y. LGAN: Lung segmentation in CT scans using generative adversarial network. Comput. Med. Imaging Graph. 2021, 87, 101817.
45. Chen, S.; Wang, Y. Pulmonary Nodule Segmentation in Computed Tomography with an Encoder-Decoder Architecture. In Proceedings of the 2019 10th International Conference on Information Technology in Medicine and Education (ITME), Qingdao, China, 23–25 August 2019; pp. 157–162.
46. Piella, G.; Heijmans, H. A new quality metric for image fusion. In Proceedings of the 2003 International Conference on Image Processing (Cat. No. 03CH37429), Barcelona, Spain, 14–17 September 2003; Volume 3, pp. III–173.
47. Singh, H.; Kumar, V.; Bhooshan, S. Weighted least squares based detail enhanced exposure fusion. Int. Sch. Res. Not. 2014, 2014.
48. Wang, Z.; Cui, Z.; Zhu, Y. Multi-modal medical image fusion by Laplacian Pyramid and adaptive sparse representation. Comput. Biol. Med. 2020, 123, 103823.
49. Petrovic, V.; Xydeas, C. Objective image fusion performance characterisation. In Proceedings of the Tenth IEEE International Conference on Computer Vision (ICCV’05), Beijing, China, 17–21 October 2005; Volume 2, pp. 1866–1871.
50. Sundar, K.J.A.; Jahnavi, M.; Lakshmisaritha, K. Multi-sensor image fusion based on empirical wavelet transform. In Proceedings of the 2017 International Conference on Electrical, Electronics, Communication, Computer, and Optimization Techniques (ICEECCOT), Mysuru, India, 15–16 December 2017; pp. 93–97.
51. Nencini, F.; Garzelli, A.; Baronti, S.; Alparone, L. Remote sensing image fusion using the curvelet transform. Inf. Fusion 2007, 8, 143–156.
52. Zhang, Q.; Guo, B.L. Multifocus image fusion using the nonsubsampled contourlet transform. Signal Process. 2009, 89, 1334–1346.
53. Yang, B.; Li, S. Multifocus image fusion and restoration with sparse representation. IEEE Trans. Instrum. Meas. 2009, 59, 884–892.
Figure 1. Flowchart of the proposed segmentation method.
Figure 2. Laplacian Pyramid- and ASR-based fusion algorithm.
Figure 3. Decomposition using the Laplacian Pyramid.
Figure 4. Dictionary composition and selection using ASR.
Figure 5. LP sparse vector fusion technique.
Figure 6. Result of the segmented image using the proposed method. (a) is the source image, and (b–f) are the segmented outputs with various global threshold values. (g) is the final segmented output.
Figure 7. Segmented outputs using the proposed algorithm. The left, center, and right columns show the original CT images, the segmentation marked with a thick boundary, and the segmented outputs, respectively.
Figure 8. Comparison of the segmentation methods. (a) Original image; (b) Region Detection (RD); (c) Level Set Without Initialization (LSWI); (d) Re-initialization Methods (RMs); (e) GDRLSE1; (f) GDRLSE2; (g) GDRLSE3; (h) result of the proposed method.
Figure 9. Dice coefficient comparison of existing methods with the proposed method.
Figure 10. (a(1–6)) Source images; (b(1–6)) Gaussian pyramid of Layer 1; (c(1–6)) Gaussian pyramid of Layer 2; (d(1–6)) Gaussian pyramid of Layer 3.
Figure 11. Random dictionary samples taken from a single source image.
Figure 12. Final fused results of lung CT images of different patients.
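For readers who want to experiment with the Laplacian Pyramid (LP) decomposition and layer-wise fusion illustrated in Figures 2–5 and whose fused results appear in Figure 12, the short Python sketch below builds an LP for two source CT slices and fuses the layers. It is illustrative only: the maximum-absolute-value fusion rule, the file names, and the three-level depth are assumptions for demonstration, whereas the paper fuses the decomposed layers with ASR.

```python
# Illustrative sketch (not the authors' code): Laplacian Pyramid decomposition
# and a toy layer-wise fusion of two CT slices. Requires OpenCV and NumPy;
# the file names are placeholders.
import cv2
import numpy as np

def laplacian_pyramid(img, levels=3):
    """Decompose an image into `levels` band-pass layers plus a low-pass residual."""
    gaussian = [img.astype(np.float32)]
    for _ in range(levels):
        gaussian.append(cv2.pyrDown(gaussian[-1]))
    pyramid = []
    for i in range(levels):
        up = cv2.pyrUp(gaussian[i + 1],
                       dstsize=(gaussian[i].shape[1], gaussian[i].shape[0]))
        pyramid.append(gaussian[i] - up)   # band-pass (detail) layer
    pyramid.append(gaussian[-1])           # low-pass residual
    return pyramid

def reconstruct(pyramid):
    """Collapse a Laplacian Pyramid back into a single image."""
    img = pyramid[-1]
    for layer in reversed(pyramid[:-1]):
        img = cv2.pyrUp(img, dstsize=(layer.shape[1], layer.shape[0])) + layer
    return img

# Toy fusion rule: keep the coefficient with the larger magnitude per layer.
# The paper instead fuses these layers with ASR.
a = cv2.imread("ct_slice_a.png", cv2.IMREAD_GRAYSCALE)
b = cv2.imread("ct_slice_b.png", cv2.IMREAD_GRAYSCALE)
pa, pb = laplacian_pyramid(a), laplacian_pyramid(b)
fused_layers = [np.where(np.abs(la) >= np.abs(lb), la, lb) for la, lb in zip(pa, pb)]
fused = np.clip(reconstruct(fused_layers), 0, 255).astype(np.uint8)
cv2.imwrite("fused.png", fused)
```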
Table 1. Quantitative comparison of different methods.

Method               | Dataset   | Dice Coefficient | Running Time (s)
U-Net                | LIDC-IDRI | 0.89             | -
AWEU-Net             | LIDC-IDRI | 0.89             | -
2D U-Net             | LIDC-IDRI | 0.83             | -
2D Seg U-Det         | LIDC-IDRI | 0.82             | -
3D FCN               | LIDC-IDRI | 0.69             | 5.0
3D Nodule R-CNN      | LIDC-IDRI | 0.64             | -
2D AE                | LIDC-IDRI | 0.90             | -
2D CNN               | LIDC-IDRI | 0.61             | -
2D LGAN              | LIDC-IDRI | 0.98             | -
2D Encoder–Decoder   | LIDC-IDRI | 0.90             | -
Proposed Method      | LIDC-IDRI | 0.99             | 1.2252
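The Dice coefficients in Table 1 measure the overlap between each method's predicted lung mask and the reference annotation. A minimal sketch (not the authors' evaluation code) of how the Dice Similarity Coefficient is computed from two binary NumPy masks follows; the toy arrays are placeholders.

```python
# Minimal DSC sketch: DSC = 2|A ∩ B| / (|A| + |B|) for two binary masks.
import numpy as np

def dice_coefficient(pred, truth, eps=1e-7):
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    return (2.0 * intersection + eps) / (pred.sum() + truth.sum() + eps)

# Toy 4x4 masks standing in for a predicted and a reference lung mask.
pred  = np.array([[0, 1, 1, 0], [0, 1, 1, 0], [0, 0, 1, 0], [0, 0, 0, 0]])
truth = np.array([[0, 1, 1, 0], [0, 1, 1, 0], [0, 1, 1, 0], [0, 0, 0, 0]])
print(f"DSC = {dice_coefficient(pred, truth):.4f}")  # ~0.909 for this toy pair
```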
Table 2. Result comparison of the proposed method with existing techniques.

Methodology          | Dataset   | Sensitivity (%) | Specificity (%) | Accuracy (%)
U-Net                | LIDC-IDRI | 84.0            | 96.3            | 94.3
AWEU-Net             | LIDC-IDRI | 90.0            | 96.4            | 94.6
2D U-Net             | LIDC-IDRI | 89.0            | -               | -
2D Seg U-Det         | LIDC-IDRI | 85.0            | -               | -
3D FCN               | LIDC-IDRI | -               | -               | -
3D Nodule R-CNN      | LIDC-IDRI | -               | -               | -
2D AE                | LIDC-IDRI | -               | -               | -
2D CNN               | LIDC-IDRI | -               | -               | -
2D LGAN              | LIDC-IDRI | -               | -               | -
2D Encoder–Decoder   | LIDC-IDRI | 90.0            | -               | -
Proposed Method      | LIDC-IDRI | 89.0            | 98.0            | 99.0
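The sensitivity, specificity, and accuracy in Table 2 are standard pixel-wise scores derived from the confusion matrix of a predicted mask against its ground truth. A minimal sketch, assuming binary NumPy masks of equal shape in which both classes occur:

```python
# Pixel-wise segmentation scores from a predicted mask and a reference mask.
import numpy as np

def segmentation_scores(pred, truth):
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    tp = np.sum(pred & truth)      # lung pixels correctly labelled as lung
    tn = np.sum(~pred & ~truth)    # background pixels correctly labelled
    fp = np.sum(pred & ~truth)     # background labelled as lung
    fn = np.sum(~pred & truth)     # lung labelled as background
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    return sensitivity, specificity, accuracy
```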
Table 3. Conventional statistical indicators and objective performance measures of Figure 12.

Method           | API  | SD   | AG    | H    | MI   | SF   | Q    | L    | N
LP [33]          | 4.60 | 7.84 | 9.19  | 3.88 | 2.71 | 2.16 | 0.80 | 0.17 | 0.02
DWT [50]         | 5.30 | 7.07 | 8.41  | 4.10 | 2.68 | 1.89 | 0.76 | 0.22 | 0.01
CVT [51]         | 5.46 | 7.22 | 9.51  | 5.22 | 2.42 | 2.08 | 0.77 | 0.20 | 0.01
NSCT [52]        | 5.42 | 7.42 | 9.38  | 4.66 | 2.57 | 2.13 | 0.81 | 0.16 | 0.02
SR [53]          | 5.33 | 7.48 | 9.16  | 3.72 | 3.59 | 2.53 | 0.75 | 0.20 | 0.03
ASR [34]         | 5.37 | 7.27 | 9.68  | 3.99 | 2.64 | 2.17 | 0.76 | 0.22 | 0.02
Proposed Method  | 5.76 | 8.13 | 10.64 | 5.62 | 3.78 | 2.70 | 0.79 | 0.16 | 0.01
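A minimal sketch of the conventional statistical indicators in Table 3, assuming the column abbreviations carry their usual meanings (API = average pixel intensity, SD = standard deviation, AG = average gradient, H = entropy, SF = spatial frequency). MI and the objective fusion measures Q, L, and N additionally require the source images and the fusion-assessment definitions in [46,49], so they are omitted here; the function name and 8-bit intensity range are assumptions.

```python
# Hedged sketch: single-image statistics commonly used to judge fused images.
import numpy as np

def fusion_statistics(img):
    img = img.astype(np.float64)
    api = img.mean()                       # average pixel intensity
    sd = img.std()                         # standard deviation

    # Average gradient: mean magnitude of horizontal/vertical differences.
    gx = np.diff(img, axis=1)[:-1, :]
    gy = np.diff(img, axis=0)[:, :-1]
    ag = np.mean(np.sqrt((gx ** 2 + gy ** 2) / 2.0))

    # Shannon entropy of the 8-bit histogram.
    hist, _ = np.histogram(img, bins=256, range=(0, 255))
    p = hist / hist.sum()
    h = -np.sum(p[p > 0] * np.log2(p[p > 0]))

    # Spatial frequency: RMS of row and column differences.
    rf = np.sqrt(np.mean(np.diff(img, axis=1) ** 2))
    cf = np.sqrt(np.mean(np.diff(img, axis=0) ** 2))
    sf = np.sqrt(rf ** 2 + cf ** 2)
    return api, sd, ag, h, sf
```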