Article

An Improved Multimodal Medical Image Fusion Approach Using Intuitionistic Fuzzy Set and Intuitionistic Fuzzy Cross-Correlation

by Maruturi Haribabu and Velmathi Guruviah *
School of Electronics Engineering, Vellore Institute of Technology, Chennai 600127, India
* Author to whom correspondence should be addressed.
Diagnostics 2023, 13(14), 2330; https://doi.org/10.3390/diagnostics13142330
Submission received: 18 February 2023 / Revised: 8 May 2023 / Accepted: 17 May 2023 / Published: 10 July 2023
(This article belongs to the Special Issue Artificial Intelligence in Clinical Medical Imaging Analysis)

Abstract: Multimodal medical image fusion (MMIF) is the process of merging different modalities of medical images into a single output image (fused image) with a significant quantity of information to improve clinical applicability. It enables a better diagnosis and makes the diagnostic process easier. In medical image fusion (MIF), an intuitionistic fuzzy set (IFS) plays a role in enhancing the quality of the image, which is useful for medical diagnosis. In this article, a new approach to intuitionistic fuzzy set-based MMIF is proposed. Initially, the input medical images are fuzzified and then converted into intuitionistic fuzzy images (IFIs). Intuitionistic fuzzy entropy plays a major role in calculating the optimal value for the three degrees, namely, membership, non-membership, and hesitation. After that, the IFIs are decomposed into small blocks, and the fusion rule is applied to each block. Finally, the enhanced fused image is obtained through a defuzzification process. The proposed method is tested on various medical image datasets in terms of subjective and objective analysis. The proposed algorithm provides a better-quality fused image and is superior to other existing methods such as PCA, DWTPCA, contourlet transform (CONT), DWT with fuzzy logic, Sugeno’s intuitionistic fuzzy set, Chaira’s intuitionistic fuzzy set, and PC-NSCT. The assessment of the fused image is evaluated with various performance metrics such as average pixel intensity (API), standard deviation (SD), average gradient (AG), spatial frequency (SF), modified spatial frequency (MSF), cross-correlation (CC), mutual information (MI), and fusion symmetry (FS).

1. Introduction

In past decades, image fusion has matured significantly in application fields such as medicine [1], the military [2,3], and remote sensing [4]. Image fusion is a prominent application in the medical field for better analysis of human organs and tissues. In general, medical image data are available from various imaging techniques such as magnetic resonance imaging (MRI), magnetic resonance angiography (MRA), computed tomography (CT), T1-weighted MR, T2-weighted MR, positron emission tomography (PET), and single-photon emission computed tomography (SPECT) [5]. Each technique has different characteristics.
Multimodal medical images are broadly categorized into two types: anatomical and functional modalities. Anatomical modalities include MRI, MRA, T1-weighted MR, T2-weighted MR, and CT. CT images represent clear bone structure with low distortion but cannot capture physiological changes, while MRI images provide delicate soft-tissue information with high spatial resolution. CT imaging is used to diagnose conditions such as muscle disorders, vascular conditions, bone fractures, and tumors. MRI imaging is used to diagnose conditions such as brain tumors, multiple sclerosis, lung cancer, brain hemorrhage, and dementia, and to plan treatment. Magnetic resonance angiography (MRA) is a subset of MRI that utilizes magnetic fields and radio waves to create images of the body’s arteries, helping clinicians detect blood-flow abnormalities. T1-weighted MR images highlight fat, while T2-weighted MR images highlight water content.
Functional modalities include PET and SPECT. PET imaging captures the functional activity of human organs with high sensitivity. PET imaging technology is used to diagnose diseases such as Alzheimer’s disease, Parkinson’s disease, cerebrovascular accident, and hematoma. Other application areas of PET imaging are lung and breast cancer diagnosis and cancer treatment.
SPECT imaging provides blood-flow information with low spatial resolution and is used to diagnose brain and bone disorders and heart problems. Application areas of SPECT imaging include pelvis irradiation detection and treatment, vulvar cancer, breast cancer assessment, and head and neck cancer diagnosis [6,7]. However, a single medical image cannot provide all the information required for diagnosis. To overcome this, multimodal medical image fusion is necessary.
Multimodal medical image fusion is the process of merging different modalities of medical images into a single output image. Its advantages include decreased uncertainty, resilient system performance, and higher reliability, all of which contribute to more accurate diagnosis and thus improved treatment. In the literature, authors have reported various multimodality combinations. Fusion of T1- and T2-weighted MR images produces a fused image used to identify tumor regions [8]. The soft and hard tissue information from MRI and CT images, respectively, is combined into a single resultant image by fusion, resulting in better image analysis [9]. The T1-weighted MR and MRA [10] combination provides precise lesion locations together with delicate tissue detail. The MRI–PET [11] and MRI–SPECT [12] combinations provide anatomical and functional information in a single image, which supports better diagnosis of disease and related medical problems. The objective of this research article is to examine the relevance and advancement of information fusion approaches in medical imaging for the investigation of clinical aspects and better treatment.
In any fusion strategy, two important requirements should be satisfied: the fusion should not add any artifacts or blocking effects to the resultant image, and no information should be lost throughout the fusion process.
Image fusion techniques are broadly classified into three levels [13]: pixel-level, feature-level, and decision-level. In pixel-level fusion, image pixel values are merged directly. In feature-level fusion, salient features such as texture and shape are involved in the fusion process. In decision-level fusion, the input images are fused based on multiple algorithms with decision rules.

2. Related Works

The preeminent research issue in medical image processing is to obtain the maximum content of information by combining various modalities of medical images. Various existing techniques are covered in this literature, such as the simple average (Avg), maximum, and minimum methods. The average method provides a fused image with low contrast, while the maximum and minimum methods provide less enhanced fused images. The Brovey method [14] introduces color distortions. Hybrid fusion methods such as the intensity-hue-saturation (IHS) and principal component analysis (PCA) [15] combination provide a degraded fused image with spatial distortions. The pyramid decomposition-based method [16] shows better spectral information, but the preserved edge information is not sufficient. Discrete cosine transform (DCT) [17] and singular value decomposition (SVD) [18] methods give a fused image that is more complementary in nature but does not show clear boundaries of the tumor region. Multi-resolution techniques such as the discrete wavelet transform (DWT) [19] provide better localization in the time and frequency domains but cannot offer shift invariance due to down-sampling. To overcome this, the redundant wavelet transform (RWT) [20] was employed; however, it is highly complex and cannot provide sufficient edge information. The contourlet transform (CONT) technique [21] provides more edge information in a fused image but does not provide shift invariance. Shift invariance is a highly desirable property applied in various areas of image processing: image watermarking [22], image enhancement [23], image fusion [24], and image deblurring [25]. The above-mentioned drawbacks are addressed by the non-subsampled contourlet transform (NSCT) [26] and the non-subsampled shearlet transform (NSST) [27,28].
Hybrid combinations of fusion techniques such as DWT and fuzzy logic [29] provide a fused image with low contrast because of the higher uncertainties and vagueness, which is present in a fused image.
In general, medical images have poor illumination, i.e., low contrast and poor visibility in some parts, which indicates uncertainty and vagueness. Visibility and enhancement are required criteria in the medical field to diagnose disease accurately. In the literature, various image enhancement techniques have been reported, namely, gray-level transformation [30] and histogram-based methods [31]; yet, these methods do not adequately improve the quality of medical images. In 1965, Zadeh [32] proposed a mathematical approach, the fuzzy set. The fuzzy set has played a significant role in removing the vagueness present in an image; however, it does not eliminate the uncertainties. A fuzzy set does not provide reasonable results when more uncertainties are present because it considers only one form of uncertainty: a membership function lying in the range [0, 1], where zero indicates complete non-membership and one indicates complete membership. In 1986, Atanassov [33] proposed a generalized version of the fuzzy set, the intuitionistic fuzzy set (IFS), which handles more uncertainty in the form of three degrees: membership, non-membership, and hesitation. The IFS technique is highly precise and flexible in handling uncertainty and ambiguity.
In this literature review, the research gaps and drawbacks of various medical image fusion techniques are discussed and listed in Table 1.
The main contribution of this research article is described as follows:
  • A novel intuitionistic fuzzy set is used for the fusion process, which can enhance the fused image quality and complete the fusion process successfully.
  • The intuitionistic fuzzy images are created by using the optimum value, α, which can be obtained from intuitionistic fuzzy entropy.
  • The intuitionistic fuzzy cross-correlation function is employed to measure the correlation between intuitionistic fuzzy images and then produce a fused image without uncertainty and vagueness.
  • The proposed fusion algorithm proves that the fused image has good contrast and enhanced edges and is superior to other existing methods both visually and quantitatively.

3. Materials and Methods

The intuitionistic fuzzy set (IFS) is used to solve image processing tasks with membership and non-membership functions [34]. The implementation of the IFS is briefly explained below, starting from a fuzzy set.
Let us consider a finite set $P$:
$$P = \{p_1, p_2, p_3, \ldots, p_n\}$$
A fuzzy set $F$ in a finite set $P$ is numerically represented as:
$$F = \{(p, \mu_F(p)) \mid p \in P\}$$
where $\mu_F(p)$ indicates the membership function of $p$ in $P$, which lies in $[0, 1]$, and the non-membership function is $v_F(p) = 1 - \mu_F(p)$. The IFS, introduced by Atanassov [33] in 1986, considers both $\mu_F(p)$ and $v_F(p)$, with $\mu_F(p) \in [0,1]$ and $v_F(p) \in [0,1]$. The intuitionistic fuzzy set (IFS) $F$ in $P$ is written in mathematical form as:
$$F = \{(p, \mu_F(p), v_F(p)) \mid p \in P\}$$
which satisfies the condition $0 \le \mu_F(p) + v_F(p) \le 1$. However, owing to the lack of knowledge in characterizing the membership degree, a further parameter, the hesitation degree $\pi_F(p)$, was introduced by Szmidt and Kacprzyk [35] for each element $p$ in $F$. It can be written as:
$$\pi_F(p) = 1 - \mu_F(p) - v_F(p)$$
where $0 \le \pi_F(p) \le 1$.
Finally, including the hesitation degree, the IFS can be represented as:
$$F = \{(p, \mu_F(p), v_F(p), \pi_F(p)) \mid p \in P\}$$
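The IFS definitions above can be illustrated with a short sketch (a non-authoritative Python/NumPy rendering, not code from the paper): given membership and non-membership arrays, it checks the sum constraint and derives the hesitation degree.

```python
import numpy as np

def ifs_degrees(mu, nu):
    """Return (mu, nu, pi) for an intuitionistic fuzzy set, where the
    hesitation degree is pi = 1 - mu - nu and 0 <= mu + nu <= 1 must hold."""
    mu = np.asarray(mu, dtype=float)
    nu = np.asarray(nu, dtype=float)
    if np.any(mu + nu > 1.0 + 1e-12):   # IFS constraint
        raise ValueError("IFS condition 0 <= mu + nu <= 1 violated")
    return mu, nu, 1.0 - mu - nu

mu = np.array([0.2, 0.5, 0.9])
nu = np.array([0.6, 0.3, 0.05])
_, _, pi = ifs_degrees(mu, nu)
# pi ≈ [0.2, 0.2, 0.05]
```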
This article proposes a new intuitionistic fuzzy set-based medical image fusion method that supports better diagnosis. Initially, the input images are fuzzified, and intuitionistic fuzzy images are then created with the help of the optimal value α, which is generated by intuitionistic fuzzy entropy (IFE) [36]. After that, the two intuitionistic fuzzy images are split into several blocks, and the intuitionistic fuzzy cross-correlation fusion rule [37] is applied to each block. Finally, the enhanced fused image is obtained, free of uncertainty, by rearranging the blocks and applying a defuzzification process.

3.1. Intuitionistic Fuzzy Generator

A function $\phi(p): [0,1] \to [0,1]$ is called an intuitionistic fuzzy generator (IFG) [38] if $\phi(p) \le 1 - p$ for all $p \in [0,1]$, with $\phi(0) \le 1$ and $\phi(1) \le 0$; such generators are continuous and decreasing and are used for the construction of an IFS. Fuzzy complements are calculated from the complement function, described as:
$$N(\mu_F(p)) = g^{-1}\big(g(1) - g(\mu_F(p))\big)$$
where $g(\cdot)$ is an increasing function with $g(0) = 0$. Several authors have suggested different intuitionistic fuzzy generators built from an increasing function, such as Sugeno [39] and Roy Chowdhury and Wang [40].

3.2. Proposed Fuzzy Complement and Intuitionistic Fuzzy Generator

In this article, a novel fuzzy complement is created using an increasing function, described as:
$$g(\mu_F(p)) = \frac{1}{\alpha}\log\big(1 + \alpha(1+e^{\alpha})\,\mu_F(p)\big)$$
with $g(0) = \frac{1}{\alpha}\log 1 = 0$ and $g(1) = \frac{1}{\alpha}\log\big(1 + \alpha(1+e^{\alpha})\big)$.
The inverse function of $g(\mu_F(p))$ is
$$g^{-1}(\mu_F(p)) = \frac{e^{\alpha\,\mu_F(p)} - 1}{\alpha(1+e^{\alpha})}$$
Substituting the value of $g(\mu_F(p))$ into Equation (6), we get
$$N(\mu_F(p)) = g^{-1}\!\left(\frac{1}{\alpha}\log\frac{1 + \alpha(1+e^{\alpha})}{1 + \alpha(1+e^{\alpha})\,\mu_F(p)}\right)$$
Applying the inverse function, Equation (8) becomes
$$N(\mu_F(p)) = \frac{1}{\alpha(1+e^{\alpha})}\left(\frac{1 + \alpha(1+e^{\alpha})}{1 + \alpha(1+e^{\alpha})\,\mu_F(p)} - 1\right)$$
$$N(\mu_F(p)) = \frac{1 - \mu_F(p)}{1 + \alpha(1+e^{\alpha})\,\mu_F(p)}, \quad \alpha > 0$$
Equation (11) is a fuzzy negation and it satisfies the following axioms:
(i)
P1: Boundary conditions:
If $\mu_F(p) = 1$, then $N(1) = \dfrac{1 - 1}{1 + \alpha(1+e^{\alpha})} = 0$;
if $\mu_F(p) = 0$, then $N(0) = \dfrac{1 - 0}{1 + 0} = 1$.
(ii)
P2: Monotonicity
If $\mu_F(p) < \mu_F(q)$, then $N(\mu_F(p)) > N(\mu_F(q))$.
(iii)
P3: Involution
$N(\mu_F(p))$ is involutive, i.e., $N(N(\mu_F(p))) = \mu_F(p)$.
Proof: 
$$N(N(\mu_F(p))) = \frac{1 - N(\mu_F(p))}{1 + \alpha(1+e^{\alpha})\,N(\mu_F(p))} = \frac{1 - \dfrac{1 - \mu_F(p)}{1 + \alpha(1+e^{\alpha})\,\mu_F(p)}}{1 + \alpha(1+e^{\alpha})\,\dfrac{1 - \mu_F(p)}{1 + \alpha(1+e^{\alpha})\,\mu_F(p)}} = \mu_F(p)$$
It can be noticed that if $\alpha = 0$, then $N(\mu_F(p)) = 1 - \mu_F(p)$, which is equivalent to the standard Zadeh fuzzy complement. □
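The axioms above can be checked numerically. The sketch below is a hedged reconstruction: it reads the garbled constant in Equation (11) as λ = α(1 + e^α) and verifies the boundary (P1), monotonicity (P2), and involution (P3) properties, which hold for any Sugeno-type complement N(μ) = (1 − μ)/(1 + λμ).

```python
import numpy as np

def fuzzy_complement(mu, alpha):
    """Proposed complement, Equation (11), read as
    N(mu) = (1 - mu) / (1 + lam * mu) with lam = alpha * (1 + e^alpha)."""
    lam = alpha * (1.0 + np.exp(alpha))
    return (1.0 - mu) / (1.0 + lam * mu)

mu = np.linspace(0.0, 1.0, 101)
n = fuzzy_complement(mu, alpha=0.5)
assert np.isclose(n[0], 1.0) and np.isclose(n[-1], 0.0)  # P1: boundaries
assert np.all(np.diff(n) < 0)                            # P2: decreasing
assert np.allclose(fuzzy_complement(n, alpha=0.5), mu)   # P3: involution
```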
Not every fuzzy complement is an intuitionistic fuzzy generator. A fuzzy complement qualifies as an intuitionistic fuzzy generator if it satisfies:
$$N(\mu_F(p)) \le 1 - \mu_F(p) \quad \text{for all } \mu_F(p) \in [0,1], \text{ with } N(0) = 1 \text{ and } N(1) = 0.$$
The proposed fuzzy complement satisfies these conditions and is therefore an intuitionistic fuzzy generator. From Equation (11), the non-membership degree values are computed using the new intuitionistic fuzzy generator, and the new IFS (NIFS) becomes:
$$F_{\alpha}^{NIFS} = \left\{ \left( p,\ \mu_F(p),\ v_F(p) = \frac{1 - \mu_F(p)}{1 + \alpha(1+e^{\alpha})\,\mu_F(p)} \right) \middle|\ p \in P \right\}$$
and the hesitation degree can be represented as:
$$\pi_F(p) = 1 - \mu_F(p) - v_F(p)$$
Equation (11), the new intuitionistic fuzzy generator, is used to expand and enhance the intensity levels over a range, because some multimodal medical images are predominantly dark. Varying the α value changes not only the intensity values of grayscale images but also the ratio of components in color images.
In image processing, entropy plays a significant role and is used to characterize the texture of an image. Fuzzy entropy, introduced by Zadeh, estimates the ambiguity and fuzziness in a fuzzy set. De Luca and Termini [41] introduced the first non-probabilistic entropy in 1972. Many researchers [42,43] have proposed various entropy formulations employing IFS theory. In this article, a novel IFE function, determined as in [36], is utilized to develop the proposed technique; it is described as:
$$IFE(F;\alpha) = \frac{1}{n}\sum_{i=1}^{n} \pi_F(p_i)\,\exp\big(\pi_F(p_i)\big)$$
where $\pi_F(p_i) = 1 - \mu_F(p_i) - v_F(p_i)$, and $\pi_F(p_i)$, $\mu_F(p_i)$, and $v_F(p_i)$ are the hesitation, membership, and non-membership degrees, respectively. The IFE function is computed using Equation (14) for α values in [0.1, 1.0]; it is then optimized by selecting the α that yields the highest entropy value using Equation (15), i.e.,
$$\alpha_{opt} = \arg\max_{\alpha}\, IFE(F;\alpha)$$
With the known value of α, the membership values of the new intuitionistic fuzzy set (NIFS) are calculated, and finally, the new intuitionistic fuzzy image (NIFI) is represented as:
$$F^{NIFI} = \{(p, \mu_F(p;\alpha), v_F(p;\alpha), \pi_F(p;\alpha)) \mid p \in P\}$$
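A minimal sketch of the α-optimization of Equations (14) and (15). It is a hedged reconstruction, not the paper's code: the entropy kernel is read as π·exp(π), the constant as λ = α(1 + e^α), and π is computed from the enhanced NIFS degrees of Equations (19) and (20).

```python
import numpy as np

def ife(mu, alpha):
    """Intuitionistic fuzzy entropy (Equation (14)) of a fuzzified image
    for a given alpha, using the enhanced NIFS degrees."""
    lam = alpha * (1.0 + np.exp(alpha))          # hedged constant
    mu_e = (1.0 + lam) * mu / (1.0 + lam * mu)   # enhanced membership
    nu_e = (1.0 - mu_e) / (1.0 + lam * mu_e)     # non-membership
    pi = 1.0 - mu_e - nu_e                       # hesitation degree
    return np.mean(pi * np.exp(pi))              # hedged kernel pi * e^pi

def optimal_alpha(image):
    """Equation (15): grid-search alpha over [0.1, 1.0] in steps of 0.1."""
    img = np.asarray(image, dtype=float)
    mu = (img - img.min()) / (img.max() - img.min())   # Equation (18)
    alphas = np.round(np.arange(0.1, 1.01, 0.1), 2)
    return alphas[int(np.argmax([ife(mu, a) for a in alphas]))]
```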

3.3. Intuitionistic Fuzzy Cross-Correlation (IFCC)

The cross-correlation of IFSs [37] is a significant measure in IFS theory and has fundamental potential in various areas, such as medical diagnosis, decision-making, and recognition. The IFCC function is used to measure the correlation between two intuitionistic fuzzy images (IFIs). Let $C_1, C_2 \in IFS(P)$ and $P = \{p_1, p_2, \ldots, p_n\}$ be a finite universe of discourse; then the correlation coefficient is described as follows:
$$\rho^*(C_1, C_2) = \frac{1}{2n}\sum_{g=1}^{n}\big[\alpha_g(1 - \Delta\mu_g) + \beta_g(1 - \Delta v_g)\big]$$
where
$$\alpha_g = \frac{c - \Delta\mu_g - \Delta\mu_{max}}{c - \Delta\mu_{min} - \Delta\mu_{max}}, \qquad \beta_g = \frac{c - \Delta v_g - \Delta v_{max}}{c - \Delta v_{min} - \Delta v_{max}}, \qquad g = 1, 2, \ldots, n$$
$$\Delta\mu_g = |\mu_{C_1}(p_g) - \mu_{C_2}(p_g)|, \qquad \Delta v_g = |v_{C_1}(p_g) - v_{C_2}(p_g)|$$
$$\Delta\mu_{max} = \max_g \Delta\mu_g, \quad \Delta\mu_{min} = \min_g \Delta\mu_g, \quad \Delta v_{max} = \max_g \Delta v_g, \quad \Delta v_{min} = \min_g \Delta v_g$$
Here, $\alpha_g$, $\beta_g$, and the IFCC value lie in the range [0, 1], depending on the constant value $c$.
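The correlation coefficient ρ* above can be sketched as follows. The weight normalization (c − Δ_g − Δ_max)/(c − Δ_min − Δ_max) is a hedged reading of the garbled source formula, and `c` is the user constant mentioned above; for identical IFIs the coefficient evaluates to 1.

```python
import numpy as np

def ifcc(mu1, nu1, mu2, nu2, c=2.0):
    """Intuitionistic fuzzy cross-correlation rho* between two IFIs given
    as flat membership / non-membership arrays (hedged reconstruction)."""
    dmu = np.abs(np.ravel(mu1) - np.ravel(mu2))
    dnu = np.abs(np.ravel(nu1) - np.ravel(nu2))
    a = (c - dmu - dmu.max()) / (c - dmu.min() - dmu.max())  # alpha_g
    b = (c - dnu - dnu.max()) / (c - dnu.min() - dnu.max())  # beta_g
    return float(np.sum(a * (1 - dmu) + b * (1 - dnu)) / (2 * dmu.size))

# Identical IFIs correlate perfectly:
m = np.array([0.1, 0.4, 0.8])
v = np.array([0.7, 0.5, 0.1])
assert np.isclose(ifcc(m, v, m, v), 1.0)
```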

4. Proposed Fusion Method

In this section, we present a new approach to IFS-based multimodality medical image fusion with the IFCC fusion rule. Here, various combinations of medical images are involved in the fusion process such as T1–T2 weighted MR images, T1-weighted MR–MRA images, MRI–CT images, MRI–PET images, and MR-T2–SPECT images. This proposed method can be implemented in both grayscale and color images. This fusion algorithm is arranged sequentially as shown in Figure 1 and Figure 2.

4.1. Grayscale Image Fusion Algorithm

1. Read the registered input images $I_1$ and $I_2$.
2. Fuzzify the first input image $I_1$ using Equation (18):
$$\mu_{I_1}(I_{gh}^{1}) = \frac{I_{gh}^{1} - I_{min}}{I_{max} - I_{min}}$$
where $I_{gh}^{1}$ is a gray-level pixel of the first input image, and $I_{max}$ and $I_{min}$ represent its highest and lowest gray-level values, respectively.
3. Compute the optimum value $\alpha_{opt1}$ for the first input image by using the IFE, as given in Equations (14) and (15).
4. With the help of the optimized value $\alpha_{opt1}$, calculate the fuzzified new IFI (NIFI) for the first input image using Equations (19)–(22), represented as $I_{IF1}$. For brevity, let $\lambda_1 = \alpha_{opt1}(1 + e^{\alpha_{opt1}})$.
The membership degree of the NIFI is created as:
$$\mu_{I_1}^{NIFS}(I_{gh}^{1};\alpha_{opt1}) = \frac{(1 + \lambda_1)\,\mu_{I_1}(I_{gh}^{1})}{1 + \lambda_1\,\mu_{I_1}(I_{gh}^{1})}$$
The non-membership function is created as:
$$v_{I_1}^{NIFS}(I_{gh}^{1};\alpha_{opt1}) = \frac{1 - \mu_{I_1}^{NIFS}(I_{gh}^{1};\alpha_{opt1})}{1 + \lambda_1\,\mu_{I_1}^{NIFS}(I_{gh}^{1};\alpha_{opt1})} = \frac{1 - \mu_{I_1}(I_{gh}^{1})}{1 + (2\lambda_1 + \lambda_1^{2})\,\mu_{I_1}(I_{gh}^{1})}$$
and finally, the hesitation degree is obtained as:
$$\pi_{I_1}^{NIFS}(I_{gh}^{1};\alpha_{opt1}) = 1 - \mu_{I_1}^{NIFS}(I_{gh}^{1};\alpha_{opt1}) - v_{I_1}^{NIFS}(I_{gh}^{1};\alpha_{opt1})$$
$$I_{IF1} = \big\{\big(I_{gh}^{1},\ \mu_{I_1}^{NIFS}(I_{gh}^{1};\alpha_{opt1}),\ v_{I_1}^{NIFS}(I_{gh}^{1};\alpha_{opt1}),\ \pi_{I_1}^{NIFS}(I_{gh}^{1};\alpha_{opt1})\big)\big\}$$
5. Similarly, for the second input image, repeat steps 2 to 4 to obtain the optimum value $\alpha_{opt2}$ and compute its NIFI, $I_{IF2}$:
$$I_{IF2} = \big\{\big(I_{gh}^{2},\ \mu_{I_2}^{NIFS}(I_{gh}^{2};\alpha_{opt2}),\ v_{I_2}^{NIFS}(I_{gh}^{2};\alpha_{opt2}),\ \pi_{I_2}^{NIFS}(I_{gh}^{2};\alpha_{opt2})\big)\big\}$$
6. Decompose the two NIFIs ($I_{IF1}$ and $I_{IF2}$) into small $i \times j$ blocks; the $k$th blocks of the two decomposed images are denoted $I_{IF1}^{k}$ and $I_{IF2}^{k}$, respectively.
7. Compute the intuitionistic fuzzy cross-correlation between the corresponding blocks $I_{IF1}^{k}$ and $I_{IF2}^{k}$; the $k$th block of the fused image, $I_{IF}^{k}$, is obtained using minimum, average, and maximum operations:
$$I_{IF}^{k} = \begin{cases} \min\big(I_{IF1}^{k}, I_{IF2}^{k}\big) & \text{if } \rho^*\big(I_{IF1}^{k}, I_{IF2}^{k}\big) \le 0 \\[4pt] \dfrac{I_{IF1}^{k} + I_{IF2}^{k}}{2} & \text{if } \rho^*\big(I_{IF1}^{k}, I_{IF2}^{k}\big) = 1 \\[4pt] \max\big(I_{IF1}^{k}, I_{IF2}^{k}\big) & \text{otherwise} \end{cases}$$
8. Reconstruct the fused IFI image by recombining the small blocks.
9. Finally, obtain the fused image in the crisp domain through the defuzzification process, i.e., the inverse of Equation (18):
$$F_c(i,j) = (I_{max} - I_{min})\,I_{IF}^{k} + I_{min}$$
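The nine steps above can be condensed into a short Python/NumPy sketch. It is illustrative only: the α values are fixed rather than IFE-optimized, λ = α(1 + e^α) is a hedged reading of the garbled constant, the block rule is applied to the membership component alone, and the ρ* score is computed inline with the hedged IFCC weights.

```python
import numpy as np

def fuzzify(img):
    """Equation (18): min-max normalisation of a grayscale image to [0, 1]."""
    img = np.asarray(img, dtype=float)
    lo, hi = img.min(), img.max()
    return (img - lo) / (hi - lo), lo, hi

def nifi(mu, alpha):
    """Enhanced membership and non-membership (Equations (19)-(20))."""
    lam = alpha * (1.0 + np.exp(alpha))
    mu_e = (1.0 + lam) * mu / (1.0 + lam * mu)
    nu_e = (1.0 - mu_e) / (1.0 + lam * mu_e)
    return mu_e, nu_e

def fuse(i1, i2, alpha1=0.5, alpha2=0.5, block=8, c=2.0):
    """Steps 1-9 of Section 4.1 (alphas fixed here for brevity)."""
    mu1, lo, hi = fuzzify(i1)
    mu2, _, _ = fuzzify(i2)
    m1, n1 = nifi(mu1, alpha1)
    m2, n2 = nifi(mu2, alpha2)
    out = np.empty_like(m1)
    G, H = m1.shape
    for g in range(0, G, block):            # step 6: block decomposition
        for h in range(0, H, block):
            s = (slice(g, g + block), slice(h, h + block))
            dmu = np.abs(m1[s] - m2[s])
            dnu = np.abs(n1[s] - n2[s])
            a = (c - dmu - dmu.max()) / (c - dmu.min() - dmu.max())
            b = (c - dnu - dnu.max()) / (c - dnu.min() - dnu.max())
            rho = np.sum(a * (1 - dmu) + b * (1 - dnu)) / (2 * dmu.size)
            if rho <= 0:                    # step 7: min / average / max rule
                out[s] = np.minimum(m1[s], m2[s])
            elif np.isclose(rho, 1.0):
                out[s] = (m1[s] + m2[s]) / 2.0
            else:
                out[s] = np.maximum(m1[s], m2[s])
    return (hi - lo) * out + lo             # step 9: defuzzification
```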

4.2. Color Image Fusion Algorithm

The complete fusion algorithm for the combination of gray (MRI) and color images (PET/SPECT) is arranged sequentially as shown in Figure 2.
1. Consider MRI and PET/SPECT as input images. Convert the PET/SPECT image into the HSV color model, with components hue (H), saturation (S), and value (V).
2. Take the MRI image and the V-component image and perform the grayscale image fusion algorithm (steps 2 to 9 of Section 4.1) to obtain the fused component (V1).
3. Finally, obtain the colored fused image by combining the fused brightness component (V1) with the unchanged hue (H) and saturation (S) components, and then converting back to the RGB color model.
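The three steps above can be sketched with the standard-library `colorsys` module. Here `fuse_gray` stands for any grayscale fusion routine (e.g., the Section 4.1 procedure), and the fused V channel is renormalized to [0, 1] as a safety step not specified in the paper.

```python
import colorsys
import numpy as np

def fuse_color(mri_gray, pet_rgb, fuse_gray):
    """Section 4.2 sketch: RGB -> HSV, fuse V with the MRI image via the
    supplied grayscale routine, then HSV -> RGB. `pet_rgb` is float RGB
    in [0, 1] with shape (G, H, 3)."""
    G, H, _ = pet_rgb.shape
    hsv = np.array([[colorsys.rgb_to_hsv(*pet_rgb[g, h]) for h in range(H)]
                    for g in range(G)])
    v1 = np.asarray(fuse_gray(np.asarray(mri_gray, dtype=float), hsv[..., 2]))
    # Renormalise the fused V channel (assumption, not from the paper)
    # so hsv_to_rgb stays in range:
    v1 = (v1 - v1.min()) / (np.ptp(v1) + 1e-12)
    return np.array([[colorsys.hsv_to_rgb(hsv[g, h, 0], hsv[g, h, 1], v1[g, h])
                      for h in range(H)] for g in range(G)])
```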

5. Experimental Results and Discussion

This section presents a brief explanation of the effectiveness of the proposed method and a detailed comparison with various existing algorithms using performance metrics. In this paper, all input medical images are assumed to be perfectly registered, and experiments are performed with pairs of different medical image modalities, collected from the Metapix and Whole Brain Atlas databases [44,45]. The fusion of two such modalities provides a composite image that is more useful for diagnosing diseases, tumors, lesion locations, etc.
In this article, we have performed the new intuitionistic fuzzy set-based image fusion over various medical image datasets of dimension 256 × 256 using the IFCC fusion rule. The proposed fusion algorithm expands and enhances the intensity levels over a range, because some medical images are predominantly dark. Varying the α value changes not only the intensity values but also the ratio of components in color images. These enhanced medical images are fused to obtain a single image with more complementary information and better quality. A single medical image cannot provide all the required information regarding a disease; as a result, MIF is required to obtain all relevant and complete information in a single resultant image.
The evaluation of the fused image can be completed with the help of subjective (visual) and objective (quantitative) analysis, respectively. The subjective analysis is performed with the visual appearance, and the objective analysis is finished with a set of performance metrics. In this paper, eight metrics are used: API [46], SD [46], AG [47], SF [48], MSF [49], CC [50], MI [51], and FS [48].
The input images are $I_1(g,h)$ and $I_2(g,h)$, and the fused image is $Fused(g,h)$, each of dimension $G \times H$.
API: API is used to quantify the average intensity values of the fused image i.e., brightness, which can be defined as:
$$API = \frac{1}{G \times H}\sum_{g=1}^{G}\sum_{h=1}^{H} Fused(g,h)$$
SD: SD represents the amount of intensity variation (contrast) in an image. It is described as:
$$SD = \sqrt{\frac{1}{G \times H}\sum_{g=1}^{G}\sum_{h=1}^{H}\big(Fused(g,h) - \mu\big)^{2}}$$
where $\mu$ is the mean intensity (API) of the fused image.
AG: This metric is used to measure the sharpness degree and clarity, which is represented as:
$$AG = \frac{1}{(G-1)(H-1)}\sum_{g=1}^{G-1}\sum_{h=1}^{H-1}\sqrt{\frac{\big(Fused(g,h) - Fused(g+1,h)\big)^{2} + \big(Fused(g,h) - Fused(g,h+1)\big)^{2}}{2}}$$
SF: SF reflects the rate of change in the gray level of the image and also measures the quality of the image. For better performance, the SF value should be high. It can be calculated as follows:
$$SF = \sqrt{RF^{2} + CF^{2}}$$
where
$$RF = \sqrt{\frac{1}{G \times H}\sum_{g=1}^{G}\sum_{h=2}^{H}\big(Fused(g,h) - Fused(g,h-1)\big)^{2}}, \qquad CF = \sqrt{\frac{1}{G \times H}\sum_{g=2}^{G}\sum_{h=1}^{H}\big(Fused(g,h) - Fused(g-1,h)\big)^{2}}$$
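The SF definition above translates directly to code; the sketch below uses the paper's 1/(G×H) normalization (rather than dividing by the number of differences).

```python
import numpy as np

def spatial_frequency(fused):
    """Row frequency, column frequency, and overall spatial frequency."""
    f = np.asarray(fused, dtype=float)
    G, H = f.shape
    rf = np.sqrt(np.sum((f[:, 1:] - f[:, :-1]) ** 2) / (G * H))
    cf = np.sqrt(np.sum((f[1:, :] - f[:-1, :]) ** 2) / (G * H))
    return np.sqrt(rf ** 2 + cf ** 2)

assert spatial_frequency(np.ones((8, 8))) == 0.0      # flat image: no detail
board = (np.indices((8, 8)).sum(axis=0) % 2).astype(float)
assert np.isclose(spatial_frequency(board), np.sqrt(1.75))
```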
MSF: This metric is used to measure the overall active levels present in the fused image. It can be employed as follows:
$$MSF = \sqrt{RFF^{2} + CFF^{2} + DFF^{2}}, \qquad DFF = A + B$$
where
$$RFF = \sqrt{\frac{1}{G \times (H-1)}\sum_{g=1}^{G}\sum_{h=2}^{H}\big(Fused(g,h) - Fused(g,h-1)\big)^{2}}$$
$$CFF = \sqrt{\frac{1}{(G-1) \times H}\sum_{g=2}^{G}\sum_{h=1}^{H}\big(Fused(g,h) - Fused(g-1,h)\big)^{2}}$$
$$A = \sqrt{\frac{1}{(G-1)(H-1)}\sum_{g=2}^{G}\sum_{h=2}^{H}\big(Fused(g,h) - Fused(g-1,h-1)\big)^{2}}$$
$$B = \sqrt{\frac{1}{(G-1)(H-1)}\sum_{g=2}^{G}\sum_{h=2}^{H}\big(Fused(g-1,h) - Fused(g,h-1)\big)^{2}}$$
CC: This metric represents the similarity between the source and fused images. The range of CC is [0–1]. For high similarity, the CC value is 1 and it decreases as the dissimilarity increases. It is represented as follows:
$$CC = \frac{CC(I_1, Fused) + CC(I_2, Fused)}{2}$$
where
$$CC(I_k, Fused) = \frac{\sum_{g=1}^{G}\sum_{h=1}^{H}\big(I_k(g,h) - \mu_{I_k}\big)\big(Fused(g,h) - \mu_{Fused}\big)}{\sqrt{\sum_{g=1}^{G}\sum_{h=1}^{H}\big(I_k(g,h) - \mu_{I_k}\big)^{2}\,\sum_{g=1}^{G}\sum_{h=1}^{H}\big(Fused(g,h) - \mu_{Fused}\big)^{2}}}, \quad k = 1, 2$$
MI: The MI parameter is used to calculate the total information that is transferred to the fused image from input images.
$$MI_T = MI(I_1, Fused) + MI(I_2, Fused)$$
where
$$MI(I_k, Fused) = \sum_{g}\sum_{h} h_{I_k,Fused}(g,h)\,\log_2\frac{h_{I_k,Fused}(g,h)}{h_{I_k}(g)\,h_{Fused}(h)}, \quad k = 1, 2$$
is the mutual information between input $I_k(g,h)$ and the fused image, with $h_{I_k,Fused}$ the normalized joint histogram and $h_{I_k}$, $h_{Fused}$ the marginal histograms. For better performance, the MI value should be high.
FS: FS is introduced to measure the symmetry of the fused image with respect to the source images. If the value of FS is close to 2, this indicates both input images equally contribute to the fused image. Therefore, the fused image quality will be better.
$$FS = 2 - \left|\frac{MI(I_1, Fused)}{MI_T} - 0.5\right|$$
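The MI and FS definitions above can be estimated from image histograms; the sketch below uses a 64-bin joint histogram (the bin count is an assumption, not from the paper). When the two inputs contribute equally, FS evaluates to 2.

```python
import numpy as np

def mutual_information(a, b, bins=64):
    """MI between two images, estimated from a joint histogram."""
    joint, _, _ = np.histogram2d(np.ravel(a), np.ravel(b), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal of a
    py = pxy.sum(axis=0, keepdims=True)   # marginal of b
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])))

def fusion_symmetry(i1, i2, fused):
    """FS close to 2 means both inputs contribute equally to the fusion."""
    mi1 = mutual_information(i1, fused)
    mi2 = mutual_information(i2, fused)
    return 2.0 - abs(mi1 / (mi1 + mi2) - 0.5)
```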

5.1. Subjective-Type Evaluation

The subjective evaluation is carried out on various input datasets, as shown in Figure 3. In this paper, five groups of datasets have been used. The group 1 input images are MR-T1–MR-T2 datasets, shown in Figure 3((p1–p4) and (q1–q4)). Group 2 input images are MR-T1 and MRA, shown in Figure 3((p5) and (q5)). Group 3 input images are MRI and CT, in Figure 3((p6–p7) and (q6–q7)), and group 4 input images are MRI and PET, in Figure 3((p8–p11) and (q8–q11)). Finally, group 5 input images are MR-T2 and SPECT datasets, shown in Figure 3((p12–p16) and (q12–q16)). The performance of the proposed fusion scheme is compared with various existing algorithms, namely, the PCA method, Naidu’s [52] method, Sanjay’s [29] method, the contourlet transform (CONT) method, Chaira’s IFS [53] method, Bala’s IFS [54] method, Sugeno’s IFS [55] method, and Zhu’s [56] method, in Figure 4. The PCA-based fusion images are shown in the first column in Figure 4(a1–a16), the DWTPCA-based fusion images in the second column in Figure 4(b1–b16), the DWT-with-fuzzy-based fusion images in the third column in Figure 4(c1–c16), the CONT-based fusion images in the fourth column in Figure 4(d1–d16), Chaira’s IFS-based fusion images in the fifth column in Figure 4(e1–e16), Bala’s IFS-based fusion images in the sixth column in Figure 4(f1–f16), Sugeno’s IFS-based fusion images in the seventh column in Figure 4(g1–g16), and the PC-NSCT-based fusion images in the eighth column in Figure 4(h1–h16). Finally, the proposed fusion images are exhibited in the last column in Figure 4(i1–i16). Subjective analysis is related to human perception; the proposed fusion method yields a fused image with greater contrast, luminance, and better edge information than the other existing methods, with clear tumor regions shown in Figure 4((i4), (i8), (i12), (i13), and (i16)).
The proposed fusion results show that the quality of the fused image is better than that of the other existing fusion methods. Among all the groups of medical image datasets, the first group consists of T1- and T2-weighted MR images; fusing these two images shows soft tissue and an enhanced tumor region. The second group consists of MR-T1 and MRA images. MR-T1 images provide delicate tissue data but do not reveal abnormalities, while the MRA image easily detects abnormalities but, due to low spatial resolution, cannot provide the tissue information. Fusion of these images (MR-T1 and MRA) shows complementary information with detailed lesion locations in the fused image.
The third group dataset consists of MRI and CT images, which are taken from reference [44]. MRI imaging produces delicate tissue data, while CT imaging gives bone information. The combination of these two images produces a quality fused image, which will be more useful for the diagnosis of disease. The fourth and fifth medical image datasets are MRI–PET and MR-T2–SPECT images. The fusion of these combinations to get more complementary information is achieved in a fused image and highlights the tumor regions, which will be helpful for medical-related problems.

5.2. Objective Evaluation

The fused image quality cannot be completely judged by subjective analysis; therefore, objective evaluation is preferable for better analysis of fused images using various quality metrics. The results of the proposed method and the other existing methods are listed in Table 2, Table 3, Table 4, Table 5, Table 6, Table 7, Table 8 and Table 9. The average pixel intensity (API) values are tabulated in Table 2. It can be observed that the proposed fusion method provides the highest API values, which indicates that the fused image has good quality. The graphical representations of the API values are shown in Figure 5a. The standard deviation values are tabulated in Table 3. The proposed method’s SD values are greater than those of the other existing techniques, which indicates that the output fused image has better texture details; this is presented graphically in Figure 5b.
The average gradient (AG) values are shown in Table 4. It can be seen that the proposed method gives the highest AG values, which reveals that more complementary information is present in the fused image; this is presented graphically in Figure 5c.
The SF values are listed in Table 5. The SF of the proposed method gives superior values to the other methods, which indicates that texture changes and detailed differences are reflected in the fused image; this is shown graphically in Figure 6. The MSF values are listed in Table 6. The MSF values of the proposed method are greater than those of the other methods, which indicates that the fused image has more detailed information; this is observed graphically in Figure 7a.
The CC, MI, and FS values for all datasets and fusion methods are listed in Table 7, Table 8 and Table 9. The average CC, MI, and FS values of the proposed fusion method are the best overall, and moderate on some datasets, which shows that the proposed fused image carries more information and symmetry. The graphical representation of CC, MI, and FS is shown in Figure 7b–d.
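For reference, several of the metrics above can be sketched in a few lines of NumPy. These follow the formulations commonly used in the fusion literature (e.g., SF as the root of the mean squared row and column differences); the paper's exact normalizations may differ slightly, so treat this as an illustrative sketch rather than the authors' implementation.

```python
import numpy as np

def api(img):
    """Average pixel intensity: mean gray level of the image."""
    return float(np.mean(img))

def sd(img):
    """Standard deviation: spread of gray levels (contrast/texture)."""
    return float(np.std(img))

def avg_gradient(img):
    """Average gradient: mean magnitude of local intensity changes."""
    img = img.astype(np.float64)
    dx = np.diff(img, axis=1)[:-1, :]   # horizontal differences, cropped to a common shape
    dy = np.diff(img, axis=0)[:, :-1]   # vertical differences, cropped to a common shape
    return float(np.mean(np.sqrt((dx ** 2 + dy ** 2) / 2.0)))

def spatial_frequency(img):
    """Spatial frequency: combines row frequency and column frequency."""
    img = img.astype(np.float64)
    rf = np.sqrt(np.mean(np.diff(img, axis=1) ** 2))  # row frequency
    cf = np.sqrt(np.mean(np.diff(img, axis=0) ** 2))  # column frequency
    return float(np.sqrt(rf ** 2 + cf ** 2))

def cross_correlation(a, b):
    """Zero-mean normalized cross-correlation between two images."""
    a = a.astype(np.float64) - a.mean()
    b = b.astype(np.float64) - b.mean()
    return float(np.sum(a * b) / np.sqrt(np.sum(a ** 2) * np.sum(b ** 2)))
```

Higher API, SD, AG, and SF indicate a brighter, higher-contrast, more detailed fused image, while CC close to 1 indicates strong agreement between the fused image and a source image.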

5.3. Ranking Analysis

In this article, the proposed intuitionistic fuzzy set-based multimodal medical image fusion algorithm provides better results than the other methods across various quality metrics. Based on the objective evaluation in Section 5.2, each method was ranked according to the average value of each quality metric, as shown in Table 10. The best-performing fusion method is ranked 1 and the worst-performing method is ranked 9.
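The ranking strategy amounts to sorting the methods by the average value of each metric and assigning rank 1 to the best. A minimal sketch using the average API values from Table 2 (minor differences from Table 10 are possible, since the paper does not spell out its tie-breaking or ranking convention):

```python
# Average API values per method, taken from the last row of Table 2.
avg_api = {
    "PCA": 41.83, "DWTPCA": 28.79, "DWT + Fuzzy": 41.77, "CONT": 47.51,
    "Chaira's IFS": 47.45, "Bala's IFS": 48.07, "Sugeno's IFS": 57.10,
    "PC-NSCT": 44.93, "Proposed": 60.59,
}

# Sort methods by average API, highest first, and assign ranks 1..9.
ranked = sorted(avg_api, key=avg_api.get, reverse=True)
ranks = {method: i + 1 for i, method in enumerate(ranked)}
# The proposed method ranks 1 and DWTPCA ranks 9, as in the API row of Table 10.
```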

5.4. Running Time

The computational efficiency of the proposed and existing medical image fusion methods, namely PCA, DWTPCA, DWT + fuzzy, Contourlet, Chaira's IFS, Bala's IFS, Sugeno's IFS, and PC-NSCT, is shown in Table 11. Sugeno's IFS has the shortest execution time (0.50 s), followed by DWTPCA (0.60 s), in which the image pixels are selected directly; however, the DWTPCA fusion method performs poorly in both subjective and objective terms. The longest execution time is that of PC-NSCT (36.72 s), owing to its decomposition levels and fusion rules; the second longest is the Contourlet transform method (17.69 s), and the third longest is the DWT + fuzzy method (1.48 s). The average running time of the proposed method is 1.19 s. Thus, the proposed method provides better fusion performance with a relatively low execution time and less complexity than the other methods.
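Average running times of this kind are typically obtained by repeating each fusion several times and dividing the elapsed wall-clock time by the number of runs. A minimal, hypothetical harness is sketched below; the pixel-wise maximum used as `fuse` is only a stand-in, not the paper's fusion rule.

```python
import time
import numpy as np

def average_runtime(fuse, image_pairs, runs=10):
    """Average wall-clock time (seconds) of one pass over all image pairs."""
    start = time.perf_counter()
    for _ in range(runs):
        for a, b in image_pairs:
            fuse(a, b)
    return (time.perf_counter() - start) / runs

# Example with a trivial stand-in fusion rule (pixel-wise maximum):
rng = np.random.default_rng(0)
pairs = [(rng.random((64, 64)), rng.random((64, 64)))]
t = average_runtime(lambda a, b: np.maximum(a, b), pairs)
```

`time.perf_counter` is preferred over `time.time` here because it is monotonic and has higher resolution for short intervals.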

6. Conclusions

In this article, a novel IFS-based medical image fusion method was proposed, comprising four steps. First, the registered input images were fuzzified. Second, intuitionistic fuzzy images were created using the optimum value α obtained from IFE. Third, a fused IFI was obtained using the IFCC fusion rule with block processing. Fourth, the defuzzification operation was performed to produce the final enhanced fused image. The proposed method was evaluated against various existing methods, such as PCA, DWTPCA, DWT + fuzzy, CONT, Chaira's IFS, Bala's IFS, Sugeno's IFS, and PC-NSCT. These existing algorithms do not provide a quality fused image and suffer from various drawbacks, such as blocking artifacts, poor visibility of tumor regions, invisible blood vessels, low contrast, and vague boundaries. The proposed method overcomes these difficulties and provides a better-enhanced fused image without uncertainties.
The experimental results show that the proposed fusion method gives better fusion performance in terms of both subjective and objective analysis. In Figure 4(i4), the soft tissue and tumor regions are clearly enhanced, and the obtained SD (79.83) and SF (34.60) values are the largest in Table 3 and Table 5, respectively. In Figure 4(i5), the soft tissue and lesion structure information are reflected exactly in the fused image, and the obtained API value is 75.38, as shown in Table 2. In Figure 4(i8), the anatomical and functional information is visible with high quality in the fused image, and the attained SD, AG, SF, MSF, MI, and FS values are high (59.54, 5.80, 24.92, 51.53, 3.5689, and 1.8658) in Table 3, Table 4, Table 5, Table 6, Table 8 and Table 9, respectively. In Figure 4(i16), the tumor region is clearly enhanced, and the method attains high performance metric values compared with the other existing fusion methods. As discussed previously, the heart of the proposed fusion algorithm is the intuitionistic fuzzy membership function, which is obtained from the optimum value α using IFE. For better diagnosis and superior outcomes, the proposed fusion method can be extended to fuse different medical datasets using advanced fuzzy sets, such as the neutrosophic fuzzy set and the Pythagorean fuzzy set, and new fusion rules.
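The four-step pipeline summarized above can be sketched as follows. This is a simplified illustration, not the authors' implementation: the Sugeno-type generator and the enhanced membership μ + π·μ are common IFS choices assumed here, α would in practice be selected by maximizing intuitionistic fuzzy entropy (not implemented in this sketch), and the block-selection rule below is a simple energy criterion standing in for the paper's IFCC rule.

```python
import numpy as np

def fuzzify(img):
    """Step 1: map gray levels to membership values in [0, 1]."""
    img = img.astype(np.float64)
    return (img - img.min()) / (img.max() - img.min() + 1e-12)

def sugeno_ifi(mu, alpha):
    """Step 2: build an intuitionistic fuzzy image with a Sugeno-type
    generator: non-membership nu and hesitation pi derived from mu.
    The enhanced membership mu + pi * mu is one common choice."""
    nu = (1.0 - mu) / (1.0 + alpha * mu)
    pi = 1.0 - mu - nu
    return mu + pi * mu

def fuse_blocks(f1, f2, block=8):
    """Step 3: block-wise fusion; this stand-in rule keeps the block
    with higher energy (the paper uses an IFCC-based rule instead)."""
    out = np.empty_like(f1)
    h, w = f1.shape
    for i in range(0, h, block):
        for j in range(0, w, block):
            b1 = f1[i:i + block, j:j + block]
            b2 = f2[i:i + block, j:j + block]
            out[i:i + block, j:j + block] = b1 if (b1 ** 2).sum() >= (b2 ** 2).sum() else b2
    return out

def defuzzify(fused, lo=0, hi=255):
    """Step 4: map fused membership values back to gray levels."""
    return np.clip(fused * (hi - lo) + lo, lo, hi).astype(np.uint8)
```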

Author Contributions

Conceptualization, M.H.; methodology, M.H.; implementation, M.H.; writing—original draft preparation, M.H.; writing—review and editing, M.H. and V.G.; visualization, M.H.; supervision, V.G. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Azam, M.A.; Khan, K.B.; Salahuddin, S.; Rehman, E.; Khan, S.A.; Khan, M.A.; Kadry, S.; Gandomi, A.H. A Review on Multimodal Medical Image Fusion: Compendious Analysis of Medical Modalities, Multimodal Databases, Fusion Techniques and Quality Metrics. Comput. Biol. Med. 2022, 144, 105253. [Google Scholar] [CrossRef]
  2. Ma, J.; Liu, Y.; Jiang, J.; Wang, Z.; Xu, H.; Huo, X.; Deng, Y.; Shao, K. Infrared and Visible Image Fusion with Significant Target Enhancement. Entropy 2022, 24, 1633. [Google Scholar]
  3. Deveci, M.; Gokasar, I.; Pamucar, D.; Zaidan, A.A.; Wen, X.; Gupta, B.B. Evaluation of Cooperative Intelligent Transportation System Scenarios for Resilience in Transportation Using Type-2 Neutrosophic Fuzzy VIKOR. Transp. Res. Part A Policy Pract. 2023, 172, 103666. [Google Scholar] [CrossRef]
  4. Mary, S.R.; Pachar, S.; Srivastava, P.K.; Malik, M.; Sharma, A.; Almutiri, G.T.; Atal, Z. Deep Learning Model for the Image Fusion and Accurate Classification of Remote Sensing Images. Comput. Intell. Neurosci. 2022, 2022, 2668567. [Google Scholar] [CrossRef] [PubMed]
  5. James, A.P.; Dasarathy, B.V. Medical Image Fusion: A Survey of the State of the Art. Inf. Fusion 2014, 19, 4–19. [Google Scholar] [CrossRef] [Green Version]
  6. Kumar, M.; Kaur, A.; Amita. Improved Image Fusion of Colored and Grayscale Medical Images Based on Intuitionistic Fuzzy Sets. Fuzzy Inf. Eng. 2018, 10, 295–306. [Google Scholar] [CrossRef] [Green Version]
  7. Venkatesan, B.; Ragupathy, U.S.; Natarajan, I. A Review on Multimodal Medical Image Fusion towards Future Research. Multimed. Tools Appl. 2023, 82, 7361–7382. [Google Scholar] [CrossRef]
  8. Palanisami, D.; Mohan, N.; Ganeshkumar, L. A New Approach of Multi-Modal Medical Image Fusion Using Intuitionistic Fuzzy Set. Biomed. Signal Process. Control 2022, 77, 103762. [Google Scholar] [CrossRef]
  9. Prakash, O.; Park, C.M.; Khare, A.; Jeon, M.; Gwak, J. Multiscale Fusion of Multimodal Medical Images Using Lifting Scheme Based Biorthogonal Wavelet Transform. Optik 2019, 182, 995–1014. [Google Scholar] [CrossRef]
  10. Kumar, P.; Diwakar, M. A Novel Approach for Multimodality Medical Image Fusion over Secure Environment. Trans. Emerg. Telecommun. Technol. 2021, 32, e3985. [Google Scholar] [CrossRef]
  11. Dilmaghani, M.S.; Daneshvar, S.; Dousty, M. A New MRI and PET Image Fusion Algorithm Based on BEMD and IHS Methods. In Proceedings of the 2017 Iranian Conference on Electrical Engineering (ICEE), Tehran, Iran, 2–4 May 2017; pp. 118–121. [Google Scholar]
  12. Panigrahy, C.; Seal, A.; Mahato, N.K. MRI and SPECT Image Fusion Using a Weighted Parameter Adaptive Dual Channel PCNN. IEEE Signal Process. Lett. 2020, 27, 690–694. [Google Scholar] [CrossRef]
  13. Kaur, H.; Koundal, D.; Kadyan, V. Image Fusion Techniques: A Survey. Arch. Comput. Methods Eng. 2021, 28, 4425–4447. [Google Scholar] [CrossRef] [PubMed]
  14. Wang, Z.; Ziou, D.; Armenakis, C.; Li, D.; Li, Q. A Comparative Analysis of Image Fusion Methods. IEEE Trans. Geosci. Remote Sens. 2005, 43, 1391–1402. [Google Scholar] [CrossRef]
  15. He, C.; Liu, Q.; Li, H.; Wang, H. Multimodal Medical Image Fusion Based on IHS and PCA. Procedia Eng. 2010, 7, 280–285. [Google Scholar] [CrossRef] [Green Version]
  16. Li, M.; Dong, Y. Image Fusion Algorithm Based on Contrast Pyramid and Application. In Proceedings of the 2013 International Conference on Mechatronic Sciences, Electric Engineering and Computer (MEC), Shenyang, China, 20–22 December 2013; pp. 1342–1345. [Google Scholar]
  17. Tang, J. A Contrast Based Image Fusion Technique in the DCT Domain. Digit. Signal Process. 2004, 14, 218–226. [Google Scholar] [CrossRef]
  18. Liang, J.; He, Y.; Liu, D.; Zeng, X. Image Fusion Using Higher Order Singular Value Decomposition. IEEE Trans. Image Process. 2012, 21, 2898–2909. [Google Scholar] [CrossRef] [PubMed]
  19. Prasad, P.; Subramani, S.; Bhavana, V.; Krishnappa, H.K. Medical Image Fusion Techniques Using Discrete Wavelet Transform. In Proceedings of the 2019 3rd International Conference on Computing Methodologies and Communication (ICCMC), Erode, India, 27–29 March 2019; pp. 614–618. [Google Scholar]
  20. Li, X.; He, M.; Roux, M. Multifocus Image Fusion Based on Redundant Wavelet Transform. IET Image Process. 2010, 4, 283. [Google Scholar] [CrossRef]
  21. Khare, A.; Srivastava, R.; Singh, R. Edge Preserving Image Fusion Based on Contourlet Transform. In Proceedings of the Image and Signal Processing: 5th International Conference, ICISP 2012, Agadir, Morocco, 28–30 June 2012; Volume 7340, pp. 93–102. [Google Scholar]
  22. Sinhal, R.; Sharma, S.; Ansari, I.A.; Bajaj, V. Multipurpose Medical Image Watermarking for Effective Security Solutions. Multimed. Tools Appl. 2022, 81, 14045–14063. [Google Scholar] [CrossRef]
  23. Liu, M.; Mei, S.; Liu, P.; Gasimov, Y.; Cattani, C. A New X-Ray Medical-Image-Enhancement Method Based on Multiscale Shannon–Cosine Wavelet. Entropy 2022, 24, 1754. [Google Scholar]
  24. Liu, S.; Wang, M.; Yin, L.; Sun, X.; Zhang, Y.-D.; Zhao, J. Two-Scale Multimodal Medical Image Fusion Based on Structure Preservation. Front. Comput. Neurosci. 2022, 15, 133. [Google Scholar] [CrossRef]
  25. Chen, X.; Wan, Y.; Wang, D.; Wang, Y. Image Deblurring Based on an Improved CNN-Transformer Combination Network. Appl. Sci. 2023, 13, 311. [Google Scholar] [CrossRef]
  26. Ganasala, P.; Kumar, V. CT and MR Image Fusion Scheme in Nonsubsampled Contourlet Transform Domain. J. Digit. Imaging 2014, 27, 407–418. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  27. Qiu, C.; Wang, Y.; Zhang, H.; Xia, S. Image Fusion of CT and MR with Sparse Representation in NSST Domain. Comput. Math. Methods Med. 2017, 2017, 9308745. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  28. Liu, X.; Mei, W.; Du, H. Multi-Modality Medical Image Fusion Based on Image Decomposition Framework and Nonsubsampled Shearlet Transform. Biomed. Signal Process. Control 2018, 40, 343–350. [Google Scholar] [CrossRef]
  29. Sanjay, A.R.; Soundrapandiyan, R.; Karuppiah, M.; Ganapathy, R. CT and MRI Image Fusion Based on Discrete Wavelet Transform and Type-2 Fuzzy Logic. Int. J. Intell. Eng. Syst. 2017, 10, 355–362. [Google Scholar] [CrossRef]
  30. Cao, G.; Huang, L.; Tian, H.; Huang, X.; Wang, Y.; Zhi, R. Contrast Enhancement of Brightness-Distorted Images by Improved Adaptive Gamma Correction. Comput. Electr. Eng. 2018, 66, 569–582. [Google Scholar] [CrossRef] [Green Version]
  31. Salem, N.; Malik, H.; Shams, A. Medical Image Enhancement Based on Histogram Algorithms. Procedia Comput. Sci. 2019, 163, 300–311. [Google Scholar] [CrossRef]
  32. Zadeh, L.A. Fuzzy Sets. Inf. Control 1965, 8, 338–353. [Google Scholar] [CrossRef] [Green Version]
  33. Atanassov, K.T. Intuitionistic Fuzzy Sets. Fuzzy Sets Syst. 1986, 20, 87–96. [Google Scholar] [CrossRef]
  34. Güneri, B.; Deveci, M. Evaluation of Supplier Selection in the Defense Industry Using Q-Rung Orthopair Fuzzy Set Based EDAS Approach. Expert Syst. Appl. 2023, 222, 119846. [Google Scholar] [CrossRef]
  35. Szmidt, E.; Kacprzyk, J. Distances between Intuitionistic Fuzzy Sets. Fuzzy Sets Syst. 2000, 114, 505–518. [Google Scholar] [CrossRef]
  36. Chaira, T. A Novel Intuitionistic Fuzzy C Means Clustering Algorithm and Its Application to Medical Images. Appl. Soft Comput. 2011, 11, 1711–1717. [Google Scholar] [CrossRef]
  37. Huang, H.L.; Guo, Y. An Improved Correlation Coefficient of Intuitionistic Fuzzy Sets. J. Intell. Syst. 2019, 28, 231–243. [Google Scholar] [CrossRef]
  38. Bustince, H.; Kacprzyk, J.; Mohedano, V. Intuitionistic Fuzzy Generators Application to Intuitionistic Fuzzy Complementation. Fuzzy Sets Syst. 2000, 114, 485–504. [Google Scholar] [CrossRef]
  39. Sugeno, M. Fuzzy measures and fuzzy integrals—A survey. In Readings in Fuzzy Sets for Intelligent Systems; Elsevier: Amsterdam, The Netherlands, 1993; pp. 251–257. [Google Scholar]
  40. Roychowdhury, S.; Wang, B.H. Composite Generalization of Dombi Class and a New Family of T-Operators Using Additive-Product Connective Generator. Fuzzy Sets Syst. 1994, 66, 329–346. [Google Scholar] [CrossRef]
  41. De Luca, A.; Termini, S. A Definition of a Nonprobabilistic Entropy in the Setting of Fuzzy Sets Theory. Inf. Control 1972, 20, 301–312. [Google Scholar] [CrossRef] [Green Version]
  42. Joshi, D.; Kumar, S. Intuitionistic Fuzzy Entropy and Distance Measure Based TOPSIS Method for Multi-Criteria Decision Making. Egypt. Inform. J. 2014, 15, 97–104. [Google Scholar] [CrossRef]
  43. Hung, W.L.; Yang, M.S. Fuzzy Entropy on Intuitionistic Fuzzy Sets. Int. J. Intell. Syst. 2006, 21, 443–451. [Google Scholar] [CrossRef]
  44. Brain Image. Available online: http://www.metapix.de/examples.html (accessed on 3 February 2020).
  45. The Whole Brain Atlas. Available online: https://www.med.harvard.edu/aanlib/home.html (accessed on 3 February 2020).
  46. Bavirisetti, D.P.; Kollu, V.; Gang, X.; Dhuli, R. Fusion of MRI and CT Images Using Guided Image Filter and Image Statistics. Int. J. Imaging Syst. Technol. 2017, 27, 227–237. [Google Scholar] [CrossRef]
  47. Haddadpour, M.; Daneshavar, S.; Seyedarabi, H. PET and MRI Image Fusion Based on Combination of 2-D Hilbert Transform and IHS Method. Biomed. J. 2017, 40, 219–225. [Google Scholar] [CrossRef]
  48. Bavirisetti, D.P.; Dhuli, R. Multi-Focus Image Fusion Using Multi-Scale Image Decomposition and Saliency Detection. Ain Shams Eng. J. 2018, 9, 1103–1117. [Google Scholar] [CrossRef] [Green Version]
  49. Das, S.; Kundu, M.K. NSCT-Based Multimodal Medical Image Fusion Using Pulse-Coupled Neural Network and Modified Spatial Frequency. Med. Biol. Eng. Comput. 2012, 50, 1105–1114. [Google Scholar] [CrossRef]
  50. Shreyamsha Kumar, B.K. Image Fusion Based on Pixel Significance Using Cross Bilateral Filter. Signal Image Video Process. 2015, 9, 1193–1204. [Google Scholar] [CrossRef]
  51. Dammavalam, S.R. Quality Assessment of Pixel-Level ImageFusion Using Fuzzy Logic. Int. J. Soft Comput. 2012, 3, 11–23. [Google Scholar] [CrossRef]
  52. Naidu, V.P.S.; Raol, J.R. Pixel-Level Image Fusion Using Wavelets and Principal Component Analysis. Def. Sci. J. 2008, 58, 338–352. [Google Scholar] [CrossRef]
  53. Chaira, T. A Rank Ordered Filter for Medical Image Edge Enhancement and Detection Using Intuitionistic Fuzzy Set. Appl. Soft Comput. 2012, 12, 1259–1266. [Google Scholar] [CrossRef]
  54. Balasubramaniam, P.; Ananthi, V.P. Image Fusion Using Intuitionistic Fuzzy Sets. Inf. Fusion 2014, 20, 21–30. [Google Scholar] [CrossRef]
  55. Tirupal, T.; Mohan, B.C.; Kumar, S.S. Multimodal Medical Image Fusion Based on Sugeno’s Intuitionistic Fuzzy Sets. ETRI J. 2017, 39, 173–180. [Google Scholar] [CrossRef]
  56. Zhu, Z.; Zheng, M.; Qi, G.; Wang, D.; Xiang, Y. A Phase Congruency and Local Laplacian Energy Based Multi-Modality Medical Image Fusion Method in NSCT Domain. IEEE Access 2019, 7, 20811–20824. [Google Scholar] [CrossRef]
Figure 1. Flow chart of proposed grayscale medical image fusion algorithm.
Figure 2. Flow chart of proposed color medical image fusion algorithm.
Figure 3. Medical image datasets: (p1–p4) and (q1–q4) are MR T1–MR T2 input images; (p5) and (q5) are T1-weighted MR–MRA input images; (p6, p7) and (q6, q7) are MRI–CT input images; (p8–p11) and (q8–q11) are MRI–PET input images; and (p12–p16) and (q12–q16) are MR-T2–SPECT input images.
Figure 4. Fused images using: (a) PCA method, (b) DWTPCA [52] method, (c) DWT + fuzzy method [29], (d) Contourlet transform (CONT) based method, (e) Chaira's IFS [53], (f) Bala's IFS [54], (g) Sugeno's IFS [55], (h) PC-NSCT method [56], and (i) Proposed method.
Figure 5. Graphical representation of (a) API, (b) SD, (c) AG measures of proposed and other existing methods.
Figure 6. Graphical representation of SF measures of proposed and existing methods.
Figure 7. Graphical representation of (a) MSF, (b) CC, (c) MI, and (d) FS measures of proposed and existing methods.
Table 1. Comparison of the existing fusion methods.

| Fusion Method | Modalities | Merits | Demerits |
|---|---|---|---|
| IHS and PCA | MRI–PET | Good spatial features and better color visualization in the fused image. | Low contrast and distorted boundaries. |
| Pyramid | MRI–CT | Preserves better outlines in the fused image. | Due to a lack of spatial orientation selectivity, unwanted edges and blocking effects appear in the fused image. |
| SVD | MRI–CT | Provides a better-quality fused image. | Fails to show clear boundaries of the tumor region. |
| DWT | MRI–CT, MRI–PET | Provides good localization in both time and frequency. | Higher complexity and a lack of edge information. |
| CONT | MRI–CT | Fused image has better edges; superior to the DWT and curvelet transforms. | Not shift-invariant; may cause blocking effects. |
| NSCT | MRI–CT | Superior to traditional transform techniques in terms of directionality. | High complexity. |
| NSST | MRI–CT | Superior to NSCT with lower complexity. | Low brightness and contrast due to uncertainties, and high computational time. |
Table 2. Performance evaluation of the fusion methods using the API measure.

| Medical Image Modality | Data Set | PCA | DWTPCA | DWT + Fuzzy | CONT | Chaira's IFS | Bala's IFS | Sugeno's IFS | PC-NSCT | Proposed Method |
|---|---|---|---|---|---|---|---|---|---|---|
| MR T1–MR T2 | 1 | 48.53 | 48.55 | 47.66 | 61.52 | 55.65 | 56.62 | 67.11 | 52.92 | 70.9 |
| | 2 | 40.94 | 40.79 | 43.4 | 49.59 | 44.69 | 45.4 | 54.38 | 43.77 | 57.77 |
| | 3 | 49.77 | 48.83 | 54.58 | 64.85 | 61.23 | 62.3 | 71.51 | 56.63 | 75.53 |
| | 4 | 56.54 | 35.73 | 42.57 | 57.2 | 53.15 | 54.23 | 63.11 | 48.66 | 67.15 |
| MR-T1–MRA | 5 | 35.5 | 45.87 | 66.38 | 67.85 | 58.82 | 59.34 | 69.92 | 66.38 | 75.38 |
| MRI–CT | 6 | 52.74 | 32.67 | 55.24 | 60.79 | 55.99 | 56.23 | 70.58 | 55.77 | 76.85 |
| | 7 | 49.54 | 39.47 | 21.87 | 60.33 | 59.92 | 60.36 | 67.93 | 58.52 | 72.23 |
| MRI–PET | 8 | 17.86 | 9.01 | 17.92 | 18.16 | 27.29 | 27.96 | 25.85 | 17.89 | 27.17 |
| | 9 | 25.81 | 13.01 | 25.92 | 26.04 | 37.31 | 37.45 | 39.41 | 25.88 | 41.21 |
| | 10 | 32.24 | 16.21 | 32.33 | 32.49 | 31.85 | 32.21 | 37.54 | 32.33 | 39.09 |
| | 11 | 62.82 | 31.56 | 62.98 | 63.22 | 57.32 | 58.07 | 76.55 | 62.96 | 79.4 |
| MR-T2–SPECT | 12 | 36.24 | 18.22 | 36.34 | 36.42 | 40.9 | 41.47 | 49.69 | 36.28 | 53.57 |
| | 13 | 34.87 | 17.54 | 34.96 | 35.11 | 41.7 | 42.18 | 49.21 | 34.95 | 52.07 |
| | 14 | 35.12 | 17.71 | 35.25 | 35.37 | 47.41 | 48.07 | 63.28 | 35.14 | 66.98 |
| | 15 | 41.89 | 21.06 | 42 | 42.15 | 39.6 | 40.24 | 51.23 | 42.01 | 54.05 |
| | 16 | 48.85 | 24.44 | 48.87 | 49.11 | 46.44 | 46.95 | 56.3 | 48.78 | 60.14 |
| Average Value | | 41.83 | 28.79 | 41.77 | 47.51 | 47.45 | 48.07 | 57.10 | 44.93 | 60.59 |
Table 3. Performance evaluation of the fusion methods using SD measures.

| Medical Image Modality | Data Set | PCA | DWTPCA | DWT + Fuzzy | CONT | Chaira's IFS | Bala's IFS | Sugeno's IFS | PC-NSCT | Proposed Method |
|---|---|---|---|---|---|---|---|---|---|---|
| MR T1–MR T2 | 1 | 58.73 | 58.71 | 64.74 | 78.34 | 70.36 | 71.75 | 80.51 | 69.79 | 83.83 |
| | 2 | 55.21 | 55.02 | 60.17 | 67.04 | 61.2 | 62.32 | 74.36 | 61.79 | 78.16 |
| | 3 | 59.25 | 57.86 | 69.74 | 77.06 | 73.98 | 75.53 | 80.43 | 73.08 | 83.45 |
| | 4 | 57.79 | 46.16 | 57.5 | 72.55 | 69.02 | 70.84 | 76.46 | 68.38 | 79.83 |
| MR-T1–MRA | 5 | 46.19 | 45.49 | 68.52 | 68.86 | 62.11 | 62.45 | 72.01 | 69.22 | 74.73 |
| MRI–CT | 6 | 54.1 | 34.95 | 56.9 | 61.73 | 60.37 | 60.89 | 68.77 | 60.03 | 69.87 |
| | 7 | 61.41 | 47.21 | 32.58 | 73.7 | 73.22 | 73.88 | 75.45 | 73.42 | 78.27 |
| MRI–PET | 8 | 41.83 | 21.04 | 40.47 | 41.61 | 54.01 | 55.71 | 57.6 | 41.98 | 59.54 |
| | 9 | 44.92 | 22.61 | 44.84 | 44.89 | 68.6 | 68.87 | 72.23 | 45.46 | 74.12 |
| | 10 | 60.57 | 30.43 | 59.91 | 60.4 | 63.04 | 64.05 | 73.13 | 60.74 | 74.9 |
| | 11 | 75.98 | 38.16 | 75.01 | 75.93 | 76.97 | 78.37 | 85.24 | 76.18 | 86.72 |
| MR-T2–SPECT | 12 | 47.11 | 23.67 | 46.61 | 47.13 | 50.84 | 51.69 | 57.39 | 47.24 | 60.38 |
| | 13 | 53.72 | 26.98 | 53.33 | 53.75 | 58.7 | 59.33 | 64.67 | 53.85 | 67.1 |
| | 14 | 43.1 | 21.7 | 42.8 | 43.16 | 53.66 | 54.18 | 63.68 | 43.30 | 66.03 |
| | 15 | 58.44 | 29.36 | 58.03 | 58.49 | 55.09 | 56.19 | 66.97 | 58.60 | 69.4 |
| | 16 | 65.31 | 32.68 | 64.74 | 65.15 | 61.47 | 62.38 | 71.09 | 65.39 | 74.78 |
| Average Value | | 55.23 | 37.00 | 55.99 | 61.85 | 63.29 | 64.28 | 71.25 | 60.53 | 73.82 |
Table 4. Performance evaluation of the fusion methods using the AG measure.

| Medical Image Modality | Data Set | PCA | DWTPCA | DWT + Fuzzy | CONT | Chaira's IFS | Bala's IFS | Sugeno's IFS | PC-NSCT | Proposed Method |
|---|---|---|---|---|---|---|---|---|---|---|
| MR T1–MR T2 | 1 | 5.79 | 5.8 | 7.35 | 8.31 | 5.96 | 8.21 | 8.51 | 8.62 | 8.6 |
| | 2 | 4.29 | 4.25 | 5.7 | 6.42 | 4.37 | 5.09 | 6.48 | 6.4 | 6.59 |
| | 3 | 8.32 | 8.15 | 10.6 | 11.25 | 8.38 | 11.73 | 12.1 | 12.05 | 12.1 |
| | 4 | 7.68 | 7.36 | 9.59 | 10.69 | 7.35 | 9.65 | 9.8 | 10.81 | 10.81 |
| MR-T1–MRA | 5 | 7.4 | 6.43 | 9.11 | 9.78 | 7.37 | 9.38 | 11.03 | 10.18 | 11.33 |
| MRI–CT | 6 | 5.4 | 3.9 | 6.43 | 7.39 | 6.15 | 6.92 | 7.99 | 7.39 | 8.12 |
| | 7 | 6.77 | 5.4 | 6.33 | 8.12 | 6.63 | 8.6 | 8.64 | 8.54 | 8.94 |
| MRI–PET | 8 | 5.78 | 3.45 | 5.42 | 5.10 | 4.04 | 5.24 | 5.37 | 5.73 | 5.80 |
| | 9 | 4.79 | 2.41 | 4.47 | 4.91 | 5.31 | 5.83 | 6.84 | 4.8 | 7.01 |
| | 10 | 7.86 | 3.96 | 7.24 | 8.04 | 6.22 | 7.37 | 8.5 | 7.92 | 8.58 |
| | 11 | 14.55 | 7.3 | 12.88 | 14.81 | 10.49 | 13.2 | 15.53 | 14.59 | 15.66 |
| MR-T2–SPECT | 12 | 8.15 | 4.09 | 7.03 | 8.17 | 5.85 | 7.59 | 9.59 | 8.18 | 10.09 |
| | 13 | 6.47 | 3.24 | 5.8 | 6.54 | 5.06 | 5.85 | 6.52 | 6.49 | 6.69 |
| | 14 | 6.93 | 3.48 | 5.79 | 7 | 4.97 | 6.65 | 7.55 | 6.97 | 7.88 |
| | 15 | 7.79 | 3.91 | 6.7 | 7.87 | 5.02 | 6.51 | 8.06 | 7.82 | 8.34 |
| | 16 | 7.41 | 3.71 | 6.52 | 7.37 | 5.58 | 7.23 | 8.43 | 7.42 | 8.57 |
| Average Value | | 6.75 | 4.57 | 6.90 | 7.78 | 5.82 | 7.36 | 8.28 | 7.91 | 8.53 |
Table 5. Performance evaluation of the fusion methods using the SF measure.

| Medical Image Modality | Data Set | PCA | DWTPCA | DWT + Fuzzy | CONT | Chaira's IFS | Bala's IFS | Sugeno's IFS | PC-NSCT | Proposed Method |
|---|---|---|---|---|---|---|---|---|---|---|
| MR T1–MR T2 | 1 | 20.18 | 20.28 | 22.27 | 26.25 | 22.99 | 27.81 | 29.42 | 26.01 | 30.04 |
| | 2 | 14.23 | 14.02 | 18.64 | 19.99 | 15.95 | 19 | 21.55 | 20.23 | 22.05 |
| | 3 | 24.04 | 23.18 | 29.62 | 32.96 | 27.17 | 32.05 | 33 | 32.79 | 34.3 |
| | 4 | 20.82 | 23.36 | 28.76 | 31.37 | 26.98 | 33.25 | 34.05 | 30.9 | 34.6 |
| MR-T1–MRA | 5 | 23.41 | 16.43 | 24.23 | 24.88 | 24.48 | 25.36 | 25.94 | 25.92 | 25.98 |
| MRI–CT | 6 | 13.69 | 10.25 | 16.91 | 18.65 | 16.9 | 17.69 | 19.3 | 18.67 | 19.3 |
| | 7 | 17.16 | 13.07 | 15.31 | 21.45 | 18.68 | 21.55 | 22.19 | 20.66 | 22.27 |
| MRI–PET | 8 | 21.22 | 14.16 | 22.49 | 24.9 | 16.85 | 23.23 | 23.91 | 24.31 | 24.92 |
| | 9 | 16.2 | 8.14 | 15.88 | 16.17 | 20.71 | 22.03 | 25.71 | 16.26 | 26.54 |
| | 10 | 26.32 | 13.2 | 24.85 | 26.28 | 23.63 | 26.48 | 29.53 | 26.4 | 30.13 |
| | 11 | 37.92 | 19.02 | 34.77 | 37.93 | 31.99 | 37.49 | 41.97 | 38.01 | 42.6 |
| MR-T2–SPECT | 12 | 19.57 | 9.82 | 17.85 | 19.56 | 15.64 | 18.84 | 22.93 | 19.62 | 24.02 |
| | 13 | 19.04 | 9.55 | 17.76 | 18.97 | 16.41 | 18.28 | 20.2 | 19.08 | 21.84 |
| | 14 | 16.17 | 8.12 | 14.47 | 16.15 | 13.97 | 16.94 | 17.12 | 16.23 | 19.96 |
| | 15 | 21.52 | 10.79 | 19.43 | 21.44 | 15.54 | 18.97 | 23.24 | 21.57 | 24.16 |
| | 16 | 24.96 | 12.49 | 22.91 | 24.76 | 20.87 | 24.02 | 29.35 | 24.96 | 30.41 |
| Average Value | | 21.03 | 14.12 | 21.63 | 23.86 | 20.55 | 23.94 | 26.21 | 23.85 | 27.07 |
Table 6. Performance evaluation of the fusion methods using the MSF measure.

| Medical Image Modality | Data Set | PCA | DWTPCA | DWT + Fuzzy | CONT | Chaira's IFS | Bala's IFS | Sugeno's IFS | PC-NSCT | Proposed Method |
|---|---|---|---|---|---|---|---|---|---|---|
| MR T1–MR T2 | 1 | 43.01 | 43.22 | 47.43 | 56.27 | 48.63 | 58.86 | 62.39 | 55.86 | 63.79 |
| | 2 | 30.58 | 30.14 | 40.21 | 42.72 | 34.21 | 40.66 | 46.16 | 43.29 | 47.23 |
| | 3 | 50.81 | 49.05 | 63.05 | 69.36 | 57.43 | 69.59 | 71.51 | 69.04 | 72.04 |
| | 4 | 44.77 | 48.87 | 60.89 | 66.10 | 56.75 | 69.33 | 70.96 | 65.35 | 71.08 |
| MR-T1–MRA | 5 | 48.95 | 35.35 | 52.49 | 53.99 | 52.28 | 55.25 | 55.36 | 55.12 | 55.45 |
| MRI–CT | 6 | 30.36 | 22.62 | 37.41 | 41.13 | 37.16 | 38.95 | 41.47 | 41.24 | 42.30 |
| | 7 | 3.07 | 28.13 | 32.61 | 46.26 | 40.21 | 46.34 | 47.59 | 44.64 | 47.75 |
| MRI–PET | 8 | 47.80 | 29.00 | 47.31 | 51.09 | 35.36 | 49.67 | 51.49 | 50.98 | 51.53 |
| | 9 | 35.28 | 17.73 | 34.62 | 35.28 | 44.21 | 47.49 | 56.00 | 35.42 | 57.81 |
| | 10 | 56.70 | 28.44 | 53.89 | 56.69 | 50.47 | 56.82 | 63.72 | 56.86 | 64.95 |
| | 11 | 80.76 | 40.51 | 74.61 | 80.86 | 67.51 | 79.53 | 89.72 | 80.97 | 82.04 |
| MR-T2–SPECT | 12 | 41.64 | 20.88 | 38.28 | 41.61 | 33.35 | 40.10 | 48.75 | 41.75 | 50.99 |
| | 13 | 41.02 | 20.57 | 38.50 | 40.85 | 35.26 | 39.29 | 43.38 | 41.12 | 44.71 |
| | 14 | 34.14 | 17.14 | 30.96 | 34.10 | 29.82 | 35.80 | 40.35 | 34.26 | 42.02 |
| | 15 | 45.80 | 22.97 | 41.75 | 45.61 | 33.14 | 40.25 | 49.54 | 45.92 | 51.48 |
| | 16 | 52.94 | 26.48 | 48.95 | 52.46 | 44.21 | 50.98 | 62.13 | 52.94 | 64.32 |
| Average Value | | 42.98 | 30.07 | 46.44 | 50.90 | 43.75 | 51.18 | 56.28 | 50.92 | 56.84 |
Table 7. Performance evaluation of the fusion methods using the CC measure.

| Medical Image Modality | Data Set | PCA | DWTPCA | DWT + Fuzzy | CONT | Chaira's IFS | Bala's IFS | Sugeno's IFS | PC-NSCT | Proposed Method |
|---|---|---|---|---|---|---|---|---|---|---|
| MR T1–MR T2 | 1 | 0.92 | 0.9201 | 0.8523 | 0.8647 | 0.8905 | 0.8932 | 0.9093 | 0.8838 | 0.9089 |
| | 2 | 0.9428 | 0.9433 | 0.9117 | 0.9214 | 0.9392 | 0.9381 | 0.9433 | 0.9323 | 0.9421 |
| | 3 | 0.9007 | 0.9064 | 0.8286 | 0.8725 | 0.8844 | 0.889 | 0.8892 | 0.8715 | 0.8849 |
| | 4 | 0.7583 | 0.7587 | 0.6296 | 0.6721 | 0.7572 | 0.7659 | 0.7585 | 0.7444 | 0.7553 |
| MR-T1–MRA | 5 | 0.9012 | 0.9078 | 0.8457 | 0.9021 | 0.9152 | 0.9134 | 0.9129 | 0.9133 | 0.9157 |
| MRI–CT | 6 | 0.5444 | 0.6413 | 0.5305 | 0.6412 | 0.6439 | 0.6472 | 0.635 | 0.6348 | 0.6481 |
| | 7 | 0.8007 | 0.445 | 0.7548 | 0.7951 | 0.8055 | 0.8127 | 0.8102 | 0.8043 | 0.8111 |
| MRI–PET | 8 | 0.8132 | 0.8106 | 0.8116 | 0.7985 | 0.794 | 0.7951 | 0.7951 | 0.8034 | 0.8088 |
| | 9 | 0.6912 | 0.6954 | 0.6795 | 0.6715 | 0.688 | 0.6901 | 0.5878 | 0.6936 | 0.5694 |
| | 10 | 0.691 | 0.6912 | 0.6724 | 0.6457 | 0.585 | 0.6609 | 0.6907 | 0.6911 | 0.6886 |
| | 11 | 0.574 | 0.5755 | 0.5494 | 0.6736 | 0.5662 | 0.6709 | 0.6695 | 0.5736 | 0.6907 |
| MR-T2–SPECT | 12 | 0.5279 | 0.5296 | 0.4755 | 0.5137 | 0.5285 | 0.5228 | 0.5288 | 0.5236 | 0.5226 |
| | 13 | 0.6283 | 0.6313 | 0.6858 | 0.6783 | 0.6906 | 0.6067 | 0.6567 | 0.6925 | 0.6966 |
| | 14 | 0.6476 | 0.6513 | 0.5901 | 0.6238 | 0.6112 | 0.6167 | 0.6137 | 0.6456 | 0.6165 |
| | 15 | 0.681 | 0.6819 | 0.6618 | 0.6736 | 0.6778 | 0.677 | 0.6819 | 0.6932 | 0.6867 |
| | 16 | 0.6541 | 0.6574 | 0.6495 | 0.6219 | 0.6567 | 0.6595 | 0.6595 | 0.662 | 0.6693 |
| Average Value | | 0.7298 | 0.7154 | 0.6956 | 0.7231 | 0.7271 | 0.735 | 0.7339 | 0.7352 | 0.7385 |
Table 8. Performance evaluation of the fusion methods using the MI measure.

| Medical Image Modality | Data Set | PCA | DWTPCA | DWT + Fuzzy | CONT | Chaira's IFS | Bala's IFS | Sugeno's IFS | PC-NSCT | Proposed Method |
|---|---|---|---|---|---|---|---|---|---|---|
| MR T1–MR T2 | 1 | 3.5405 | 3.2795 | 3.4464 | 2.3495 | 3.6865 | 4.8538 | 4.2121 | 3.8146 | 3.7825 |
| | 2 | 3.2935 | 2.9621 | 3.7514 | 3.2415 | 3.4637 | 4.292 | 3.6564 | 2.9574 | 3.4782 |
| | 3 | 3.837 | 3.3622 | 3.9574 | 3.4521 | 3.5411 | 5.2866 | 4.2527 | 3.1686 | 3.9925 |
| | 4 | 4.0495 | 3.2325 | 3.7457 | 3.6848 | 3.4302 | 4.2621 | 4.2265 | 4.0854 | 4.3005 |
| MR-T1–MRA | 5 | 5.0121 | 5.9402 | 5.2496 | 4.7354 | 5.6626 | 5.9854 | 5.2456 | 4.791 | 5.9928 |
| MRI–CT | 6 | 6.3918 | 5.2744 | 5.2314 | 6.2198 | 5.1325 | 6.5985 | 6.3305 | 4.3827 | 6.7901 |
| | 7 | 4.2013 | 3.851 | 4.9572 | 4.947 | 4.3207 | 5.2971 | 6.1245 | 5.6228 | 6.165 |
| MRI–PET | 8 | 3.0026 | 3.0452 | 2.2536 | 2.9358 | 3.0057 | 3.5067 | 3.2504 | 3.4873 | 3.5689 |
| | 9 | 2.9769 | 3.0468 | 1.9956 | 2.785 | 2.3185 | 3.0095 | 3.1047 | 2.3645 | 2.9308 |
| | 10 | 2.8845 | 2.9636 | 1.9311 | 2.6831 | 2.2413 | 2.6624 | 2.7858 | 2.9416 | 2.9711 |
| | 11 | 4.3382 | 4.51 | 2.4966 | 3.8536 | 2.8213 | 4.8101 | 5.0956 | 4.6593 | 4.4321 |
| MR-T2–SPECT | 12 | 5.0262 | 5.0045 | 3.1563 | 4.9574 | 3.9424 | 5.4231 | 7.0542 | 4.9962 | 7.1046 |
| | 13 | 3.9957 | 3.8844 | 2.76 | 3.8614 | 3.9952 | 4.7176 | 5.027 | 4.9831 | 5.1158 |
| | 14 | 4.9323 | 4.9244 | 3.1878 | 4.7164 | 4.3146 | 5.5147 | 6.4363 | 6.6907 | 6.2446 |
| | 15 | 4.934 | 5.0671 | 3.2207 | 5.6416 | 4.3017 | 5.4551 | 6.0178 | 4.9347 | 6.0238 |
| | 16 | 3.2219 | 4.3176 | 2.2222 | 4.9135 | 4.2378 | 3.8704 | 4.394 | 3.8471 | 5.416 |
| Average Value | | 4.1024 | 4.0416 | 3.3477 | 4.0611 | 3.776 | 4.7216 | 4.8259 | 4.2329 | 4.8943 |
Table 9. Performance evaluation of the fusion methods using the FS measure.

| Medical Image Modality | Data Set | PCA | DWTPCA | DWT + Fuzzy | CONT | Chaira's IFS | Bala's IFS | Sugeno's IFS | PC-NSCT | Proposed Method |
|---|---|---|---|---|---|---|---|---|---|---|
| MR T1–MR T2 | 1 | 1.9552 | 1.9624 | 1.9516 | 1.9254 | 1.9515 | 1.9537 | 1.9597 | 1.9524 | 1.9655 |
| | 2 | 1.9719 | 1.9719 | 1.9722 | 1.9837 | 1.9529 | 1.9259 | 1.771 | 1.991 | 1.647 |
| | 3 | 1.979 | 1.9854 | 1.8712 | 1.9165 | 1.9849 | 1.9641 | 1.928 | 1.9421 | 1.9238 |
| | 4 | 1.8551 | 1.8492 | 1.7968 | 1.8379 | 1.8432 | 1.8322 | 1.8483 | 1.8325 | 1.8573 |
| MR-T1–MRA | 5 | 1.828 | 1.7857 | 1.8276 | 1.7975 | 1.8266 | 1.8319 | 1.815 | 1.7928 | 1.8358 |
| MRI–CT | 6 | 1.5796 | 1.5913 | 1.6012 | 1.6135 | 1.6028 | 1.6103 | 1.6156 | 1.6035 | 1.6172 |
| | 7 | 1.7205 | 1.7257 | 1.7554 | 1.7635 | 1.7334 | 1.7877 | 1.7891 | 1.7765 | 1.7898 |
| MRI–PET | 8 | 1.8301 | 1.469 | 1.8652 | 1.7963 | 1.8568 | 1.8373 | 1.8703 | 1.8407 | 1.8658 |
| | 9 | 1.746 | 1.7579 | 1.7582 | 1.8274 | 1.9076 | 1.8856 | 1.9068 | 1.9349 | 1.8943 |
| | 10 | 1.7074 | 1.7367 | 1.7477 | 1.7852 | 1.8616 | 1.8304 | 1.8514 | 1.8564 | 1.8659 |
| | 11 | 1.6882 | 1.7435 | 1.7382 | 1.8276 | 1.9103 | 1.8464 | 1.883 | 1.8975 | 1.8897 |
| MR-T2–SPECT | 12 | 1.7056 | 1.7132 | 1.382 | 1.8724 | 1.8928 | 1.8931 | 1.9341 | 1.713 | 1.9416 |
| | 13 | 1.866 | 1.8576 | 1.4825 | 1.7948 | 1.8062 | 1.8309 | 1.8924 | 1.8869 | 1.8973 |
| | 14 | 1.814 | 1.8224 | 1.8123 | 1.8375 | 1.9058 | 1.8569 | 1.8891 | 1.8612 | 1.8924 |
| | 15 | 1.3545 | 1.7703 | 1.8891 | 1.8627 | 1.809 | 1.8819 | 1.8735 | 1.7543 | 1.935 |
| | 16 | 1.5295 | 1.6545 | 1.5409 | 1.6273 | 1.5826 | 1.6414 | 1.6217 | 1.6147 | 1.6559 |
| Average Value | | 1.7582 | 1.7748 | 1.7495 | 1.8168 | 1.8393 | 1.8381 | 1.8406 | 1.8282 | 1.8421 |
Table 10. Performance evaluation of the fusion methods in the ranking strategy.

| Performance Measure | PCA | DWTPCA | DWT + Fuzzy | CONT | Chaira's IFS | Bala's IFS | Sugeno's IFS | PC-NSCT | Proposed Method |
|---|---|---|---|---|---|---|---|---|---|
| API | 6 | 9 | 7 | 4 | 5 | 3 | 2 | 8 | 1 |
| SD | 8 | 9 | 7 | 5 | 4 | 3 | 2 | 6 | 1 |
| AG | 7 | 9 | 6 | 4 | 8 | 5 | 2 | 3 | 1 |
| SF | 7 | 9 | 6 | 5 | 8 | 4 | 2 | 3 | 1 |
| MSF | 8 | 9 | 6 | 5 | 7 | 4 | 2 | 3 | 1 |
| CC | 5 | 8 | 9 | 7 | 6 | 3 | 4 | 2 | 1 |
| MI | 5 | 7 | 9 | 6 | 8 | 3 | 2 | 4 | 1 |
| FS | 8 | 7 | 9 | 6 | 3 | 4 | 2 | 5 | 1 |
Table 11. Average running time (seconds) of the proposed method with different existing methods.

| | PCA | DWTPCA | DWT + Fuzzy | CONT | Chaira's IFS | Bala's IFS | Sugeno's IFS | PC-NSCT | Proposed Method |
|---|---|---|---|---|---|---|---|---|---|
| Average Value | 0.80 | 0.60 | 1.48 | 17.69 | 0.87 | 0.65 | 0.50 | 36.72 | 1.19 |
