Review

A Survey on Computer-Aided Diagnosis of Brain Disorders through MRI Based on Machine Learning and Data Mining Methodologies with an Emphasis on Alzheimer Disease Diagnosis and the Contribution of the Multimodal Fusion

1. Department of Electrical Engineering, École de Technologie Supérieure (ÉTS), University of Quebec, Montreal, QC H3C 1K3, Canada
2. CoFaMic Research Center, Computer Science Department, Université du Québec à Montréal (UQAM), University of Quebec, Montreal, QC H3C 3P8, Canada
3. Artificial Intelligence Research Group (ERIA), Computer Science Laboratory (LRI), Computer Science Department, University of Badji Mokhtar Annaba (UBMA), BP 12, Annaba 23000, Algeria
4. Department of Electrical and Computer Engineering, Concordia University, Montreal, QC H3G 1M8, Canada
* Author to whom correspondence should be addressed.
Appl. Sci. 2020, 10(5), 1894; https://doi.org/10.3390/app10051894
Submission received: 31 December 2019 / Revised: 24 February 2020 / Accepted: 26 February 2020 / Published: 10 March 2020
(This article belongs to the Special Issue Computer-aided Biomedical Imaging 2020: Advances and Prospects)

Abstract: Computer-aided diagnosis (CAD) systems use machine learning methods that provide a synergistic effect between the neuroradiologist and the computer, enabling an efficient and rapid diagnosis of the patient’s condition. As part of the early diagnosis of Alzheimer’s disease (AD), a major public health problem, the CAD system provides a neuropsychological assessment that helps mitigate its effects. The use of data fusion techniques by CAD systems has proven useful: it allows information relating to the brain and its tissues from MRI to be merged with that of other modalities. This multimodal fusion refines the quality of brain images by reducing redundancy and randomness, which improves the clinical reliability of the diagnosis compared to the use of a single modality. The purpose of this article is first to describe the main steps of the CAD system for brain magnetic resonance imaging (MRI) and to bring together research related to the diagnosis of brain disorders, with an emphasis on AD. The methods most used in the classification and brain region segmentation stages are described, highlighting their advantages and disadvantages. Secondly, on the basis of the problem raised, we propose a solution within the framework of multimodal fusion. In this context, based on quantitative measurement parameters, a performance study of multimodal CAD systems is proposed by comparing their effectiveness with that of systems exploiting a single MRI modality. Advances in information fusion techniques in medical imaging are reviewed, highlighting their advantages and disadvantages. Finally, the contribution of multimodal fusion and the interest of hybrid models are addressed, as well as the main scientific assertions made in the field of brain disease diagnosis.

1. Introduction

In medical imaging research and diagnostic radiology, computer-aided diagnosis (CAD) has attracted major interest and seen substantial development over the last two decades [1,2,3,4,5,6,7,8,9]. The objective of this technology is to support radiologists, using computer systems, in their interpretation of brain images and diagnosis of brain diseases. The CAD system provides a second opinion: it analyzes medical images using pattern recognition and machine learning techniques. This alleviates the radiologist’s fatigue and workload caused by data overload. As a result, this technology can improve diagnostic consistency and accuracy, decreasing the rate of false negatives and helping to estimate the extent of the disease.
Alzheimer’s disease (AD) is one of the brain disorders that is extremely difficult to identify. It is linked to structural atrophy, pathological amyloid deposits and metabolic alterations in the brain [10]. This neurodegenerative disease is the cause of 60% to 70% of cases of dementia [11]; it generally begins slowly and worsens over time. It gradually deteriorates cognitive and behavioral capacities, and the causes of the disease remain unknown [1], with the exception of certain hereditary forms. A CAD system can help to perform an early diagnosis, which is crucial for mitigating the effects of AD. In fact, several diagnostic tools and approaches have been developed to provide measures that make it possible to detect early changes during subclinical periods, clarify the underlying mechanisms and inform neuroprotective interventions aimed at slowing the progression of the disease. As a result, the rising costs are reduced for families and society.
In addition, magnetic resonance imaging (MRI) [12] has long been used to rule out several causes of brain disorders. This technology provides a detailed description of the anatomy, including brain pathology, with high spatial resolution and soft-tissue contrast. It makes it possible to study the structural and chemical correlates of a disease, which improves understanding of the mechanisms involved [13]. In AD, MRI adds a positive predictive value to the diagnosis [5]; solid experiments have shown that changes in brain structure can be detected with structural MRI in elderly subjects with mild cognitive impairment (MCI) [14]. In this context, patients with MCI who later convert to AD are characterized by significant atrophy of the medial temporal lobes, posterior cingulate, and lateral and parietal temporal cortex compared to control subjects or stable MCI patients.
However, the benefits of structural MRI are limited and several problems have arisen as a result. MRI evolved from a two-dimensional (2D) to a three-dimensional (3D) modality, rapidly increasing the amount of data the neuroradiologist must analyze. In addition, the resolution and the signal-to-noise ratio (SNR) have become higher [15]. In this regard, the problem of developing new CAD tools, such as data fusion techniques, has been widely addressed in recent years to reduce the workload, paying particular attention to the non-invasive study of the connectivity between anatomical and functional imaging. This process of data fusion in a multimodal environment generates a more informative merged image that helps diagnosis and forecasting, by combining complementary and redundant information from MRI and from functional modalities such as computerized tomography (CT), single photon emission tomography (SPECT) and positron emission tomography (PET), which serve different objectives in radiology. The resulting fused image is better suited for visual perception and for image processing and analysis tasks [16,17], providing more condensed and relevant information. In addition, instead of storing several multi-source images, a single merged image is kept, which reduces memory costs. It should also be noted that the combination of medical images can often reveal additional clinical information that does not appear in the separate images.
The objective of this review article is to highlight the interest shown in multimodal fusion by researchers in neuroimaging for the diagnosis of brain disorders, in particular AD. It consists of summarizing and examining the main applications, results, perspectives as well as the advantages and disadvantages of different MRI neuroimaging technologies for the diagnosis of brain disorders, with emphasis on the application of multimodal fusion, especially for the diagnosis of AD.
Firstly, this review collects several works related to CAD systems for the diagnosis of cerebral dementia, notably AD. We analyze their proposals, which were introduced with the hope of helping the radiologist properly assess the extent of the disease by providing a second opinion in the form of a computer output. In order to reduce the rate of false negatives and improve the accuracy of the diagnosis, researchers have developed techniques for the two main phases of the CAD system, namely segmentation of the brain regions and classification. In this context, we inspect these artificial intelligence techniques, highlighting their advantages and disadvantages. In addition, we identify the various measurement parameters that have been used to quantitatively assess the performance of the proposed CAD systems. For the classification process, several measurements were considered, such as the sensitivity (SE), which represents the true positive rate; specificity (SP), which estimates the true negative rate; and accuracy (AC), which determines the proportion of true results in the database, whether true positive or true negative. For the same purpose, some works estimated the area under the ROC curve (AUC), which determines the diagnostic validity by combining sensitivity and specificity. Likewise, in the segmentation process various measures have been identified, which mostly take into account the proportions of true and false positives as well as true and false negatives, such as the Tanimoto and Dice coefficients, the Jaccard similarity index, etc.
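As an illustration, the classification measures above follow directly from the confusion counts. The sketch below (plain Python; the labels and predictions are invented toy data, not taken from any cited study) computes SE, SP and AC for a binary normal/abnormal task.

```python
def classification_metrics(y_true, y_pred):
    """SE, SP and AC from binary labels (1 = abnormal, 0 = normal)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    se = tp / (tp + fn)            # sensitivity: true positive rate
    sp = tn / (tn + fp)            # specificity: true negative rate
    ac = (tp + tn) / len(y_true)   # accuracy: proportion of true results
    return se, sp, ac

# Invented toy labels for ten subjects (six truly abnormal)
y_true = [1, 1, 1, 1, 1, 1, 0, 0, 0, 0]
y_pred = [1, 1, 1, 1, 0, 0, 0, 0, 0, 1]
se, sp, ac = classification_metrics(y_true, y_pred)  # SE = 4/6, SP = 3/4, AC = 7/10
```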
Then, secondly, after identifying the gaps raised by mono-modal MRI CAD systems, we examine one of the solutions proposed in the literature: the use of data fusion techniques that can be applied in a multimodal imaging environment. The effectiveness of this approach is linked to its power to reconstruct and predict information missing from the MRI. Therefore, the most used fusion techniques are described, highlighting their advantages and disadvantages. Likewise, some key works in the literature that have used multimodal fusion to improve the performance of conventional CAD systems are described. In addition, a performance evaluation and comparison with single-modal systems is proposed by applying reliability estimation methods such as cross-validation. The measurement parameters are also determined for the purpose of quantitatively evaluating the effectiveness of multimodal CAD systems. Finally, a discussion is presented which highlights the interest of combining several techniques in the framework of hybrid models, the contribution of multimodal fusion and its usefulness in clinical studies.
The rest of the article is organized as follows. In Section 2, we first describe the foundations of a CAD system, then analyze the research work already carried out using mono-modal MRI. In this regard, the methods used in the different phases of the CAD system, including the classification and segmentation of brain regions, are described, summarizing their disadvantages and advantages. We then clarify the problem addressed by the works examined and report the solution proposed in the literature. Consequently, the efforts invested in finding a solution within the framework of multimodal fusion of brain images are presented in Section 3. We review the aspects of data fusion with the aim of providing an overview of the applicability and progress of fusion techniques in medical imaging. In this regard, several research works using multimodal fusion are examined. A performance study of the selected works in the context of experimental classification is presented, and a comparison of the results with those of CAD systems exploiting a single MRI modality is proposed. A discussion then summarizes the disadvantages and advantages of the multimodal fusion methods. Concluding remarks as well as ideas and directions for future research are presented in Section 4.

2. CAD Systems of Brain Disorders Based on MRI Technology

2.1. CAD System Architecture

The architecture of a CAD system associated with a brain image is illustrated in Figure 1, in which several processes are carried out. First, the MRI image is given as input to the CAD system, which selects the training samples. Then, preprocessing and definition of region(s) of interest (ROI) (block A in Figure 1) are performed to eliminate samples not relevant for the diagnosis. An extraction of the characteristic parameters of the voxels (block B) is carried out thereafter. Finally, segmentation and classification are performed. The segmentation (block C, corresponding to Section 2.2) groups the voxels into regions based on the characteristics of the cerebral image, while the classification (block D, corresponding to Section 2.3) assigns the images to one of two classes: normal or abnormal. Various machine learning tools have been used successfully in both of these processes, and many artificial intelligence techniques have been developed in the past two decades.
In the following, we focus on brain region segmentation, a key step in the CAD system whose quality largely determines the success or failure of the next step, i.e., the classification process. In this context, there is a large amount of work on neuroimaging and several studies relating to AD.
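The block structure of Figure 1 (A: preprocessing/ROI, B: feature extraction, C: segmentation, D: classification) can be sketched end to end on a synthetic slice. The fragment below is a deliberately minimal, hypothetical Python/NumPy pipeline: the intensities, thresholds and the final decision rule are invented for illustration and do not correspond to any method surveyed here.

```python
import numpy as np

# Synthetic 2D "slice": zero background with a brighter square of simulated tissue
rng = np.random.default_rng(0)
image = np.zeros((64, 64))
image[16:48, 16:48] = rng.normal(0.6, 0.05, size=(32, 32))

# Block A: preprocessing + ROI definition -- normalize, keep non-background voxels
norm = (image - image.min()) / (image.max() - image.min() + 1e-12)
roi = norm > 0.1

# Block B: feature extraction -- here simply the voxel intensity inside the ROI
features = norm[roi]

# Block C: segmentation -- a two-class intensity threshold over the ROI
labels = np.zeros(norm.shape, dtype=int)          # 0 = outside ROI
labels[roi] = np.where(norm[roi] > features.mean(), 2, 1)

# Block D: classification -- a toy global rule (purely illustrative, not a real marker)
abnormal = bool(labels.mean() > 0.5)
```

In a real CAD system each block is far richer (bias-field correction, texture or shape features, the segmentation and classification methods reviewed below), but the data flow is the same.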

2.2. Segmentation of Brain Regions

Accurate quantification of the volume of brain tissue, particularly cerebrospinal fluid (CSF), gray matter (GM) and white matter (WM) would aid in the diagnosis and understanding of certain neurodegenerative diseases such as AD and Parkinson’s syndrome. In this section, which corresponds to block C in Figure 1, we will detail the advances in research in the context of the brain regions segmentation.

2.2.1. Related-Work to the Segmentation of Brain Regions

In [18] the authors proposed the adaptive fuzzy C-means algorithm (AFCM) for the segmentation of multi-spectral MRI images in 2D and 3D. AFCM models the intensity inhomogeneity as a gain field by gradually varying the intensities in the image space. Performance comparisons were made with the fuzzy and noise-tolerant adaptive segmentation method (FANTASM) in a noisy environment. The misclassification rate and mean-squared error were used as evaluation parameters.
In [19], a modified FCM (MFCM) algorithm is proposed which incorporates both the local spatial context and the non-local information by using a new dissimilarity index instead of the usual distance metric. The efficiency of the algorithm is demonstrated by segmentation experiments and by comparison with other advanced algorithms, namely: standard FCM, spatial FCM (FCMS), FCM with spatial information (FCMSI) and fast generalized FCM (FGFCM). The measurement parameters used to quantitatively assess the performance are: Similarity index, false positive ratio and false negative ratio.
In [20], the authors proposed a method which divides the brain into homogeneous regions for the detection of tumors. The process was split into two stages, pre-segmentation and segmentation, and several techniques were exploited, such as anisotropic filtering and the Markov random field (MRF) stochastic model. The maximum a posteriori (MAP) criterion was used to estimate the MRF realization taking into account the observed dependent data.
In [21] the authors applied the discriminant random fields (DRF) models for the segmentation of brain tumors. A comparison was made with the MRF models using the Jaccard similarity coefficient.
In [22], the authors presented a method for the segmentation of WM tissue lesions (WML). Support vector machines (SVM) were used to integrate characteristics of 4 MRI acquisition protocols to distinguish WML from normal tissue. A visual assessment was performed using two experienced neuro-radiologists. A quantitative validation was also carried out with Pearson correlation, Spearman correlation, coefficient of variation and reliability coefficient.
In [23], the authors proposed the adaptive mean shift (AMS) method to classify voxels in one of the GM, WM and CSF tissues. A comparison was made with the adaptive MAP (AMAP) and maximum posterior marginal-MAP (MPM-MAP) methods. Tanimoto coefficient was applied as an evaluation criterion.
In [24], the authors used sets of contours of multiple sclerosis (MS) lesions taken from segmented MRI images and united their 3D surfaces by spherical harmonics. The objective was to reconstruct the MS lesions in 3D and calculate their volumes. A comparison was made with the slice-stacking technique by applying quantitative measurement parameters such as the misclassification rate and mean-squared error.
In [25], the authors proposed a multi-context wavelet-based thresholding (MCWT) method to classify pixels with GM, WM and CSF tissues. A comparison was made with the wavelet and multigrid wavelet transforms.
In [26], the authors proposed an algorithm based on the spherical wavelet transform. The algorithm is applied to the caudate nucleus and the hippocampus for the study of schizophrenia. The validation, performed using the average maximum error and average minimum error criteria, showed computational efficiency and, compared to the active shape model algorithm, an ability to capture finer shape details.
In [27], the authors introduced a threshold-based scheme that uses level sets (LS) for 3D segmentation of the brain tumor. Two threshold update systems were developed, based on search and adaptation. The experimental results, obtained by applying the Jaccard measure, Hausdorff distance and mean absolute surface distance as evaluation criteria, demonstrated the effectiveness of the method and its performance compared to the region-competition-based method.
In [28], the authors proposed a method based on neighborhood hypergraph partitioning. The experiments demonstrated the proper functioning of the method and its performance compared to the normalized cut (Ncut) algorithm.
In [29], the authors used the adaptive graph cut method optimized in an iterative mode for the automatic segmentation of MRI brain images. A comparison with conventional graph cut and the MRF model was performed using the classification rate evaluation criterion.
In [30], the authors used the FCM algorithm to combine the average filter with the local median filter in order to perform local segmentation of brain MRI volumes. Labeling is achieved region by region using genetic algorithms (GAs), followed by a voxel-level refinement using parallel region growing. The fuzzy model is used both to design the fitness function of the GAs and to guide the region growing. Several measurement parameters were applied, such as the mean, standard deviation, false positive ratio, false negative ratio, similarity index and Kappa statistic.
In [31], the authors proposed a brain tissue segmentation method which aims to calculate the fuzzy membership of each voxel to indicate the degree of partial volume using fuzzy Markov random segmentation. The average error rate was used to assess the performance of the segmentation.
In [32], the authors proposed an adaptive mean-shift algorithm for tissue segmentation in WM, GM and CSF. The Bayesian model was applied to estimate the bandwidth of the adaptive nucleus and to study its impact on the precision of tissue segmentation. A comparison was completed with the hybrid k-NN/AMS model using the Dice and Tanimoto coefficients as an evaluation criterion.
In our previous work, hybrid model based brain tissue segmentation was proposed, for images from patients with AD. The approach mainly uses clustering techniques from fuzzy logic in particular, the possibility theory. In [33], the possibilistic C-means algorithm (PCM) was applied to derive fuzzy maps of the volume of WM, GM and CSF tissues, based on an initial partition of tissue centers provided using the FCM.
To make PCM based tissue quantification more robust to noise and artifacts, the FCM algorithm was replaced in [34] by the bias correction FCM (BCFCM) algorithm. Whereas in [35] the FCM partition was optimized, by a genetic process that uses GA.
To ensure robustness against noise in the segmentation process based on the hybrid possibilistic-fuzzy-genetic model, intensive experiments were proposed in [36], applied to real and synthetic images with high additive noise levels reaching 20%.
For the purpose of improving performance and reducing noise sensitivity, experiments were carried out by applying the fuzzy possibilistic C-means (FPCM) algorithm in [37] and the possibilistic fuzzy C-means algorithm (PFCM) in [17] to derive the fuzzy tissue maps. Comparisons were made with FPCM, PCM, FCM and many hybrid clustering algorithms, by applying the Tanimoto coefficient, Jaccard similarity index, specificity and sensitivity.
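The segmentation overlap measures recurring throughout these works (the Dice coefficient and the Tanimoto coefficient, which for binary masks equals the Jaccard index) are simple set-overlap ratios between a computed mask and a ground-truth mask. A small NumPy sketch on invented 2×4 binary masks:

```python
import numpy as np

def dice(a, b):
    """Dice coefficient: 2|A∩B| / (|A| + |B|)."""
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def tanimoto(a, b):
    """Tanimoto coefficient; for binary masks this is the Jaccard index |A∩B| / |A∪B|."""
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    return np.logical_and(a, b).sum() / np.logical_or(a, b).sum()

# Invented ground-truth and computed masks (not from any cited experiment)
truth = np.array([[1, 1, 0, 0],
                  [1, 1, 0, 0]])
seg   = np.array([[1, 1, 1, 0],
                  [1, 0, 0, 0]])
d, t = dice(truth, seg), tanimoto(truth, seg)  # d = 0.75, t = 0.6
```

Both measures equal 1 for a perfect segmentation and 0 for disjoint masks; Dice weighs the intersection more heavily than Tanimoto/Jaccard.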
Table 1 summarizes certain works that have used segmentation techniques from artificial intelligence and applied to brain images.
The techniques most used in the literature are summarized below.

2.2.2. Segmentation Techniques Proposed in Literature: Description, Advantages and Disadvantages

Despite decades of research, there is no standard method that could be considered effective for all types of medical images. However, a set of ad hoc methods have received a certain degree of popularity; they are presented in Figure 2.

The Region Approach

It is designed to partition images into several classes.
  • Artificial neural networks: ANNs, used in several works related to neuroimaging [38,39,40,41,42,43,44,45,46], are a supervised deterministic method represented by an interconnected group of artificial neurons using a mathematical model to process information. They perform well in complex and multivariate nonlinear domains, such as tumor segmentation, where it becomes difficult to use decision trees or rule-based systems, and they also cope slightly better with noisy data. No assumption about the data distribution is required, as in the case of statistical modeling. However, the learning process is time-consuming, usually relying on gradient-type methods. The representation of knowledge is not explicit in the form of rules or other easily interpretable structures. Initialization may affect the result and can cause overtraining.
  • Genetic algorithms: GAs, exploited in many neuroimaging studies [47,48,49,50,51,52], are a supervised deterministic search-optimization method that exploits the concepts of natural selection. It differs from traditional optimization methods in four points: (1) It is a parallel search over a population of points, thus having the possibility of avoiding being trapped in a locally optimal solution. (2) Its selection rules are probabilistic. (3) It works on the chromosome, a coded version of the potential parameter solutions, rather than on the parameters themselves. (4) It uses the fitness score, obtained from objective functions, without any other derived or auxiliary information. However, its optimization process depends on the fitness function, and it is hard to create good heuristics that truly reflect the goal. It is also difficult to select the initial parameters (the number of generations, the size of the population, etc.).
  • k-means: k-means [53,54,55,56,57] is a deterministic method based on unsupervised learning which divides a set of data into k clusters. It is widely used for brain segmentation, with mainly satisfactory results, to overcome the isolated distribution of pixels inside the image segments. It is simple to implement and fast, in real time and in computation, even with a large number of variables. Unfortunately, the unstable quality of the results prevents its application in the case of automatic segmentation: a degradation of the segmentation quality is generally observed in the automatic case, or when the weight of the pixels in the neighboring local regions is added. It is difficult to predict the value of k, different initial partitions can result in different final clusters, and the algorithm only works well when spherical clusters are naturally present in the original data.
  • Fuzzy C-means: FCM [19,53,58,59,60,61,62,63,64] is an unsupervised deterministic method which represents the advanced version of k-means. It is based on the theory of fuzzy subsets, giving rise to the concept of partial membership based on membership functions. It is widely used in the segmentation and diagnosis of medical images and provides better results for overlapping data. Unlike k-means, where each data point must belong to a single cluster, FCM assigns each data point a fuzzy degree of membership in every cluster, which allows it to belong to several clusters. However, the computation time is considerable, and it often fails to provide consistent results due to the randomness of the initial membership values. In addition, MRI images often contain a significant amount of noise, resulting in serious inaccuracies in segmentation: FCM only takes into account the intensity of the image, which causes unsatisfactory results for noisy images. The counterintuitive form of the class membership functions also limits its use.
  • Mean shift: mean shift [62,65,66,67] is an unsupervised deterministic method which locates the maxima (the modes) of a density function from discrete data sampled from this function. It is based on a non-parametric algorithm which imposes no predefined shape on the clusters and assumes no constraint on their number. It is robust to outliers and able to handle arbitrary feature spaces. However, it is sensitive to the selection of the window size h, which is not trivial: an inappropriate window size may result in the merging of modes or the generation of additional “shallow” modes. It is also computationally costly and does not scale well with the dimension of the feature space.
  • Threshold-based techniques: the easiest way is to convert a grayscale image to a binary one using a threshold value [68]. Pixels lighter than the threshold become white in the resulting image and darker pixels become black. Several improvements have been reported in which the threshold is selected automatically. These techniques are very useful for the binarization of images, an essential task for any type of segmentation. They work well for fairly noisy images, do not require prior image information, and are useful when the brightness of objects differs significantly from that of the background. Their execution speed is quite fast, with minimal computational complexity. However, they do not work properly for all types of MRI brain images, due to the large variation between foreground and background intensities, which makes selecting the appropriate threshold value a tedious task. In addition, their performance degrades for images without apparent histogram peaks or with a wide, flat valley [69].
  • Region growing method: it groups pixels together into homogeneous regions, including growing, splitting and merging regions. It correctly separates regions with the same predefined characteristics, especially when the criteria for region homogeneity are easy to define [69,70,71,72,73]. However, it is very sensitive to noise, costly in memory and computationally sequential. In addition, it requires manually selecting a seed point and deleting all the pixels connected to the preliminary seed by applying a predefined condition.
  • Mixture of laws (Gaussian mixture models): a parametric probabilistic method which assigns each observation to the most probable class. The classes follow a probability distribution (law), normal in the case of Gaussian mixture models (GMM). GMMs [74,75,76,77,78] require few parameters, estimated through a simple likelihood function; these parameters can be estimated by adopting the EM algorithm in order to maximize the log-likelihood function. However, GMMs assume that each pixel is independent of its neighbors, which does not take into account the spatial relationships between neighboring pixels [79]. Also, the prior distribution does not depend on the pixel index.
  • Markov random field: a non-parametric probabilistic method which models the interactions between a voxel and its neighborhood. In the Markov random field (MRF), the local conditional probabilities are computed via the Hammersley–Clifford theorem, which allows passing from a probabilistic representation to an energy representation via the Gibbs field. MRFs [80,81,82,83] are characterized by their statistical properties; undirected graphs can succinctly express certain dependencies that Bayesian networks cannot easily describe. They are effectively applied to the segmentation of MRI images, for which there is no natural directionality associated with variable dependencies. In MRFs, the computation of the normalization constant Z generally requires a sum over a potentially exponential number of assignments; it is an NP-hard problem. In addition, many undirected models are difficult to interpret or intractable, requiring approximation techniques.
  • Hidden Markov models: similar to the MRF, this is a non-parametric probabilistic method. In hidden Markov models (HMM), a posterior probability of a label field can be expressed from an observation thanks to Bayes’ theorem. HMMs [84,85,86,87,88] make it possible to model arbitrary characteristics of observations, allowing knowledge specific to the problem at hand to be injected into the model in order to produce an ever finer resolution of spectral, spatial and temporal data. In the case of HMMs, the types of prior distributions that can be placed on hidden states are severely limited, and it is not possible to predict the probability of seeing an arbitrary observation. In practice, this limitation is often not a problem, as many common uses of HMMs do not require such probabilities.
  • Support vector machines: SVMs [21,89,90,91,92] are a non-parametric supervised method whose objective is to find an optimal decision boundary (hyperplane) which separates the data into groups. The training process depends on various factors such as the penalty parameter C or the kernel used, such as the linear or polynomial kernel, the radial basis function (RBF) and, as a particular case, the Gaussian kernel. The generalization performance of this method is high, especially when the dimension of the feature space is very large. It makes it possible to train non-linearly generalizable classifiers in large spaces using a small training set, and it minimizes the number of classification errors for any set of samples. However, it requires a long training time and large memory space for data storage. Moreover, the optimality of the solution can depend on the kernel used; unfortunately, there is no theory allowing one to determine a priori which kernel will be best for a concrete task. Also, SVMs assume that the data are independently and identically distributed, which is not appropriate for the segmentation of noisy medical images.
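To make the FCM family surveyed above concrete, the sketch below implements the two standard FCM update steps (weighted centers, then memberships) on a 1-D intensity vector. This is a minimal NumPy illustration with invented data mimicking three tissue intensity groups; it deliberately omits the spatial, bias-correction and possibilistic extensions discussed in Section 2.2.1.

```python
import numpy as np

def fcm(x, c=3, m=2.0, iters=50, seed=0):
    """Plain FCM on a 1-D intensity vector x.
    Returns (centers, u) where u has shape (c, len(x)) and its columns sum to 1."""
    rng = np.random.default_rng(seed)
    u = rng.random((c, x.size))
    u /= u.sum(axis=0)                        # random initial fuzzy partition
    for _ in range(iters):
        um = u ** m
        centers = (um @ x) / um.sum(axis=1)   # fuzzily weighted cluster centers
        d = np.abs(x[None, :] - centers[:, None]) + 1e-12
        u = d ** (-2.0 / (m - 1.0))           # u_ik proportional to d_ik^(-2/(m-1))
        u /= u.sum(axis=0)                    # renormalize memberships
    return centers, u

# Invented intensities mimicking three tissue groups (e.g. CSF / GM / WM proxies)
rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(0.1, 0.02, 50),
                    rng.normal(0.5, 0.02, 50),
                    rng.normal(0.9, 0.02, 50)])
centers, u = fcm(x, c=3)
```

The fuzziness exponent m controls how much clusters overlap; the hybrid variants above (PCM, FPCM, PFCM, BCFCM) replace or augment the membership update while keeping this alternating-update structure.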

The Contour Approach

The primitives to be extracted are lines of contrast between relatively homogeneous regions of different gray levels. We can cite the derivative models and the scale-space models.
  • Gaussian Scale-Space representation: this concept [93,94,95,96] makes it possible to handle image structures at different scales, generally by smoothing. The representation is obtained by solving a linear diffusion equation. Its transparent and natural way of handling scales at the data level has made the concept popular. However, it is sensitive to signal noise, since smoothing is applied without an averaging filter. In addition, spurious features must be accounted for, because high-frequency noise introduces local extrema into the signal.
  • Derived models: they make it possible to model image zones (contours) by assuming that the digital image samples a scalar function with narrow, differentiable support. In this case, the intensity variations of the image are characterized by a 3D variable representing the light intensity corresponding to illumination (shadows), changes in orientation or distance, changes in surface reflectance, changes in ray absorption, etc.
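The scale-space idea can be illustrated with a short sketch (assumptions: a 1-D signal and a sampled Gaussian kernel; this is not taken from the cited works). A stack of progressively smoothed versions of a noisy step edge is built: fine scales retain the noise extrema, while coarse scales keep only the edge, which is exactly the local-extrema issue discussed above:

```python
import numpy as np

def gaussian_kernel1d(sigma, radius=None):
    """Sampled, normalized 1-D Gaussian kernel."""
    if radius is None:
        radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1, dtype=float)
    k = np.exp(-x**2 / (2 * sigma**2))
    return k / k.sum()

def scale_space(signal, sigmas):
    """Stack of progressively smoothed versions of `signal` (one row per scale)."""
    return np.stack(
        [np.convolve(signal, gaussian_kernel1d(s), mode="same") for s in sigmas]
    )

# A noisy step edge: fine scales keep the noise, coarse scales keep only the edge.
rng = np.random.default_rng(0)
signal = np.concatenate([np.zeros(50), np.ones(50)]) + 0.1 * rng.standard_normal(100)
stack = scale_space(signal, sigmas=[1.0, 2.0, 4.0])
```

In a full linear scale-space the smoothed family solves the heat (diffusion) equation; repeated Gaussian convolution, as here, is the discrete counterpart.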

The Structural Approach

It takes into account the structural and contextual information of the image.
  • Morphological gradient: this is the difference between the dilation and the erosion of an image [97,98]. The value of a pixel then corresponds to the contrast intensity in its immediate neighborhood. Generally, the extensive and anti-extensive operators exploited by gradient masks are effective in detecting gray-level intensity transitions at object borders. However, this technique suffers from smearing of edge detail, and it is particularly sensitive to white Gaussian noise concentrated in the high-frequency part of the signal.
  • Watershed line: it interprets an image as a height profile flooded from its regional minima, so that the lines where the flooded areas meet represent the watersheds [99,100]. It makes it possible to incorporate the a priori knowledge and the intervention of the clinician, which facilitates visual evaluation at a higher level. However, it is difficult to implement and computationally slow, and over-segmentation of images is frequent.
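The morphological gradient just described can be sketched in a few lines (a toy illustration over a flat 3x3 structuring element, not any cited implementation): the gradient is the per-pixel difference between the neighborhood maximum (dilation) and minimum (erosion), so it is zero on homogeneous regions and large at borders:

```python
import numpy as np

def morphological_gradient(img):
    """Dilation minus erosion over a 3x3 neighbourhood (flat structuring element)."""
    padded = np.pad(img, 1, mode="edge")
    # Collect the 9 shifted views of the image covering the 3x3 neighbourhood.
    views = [
        padded[i:i + img.shape[0], j:j + img.shape[1]]
        for i in range(3) for j in range(3)
    ]
    stack = np.stack(views)
    return stack.max(axis=0) - stack.min(axis=0)

# A bright square on a dark background: the gradient is non-zero only at the border.
img = np.zeros((7, 7), dtype=int)
img[2:5, 2:5] = 10
grad = morphological_gradient(img)
```

The sensitivity to high-frequency noise mentioned above follows directly from this construction: a single noisy pixel perturbs the max or min of all nine neighborhoods that contain it.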

The Form-Based Approach

It searches for areas that match a shape given a priori.
  • Deformable models: they generate curves or surfaces (from a simple image), or hyper-surfaces (in the case of larger images), using internal and external forces to delimit object boundaries and deform toward them. We can distinguish parametric models (active contours or snakes) [101,102,103,104,105] and geometric models [106]. They are robust to noise and spurious edges thanks to their ability to generate closed parametric surfaces or curves. They are simple to implement in the continuous setting and achieve sub-pixel accuracy, a property highly desirable for medical imaging applications, and border elements are easily integrated into a coherent mathematical description. They are able to expand or contract over time within an image [52,107]. However, they risk producing shapes whose topology is inconsistent with the real object when applied to noisy images with ill-defined borders.
  • Atlas: it matches an image segmented by an automated algorithm to a reference image (atlas) [108,109]. These techniques take into account a priori knowledge of brain structures and treat segmentation as a registration problem. They are used in clinical practice for computer-assisted diagnosis and offer a standard framework for detecting morphological properties and differences between patients. They allow segmentation even when there is no well-defined relationship between pixel intensities and the associated regions. However, building an atlas takes time, and objective validation is difficult to produce, because atlas segmentation is precisely used when the gray-level intensity information is not sufficient.
  • Wavelets: the histogram threshold is extracted automatically from the image by the wavelet transform, and threshold segmentation is carried out by exploiting the multi-scale characteristics of the transformation [25,56,110,111,112]. It preserves the sharpness of the contours and provides localized frequency information about a signal, which is beneficial for segmentation. However, the overall threshold value is not constant, which makes the transformation shift-sensitive. A transformation of dimension greater than 1 suffers from poor directionality, as the transform coefficients reveal only a few feature orientations in the spatial domain. In addition, no phase information is available for a signal or vector with complex values; it is computed by applying real and imaginary projections.
  • Spherical harmonics: they provide solutions of the Laplace equation expressed in a spherical coordinate system [24,113,114,115]. A basis of orthogonal functions is created, which ensures the uniqueness of the decomposition of a shape on the unit sphere, so that any differentiable, finite-energy function defined on the sphere can be approximated by a linear combination of spherical harmonics. Estimating the harmonic coefficients makes it possible to model the shape with a level of detail tied to the level of the decomposition, whose calculation is fast. However, certain continuity constraints should be included when estimating the coefficients. In addition, the results of shape reconstruction from the harmonic decomposition are poor when the missing data are concentrated in one area [116].
  • Level set: it represents the segmentation contour as the zero level set of a smooth higher-dimensional function [108,117,118,119,120,121]. There are two types of methods: geometric and geodesic. Its strong points are the ease of following shapes that change topology; good results for weak and variable signal-to-noise ratios and for non-uniform intensities; the ability to handle any cavity, concavity, convolution, split or merge; and the fact that numerical calculations involving curves and surfaces can be performed on a fixed Cartesian grid without having to parameterize the objects. However, object boundaries are not always clearly defined by strong image gradients or significant changes in the intensity distribution; this is common in several medical applications, for which image data often suffer from low contrast or tissue noise. In addition, for large images the execution speed can be very slow, and manual adjustment of the parameters is required to obtain optimal results. The method can also be trapped by an undesirable local minimum, which requires additional regularization to reach the desired minimum.
  • Edge detection techniques: they try to locate points with more or less abrupt changes in gray level [122]. Their reasoning is close to human perception and works well for images with good contrast [123]. However, performance degrades in the case of poorly demarcated or numerous edges.
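The gradient-based edge detection just described can be sketched with the classical Sobel operator (an illustrative choice; the cited works do not necessarily use it). The two masks estimate the horizontal and vertical derivatives, and their magnitude peaks at intensity transitions:

```python
import numpy as np

SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
SOBEL_Y = SOBEL_X.T

def convolve2d(img, kernel):
    """Naive 'same' 2-D convolution with edge padding (fine for small kernels)."""
    k = kernel.shape[0] // 2
    padded = np.pad(img, k, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(
                padded[i:i + 2 * k + 1, j:j + 2 * k + 1] * kernel[::-1, ::-1]
            )
    return out

def sobel_magnitude(img):
    """Gradient magnitude from the two Sobel derivative estimates."""
    gx = convolve2d(img, SOBEL_X)
    gy = convolve2d(img, SOBEL_Y)
    return np.hypot(gx, gy)

# A vertical step edge: the response is zero on flat regions and peaks at the step.
img = np.zeros((5, 8))
img[:, 4:] = 1.0
mag = sobel_magnitude(img)
```

On a poorly contrasted or noisy image the same magnitude map becomes weak or cluttered, which is the degradation noted above.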

The Theory of Graphs

A graph is a mathematical structure used to model pairwise relations between the objects of a set.
  • Hypergraph: a generalization of the graph in which hyper-edges may link three or more vertices, which is advantageous for processing large data [124,125]. The derived concept of the cross family (intersection of hyper-edges) suits the problem of multi-level segmentation and gives good results. In addition, it provides more meaningful and more robust edge maps. However, its algorithms are quite complex [126].
  • Graph cut: the image is considered as an undirected graph whose pixels represent the nodes and where the distance between neighboring pixels forms an edge. A weight is assigned to each edge so that the weight vectors characterize the segmentation parameters [29,127,128,129,130,131]. No initialization is required, global optimality of the solution is guaranteed, and the method is easy to execute and delivers precise results. It can integrate constraints and approximate continuous cut metrics with arbitrary precision, and it is applicable to highly textured, noisy or color images, complex backgrounds, etc. However, it is limited to binary segmentation and to a special class of energy functionals.
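The graph-cut formulation can be illustrated on a toy 1-D "image" (made-up unary and pairwise costs; a didactic sketch, not a production implementation). Each pixel is linked to a source and a sink by unary-cost edges and to its neighbours by smoothness edges; the minimum cut, found here via Edmonds-Karp max-flow, yields the binary labelling:

```python
from collections import deque

def max_flow(capacity, source, sink):
    """Edmonds-Karp max-flow; returns the nodes on the source side of the min cut."""
    n = len(capacity)
    flow = [[0] * n for _ in range(n)]

    def bfs():
        parent = {source: None}
        q = deque([source])
        while q:
            u = q.popleft()
            for v in range(n):
                if v not in parent and capacity[u][v] - flow[u][v] > 0:
                    parent[v] = u
                    q.append(v)
        return parent

    while True:
        parent = bfs()
        if sink not in parent:
            # Min cut = nodes still reachable in the residual graph.
            return set(parent)
        # Find the bottleneck along the augmenting path, then push flow along it.
        bottleneck, v = float("inf"), sink
        while parent[v] is not None:
            u = parent[v]
            bottleneck = min(bottleneck, capacity[u][v] - flow[u][v])
            v = u
        v = sink
        while parent[v] is not None:
            u = parent[v]
            flow[u][v] += bottleneck
            flow[v][u] -= bottleneck
            v = u

# Four pixels with intensities; the source favours "bright", the sink "dark".
pixels = [9, 8, 2, 1]
n = len(pixels)
SRC, SNK = n, n + 1
cap = [[0] * (n + 2) for _ in range(n + 2)]
for i, p in enumerate(pixels):
    cap[SRC][i] = p           # unary term: cost of labelling pixel i "dark"
    cap[i][SNK] = 10 - p      # unary term: cost of labelling pixel i "bright"
    if i + 1 < n:             # pairwise smoothness term between neighbours
        cap[i][i + 1] = cap[i + 1][i] = 2
source_side = max_flow(cap, SRC, SNK)
labels = [1 if i in source_side else 0 for i in range(n)]  # 1 = bright
```

The smoothness weight (2 here) is what restricts the method to the special class of energies mentioned above: only certain pairwise terms yield graphs whose min cut equals the energy minimum.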
In Table 2, some visual segmentation results are reported.

2.2.3. Overview of Software Toolkits for Segmentation of Brain Images

Due to the increasing volumes of medical images, more efficient segmentation software toolkits have been developed, and several free, powerful, cross-platform libraries for brain image informatics, image analysis (processing, registration, segmentation, …) and three-dimensional visualization are available in open access to physicians, researchers and application developers. Below is an overview of some of these tools.

FMRIB Software Library

The FSL (www.fmrib.ox.ac.uk/fsl) [137] offers statistical tools to analyze brain imaging data from structural MRI, functional MRI (fMRI) and diffusion tensor imaging (DTI). The majority of these tools are accessible via a GUI. We can cite FAST, which performs automatic segmentation into different tissue types and corrects the bias field; FLIRT, which performs linear inter- and intra-modal registration; MIST, for multimodal image segmentation; SUSAN, to attenuate non-linear noise; and BET/BET2, to extract the brain from non-brain tissue and to model the surfaces of the skull and scalp.

Insight segmentation and registration ToolKit

The ITK (www.itk.org) is an open-source, cross-platform library for the processing, segmentation and registration of medical images. It contains algorithms programmed in C++ and wrapped for Python. The implementation applies generic programming through C++ templates, and the CMake build environment is used to manage the configuration process. It contains scripts for image processing, such as computing the gradient of an image after Gaussian filtering, and for the development of segmentation methods such as region growing.
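As a minimal illustration of the region-growing idea mentioned above (a toy sketch in plain Python, not ITK's actual implementation; the function name `region_grow` is invented for this example), a region is grown from a seed pixel by accepting 4-connected neighbours whose intensity stays within a tolerance of the seed value:

```python
from collections import deque

def region_grow(img, seed, tol):
    """Grow a region from `seed`, adding 4-connected pixels whose intensity
    differs from the seed intensity by at most `tol`."""
    rows, cols = len(img), len(img[0])
    seed_val = img[seed[0]][seed[1]]
    region = {seed}
    frontier = deque([seed])
    while frontier:
        r, c = frontier.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and (nr, nc) not in region
                    and abs(img[nr][nc] - seed_val) <= tol):
                region.add((nr, nc))
                frontier.append((nr, nc))
    return region

# A bright 2x2 blob in a dark image: growing from inside it recovers exactly the blob.
img = [[0, 0, 0, 0],
       [0, 9, 9, 0],
       [0, 9, 9, 0],
       [0, 0, 0, 0]]
blob = region_grow(img, seed=(1, 1), tol=1)
```

ITK's confidence-connected and connected-threshold filters elaborate on this same flood-fill scheme with statistically derived acceptance criteria.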

ITK-SNAP

The ITK-SNAP (http://www.itksnap.org/) is a software application that offers semi-automatic segmentation of three-dimensional medical images. It applies an active contour approach and supports different tasks, namely: seamless three-dimensional image navigation; manual delineation of anatomical regions of interest in three orthogonal planes simultaneously; multiple 3D image formats, including NIfTI and DICOM; simultaneous, linked display and segmentation of several images; color, multi-channel and time-varying images; and much more.

FreeSurfer

The FreeSurfer (https://surfer.nmr.mgh.harvard.edu/) is an open-source software package developed for processing, visualizing and analyzing structural and functional neuroimaging data from cross-sectional or longitudinal studies. Some of the main tasks include: skull stripping, image registration, subcortical and cortical segmentation, cortical surface reconstruction, cortical thickness estimation, longitudinal processing, fMRI analysis, tractography, the Freeview visualization GUI and much more. Among its registration tools, we can cite mri_robust_register, mri_ca_register, mri_robust_template, bbregister, mri_cvs_register, mri_em_register, and others.

Analysis of Functional NeuroImages

The AFNI (https://afni.nimh.nih.gov/) is free, open-source software for research purposes, developed with support from the National Institute of Mental Health. Its algorithms are programmed in C, Python, R and shell scripts. It allows processing, analyzing and displaying anatomical and functional MRI data, and runs on Unix systems with X11 and Motif displays.

3D Slicer

The 3D Slicer (https://www.slicer.org/) is an open-source software platform for medical image informatics that supports image processing and three-dimensional visualization. SlicerDMRI is an extension of 3D Slicer that provides enhanced diffusion magnetic resonance imaging (dMRI) software. It supports different tasks, namely: loading DICOM and nrrd/nhdr dMRI medical image data; loading and saving tractography in the new DICOM format; visualization and registration of multimodal data using the tools of 3D Slicer; and much more.

2.2.4. Critical Discussion about Segmentation Techniques

Segmentation is a key step that determines the success or failure of the classification process in the CAD system. However, it remains an arduous task, due to the complexity of medical data and the diversity of artifacts: the low signal-to-noise ratio, the uncertain boundaries in the images, the great variability of tissues within the same population, and the artifacts caused by patient movement or short data acquisition times.
Despite decades of intensive research in brain region segmentation, there is no reliable general theory applicable to all types of images; no standard method is established or considered universally effective. Researchers have therefore devised a set of ad hoc methods, some of which have gained a certain popularity. These segmentation techniques can be split into five categories: those based on shape, contour, region, graph theory and the structural approach.
Certain conclusions can be drawn concerning these methods. The neural model suffers from complexity regarding the choice of topology, appropriate learning and the generalization of the network; it would only be chosen if no prior distribution is required and no very high-quality object information is needed. The MRF has been widely exploited due to its use of spectral, spatial, textural, contextual and prior image properties; however, its implementation is very complex and it does not allow the integration of shape. The watershed model can be applied to the segmentation of medical images, but intensive research is needed to understand its mechanism. By evolving several models simultaneously, the use of GA makes it possible to remedy the problem of the local minima observed in deformable models, as well as pose estimation and model initialization. The strong point of clustering methods based on fuzzy set theory lies in resolving the ambiguities at region borders; within hybrid models, they are combined with neural networks, the MRF model and histogram thresholding methods.
The segmentation performance also depends on the homogeneity measures, which must be taken into account in the analysis of complex regions. Several have been used in the literature: spectral, spatial, texture, shape, scale, size, compactness, contextual, temporal and prior knowledge. Spectral measurement was the most primitive; however, it is unable to process high-resolution imagery. Texture measurement has also been widely used, since it simultaneously benefits from spectral and spatial properties, but it often does not provide perfect segmentation. Therefore, to properly estimate the threshold of homogeneity or heterogeneity of brain regions, it is beneficial to combine all or most of the measures for better segmentation results [138,139,140,141]. For example, the benefit of integrating prior knowledge and contextual information has opened up promising research directions in the segmentation of medical images [47,142].
In addition, the information scale, which is important for good segmentation performance, is unfortunately selected manually in most existing works. It would also be relevant to propose a quantitative analysis of the methods for assessing segmentation.
Moreover, we noticed that some studies have proven the value of hybrid models that integrate the advantages of several intelligent methods derived from soft computing in order to solve certain problems encountered in brain region segmentation, in particular combinations of fuzzy clustering algorithms, neural networks, stochastic models and bio-inspired optimization algorithms. The value of this hybridization is revealed where medical images are subject to different artifacts and noises that cause disconnected and indistinct boundaries.
In the following, most of the advantages and disadvantages of the techniques proposed for the classification of brain images (block D in Figure 1) are brought together. The existing works reported in the literature on the diagnosis of brain diseases, including AD, are then summarized.

2.3. Classification of Brain Images

Several CAD systems have been proposed to distinguish patients affected by cerebral dementias, in particular those used to predict AD and to distinguish it reliably from normal aging. In this section, which corresponds to block D in Figure 1, we detail the research advances in the context of brain image classification. Table 3 explores certain AD-related works [7,143,144,145,146,147,148,149,150,151,152,153,154,155,156] described in the literature which have applied classification approaches from artificial intelligence and pattern recognition.

2.3.1. Classification Techniques Proposed in Literature: Description, Advantages and Disadvantages

Below is an effort to bring together most of the classification techniques (see Figure 3) used for the diagnosis of AD, emphasizing their advantages and disadvantages.

Techniques based on Supervised Learning

  • Artificial neural networks: ANNs [15,157,158,159,160,161,162,163] have been widely applied in neuroimaging as classifiers to distinguish new test data. They are universal function approximators, able to approximate any function with arbitrary precision, and flexible nonlinear models suited to complex real-world applications. They are self-adapting to the data, without explicit specification of the functional or distributional form of the underlying model, and they are able to estimate the posterior probabilities necessary to establish classification rules and statistical analyses. However, the learning time is high for large ANNs, and adjusting the parameters to be minimized requires a lot of computation.
  • k-nearest neighbors: the k-NNs proposed in many neuroimaging studies [164,165,166,167] classify a test sample into the class most frequently represented among the k closest training samples. In the case of a tie between two or more classes, the sample is assigned to the class with the minimum average distance. This classifier is powerful and simple to implement, and it provides precise distance and weighted-average information about the pixels. However, its efficiency degrades for large-scale data due to its "lazy" learning algorithm, the choice of k affects the classification performance, and it is slow with a high memory cost.
  • Gaussian mixture model: the GMM suggested by many neuroimaging researchers [76,168,169,170,171,172] is easy to implement, and effective and robust due to its probabilistic basis. It does not require much time, even for large data sets. However, this classifier does not exclude exponential functions, and its ability to follow trends over time is slow.
  • Support vector machines: SVMs used in several works [6,159,173,174,175,176] have high generalization performance, especially when the dimension of the feature space is very large. These machines offer the possibility of training generalizable nonlinear classifiers in large spaces using a small training set, and they minimize the number of classification errors for any set of samples. However, SVM learning is slow and its implementation requires substantial computation time, as well as a high memory cost for storing the data. No approved method exists to determine a priori the best kernel for a concrete task, so the optimality of the solution can depend on the chosen kernel.
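As a concrete illustration of the supervised classifiers above, a minimal k-NN can be sketched as follows (toy 2-D points standing in for feature vectors extracted from brain images; labels and data are invented for this example, not taken from the cited studies):

```python
from collections import Counter

def knn_predict(train, point, k=3):
    """Classify `point` by majority vote among its k nearest training samples.
    `train` is a list of (feature_vector, label) pairs."""
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    neighbours = sorted(train, key=lambda s: dist(s[0], point))[:k]
    votes = Counter(label for _, label in neighbours)
    return votes.most_common(1)[0][0]

# Two toy clusters standing in for features of healthy vs AD subjects.
train = [((0.0, 0.0), "healthy"), ((0.1, 0.2), "healthy"), ((0.2, 0.1), "healthy"),
         ((1.0, 1.0), "AD"), ((0.9, 1.1), "AD"), ((1.1, 0.9), "AD")]
label = knn_predict(train, (0.95, 1.0), k=3)
```

The "lazy" character criticized above is visible here: all the work (sorting by distance over the whole training set) happens at prediction time, which is why cost grows with the data size.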

Techniques based on Unsupervised Learning

  • Self-organizing map: the SOM [112,176,177] is a type of ANN that produces a discrete, low-dimensional representation of the input space of the training samples. This classifier is simple to implement, easy to understand, and capable of handling various classification problems while providing a useful, interactive and intelligible summary of the data. However, despite the ease of viewing the distribution of input vectors on the map, it is difficult to properly assess the distances and similarities between them. In addition, if the output dimension and the learning algorithms are selected incorrectly, similar input vectors may not always be close to each other, and the trained network may converge to local optima.
  • Fuzzy C-means: FCM [177,178,179,180,181] determines a degree of membership of the data in each class. However, certain parameters must be set a priori, such as the initial partition, the number of classes and their centroids.
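The FCM update scheme can be sketched as follows (a minimal 1-D illustration with a deterministic centroid initialization, not any of the cited implementations): memberships and centroids are updated alternately, with each data point belonging to every cluster to some degree:

```python
import numpy as np

def fcm(data, n_clusters=2, m=2.0, n_iter=50):
    """Fuzzy C-means on 1-D data: alternate membership and centroid updates.
    Centroids are initialized deterministically over the data range."""
    centroids = np.linspace(data.min(), data.max(), n_clusters)
    for _ in range(n_iter):
        dist = np.abs(data[:, None] - centroids[None, :]) + 1e-12
        inv = dist ** (-2.0 / (m - 1))            # standard FCM membership formula
        u = inv / inv.sum(axis=1, keepdims=True)  # each row sums to 1
        w = u ** m
        centroids = (w.T @ data) / w.sum(axis=0)  # fuzzy-weighted cluster means
    return u, centroids

# Two well-separated 1-D "intensity" clusters around 1.0 and 5.0.
data = np.array([1.0, 1.1, 0.9, 5.0, 5.1, 4.9])
u, centroids = fcm(data)
```

The a priori parameters criticized above appear explicitly in the signature: the number of classes, the fuzzifier m and the initial centroids all have to be fixed before iteration starts.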

2.3.2. Critical Discussion about Classification Techniques

Existing work reported in the literature has shown that the classification of brain images is possible via supervised techniques such as ANN, Bayesian networks, k-NN, GMM, HMM, decision tree induction, rule-based classification, PCA and SVM [88,166,182], and via unsupervised techniques such as SOM and FCM. In reality, unsupervised classification, which does not require training data, has not been widely used in CAD systems, due to the specificity of brain images, for which the CAD system should be trained according to the ground truth or clinical evidence. Supervised classification, on the other hand, has been the most widely adopted because, before the clinical test, the CAD system learns characteristic values from the data chosen for training, teaching the classifier the labels of the target class by assigning binary values (one for the target class and zero for the second class). In this case, in order to improve robustness, the system should be trained with a sufficient amount of training data in order to avoid the problem of overfitting.
Several researchers have been interested in the application of hybrid models, which aim to combine the advantages of different intelligent "soft computing" techniques within the same system, or to combine the relative strengths of different classifiers and apply them in sequence so that the overall accuracy is maximized, which allows greater flexibility in modeling dynamic phenomena. However, the computational cost of these systems is high.
Unfortunately, single-modality MRI-based CAD systems using these classification techniques have limited performance and cannot provide comprehensive and accurate information, especially in real applications where noise and artifacts are concentrated. In neuroimaging, other modalities can provide additional information, whose use is generally necessary for a relevant diagnosis of cerebral dementia. This information is combined using fusion techniques from artificial intelligence that provide, for human visual perception, a fused image carrying additional and useful clinical information that does not appear in the separate images.
Thus, the efforts made to find a solution within the framework of multimodal fusion are presented in the following section. We summarize some works related to multimodal fusion, with an experimental performance study and a comparison with systems using a single MRI modality for diagnosis. We also provide an overview of the applicability and progress of information fusion techniques in medical imaging, highlighting the disadvantages and advantages of the methods suggested by researchers in the context of multimodal fusion.

3. CAD Systems of Brain Disorders Based on Multimodal Fusion

3.1. Motivation for the Application of Multimodal Fusion

Given its clinical accessibility, magnetic resonance imaging has been widely used as a non-invasive tool for diagnosing brain diseases because it does not use ionizing radiation, which makes it safe. However, MRI is sensitive to movement, which limits its effectiveness, especially in the diagnosis of mobile organs. To overcome this problem and obtain better performance from the CAD system, several researchers have attempted to combine MRI with other modalities using multimodal fusion. With this technology, one can predict and reconstruct missing information that is not available in the MRI, and extract additional characteristics not visible in the MRI images.
Consider, for example, multimodal CT/MRI fusion: thanks to the CT image, dense structures (bones and implants) are visualized with less distortion, although physiological changes are not localized, while the MRI image detects normal and pathological information in the soft tissue, although information relating to the bones is not captured [183]. In MRI-T1/MRI-T2 fusion, the T1-weighted MRI provides details on the anatomical structures, while the T2-weighted MRI detects a greater contrast between normal and abnormal tissues. Added to all of this is MRI/PET fusion, in which functional information is extracted from the PET image; this information locates the metabolic changes caused by the growth of abnormal cells before an anatomical abnormality appears, while the MRI image, thanks to its high resolution, provides anatomical information about the regions (or tissues) affected by the disease.
MRI/PET fusion has been widely used for the diagnosis of AD. Recent studies [184,185,186,187,188] have shown that this fusion effectively contributes to accurately interpreting the location and extent of AD with combined information. In fact, MRI measures the early structural changes in the medial temporal lobe, in particular the entorhinal cortex and the hippocampus, while FDG (FluoroDeoxyGlucose) PET [189] makes it possible to observe, in AD patients, the reduction of glucose metabolism in the parietal, posterior cingulate and temporal regions of the brain [190].
Table 4 reports some work related to brain disease diagnostic systems [183,191,192,193,194,195,196,197,198,199,200,201,202], while Table 5 summarizes some CAD systems related to AD [10,16,17,184,185,186,187,188,203,204,205,206,207,208,209,210,211], with a comparative study against systems using only MRI, for the purpose of exploring the efficiency of multimodal fusion. In this context, researchers have proposed data fusion techniques from artificial intelligence, applied in a multimodal imaging environment, in order to create an improved fused image better suited to image processing tasks such as segmentation and diagnosis. The most widely used fusion techniques in the literature are summarized below.

3.2. Multimodal Fusion Techniques Proposed in Literature: Description, Advantages and Disadvantages

Data fusion techniques (see Figure 4) [212,213,214,215] can be classified into three categories, depending on the level at which fusion is performed: the pixel (imaging sensor) level, the level of functional parameters (features) and the decision level.

3.2.1. Spatial Domain Techniques

  • Principal components analysis: PCA, used in many works [216,217,218,219,220,221,222,223], carries out a linear orthogonal transformation of a multivariate data set containing correlated variables in N dimensions into a new set of uncorrelated variables in M dimensions of smaller size. The sought transformation parameters are obtained by minimizing the error covariance introduced by neglecting N − M of the transformed components. This technique is very simple and effective; it benefits from fast processing times with high spatial quality, selects the optimal weighting coefficients according to the information content, removes the redundancy present in the input images, and compresses a large amount of input data without much loss of information. However, a strong correlation between the input images and the fused image is necessary, and the fused image quality is generally poor, with spectral degradation and color distortion.
  • Hue-intensity-saturation: HIS, used in many works [224,225,226,227,228], converts a color image from RGB space (red, green and blue) into the HIS color space. The intensity band (I) is replaced by a high-resolution panchromatic image, then converted back to the original RGB space together with the previous hue band (H) and saturation band (S), which creates the HIS fused image. It is very simple, computationally efficient, and its processing time is fast; it provides high spatial quality and a better visual effect. The change in intensity has little effect on the spectral information and is easy to manage. However, it suffers from artifacts and noise which tend to weaken the contrast, it only processes multi-spectral bands, and it results in color distortion.
  • Brovey transformation: this is a combination of arithmetic operations which normalize the spectral bands before multiplying them by the panchromatic image. It retains the corresponding spectral characteristics of each pixel and transforms all the luminance information into a high-resolution panchromatic image. This technique, proposed in many works [229,230,231], is very simple, computationally effective and has a fast processing time. It produces RGB images with a high degree of contrast, is good for multi-sensor images, and provides a superior, high-resolution visual image. The Brovey fused image is generally used as a reference for comparison with other fusion techniques. However, it ignores the requirement for high-quality synthesis of spectral information and causes spectral distortion, which results in color distortion and high-contrast pixel values in the input image. It does not guarantee clear objects from all the images.
  • Guided filtering: this technique [232,233,234,235] is based on a local linear model which takes into account the statistics of a region in the corresponding spatial neighborhood of the guide image while calculating the value of the output pixel. The process first uses a median filter to obtain two-scale representations, then the base and detail layers are merged using a weighted-average method. It is very simple computationally and adaptable to real applications, its computational complexity being independent of the size of the filtering kernel. It has good edge-preserving smoothing properties and does not suffer from the gradient-reversal artifacts observed with a bilateral filter, nor does it blur strong edges in the decomposition process. Despite the simplicity and effectiveness of this technique, the principal problem with the majority of guided filters is that they ignore the structural inconsistency between the guide and target images, such as color [234]. Moreover, halos can represent an obstacle [235].
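The PCA fusion rule described above can be sketched as follows (toy random arrays stand in for two co-registered source images; this is illustrative only, not a cited implementation): the leading eigenvector of the 2 × 2 covariance of the two images supplies the weights of a normalized weighted sum:

```python
import numpy as np

def pca_fusion(img_a, img_b):
    """Weight each source image by the leading eigenvector of the 2x2 covariance
    of their pixel values, then form a normalized weighted sum."""
    data = np.stack([img_a.ravel(), img_b.ravel()])
    cov = np.cov(data)
    eigvals, eigvecs = np.linalg.eigh(cov)
    # Take absolute values: the sign of an eigenvector is arbitrary.
    w = np.abs(eigvecs[:, np.argmax(eigvals)])
    w = w / w.sum()
    return w[0] * img_a + w[1] * img_b

rng = np.random.default_rng(1)
a = rng.random((8, 8))   # toy stand-in for, e.g., an MRI slice
b = rng.random((8, 8))   # toy stand-in for a co-registered second modality
fused = pca_fusion(a, b)
```

Because the weights are non-negative and sum to 1, every fused pixel is a convex combination of the two source pixels; the image carrying more variance (information content) receives the larger weight, which is the selection principle described above.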
Several spatial domain techniques represent a simple means of obtaining a fused image but, due to their degraded performance and weak results, they have not been widely applied, especially in real-time applications. Some of these methods are mentioned below.
  • Simple average: the pixel values of the input images are added, and the sum is divided by 2 to obtain the average, which is assigned to the corresponding pixel of the output image. The principle is repeated for all pixels. This technique [217,236] is a simple way to obtain a fused image in which all the regions of the original images are in focus. However, the quality of the output image is reduced by the noise incorporated into the merged image, which results in undesirable effects such as reduced contrast. In addition, clear objects from all of the images are not guaranteed.
  • Weighted average: it calculates the sum of the pixels weighted by coefficients, divided by the number of pixels. The weighted average computed from each pixel of the input images gives the value of the corresponding pixel in the output image. This technique, used in some works [217,231,237,238,239], improves the reliability of detection. However, there is a risk of increasing noise.
  • Simple block replacement: In this technique [223,240], for each pixel, the neighboring pixels are added and a block average is calculated. The pixel of the merged image is then taken from the input image with the maximum block average among the corresponding pixels.
  • Max and Min pixel values: These techniques, used in many works [217,236,240], select the focused regions of each input image by choosing, for each pixel, the highest value (or the lowest, in the case of the min-pixel-value technique) among the corresponding input pixels. This value is assigned to the corresponding pixel of the merged image.
  • Max-Min: In this technique [240], each output pixel of the merged image is obtained by averaging the smallest and largest values of the corresponding pixels across all input images.
The last four techniques are easy to implement and provide several simple rules for merging images. However, they produce a blurry output that degrades the contrast of the image, which limits their potential for real-time applications.
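The elementary rules above can be written down in a few lines. The following NumPy sketch (the function name and signature are our own, for illustration) applies them to a stack of registered input images:

```python
import numpy as np

def fuse(images, rule="average", weights=None):
    """Elementary pixel-level fusion rules on a stack of registered images."""
    stack = np.asarray(images, dtype=float)        # shape (n, H, W)
    if rule == "average":
        return stack.mean(axis=0)                  # simple average
    if rule == "weighted":
        w = np.asarray(weights, dtype=float)
        return np.tensordot(w / w.sum(), stack, axes=1)  # weighted average
    if rule == "max":
        return stack.max(axis=0)                   # max pixel value
    if rule == "min":
        return stack.min(axis=0)                   # min pixel value
    if rule == "max_min":                          # mean of the extremes
        return (stack.max(axis=0) + stack.min(axis=0)) / 2.0
    raise ValueError(f"unknown rule: {rule}")
```

Note that for exactly two input images the max-min rule coincides with the simple average; the two rules differ only when three or more images are fused.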

3.2.2. Frequency Domain Techniques

Discrete Transform

In signal processing, discrete transforms are, in most cases, linear transformations of signals between discrete domains, such as between discrete time and discrete frequency. In medical signal processing, they provide a sparse representation of piecewise-smooth images. The most used techniques based on this type of transformation are the discrete cosine transform (DCT), the curvelet transform (CT) and the wavelet transform.
  • Discrete Cosine Transform: The DCT, described in many works [241,242,243,244,245,246], performs a discrete transformation that divides the image into N × N pixel blocks and operates on each block. For each block it generates N × N coefficients, which are quantized to reduce their magnitude. It reduces complexity by decomposing images into series of waveforms and can be used in real applications. However, the merged image is not of good quality if the block size is less than 8 × 8 or equivalent to the size of the image itself.
  • Curvelet Transform: The CT, used in many works [247,248,249,250,251,252], is a means of characterizing curved shapes in images. The concept is to segment the complete image into small overlapping tiles and then perform the ridgelet transform on each tile. The curvelet transform provides fairly clear edges because curvelets are highly anisotropic; they are also adjustable to properly represent or enhance edges at several scales. However, curvelets are not shift-invariant.
  • Wavelet Transform: In wavelet-based fusion, used in different studies [217,220,253,254,255,256,257,258], once the image is decomposed by the wavelet transform, a multi-scale composite representation is constructed by selecting the salient wavelet coefficients. The most applied wavelet-based techniques are the discrete wavelet transform (DWT), the stationary wavelet transform (SWT) and Kekre’s wavelet transform (KWT).
    -
    Discrete wavelet transform: The DWT [217,258] is a discrete transformation in which the wavelets are discretely sampled. Its key advantage is that it provides temporal resolution: it captures both frequency and location information. However, the DWT requires large storage space, lacks directional selectivity, and loses edge information because of the down-sampling process, which introduces blurring effects.
    -
    Stationary wavelet transform: The SWT [250,259], another discrete transformation, first extracts from the original image the edge information of levels 1 and 2. A spatial-frequency measure is then used to merge the two edge images into a complete edge image. From level 2 of the decomposition, the SWT offers satisfactory results. Its stationary property guarantees shift invariance, obtained by suppressing the sub-sampling step, but the SWT is more complex and time-consuming in terms of computation and processing. Moreover, although it performs better at isolated discontinuities, its effectiveness degrades at edges and in textured regions.
    -
    Kekre’s wavelet transform: The KWT [260], also a discrete transformation, is applicable to images of different sizes, and its results are generally good. In addition, different variations of the KWT can be generated simply by changing the size of the basic Kekre transform. However, this type of transformation has not been explored enough; intensive research is therefore needed to bring out its weaknesses.
  • Hybrid Approach-Based Fusion: To perform fusion, some researchers have used hybrid methods that combine two or more methods in a single scheme. Some of them are presented below.
    -
    Hybrid SWT and curvelet transform: In the hybrid SWT/CT technique [261], the input images are first decomposed with the SWT to obtain their high- and low-frequency components. A curvelet transform is then applied to merge the low-frequency components: the whole image is segmented into small overlapping tiles and a ridgelet transform is applied to each tile. The high-frequency components are merged by keeping the coefficients of largest absolute value, and the final fused image is obtained with the inverse SWT. This hybrid method avoids the drawbacks of the two combined methods, namely the blocking effects of the wavelet-based fusion algorithm and the poor rendering of image details in the curvelet transform. It retains image details and profile information such as contours, and it adapts to real applications. However, it is time-consuming.
    -
    Discrete wavelet with Haar-based fusion: In DWT fusion with the Haar wavelet [262], once the image is decomposed by the wavelet transform, a multi-scale composite representation is constructed by selecting the salient wavelet coefficients. The selection can be based on the maximum of the absolute values or on a maximum area-based energy. The final step is an inverse discrete wavelet transform applied to the composite wavelet representation. It provides a good-quality merged image, a better signal-to-noise ratio and minimal spectral distortion, and different rules can be applied to the low- and high-frequency parts of the signal. However, pixel-by-pixel analysis is not possible, images of different sizes cannot be merged, and the final merged image has a lower spatial resolution.
    -
    Kekre’s hybrid wavelet transform: This fusion technique [263], also a discrete transformation, exploits various hybrid transforms to decompose the input images, for example the Kekre/Hadamard, Kekre/DCT and DCT/Hadamard hybridizations. Averaging is then applied to merge the decomposed images and obtain the transform components, which are subsequently converted to an output image by applying the inverse transform. Like the related KWT, its advantage is that it is applicable to images whose dimensions are not restricted to powers of 2.
    -
    Hybrid DWT/PCA: DWT/PCA fusion [220] is a discrete transformation that provides multi-level fusion in which the image is fused twice, yielding an improved output image with both high spatial resolution and high-quality spectral content. However, this kind of fusion is quite complex to implement.
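To illustrate the wavelet-based fusion family discussed above, the following self-contained sketch implements a one-level 2-D Haar DWT by hand (avoiding any wavelet library), averages the approximation sub-band and keeps the detail coefficients of largest absolute value, as in the common max-abs rule. All function names are illustrative, and the images are assumed to have even dimensions:

```python
import numpy as np

def haar2(x):
    """One-level 2-D Haar DWT: approximation + three detail sub-bands."""
    lo = (x[:, 0::2] + x[:, 1::2]) / np.sqrt(2)   # row-wise low-pass
    hi = (x[:, 0::2] - x[:, 1::2]) / np.sqrt(2)   # row-wise high-pass
    ll = (lo[0::2] + lo[1::2]) / np.sqrt(2)       # column-wise on each
    lh = (lo[0::2] - lo[1::2]) / np.sqrt(2)
    hl = (hi[0::2] + hi[1::2]) / np.sqrt(2)
    hh = (hi[0::2] - hi[1::2]) / np.sqrt(2)
    return ll, (lh, hl, hh)

def ihaar2(ll, details):
    """Inverse of haar2 (the Haar transform is orthogonal)."""
    lh, hl, hh = details
    lo = np.empty((ll.shape[0] * 2, ll.shape[1]))
    hi = np.empty_like(lo)
    lo[0::2], lo[1::2] = (ll + lh) / np.sqrt(2), (ll - lh) / np.sqrt(2)
    hi[0::2], hi[1::2] = (hl + hh) / np.sqrt(2), (hl - hh) / np.sqrt(2)
    x = np.empty((lo.shape[0], lo.shape[1] * 2))
    x[:, 0::2], x[:, 1::2] = (lo + hi) / np.sqrt(2), (lo - hi) / np.sqrt(2)
    return x

def dwt_fuse(a, b):
    """Average the approximations, keep the max-|coefficient| details."""
    la, da = haar2(a)
    lb, db = haar2(b)
    ll = (la + lb) / 2.0
    det = tuple(np.where(np.abs(x) >= np.abs(y), x, y) for x, y in zip(da, db))
    return ihaar2(ll, det)
```

Because the Haar transform is orthogonal and perfectly invertible, fusing an image with itself reproduces the original exactly; in practice one would iterate the decomposition over several levels before applying the selection rule.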

Pyramid Decomposition

An image pyramid consists of a set of low-pass or band-pass copies of an image, each copy representing pattern information at a different scale. In an image pyramid, each level is a factor of two smaller than its predecessor, and the highest levels concentrate on the low spatial frequencies. An image pyramid contains all the information necessary to reconstruct the original image. Among the techniques based on pyramidal decomposition, we can cite the Laplacian pyramid [250,264,265,266,267,268,269,270], the Gaussian pyramid [269], the gradient pyramid [270], the ratio-of-low-pass pyramid [271], the contrast pyramid [272] and the morphological pyramid [273]. Pyramid techniques provide good visual quality for multi-focus images. However, all pyramid decomposition techniques produce more or less similar results, and the number of decomposition levels affects the result of the fusion.
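As an illustration of pyramidal fusion, a compact Laplacian-pyramid sketch might look as follows. Here 2 × 2 block averaging stands in for the usual Gaussian low-pass filter and the max-abs rule merges the band-pass levels; this is a didactic simplification under those assumptions, not the algorithm of any cited work:

```python
import numpy as np

def down(x):
    """Reduce by 2 in each dimension via 2x2 block averaging."""
    return (x[0::2, 0::2] + x[0::2, 1::2] + x[1::2, 0::2] + x[1::2, 1::2]) / 4.0

def up(x):
    """Expand by 2 in each dimension via nearest-neighbour replication."""
    return np.kron(x, np.ones((2, 2)))

def lap_pyramid(x, levels):
    """Band-pass residuals at each level, plus the low-pass top level."""
    pyr = []
    for _ in range(levels):
        small = down(x)
        pyr.append(x - up(small))   # band-pass residual at this scale
        x = small
    pyr.append(x)                   # coarse low-pass approximation
    return pyr

def lap_fuse(a, b, levels=3):
    """Fuse band-pass levels by max-|value|, average the coarse level."""
    pa, pb = lap_pyramid(a, levels), lap_pyramid(b, levels)
    fused = [np.where(np.abs(x) >= np.abs(y), x, y) for x, y in zip(pa[:-1], pb[:-1])]
    fused.append((pa[-1] + pb[-1]) / 2.0)
    x = fused[-1]                   # collapse the pyramid from the top
    for band in reversed(fused[:-1]):
        x = up(x) + band
    return x
```

Changing `levels` alters the fused result for distinct inputs, echoing the remark above that the number of decomposition levels affects the outcome of the fusion.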

3.3. Critical Discussion about Multimodal Fusion Techniques

Various multimodal fusion techniques have been exploited in several works related to brain imaging. These techniques can be broken down into two types: frequency-domain methods, in which a transform (e.g., the Fourier transform) of the input images is first computed, the coefficients are merged, and an inverse transform provides the output image; and spatial-domain methods, which operate directly on the pixel values of the input images to produce the desired output image.
In general, researchers have preferred pixel-level fusion methods that have provided the best results, such as IHS, PCA, independent component analysis (ICA), guided filtering and the Brovey transformation. However, these techniques suffer from spectral degradation. Techniques using traditional fusion rules, such as the weighted average and the absolute maximum, have also been used. Unfortunately, they provide a poor-quality output image because the information carried by the low- and high-frequency coefficients is overlooked or used inefficiently. Since the human visual system perceives medical images at different scales, this type of technique is undesirable for performing the fusion.
Pyramidal decomposition [250,264,265,266,267,268,269,270,271,272,273,274] and multi-resolution [275,276] techniques have addressed this problem; examples include fusion by the gradient pyramid, the Laplacian pyramid and the contrast pyramid. On the other hand, the output image suffers from reduced contrast with pyramidal fusion, and slightly less so with multi-resolution techniques. In addition, these techniques produce blocking effects, because no spatial orientation selectivity is taken into account in their decomposition process.
With the progress of multi-resolution fusion, the wavelet transform (WT) was widely used [205,217,220,250,253,254,256,257,258] to merge medical images, particularly the DWT, since it maintains spectral information. However, spatial characteristics are poorly expressed. In addition, the isotropic WT lacks shift invariance and multi-directionality, and it does not provide an optimal representation of the highly anisotropic edges and contours of brain images [183]. It is thus difficult to reliably preserve all of the salient features of the input images in the output image, which eventually causes inconsistencies in the fusion results and introduces artifacts. Moreover, the WT offers efficient fusion only for isolated discontinuities; its performance deteriorates on edges and textured regions, and it preserves only limited directional information along the vertical, horizontal and diagonal directions [183].
Multi-scale geometric analysis (MSGA) [193] has overcome these limitations through techniques that allow multi-scale decomposition of high-dimensional signals, such as the ridgelet, curvelet, bandlet, brushlet and contourlet techniques [247,248,249,250,251,252,276]. The contourlet technique [193,250,276] is a sparse 2D representation of 2D signals that captures the 2D geometric structures of visual information better than traditional multiscale methods. In addition, its extended version, the non-subsampled contourlet transform (NSCT) [183,191,192,276,277,278,279,280], inherits all the advantages of the contourlet transform while adding shift-invariant decomposition, which effectively suppresses pseudo-Gibbs phenomena. This increases performance thanks to the use of directive contrast, which takes advantage of contrast and visibility, and it improves the quality of the output image, especially around the edges [183], by producing a more noticeable and more natural merged image. Some works have attempted to propose new approaches beyond the conventional fusion framework, such as neural networks with the pulse-coupled neural network (PCNN) [191,281], fuzzy logic [282], genetic algorithms (GA) [283] and independent component analysis (ICA) [284].
The recent trend is to apply hybrid fusion techniques such as weighted average/Brovey [231], IHS/PCA [285,286], Laplacian/maximum likelihood [265], IHS/WT [287], contourlet/PCA [288], Laplacian/DCT [266], block replacement/PCA [223], NSCT/PCNN [191], WT/PCNN [281], Laplacian/histogram equalization [267], GA/WT [289] and DWT/PCA [290]. The objective is to increase the performance of conventional medical image fusion systems and to meet the needs of real medical applications. For example, image fusion using the IHS/WT hybrid transform [287] improves the synthetic quality of the merged image: IHS fusion improves the textural characteristics of the merged image and the spatial details of the multi-spectral image, but it causes severe spectral distortion in the output image, which the WT can remedy since it provides high-quality spectral content. In addition, hybridizing the DWT with PCA [290] or other spatial-domain methods improves performance compared with using the techniques separately.
In conclusion, traditional fusion techniques [291] suffer from several drawbacks and do not meet the requirements of current medical applications. For this reason, researchers have begun to focus on different avenues to increase the performance of CAD systems, among them the application of hybrid models that combine the advantages of several conventional fusion techniques. These models could be the future trend of neuroimaging research.

3.4. Critical Discussion about the Multimodal Diagnosis of AD

Several CAD systems [7,14,144,150,151,152,153,156] have been implemented over the past decade for the diagnosis of AD or its early stage, MCI. To distinguish patients with AD from those undergoing normal aging, researchers have used machine learning methods such as SVM, ANN and the naive Bayes classifier. However, it should be noted that these techniques have been applied to single-modality images (MRI or PET) in the majority of the work. Unfortunately, few works have exploited multimodality for the diagnosis of AD, as in [10,17,46,154,184,186,187,206,207], although diagnosis seems better achieved by exploiting the advantages of several types of images, each measuring a different type of structural or functional characteristic. In reality, the various imaging environments and modalities offer complementary information that is useful when used in conjunction, which improves the performance of the diagnostic system compared to a system using a single modality.
In this context, some research groups have adapted machine learning algorithms to several types of modalities (or to additional clinical/cognitive data) by concatenating the structural and functional characteristics of each subject into a single feature vector. However, this type of approach causes a strong growth in feature dimensionality. In addition, it requires finding the right normalization for each modality in order to preserve its informational content and prevent the features derived from one type of image from overwhelming the others. The difficulty in this case lies in evaluating the relevance of the content of each modality, which allows the optimization of the classifier.
To remedy this problem, several researchers [186,187,203,204,208] have exploited the idea of multi-kernel learning (MKL), based on the combination of several kernels. Applied to MRI/PET multimodal images, this learning is more flexible thanks to the use of different weights on the modality biomarkers. In addition, it provides a unified means of combining heterogeneous data when the different data types cannot be directly concatenated. However, despite its performance in terms of cross-validation, this feature selection method requires the same number of features to be calculated for each modality.
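The multi-kernel idea can be sketched in a few lines: each modality contributes its own kernel matrix, and a convex combination with modality weights forms the kernel actually fed to the classifier. In the sketch below (the function names and the RBF kernel choice are ours, for illustration), the kernel matrices depend only on the number of subjects, so modalities with different feature counts can be combined at the kernel level:

```python
import numpy as np

def rbf_kernel(X, gamma):
    """Gram matrix of an RBF kernel; rows of X are subject feature vectors."""
    sq = np.sum(X ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2 * X @ X.T   # pairwise squared distances
    return np.exp(-gamma * d2)

def combined_kernel(modalities, betas, gamma=0.5):
    """Convex combination of per-modality kernels, MKL-style.

    modalities: list of (n_subjects, n_features_m) arrays, one per modality
    betas: non-negative modality weights (normalized to sum to 1)
    """
    betas = np.asarray(betas, dtype=float)
    betas = betas / betas.sum()
    return sum(b * rbf_kernel(X, gamma) for b, X in zip(betas, modalities))
```

A kernel classifier such as an SVM trained on the combined Gram matrix then weights, say, MRI and PET evidence according to the cross-validated beta values.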
In addition to MRI/PET multimodal neuroimaging data, studies have attempted to integrate other types of data from other biomarkers and to test their prognostic capacities, such as CSF, Pittsburgh compound B (PIB)-PET, apolipoprotein E (ApoE) and genetic data. The advantage of these measures developed for the diagnosis of AD is that their numbers of features differ and that they often contain relevant, complementary pathological information that can help in the future diagnosis of AD and increase classification performance [292].
For example, researchers, as in [10,206], proposed a kernel-based combination to integrate CSF with the FDG-PET and MRI modalities. They showed that the morphometric changes in AD and MCI are linked to the CSF, which offers additional information [293]. Additionally, the [11C]PIB-PET biomarker has been used in several longitudinal studies and therapeutic interventions to estimate the evolution of AD. The PET imaging tracer PIB was developed to identify cerebral amyloid. However, quantitative analysis of [11C]PIB data requires the definition of regional volumes of interest. In this context, to define the regions for a PIB-PET analysis, researchers, as in [188,294], have shown that the integration of MRI or PET offers similar results. This avoids the need for an MRI, which takes time and increases costs. Therefore, MRI analysis remains more appropriate for clinical research, while the application of a PET model to [11C]PIB is adequate for clinical diagnosis. Other studies have applied the ApoE4 genotype, which has been integrated with MRI and CSF in several works [184,185] that tested classification performance after stratification by ApoE4. This biomarker has commonly been applied as a stratification factor or covariate that contributes to adjusting for heterogeneity in sporadic AD, since non-ε4 status in EOAD patients is correlated with typicality.
Biological or genetic biomarkers [184,185] have also been developed for the diagnosis of AD or its early stage, MCI. Imaging-based genetic endophenotype studies would make it possible to establish the link between genetics and the topography of AD by focusing on the brain areas most associated with pathological genotypes. However, the use of these biomarkers differs from one research center to another and is subject to various factors, such as cost, local availability and historical patterns of use [207]. In general, several conclusions have been drawn by researchers at these centers, namely: (1) The increase in total tubulin-associated unit (t-tau) proteins and tau hyperphosphorylated at threonine 181 (p-tau) in the CSF is associated with neurofibrillary tangle pathology; (2) the decrease in amyloid β (Aβ42) is indicative of amyloid plaque pathology; and (3) the presence of the ε4 allele of APOE can predict cognitive decline or conversion to AD [184]. However, the change in the t-tau and p-tau proteins in the CSF does not truly express the extent of the tau pathology in the Alzheimer’s brain; these CSF biomarkers are only peripheral surrogates of the actual tau pathology. Recently, several tau-protein ligands have been developed. Such technology offers a much more precise measure of tau pathology, which significantly promotes a better understanding and classification of the early and presymptomatic stages of AD.
It should be added that most large-scale works have exploited multimodal techniques that explore the relationships between the various modalities of the same participants while neglecting the useful relationships between different participants. Likewise, multitask feature selection techniques are used only to jointly select common features across different modalities, ignoring the data-distribution information within each modality, which is important for subsequent classification. Some researchers, as in [186,187], have tried to tackle this problem by proposing a regularized multitask feature learning method that preserves both the inherent relationship among the data of the different modalities and the data-distribution information in each modality.
However, we can generally suggest a few remarks found in most of the works cited in the literature:
-
All the classification models performed very well in distinguishing participants with normal aging from patients with AD. The performance was worse for MCI vs. AD discrimination, which proved more difficult, probably because the MCI biomarker pattern is quite similar to that observed in AD. However, multimodal classification had better diagnostic and forecasting power than single-modality classification.
-
The majority of clinical studies in recognized biomedical laboratories have focused on binary classification problems (i.e., AD vs. HC and MCI vs. HC), neglecting to test the power of the proposed models for multi-class classification of AD, MCI and normal controls. The latter type of classification is admittedly more difficult to verify than binary classification, but it is crucial for diagnosing the different stages of dementia. In addition, longitudinal data may contain essential information for classification, whereas the proposed studies deal only with baseline data.
-
Few studies have used two or more biomarkers simultaneously among MRI, PET and CSF for the diagnosis of AD, using images from public databases such as the Alzheimer’s Disease Neuroimaging Initiative (ADNI) [293] and the Open Access Series of Imaging Studies (OASIS) [295] to combine MRI with other modalities through image fusion methods. The ADNI database, launched in 2003 by various institutes, companies and non-profit organizations, represents one of the largest databases to date; however, a limitation of the ADNI data set is that it is not neuropathologically confirmed, which is, moreover, a fairly delicate task to perform in practice.

4. Conclusions and General Requirements

Many advanced countries with a long life expectancy, such as Canada, Japan and the United States, are moving toward an aging society; thus, the number of patients with brain diseases, including dementia disorders such as AD, will increase with the rise in average life expectancy. Neuroradiologists expect that CAD systems can assist them in diagnosing brain diseases by providing useful information. Therefore, CAD systems for brain diseases, especially for AD, which is the most common form of dementia, will become more essential for neuroradiologists in clinical practice in the near future. Moreover, because MR neuroimaging is one of the most widely used imaging modalities for establishing trusted clinical settings in brain medical studies, this review provided a preliminary summary and reported studies that attempt to develop useful magnetic resonance CAD systems for brain disorders, and AD in particular. In this context, the various machine learning techniques that have been explored for in vivo classification, and particularly for the segmentation phase of the MR brain CAD system, were described and criticized, specifying the advantages and disadvantages of each. We focused on machine learning methodologies because they are highly flexible to the inclusion of expert knowledge and have been demonstrated in numerous applications to perform accurately and robustly.
In the context of brain region segmentation, despite intensive research in this area, there is currently no method that is reliable for all types of images. Nevertheless, a set of ad hoc methods has gained a certain degree of popularity; we divided them into five categories: techniques based on the shape approach, the contour approach, the region approach, graph theory and the structural approach.
In the context of the classification process, existing work reported in the literature has shown that the classification of brain images is possible via techniques based on supervised or unsupervised learning. However, because of the specificity of brain images, for which the CAD system must be trained on ground truth or clinical evidence, researchers have preferred methods based on supervised learning and rejected unsupervised classification, which does not take training data into account.
However, because of the drawbacks raised by the classification and segmentation methods, different researchers have turned to new avenues to maximize the accuracy of CAD systems. Among these are hybrid models, which make it possible to combine the advantages of various techniques derived from “soft computing”.
MRI provides high-resolution images with anatomical information. On the other hand, functional images, such as PET, SPECT, etc., provide low-spatial-resolution images with functional information. Therefore, a single medical imaging modality cannot provide comprehensive and accurate information. As a result, combining anatomical and functional medical images through image fusion to provide much more useful information has become a focus of imaging research and processing. Different biomarkers provide complementary information, which the literature has shown to be useful for neuroimaging and AD diagnosis when used together. Thus, we showed in this survey that combining MRI with other modalities using data mining methods classifies brain disorder patients as AD subjects at baseline more accurately than using either biomarker separately. For this purpose, some results of related work were reported, and the most used image fusion methods were summarized, specifying the advantages and disadvantages of each one.
Several multimodal fusion methods have been devised by researchers. We divided them into two types: those exploited in the frequency domain and those used in the spatial domain, where the pixels of the input images are directly modified. In practice, the latter type of technique has been preferred in many studies.
However, the conventional methods applied for multimodal fusion suffer from several drawbacks and do not meet the requirements of current medical applications. In this context too, hybrid models combining the advantages of conventional techniques have been applied for the purpose of improving the performance of CAD systems based on multimodal fusion.
We noticed that the efficiency of the CAD systems proposed by researchers is demonstrated by estimating the percentage of several performance measures, such as sensitivity (SE), which represents the true positive rate; specificity (SP), which estimates the true negative rate; and accuracy (AC), which determines the proportion of correct results in the database, whether true positive or true negative.
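These three measures follow directly from the confusion matrix; a minimal sketch (labels coded 1 for patients and 0 for controls; the function name is ours) is:

```python
def diagnostic_metrics(y_true, y_pred):
    """Sensitivity, specificity and accuracy from binary labels."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    se = tp / (tp + fn)            # sensitivity: true positive rate
    sp = tn / (tn + fp)            # specificity: true negative rate
    ac = (tp + tn) / len(y_true)   # accuracy: proportion of correct results
    return se, sp, ac
```

For instance, y_true = [1, 1, 1, 0, 0] with y_pred = [1, 1, 0, 0, 1] gives SE = 2/3, SP = 1/2 and AC = 3/5.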
The authors tried to prove that the proposed methods performed fairly well compared to a selection of other methods. Indeed, we observed that certain techniques have obvious advantages over others in terms of processing speed, lower complexity or lower memory requirements. However, it is not possible to draw absolute conclusions about which is best or worst without first performing in-depth tests on all the proposed methods.
Furthermore, the combining of biomarkers is limited by practical clinical constraints imposed by medical experts based on the requirements of specific medical studies. In addition to medical reasons, there are technical challenges in image fusion arising from image noise, resolution differences between images, inter-image variability, the lack of a sufficient number of images per modality, the high cost of imaging, and the computational complexity that increases with spatial and temporal resolution. Many of these challenges remain open, and the problem is much more significant when developing image fusion algorithms for real-time medical applications such as robot-guided surgery. Nonetheless, even in these challenging situations, fused images provide human observers with improved viewing and interpretation of medical images; image fusion in the multimodal medical imaging environment has proved useful, and trust in its techniques is on the rise.
It is expected that innovation and practical advancement will continue to grow in the coming years. In this context, some general requirements on the application of multimodal fusion that emerge from this survey, especially for fusion algorithms, can be suggested: (1) The algorithm should be able to extract and integrate complementary features from the input images; (2) it must not introduce artifacts or inconsistencies according to the human visual system; and (3) it should be robust and reliable. These requirements can be evaluated subjectively or objectively. Subjective evaluation relies on human visual characteristics and the specialized knowledge of the observer; it is therefore vague and time-consuming, but typically accurate when performed correctly. Objective evaluation is relatively formal and easily realized by computer algorithms, which generally evaluate the similarity between the fused and source images. However, selecting a criterion consistent with the subjective assessment of image quality is difficult. Hence, there is a need to establish an evaluation system, with an associated evaluation index, to assess proposed fusion algorithms.

Author Contributions

Conceptualization, L.L.; methodology, L.L.; formal analysis, L.L.; investigation, L.L.; resources, L.L.; data curation, L.L.; writing—original draft preparation, L.L.; writing—review and editing, L.L.; visualization, L.L.; Supervision, M.B.; Funding acquisition, L.L., M.B. and O.A.M. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the following organizations: FRQNT (Fonds de recherche du Québec—Nature et technologies), Quebec, Canada; ReSMiQ (Regroupement Stratégique en Microsystèmes du Québec), Quebec, Canada; L’Oréal-UNESCO for Women in Science, Paris, France and Caisse Desjardins de Sault-au-Récollet, Quebec, Canada.

Acknowledgments

The authors thank the following organizations: FRQNT, ReSMiQ, L’Oréal-UNESCO and Caisse Desjardins de Sault-au-Récollet for their financial support offered to accomplish this research. Special thanks to the evaluators of this work for the relevant comments.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Besga, A.; Termenon, M.; Graña, M.; Echeveste, J.; Pérez, J.M.; Gonzalez-Pinto, A. Discovering Alzheimer’s disease and bipolar disorder white matter effects building computer aided diagnostic systems on brain diffusion tensor imaging features. Neurosci. Lett. 2012, 520, 71–76.
  2. Chen, C.-M.; Chou, Y.-H.; Tagawa, N.; Do, Y. Computer-Aided Detection and Diagnosis in Medical Imaging. Comput. Math. Methods Med. 2013, 2013.
  3. Dessouky, M.M.; Elrashidy, M.A.; Taha, T.E.; Abdelkader, H.M. Computer-aided diagnosis system for Alzheimer’s disease using different discrete transform techniques. Am. J. Alzheimer Dis. Other Dement. 2016, 31, 282–293.
  4. Doi, K. Computer-aided diagnosis in medical imaging: Historical review, current status and future potential. Comput. Med. Imaging Graph. 2007, 31, 198–211.
  5. Karami, V.; Nittari, G.; Amenta, F. Neuroimaging computer-aided diagnosis systems for Alzheimer’s disease. Int. J. Imaging Syst. Technol. 2019, 29, 83–94.
  6. Khedher, L.; Illán, I.A.; Górriz, J.M.; Ramírez, J.; Brahim, A.; Meyer-Baese, A. Independent component analysis-support vector machine-based computer-aided diagnosis system for Alzheimer’s with visual support. Int. J. Neural Syst. 2016, 27, 1650050.
  7. Lazli, L.; Boukadoum, M.; Aït-Mohamed, O. Computer-aided diagnosis system for Alzheimer’s disease using fuzzy-possibilistic tissue segmentation and SVM classification. In Proceedings of the 2018 IEEE Life Sciences Conference (LSC), Montreal, QC, Canada, 28–30 October 2018; pp. 33–36.
  8. Lin, T.; Huang, P.; Cheng, C.W. Computer-aided diagnosis in medical imaging: Review of legal barriers to entry for the commercial systems. In Proceedings of the 2016 IEEE 18th International Conference on e-Health Networking, Applications and Services (Healthcom), Munich, Germany, 14–16 September 2016; pp. 1–5.
  9. Park, H.J.; Kim, S.M.; La Yun, B.; Jang, M.; Kim, B.; Jang, J.Y.; Lee, J.Y.; Lee, S.H. A computer-aided diagnosis system using artificial intelligence for the diagnosis and characterization of breast masses on ultrasound: Added value for the inexperienced breast radiologist. Medicine 2019, 98, e14146.
  10. Zhang, D.; Wang, Y.; Zhou, L.; Yuan, H.; Shen, D. Multimodal classification of Alzheimer’s disease and mild cognitive impairment. NeuroImage 2011, 55, 856–867.
  11. Patterson, C. World Alzheimer Report 2018 The State of The Art of Dementia Research: New Frontiers (Alzheimer’s Disease International (ADI)). 2018. Available online: https://www.alz.co.uk/news/world-alzheimer-report-2018-state-of-art-of-dementia-research-new-frontiers (accessed on 19 October 2018).
  12. Geethanath, S.; Vaughan, J.T., Jr. Accessible magnetic resonance imaging: A review. J. Magn. Reson. Imaging 2019, 49, e65–e77. [Google Scholar] [CrossRef]
  13. Agüera-Ortiz, L.; Hernandez-Tamames, J.A.; Martinez-Martin, P.; Cruz-Orduña, I.; Pajares, G.; López-Alvarez, J.; Osorio, R.S.; Sanz, M.; Olazarán, J. Structural correlates of apathy in Alzheimer’s disease: A multimodal MRI study. Int. J. Geriatr. Psychiatry 2017, 32, 922–930. [Google Scholar] [CrossRef]
  14. Tondelli, M. Structural MRI changes detectable up to ten years before clinical Alzheimer’s disease. Neurobiol. Aging 2012, 33, 825.e25–825.e36. [Google Scholar] [CrossRef] [PubMed]
  15. Islam, J.; Zhang, Y. A novel deep learning based multi-class classification method for Alzheimer’s disease detection using brain MRI data. In Proceedings of the Brain Informatics: International Conference, BI 2017, Beijing, China, 16–18 November 2017; pp. 213–222. [Google Scholar] [CrossRef]
  16. Bhavana, V.; Krishnappa, H.K. Multi-modality medical image fusion using discrete wavelet transform. Procedia, Comput. Sci. 2015, 70, 625–631. [Google Scholar] [CrossRef] [Green Version]
  17. Lazli, L.; Boukadoum, M.; Ait Mohamed, O. Computer-Aided Diagnosis System of Alzheimer’s Disease Based on Multimodal Fusion: Tissue Quantification Based on the Hybrid Fuzzy-Genetic-Possibilistic Model and Discriminative Classification Based on the SVDD Model. Brain Sci. Clin. Neurosci. Sect. 2019, 9, 289. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  18. Pham, D.L. Robust fuzzy segmentation of magnetic resonance images. In Proceedings of the 14th IEEE Symposium on Computer-Based Medical Systems, Bethesda, MD, USA, 26–27 July 2001; pp. 127–131. [Google Scholar]
  19. Wang, J.; Kong, J.; Lu, Y.; Qi, M.; Zhang, B. A modified FCM algorithm for MRI brain image segmentation using both local and non-local spatial constraints. Comput. Med. Imaging Graph. 2008, 32, 685–698. [Google Scholar] [CrossRef] [PubMed]
  20. Capelle, A.; Alata, O.; Fernandez, C.; Lefevre, S.; Ferrie, J.C. Unsupervised segmentation for automatic detection of brain tumors in MRI. In Proceedings of the 2000 International Conference on Image Processing, Vancouver, BC, Canada, 10–13 September 2000; Volume 1, pp. 613–616. [Google Scholar]
  21. Lee, C.-H.; Schmidt, M.; Murtha, A.; Bistritz, A.; Sander et, J.; Greiner, R. Segmenting brain tumors with conditional random fields and support vector machines. In Computer Vision for Biomedical Imaging Applications; Springer: Berlin, Heidelberg, 2005; pp. 469–478. [Google Scholar]
  22. Lao, Z.; Shen, D.; Jawad, A.; Karacali, B.; Liu, D.; Melhem, E.; Bryan, R.; Davatzikos, C. Automated segmentation of white matter lesions in 3D brain MR images, using multivariate pattern classification. In IEEE International Symposium on Biomedical Imaging: Macro to Nano; IEEE: Piscataway, NJ, USA, 2006; pp. 307–310. [Google Scholar]
  23. Mayer, A.; Greenspan, H. An Adaptive Mean-Shift Framework for MRI Brain Segmentation. IEEE Trans. Med Imaging 2009, 28, 1238–1250. [Google Scholar] [CrossRef]
  24. Goldberg-Zimring, D.; Azhari, H.; Miron, S.; Achiron, A. 3-D surface reconstruction of multiple sclerosis lesions using spherical harmonics. Magn. Reson. Med. 2001, 46, 756–766. [Google Scholar] [CrossRef]
  25. Zhou, Z.; Ruan, Z. Multicontext wavelet-based thresholding segmentation of brain tissues in magnetic resonance images. Magn. Reson. Med. 2007, 25, 381–385. [Google Scholar] [CrossRef]
  26. Nain, D.; Haker, S.; Bobick, A.; Tannenbaum, A. Multiscale 3-d shape representation and segmentation using spherical wavelets. IEEE Trans. Med Imaging 2007, 26, 598–618. [Google Scholar] [CrossRef] [Green Version]
  27. Taheri, S.; Ong, S.H.; Chong, V.F.H. Level-set segmentation of brain tumors using a threshold-based speed function. Image Vis. Comput. 2010, 28, 26–37. [Google Scholar] [CrossRef]
  28. Singh, S.; Singh, M.; Apte, C.; Perner , P. Weighted Adaptive Neighborhood Hypergraph Partitioning for Image Segmentation. In Proceedings of the 3rd International Conference on Advances in Pattern Recognition, ICAPR 2005, Lecture Notes in Computer Science, Bath, UK, 22–25 August 2005; Springer: Berlin/Heidelberg, Germany, 2005; Volume 3687, pp. 522–531. [Google Scholar] [CrossRef] [Green Version]
  29. Song, Z.; Tustison, N.; Avants, B.; Gee, J. Adaptive graph cuts with tissue priors for brain MRI segmentation. In Proceedings of the 3rd IEEE International Symposium on Biomedical Imaging: Nano to Macro, Arlington, VA, USA, 6–9 April 2006; pp. 762–765. [Google Scholar] [CrossRef]
  30. Xue, J.; Ruan, S.; Moretti, B.; Revenu, M.; Bloyet, D. Knowledge-based segmentation and labeling of brain structures from MRI images. Pattern Recognit. Lett. 2001, 22, 395–405. [Google Scholar] [CrossRef]
  31. Ruan, S.; Bloyet, D.; Revenu, M.; Dou, W.; Liao, Q. Cerebral magnetic resonance image segmentation using fuzzy Markov random fields. In Proceedings of the International Symposium on Biomedical Imaging, Washington, DC, USA, 7–10 July 2002; pp. 237–240. [Google Scholar]
  32. Mahmood, Q.; Chodorowski, A.; Mehnert, A. A novel Bayesian approach to adaptive mean shift segmentation of brain images. In Proceedings of the 25th IEEE International Symposium on Computer-Based Medical Systems, CBMS 2012, Rome, Italy, 20–22 June 2012. [Google Scholar] [CrossRef] [Green Version]
  33. Lazli, L.; Boukadoum, M. Quantification of Alzheimer’s Disease Brain Tissue Volume by an Enhanced Possibilistic Clustering Technique Based on Bias-Corrected Fuzzy Initialization. In Proceedings of the 16th IEEE International Conference on Ubiquitous Computing and Communications, Guangzhou, China, 12–15 December 2017; pp. 1434–1438. [Google Scholar] [CrossRef]
  34. Lazli, L.; Boukadoum, M. Brain tissues volumes assessment by fuzzy genetic optimization based possibilistic clustering: Application to Alzheimer patients images. In Proceedings of the 14th IEEE International Symposium on Pervasive Systems, Algorithms, and Networks, Exeter, UK, 21–23 June 2017; pp. 112–118. [Google Scholar] [CrossRef]
  35. Lazli, L.; Boukadoum, M.; Ait-Mohamed, O. Brain Tissue Classification of Alzheimer disease Using Partial Volume possibilistic Modeling: Application to ADNI Phantom Images. In Proceedings of the seventh International Conference on Image Processing Theory, Tools and Applications, Montreal, QC, Canada, 28 November–1 December 2017; pp. 1–5. [Google Scholar] [CrossRef]
  36. Lazli, L.; Boukadoum, M. Tissue segmentation by fuzzy clustering technique: Case study on Alzheimer’s disease. In Proceedings of the Medical Imaging: Imaging Informatics for Healthcare, Research, and Applications, Houston, TX, USA, 10–15 February 2018; p. 105791K. [Google Scholar] [CrossRef]
  37. Lazli, L.; Boukadoum, M. Dealing With Noise and Partial Volume Effects in Alzheimer Disease Brain Tissue Classification by a Fuzzy-Possibilistic Modeling Based on Fuzzy-Genetic Initialization. Int. J. Softw. Innov. (IJSI) 2019, 7, 119–143. [Google Scholar] [CrossRef]
  38. Akkus, Z.; Galimzianova, A.; Hoogi, A.; Rubin, D.L.; Erickson, B.J. Deep learning for brain MRI segmentation: State of the art and future directions. J. Digit. Imaging 2017, 30, 449–459. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  39. Bahrami, K.; Rekik, I.; Shi, F.; Shen, D. Joint reconstruction and segmentation of 7t-like mr images from 3t mri based on cascaded convolutional neural networks. In International Conference on Medical Image Computing and Computer-Assisted Intervention; Springer: Cham, Switzerland, 2017; pp. 764–772. [Google Scholar] [CrossRef]
  40. Bernal, J.; Kushibar, K.; Asfaw, D.S.; Valverde, S.; Oliver, A.; Martí, R.; Lladó, X. Deep Convolutional Neural Networks for Brain Image Analysis on Magnetic Resonance Imaging: A Review. Artif. Intell. Med. 2019, 95, 64–81. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  41. Brébisson, A.D.; Montana, G. Deep neural networks for anatomical brain segmentation. In Proceedings of the 2015 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Boston, MA, USA, 7–12 June 2015; pp. 20–28. [Google Scholar]
  42. Hesamian, M.H.; Jia, W.; He, X.; Kennedy, P. Deep Learning Techniques for Medical Image Segmentation: Achievements and Challenges. J. Digit. Imaging 2019, 32, 582–596. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  43. Mittal, M.; Goyal, L.M.; Kaur, S.; Kaur, I.; Verma, A.; Jude Hemanth, D. Deep learning based enhanced tumor segmentation approach for MR brain images. Appl. Soft Comput. 2019, 78, 346–354. [Google Scholar] [CrossRef]
  44. Moeskops, P.; Viergever, M.A.; Mendrik, A.M.; De Vries, L.S.; Benders, M.J.N.L.; Išgum, I. Automatic segmentation of MR brain images with a convolutional neural network. IEEE Trans. Med. Imaging 2016, 35, 1252–1261. [Google Scholar] [CrossRef] [Green Version]
  45. Wang, G.; Li, W.; Vercauteren, T.; Ourselin, S. Automatic Brain Tumor Segmentation Based on Cascaded Convolutional Neural Networks with Uncertainty Estimation. Front. Comput. Neurosci. 2019, 13, 56. [Google Scholar] [CrossRef] [Green Version]
  46. Zhou, T.; Ruan, S.; Canu, S. A review: Deep learning for medical image segmentation using multi-modality fusion. Array 2019, 31, 100004. [Google Scholar] [CrossRef]
  47. Ghosh, P.; Mitchell, M.; Tanyi, J.A.; Hung, A.Y. Incorporating priors for medical image segmentation using a genetic algorithm. Learn. Med. Imaging 2016, 195, 181–194. [Google Scholar] [CrossRef] [Green Version]
  48. Jedlicka, P.; Ryba, T. Genetic algorithm application in image segmentation. Pattern Recognit. Image Anal. 2016, 26, 497–501. [Google Scholar] [CrossRef]
  49. Kaushik, D.; Singh, U.; Singhal, P.; Singh, V. Medical Image Segmentation using Genetic Algorithm. Int. J. Comput. Appl. 2013, 81, 10–15. [Google Scholar] [CrossRef]
  50. Kavitha, A.R.; Chellamuthu, C. Brain tumour segmentation from MRI image using genetic algorithm with fuzzy initialisation and seeded modified region growing (GFSMRG) method. Imaging Sci. J. 2016, 64, 285–297. [Google Scholar] [CrossRef]
  51. Maulik, U. Medical Image Segmentation Using Genetic Algorithms. IEEE Trans. Inf. Technol. Biomed. Publ. IEEE Eng. Med. Biol. Soc. 2009, 13, 166–173. [Google Scholar] [CrossRef]
  52. McIntosh, C.; Hamarneh, G. Medial-Based Deformable Models in Nonconvex Shape-Spaces for Medical Image Segmentation. IEEE Trans. Med. Imaging 2011, 31, 33–50. [Google Scholar] [CrossRef]
  53. Bal, A.; Banerjee, M.; Sharma, P.; Maitra, M. Brain Tumor Segmentation on MR Image Using K-Means and Fuzzy-Possibilistic Clustering. In Proceedings of the 2nd International Conference on Electronics, Materials Engineering & Nano-Technology, Kolkata, India, 4–5 May 2018. [Google Scholar] [CrossRef]
  54. Dhanachandra, N.; Manglem, K.; Chanu, Y.J. Image Segmentation Using K -means Clustering Algorithm and Subtractive Clustering Algorithm. In Proceedings of the ICCN 2015/ICDMW/ ICISP 2015 2018 2015, Bangalore, India, 21–23 August 2015; Volume 54, pp. 764–771. [Google Scholar] [CrossRef] [Green Version]
  55. Liu, J.; Guo, L. An Improved K-means Algorithm for Brain MRI Image Segmentation. In 3rd International Conference on Mechatronics, Robotics and Automation; Atlantis Press: Shenzhen, China, 2015. [Google Scholar] [CrossRef] [Green Version]
  56. Liu, J.; Guo, L. A new brain MRI image segmentation strategy based on wavelet transform and K-means clustering. In Proceedings of the 2015 IEEE International Conference on Signal Processing 2015, Communications and Computing (ICSPCC), Ningbo, China, 19–22 September 2015; pp. 1–4. [Google Scholar] [CrossRef]
  57. Swathi, K.; Balasubramanian, K. Preliminary investigations on automatic segmentation methods for detection and volume calculation of brain tumor from MR images. Biomed. Res. 2016, 27, 563–569. [Google Scholar]
  58. Cai, W.; Chen, S.; Zhang, D. Fast and robust fuzzy C-means clustering algorithms incorporating local information for image segmentation. Pattern Recognit. 2007, 40, 825–838. [Google Scholar] [CrossRef] [Green Version]
  59. Chuang, K.-S.; Tzeng, H.-L.; Chen, S.; Wu, J.; Chen, T.-J. Fuzzy C-means clustering with spatial information for image segmentation. Comput. Med. Imaging Graph. 2006, 30, 9–15. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  60. Shen, S.; Sandham, W.; Granat, M.; Sterr, A. MRI fuzzy segmentation of brain tissue using neighborhood attraction with neural-network optimization. IEEE Trans. Inf. 2005, 9, 459–467. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  61. Sikka, K.; Sinha, N.; Singh, P.; Mishra, A. A fully automated algorithm under modified FCM framework for improved brain MR image segmentation. Magn. Reson. Imaging 2009, 27, 994–1004. [Google Scholar] [CrossRef]
  62. Sucharitha, M.; Jackson, D.; Anand, M.D. Brain Image Segmentation Using Adaptive Mean Shift Based Fuzzy C Means Clustering Algorithm. Int. Conf. Model. Optim. Comput. 2012, 38, 4037–4042. [Google Scholar] [CrossRef] [Green Version]
  63. Szilagyi, L.; Benyo, Z.; Szilagyi, S.M.; Adam, H.S. MR brain image segmentation using an enhanced fuzzy C-means algorithm. In Proceedings of the 25th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (IEEE Cat. No.03CH37439), Cancun, Mexico, 17–21 September 2003; Volume 1, pp. 724–726. [Google Scholar] [CrossRef]
  64. Zhang, D.-Q.; Chen, S.-C. A novel kernelized fuzzy C-means algorithm with application in medical image segmentation. Atificial Intell. Med. China 2004, 32, 37–50. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  65. Mayer, A.; Greenspan, H. Segmentation of brain MRI by adaptive mean shift. In Proceedings of the 3rd IEEE International Symposium on Biomedical Imaging: Macro to Nano, Arlington, VA, USA, 6–9 April 2006; pp. 319–322. [Google Scholar] [CrossRef]
  66. Singh, B.; Aggarwal, P. Detection of brain tumor using modified mean-shift based fuzzy c-mean segmentation from MRI Images. In Proceedings of the 8th IEEE Annual Information Technology 2017, Electronics and Mobile Communication Conference (IEMCON), Vancouver, BC, Canada, 3–5 October 2017; pp. 536–545. [Google Scholar] [CrossRef]
  67. Vallabhaneni, R.B.; Rajesh, V. Brain tumour detection using mean shift clustering and GLCM features with edge adaptive total variation denoising technique. Alex. Eng. J. 2018, 57, 2387–2392. [Google Scholar] [CrossRef]
  68. Anithadevi, D.; Perumal, K. A hybrid approach based segmentation technique for brain tumor in MRI Images. arXiv 2016, arXiv:1603.02447. [Google Scholar]
  69. Wang, Z. An Automatic Region-Based Image Segmentation System for Remote Sensing Applications; Proquest: Ann Arbor, MI, USA, 2008. [Google Scholar]
  70. Chidadala, J.; Maganty, S.N.; Prakash, N. Automatic Seeded Selection Region Growing Algorithm for Effective MRI Brain Image Segmentation and Classification. In 2019—System Reliability, Quality Control, Safety, Maintenance and Management. ICICCT 2019; Gunjan, V., Garcia Diaz, V., Cardona, M., Solanki, V., Sunitha, K., Eds.; Springer: Singapore, 2020. [Google Scholar]
  71. Javadpour, A. and Mohammadi Improving Brain Magnetic Resonance Image (MRI). Segmentation via a Novel Algorithm based on Genetic and Regional Growth. J. Biomed. Phys. Eng. 2016, 6, 95–108. [Google Scholar]
  72. Mohammadi, A.; Javadpour, A. A New Algorithm Based on Regional Growth in Segmentation of Brain’s Magnetic Resonance Imaging: New Method to Diagnosis of Mild Cognitive Impairment; Academia.Edu: San Francisco, CA, USA, 2015. [Google Scholar]
  73. Weglinski, T.; Fabijańska, A. Brain tumor segmentation from MRI data sets using region growing approach. In Proceedings of the 2011 Proceedings of 7th International Conference on Perspective Technologies and Methods in MEMS Design 2015, Vancouver, BC, Canada, 3–5 October 2011. [Google Scholar]
  74. Balafar, M.A. Gaussian mixture model based segmentation methods for brain MRI images. Artif. Intell. Rev. 2014, 41, 429–439. [Google Scholar] [CrossRef]
  75. Ji, Z.; Xia, Y.; Zheng, Y. Robust generative asymmetric GMM for brain MR image segmentation. Comput. Methods Programs Biomed. 2017, 151, 123–138. [Google Scholar] [CrossRef]
  76. Moraru, L.; Moldovanu, S.; Dimitrievici, L.T.; Dey, N.; Ashour, A.S.; Shi, F.; Fong, S.J.; Khan, S.; Biswas, A. Gaussian mixture model for texture characterization with application to brain DTI images. J. Adv. Res. 2019, 16, 15–23. [Google Scholar] [CrossRef]
  77. Subashini, T.S.; Balaji, G.N.; Akila, S. Brain tissue segmentation in MRI images using GMM. Int. J. Appl. Eng. Res. 2015, 10, 102–107. [Google Scholar]
  78. Zhu, F.; Song, Y.; Chen, J. Brain MR image segmentation based on Gaussian mixture model with spatial information. In Proceedings of the 2010 3rd International Congress on Image and Signal Processing, Yantai, China, 16–18 October 2010; Volume 3, pp. 1346–1350. [Google Scholar] [CrossRef]
  79. Nguyen, T.M.; Wu, Q.J. Robust Student’s-t Mixture Model with Spatial Constraints and Its Application in Medical Image Segmentation. IEEE Trans. Med Imaging 2012, 31, 103–116. [Google Scholar] [CrossRef]
  80. Chen, M.; Yan, Q.; Qin, M. A segmentation of brain MRI images utilizing intensity and contextual information by Markov random field. Comput. Assist. Surg. 2017, 22, 200–211. [Google Scholar] [CrossRef] [PubMed]
  81. Scherrer, B.; Dojat, M.; Forbes, F.; Garbay, C. MRF Agent Based Segmentation: Application to MRI Brain Scans. In Artificial Intelligence in Medicine; Bellazzi, R., Abu-Hanna, A., Hunter, J., Eds.; Springer: Berlin, Germany, 2007; pp. 13–23. [Google Scholar]
  82. Yousefi, S.; Azmi, R.; Zahedi, M. Brain tissue segmentation in MR images based on a hybrid of MRF and social algorithms. Med Image Anal. 2012, 16, 840–848. [Google Scholar] [CrossRef]
  83. Zhang, Y.; Lu, P.; Liu, X.; Zhou, S. A modified MRF segmentation of brain MR images. In Proceedings of the 10th International Congress on Image and Signal Processing 2017, BioMedical Engineering and Informatics (CISP-BMEI), Shanghai, China, 14–16 October 2017; pp. 1–5. [Google Scholar] [CrossRef]
  84. Chen, Y.; Pham, T.D. Development of a brain MRI-based hidden Markov model for dementia recognition. Biomed. Eng. Online 2013, 12, S2. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  85. Ibrahim, M.; John, N.; Kabuka, M.; Younis, A. Hidden Markov models-based 3D MRI brain segmentation. Image Vis. Comput. 2006, 24, 1065–1079. [Google Scholar] [CrossRef]
  86. Ismail, M.; Soliman, A.; Ghazal, M.; Switala, A.E.; Gimel’farb, G.; Barnes, G.N.; Khalil, A.; El-Baz, A. A fast stochastic framework for automatic MR brain images segmentation. PLoS ONE 2017, 12, e0187391. [Google Scholar] [CrossRef] [Green Version]
  87. Mirzaei, F.; Parishan, M.R.; Faridafshin, M.; Faghihi, R.; Sina, S. Automated brain tumor segmentation in mr images using a hidden markov classifier framework trained by svd-derived features. ICTACT J. Image Video Process. 2018, 9. [Google Scholar] [CrossRef]
  88. Sharma, S.; Rattan, M. An Improved Segmentation and Classifier Approach Based on HMM for Brain Cancer Detection. Open Biomed. Eng. J. 2019, 28, 13. [Google Scholar] [CrossRef]
  89. Ayachi, R.; Amor, B.; Brain, N. Tumor Segmentation Using Support Vector Machines. In Symbolic and Quantitative Approaches to Reasoning with Uncertainty; Sossai, C., Chemello, G., Eds.; Springer: Berlin, Germany, 2009; pp. 736–747. [Google Scholar]
  90. Kathane, M.S.; Thakare, V. Brain Segmentation using Support Vector Machine: Diagnostic Intelligence Approach. In International Conference on Benchmarks in Engineering Science and Technology ICBEST; Proceedings published by International Journal of Computer Applications® (IJCA): Maharashtra, India, 2012; pp. 12–14. [Google Scholar]
  91. Liu, Y.; Zhang, H.; Li, P. Research on SVM-based MRI image segmentation. J. China Univ. Posts Telecommun. 2011, 18, 129–132. [Google Scholar] [CrossRef]
  92. Reddy, U.J.; Dhanalakshmi, P.; Reddy, P.D.K. Image segmentation technique using SVM classifier for detection of medical disorders. Ing. Syst. Inf. 2019, 24, 173–176. [Google Scholar] [CrossRef] [Green Version]
  93. Romeny, B.M.H. Foundations of scale-space. In: Front-End Vision and Multi-Scale Image Analysis. In Computational Imaging and Vision; Springer: Dordrecht, The Netherlands, 2003; Volume 27. [Google Scholar]
  94. Lindeberg, T. Spatio-temporal scale selection in video data. In Proceedings of the Scale Space and Variational Methods in Computer Vision (SSVM 2017); Springer LNCS: Kolding, Denmark, 2017; Volume 10302, pp. 3–15. [Google Scholar]
  95. Tau, M.; Hassner, T. Dense correspondences across scenes and scales. IEEE Trans. Pattern Anal. Mach. Intell. 2015, 38, 875–888. [Google Scholar] [CrossRef] [Green Version]
  96. Witkin, A. Scale-space filtering. In Readings in Computer Vision; Morgan Kaufmann: Karlsruhe, Germany, 1983; pp. 1019–1022. [Google Scholar]
  97. Hsiao, Y.-T.; Chuang, C.-L.; Jiang, J.-A.; Chien, C.-C. A contour based image segmentation algorithm using morphological edge detection. In Proceedings of the 2005 IEEE International Conference on Systems, Man and Cybernetics, Waikoloa, HI, USA, 12 October 2005; Volume 3, pp. 2962–2967. [Google Scholar] [CrossRef]
  98. Senthilkumaran, N.; Kirubakaran, C. A Case Study on Mathematical Morphology Segmentation for MRI Brain Image. Int. J. Comput. Sci. Inf. Technol. 2014, 5, 5336–5340. [Google Scholar]
  99. Beare, R.; Chen, J.; Adamson, C.L.; Silk, T.; Thompson, D.K.; Yang, J.Y.; Wood, A.G. Brain extraction using the watershed transform from markers. Front. Neuroinform. 2013, 7, 32. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  100. Ghosh, N.; Sun, Y.; Turenius, C.; Bhanu, B.; Obenaus, A.; Ashwal, S. Computational Analysis: A Bridge to Translational Stroke Treatment. In Translational Stroke Research; Springer: New York, NY, USA, 2012. [Google Scholar] [CrossRef]
  101. Akram, F.; Garcia, M.A.; Puig, D. Active contours driven by local and global fitted image models for image segmentation robust to intensity inhomogeneity. PLoS ONE 2017, 12, e0174813. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  102. Al-Tamimi, M.; Sulong, G. A Review of Snake Models in Medical MR Image Segmentation. Jurnal Teknol. 2014, 2, 101–106. [Google Scholar] [CrossRef] [Green Version]
  103. Chen, H.; Yu, X.; Wu, C.; Wu, J. An active contour model for brain magnetic resonance image segmentation based on multiple descriptors. Int. J. Adv. Robot. Syst. 2018, 25, 15. [Google Scholar] [CrossRef]
  104. Meng, X.; Gu, W.; Chen, Y.; Zhang, J. Brain MR image segmentation based on an improved active contour model. PLoS ONE 2017, 12, e0183943. [Google Scholar] [CrossRef]
  105. Voronin, V.; Semenishchev, E.; Pismenskova, M.; Balabaeva, O.; Agaian, S. Medical image segmentation by combing the local, global enhancement, and active contour model. Proc. SPIE 2019. [Google Scholar] [CrossRef]
  106. Tsechpenakis, G. Deformable Model-Based Medical Image Segmentation. In Multi Modality State-of-the-Art Medical Image Segmentation and Registration Methodologies: Volume 1; El-Baz, A.S., Acharya, U.R., Mirmehdi, M., Suri, J.S., Eds.; Springer: Boston, MA, USA, 2011; pp. 33–67. [Google Scholar] [CrossRef]
  107. Jayadevappa, D.; Kumar, S.S.; Murty, D.S. Medical Image Segmentation Algorithms using Deformable Models: A Review. IETE Tech. Rev. 2011, 28, 248–255. [Google Scholar] [CrossRef]
  108. Yang, Y.; Jia, W.; Yang, Y. Multi-atlas segmentation and correction model with level set formulation for 3D brain MR images. Pattern Recognit. 2019, 90, 450–463. [Google Scholar] [CrossRef]
  109. Huo, Y. Data-driven Probabilistic Atlases Capture Whole-brain Individual Variation. arXiv 2015, arXiv:1806.02300. [Google Scholar]
  110. Basukala, D.; Jha, D.; Kwon, G.-R. Brain Image Segmentation Based on Dual-Tree Complex Wavelet Transform and Fuzzy C-Means Clustering Algorithm. J. Med Imaging Health Inform. 2018, 8, 1776–1781. [Google Scholar] [CrossRef]
  111. Si, T.; De, A.; Bhattacharjee, A.K. Segmentation of Brain MRI Using Wavelet Transform and Grammatical Bee Colony. J. Circuits Syst. Comput. 2017, 27, 1850108. [Google Scholar] [CrossRef]
  112. Tian, D.; Fan, L. MR brain image segmentation based on wavelet transform and SOM neural network. In Proceedings of the 2010 Chinese Control and Decision Conference 2010, Xuzhou, China, 26–28 May 2010; pp. 4243–4246. [Google Scholar] [CrossRef]
  113. Goldberg-Zimring, D.; Talos, I.F.; Bhagwat, J.G.; Haker, S.J.; Black, P.M.; Zou, K.H. Statistical validation of brain tumor shape approximation via spherical harmonics for image-guided neurosurgery. Acad. Radiol. 2005, 12, 459–466. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  114. Guillaume, H.; Dillenseger, J.-L.; Patard, J.-J. Intra subject 3D/3D kidney registration/modeling using spherical harmonics applied on partial information. In Proceedings of the 26th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, San Francisco, CA, USA, 1–5 September 2004. [Google Scholar]
  115. Nitzken, M.; Casanova, M.F.; Gimel’farb, G.; Khalifa, F.; Elnakib, A.; Switala, A.E.; El-Baz, A. 3D shape analysis of the brain cortex with application to autism. In Proceedings of the 2011 IEEE International Symposium on Biomedical Imaging: From Nano to Macro, Chicago, IL, USA, 30 March–2 April 2011; pp. 1847–1850. [Google Scholar] [CrossRef]
  116. Dillenseger, J.-L. Spherical Harmonics Based Intrasubject 3-D Kidney Modeling/Registration Technique Applied on Partial Information. In IEEE Transactions on Biomedical Engineering; IEEE: Piscataway, NJ, USA, 2006. [Google Scholar]
  117. Chen, Y.; Zhang, J.; Macione, J. An improved level set method for brain MR image segmentation and bias correction. Comput. Med imaging Graph. Off. J. Comput. Med Imaging Soc. 2009, 33, 510–519. [Google Scholar] [CrossRef] [PubMed]
  118. Chen, T.F. Medical Image Segmentation Using Level Sets. Technical Report #CS-2008-12. Available online: https://pdfs.semanticscholar.org/08cf/16fcdc3f2907f8ab1d0f5fe331c6b2254ee9.pdf (accessed on 4 May 2019).
  119. Duth, P.S.; Saikrishnan, V.P.; Vipuldas, V.P. Variational level set and level set method for mri brain image segmentation: A review. In Proceedings of the 2017 International Conference on Communication and Signal Processing (ICCSP), Chennai, India, 6–8 April 2017; pp. 1555–1558. [Google Scholar] [CrossRef]
  120. Lok, K.; Shi, L.; Zhu, X.; Wang, D. Fast and robust brain tumor segmentation using level set method with multiple image information. Impact of advanced parallel or cloud computing technologies for image guided diagnosis and therapy. J. X-ray Sci. Technol. 2017, 25, 301–312. [Google Scholar] [CrossRef]
  121. Duth, P.S.; Kulkarni, V.A. An enhanced variational level set method for MRI brain image segmentation using IFCM clustering and LBM. Int. J. Eng. Technol. 2018, 7, 23–28. [Google Scholar] [CrossRef]
  122. Durgadevi, R.; Hemalatha, B.; Kaliappan, K.V.K. Detection of Mammograms Using Honey Bees Mating Optimization Algorithm (M-HBMO). In Proceedings of the 2014 World Congress on Computing and Communication Technologies, Trichirappalli, India, 27 Feburary–1 March 2014. [Google Scholar]
  123. Abdallah, Y.; Abdelhamid, A.; Elarif, T.; Salem, A.B. Intelligent Techniques in Medical Volume Visualization. Procedia Comput. Sci. 2015, 65, 546–555. [Google Scholar] [CrossRef] [Green Version]
  124. Hu, J.; Wei, X.; He, H. Brain Image Segmentation Based on Hypergraph Modeling. In Proceedings of the 2014 IEEE 12th International Conference on Dependable, Autonomic and Secure Computing, Dalian, China, 24–27 August 2014. [Google Scholar] [CrossRef]
  125. Shen, Y.; Hu, J.; Lu, Y.; Wang, X. Stock trends prediction by hypergraph modeling. In Proceedings of the 2012 IEEE International Conference on Computer Science and Automation Engineering, Beijing, China, 22–24 June 2012. [Google Scholar]
  126. Bretto, A.; Gillibert, L. Hypergraph-Based Image Representation. In Graph-Based Representations in Pattern Recognition; Brun, L., Vento, M., Eds.; Springer: Berlin Heidelberg, 2005; pp. 1–11. [Google Scholar]
  127. Chen, V.; Ruan, S. Graph cut segmentation technique for MRI brain tumor extraction. In Proceedings of the 2010 2nd International Conference on Image Processing Theory 2010, Tools and Applications, Paris, France, 7–10 July 2010; pp. 284–287. [Google Scholar] [CrossRef]
  128. Dogra, J.; Jain, S.; Sood, M. Segmentation of MR Images using Hybrid kMean-Graph Cut Technique. Procedia Comput. Sci. 2018, 132, 775–784. [Google Scholar] [CrossRef]
  129. Masterravi. Interactive Segmentation using Graph Cuts. Biometrics, Computer Vision, Image Processing. TECH GEEK, My Understanding of Algorithms and Technology. 2011. Available online: https://masterravi.wordpress.com/2011/05/24/interactive-segmentation-using-graph-cutsmatlab-code/ (accessed on 28 March 2019).
  130. Song, Z.; Tustison, N.; Avants, B.; Gee, J.C. Integrated Graph Cuts for Brain MRI Segmentation. Medical Image Computing and Computer-Assisted Intervention–MICCAI 2006. In Lecture Notes in Computer Science; Larsen, R., Nielsen, M., Sporring, J., Eds.; Springer: Berlin, Heidelberg, 2006; Volume 4191. [Google Scholar] [CrossRef] [Green Version]
  131. Wels, M.; Carneiro, G.; Aplas, A.; Huber, M.; Hornegger, J.; Comaniciu, D.A. Discriminant Model-Constrained Graph Cuts Approach to Fully Automated Pediatric Brain Tumor Segmentation in 3-D MRI. In MICCAI’08 Proceedings of the 11th International Conference on Medical Image Computing and Computer-Assisted Intervention—Part I; Springer: Berlin, Heidelberg, 2008; pp. 67–75. [Google Scholar] [CrossRef] [Green Version]
132. Dehdasht-Heydari, R.; Gholami, S. Automatic Seeded Region Growing (ASRG) Using Genetic Algorithm for Brain MRI Segmentation. Wirel. Pers. Commun. 2019, 109, 897–908.
133. Ilhan, U.; Ilhan, A. Brain tumor segmentation based on a new threshold approach. In Proceedings of the 9th International Conference on Theory and Application of Soft Computing, Computing with Words and Perception (ICSCCW 2017), Budapest, Hungary, 22–23 August 2017; Volume 120, pp. 580–587.
134. Ruf, A.; Greenspan, H.; Goldberger, J. Tissue Classification of Noisy MR Brain Images Using Constrained GMM. In Medical Image Computing and Computer-Assisted Intervention—MICCAI 2005; Duncan, J.S., Gerig, G., Eds.; Springer: Berlin/Heidelberg, Germany, 2005; pp. 790–797.
135. Ciofolo, C.; Barillot, C. Brain segmentation with competitive level sets and fuzzy control. In Information Processing in Medical Imaging; Lecture Notes in Computer Science; Springer: Berlin/Heidelberg, Germany, 2005; Volume 3565, pp. 333–344.
136. Dong, P.; Guo, Y.; Gao, Y.; Liang, P.; Shi, Y.; Wang, Q.; Shen, D.; Wu, G. Multi-Atlas Based Segmentation of Brainstem Nuclei from MR Images by Deep Hyper-Graph Learning. In Patch-Based Techniques in Medical Imaging; Wu, G., Coupé, P., Zhan, Y., Munsell, B.C., Rueckert, D., Eds.; Springer International Publishing: Cham, Switzerland, 2016; pp. 51–59.
137. Jenkinson, M.; Beckmann, C.F.; Behrens, T.E.; Woolrich, M.W.; Smith, S.M. FSL. NeuroImage 2012, 62, 782–790.
138. Brugger, S.P.; Howes, O.D. Heterogeneity and Homogeneity of Regional Brain Structure in Schizophrenia: A Meta-analysis. JAMA Psychiatry 2017, 74, 1104–1111.
139. Gupta, L.; Besseling, R.M.; Overvliet, G.M.; Hofman, P.A.; de Louw, A.; Vaessen, M.J.; Aldenkamp, A.P.; Ulman, S.; Jansen, J.F.; Backes, W.H. Spatial heterogeneity analysis of brain activation in fMRI. NeuroImage Clin. 2014, 5, 266–276.
140. Wang, L.; Li, G.; Adeli, E.; Liu, M.; Wu, Z.; Meng, Y.; Lin, W.; Shen, D. Anatomy-guided joint tissue segmentation and topological correction for 6-month infant brain MRI with risk of autism. Hum. Brain Mapp. 2018, 39, 2609–2623.
141. Luo, Y.; Liu, L.; Huang, Q.; Li, X. A Novel Segmentation Approach Combining Region- and Edge-Based Information for Ultrasound Images. BioMed Res. Int. 2017.
142. Nosrati, M.S.; Hamarneh, G. Incorporating Prior Knowledge in Medical Image Segmentation: A Survey. arXiv 2016, arXiv:1607.01092.
143. Klöppel, S.; Stonnington, C.M.; Chu, C.; Draganski, B.; Scahill, R.I.; Rohrer, J.D.; Fox, N.C.; Jack, C.R., Jr.; Ashburner, J.; Frackowiak, R.S. Automatic classification of MR scans in Alzheimer’s disease. Brain 2008, 131, 681–689.
144. Zhou, L.; Wang, Y.; Li, Y.; Yap, P.-T.; Shen, D. Hierarchical anatomical brain networks for MCI prediction by partial least square analysis. In Proceedings of the IEEE International Conference on Computer Vision and Pattern Recognition (CVPR), Providence, RI, USA, 20–25 June 2011; pp. 1073–1080.
145. Li, S.; Shi, F.; Pu, F.; Li, X.; Jiang, T.; Xie, S.; Wang, Y. Hippocampal shape analysis of Alzheimer disease based on machine learning methods. Am. J. Neuroradiol. 2007, 28, 1339–1345.
146. Colliot, O.; Chételat, G.; Chupin, M.; Desgranges, B.; Magnin, B.; Benali, H.; Dubois, B.; Garnero, L.; Eustache, F.; Lehéricy, S. Discrimination between Alzheimer disease, mild cognitive impairment, and normal aging by using automated segmentation of the hippocampus. Radiology 2008, 248, 194–201.
147. Chaplot, S.; Patnaik, L.M.; Jagannathan, N.R. Classification of magnetic resonance brain images using wavelets as input to support vector machine and neural network. Biomed. Signal Process. Control 2006, 1, 86–92.
148. Zhang, Y.; Dong, Z.; Wu, L.; Wang, S. A hybrid method for MRI brain image classification. Expert Syst. Appl. 2011, 38, 10049–10053.
149. Magnin, B.; Mesrob, L.; Kinkingnéhun, S.; Pélégrini-Issac, M.; Colliot, O.; Sarazin, M.; Dubois, B.; Lehéricy, S.; Benali, H. Support vector machine-based classification of Alzheimer’s disease from whole-brain anatomical MRI. Neuroradiology 2009, 51, 73–83.
150. Chincarini, A.; Bosco, P.; Calvini, P.; Gemme, G.; Esposito, M.; Olivieri, C.; Rei, L.; Squarcia, S.; Rodriguez, G.; Bellotti, R.; et al. Local MRI analysis approach in the diagnosis of early and prodromal Alzheimer’s disease. NeuroImage 2011, 58, 469–480.
151. Li, Y.; Wang, Y.; Wu, G.; Shi, F.; Zhou, L.; Lin, W.; Shen, D. Discriminant analysis of longitudinal cortical thickness changes in Alzheimer’s disease using dynamic and network features. Neurobiol. Aging 2012, 33, e15–e30.
152. Plant, C.; Teipel, S.J.; Oswald, A.; Böhm, C.; Meindl, T.; Mourao-Miranda, J.; Bokde, A.W.; Hampel, H.; Ewers, M. Automated detection of brain atrophy patterns based on MRI for the prediction of Alzheimer’s disease. NeuroImage 2010, 50, 162–174.
153. Graña, M.; Termenon, M.; Savio, A.; Gonzalez-Pinto, A.; Echeveste, J.; Pérez, J.M.; Besga, A. Computer aided diagnosis system for Alzheimer disease using brain diffusion tensor imaging features selected by Pearson’s correlation. Neurosci. Lett. 2011, 502, 225–229.
154. Wolz, R.; Julkunen, V.; Koikkalainen, J.; Niskanen, E.; Zhang, D.P.; Rueckert, D.; Soininen, H.; Lötjönen, J. Multi-method analysis of MRI images in early diagnostics of Alzheimer’s disease. PLoS ONE 2011, 6, e25446.
155. Daliri, M.R. Automated diagnosis of Alzheimer disease using the scale-invariant feature transforms in magnetic resonance images. J. Med. Syst. 2012, 36, 995–1000.
156. Lahmiri, S.; Boukadoum, M. Automatic detection of Alzheimer disease in brain magnetic resonance images using fractal features. In Proceedings of the 6th International IEEE EMBS Conference on Neural Engineering, San Diego, CA, USA, 6–8 November 2013.
157. Joshi, D.; Rana, N.K.; Misra, V.M. Classification of Brain Cancer using Artificial Neural Network. In Proceedings of the 2010 2nd International Conference on Electronic Computer Technology, Kuala Lumpur, Malaysia, 7–10 May 2010.
158. Latif, G.; Butt, M.M.; Khan, A.H.; Butt, M.O.; Al-Asad, J.F. Automatic Multimodal Brain Image Classification Using MLP and 3D Glioma Tumor Reconstruction. In Proceedings of the 2017 9th IEEE-GCC Conference and Exhibition (GCCCE), Manama, Bahrain, 8–11 May 2017; pp. 1–9.
159. Natteshan, N.V.; Jothi, J.A. Automatic Classification of Brain MRI Images Using SVM and Neural Network Classifiers. In Advances in Intelligent Informatics; El-Alfy, E.-S.M., Thampi, S.M., Takagi, H., Piramuthu, S., Hanne, T., Eds.; Springer International Publishing: Cham, Switzerland, 2015; pp. 19–30.
160. Samanta, A.; Khan, A. Computer Aided Diagnostic System for Detection and Classification of a Brain Tumor through MRI Using Level Set Based Segmentation Technique and ANN Classifier. Int. J. Biomed. Biol. Eng. 2017, 11, 340–347.
161. Veer, S.S.; Patil, P.M. Brain tumor classification using artificial neural network on MRI images. Int. J. Res. Eng. Technol. 2015, 4, 218–226.
162. Tandel, G.S.; Biswas, M.; Kakde, O.G.; Tiwari, A.; Suri, H.S.; Turk, M.; Laird, J.R.; Asare, C.K.; Ankrah, A.A.; Khanna, N.N. A Review on a Deep Learning Perspective in Brain Cancer Classification. Cancers 2019, 11, 111.
163. Varuna Shree, N.; Kumar, T. Identification and classification of brain tumor MRI images with feature extraction using DWT and probabilistic neural network. Brain Inform. 2018, 5, 23–30.
164. Arora, A.; Roy, P.; Venktesan, S.; Babu, R. k-NN Based Classification of Brain MRI Images using DWT and PCA to Detect Different Types of Brain Tumour. Int. J. Med. Res. Health Sci. 2017, 6, 15–20.
165. Bharanidharan, N.; Rajaguru, H.; Geetha, V. Performance Analysis of KNN Classifier with and without GLCM Features in Brain Tumor Detection. Int. J. Innov. Technol. Explor. Eng. 2018, 8, 103–106.
166. Meenakshi, R.; Anandhakumar, P. A Hybrid Brain Tumor Classification and Detection Mechanism Using KNN and HMM. Curr. Med. Imaging 2015, 11, 70.
167. Sudharani, K.; Sarma, T.C.; Prasad, K.S. Brain stroke detection using K-Nearest Neighbor and Minimum Mean Distance technique. In Proceedings of the 2015 International Conference on Control, Instrumentation, Communication and Computational Technologies, Kumaracoil, India, 18–19 December 2015.
168. Chaddad, A.; Zinn, P.O.; Colen, R.R. Brain tumor identification using Gaussian Mixture Model features and Decision Trees classifier. In Proceedings of the 48th Annual Conference on Information Sciences and Systems (CISS), Princeton, NJ, USA, 19–21 March 2014; pp. 1–4.
169. Deepa, A.R.; Sam Emmanuel, W.R.M. Identification and classification of brain tumor through mixture model based on magnetic resonance imaging segmentation and artificial neural network. Concepts Magn. Reson. Part A 2017, 45, e21390.
170. Forbes, F. Mixture Models for Image Analysis. In Handbook of Mixture Analysis; Frühwirth-Schnatter, S., Celeux, G., Robert, C.P., Eds.; CRC Press: Boca Raton, FL, USA, 2018; pp. 397–418.
171. Gorriz, J.; Segovia, F.; Ramírez, J.; Lassl, A.; Salas-Gonzalez, D. GMM based SPECT image classification for the diagnosis of Alzheimer’s disease. Appl. Soft Comput. 2011, 11, 2313–2325.
172. Segovia, F.; Górriz, J.M.; Ramírez, J.; Salas-González, D.; Álvarez, I.; López, M.; Padilla, P. Classification of functional brain images using a GMM-based multi-variate approach. Neurosci. Lett. 2010, 474, 58–62.
173. Chinnu, A. Brain Tumor Classification Using SVM and Histogram Based Image Segmentation. Int. J. Sci. Res. 2015, 4, 1647–1650.
174. Fabelo, H.; Ortega, S.; Casselden, E.; Loh, J.; Bulstrode, H.; Zolnourian, A.; Sarmiento, R. SVM Optimization for Brain Tumor Identification Using Infrared Spectroscopic Samples. Sensors 2018, 18, 4487.
175. Hamiane, M.; Saeed, F. SVM Classification of MRI Brain Images for Computer-Assisted Diagnosis. Int. J. Electr. Comput. Eng. 2017, 7, 2555.
176. Vaishnavee, K.B.; Amshakala, K. An automated MRI brain image segmentation and tumor detection using SOM-clustering and Proximal Support Vector Machine classifier. In Proceedings of the 2015 IEEE International Conference on Engineering and Technology (ICETECH), Coimbatore, India, 20 March 2015.
177. Goswami, S.; Tiwari, A.; Pali, V.; Tripathi, A. A Correlative Analysis of SOM and FCM Classifier for Brain Tumour Detection. Int. J. Innov. Technol. Explor. Eng. 2019, 8, 718–723.
178. Ahmmed, R.; Hossain, M.F. Tumor detection in brain MRI image using template based K-means and Fuzzy C-means clustering algorithm. In Proceedings of the 2016 International Conference on Computer Communication and Informatics (ICCCI), Coimbatore, India, 7–9 January 2016; pp. 1–6.
179. Alam, M.S.; Rahman, M.M.; Hossain, M.A.; Islam, M.K.; Ahmed, K.M.; Ahmed, K.T.; Singh, B.C.; Miah, M.S. Automatic Human Brain Tumor Detection in MRI Image Using Template-Based K Means and Improved Fuzzy C Means Clustering Algorithm. Big Data Cogn. Comput. 2019, 3, 27.
180. Duraisamy, B.; Shanmugam, J.V.; Annamalai, J. Alzheimer disease detection from structural MR images using FCM based weighted probabilistic neural network. Brain Imaging Behav. 2019, 13, 87–110.
181. Mathur, N.; Meena, Y.K.; Mathur, S.; Mathur, D. Detection of Brain Tumor in MRI Image through Fuzzy-Based Approach. In High-Resolut. Neuroimaging—Basic Phys. Princ. Clin. Appl. 2018.
182. Miranda, E.; Aryuni, M.; Irwansyah, E. A survey of medical image classification techniques. In Proceedings of the 2016 International Conference on Information Management and Technology (ICIMTech), Bandung, Indonesia, 16–18 November 2016.
183. Bhatnagar, G.; Wu, Q.M.J.; Liu, Z. A new contrast based multimodal medical image fusion framework. Neurocomputing 2015, 157, 143–152.
184. Apostolova, L.G.; Hwang, K.S.; Kohannim, O.; Avila, D.; Elashoff, D.; Jack, C.R.; Thompson, P.M. ApoE4 effects on automated diagnostic classifiers for mild cognitive impairment and Alzheimer’s disease. NeuroImage Clin. 2014, 4, 461–472.
185. Gray, K.R.; Aljabar, P.; Heckemann, R.A.; Hammers, A.; Rueckert, D. Random forest-based similarity measures for multi-modal classification of Alzheimer’s disease. NeuroImage 2013, 65, 167–175.
186. Zu, C.; Jie, B.; Liu, M.; Chen, S.; Shen, D.; Zhang, D.; Alzheimer’s Disease Neuroimaging Initiative. Label-aligned multi-task feature learning for multimodal classification of Alzheimer’s disease and mild cognitive impairment. Brain Imaging Behav. 2016, 10, 1148–1159.
187. Jie, B.; Zhang, D.; Cheng, B.; Shen, D.; The Alzheimer’s Disease Neuroimaging Initiative. Manifold regularized multitask feature learning for multimodality disease classification. Hum. Brain Mapp. 2015, 36, 489–507.
188. Trzepacz, P.T.; Yu, P.; Sun, J.; Schuh, K.; Case, M.; Witte, M.M.; Hake, A. Comparison of neuroimaging modalities for the prediction of conversion from mild cognitive impairment to Alzheimer’s dementia. Neurobiol. Aging 2014, 35, 143–151.
189. Foster, N.L.; Heidebrink, J.L.; Clark, C.M.; Jagust, W.J.; Arnold, S.E.; Barbas, N.R.; DeCarli, C.S.; Scott Turner, R.; Koeppe, R.A.; Higdon, R.; et al. FDG-PET improves accuracy in distinguishing frontotemporal dementia and Alzheimer’s disease. Brain 2007, 130, 2616–2635.
190. Drzezga, A.; Lautenschlager, N.; Siebner, H.; Riemenschneider, M.; Willoch, F.; Minoshima, S.; Schwaiger, M.; Kurz, A. Cerebral metabolic changes accompanying conversion of mild cognitive impairment into Alzheimer’s disease: A PET follow-up study. Eur. J. Nucl. Med. Mol. Imaging 2003, 30, 1104–1113.
191. Das, S.; Kundu, M.K. NSCT-based multimodal medical image fusion using pulse-coupled neural network and modified spatial frequency. Med. Biol. Eng. Comput. 2012, 50, 1105–1114.
192. Yang, Y.; Tong, S.; Huang, S.; Lin, P. Log-Gabor Energy Based Multimodal Medical Image Fusion in NSCT Domain. Comput. Math. Methods Med. 2014.
193. Yang, L.; Guo, B.L.; Ni, W. Multimodality medical image fusion based on multiscale geometric analysis of contourlet transform. Neurocomputing 2008, 72, 203–211.
194. Qu, G.; Zhang, D.; Yan, P. Medical image fusion by wavelet transform modulus maxima. Opt. Express 2001, 9, 184–190.
195. Lee, H.; Hong, H. Hybrid surface- and voxel-based registration for MR-PET brain fusion. In Image Analysis and Processing—ICIAP 2005; Springer: Berlin/Heidelberg, Germany, 2005; pp. 930–937.
196. Wang, A.; Sun, H.; Guan, Y. The application of wavelet transform to multimodality medical image fusion. In Proceedings of the 2006 IEEE International Conference on Networking, Sensing and Control, Ft. Lauderdale, FL, USA, 23–25 April 2006; pp. 270–274.
197. Teng, J.; Wang, X.; Zhang, J.; Wang, S.; Huo, P. A Multimodality Medical Image Fusion Algorithm Based on Wavelet Transform. In Proceedings of the International Conference ICSI 2010, Beijing, China, 12–15 June 2010; pp. 627–633.
198. Garg, S.; Kiran, K.U.; Mohan, R.; Tiwary, U. Multilevel medical image fusion using segmented image by level set evolution with region competition. In Proceedings of the 27th Annual International Conference of the Engineering in Medicine and Biology Society, Shanghai, China, 17–18 January 2006; pp. 7680–7683.
199. Forbes, F.; Doyle, S.; Lorenzo, D.G.; Barillot, C.; Dojat, M. Adaptive weighted fusion of multiple MR sequences for brain lesion segmentation. In Proceedings of the IEEE International Symposium on Biomedical Imaging: From Nano to Macro, Rotterdam, The Netherlands, 14–17 April 2010; pp. 69–72.
200. Lee, J.-D.; Huang, B.-R.; Huang, C.-H. A surface-projection MMI for the fusion of brain MR and SPECT images. Biomed. Eng. Appl. Basis Commun. 2006, 18, 202–206.
201. Huang, C.-H.; Lee, J.-D. Improving MMI with enhanced-FCM for the fusion of brain MR and SPECT images. Biomed. Eng. Appl. Basis Commun. 2004, 16, 185–189.
202. Yuan, K.; Liu, W.; Jia, S.; Xiao, P. Fusion of MRI and DTI to assist the treatment solution of brain tumor. In Proceedings of the Second International Conference on Innovative Computing, Information and Control, Kumamoto, Japan, 5–7 September 2007; p. 620.
203. Hinrichs, C.; Singh, V.; Xu, G.; Johnson, S. MKL for robust multi-modality AD classification. Med. Image Comput. Assist. Interv. 2009, 12, 786–794.
204. Hinrichs, C.; Singh, V.; Xu, G.; Johnson, S.C. Predictive markers for AD in a multimodality framework: An analysis of MCI progression in the ADNI population. NeuroImage 2011, 55, 574–589.
205. Huang, S.; Li, J.; Ye, J.; Wu, T.; Chen, K.; Fleisher, A.; Reiman, E. Identifying Alzheimer’s disease-related brain regions from multimodality neuroimaging data using sparse composite linear discrimination analysis. In Advances in Neural Information Processing Systems 24; Shawe-Taylor, J., Zemel, R.S., Bartlett, P.L., Pereira, F., Weinberger, K.Q., Eds.; Curran Associates, Inc.: Red Hook, NY, USA, 2011.
206. Zhang, D.; Shen, D.; The Alzheimer’s Disease Neuroimaging Initiative. Multi-modal multitask learning for joint prediction of multiple regression and classification variables in Alzheimer’s disease. NeuroImage 2012, 59, 895–907.
207. Westman, E.; Muehlboeck, J.-S.; Simmons, A. Combining MRI and CSF measures for classification of Alzheimer’s disease and prediction of mild cognitive impairment conversion. NeuroImage 2012, 62, 229–238.
208. Liu, F.; Wee, C.Y.; Chen, H.F.; Shen, D.G. Inter-modality relationship constrained multi-modality multitask feature selection for Alzheimer’s Disease and mild cognitive impairment identification. NeuroImage 2014, 84, 466–475.
209. Dai, Z.; Yan, C.; Wang, Z.; Wang, J.; Xia, M.; Li, K.; He, Y. Discriminative analysis of early Alzheimer’s disease using multi-modal imaging and multi-level characterization with multi-classifier (M3). NeuroImage 2012, 59, 2187–2195.
210. Liu, S.; Liu, S.; Cai, W.; Che, H.; Pujol, S.; Kikinis, R.; Feng, D.; Fulham, M.J.; ADNI. Multimodal neuroimaging feature learning for multiclass diagnosis of Alzheimer’s disease. IEEE Trans. Bio-Med. Eng. 2015, 62, 1132–1140.
211. Cheng, D.; Liu, M. CNNs based multi-modality classification for AD diagnosis. In Proceedings of the 2017 10th International Congress on Image and Signal Processing, BioMedical Engineering and Informatics (CISP-BMEI), Shanghai, China, 14–16 October 2017; pp. 1–5.
212. James, A.P.; Dasarathy, B.V. Medical Image Fusion—A Survey of the State of the Art. Inf. Fusion 2014, 19, 4–19.
213. Du, J.; Li, W.; Lu, K.; Xiao, B. An Overview of Multi-modal Medical Image Fusion. Neurocomputing 2016, 215, 3–20.
214. Mitchell, H. Image Fusion: Theories, Techniques and Applications; Springer Science & Business Media: Berlin, Germany, 2010.
215. Raut, G.N.; Paikrao, P.L.; Chaudhari, D.S. A Study of Quality Assessment Techniques for Fused Images. Int. J. Innov. Technol. Explor. Eng. 2010, 2, 290–294.
216. Krishn, A.; Bhateja, V.; Himanshi; Sahu, A. PCA Based Medical Image Fusion in Ridgelet Domain. In Proceedings of the 3rd International Conference on Frontiers of Intelligent Computing: Theory and Applications (FICTA); Satapathy, S.C., Biswal, B.N., Udgata, S.K., Mandal, J.K., Eds.; Springer International Publishing: Cham, Switzerland, 2014; pp. 475–482.
217. Rani, K.; Sharma, R. Study of Different Image Fusion Algorithms. Int. J. Emerg. Technol. Adv. Eng. 2013, 3, 288–291.
218. Kaya, I.E.; Pehlivanlı, A.C.; Sekizkardeş, E.G.; Ibrikci, T. PCA based clustering for brain tumor segmentation of T1w MRI images. Comput. Methods Programs Biomed. 2017, 140, 19–28.
219. Jiang, Y.; Wang, M. Image fusion with morphological component analysis. Inf. Fusion 2014, 18, 107–118.
220. Naidu, V.P.S.; Raol, J.R. Pixel-level Image Fusion using Wavelets and Principal Component Analysis. Def. Sci. J. 2008, 58, 338–352.
221. Reba, M.N.M.; C’uang, O. Image Quality Assessment for Fused Remote Sensing Imageries. J. Teknol. 2014, 71, 175–180.
222. Wan, T.; Zhu, C.; Qin, Z. Multifocus image fusion based on robust principal component analysis. Pattern Recognit. Lett. 2013, 34, 1001–1008.
223. Yang, J.; Han, F.; Zhao, D. A block advanced PCA fusion algorithm based on PET/CT. In Proceedings of the 2011 International Conference on Intelligent Computation Technology and Automation (ICICTA), Shenzhen, China, 28–29 March 2011; Volume 2, pp. 925–928.
224. Choi, M.-J.; Kim, H.-C.; Cho, N.I.; Kim, H.O. An Improved Intensity-Hue-Saturation Method for IKONOS Image Fusion. Int. J. Remote Sens. 2008, 13, 1–10.
225. Chen, C. Fusion of PET and MR Brain Images Based on IHS and Log-Gabor Transforms. IEEE Sens. J. 2017, 17, 6995–7010.
226. Haddadpour, M.; Daneshvar, S.; Seyedarabi, H. PET and MRI image fusion based on combination of 2-D Hilbert transform and IHS method. Biomed. J. 2017, 40, 219–225.
227. Siddiqui, Y. The Modified IHS Method for Fusing Satellite Imagery. In Proceedings of the ASPRS 2003 Annual Conference, Anchorage, AK, USA, 5 May 2003; pp. 5–9.
228. Tu, T.M.; Huang, P.S.; Hung, C.L.; Chang, C.P. A Fast Intensity-Hue-Saturation Fusion Technique with Spectral Adjustment for IKONOS Imagery. IEEE Geosci. Remote Sens. Lett. 2004, 1, 309–312.
229. Chen, R. The analysis of image fusion based on improved Brovery transform. In International Industrial Informatics and Computer Engineering Conference (IIICEC 2015); Atlantis Press: Shaanxi, China, 2015; pp. 1132–1134.
230. Mandhare, R.A.; Upadhyay, P.; Gupta, S. Pixel Level Image Fusion Using Brovey Transform and Wavelet Transform. Int. J. Adv. Res. Electr. Electron. Instrum. Eng. 2013, 2, 2690–2695.
231. Taxak, N.; Singhal, S. High PSNR based Image Fusion by Weighted Average Brovery Transform Method. In Proceedings of the 2019 Devices for Integrated Circuit (DevIC), Kalyani, India, 23–24 March 2019; pp. 451–455.
232. Li, S.; Kang, X.; Hu, J. Image Fusion With Guided Filtering. IEEE Trans. Image Process. 2013, 22, 2864–2875.
233. Ham, B.; Cho, M.; Ponce, J. Robust Guided Image Filtering Using Nonconvex Potentials. IEEE Trans. Pattern Anal. Mach. Intell. 2018, 40, 192–207.
234. He, K.; Sun, J.; Tang, X. Guided Image Filtering. IEEE Trans. Pattern Anal. Mach. Intell. 2013, 35, 1397–1409.
235. Yin, H.; Gong, Y.; Qiu, G. Side window guided filtering. Signal Process. 2019, 165, 315–330.
236. Sahu, D.K.; Parsai, M.P. Different Image Fusion Techniques—A Critical Review. Int. J. Mod. Eng. Res. (IJMER) 2012, 2, 4298–4301.
237. Noushad, M.; Preetha, S.L. Image Pair Fusion using Weighted Average Method. Int. J. Sci. Technol. Eng. 2017, 3, 397–402.
238. Song, L.; Lin, Y.; Feng, W.; Zhao, M. A Novel Automatic Weighted Image Fusion Algorithm. In Proceedings of the 2009 International Workshop on Intelligent Systems and Applications, Wuhan, China, 23–24 May 2009.
239. Gorthi, S.; Cuadra, M.B.; Tercier, P.A.; Allal, A.S.; Thiran, J.P. Weighted shape based averaging with neighbourhood prior model for multiple atlas fusion based medical image segmentation. IEEE Signal Process. Lett. 2013, 20, 1034–1037.
240. Jiang, D.; Zhuang, D.; Huang, Y.; Fu, J. Survey of Multispectral Image Fusion Techniques in Remote Sensing Applications. Image Fusion Appl. 2011, 24, 1–23.
241. Haghighat, M.B.A.; Aghagolzadeh, A.; Seyedarabi, H. Multi-focus image fusion for visual sensor networks in DCT domain. Comput. Electr. Eng. 2011, 37, 789–797.
242. Cao, L.; Jin, L.; Tao, H.; Li, G.; Zhuang, Z.; Zhang, Y. Multi-focus image fusion based on spatial frequency in discrete cosine transform domain. IEEE Signal Process. Lett. 2015, 22, 220–224.
243. Naidu, V.P.S. Discrete Cosine Transform based Image Fusion Techniques. J. Commun. Navig. Signal Process. 2012, 1, 35–45.
244. Phamila, Y.A.V.; Amutha, R. Low complexity multifocus image fusion in discrete cosine transform domain. Opt. Appl. 2013, 43.
245. Phamila, Y.A.V.; Amutha, R. Discrete Cosine Transform based fusion of multi-focus images for visual sensor networks. Signal Process. 2014, 95, 161–170.
246. Tang, J. A contrast based image fusion technique in the DCT domain. Digit. Signal Process. 2004, 14, 218–226.
247. Alipour, S.; Houshyari, M.; Mostaar, A. A novel algorithm for PET and MRI fusion based on digital curvelet transform via extracting lesions on both images. Electron. Physician 2017, 9, 4872–4879.
248. Ali, F.E.; El-Dokany, I.M.; Saad, A.A.; El-Samie, F.A. Fusion of MR and CT Images using the Curvelet Transform. In Proceedings of the 25th National Radio Science Conference, Tanta, Egypt, 18–20 March 2008.
249. Guo, L.; Dai, M.; Zhu, M. Multifocus color image fusion based on quaternion curvelet transform. Opt. Express 2012, 20, 18846.
250. Indira, K.P.; Hemamalini, R.R. Analysis on Image Fusion Techniques for Medical Applications. Int. J. Adv. Res. Electr. Electron. Instrum. Eng. 2014, 3, 12051–12057.
251. Nambiar, R.; Desai, U.; Shetty, V. Medical Image Fusion Analysis Using Curvelet Transform. In Proceedings of the International Conference on Advances in Computing, Communication and Information Science (ACCIS-14), Kerala, India, 27–29 August 2014; pp. 1–8.
252. Shah, P.; Merchant, S.N.; Desai, U.B. Fusion of surveillance images in infrared and visible band using curvelet, wavelet and wavelet packet transform. Int. J. Wavel. Multiresol. Inf. Process. 2014, 8, 271–292.
253. Chandana, M.; Amutha, S.; Kumar, N. A Hybrid Multi-focus Medical Image Fusion Based on Wavelet Transform. Int. J. Res. Rev. Comput. Sci. 2011, 2, 948.
254. Chabi, N.; Yazdi, M.; Entezarmahdi, M. An Efficient Image Fusion Method Based on Dual Tree Complex Wavelet Transform. In Proceedings of the 8th Iranian Conference on Machine Vision and Image Processing (MVIP), Zanjan, Iran, 10–12 September 2013; pp. 403–407.
255. Huang, P.; Chen, C.; Chen, P.; Lin, P.; Hsu, L.-P. PET and MRI brain image fusion using wavelet transform with structural information adjustment and spectral information patching. In Proceedings of the 2014 IEEE International Symposium on Bioelectronics and Bioinformatics (IEEE ISBB 2014), Chung Li, Taiwan, 11–14 April 2014; pp. 1–4.
256. Sapkal, R.J.; Kulkarni, S.M. Image fusion based on Wavelet transform for medical application. Int. J. Res. Appl. 2012, 2, 624–627.
257. Siddiqui, U.Z.; Thorat, P.R. A New Approach to Efficient Medical Image Fusion. Int. J. Adv. Res. Comput. Commun. Eng. 2015, 4, 66.
258. Zheng, Y.; Essock, E.; Hansen, B.; Haun, A. A new metric based on extended spatial frequency and its application to DWT based fusion algorithms. Inf. Fusion 2007, 8, 177–192.
259. Udomhunsakul, S.; Yamsang, P.; Tumthong, S.; Borwonwatanadelok, P. Multiresolution Edge Fusion using SWT and SFM. Proc. World Congr. Eng. 2011, 2, 6–8.
260. Kekre, H.-B.; Sarode, T.-K.; Dhannawat, R.-A. Implementation and Comparison of Different Transform Techniques using Kekre’s Wavelet Transform for Image Fusion. Int. J. Comput. Appl. 2012, 4, 41.
261. Davis, D.K.; Roy, R.C. Hybrid Super Resolution using SWT and CT. Int. J. Comput. Appl. 2012, 59, 0975–8887.
262. Singh, R.; Khare, A. Multiscale Medical Image Fusion in Wavelet Domain. Sci. World J. 2013, 2013.
263. Dhannawat, R.A.; Sarode, T.K.; Kekre, H.B. Kekre’s hybrid wavelet transform technique with DCT, Walsh, Hartley and Kekre’s transform for image fusion. Int. J. Comput. Eng. Technol. 2014, 4, 195–202.
264. Sahu, A.; Bhateja, V.; Krishn, A. Medical Image Fusion with Laplacian Pyramids. In Proceedings of the International Conference on Medical Imaging, m-Health and Emerging Communication Systems (MedCom), Greater Noida, India, 7–8 November 2014.
265. Tan, H.; Huang, X.; Tan, H.; He, C. Pixel-Like Image Fusion Algorithm Based On Maximum Likelihood And Laplacian Pyramid Transformation. J. Comput. Inf. Syst. 2013, 9, 327–334.
266. Kakerda, R.K.; Kumar, M.; Mathur, G.; Yadav, R.P.; Maheshwari, J.P. Fuzzy Type Image Fusion Using Hybrid DCT-FFT Based Laplacian Pyramid Transform. In Proceedings of the International Conference on Communications and Signal Processing (ICCSP), Melmaruvathur, India, 2–4 April 2015.
267. Yun, S.H.; Kim, J.H.; Kim, S. Image Enhancement Using a Fusion Framework of Histogram Equalization and Laplacian Pyramid. IEEE Trans. Consum. Electron. 2010, 56, 2763–2771.
268. Wang, W.; Chang, F. A Multi-Focus Image Fusion Method Based On Laplacian Pyramid. J. Comput. 2011, 6, 2559–2566.
269. Olkkonen, H.; Pesola, P. Gaussian Pyramid Wavelet Transform for Multiresolution Analysis of Images. Graph. Models Image Process. 1996, 58, 394–398.
270. Tian, J.; Chen, L.; Ma, L.; Yu, W. Multi-focus image fusion using a bilateral gradient-based sharpness criterion. Opt. Commun. 2011, 284, 80–87.
271. Toet, A. Image fusion by a ratio of low-pass pyramid. Pattern Recognit. Lett. 1989, 9, 245–253.
272. Bai, X.; Zhou, F.; Xue, B. Edge preserved image fusion based on multiscale toggle contrast operator. Image Vis. Comput. 2011, 29, 829–839.
273. Ramac, L.C.; Uner, M.K.; Varshney, P.K.; Alford, M.G.; Ferris, D.D. Morphological filters and wavelet-based image fusion for concealed weapons detection. In Proceedings of the Society of Photo-Optical Instrumentation Engineers (SPIE) Aerospace/Defense, Security, and Sensing, Sensor Fusion: Architectures, Algorithms, and Applications II, Orlando, FL, USA, 20 March 1998; Volume 3376, pp. 110–119.
274. Chandrashekar, L.; Sreedevi, A. A Novel Technique for Fusing Multimodal and Multiresolution Brain Images. Procedia Comput. Sci. 2017, 115, 541–548.
275. Piella, G. A general framework for multiresolution image fusion: From pixels to regions. Inf. Fusion 2003, 4, 259–280.
276. Do, M.N.; Vetterli, M. The contourlet transform: An efficient directional multiresolution image representation. IEEE Trans. Image Process. 2005, 14, 2091–2106.
277. Bhatnagar, G. Directive contrast based multimodal medical image fusion in NSCT domain. IEEE Trans. Multimed. 2013, 15, 1014–1024.
278. Da Cunha, A.L.; Zhou, J.; Do, M.N. The nonsubsampled contourlet transform: Theory, design, and applications. IEEE Trans. Image Process. 2006, 15, 3089–3101.
  279. Yang, Y.; Tong, S.; Huang, S.; Lin, P. Multifocus Image Fusion Based on NSCT and Focused Area Detection. IEEE Sens. J. 2015, 15, 2824–2838. [Google Scholar]
  280. Zhang, Q.; Guo, B.L. Multifocus image fusion using the nonsubsampled contourlet transform. Signal Process. 2009, 89, 1334–1346. [Google Scholar] [CrossRef]
  281. Chai, Y.; Li, H.F.; Qu, J.F. Image fusion scheme using a novel dualchannel PCNN in lifting stationary wavelet domain. Opt. Commun. 2010, 283, 3591–3602. [Google Scholar] [CrossRef]
  282. Javed, U.; Riaz, M.M.; Ghafoor, A.; Ali, S.S.; Cheema, T.A. MRI and PET image fusion using fuzzy logic and image local features. Sci. World J. 2014, 2014. [Google Scholar] [CrossRef] [PubMed]
  283. Jany Shabu, S.L.; Jayakumar, C. Multimodal image fusion using an evolutionary based algorithm for brain tumor detection. Biomed. Res. 2018, 29, 2932–2937. [Google Scholar]
  284. Sui, J.; Adali, T.; Pearlson, G.; Yang, H.; Sponheim, S.R.; White, T.; Calhoun, V.D. A CCA+ICA based model for multi-task brain imaging data fusion and its application to schizophrenia. NeuroImage 2010, 51, 123–134. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  285. Gonzalez-Audicana, M.; Saleta, J.L.; Catalan, R.G. Fusion of multispectral and panchromatic images using improved IHS and PCA mergers based on wavelet decomposition. IEEE Trans. Geosci. Remote Sens. 2004, 42, 1204–1211. [Google Scholar] [CrossRef]
  286. He, C.; Liu, Q.; Li, H.; Wang, H. Multimodal medical image fusion based on HIS and PCA. Procedia Eng. 2010, 7, 280–285. [Google Scholar] [CrossRef] [Green Version]
  287. Zhang, Y.; Hong, G. An IHS and wavelet integrated approach to improve pan-sharpening visual quality of natural colour Ikonos and QuickBird images. Inf. Fusion 2005, 6, 225–234. [Google Scholar] [CrossRef]
  288. Al-Azzawi, N.; Sakim, H.A.; Abdullah, A.K.; Ibrahim, H. Medical Image Fusion Scheme using Complex Contourlet transform based on PCA. In Proceedings of the 31st International conference of the IEEE EMBS, Minneapolis, MN, USA, 2–6 September 2009. [Google Scholar]
  289. Thamarai, M.; Mohanbabu, K. An Improved Image Fusion and Segmentation using FLICM with GA for Medical Diagonosis. Indian J. Sci. Technol. 2016, 9, 12. [Google Scholar] [CrossRef]
  290. Wu, J.; Liu, J.; Tian, J.; Yin, B. Wavelet-based Remote Sensing Image Fusion with PCA and Feature Product. In Proceedings of the 2006 International Conference on Mechatronics and Automation, Luoyang, China, 25–28 June 2006; pp. 2053–2057. [Google Scholar] [CrossRef]
  291. Bedi, S.S.; Khandelwal, R. Comprehensive and Comparative Study of Image Fusion Techniques. Int. J. Soft Comput. Eng. 2013, 3, 300–304. [Google Scholar]
  292. Landau, M.J.; Meier, B.P.; Keefer, L.A. A metaphor-enriched social cognition. Psychol. Bull. 2010, 136, 1045–1067. [Google Scholar] [CrossRef] [Green Version]
  293. Fjell, A.M.; Walhovd, K.B.; Fennema-Notestine, C.; McEvoy, L.K.; Hagler, D.J.; Holland, D.; Brewer, J.B.; Dale, A.M. Alzheimer’s Disease Neuroimaging Initiative (2010) CSF biomarkers in prediction of cerebral and clinical change in mild cognitive impairment and Alzheimer’s disease. J. Neurosci. 2010, 30, 2088–2101. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  294. Edison, P.; Carter, S.F.; Rinne, J.O.; Gelosa, G.; Herholz, K.; Nordberg, A.; Brooks, D.J.; Hinz, R. Comparison of MRI based and PET template based approaches in the quantitative analysis of amyloid imaging with PIB-PET. NeuroImage 2013, 70, 423–433. [Google Scholar]
  295. Marcus, D.S.; Fotenos, A.F.; Csernansky, J.G.; Morris, J.C.; Buckner, R.L. Open access series of imaging studies: Longitudinal MRI data in nondemented and demented older adults. J. Cogn. Neurosci. 2010, 22, 2677–2684. [Google Scholar] [CrossRef] [PubMed]
Figure 1. Schematic diagram of magnetic resonance imaging (MRI) brain image interpretation (top) and overall block diagram of the corresponding computer-aided diagnosis (CAD) system (bottom). (A) Enhancement of the brain signal and definition of regions of interest (ROIs). (B) Extraction of voxel features into a mathematical representation. (C) Reduction of voxel parameters and segmentation into brain regions. (D) Classification and categorization of patients into normal or abnormal classes.
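To make the block diagram concrete, the following minimal sketch mirrors stages B to D with synthetic data: flattened "voxel feature" vectors stand in for stage B, PCA for the parameter reduction of stage C, and an RBF-kernel SVM for the normal/abnormal decision of stage D. The feature dimension, class separation and model choices are illustrative assumptions, not the configuration of any surveyed system.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import Pipeline
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Stage B (illustrative): each "scan" is a flattened vector of voxel features.
normal = rng.normal(0.0, 1.0, size=(40, 100))    # 40 normal subjects
abnormal = rng.normal(1.5, 1.0, size=(40, 100))  # 40 abnormal subjects
X = np.vstack([normal, abnormal])
y = np.array([0] * 40 + [1] * 40)                # 0 = normal, 1 = abnormal

# Stage C: dimensionality reduction; Stage D: binary classification.
cad = Pipeline([("reduce", PCA(n_components=10)),
                ("classify", SVC(kernel="rbf", gamma="scale"))])
cad.fit(X, y)
print(cad.score(X, y))  # training accuracy on this separable toy data
```

On such well-separated synthetic classes the pipeline fits essentially perfectly; real CAD systems would of course report cross-validated performance, as the tables below do.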
Figure 2. Classification of the various MR brain image segmentation methods described in the literature. In this work, the methods were grouped into five categories according to the underlying approach (in green): form, structural, graph theory, region and contour approaches. FCM: Fuzzy C-means, AFCM: Adaptive FCM, MFCM: Modified FCM, FCMS: FCM spatial, FCMSI: FCM with spatial information, FGFCM: Fast generalized FCM, FANTASM: Fuzzy and noise-tolerant adaptive segmentation method, PCM: Possibilistic C-means, BCFCM: Bias-corrected FCM, FPCM: Fuzzy PCM, GA: Genetic algorithm, PSO: Particle swarm optimization, MRF: Markov random field, DRFs: Discriminative random fields, HMM: Hidden Markov models, GMM: Gaussian mixture model, DNN: Deep neural network, CNN: Convolutional neural network, MLP: Multi-layer perceptron.
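Several of the region-based methods in Figure 2 are variants of Fuzzy C-means. As a rough illustration of the base algorithm, the numpy-only sketch below alternates the two standard FCM updates (cluster centers from fuzzified memberships, then memberships from inverse distances); the function name, the fuzzifier m = 2 and the synthetic two-cluster data are assumptions made for the example.

```python
import numpy as np

def fuzzy_c_means(X, c=2, m=2.0, n_iter=100, seed=0):
    """Minimal FCM sketch: alternate center and membership updates."""
    rng = np.random.default_rng(seed)
    u = rng.random((c, len(X)))
    u /= u.sum(axis=0)                       # memberships sum to 1 per sample
    for _ in range(n_iter):
        um = u ** m                          # fuzzified memberships
        centers = um @ X / um.sum(axis=1, keepdims=True)
        d = np.linalg.norm(X[None] - centers[:, None], axis=2) + 1e-12
        w = d ** (-2.0 / (m - 1.0))
        u = w / w.sum(axis=0)                # standard FCM membership update
    return centers, u

# Two well-separated synthetic "tissue" clusters in a 2-D feature space.
rng = np.random.default_rng(42)
X = np.vstack([rng.normal(0.0, 0.1, (50, 2)), rng.normal(5.0, 0.1, (50, 2))])
centers, u = fuzzy_c_means(X, c=2)
print(np.round(centers, 2))   # one center near (0, 0), the other near (5, 5)
```

The possibilistic (PCM) and bias-corrected (BCFCM) variants cited in the survey modify the membership update and the objective function, but keep this same alternating structure.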
Figure 3. MRI brain image classification methods described in the literature. In this work, the classification methods were grouped into two categories (in green) according to the type of learning: supervised and unsupervised. DNN: Deep neural network, CNN: Convolutional neural network, MLP: Multi-layer perceptron, BPNN: Back propagation neural network, PCNN: Pulse-coupled neural network, SVM: Support vector machines, SVDD: Support vector data description, GMM: Gaussian mixture model, HMM: Hidden Markov models.
Figure 4. Multimodal fusion techniques for brain images in the spatial and frequency domains. In this work, image fusion methods were grouped into two categories (in green) according to the domain in which the images are combined: spatial and frequency.
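As a point of reference for the spatial-domain branch of Figure 4, the sketch below implements the two simplest pixel-level rules, weighted averaging and maximum selection, on toy co-registered patches; real systems would apply such rules to registered MRI/PET or MRI/CT volumes, and the array values here are arbitrary.

```python
import numpy as np

def fuse_average(a, b, w=0.5):
    """Spatial-domain fusion by weighted averaging of co-registered images."""
    return w * a + (1 - w) * b

def fuse_max_abs(a, b):
    """Spatial-domain fusion keeping, per pixel, the source with the
    stronger response (a crude salience rule)."""
    return np.where(np.abs(a) >= np.abs(b), a, b)

mri = np.array([[10., 50.], [30., 90.]])   # toy "MRI" patch
pet = np.array([[40., 20.], [80., 10.]])   # toy "PET" patch
print(fuse_average(mri, pet))              # [[25. 35.] [55. 50.]]
print(fuse_max_abs(mri, pet))              # [[40. 50.] [80. 90.]]
```

The frequency-domain methods of Figure 4 (wavelet, pyramid, NSCT) apply comparable selection rules, but to transform coefficients rather than raw pixels, which is what reduces the blurring and contrast loss of plain averaging.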
Table 1. Work related to MRI brain region segmentation using techniques described in the literature. The efficiency of each proposed automatic segmentation algorithm is demonstrated experimentally using quantitative measurement parameters (last column) and by comparison with the visual assessment of a clinical expert (manual segmentation) or with other state-of-the-art algorithms (penultimate column).

| Reference | Segmentation | Database | Comparison | Measuring Parameters |
|---|---|---|---|---|
| *Region approach based techniques* | | | | |
| [20] | MRF | Several Gd-T1, T1 and T2-weighted images from the radiology department of the Poitiers hospital | Clinical expert evaluation | Information criterion, MAP criterion |
| [18] | AFCM | Images from Brainweb (http://www.bic.mni.mcgill.ca/brainweb) | FANTASM | Misclassification rate, mean-squared error |
| [21] | DRF | Several T1, T1c and T2 images from 7 patients | MRF | Jaccard similarity index |
| [22] | SVM | Several T1, T2, PD and FLAIR-weighted images of 45 diabetics | Two experienced neuro-radiologists | Pearson correlation, Spearman correlation, coefficient of variation, reliability coefficient |
| [19] | MFCM | Simulated and real MR images | FCM, FCMS, FCMSI and FGFCM | Similarity index, false positive ratio, false negative ratio |
| [23] | AMS | Images from Brainweb and IBSR (http://www.cma.mgh.harvard.edu/ibsr/) | Adaptive MAP, MPM-MAP | Tanimoto coefficient |
| *Form approach based techniques* | | | | |
| [25] | Multi-context wavelet | Several T1-weighted images with more than 150 slices each | Wavelet, multigrid wavelet | NA |
| [26] | Spherical wavelet | Coronal SPGR images with 124 slices each, of 29 left caudate nucleus structures and 25 left hippocampus structures | Active shape model algorithm and expert neuroanatomist evaluation | Average max error, average min error |
| [24] | Spherical harmonics | PD images of multiple sclerosis patients | Slice stacking technique | Mean error and standard deviation, ANOVA criterion |
| [27] | Level sets | Images of 16 patients from the Singapore National Cancer Center | Region-competition | Jaccard measure, Hausdorff distance, mean absolute surface distance |
| *Graph theory based techniques* | | | | |
| [28] | Hypergraph | Image with 256 × 256 pixel slices | Ncut algorithm | NA |
| [29] | Adaptive graph cut | Ten images from Brainweb | Adaptive MRF-MAP, graph cut | Classification rate |
| *Hybrid approach based segmentation* | | | | |
| [30] | FCM/GA | Several images from the Talairach stereotaxic atlas | Manually labeled images | Mean, standard deviation, false positive ratio, false negative ratio, similarity index and Kappa statistic |
| [31] | Fuzzy/MRF | Images from Brainweb and T1-weighted SPGR | Clinical expert evaluation | Average error rate |
| [32] | Bayesian/AMS | Images from IBSR and Brainweb | k-NN/AMS | Dice and Tanimoto coefficients |
| [33] | PCM/FCM | PET and T1-weighted images from ADNI (http://adni.loni.usc.edu/) | PCM, FCM | Tanimoto coefficient, specificity and sensitivity |
| [34] | PCM/BCFCM | T1-weighted, PET and SPECT scans from Gabriel Montpied Hospital, France | PCM, FCM and hybrid PCM/FCM | Tanimoto coefficient, specificity and sensitivity |
| [35] | PCM/FCM/GA | PET and T1-weighted images from ADNI | PCM, FCM and hybrid PCM/FCM | Tanimoto coefficient, specificity and sensitivity |
| [36] | PCM/FCM/GA (20% noise) | T1-weighted, PET and SPECT scans from Gabriel Montpied Hospital, France, and ADNI | PCM, FCM | Tanimoto coefficient, specificity and sensitivity |
| [37] | FPCM/FCM/GA (20% noise) | SPECT, PET and T1-weighted images from Gabriel Montpied Hospital and ADNI | PCM, FCM and hybrid PCM/FCM | Tanimoto coefficient, specificity and sensitivity |
| [17] | PFCM/BCFCM/GA (20% noise) | PET and T1-weighted images from Gabriel Montpied Hospital and ADNI | FPCM, PCM, FCM and many hybrid clustering algorithms | Tanimoto coefficient, Jaccard similarity index |

MRF: Markov random field, SVM: Support vector machines, FCM: Fuzzy C-means, AFCM: Adaptive FCM, MFCM: Modified FCM, FCMS: FCM spatial, FCMSI: FCM with spatial information, FGFCM: Fast generalized FCM, AMS: Adaptive mean-shift, MAP: Maximum a posteriori probability, MPM: Maximum posterior marginal, FANTASM: Fuzzy and noise-tolerant adaptive segmentation method, DRFs: Discriminative random fields, GA: Genetic algorithm, k-NN: k-nearest neighbor, PCM: Possibilistic C-means, BCFCM: Bias-corrected FCM, FPCM: Fuzzy PCM, Gd-T1: Gadolinium-enhanced T1, PD: Proton density, FLAIR: Fluid attenuation inversion recovery, T1c: T1 after injecting contrast agent, SPGR: Spoiled gradient recall, NA: Not available.
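Several rows of Table 1 evaluate segmentations with the Dice, Jaccard and Tanimoto overlap measures. A minimal sketch of these metrics for binary masks is given below (for binary sets the Tanimoto coefficient coincides with the Jaccard index); the mask values are arbitrary examples.

```python
import numpy as np

def dice(a, b):
    """Dice coefficient between two binary masks: 2|A∩B| / (|A| + |B|)."""
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def jaccard(a, b):
    """Jaccard index |A∩B| / |A∪B| (equals the Tanimoto coefficient
    for binary masks)."""
    a, b = a.astype(bool), b.astype(bool)
    return np.logical_and(a, b).sum() / np.logical_or(a, b).sum()

auto   = np.array([[1, 1, 0], [0, 1, 0]])   # automatic segmentation
manual = np.array([[1, 0, 0], [0, 1, 1]])   # expert (manual) segmentation
# intersection = 2 voxels, |A| = 3, |B| = 3, union = 4
print(dice(auto, manual))     # 0.666...
print(jaccard(auto, manual))  # 0.5
```

Both measures range from 0 (no overlap) to 1 (identical masks), which is why the hybrid clustering rows of Table 1 report them directly as segmentation quality scores.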
Table 2. Visual illustration of MRI brain segmentation using techniques described in the literature. The examples illustrate results obtained with techniques from the five categories: (A) region approach, (B) form approach, (C) graph theory approach, (D) structural approach and (E) contour approach.

(A) Region Approach-Based Methods

- MRI brain segmentation using convolutional neural networks (CNN): (left) original image, (right) segmentation result; illustration reproduced with the permission of Springer Nature [39].
- Segmentation of brain tumors from MRI with support vector machines (SVM): (left) original MRI, (right) full brain segmentation; illustration reproduced with the permission of Springer Nature [21].
- MRI brain image segmentation with the K-means algorithm: (left) original image, (right) segmentation result; illustration reproduced with the permission of Elsevier [62].
- Segmentation of a brain MRI image with the graph cut approach: (left) original MRI, (right) segmentation result; illustration reproduced with the permission of Springer Nature [130].
- Segmentation of a brain MRI image with spherical harmonics: (left) frontal oligodendroglioma, (right) segmentation result; illustration reproduced with the permission of Elsevier [113].
- Tissue segmentation from MRI using stochastic models: (left to right) original MRI, CSF, GM and WM; illustrations reproduced with the permission of PLoS ONE [86].
- Tissue segmentation from MRI using the region growing method: (left to right) original image, grey matter, white matter and CSF; illustration reproduced with the permission of Springer Nature [132].
- Brain tumor segmentation from an MRI image using the threshold method: (left) original image, (right) segmentation result; illustration reproduced with the permission of Elsevier [133].
- Tissue segmentation of a brain MRI image with Gaussian mixture models (GMM): (left) Brainweb sample slice, (middle) ground truth, (right) segmentation result; illustration reproduced with the permission of Springer Nature [134].
- Segmentation of a brain MRI image with mean shift: (left) original MRI, (right) segmentation result; illustration reproduced with the permission of Elsevier [62].
- Brain tumor segmentation from an MRI image using the Fuzzy C-means (FCM) algorithm: (left) original image, (right) segmentation result; illustration reproduced with the permission of Taylor & Francis Ltd. [50].
- Brain tumor segmentation of an MRI image using genetic algorithms (GA): (left) original MRI, (right) segmentation result; illustration reproduced with the permission of Taylor & Francis Ltd. [50].

(B) Form Approach-Based Methods and (E) Contour Approach-Based Methods

- (B) Segmentation of a brain MRI image with the atlas approach; illustration reproduced with the permission of the authors [109].
- (B) Brain MRI image segmentation with the wavelet approach: (left) original MRI, (right) segmentation result; illustration reproduced with the permission of Elsevier [25].
- (B) Segmentation of a brain MRI image with level sets: left hemisphere, right hemisphere and cerebellum; illustration reproduced with the permission of Springer Nature [135].
- (B) Segmentation of white matter in a brain MR image with the level set approach: temporal ordering is from top to bottom, left to right; illustration reproduced with the permission of the author [118].
- (B) Brain MR image segmentation and bias correction with the active contour approach; illustration reproduced with the permission of PLoS ONE [101].
- (E) Multi-scale representation of an MRI brain image: (top row) Gaussian blur scale-space of a sagittal MRI at 128 × 128 resolution, (bottom row) Laplacian scale-space of the same image over the same scale range; illustrations reproduced with the permission of Springer Nature [93].

(C) Graph Theory-Based Methods and (D) Structural Approach-Based Methods

- (C) Brainstem nuclei segmentation with the hypergraph approach: (left to right) original image, brainstem nuclei, predicted labels and segmentation result (automatic segmentation in red contours, manual segmentation in yellow contours); illustration reproduced with the permission of Springer Nature [136].
- (C) Discriminant model-constrained graph cuts approach to fully automated pediatric brain tumor segmentation in 3-D MRI; illustration reproduced with the permission of Springer Nature [131].
- (D) Automated tissue segmentation of brain MRI with the line of watershed: (green) gray matter, (red) white matter, (blue) cerebrospinal fluid; illustration reproduced with the permission of Springer Nature [100].
Table 3. Works described in the literature related to computer-aided diagnosis systems for Alzheimer's disease through MRI. These CAD systems use classification methods based on supervised or unsupervised training. The efficiency of each proposed CAD system is demonstrated by estimating the percentage of the following performance measures (last columns): sensitivity (SE), the true positive rate; specificity (SP), the true negative rate; and accuracy (AC), the proportion of true results, whether true positive or true negative, in the database. In [143], the reported learning time was around a week, while in [148] the reported computation time was 0.0451 s per image.

| Reference | Classification Techniques | Database | AC (%) | SE (%) | SP (%) |
|---|---|---|---|---|---|
| [143] | Classification: SVM (linear basis kernel) with supervised learning | Group 1: 20 AD and 20 HC samples from the Rochester community, Minnesota, USA. Group 2: 14 AD and 14 HC from the Dementia Research Centre, Univ. College London, UK. Group 3: 33 probable mild AD and 57 HC samples from Rochester, Minnesota, USA. Group 4: 19 subjects with pathologically confirmed FTLD | AD: 96; PMAD: 89; FTLD: 89 | NA | NA |
| [144] | Segmentation: hierarchical networks. Classification: SVM with supervised learning | ADNI (http://adni.loni.usc.edu/): 100 P-MCI and 125 HC subjects | MCI: 84.35 | NA | NA |
| [145] | Classification: SVM (leave-1-out CV and 3-fold CV) with supervised learning | 19 AD and 20 HC subjects | AD: 80 | NA | NA |
| [146] | Segmentation: SPM5 software from the Department of Imaging Neuroscience, London, UK, using Student t tests | 25 AD (11 men, 14 women), 24 MCI (10 men, 14 women) and 25 HC (13 men, 12 women) subjects | AD: 84; MCI: 73 | 84; 75 | 84; 70 |
| [147] | Feature extraction: wavelet coefficients. Classification: SOM neural network with unsupervised learning; SVM (linear, polynomial, RBF basis kernel) with supervised learning | AANLIB of Harvard Medical School (http://med.harvard.edu/AANLIB/): 46 AD and 6 HC subjects | AD-SOM: 94; AD-SVM: 98 | NA | NA |
| [148] | Feature extraction: wavelet transform. Segmentation: PCA. Classification: BPNN with supervised learning | 48 subjects with AD, glioma, meningioma, visual agnosia, Pick's disease, sarcoma or Huntington's disease, and 18 HC subjects | 100 | NA | NA |
| [149] | Classification: SVM (bootstrap method) with supervised learning | 16 AD and 22 HC subjects | AD: 94.5 | 91.5 | 96.6 |
| [150] | Feature extraction: random forest. Classification: SVM (bootstrap estimation and 20-fold CV) with supervised learning | 144 AD and 189 HC subjects | AD: 0.97 (AUC) | 89 | 94 |
| [151] | Classification: SVM (leave-one-out CV) with supervised learning | 37 AD and 40 HC subjects | AD: 96.1 | NA | NA |
| [152] | Classification: SVM, Bayes statistics and VFI with supervised learning | 32 AD, 24 MCI and 18 HC subjects | AD: 92; MCI-c: 75 | NA | NA |
| [153] | Feature selection: Pearson's correlation. Classification: SVM (linear basis kernel and leave-one-out CV) with supervised learning | 20 AD and 25 HC subjects from the Hospital de Santiago Apostol | AD: 100 | NA | NA |
| [154] | Feature extraction: MBL. Classification: LDA and SVM with supervised learning | ADNI: 198 AD, 238 S-MCI, 167 progressive MCI and 231 HC subjects | AD-SVM: 86; AD-LDA: NA | 94; 93 | 78; 85 |
| [155] | Feature extraction: SIFT. Segmentation: k-means. Classification: SVM (leave-one-out CV) with supervised learning | 100 AD and 98 HC subjects | AD: 86 | NA | NA |
| [156] | Feature extraction: fractal analysis. Classification: SVM (quadratic kernel) with supervised learning | 13 AD and 10 HC subjects | AD: 100 | NA | NA |
| [7] | Segmentation: hybrid FCM/PCM. Classification: SVM (RBF kernel and leave-one-out CV) with supervised learning | 45 AD and 50 HC subjects from the ADNI phantom with the noisiest images and spatial intensity inhomogeneity | AD-MRI: 75; AD-PET: 73 | 84.87; 86.36 | 81.58; 82.67 |

PMAD: Probable mild AD, SVM: Support vector machines, FTLD: Frontotemporal lobar degeneration, HC: Healthy control, AD: Alzheimer's disease, ADNI: Alzheimer's Disease Neuroimaging Initiative, P-MCI: Probable mild cognitive impairment, S-MCI: Stable MCI, CV: Cross-validation, SPM: Statistical parametric mapping, SOM: Self-organizing maps, RBF: Radial basis function, VFI: Voting feature intervals, PCA: Principal component analysis, BPNN: Back propagation neural network, AUC: Area under curve, MBL: Manifold-based learning, LDA: Linear discriminant analysis, SIFT: Scale-invariant feature transform, FCM: Fuzzy C-means, PCM: Possibilistic C-means.
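The SE, SP and AC columns of Table 3 derive directly from the confusion-matrix counts. A minimal sketch, with arbitrary example counts:

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Sensitivity (true positive rate), specificity (true negative rate)
    and accuracy (proportion of correct results), as percentages."""
    se = 100.0 * tp / (tp + fn)
    sp = 100.0 * tn / (tn + fp)
    ac = 100.0 * (tp + tn) / (tp + fp + tn + fn)
    return se, sp, ac

# e.g. 45 AD scans with 40 detected, 50 HC scans with 44 correctly rejected
se, sp, ac = diagnostic_metrics(tp=40, fp=6, tn=44, fn=5)
print(round(se, 2), round(sp, 2), round(ac, 2))  # 88.89 88.0 88.42
```

Note that accuracy alone can mislead on imbalanced cohorts, which is why the surveyed works report sensitivity and specificity (or AUC) alongside it.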
Table 4. Works described in the literature related to computer-aided diagnosis systems for brain diseases through multimodal fusion. The efficiency of each proposed CAD system is demonstrated experimentally using quantitative measurement parameters (penultimate column) and, for some works, by the reported computation time in seconds (last column).

| Reference | Fusion Approach | Modalities | Performance Criteria | Computation Time (s) |
|---|---|---|---|---|
| [191] | Hybrid NSCT/PCNN | MRI-T1 + MRI-T2 | QMI: 3.9161, QS: 0.6561, QAB/F: 0.6841 | 2.2198 |
| [191] | Hybrid NSCT/PCNN | MRI + CT | QMI: 1.8028, QS: 0.4651, QAB/F: 0.6652 | 2.2220 |
| [192] | NSCT | MRI-T1 + MRI-T2 | QMI: 3.9133, QS: 0.6892, QAB/F: 0.6961 | 2.2189 |
| [192] | NSCT | MRI + CT | QMI: 1.8499, QS: 0.4703, QAB/F: 0.6814 | 2.2245 |
| [183] | NSCT | MRI-T1 + MRI-T2 | QMI: 3.9493, QS: 0.6950, QAB/F: 0.6990 | 2.2194 |
| [183] | NSCT | MRI + CT | QMI: 1.8503, QS: 0.4725, QAB/F: 0.6772 | 2.2198 |
| [183] | PCA | MRI-T1 + MRI-T2 | QMI: 3.6627, QS: 0.6760, QAB/F: 0.6645 | 0.0333 |
| [183] | PCA | MRI + CT | QMI: 2.6001, QS: 0.5133, QAB/F: 0.6092 | 0.0328 |
| [193] | Contourlet | MRI-T1 + MRI-T2 | QMI: 3.8314, QS: 0.6674, QAB/F: 0.6816 | 1.9522 |
| [193] | Contourlet | MRI + CT | QMI: 1.6025, QS: 0.4277, QAB/F: 0.6485 | 1.9682 |
| [194] | Wavelet | MRI-T1 + MRI-T2 | QMI: 3.0773, QS: 0.6585, QAB/F: 0.6176 | 0.0759 |
| [194] | Wavelet | MRI + CT | QMI: 1.5420, QS: 0.4187, QAB/F: 0.5175 | 0.0780 |
| [195] | Hybrid surface/voxel | MRI + PET | RMSE: 0.047796 | 24.2 |
| [195] | Surface | MRI + PET | RMSE: 1.69186 | 3.4 |
| [195] | Voxel | MRI + PET | RMSE: 0.050478 | 639.18 |
| [196] | Wavelet | MRI + CT | PSNR: 72.1172 | NA |
| [196] | Pyramid | MRI + CT | PSNR: 70.1061 | NA |
| [196] | Weighted average | MRI + CT | PSNR: 68.4456 | NA |
| [197] | Weighted average/WT | MRI + CT | Mean: 59.6862, Var: 59.6871, Entropy: 6.7599, Cross-entropy: 0.5632 | NA |
| [197] | Wavelet | MRI + CT | Mean: 32.0674, Var: 32.0678, Entropy: 5.8570, Cross-entropy: 0.8999 | NA |
| [197] | Weighted average/WT | MRI + PET | Mean: 45.3537, Var: 45.3543, Entropy: 5.6779, Cross-entropy: 0.7714 | NA |
| [197] | Wavelet | MRI + PET | Mean: 30.1334, Var: 30.1339, Entropy: 5.3272, Cross-entropy: 0.9629 | NA |
| [198] | Level set | MRI-T1 + MRI-T2 | MI: 1.7176 | NA |
| [198] | Pixel-based | MRI-T1 + MRI-T2 | MI: 1.5540 | NA |
| [198] | Edge detection | MRI-T1 + MRI-T2 | MI: 1.6726 | NA |
| [199] | Bayesian multi-sequence Markov model with adaptive weighted EM | MRI-T1 + MRI-T2 + MRI-FLAIR | DC: 12 (+8) at 9% SNR | NA |
| [200] | SP-MMI algorithm | MRI + SPECT | EFA: 0.001, EFLA: 0.003, EFSA: 0.007 | NA |
| [201] | FCM/MMI algorithm | MRI + SPECT | EFA: 0.06, EFLA: 0.16, EFSA: 0.04 | NA |
| [202] | Region growing approach | MRI-T2 + DTI | DR: edema 2.96, tumor solid 8.07, tumor 11.03 | NA |

QMI: Information theory-based metric, QS: Image structural similarity-based metric, QAB/F: Image feature-based metric, NSCT: Non-subsampled contourlet transform, PCNN: Pulse-coupled neural network, PCA: Principal component analysis, RMSE: Root-mean-square error, PSNR: Peak signal-to-noise ratio, WT: Wavelet transform, MI: Mutual information, DC: Dice similarity coefficient, SP-MMI: Surface-projection maximization of mutual information, MMI: Maximization of mutual information, EFA: Error function of area, EFLA: Error function of long-axis, EFSA: Error function of short-axis, FCM: Fuzzy C-means, DR: WM fiber pixel distribution ratio, NA: Not available.
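Two of the reference-based criteria in Table 4, RMSE and PSNR, can be sketched in a few lines; the peak value of 255 assumes 8-bit grey levels, and the toy arrays are arbitrary:

```python
import numpy as np

def rmse(fused, reference):
    """Root-mean-square error between a fused image and a reference."""
    return float(np.sqrt(np.mean((fused - reference) ** 2)))

def psnr(fused, reference, peak=255.0):
    """Peak signal-to-noise ratio in dB (higher means closer to the
    reference); infinite for identical images."""
    e = rmse(fused, reference)
    return float("inf") if e == 0 else 20.0 * np.log10(peak / e)

ref = np.full((4, 4), 100.0)   # toy reference patch
out = ref + 2.0                # fused result with a constant 2-level error
print(rmse(out, ref))          # 2.0
print(round(psnr(out, ref), 2))
```

Lower RMSE and higher PSNR both indicate a fused image closer to the reference; criteria such as QMI, QS and QAB/F are used instead when no ground-truth fused image exists.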
Table 5. Some works described in the literature related to the CAD systems of Alzheimer disease through multimodal fusion with comparison of performance of single-modal and multimodal classification methods using 10-fold cross-validation. The efficiency of the proposed multimodal CAD system is demonstrated by estimating the percentage of the following performance measures: Sensitivity (SE), which represents the true positive rate; specificity (SP), which estimates the true negative rate; and accuracy (AC), which determines the proportion of true results in the database, whether true positive or true negative. For the same purpose, for some work, the area under ROC curve (AUC) value was estimated which determines the diagnostic validity by combining sensitivity and specificity.
Table 5. Some works described in the literature related to the CAD systems of Alzheimer disease through multimodal fusion with comparison of performance of single-modal and multimodal classification methods using 10-fold cross-validation. The efficiency of the proposed multimodal CAD system is demonstrated by estimating the percentage of the following performance measures: Sensitivity (SE), which represents the true positive rate; specificity (SP), which estimates the true negative rate; and accuracy (AC), which determines the proportion of true results in the database, whether true positive or true negative. For the same purpose, for some work, the area under ROC curve (AUC) value was estimated which determines the diagnostic validity by combining sensitivity and specificity.
Performance criteria are reported in % for the AD vs. HC and MCI vs. HC classification tasks (AC: accuracy, SE: sensitivity, SP: specificity, AUC: area under the ROC curve).

| Reference | Multimodal Classifier | ADNI Subjects | Modalities | AC (AD/HC) | SE (AD/HC) | SP (AD/HC) | AUC (AD/HC) | AC (MCI/HC) | SE (MCI/HC) | SP (MCI/HC) | AUC (MCI/HC) |
|---|---|---|---|---|---|---|---|---|---|---|---|
| [16] | HIS | NA | MRI + PET | 80–90 | NA | NA | NA | NA | NA | NA | NA |
| [10] | MKL | 51 AD + 99 MCI + 52 HC | MRI | 86.2 | 86.0 | 86.3 | NA | 72.0 | 78.5 | 59.6 | NA |
| | | | CSF | 82.1 | 81.9 | 82.3 | NA | 71.4 | 78.0 | 58.8 | NA |
| | | | PET | 86.5 | 86.3 | 86.6 | NA | 74.5 | 81.8 | 66.0 | NA |
| | | | MRI + FDG-PET | 90.6 | 91.4 | 91.6 | NA | 76.4 | 80.4 | 63.3 | NA |
| | | | MRI + FDG-PET + CSF | 93.2 | NA | NA | NA | NA | NA | NA | NA |
| [203] | MKL | 77 AD + 82 HC | MRI | 75.27 | 63.06 | 81.86 | 82.48 | NA | NA | NA | NA |
| | | | FDG-PET | 79.36 | 78.61 | 78.94 | 83.9 | NA | NA | NA | NA |
| | | | MRI + FDG-PET | 81.0 | 78.52 | 81.76 | 88.5 | NA | NA | NA | NA |
| [204] | MKL | 48 AD + 66 HC | MRI + FDG-PET | 87.6 | NA | NA | NA | NA | NA | NA | NA |
| | | | MRI + FDG-PET + CSF + ApoE + cognitive scores | 92.4 | NA | NA | NA | NA | NA | NA | NA |
| [205] | SCLDA model | 49 AD + 67 HC | MRI + FDG-PET | 94.3 | NA | NA | NA | NA | NA | NA | NA |
| [206] | M3T | 45 AD + 91 MCI + 50 HC | MRI | 84.8 | NA | NA | NA | 73.9 | NA | NA | NA |
| | | | FDG-PET | 84.5 | NA | NA | NA | 79.7 | NA | NA | NA |
| | | | CSF | 80.5 | NA | NA | NA | 53.6 | NA | NA | NA |
| | | | MRI + FDG-PET + CSF | 93.3 | NA | NA | NA | 83.2 | NA | NA | NA |
| [207] | Multivariate analysis of OPLS | 96 AD + 162 MCI + 111 HC | MRI | 87.0 | 83.3 | 90.1 | 0.930 | 71.8 | 66.7 | 79.3 | 78.26 |
| | | | CSF | 81.6 | 84.4 | 79.3 | 0.861 | 70.3 | 66.7 | 75.7 | 77.06 |
| | | | MRI + CSF | 91.8 | 88.5 | 94.6 | 0.958 | 77.6 | 72.8 | 84.7 | 87.6 |
| [185] | Random forest | 37 AD + 75 MCI + 35 HC | MRI | 82.5 | 88.6 | 75.6 | NA | 67.3 | 64.3 | 73.9 | NA |
| | | | FDG-PET | 86.4 | 85.8 | 87.1 | NA | 53.5 | 42.3 | 78.0 | NA |
| | | | CSF | 76.1 | 72.8 | 79.8 | NA | 61.7 | 61.6 | 61.8 | NA |
| | | | Genetic | 72.6 | 71.3 | 74.1 | NA | 73.8 | 94.7 | 26.6 | NA |
| | | | MRI + PET + CSF + genetic | 89.0 | 87.9 | 90.0 | NA | 74.6 | 77.5 | 67.9 | NA |
| [208] | Multitask feature selection + MKL | 51 AD + 99 MCI + 52 HC | MRI | 91.10 | 91.57 | 92.88 | 96.55 | 73.54 | 81.01 | 65.38 | 78.26 |
| | | | FDG-PET | 91.02 | 89.02 | 90.58 | 95.84 | 72.08 | 75.56 | 59.23 | 77.06 |
| | | | MRI + FDG-PET | 94.37 | 94.71 | 94.04 | 97.24 | 78.80 | 84.85 | 67.06 | 82.84 |
| [188] | Multivariate modeling + SVM | 50 MCI | MRI | NA | NA | NA | NA | 67 | 37 | 87 | NA |
| | | | FDG-PET | NA | NA | NA | NA | 62 | 10 | 97 | NA |
| | | | PIB-PET | NA | NA | NA | NA | 45 | 45 | 80 | NA |
| | | | MRI + PIB-PET | NA | NA | NA | NA | 76 | 53 | 90 | NA |
| | | | MRI + FDG-PET | NA | NA | NA | NA | 37 | 37 | 90 | NA |
| [184] | Multimodal biomarker classifier + SVM | 95 AD + 182 MCI + 111 HC | MRI | 73 | NA | NA | 78 | 70 | NA | NA | 68 |
| | | | CSF | 82 | NA | NA | 85 | 74 | NA | NA | 77 |
| | | | MRI + CSF | 84 | NA | NA | 90 | 77 | NA | NA | 78 |
| | | | MRI + CSF + ApoE | 85 | NA | NA | 88 | 79 | NA | NA | 79 |
| [187] | M2TFS + MKL | 51 AD + 99 MCI + 52 HC | MRI | 88.68 | 84.51 | 92.50 | 94 | 73.12 | 78.28 | 63.65 | 79 |
| | | | FDG-PET | 84.42 | 83.53 | 84.81 | 91 | 67.11 | 75.96 | 50.19 | 72 |
| | | | CSF | 82.26 | 82.55 | 81.54 | 87 | 70.72 | 71.62 | 69.04 | 75 |
| | | | MRI + FDG-PET | 95.00 | 94.90 | 95.00 | 97 | 79.27 | 85.86 | 66.54 | 82 |
| | | | MRI + FDG-PET + CSF | 95.40 | 94.71 | 95.77 | 98 | 82.99 | 89.39 | 70.77 | 84 |
| [186] | MKL with multitask feature learning | 51 AD + 99 MCI + 52 HC | MRI | 92.25 | 92.16 | 92.12 | 96 | 73.84 | 77.27 | 66.92 | 77 |
| | | | FDG-PET | 91.65 | 92.94 | 90.19 | 96 | 74.34 | 85.35 | 53.46 | 78 |
| | | | MRI + FDG-PET | 95.95 | 95.10 | 96.54 | 97 | 80.26 | 84.95 | 70.77 | 81 |
| [209] | M3 | 16 AD + 22 HC | MRI + PET | 89.47 | 87.5 | 90.91 | NA | NA | NA | NA | NA |
| [210] | DNN | 180 AD + 160 MCI + 204 HC | MRI | 82.59 | 86.83 | 77.78 | NA | 71.98 | 49.52 | 84.31 | NA |
| | | 85 AD + 67 MCI + 77 HC | MRI + FDG-PET | 91.40 | 92.32 | 90.42 | NA | 82.10 | 60.00 | 92.32 | NA |
| [211] | CNN | 93 AD + 100 HC | MRI + PET | 89.64 | 87.1 | 92 | 94.45 | NA | NA | NA | NA |
| [17] | SVDD | 77 AD + 82 HC | MRI | 88.15 | 89.02 | 90.18 | 95.00 | NA | NA | NA | NA |
| | | | FDG-PET | 85.16 | 86.84 | 84.14 | 92.04 | NA | NA | NA | NA |
| | | | MRI + FDG-PET | 93.65 | 90.08 | 92.75 | 97.30 | NA | NA | NA | NA |
FDG: fluorodeoxyglucose, CSF: cerebrospinal fluid, PIB: Pittsburgh compound B, ApoE: apolipoprotein E ε4 allele, HIS: hue-intensity-saturation, MKL: multi-kernel learning, SCLDA: sparse composite linear discriminant analysis, M3T: multimodal multitask, OPLS: orthogonal partial least squares, M2TFS: manifold regularized multitask feature learning, M3: multimodal imaging and multi-level characteristics with multi-classifier, DNN: deep neural network, CNN: convolutional neural network, SVDD: support vector data description.
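The AC, SE, and SP values compared above all derive from the binary confusion matrix of a classifier evaluated on patients (AD or MCI) versus healthy controls. The following minimal Python sketch shows how these criteria are computed; the confusion-matrix counts used in the example are hypothetical and do not come from any of the cited studies:

```python
def binary_metrics(tp, fn, tn, fp):
    """Accuracy (AC), sensitivity (SE), and specificity (SP), in %.

    tp / fn: patients (AD or MCI) classified correctly / missed;
    tn / fp: healthy controls classified correctly / falsely flagged.
    """
    ac = 100.0 * (tp + tn) / (tp + fn + tn + fp)  # fraction of all subjects correct
    se = 100.0 * tp / (tp + fn)                   # true-positive rate on patients
    sp = 100.0 * tn / (tn + fp)                   # true-negative rate on controls
    return ac, se, sp

# Hypothetical cohort of 51 AD patients and 52 HC (sizes echo the ADNI
# subsets used in several studies above; the counts themselves are invented).
ac, se, sp = binary_metrics(tp=46, fn=5, tn=48, fp=4)
print(round(ac, 1), round(se, 1), round(sp, 1))  # → 91.3 90.2 92.3
```

Note that SE and SP move independently of AC: a classifier that labels almost everyone as healthy can score a high AC on an imbalanced cohort while its SE collapses, which is why the table reports all three criteria (plus AUC, which summarizes the SE/SP trade-off across decision thresholds).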

Share and Cite

MDPI and ACS Style

Lazli, L.; Boukadoum, M.; Mohamed, O.A. A Survey on Computer-Aided Diagnosis of Brain Disorders through MRI Based on Machine Learning and Data Mining Methodologies with an Emphasis on Alzheimer Disease Diagnosis and the Contribution of the Multimodal Fusion. Appl. Sci. 2020, 10, 1894. https://doi.org/10.3390/app10051894
