Article

A Transfer Learning Approach: Early Prediction of Alzheimer’s Disease on US Healthy Aging Dataset

by
Kishor Kumar Reddy C
1,
Aarti Rangarajan
2,
Deepti Rangarajan
2,
Mohammed Shuaib
3,
Fathe Jeribi
3,* and
Shadab Alam
3
1
Department of Computer Science and Engineering, Stanley College of Engineering and Technology for Women, Hyderabad 500001, Telangana, India
2
Department of Electronics and Communication Engineering, Stanley College of Engineering and Technology for Women, Hyderabad 500001, Telangana, India
3
Department of Computer Science, College of Engineering and Computer Science, Jazan University, Jazan 45142, Saudi Arabia
*
Author to whom correspondence should be addressed.
Mathematics 2024, 12(14), 2204; https://doi.org/10.3390/math12142204
Submission received: 29 June 2024 / Revised: 8 July 2024 / Accepted: 10 July 2024 / Published: 13 July 2024

Abstract:
Alzheimer’s disease (AD) is a growing public health crisis: a global health concern and an irreversible, progressive neurodegenerative disorder of the brain for which there is still no cure. Globally, it accounts for 60–80% of dementia cases, raising the need for accurate and effective early classification. The proposed work used a healthy aging dataset from the USA and focused on three transfer learning approaches: VGG16, VGG19, and Alex Net. This work leveraged the convolutional and pooling layers to improve performance and reduce overfitting, despite the challenges of training on a numerical dataset. VGG was preferred for the hidden layers because its deeper yet simpler and more uniform architecture performs better on larger datasets while consuming less memory and training time. A comparative analysis was performed against machine learning and neural network algorithm techniques. Performance metrics such as accuracy, error rate, precision, recall, F1 score, sensitivity, specificity, kappa statistics, ROC, and RMSE were measured and compared. The accuracy was 100% for VGG16 and VGG19 and 98.20% for Alex Net. The precision was 99.9% for VGG16, 96.6% for VGG19, and 100% for Alex Net; the recall was 99.9% for all three of VGG16, VGG19, and Alex Net; and the sensitivity was 96.8% for VGG16, 97.9% for VGG19, and 98.7% for Alex Net, outperforming existing approaches for the classification of Alzheimer’s disease. This research contributes to the advancement of predictive knowledge, leading to future empirical evaluation, experimentation, and testing in the biomedical field.

1. Introduction

Alzheimer’s disease is a pressing global issue, with an alarming prediction of 131.5 million individuals worldwide projected to be affected by the year 2050 [1]. This condition instigates small strokes in the brain, leading to gradual cell deterioration and nerve complications [2]. While its primary causes are linked to factors such as age, lifestyle choices, and health-related parameter variations like blood pressure and diabetes, detecting the disease early poses a significant challenge [3], and accurate diagnosis remains elusive [4]. The brain, responsible for processing information, retaining memories, solving intricate puzzles, and facilitating communication with other organs, is profoundly impacted by Alzheimer’s. In the realm of treatment, acetylcholinesterase inhibitors like donepezil are prescribed for mild to severe dementia, while memantine is recommended for moderate to severe cases. The diagnostic journey encompasses steps such as detection, evaluation, diagnosis, and treatment, with the initial assessments crucial for patient care [5]. A robust health detection model holds the key to identifying internal health issues at an early stage, thus enabling timely intervention. Research indicates that the entorhinal cortex is among the primary areas affected in the early stages of Alzheimer’s progression. Data from the World Health Organization [6] in 2023–2024 showcases country-specific male and female mortality rates per 100,000 related to Alzheimer’s disease, offering insights into the differential impact across populations. Figure 1 presents a comparative analysis of male and female mortality rates in the form of a bar graph, shedding light on the gender-specific implications of this pervasive disease.
In countries with a large population, like the United States of America [2], there are currently 4.5 million individuals affected by Alzheimer’s disease, a number estimated to increase to 14 million by 2050. One study [1] on dementia suggested an asymptomatic phase lasting 6–10 weeks after the onset, indicating the presence of preclinical Alzheimer’s disease before functional impairment is evident. A follow-up clinical study [7], spanning four years, showed that 29.1% of patients in a preclinical stage progressed to MCI. A notable case study involved a 63-year-old Caucasian patient diagnosed with Alzheimer’s disease, characterized by cognitive decline, reduced arithmetic capabilities, and clinical evidence of decreased t-tau and p-tau tangles. Diagnostic imaging techniques such as MRI and PET scans are crucial for detecting structural and molecular changes in the brain [8]. MRI assessments can identify alterations in grey and white matter composition in the brain using the RBF-SVM method [9], calculating the volumes of brain regions with the ADNI dataset and employing multi-atlas propagation for refined segmentation. Machine learning methods, including radial-basis classifiers, neural networks in deep learning frameworks, and decision tree classifiers, have been used to predict disease progression, including splits into groups such as Alzheimer’s disease vs. healthy controls and early mild cognitive impairment [10] vs. late mild cognitive impairment, with an accuracy of 89% for each dataset, evaluated using cross-validation with n-fold and k-fold stratification in the classification techniques. In [11], a neural network model based on a deep learning framework predicted modeling choices using three classifiers: linear single-year models, MCI-to-AD prediction models, and non-linear single-year models, progressing from CN to MCI.
In [12], the authors compared the decision tree classifier, which repeatedly splits data based on cut-off values; random forest; the support vector machine, with a hyperplane separating two categories of variables; and gradient boosting, with XGBoost used to maximize speed and efficiency. Voting classifiers, which choose the outcome with the majority of votes, combine different datasets and algorithms to predict outcomes accurately and efficiently. The OASIS dataset was used for brain disorder diagnosis, involving 150 right-handed patients aged 60–96 years with attributes of gender, age, and clinical dementia rating to determine whether a person was classified as having dementia or not. Cross-validation calculates accuracy with (n − 1) folds analyzed ‘n’ times. This research [13] provided insights into medial temporal lobe atrophy, a neurodegenerative sign of Alzheimer’s, using coronal MRI slices covering the temporal lobe with a CNN approach to classify AD and controls using 2D images as input data. The authors in [14] differentiated between types of dementia like vascular, Lewy body, frontotemporal, and mixed dementia and concluded that random forest with grid-search cross-validation was better than the other algorithms. Moreover, in [15], feature selection methods like Bestfirst and CfssubsetEval, along with algorithms such as Naïve Bayes, logistic regression, SMO/SMV, and random forest techniques, were crucial for predicting cognitive impairment. Advanced model evaluation measures were utilized to assess the performance of classification models, ensuring accurate predictions regarding cognitive health and disease progression. The authors of [16] examined stage-wise data classification and collection, data processing and feature extraction, and segmentation performed through SPM12 software, as well as data acquisition achieved through an MR scanner.
In recent years, academicians and researchers have shown interest in computer-assisted learning methodologies to analyze and predict disease using medical data. The surveyed algorithms apply traditional analysis techniques like deep learning, XGBoost, support vector machine, decision tree, logistic regression, KNN, random forest, transfer learning, voting classifiers, Laser-Induced Breakdown Spectroscopy, and Multi-Layer Perceptron. Amyloid and tau-tangle image processing techniques [1] are not readily available to process and classify diseases. The algorithm’s performance [2], including its precision, recall, and F1 score, was evaluated on males and females using KNN, logistic regression, SVM, and AdaBoost, but these were not efficient. The study in [10] lacked well-defined methods to interpret the deep-learning models used for clinical decision-making. The accuracy decreased with increasing time horizons for CN subjects [11], and molecular biomarkers were ineffective for CN compared to MCI. The OASIS-based study [12] was limited to algorithms like random forest and decision tree, and their applicability to later stages of the disease was not explored. The study in [17] limited itself only to textural radiomics features extracted from grey matter probability volumes in subcortical regions, restricting the brain features considered. The authors of [18] explored the performance of the deep neural network model on other neurodegenerative diseases or conditions beyond AD. The performance of the models used with the CNN [13] under cross-dataset validation was slightly lower than under within-dataset validation. The RNN model in [19] requires significant computational resources due to the complexity of deep neural networks, a limitation for real-time applications. The interpretability of results from deep neural networks can be challenging compared to traditional statistical models [20].
Further, the quality of EEG data [21], if noisy or corrupted, affects the performance of the model, and a limited dataset may give incorrect results. MLP was used in [22] for the classification of patients, but further validation with larger and more diverse cohorts is necessary to confirm the robustness and reliability of the classification method.
The limitations of machine learning algorithms are data dependency, complex algorithms, and overfitting [23]. To overcome them, the Visual Geometry Group-16 (VGG16), Visual Geometry Group-19 (VGG19), and Alex Net transfer learning approaches are used, as they are simple, unified, and provide pre-trained models with greater depth. The proposed transfer learning model, with the VGG16, VGG19, and AlexNet modalities, was built by choosing inputs and evaluating them on healthy aging data from the Behavioral Risk Factor Surveillance System [18] to classify and predict Alzheimer’s disease and enhance diagnostic accuracy by providing a more comprehensive view of an individual’s health. These algorithms are, by and large, used for image data; adapting them to numerical data involves several steps, such as data reshaping, adaptation of the input layer, data processing by the convolutional layers, and training. VGG16 reshapes the data and feeds it into the convolutional layers, and the model is trained from scratch or fine-tuned when used exclusively on numerical data. The convolutional layers’ output is fed into the fully connected layers. VGG19 has 19 layers, and its greater depth makes capturing complex features and hierarchical representations easier. VGG16 and VGG19 use 3 × 3 convolutional and max pooling layers, ensuring that they capture broader and better features. Reshaped data are fed into the VGG19 architecture, whose input layer must be modified to accept the reshaped dimensions. The convolutional layers process the data, applying filters to the reshaped feature arrangement. Extensive hyperparameter tuning and regularization can be used to reduce overfitting.
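The reshaping step described above can be sketched in a few lines of pure Python. This is a hedged illustration only: the 3 × 3 target grid, the zero-padding, and the feature values are assumptions for demonstration, not the paper's exact configuration.

```python
# Sketch of reshaping a tabular feature vector into a 2D grid so that a
# convolutional network (e.g., VGG16) can process it. Grid size and
# padding strategy are illustrative assumptions.

def reshape_features(features, side):
    """Pad a flat feature vector with zeros and reshape to side x side."""
    padded = features + [0.0] * (side * side - len(features))
    return [padded[r * side:(r + 1) * side] for r in range(side)]

row = [0.2, 0.5, 0.1, 0.9, 0.3, 0.7]     # 6 numeric features from one record
grid = reshape_features(row, side=3)     # 3x3 "image" for the conv layers
# grid -> [[0.2, 0.5, 0.1], [0.9, 0.3, 0.7], [0.0, 0.0, 0.0]]
```

In practice, the input layer of the network would then be modified to accept this grid's dimensions, as the text notes.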
The detailed analysis of the existing relevant work, along with the models and their demerits, is specified in Table 1. The exhaustive literature survey was based on neural networks and machine learning algorithms, as they provide better accuracy than the present traditional approaches. The data were split, with 80% used for training.
The main contributions of the proposed work are summarized below:
  • The chosen healthy aging dataset was pre-processed; the transfer learning approach was used; and the VGG16, VGG19, and Alex Net algorithms were applied to classify Alzheimer’s disease (AD), employing 10-fold cross-validation.
  • A reliable and efficient model was trained, tested, evaluated, and compared with the existing neural network and machine learning models.
  • Data were validated, and performance evaluation was described in a confusion matrix. The metrics evaluated were accuracy, error rate, precision, recall, F1 score, sensitivity, specificity, Kappa statistics, ROC, and RMSE. These were contrasted with existing algorithms like K-Nearest Neighbors, logistic regression, decision tree classifiers, random forest classifier, XGBoost, support vector machine, AdaBoost, voting classifiers, Laser-Induced Breakdown Spectroscopy, Bidirectional LSTM, Naïve Bayes with feature selection, Gaussian NB, CAE, and CHAID.
The rest of the paper is organized as follows: Section 2 gives insights into the proposed methodology, and Section 3 discusses the results and lists the metric comparisons in tabular form, including the neural network approaches used to predict and classify Alzheimer’s disease. Section 4 concludes the paper, followed by the references.

2. Materials and Methods

The dataset is available on Kaggle, an enormous repository of datasets covering domains such as healthcare, finance, and the social sciences. The sample source is an extract from Alzheimer’s Disease and Healthy Aging Data in the United States, which contains data from the Behavioral Risk Factor Surveillance System (BRFSS) [28], a telephone survey of United States citizens on chronic health conditions whose values are tabulated. The data were pre-processed; the raw dataset initially had about 250,000 rows and 16 tabular columns. A sample dataset, before feature engineering and normalization, is tabulated in Table 2.

2.1. Dataset Description

  • Start of year: the year in which the data were collected.
  • Location: the location and area where the person currently lives.
  • Topic: the health condition covered, such as mental health conditions like mental distress, depression, and memory loss.
  • Age Range: the age range of the older adults affected by these conditions.
  • Gender: whether the person’s sex is male or female.
  • Low_Confidence (Low_C), High_Confidence (High_C), and Data_Confidence (Data_C): the lower confidence limit, upper confidence limit, and data value that bound the probability of the algorithm’s predictions.
  • Class: whether the person is categorized as having a mental health condition or cognitive decline.
The proposed architecture for the model of Alzheimer’s disease, illustrated in Figure 2, shows the architecture of VGG16 model building and evaluation stage that includes the convolutional stages of VGG16, VGG19, and Alex Net.

2.2. Dataset Pre-Processing

The raw dataset had missing values; using data handling and feature engineering, missing values were removed, deleted, or transformed. The data were pre-processed using feature engineering after removing attributes such as serial number, start year, and location, as indicated in Table 3. The sample dataset after pre-processing is shown in Table 3, which presents the filled dataset extracted from Alzheimer’s Disease and Healthy Aging Data in the US, containing data from the Behavioral Risk Factor Surveillance System (BRFSS) [28] that were extracted and transformed. Data analysis was used to clean the data, remove the independent attributes, and transform and model the dataset into a format suitable for processing.
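The pre-processing steps above can be sketched in pure Python. This is a minimal illustration under assumptions: the column names (`serial_number`, `start_year`, `location`, etc.) are hypothetical stand-ins for the actual BRFSS fields, and rows with any missing value are simply dropped.

```python
# Sketch of pre-processing: drop independent attributes, then discard
# records with missing values. Column names are illustrative only.

DROP_COLS = {"serial_number", "start_year", "location"}

def preprocess(rows):
    """Drop unused attributes, then remove rows containing missing values."""
    cleaned = []
    for row in rows:
        kept = {k: v for k, v in row.items() if k not in DROP_COLS}
        if all(v is not None and v != "" for v in kept.values()):
            cleaned.append(kept)
    return cleaned

raw = [
    {"serial_number": 1, "start_year": 2019, "location": "US",
     "age_range": "65-74", "gender": "Female", "class": "Cognitive Decline"},
    {"serial_number": 2, "start_year": 2020, "location": "US",
     "age_range": None, "gender": "Male", "class": "Mental Health"},
]
clean = preprocess(raw)   # second record dropped (missing age_range)
```

A real pipeline would also transform and normalize the surviving values, as the text describes.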
The correlation matrix uses a statistical technique to evaluate relationships between variables in a dataset, including direction (positive or negative) and strength (low, medium, or high), with the goal of identifying patterns. The correlation coefficient (r) always lies between −1 (strongly negative) and +1 (strongly positive), with 0 indicating no relationship. The correlation matrix improves accuracy, while the confusion matrix helps in understanding the data; each cell represents the correlation between two variables. On analysis, if the characteristics of the variables are related, an investigatory examination of the variables predicts whether they are dependent or independent. A heatmap visualizes data in a 2-dimensional format as coloured maps, using hue and saturation for colour variation and replacing the numeric data with colours. The lower triangular matrix shown in Figure 3 has unique pairs of variables excluding the diagonal and displays the lower triangle to visualize the matrix without redundancy. Clustered correlation, or hierarchical clustering of correlations, is used when dealing with larger datasets, grouping variables based on correlation patterns, as visualized in Figure 4.
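The correlation coefficient underlying each cell of such a matrix can be computed directly. A minimal pure-Python sketch of the Pearson r, with illustrative values standing in for two of the dataset's confidence-limit columns:

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient; always lies in [-1, +1]."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Illustrative values only, not actual dataset columns:
low_c  = [1.0, 2.0, 3.0, 4.0]
high_c = [2.1, 4.2, 5.9, 8.1]
r = pearson_r(low_c, high_c)   # close to +1: strong positive relationship
```

Computing r for every pair of columns yields exactly the matrix that the heatmap in Figure 5 visualizes.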
The correlation heatmap represents the direction and strength of relationships between variables and is shown in Figure 5. According to the analysis of heatmap, strongly correlated image variables appear darker, and weakly correlated blocks will have lighter colors or neutral shades.

2.3. Model Building and Evaluation

VGG16 is the most preferred CNN model for the transfer learning approach: a predictive model pre-trained on different datasets speeds up the training process, with layers frozen to avoid destroying learned information. It has a uniform architecture with 138 million parameters. With an input of 224 × 224 × 3, it undergoes a series of convolutional operations, with 2 × 2 max pooling at a stride of 2 applied between the input and output layers. The spatial size is progressively reduced from 224 to 7. Table 4 shows the VGG16 layers and their pooling dimensions. The benefit of using multiple smaller layers is that the activation function accompanying the convolutional layers helps the network converge quickly, which reduces the tendency to overfit during training. VGG16 has 13 convolutional layers and three fully connected layers, of which the first two have 4096 channels and the third has 1000 channels.
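The reduction from 224 to 7 follows directly from the five 2 × 2, stride-2 pooling stages. A small sketch verifying the arithmetic:

```python
# VGG16 halves the spatial size at each of its 5 max-pooling stages
# (2x2 window, stride 2), taking the 224x224 input down to 7x7.

def pooled_size(size, stages, window=2, stride=2):
    """Spatial size after repeatedly applying a pooling layer."""
    for _ in range(stages):
        size = (size - window) // stride + 1
    return size

sizes = [pooled_size(224, s) for s in range(6)]
# sizes -> [224, 112, 56, 28, 14, 7]
```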

2.3.1. VGG16 for Alzheimer’s Disease Classification

The VGG16 architecture in Figure 6 includes the input, convolutional layers, ReLU function, hidden layers, pooling layers, and fully connected layers. An input of 224 × 224 was used, which the model’s creators kept constant by cropping from the center of each image. Convolutional layers used the smallest possible receptive field of 3 × 3, with 1 × 1 convolutions used for linear transformation of the input. ReLU (Rectified Linear Unit) activation was used to reduce training time, passing positive inputs through unchanged and outputting zero for negative inputs, and activating the VGG network’s hidden layers. After convolution, a convolutional stride of 1 pixel was used to preserve spatial resolution. VGG rather than Alex Net was used for the hidden layers because Alex Net increases memory consumption and training time with the trade-off of only a very small accuracy increase. The pooling layers reduce spatial dimensionality while the number of filters grows from 64 to 128, 256, and up to 512. Practical uses of VGG16 include image recognition and classification, image detection and localization, and image embedding vectors.
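The 2 × 2, stride-2 pooling operation referred to above can be shown on a toy feature map. This is a minimal pure-Python sketch of max pooling, not the framework implementation:

```python
def max_pool_2x2(m):
    """2x2 max pooling with stride 2 over a 2D list (feature map)."""
    return [
        [max(m[i][j], m[i][j + 1], m[i + 1][j], m[i + 1][j + 1])
         for j in range(0, len(m[0]), 2)]
        for i in range(0, len(m), 2)
    ]

feature_map = [
    [1, 3, 2, 0],
    [4, 2, 1, 5],
    [0, 1, 3, 2],
    [2, 6, 1, 1],
]
pooled = max_pool_2x2(feature_map)   # [[4, 5], [6, 3]]
```

Each output cell keeps only the strongest activation in its 2 × 2 window, which is how the spatial dimensionality is halved at each pooling stage.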

2.3.2. VGG19 for Alzheimer’s Disease Classification

VGG19 is an extension of VGG16 with a deeper layered architecture, offering more accuracy at the cost of additional computational resources. VGG19 has 16 convolutional layers organized into 5 blocks, with intermittent max pooling. The fully connected layers have 4096 neurons with ReLU activation, and the output layer has 1000 neurons with softmax activation, as shown in Figure 7. It takes more training time, and the choice between the two depends on the specific requirements of the task and the available computational resources. The ReLU function shown in Equation (1) simplifies the computation of gradients, which is helpful for deep neural networks. The activation function A[l] = ReLU(Z[l]) used in these networks is the ReLU (Rectified Linear Unit): when the input is greater than zero, it returns the input unchanged; otherwise, it returns zero. Its range is shown in Equation (2). The main objective of the activation function is to induce non-linearity in the data.
ReLU(v) = max(0, v)     (1)
ReLU(v) = { 0, if v < 0; v, if v ≥ 0 }     (2)
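As a minimal illustration of Equations (1) and (2), ReLU can be implemented in one line:

```python
def relu(v):
    """ReLU: identity for positive inputs, zero otherwise (Equations (1)-(2))."""
    return max(0.0, v)

outputs = [relu(v) for v in [-2.0, -0.5, 0.0, 1.5, 3.0]]
# outputs -> [0.0, 0.0, 0.0, 1.5, 3.0]
```

Negative pre-activations are zeroed while positive ones pass through unchanged, which is what induces the non-linearity mentioned above.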
The VGG16 algorithm for Alzheimer’s classification is outlined below as pseudo-code (Algorithm 1).
Algorithm 1: VGG16 Algorithm for Alzheimer’s disease Classification
  • Input Layer: Accept Numerical Data Input and normalize if necessary // csv file
  • Outputs: Comparison Table, Results, and Graphs
  • Convolutional Blocks:
    • Multiple convolutional operations followed by ReLu Activation function, use 3 × 3 kernels with same padding.
    • Batch normalization, max-pooling 2 × 2 window with stride = 2 after each convolutional layer block.
    • Equations
      Convolutional operation: Z[l] = W[l] · A[l−1] + b[l]
      W[l]—weights, b[l]—bias, A[l−1]—activation of the previous layer, A[l]—activation of the current layer
      A[l] = MaxPool(A[l−1])
  • Fully Connected Layer:
    • Flatten convolutional block’s output, stack, use ReLu activation function for hidden layers.
      A[l] = Flatten(A[l−1])
  • Output Layer: Dataset classes and use softmax activation for multi-class classification.
    • A[l] = Softmax(Z[l])
  • Model Compilation: identify the loss function (cross-entropy for classification), choose an optimizer (Adam, SGD) and its parameters, and specify metrics.
    J = −(1/m) Σ_{i=1..m} Σ_{c=1..C} y_i(c) · log(a_i(c))
    m—number of examples, C—number of classes, y_i(c)—true label, a_i(c)—predicted probability
  • Model Training:
    • Batch fed training data, optimize weights with back propagation.
    • Train and validate, adjust hyper parameters, if needed.
  • Model Evaluation: Train dataset and evaluate performance metrics.
  • Fine Tuning and Transfer Learning: Unfreeze layers and retrain the model, using transfer learning with weights pre-trained on larger datasets.
  • Deployment: Deploy trained models; optimize for speed and memory footprint via quantization and model pruning.
  • Model Usage: Deploy on unseen data.
The VGG19 algorithm for Alzheimer’s classification is shown below, which uses convolutional layers of 3 × 3 filters to preserve the spatial resolution of the input (Algorithm 2). It is a CNN architecture that has made a significant contribution to deep learning techniques in computer vision and image classification. The algorithm of Alex Net for Alzheimer’s classification is shown below (Algorithm 3), and its corresponding convolutional layers for VGG19 and AlexNet are shown in Figure 7 and Figure 8.
Algorithm 2: VGG19 Algorithm for Alzheimer’s disease classification
  • Input Layer—Accept numerical input as data and normalize when needed.
  • Convolutional blocks—
    • Stack multiple convolutional layers with ReLU activation function, using 3 × 3 ‘same’ padding and max pooling.
      Z_conv = convolution(W_conv, A_prev) + b_conv
      W_conv—weights, A_prev—activation of the previous layer, b_conv—bias
      A_conv = ReLU(Z_conv)
      A_pool = MaxPool(A_conv)
    • Apply max-pooling 2 × 2 window with stride = 2 after every two convolutional layers.
  • Fully Connected Layers—
    Flatten the last convolutional block’s output and stack fully connected layers with ReLU activation.
    A_flatten = Flatten(A_pool)
  • Output Layer—dense layer with softmax activation.
  • Loss Function—cross-entropy for multiclass classification.
    J = −(1/m) Σ [y log(A_output) + (1 − y) log(1 − A_output)]
  • Optimization—Stochastic Gradient Descent or another optimizer.
    W = W − α · dW
    b = b − α · db
    α—learning rate; dW and db—gradients of the loss function with respect to the weight and bias parameters
  • Training—Feed data in batches, compute the loss, back-propagate gradients, update weights, and repeat until convergence.
  • Evaluate model on separate test dataset, calculate the metrics.
  • Deploy model for new data inference.
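The weight update in the optimization step above (W = W − α · dW, b = b − α · db) can be shown as a minimal sketch; the gradients and learning rate here are made-up numbers for illustration:

```python
def sgd_step(w, b, dw, db, lr=0.01):
    """One SGD update: W = W - alpha*dW, b = b - alpha*db."""
    w_new = [wi - lr * gi for wi, gi in zip(w, dw)]
    b_new = b - lr * db
    return w_new, b_new

w, b = [0.5, -0.3], 0.1
w, b = sgd_step(w, b, dw=[0.2, -0.1], db=0.05, lr=0.1)
# w -> approx [0.48, -0.29], b -> approx 0.095
```

During training this step is repeated for every batch until the loss converges, as the algorithm describes.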
Algorithm 3: Alex Net Algorithm for Alzheimer’s disease classification
  • Input Layer—Accept numerical input as data and normalize as and when needed.
  • Convolutional Layers—Apply ReLU function and repeat for additional convolutional layers.
    Z_conv1 = convolution(W_conv1, A_prev) + b_conv1
    A_conv1 = ReLU(Z_conv1)
    Max pooling is then applied.
  • Fully Connected Layers—flatten the convolutional output and apply the ReLU activation function.
  • Compute Linear Transformation and apply loss function, optimize, train, and evaluate model.
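The convolutional operation Z_conv = convolution(W_conv, A_prev) + b_conv shared by all three algorithms can be illustrated in one dimension. This is a simplified sketch under assumptions (a 1D signal, a single channel, "valid" sliding of the kernel), not the networks' actual 2D implementation:

```python
def conv1d(weights, a_prev, bias):
    """Valid 1D convolution: slide the kernel and compute W * A_prev + b."""
    k = len(weights)
    return [
        sum(w * a for w, a in zip(weights, a_prev[i:i + k])) + bias
        for i in range(len(a_prev) - k + 1)
    ]

z = conv1d([1.0, 0.0, -1.0], [3.0, 1.0, 4.0, 1.0, 5.0], bias=0.5)
# z -> [-0.5, 0.5, -0.5]
```

Each output value is a weighted sum over a local window plus the bias; ReLU and pooling are then applied to this Z, exactly as in the pseudo-code.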

3. Results and Discussion

Data visualization was performed with the correlation matrix, which used a heatmap to correlate the variables. The proposed VGG16, VGG19, and Alex Net models were tested on Kaggle datasets taken from Alzheimer’s Disease (AD) and Healthy Aging Data in the US [28]. After pre-processing, the dataset had 31,676 instances, and the model was built using Python 3.7.0. The experimental setup comprised 8 GB of RAM, a 64-bit operating system, an Intel Core i5 processor, and Windows 11.
Pre-clinical Stage 1 is the starting stage, and Stage 6 is severe AD dementia. The need is to encourage people to reduce their sedentary lifestyle by being physically active, taking part in social activities, maintaining a healthy and balanced lifestyle and diet, and keeping mind and body active. Diet and exercise for early symptomatic AD aim to maintain cognitive function through the intake of green leafy vegetables, proteins, and nuts, together with physical exercise. Cholinesterase inhibitors are used; donepezil is used for Alzheimer’s treatment, and the disease can be detected through scanning techniques like CT (computed tomography) and PET (positron emission tomography) [29]. Predicted labels help classify and evaluate the model’s correctness and accuracy. Therefore, the model’s performance was measured through accuracy, precision, F1 score, and recall [30]. To analyze the key indicator metrics of the models, we used a confusion matrix, which gave the correctness and incorrectness of the model. The true positives and true negatives give correctly predicted values, while the false positives and false negatives give wrongly predicted values. The training and testing confusion matrices generated over AD and healthy aging data in the US are visualized in Figure 9 for VGG16 and Figure 10 for VGG19, and Figure 11 displays the Alex Net confusion matrix. On training, the model accuracy was 99.9%, and on testing, the accuracy was 99.89%.
Accuracy was measured as true positives (TP) plus true negatives (TN) over the total instances (TP + TN + FP + FN). Accuracy gives the proportion of correct outcomes among all outcomes. However, accuracy alone cannot evaluate the model; therefore, we used other parameters, such as precision, F1 score, sensitivity, specificity, and recall. Results are tabulated in Table 5 and shown graphically in Figure 12. Equation (3) gives the accuracy, i.e., how often the model makes correct predictions across the classes in the dataset. The error rate gives the model’s degree of prediction error with respect to the true model, and Equation (4) gives the error rate formula.
Accuracy = (TN + TP) / (TN + TP + FN + FP)     (3)
Error Rate = 100% − Accuracy     (4)
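Equations (3) and (4) translate directly into code; the confusion-matrix counts below are made-up values for illustration:

```python
def accuracy(tp, tn, fp, fn):
    """Equation (3): correct predictions over all predictions."""
    return (tp + tn) / (tp + tn + fp + fn)

def error_rate(tp, tn, fp, fn):
    """Equation (4): 100% minus accuracy, as a percentage."""
    return 100.0 - 100.0 * accuracy(tp, tn, fp, fn)

acc = accuracy(tp=95, tn=90, fp=5, fn=10)    # 185/200 = 0.925
err = error_rate(tp=95, tn=90, fp=5, fn=10)  # 7.5 (%)
```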
The proposed models, VGG16, VGG19, and Alex Net, outperformed existing neural network approaches: the accuracy was 98.20% for VGG19 and 100% for VGG16 and Alex Net, and the error rate was very minimal for the neural network approaches.
Results are tabulated in Table 6, and a graphical sketch of accuracy and error rate is shown in Figure 13 for the neural network approaches. VGG16 and Alex Net achieved 99.9% and VGG19 98.8%, outperforming the existing neural network approaches, as graphed in Figure 14. Figure 15 shows the training and testing accuracy for the proposed VGG16 model; each epoch is a complete pass through the entire training set, after which the model’s parameters are updated, and the model is trained over multiple epochs, with values tabulated in Table 7.
The performance measures of VGG19 and Alex Net are tabulated in Table 8 and Table 9, with graphical approaches in Figure 15, Figure 16, Figure 17 and Figure 18. After each epoch, the model’s performance was evaluated on a separate dataset called the validation dataset, which was used to detect underfitting or overfitting. The number of additional epochs may be decided based on performance and convergence area criteria.
The precision values of the different algorithms are given in Table 10. A high precision value means that the model’s positive predictions are mostly correct, while a low value suggests that its false-positive predictions are numerous. Precision is calculated as true positives (TP) over all positive predictions; a value near 1 indicates a good classifier. Figure 19 presents a graphical sketch of the precision of the proposed models, 99.9% for VGG16, 96.6% for VGG19, and 100% for Alex Net, in comparison with existing neural network and ML approaches. Equation (5) specifies the precision parameter, indicating the model’s ability to avoid false-positive predictions.
Precision = TP / (TP + FP)     (5)
Recall values are given in Table 11. A value of 1 (high) for recall, also called the true-positive rate, characterizes effective binary classification models, while a low recall value suggests that the model misses a significant number of instances. Figure 20 shows the graphical aspect of recall. The recall value must be high for scenarios like banking and medical diagnosis. The proposed VGG16, VGG19, and Alex Net models achieved 99.9%, outperforming the existing neural network and ML approaches for recall. Equation (6) gives the recall metric, indicating the proportion of instances correctly classified by the model.
Recall = TP / (TP + FN)     (6)
The F1 score is a performance metric of the classification model that combines precision and recall, as given in Table 12; it is calculated as the harmonic mean of precision and recall, taking both false positives and false negatives into account. A high F1 score indicates good precision and recall, while a low F1 score indicates a lack of one or both. A line graph is shown in Figure 21. The proposed models achieved 99.9% for VGG16, 98.2% for VGG19, and 98.8% for Alex Net, outperforming the existing neural network and ML approaches on the F1 score parameter. Equation (7) gives the F1 score, which combines precision and recall into a single metric, providing a balance between them.
F1 Score = 2 × (Recall × Precision) / (Recall + Precision)
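Equations (5)–(7) can be computed directly from confusion-matrix counts. The following is a minimal sketch, not the authors' implementation; the counts used in the example are hypothetical, not values from the experiments:

```python
def precision(tp, fp):
    # Equation (5): fraction of positive predictions that are correct
    return tp / (tp + fp)

def recall(tp, fn):
    # Equation (6): fraction of actual positives that are recovered
    return tp / (tp + fn)

def f1_score(tp, fp, fn):
    # Equation (7): harmonic mean of precision and recall
    p, r = precision(tp, fp), recall(tp, fn)
    return 2 * r * p / (r + p)

# Illustrative confusion-matrix counts only (not the study's values)
tp, fp, fn = 90, 10, 10
print(round(precision(tp, fp), 3))   # 0.9
print(round(recall(tp, fn), 3))      # 0.9
print(round(f1_score(tp, fp, fn), 3))  # 0.9
```

Because the F1 score is a harmonic mean, it is dragged down sharply by whichever of precision or recall is lower, which is why it is preferred over the arithmetic mean for imbalanced medical datasets.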
Table 13 compares the sensitivity values of the algorithms. Sensitivity measures how well a model detects positive instances, as shown in Figure 22. The proposed models achieved 96.8% for VGG16, 97.9% for VGG19, and 98.9% for Alex Net, outperforming the existing neural network and ML approaches on the sensitivity metric. Equation (8) defines sensitivity, the model's ability to identify the positive instances in a dataset.
Sensitivity = TP / (TP + FN)
Table 14 reports the specificity score, the ratio of true negatives to all actual negatives in the dataset, with the corresponding graphical analysis in Figure 23. Specificity, also called the true-negative rate, measures how well the model identifies negative instances. The proposed models achieved 96.5% for VGG16, 97.7% for VGG19, and 98.8% for Alex Net, outperforming the existing neural network and ML approaches on specificity. Equation (9) defines specificity, which complements sensitivity by capturing the negative instances in a dataset.
Specificity = TN / (TN + FP)
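Sensitivity and specificity (Equations (8) and (9)) are the two complementary per-class rates. A minimal sketch with hypothetical counts (not the study's values) follows:

```python
def sensitivity(tp, fn):
    # Equation (8): true-positive rate, proportion of positives detected
    return tp / (tp + fn)

def specificity(tn, fp):
    # Equation (9): true-negative rate, proportion of negatives detected
    return tn / (tn + fp)

# Illustrative confusion-matrix counts (hypothetical)
tp, fn, tn, fp = 97, 3, 96, 4
print(sensitivity(tp, fn))  # 0.97
print(specificity(tn, fp))  # 0.96
```

Note that specificity uses false positives in the denominator: it asks how many of the actual negatives were correctly rejected.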
To further assess performance, Cohen's kappa was used. Cohen's kappa measures reliability, indicating how well the agreement in the data exceeds what would be expected by chance. It is computed from Equation (10), where Pr(a) is the observed agreement and Pr(e) is the chance agreement; the resulting value k lies between 0 and 1.
k = (Pr(a) − Pr(e)) / (1 − Pr(e))
Agreement was graded by value: 0.10–0.20 as slight, 0.21–0.40 as fair, 0.41–0.60 as moderate, 0.61–0.80 as substantial, and 0.81–1.00 as almost perfect agreement [22]. The kappa statistics are compared in Table 15, with the corresponding graph shown in Figure 24.
The proposed models achieved kappa values of 0.96 for VGG16, 0.95 for VGG19, and 0.96 for Alex Net, outperforming the existing neural network and ML approaches on Cohen's kappa. The kappa statistics were compared with Naïve Bayes, Logistic Regression, SVM, C4.5, and CHAID from surveys [15,31] and are tabulated.
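Equation (10) can be sketched as follows; the agreement values below are hypothetical, chosen only to illustrate how kappa discounts chance agreement:

```python
def cohen_kappa(pr_a, pr_e):
    # Equation (10): agreement beyond chance, scaled by the maximum
    # achievable agreement beyond chance (1 - Pr(e))
    return (pr_a - pr_e) / (1 - pr_e)

# Hypothetical: 98% observed agreement against 50% chance agreement
print(round(cohen_kappa(0.98, 0.50), 2))  # 0.96
```

A classifier that merely guesses the majority class can show high raw agreement; kappa subtracts Pr(e) so that such a classifier scores near 0 rather than near 1.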
The receiver operating characteristic (ROC) curve measures classification performance; values are tabulated in Table 16, with the graphical analysis in Figure 25. The curve captures the trade-off between the true-positive rate (sensitivity) and the false-positive rate (1 − specificity) at numerous threshold settings. The larger the area under the ROC curve (AUC), the better the model's performance. The curve is obtained by plotting the true- and false-positive rates at diverse threshold levels. ROC values for Alzheimer's classification were compared across algorithms, shown in tabular form, against Naïve Bayes, Logistic Regression, SVM, random forest, and MLP [11]. Equation (11) gives the proportion of actual positive cases correctly identified by the classifier, and Equation (12) the proportion of actual negative cases incorrectly flagged as positive.
TPR = TP / (TP + FN)
FPR = FP / (FP + TN)
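The threshold sweep behind an ROC curve can be sketched as below. This is an illustrative implementation, not the authors' code, and the scores and labels are hypothetical:

```python
def roc_points(scores, labels, thresholds):
    # At each threshold, count the confusion-matrix cells and
    # compute TPR (Equation (11)) and FPR (Equation (12)).
    points = []
    for t in thresholds:
        tp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 1)
        fn = sum(1 for s, y in zip(scores, labels) if s < t and y == 1)
        fp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 0)
        tn = sum(1 for s, y in zip(scores, labels) if s < t and y == 0)
        points.append((fp / (fp + tn), tp / (tp + fn)))  # (FPR, TPR)
    return points

# Hypothetical classifier scores and true labels
scores = [0.9, 0.8, 0.3, 0.1]
labels = [1, 1, 0, 0]
print(roc_points(scores, labels, [0.5]))  # [(0.0, 1.0)]
```

A perfect separator like this toy example passes through (FPR, TPR) = (0, 1), the top-left corner of the ROC plot, giving an AUC of 1.00 as reported for the proposed models in Table 16.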
The root mean square error (RMSE) measures the accuracy of a regression model as the square root of the average squared difference between the predicted and actual values. RMSE for Alzheimer's disease classification was compared across algorithms [1], including Naïve Bayes, Logistic Regression, SVM, random forest, and MLP; the values are shown in Table 17, with the graphical display in Figure 26. The proposed approach, with an RMSE of only 1.30%, outperformed the existing approaches; the lower the RMSE, the more accurate the regression model.
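The RMSE definition above can be sketched in a few lines; the prediction and target vectors here are hypothetical:

```python
import math

def rmse(predicted, actual):
    # Square root of the mean squared difference between
    # predictions and targets
    return math.sqrt(
        sum((p - a) ** 2 for p, a in zip(predicted, actual)) / len(actual)
    )

# Hypothetical predictions vs. targets
print(round(rmse([1.0, 2.0, 3.0], [1.0, 2.0, 5.0]), 3))  # 1.155
```

Because the errors are squared before averaging, RMSE penalizes large individual misses more heavily than the mean absolute error does.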

4. Conclusions

Alzheimer’s is a progressive neurodegenerative disease that primarily affects cognitive, memory, and behavioral functions; it accounts for 60–80% of dementia cases worldwide, so reducing risk through early diagnosis is important. Preventing or delaying the onset helps preserve cognitive health and quality of life, enhancing the well-being of society. This study of Alzheimer’s disease diagnosis focused on the proposed transfer learning approaches: the VGG16, VGG19, and Alex Net models. The models were trained on the US Healthy Aging dataset for Alzheimer’s disease (AD), with the data split into separate training and testing datasets.
  • The three proposed transfer learning methods were VGG16, VGG19, and AlexNet. VGG16 and VGG19 are well known for their depth and small convolutional filters, which allow them to learn intricate patterns in images; they are highly accurate but need a lot of memory. Alex Net is faster and less resource-intensive, trading off some accuracy.
  • In the evaluation and comparison, the three proposed methods outperformed the existing approaches on the experimented metrics: accuracy of 100% for VGG16, 100% for VGG19, and 98.20% for Alex Net; precision of 99.9% for VGG16, 96.6% for VGG19, and 100% for Alex Net; recall of 99.9% for VGG16, VGG19, and Alex Net; F1 score of 99.9% for VGG16, 98.2% for VGG19, and 99.9% for Alex Net; sensitivity of 96.8% for VGG16, 97.9% for VGG19, and 98.9% for Alex Net; specificity of 96.5% for VGG16, 97.7% for VGG19, and 98.8% for Alex Net; and kappa statistics of 0.96 for VGG16, 0.95 for VGG19, and 0.96 for Alex Net. The RMSE was nil for VGG16 and Alex Net and negligible for VGG19.
  • VGG16 was computationally intensive, with a slow inference time due to its depth; VGG19 had higher memory usage and an even slower inference time due to its increased depth. AlexNet had a simpler, faster-to-train architecture but compromised on accuracy.
The proposed study has a beneficial outcome, and continued research is required to identify disease onset and assess the impact of various approaches. Efforts to address Alzheimer’s disease, a growing concern, lend significance to research and development on its causes, treatment, and potential cure. Public health initiatives include raising awareness, improving early diagnosis, and providing support for patients and caregivers. New screening and diagnostic tools could ultimately lower the burden on specialists and ensure patients are diagnosed in a timely manner.
The proposed model has a few limitations. VGG16, VGG19, and AlexNet are complicated for numerical data: their large parameter counts increase computational cost and training time. The proposed numerical data lack the spatial relationships that convolutional filters exploit, so the models may be less effective on them; the models are also prone to overfitting if the dataset is not large. Finally, the algorithms may lack the feature-importance estimates often required for numerical data, which can be seen as a limitation.

Author Contributions

Conceptualization, K.K.R.C. and A.R.; Methodology, K.K.R.C., D.R. and A.R.; Software, A.R. and D.R.; Validation, D.R.; Formal analysis, M.S.; Investigation, M.S. and F.J.; Resources, F.J.; Data curation, M.S.; Writing—original draft, K.K.R.C. and A.R.; Writing—review & editing, F.J. and S.A.; Visualization, S.A.; Supervision, S.A.; Project administration, F.J. and A.R.; Funding acquisition, F.J. All authors have read and agreed to the published version of the manuscript.

Funding

The authors gratefully acknowledge the funding of the Deanship of Graduate Studies and Scientific Research, Jazan University, Saudi Arabia, through Project Number: GSSRD-24.

Data Availability Statement

Data from ADNI, AIBL, PPMI, NACC, and OASIS can be downloaded from publicly available resources. Source data for the figures are provided in this paper and were taken from the Kaggle open-source dataset [18].

Conflicts of Interest

All the authors certify that there are no conflicts of interest.

Abbreviations

AD: Alzheimer’s disease
BRFSS: Behavioral Risk Factor Surveillance System (dataset)
OASIS: Open Access Series of Imaging Studies
CNN: Convolutional Neural Network
WHO: World Health Organization
APP: Amyloid Protein Precursor
VGG16: Visual Geometry Group-16
VGG19: Visual Geometry Group-19
ML: Machine learning
ROC: Receiver operating characteristic
RMSE: Root mean square error
PSEN1: Presenilin-1
PSEN2: Presenilin-2
NFT: Neurofibrillary tangles
CAE: Convolutional Auto Encoder
CHAID: Chi-Squared Automatic Interaction Detection
EOAD: Early-Onset Alzheimer’s Disease
PET: Positron Emission Tomography
CT: Computed Tomography
KNN: K-Nearest Neighbors
CN: Cognitively Normal
MCI: Mild Cognitive Impairment
EMCI: Early Mild Cognitive Impairment
LMCI: Late Mild Cognitive Impairment
MRI: Magnetic Resonance Imaging
SNUBH: Seoul National University Bundang Hospital

References

  1. Porsteinsson, A.P.; Isaacson, R.S.; Knox, S.; Sabbagh, M.N.; Rubino, I. Diagnosis of Early Alzheimer’s Disease: Clinical Practice in 2021. J. Prev. Alzheimer’s Dis. 2021, 8, 371–386. [Google Scholar] [CrossRef]
  2. Malavika, G.; Rajathi, N.; Vanitha, V.; Parameswari, P. Alzheimer Disease Forecasting Using Machine Learning Algorithm. Biosc. Biotech. Res. Comm. Spec. Issue 2020, 13, 15–19. [Google Scholar]
  3. Hakami, F.; Madkhali, M.A.; Saleh, E.; Ayoub, R.; Moafa, S.; Moafa, A.; Alnami, B.; Maashi, B.; Khubrani, S.; Busayli, W. Awareness and Perception toward Alzheimer’s Disease among Residents Living in the Jazan Province, Saudi Arabia: A Cross-Sectional Study. Cureus 2023, 15, e44505. [Google Scholar] [CrossRef] [PubMed]
  4. Alam, S.; Bhatia, S.; Shuaib, M.; Khubrani, M.M.; Alfayez, F.; Malibari, A.A.; Ahmad, S. An Overview of Blockchain and IoT Integration for Secure and Reliable Health Records Monitoring. Sustainability 2023, 15, 5660. [Google Scholar] [CrossRef]
  5. Elmahdy, M.H.; Ajeebi, M.E.; Hudisy, A.A.; Madkhali, J.M.; Madkhali, A.M.; Hakami, A.A. Knowledge and Attitudes towards Dementia among Clinical Years Medical Students at Jazan University: A Cross-Sectional Study. Int. J. Innov. Res. Med. Sci. 2020, 5, 95–99. [Google Scholar] [CrossRef]
  6. World Health Rankings. Available online: https://www.worldlifeexpectancy.com/cause-of-death/alzheimers-dementia/by-country/ (accessed on 8 July 2024).
  7. Cho, S.H.; Woo, S.; Kim, C.; Kim, H.J.; Jang, H.; Kim, B.C.; Kim, S.E.; Kim, S.J.; Kim, J.P.; Jung, Y.H. Disease Progression Modelling from Preclinical Alzheimer’s Disease (AD) to AD Dementia. Sci. Rep. 2021, 11, 4168. [Google Scholar] [CrossRef] [PubMed]
  8. Liu, J.; Hlávka, J.; Hillestad, R.J.; Mattke, S. Assessing the Preparedness of the US Health Care System Infrastructure for an Alzheimer’s Treatment; RAND: Santa Monica, CA, USA, 2017; ISBN 0833099469. [Google Scholar]
  9. Toshkhujaev, S.; Lee, K.H.; Choi, K.Y.; Lee, J.J.; Kwon, G.-R.; Gupta, Y.; Lama, R.K. Classification of Alzheimer’s Disease and Mild Cognitive Impairment Based on Cortical and Subcortical Features from MRI T1 Brain Images Utilizing Four Different Types of Datasets. J. Healthc. Eng. 2020, 2020, 3743171. [Google Scholar] [CrossRef]
  10. Venugopalan, J.; Tong, L.; Hassanzadeh, H.R.; Wang, M.D. Multimodal Deep Learning Models for Early Detection of Alzheimer’s Disease Stage. Sci. Rep. 2021, 11, 3254. [Google Scholar] [CrossRef]
  11. Karaman, B.K.; Mormino, E.C.; Sabuncu, M.R.; Initiative, A.D.N. Machine Learning Based Multi-Modal Prediction of Future Decline toward Alzheimer’s Disease: An Empirical Study. PLoS ONE 2022, 17, e0277322. [Google Scholar] [CrossRef]
  12. Kavitha, C.; Mani, V.; Srividhya, S.R.; Khalaf, O.I.; Tavera Romero, C.A. Early-Stage Alzheimer’s Disease Prediction Using Machine Learning Models. Front. Public Health 2022, 10, 853294. [Google Scholar] [CrossRef]
  13. Bae, J.B.; Lee, S.; Jung, W.; Park, S.; Kim, W.; Oh, H.; Han, J.W.; Kim, G.E.; Kim, J.S.; Kim, J.H. Identification of Alzheimer’s Disease Using a Convolutional Neural Network Model Based on T1-Weighted Magnetic Resonance Imaging. Sci. Rep. 2020, 10, 22252. [Google Scholar] [CrossRef]
  14. Javeed, A.; Dallora, A.L.; Berglund, J.S.; Ali, A.; Ali, L.; Anderberg, P. Machine Learning for Dementia Prediction: A Systematic Review and Future Research Directions. J. Med. Syst. 2023, 47, 17. [Google Scholar] [CrossRef]
  15. Rajayyan, S.; Mustafa, S.M.M. Comparative Analysis of Performance Metrics for Machine Learning Classifiers with a Focus on Alzheimer’s Disease Data. Acta Inform. Pragensia 2023, 12, 54–70. [Google Scholar] [CrossRef]
  16. Ebadi, A.; Dalboni da Rocha, J.L.; Nagaraju, D.B.; Tovar-Moll, F.; Bramati, I.; Coutinho, G.; Sitaram, R.; Rashidi, P. Ensemble Classification of Alzheimer’s Disease and Mild Cognitive Impairment Based on Complex Graph Measures from Diffusion Tensor Images. Front. Neurosci. 2017, 11, 56. [Google Scholar] [CrossRef]
  17. Toro, C.A.O.; Sánchez, N.G.; Gonzalo-Martín, C.; García, R.G.; González, A.R.; Ruiz, E.M. Radiomics Textural Features Extracted from Subcortical Structures of Grey Matter Probability for Alzheimers Disease Detection. In Proceedings of the 2019 IEEE 32nd International Symposium on Computer-Based Medical Systems (CBMS), Cordoba, Spain, 5–7 June 2019; pp. 391–397. [Google Scholar]
  18. Prajapati, R.; Khatri, U.; Kwon, G.R. An Efficient Deep Neural Network Binary Classifier for Alzheimer’s Disease Classification. In Proceedings of the 2021 international conference on artificial intelligence in information and communication (ICAIIC), Jeju Island, Republic of Korea, 13–16 April 2021; pp. 231–234. [Google Scholar]
  19. Liu, X.; Li, J.; Cao, P. Modeling Disease Progression with Deep Neural Networks. In Proceedings of the Fourth International Symposium on Image Computing and Digital Medicine, Shenyang, China, 5–8 December 2020; pp. 32–34. [Google Scholar]
  20. Alqahtani, N.; Alam, S.; Aqeel, I.; Shuaib, M.; Mohsen Khormi, I.; Khan, S.B.; Malibari, A.A. Deep Belief Networks (DBN) with IoT-Based Alzheimer’s Disease Detection and Classification. Appl. Sci. 2023, 13, 7833. [Google Scholar] [CrossRef]
  21. Vicchietti, M.L.; Ramos, F.M.; Betting, L.E.; Campanharo, A.S.L.O. Computational Methods of EEG Signals Analysis for Alzheimer’s Disease Classification. Sci. Rep. 2023, 13, 8184. [Google Scholar] [CrossRef]
  22. Lin, S.-K.; Hsiu, H.; Chen, H.-S.; Yang, C.-J. Classification of Patients with Alzheimer’s Disease Using the Arterial Pulse Spectrum and a Multilayer-Perceptron Analysis. Sci. Rep. 2021, 11, 8882. [Google Scholar] [CrossRef]
  23. Kirola, M.; Memoria, M.; Shuaib, M.; Joshi, K.; Alam, S.; Alshanketi, F. A Referenced Framework on New Challenges and Cutting-Edge Research Trends for Big-Data Processing Using Machine Learning Approaches. In Proceedings of the 2023 International Conference on Smart Computing and Application (ICSCA), Hail, Saudi Arabia, 5–6 February 2023; pp. 1–5. [Google Scholar]
  24. Bari Antor, M.; Jamil, A.H.M.S.; Mamtaz, M.; Monirujjaman Khan, M.; Aljahdali, S.; Kaur, M.; Singh, P.; Masud, M. A Comparative Analysis of Machine Learning Algorithms to Predict Alzheimer’s Disease. J. Healthc. Eng. 2021, 2021, 9917919. [Google Scholar] [CrossRef]
  25. Basheer, S.; Bhatia, S.; Sakri, S.B. Computational Modeling of Dementia Prediction Using Deep Neural Network: Analysis on OASIS Dataset. IEEE Access 2021, 9, 42449–42462. [Google Scholar] [CrossRef]
  26. Marzban, E.N.; Eldeib, A.M.; Yassine, I.A.; Kadah, Y.M.; Initiative, A.D.N. Alzheimer’s Disease Diagnosis from Diffusion Tensor Images Using Convolutional Neural Networks. PLoS ONE 2020, 15, e0230409. [Google Scholar] [CrossRef] [PubMed]
  27. Ortiz, A.; Lozano, F.; Gorriz, J.M.; Ramirez, J.; Martinez Murcia, F.J.; Initiative, A.D.N. Discriminative Sparse Features for Alzheimer’s Disease Diagnosis Using Multimodal Image Data. Curr. Alzheimer Res. 2018, 15, 67–79. [Google Scholar] [CrossRef]
  28. Alzheimers Disease And Healthy Aging Dataset. Available online: https://www.kaggle.com/datasets/ssarkar445/alzheimers-disease-and-healthy-aging (accessed on 14 February 2024).
  29. Arafah, A.; Khatoon, S.; Rasool, I.; Khan, A.; Rather, M.A.; Abujabal, K.A.; Faqih, Y.A.; Rashid, H.; Rashid, S.M.; Bilal Ahmad, S.; et al. The Future of Precision Medicine in the Cure of Alzheimer’s Disease. Biomedicines 2023, 11, 335. [Google Scholar] [CrossRef]
  30. Tabassum, S.; Kotnala, C.B.; Masih, R.K.; Shuaib, M.; Alam, S.; Alar, T.M. Performance Analysis of Machine Learning Techniques for Predicting Water Quality Index Using Physiochemical Parameters. In Proceedings of the 2023 International Conference on Sustainable Computing and Smart Systems (ICSCSS), Coimbatore, India, 14–16 June 2023; pp. 372–377. [Google Scholar]
  31. Saputra, R.A.; Agustina, C.; Puspitasari, D.; Ramanda, R.; Pribadi, D.; Indriani, K. Detecting Alzheimer’s Disease by the Decision Tree Methods Based on Particle Swarm Optimization. In Proceedings of the Journal of Physics: Conference Series; IOP Publishing: Bristol, UK, 2020; Volume 1641, p. 12025. [Google Scholar]
  32. Jahan, S.; Abu Taher, K.; Kaiser, M.S.; Mahmud, M.; Rahman, M.S.; Hosen, A.S.M.S.; Ra, I.-H. Explainable AI-Based Alzheimer’s Prediction and Management Using Multimodal Data. PLoS ONE 2023, 18, e0294253. [Google Scholar] [CrossRef]
  33. Monica Moore, M.; Díaz-Santos, M.; Vossel, K. Alzheimer’s Association 2021 Facts and Figures Report; Alzheimer’s Association: Washington, DC, USA, 2021. [Google Scholar]
  34. Oh, K.; Chung, Y.-C.; Kim, K.W.; Kim, W.-S.; Oh, I.-S. Classification and Visualization of Alzheimer’s Disease Using Volumetric Convolutional Neural Network and Transfer Learning. Sci. Rep. 2019, 9, 18150. [Google Scholar] [CrossRef]
  35. Trambaiolli, L.R.; Lorena, A.C.; Fraga, F.J.; Kanda, P.A.M.; Anghinah, R.; Nitrini, R. Improving Alzheimer’s Disease Diagnosis with Machine Learning Techniques. Clin. EEG Neurosci. 2011, 42, 160–165. [Google Scholar] [CrossRef]
Figure 1. Male and female death rate (per 100,000) for countries with Alzheimer’s disease in 2023–2024 [3].
Figure 2. Proposed model using VGG16, VGG19, and Alex Net for Alzheimer’s disease classification.
Figure 3. Lower triangle correlation heatmap for Alzheimer’s disease classification.
Figure 4. Clustered correlation heatmap for Alzheimer’s disease classification.
Figure 5. Correlation heatmap for Alzheimer’s disease classification.
Figure 6. VGG16 and its architecture for Alzheimer’s disease classification.
Figure 7. VGG19 convolutional layers for Alzheimer’s disease classification.
Figure 8. Alex Net architecture for Alzheimer’s disease classification.
Figure 9. VGG16 confusion matrix for Alzheimer’s disease classification.
Figure 10. VGG19 confusion matrix for Alzheimer’s disease classification.
Figure 11. Alex Net confusion matrix for Alzheimer’s disease classification.
Figure 12. Accuracy and error rate comparison for neural network approaches for Alzheimer’s classification.
Figure 13. Accuracy and error rate for machine learning approaches for Alzheimer’s classification.
Figure 14. Epoch and accuracy for the VGG16 model for Alzheimer’s classification.
Figure 15. Accuracy of VGG19 for Alzheimer’s disease classification.
Figure 16. VGG19 loss model for Alzheimer’s disease classification.
Figure 17. Accuracy of Alex Net for Alzheimer’s disease classification.
Figure 18. Alex Net model loss for Alzheimer’s disease classification.
Figure 19. Precision parameter for Alzheimer’s disease classification.
Figure 20. Metric recall graph for Alzheimer’s disease classification.
Figure 21. F1 score graph of various algorithms for Alzheimer’s disease classification.
Figure 22. Sensitivity graph for Alzheimer’s disease classification on various algorithms.
Figure 23. Specificity metric graph for Alzheimer’s disease classification.
Figure 24. Kappa statistics graph for Alzheimer’s disease classification.
Figure 25. ROC graph for Alzheimer’s disease classification.
Figure 26. RMSE metric graph for Alzheimer’s disease classification.
Table 1. Existing work analysis on Alzheimer’s disease.
Author(s) and Year | Proposed Algorithm and Dataset | Demerits
C. Kavitha et al., 2022 [12]
  • Gradient boosting, decision tree, random forest, and SVM
  • OASIS dataset
  • The accuracy and reliability depended on the training data.
  • Use of sensitive medical data led to privacy concerns.
Batuhan K. Karaman et al., 2022 [11]
  • Neural network using deep learning
  • Time horizon was a major dependency.
A.P. Porsteinsson et al., 2021 [1]
  • No particular model, guidelines extracted from U.S. Food and Drug Administration (FDA)
  • National Institute on Aging–Alzheimer’s Association (NIA-AA)
  • Amyloid and tau-tangle image processing techniques were not readily available.
Janani Venugopalan et al., 2021 [10]
  • MRI and deep learning techniques
  • Alzheimer’s Disease Neuro-Imaging Dataset (ADNI)
  • Limited to the use of clinical settings.
  • ADNI may not represent the entire diversity of the population.
Morshedul Bari Antor et al., 2021 [24]
  • Longitudinal magnetic resonance imaging with 5-fold cross-validation
  • OASIS dataset
  • Data were pre-processed; unnecessary null values of SES and MMSE (19 and 2 missing values, respectively) were removed.
Shakila Basheer et al., 2021 [25]
  • Deep neural network
  • Oasis dataset
  • No explicit demerits.
Eman N. Marzban et al., 2020 [26]
  • Neural networks
  • ADNI dataset
  • Neural networks posed hurdles among the ML methods.
Jong Bin Bae et al., 2020 [13]
  • CNN-based AD using MRI
  • ADNI and SNBUH Hospital dataset
  • MRI images in the SNUBH dataset were acquired with scanners from only one manufacturer (Philips), versus the ADNI dataset scanners (Siemens, GE).
Andres Ortiz et al., 2018 [27]
  • Combinational SVC classifiers
  • ADNI dataset
  • Only sparse brain images were taken into importance.
César A. Ortiz Toro et al., 2019 [17]
  • ReliefF algorithm
  • ADNI dataset
  • Reduction arose due to corrupted or duplicate images in the dataset.
Table 2. Data description before feature engineering and normalization [1].
S No. | Year Start | Location | Topic | Question | Age | Gender | Low_C | High_C | Data_C | Class
1 | 2020 | New Hampshire | Mental Distress | Percentage of adults who felt mental distress | 50–64 | F | 12.8 | 18 | 15.2 | Mental Health
2 | 2021 | Indiana | Mental Distress | Percentage of adults who felt mental distress | 65 or older | F | 7 | 10.2 | 8.5 | Mental Health
3 | 2015 | Idaho | Mental Distress | Percentage of adults who felt mental distress | 50–64 | M | 12.7 | 19.9 | 16 | Mental Health
4 | 2020 | Wyoming | Depression | Lifetime diagnosis of depression | 50–64 | M | 7.2 | 13.5 | 9.9 | Mental Health
5 | 2021 | North East | Memory Loss | Memory loss or cognitive decline interfering with household chores | 50–64 | F | 32.6 | 45.8 | 39 | Cognitive Decline
6 | 2021 | Idaho | Mental Distress | Frequent mental distress of older adults | 50–64 | M | 7.4 | 12.5 | 9.7 | Mental Health
7 | 2018 | Midwest | Memory Loss | Memory loss or cognitive decline interfering with household chores | 65 or older | M | 16.5 | 40.9 | 27 | Cognitive Decline
8 | 2020 | Wisconsin | Depression | Lifetime diagnosis of depression | 65 or older | F | 12.7 | 19.2 | 15.6 | Mental Health
9 | 2016 | Utah | Memory Loss | Memory loss or cognitive decline interfering with household chores | 65 or older | F | 12.7 | 31.7 | 20.6 | Cognitive Decline
10 | 2021 | Maine | Memory Loss | Frequent mental distress of older adults | 65 or older | M | 9 | 17.5 | 12.6 | Cognitive Decline
11 | 2017 | Hawaii | Memory Loss | Frequent mental distress of older adults | 50–64 | F | 44.3 | 71.5 | 58.5 | Cognitive Decline
12 | 2019 | Michigan | Memory Loss | Frequent mental distress of older adults | 50–64 | F | 28.1 | 57.4 | 42 | Cognitive Decline
13 | 2018 | Midwest | Memory Loss | Frequent mental distress of older adults | 65 or older | M | 16.5 | 40.9 | 27 | Cognitive Decline
14 | 2021 | Indiana | Depression | Lifetime diagnosis of depression | 65 or older | F | 7 | 10.2 | 8.5 | Mental Health
15 | 2019 | Connecticut | Depression | Lifetime diagnosis of depression | 50–64 | M | 13.6 | 18.4 | 15.9 | Mental Health
Table 3. Data description after feature engineering and normalization.
S No. | Topic | Question | Low_C_Limit | High_C_Limit | Data_Value | Age | Gender | Class
1 | 0 | 9 | 28.8 | 34.4 | 31.6 | 0 | 0 | 4
2 | 0 | 9 | 49.1 | 51.6 | 50.3 | 1 | 0 | 4
3 | 9 | 13 | 13.8 | 14.8 | 14.3 | 0 | 0 | 3
4 | 0 | 9 | 54.5 | 56.4 | 55.5 | 1 | 0 | 4
5 | 14 | 14 | 12.8 | 18 | 15.2 | 0 | 0 | 2
6 | 0 | 9 | 48.4 | 70.2 | 59.8 | 1 | 0 | 4
7 | 30 | 2 | 5.8 | 6.5 | 6.2 | 1 | 1 | 4
8 | 17 | 28 | 60.2 | 61.8 | 61 | 0 | 0 | 5
9 | 26 | 37 | 2.6 | 4.6 | 3.6 | 0 | 1 | 4
10 | 27 | 10 | 66.4 | 71.6 | 69.1 | 0 | 0 | 4
11 | 30 | 2 | 5.6 | 6.8 | 6.2 | 0 | 0 | 4
12 | 0 | 9 | 32.3 | 39 | 35.6 | 0 | 0 | 4
13 | 0 | 9 | 28.7 | 39.4 | 33.9 | 0 | 0 | 4
14 | 19 | 34 | 18.4 | 21.5 | 19.9 | 0 | 0 | 2
15 | 0 | 9 | 33.8 | 38.4 | 36.1 | 0 | 0 | 4
Table 4. VGG16 convolutional and output dimensional architecture.
S No. | Layers | Convolution | Pooling | Output Dimension
1 | 1 & 2 | Convolutional layer of 64 channels of 3 × 3 kernel with padding 1 and stride 1 | 224 × 224 × 64 | 112 × 112 × 64
2 | 3 & 4 | Convolutional layer of 128 channels of 3 × 3 kernel | 112 × 112 × 128 | 56 × 56 × 128
3 | 5, 6, 7 | Convolutional layer of 256 channels of 3 × 3 kernel | 56 × 56 × 256 | 28 × 28 × 256
4 | 8, 9, 10 | Convolutional layer of 512 channels of 3 × 3 kernel | 28 × 28 × 512 | 14 × 14 × 512
5 | 11, 12, 13 | Convolutional layer of 512 channels of 3 × 3 kernel | 14 × 14 × 512 | 7 × 7 × 512
Table 5. Accuracy and error rate comparison with Neural Network approaches for Alzheimer’s classification.
S No. | Algorithm | Accuracy (%) | Error Rate (%)
1 | VGG16 | 100 | 0
2 | VGG19 | 98.20 | 1.80
3 | Alex Net | 100 | 0
4 | Machine and Deep learning models (M-CapNet) [25] | 92.39 | 7.61
5 | Deep Learning (ADDTLA) [24] | 91.70 | 8.30
6 | CNN-based AD using 2D input data image [31] | 89 | 11
7 | Deep Neural Network Binary Classifier [18] | 85.19 | 14.81
8 | CNN (Scratch) [25] | 60.64 | 39.60
9 | Soft Voting Classifier [12] | 86 | 14
10 | Transfer Learning Approaches [13] | 73.20 | 26.70
11 | Multilayer Perceptron [32] | 74.23 | 25.77
Table 6. Accuracy and error rate of machine learning approaches for Alzheimer’s classification.
S No. | Algorithm | Accuracy (%) | Error Rate (%)
1 | VGG16 | 99.9 | 0.01
2 | VGG19 | 98.8 | 1.20
3 | Alex Net | 99.9 | 0.01
4 | SVM and Bidirectional LSTM [14] | 91.2 | 8.70
5 | K-Means and CNN [32] | 88.6 | 11.40
6 | SVM with Linear kernel [32] | 88 | 12
7 | Random Forest Classifier [33] | 81.3 | 18.70
8 | Auto-encoder based unsupervised learning [34] | 86.6 | 13.40
9 | Support Vector Machine [17] | 81.6 | 18.30
10 | XGBoost [12] | 85.9 | 14.08
11 | Voting Classifier [12] | 84 | 16
12 | Laser-Induced Breakdown Spectroscopy [24] | 80 | 20
13 | SVM technique and PCA [32] | 78.3 | 21.6
14 | Logistic Regression [33] | 74.7 | 25.3
15 | Decision Tree [33] | 80 | 26.7
16 | KNN [2] | 66.9 | 20
17 | CAE (Transfer Learning) [34] | 73.2 | 33.1
18 | ICAE (Transfer Learning) [34] | 73.9 | 26.70
19 | Naïve Bayes with Feature Selection [34] | 85 | 26.0
20 | Ensemble with feature selection (AD-CT) [11] | 80 | 15
21 | Ensemble with feature selection (MCI-CT) [11] | 70 | 16.70
22 | AdaBoost Classifier [2] | 80.3 | 30
Table 7. Performance measures of VGG16 model for Alzheimer’s classification.
Epoch | Precision | Recall | F1 Score | Support
0 | 0.999876 | 0.999876 | 0.999876 | 8061
1 | 1.000000 | 1.000000 | 1.000000 | 6935
2 | 1.000000 | 1.000000 | 1.000000 | 6391
3 | 1.000000 | 1.000000 | 1.000000 | 10,024
4 | 0.999964 | 0.999928 | 0.999946 | 27,750
5 | 1.000000 | 1.000000 | 1.000000 | 17,258
6 | 0.999844 | 1.000000 | 0.999922 | 6391
Table 8. Performance measures of the VGG19 model for Alzheimer’s classification.
Epoch | Precision | Recall | F1 Score | Support
0 | 0.966299 | 0.999504 | 0.982621 | 8061
1 | 0.915441 | 0.969430 | 0.941663 | 6935
2 | 0.971416 | 0.999687 | 0.985349 | 6391
3 | 0.999003 | 0.999800 | 0.99402 | 10,024
4 | 0.999435 | 0.956685 | 0.977593 | 27,750
5 | 0.986581 | 0.996871 | 0.991699 | 17,258
6 | 0.980362 | 0.999844 | 0.990007 | 6391
Table 9. Performance measures of the Alex Net model for Alzheimer’s classification.
Epoch | Precision | Recall | F1 Score | Support
0 | 1.000000 | 0.999752 | 0.999876 | 8061
1 | 1.000000 | 1.000000 | 1.000000 | 6935
2 | 1.000000 | 1.000000 | 1.000000 | 6391
3 | 1.000000 | 1.000000 | 1.000000 | 10,024
4 | 0.999928 | 0.999964 | 0.999946 | 27,750
5 | 1.000000 | 1.000000 | 1.000000 | 17,258
6 | 0.999844 | 1.000000 | 0.999922 | 6391
Table 10. Parameter precision of algorithms for Alzheimer’s classification.
S No. | Algorithm | Precision (%)
1 | VGG16 | 99.9
2 | VGG19 | 96.6
3 | Alex Net | 100
4 | Decision Tree Classifier [12] | 80
5 | Random Forest Classifier [12] | 85
6 | Support Vector Machine [12] | 77
7 | XGBoost [12] | 85
8 | Voting Classifier [12] | 83
9 | Logistic Regression [33] | 74.7
Table 11. Recall values of various algorithms for Alzheimer’s disease classification.
S No. | Algorithm | Recall (%)
1 | VGG16 | 99.9
2 | VGG19 | 99.9
3 | Alex Net | 99.9
4 | Decision Tree Classifier [12] | 79
5 | XGBoost [12] | 80
6 | Voting Classifier [12] | 83
7 | Logistic Regression [33] | 70
8 | Random Forest [33] | 70
9 | Ensemble with feature selection (AD-CT) [11] | 80
10 | Ensemble with feature selection (AD-MCI) [11] | 80
11 | Ensemble with feature selection (MCI-CT) [11] | 50
Table 12. F1 score of various algorithms for Alzheimer’s classification.
S No. | Algorithm | F1 Score (%)
1 | VGG16 | 99.9
2 | VGG19 | 98.20
3 | Alex Net | 99.9
4 | GaussianNB [12] | 94
5 | Decision Tree Classifier [12] | 78
7 | Random Forest Classifier [12] | 80
8 | Support Vector Machine [12] | 79
9 | XGBoost [12] | 85
10 | Voting Classifier [12] | 85
11 | Ensemble with feature selection (AD-CT) [11] | 79.8
12 | Ensemble with feature selection (AD-MCI) [11] | 82.5
13 | Ensemble with feature selection (MCI-CT) [11] | 66.7
Table 13. Sensitivity scores of various algorithms for Alzheimer’s classification.

S No. | Algorithm                                                   | Sensitivity (%)
1     | VGG16                                                       | 96.8
2     | VGG19                                                       | 97.9
3     | Alex Net                                                    | 98.9
4     | Laser-Induced Breakdown Spectroscopy and Machine Learning [24] | 85
5     | CNN Scratch [34]                                            | 61.05
6     | CAE [34]                                                    | 61.43
7     | ICAE [34]                                                   | 62.56
8     | CAE (Transfer Learning) [34]                                | 74.96
9     | ICAE (Transfer Learning) [34]                               | 77.46
10    | SVM with RBF [14]                                           | 82.65
11    | Multilayer Perceptron [22]                                  | 72.14
Table 14. Specificity values of ML algorithms for Alzheimer’s disease classification.

S No. | Algorithm                                                   | Specificity (%)
1     | VGG16                                                       | 96.5
2     | VGG19                                                       | 97.7
3     | Alex Net                                                    | 98.8
4     | qEEG Processing Technique [35]                              | 91.70
5     | Laser-Induced Breakdown Spectroscopy and Machine Learning [24] | 75
6     | CAE [34]                                                    | 60.04
7     | ICAE [34]                                                   | 60.41
8     | CAE (Transfer Learning) [34]                                | 71.53
9     | ICAE (Transfer Learning) [34]                               | 70.71
10    | SVM with RBF [14]                                           | 87.17
11    | Ensemble with feature selection (AD-CT) [11]                | 67
12    | Ensemble with feature selection (AD-MCI) [11]               | 67
13    | Ensemble with feature selection (MCI-CT) [11]               | 43
14    | Multilayer Perceptron [22]                                  | 79.40
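Sensitivity and specificity, as reported in Tables 13 and 14, are computed from the entries of a binary confusion matrix: sensitivity is the true-positive rate, specificity the true-negative rate. A minimal sketch (the function name and the example counts are illustrative, not taken from the paper):

```python
def sensitivity_specificity(tp: int, fn: int, tn: int, fp: int) -> tuple:
    """Sensitivity = TP/(TP+FN); Specificity = TN/(TN+FP)."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return sensitivity, specificity

# Hypothetical counts: 90 true positives, 10 false negatives,
# 80 true negatives, 20 false positives.
sens, spec = sensitivity_specificity(tp=90, fn=10, tn=80, fp=20)
print(sens, spec)  # 0.9 0.8
```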
Table 15. Kappa statistics comparison for Alzheimer’s classification [28,33].

S No. | Algorithm           | Kappa Statistics
1     | VGG16               | 0.96
2     | VGG19               | 0.95
3     | Alex Net            | 0.96
4     | Naïve Bayes         | 0.54
5     | Logistic Regression | 0.78
6     | SVM                 | 0.54
7     | Random Forest       | 0.84
8     | C4.5                | 0.79
9     | CHAID               | 0.80
10    | ID3                 | 0.80
11    | C4.5 + PSO          | 0.82
12    | Random Forest + PSO | 0.88
13    | ID3 + PSO           | 0.808
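Cohen’s kappa, compared in Table 15, measures agreement between predicted and true labels beyond what chance alone would produce: kappa = (p_o − p_e)/(1 − p_e), where p_o is observed accuracy and p_e the accuracy expected from the marginal label frequencies. A minimal sketch on a small hypothetical confusion matrix (not data from the paper):

```python
def cohens_kappa(cm: list) -> float:
    """Cohen's kappa from a square confusion matrix (rows = true, cols = predicted)."""
    n = sum(sum(row) for row in cm)
    k = len(cm)
    p_o = sum(cm[i][i] for i in range(k)) / n          # observed agreement
    row_totals = [sum(row) for row in cm]
    col_totals = [sum(cm[i][j] for i in range(k)) for j in range(k)]
    p_e = sum(r * c for r, c in zip(row_totals, col_totals)) / (n * n)  # chance agreement
    return (p_o - p_e) / (1 - p_e)

# Hypothetical 2x2 confusion matrix: p_o = 0.7, p_e = 0.5, kappa = 0.4.
print(cohens_kappa([[20, 5], [10, 15]]))  # 0.4
```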
Table 16. ROC comparison of various algorithms for Alzheimer’s disease classification.

S No. | Algorithm                 | ROC
1     | VGG16                     | 1.00
2     | VGG19                     | 1.00
3     | Alex Net                  | 1.00
4     | Naïve Bayes [12]          | 0.98
5     | Logistic Regression [12]  | 0.99
6     | SVM [12]                  | 0.99
7     | Random Forest [12]        | 1.00
8     | Multilayer Perceptron [17]| 0.74
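The ROC AUC values in Table 16 can be interpreted as the probability that a randomly chosen positive case receives a higher score than a randomly chosen negative case; an AUC of 1.00 means the two classes are perfectly separated by the classifier’s scores. A minimal pairwise-ranking sketch of this interpretation (the function and example scores are illustrative, not the paper’s evaluation code):

```python
def roc_auc(pos_scores: list, neg_scores: list) -> float:
    """AUC as the fraction of positive/negative pairs ranked correctly (ties count 0.5)."""
    pairs = [(p, n) for p in pos_scores for n in neg_scores]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0 for p, n in pairs)
    return wins / len(pairs)

# Perfect separation -> AUC 1.0, as reported for the transfer learning models.
print(roc_auc([0.9, 0.8, 0.7], [0.3, 0.2]))  # 1.0
```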
Table 17. RMSE error rate for Alzheimer’s classification [1].

S No. | Algorithm           | RMSE (%)
1     | VGG16               | 0
2     | VGG19               | 1.30
3     | Alex Net            | 0
4     | Naïve Bayes         | 0.33
5     | Logistic Regression | 0.27
6     | SVM                 | 0.33
7     | Random Forest       | 0.25
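The RMSE figures in Table 17 follow the standard definition: the square root of the mean squared difference between predicted and true labels, so an RMSE of 0 corresponds to error-free prediction on the evaluated set. A minimal sketch (the function name and sample vectors are illustrative, not the paper’s data):

```python
import math

def rmse(y_true: list, y_pred: list) -> float:
    """Root mean squared error between two equal-length sequences."""
    squared_errors = [(t - p) ** 2 for t, p in zip(y_true, y_pred)]
    return math.sqrt(sum(squared_errors) / len(y_true))

# One wrong label out of four: MSE = 1/4, RMSE = 0.5.
print(rmse([1, 0, 1, 0], [1, 1, 1, 0]))  # 0.5
```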
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.


Reddy C, K.K.; Rangarajan, A.; Rangarajan, D.; Shuaib, M.; Jeribi, F.; Alam, S. A Transfer Learning Approach: Early Prediction of Alzheimer’s Disease on US Healthy Aging Dataset. Mathematics 2024, 12, 2204. https://doi.org/10.3390/math12142204

