Article

Identifying Progression-Specific Alzheimer’s Subtypes Using Multimodal Transformer

by Diego Machado Reyes 1, Hanqing Chao 1, Juergen Hahn 1, Li Shen 2, Pingkun Yan 1,* and for the Alzheimer’s Disease Neuroimaging Initiative

1 Department of Biomedical Engineering, Center for Biotechnology and Interdisciplinary Studies, Rensselaer Polytechnic Institute, Troy, NY 12180, USA
2 Department of Biostatistics, Epidemiology and Informatics, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA 19104, USA
* Author to whom correspondence should be addressed.
Data used in preparation of this article were obtained from the Alzheimer’s Disease Neuroimaging Initiative (ADNI) database (adni.loni.usc.edu). As such, the investigators within the ADNI contributed to the design and implementation of ADNI and/or provided data but did not participate in analysis or writing of this report. A complete listing of ADNI investigators can be found at: http://adni.loni.usc.edu/wp-content/uploads/how_to_apply/ADNI_Acknowledgement_List.pdf (accessed on 29 February 2024).
J. Pers. Med. 2024, 14(4), 421; https://doi.org/10.3390/jpm14040421
Submission received: 15 March 2024 / Revised: 1 April 2024 / Accepted: 8 April 2024 / Published: 15 April 2024

Abstract

Alzheimer’s disease (AD) is the most prevalent neurodegenerative disease, yet current treatments are limited to slowing disease progression. Moreover, the effectiveness of these treatments remains uncertain due to the heterogeneity of the disease. Therefore, it is essential to identify disease subtypes at a very early stage. Current data-driven approaches can classify subtypes during the later stages of AD or related disorders, but making predictions in the asymptomatic or prodromal stage remains challenging. Furthermore, most existing models lack explainability and rely solely on a single modality for assessment, limiting the scope of their analysis. Thus, we propose a multimodal framework that uses early-stage indicators, including imaging, genetics, and clinical assessments, to classify AD patients into progression-specific subtypes at an early stage. In our framework, we introduce a tri-modal co-attention mechanism (Tri-COAT) to explicitly capture cross-modal feature associations. Data from the Alzheimer’s Disease Neuroimaging Initiative (ADNI) (slow progressing = 177, intermediate = 302, and fast = 15) were used to train and evaluate Tri-COAT using a 10-fold stratified cross-testing approach. Our proposed model outperforms baseline models and sheds light on essential associations across multimodal features supported by known biological mechanisms. The multimodal design behind Tri-COAT allows it to achieve the highest classification area under the receiver operating characteristic curve while simultaneously providing interpretability of the model predictions through the co-attention mechanism.

1. Introduction

Alzheimer’s disease (AD) is the most prevalent neurodegenerative disorder, affecting over 6.5 million people in the US alone, and this number is expected to keep increasing [1]. Current therapies for AD focus mainly on the management of symptoms, alongside promising drugs that can slow the progression of the disease [2,3,4]. Therefore, early diagnosis of neurodegenerative diseases is crucial. However, early diagnosis of AD presents a significant challenge, since memory loss only develops in the mild cognitive impairment (MCI) stage of the AD continuum, and cognitive decline can vary among patients due to the disease’s heterogeneity [5]. Therefore, it is crucial to develop methods capable of characterizing the factors that influence disease progression and identifying individual patients with progression-specific subtypes at an early stage.
AD is traditionally diagnosed based on characteristic cognitive decline and behavioral deficits that do not become apparent until intermediate to late stages of the disease. More recently, early-stage indicators such as imaging-based and fluid biomarkers have shown great potential for early detection of AD [6]. Fluid biomarkers found in blood and CSF have now become standard methods of diagnosing early AD patients [7,8,9], even showing great potential for subtyping AD [10]. Similarly, recent imaging-based approaches such as brain connectivity analysis in the form of connectomes have shown promising results for early diagnosis [11]. AD subtypes have been previously identified based on hallmark AD biomarkers obtained from brain imaging such as beta-amyloid [12] and tau accumulation [13,14].
Data-driven approaches have focused on classifying patients into subtypes based on disease progression from MCI (a prodromal stage of AD) to AD conversion [15,16,17,18,19]. Current methods for subtyping AD and related disorders have focused mainly on using longitudinal data from clinical assessments for unsupervised learning [20,21,22]. In clustering approaches such as [15,17,20,22], the authors subtyped AD patients using single modalities at baseline, such as blood markers [17], genomic data [20], or imaging-derived traits from longitudinal measurements [22]. These single-modality approaches using baseline data have shown the potential of early-stage indicators for AD subtyping. However, most of them fail to show how the identified subtypes relate biologically to AD development, or they require multiple time points, which hinders the ability to diagnose patients at early stages after AD onset.
Deep learning models have effectively identified diagnostic groups [23] and subtypes [18] using multimodal imaging data and correlation-based approaches that allow greater explainability of the relationships between the features learned by the model. Nevertheless, these are limited by two key factors, namely, the use of only imaging data and the fact that correlation-based approaches treat the clustering goal indirectly. Works on related disorders have shown the relevance of multimodal approaches [24] using non-negative matrix factorization and Gaussian mixture models or employing autoencoders and long short-term memory (LSTM) networks to learn deep embeddings of disease progression [25]. However, this requires longitudinal data spanning several time points. While several clustering and deep learning-based approaches achieve high accuracy when multiple time points are available, their performance decreases significantly when only baseline data are given. This is driven by the subtle expression, at baseline, of the symptoms tracked in clinical assessments, which limits the scope of these models. Early-stage indicators such as imaging traits or genomic risk factors are rarely used and are simply aggregated with clinical assessments as additional indicators. Therefore, it is essential to target early-stage indicators such as genetics, imaging, and cognitive assessments.
Multimodal deep learning approaches can combine different modalities to provide a much more informed picture of disease drivers and aid in disease subtyping [26]. However, it is not a trivial task to identify the relevant features across modalities and how to fuse them. The rest of this section first reviews the related works on multimodal fusion and then provides an overview of our proposed method.

1.1. Multimodal Fusion

Multimodal fusion, while very promising, poses new challenges. There are multiple ways to fuse the data and multiple stages of encoding at which fusion can occur. The effectiveness of a strategy varies depending on the data modalities and tasks. One of the key factors is the similarity between the modalities. Highly heterogeneous data, such as imaging, genetics, and clinical data, cannot always be fused directly. Their differences in type, signal-to-noise ratio, and dimensionality make it very challenging to combine them without first projecting them into a similar space. The relationship between input and output is equally important to consider when fusing data. For example, clinical assessments reflect the direct impacts of the disease, while genetic data describe the building blocks of cells. The phenotype-related information available in clinical assessments requires considerably less processing than what might be required for genomic data to find the connection with a disease.
As illustrated in Figure 1, existing multimodal fusion strategies can be grouped into three main categories [27], namely, early, intermediate (also called joint), and late fusion. It is essential to choose the right approach based on the task at hand and the data used. Early fusion has shown very promising results in recent vision–language models [28,29], while late-fusion strategies have traditionally been very effective in aggregating machine learning model decisions. Early- and late-fusion strategies, while effective for certain tasks, are not ideal for AD subtyping using multimodal data. Early fusion struggles with highly heterogeneous data whose biological relationships to the outcome differ, such as genetics, imaging, and clinical data. These require further encoding to first learn highly informative representations for every modality and condense them into a similar latent space. While late-fusion strategies can be very effective at aggregating the decisions based on each modality, they cannot learn the feature relationships across modalities, severely limiting the usefulness of the model. The intermediate fusion approach tackles both challenges by first learning the crucial patterns associated with each modality. In the next stage, it uses the condensed patterns from each modality to learn the cross-modal relationships. This enables a more harmonious fusion of the heterogeneous modalities.
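As a rough illustration, the following minimal PyTorch sketch contrasts the three strategies; the layer sizes and data shapes are hypothetical and do not correspond to the architectures used later in this paper.

```python
import torch
import torch.nn as nn

# Toy batch of 8 subjects: flattened imaging traits, SNP attributes, clinical scores.
x_img, x_gen, x_cli = torch.randn(8, 288), torch.randn(8, 70), torch.randn(8, 7)

# Early fusion: concatenate raw features first, then encode jointly.
early = nn.Sequential(nn.Linear(288 + 70 + 7, 128), nn.ReLU(), nn.Linear(128, 3))
y_early = early(torch.cat([x_img, x_gen, x_cli], dim=1))

# Intermediate (joint) fusion: encode each modality separately, then fuse latent codes.
enc_img, enc_gen, enc_cli = nn.Linear(288, 64), nn.Linear(70, 64), nn.Linear(7, 64)
head = nn.Sequential(nn.ReLU(), nn.Linear(3 * 64, 3))
y_mid = head(torch.cat([enc_img(x_img), enc_gen(x_gen), enc_cli(x_cli)], dim=1))

# Late fusion: independent per-modality predictions, aggregated at the decision level.
clf_img, clf_gen, clf_cli = nn.Linear(288, 3), nn.Linear(70, 3), nn.Linear(7, 3)
y_late = (clf_img(x_img) + clf_gen(x_gen) + clf_cli(x_cli)) / 3
```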

1.2. Overview and Contributions

Despite the potential advantages of multimodal approaches, several technical challenges must be addressed to effectively leverage the key data of each modality, in particular the high heterogeneity of the data modalities used for AD subtyping and the explicit learning of cross-modal interactions. Previous approaches in related fields have proposed dual co-attention mechanisms to explicitly learn cross-modal feature interactions and joint data representations [30]. While they have shown very promising results, these have yet to be explored for the progression-specific subtyping of neurodegenerative diseases and the corresponding modalities. Moreover, in AD and related disorder subtyping, three critical but highly heterogeneous modalities need to be fused, i.e., imaging, genetics, and clinical data. Therefore, in this paper, a tri-modal co-attention (Tri-COAT) framework is proposed that can explicitly learn the interactions between multimodal features for the task of classifying subtypes. While deep learning models for disease assessment promise improved accuracy, they remain limited in the interpretability of their results, which is a major entry barrier for the adoption of deep learning models in the medical field.
Our contributions in this paper are twofold, spanning both application and technique. First, our framework incorporates features from three early-stage biomarker modalities and provides a cutting-edge approach to the progression-specific subtyping of early neurodegenerative disease. Second, regarding the technical innovation, our new tri-modal co-attention framework can efficiently and explicitly learn the interactions between highly heterogeneous modalities, encode the information into a joint representation, and provide explainability of the cross-modal interactions. The proposed Tri-COAT achieved state-of-the-art performance on the landmark Alzheimer’s Disease Neuroimaging Initiative (ADNI) dataset [31] and provided key insights into the biological pathways leading to neurodegenerative disease development.
The rest of the paper is organized as follows. Section 2 presents the methods with details of each component of Tri-COAT. Next, in Section 3, the dataset and experimental design are described. In Section 4, the results are presented and discussed. Finally, in Section 5, conclusions are drawn, and future directions are presented.

2. Method

Our proposed framework can be divided into two main parts. As seen in Figure 2, single-modality encoders are first built using transformer modules to learn feature representations for each modality. Second, the Tri-COAT mechanism explicitly learns the critical cross-modal feature relationships and uses them to weight the feature representations. The jointly learned representation is processed through a multilayer perceptron (MLP) for disease subtype classification.

2.1. Single-Modality Encoding

Three branches encode each modality individually. Each branch comprises a transformer encoder with several transformer layers. This design is inspired by previous works in which transformer models were proposed for imaging-derived connectomes [32] and genotype data [33]. Each branch learns representations of one modality, which are later combined into a joint representation through the Tri-COAT mechanism.
Imaging modality. The imaging feature encoder branch uses MRI-derived quantitative traits as input. These quantitative traits are derived from T1-weighted MRI scans. The scans are first segmented based on the FreeSurfer atlas for cross-sectional processing. Next, for each reconstructed cortical region of interest (ROI), the cortical volume, thickness average, thickness standard deviation, and surface area are calculated. Further details are described in Section 3.1. The imaging traits are then used to build tokens, where each token represents an ROI in the brain and is comprised of the four imaging-derived traits (cortical thickness average, cortical thickness standard deviation, surface area, and volume from cortical parcellation). Let $X_I \in \mathbb{R}^{M \times 4}$ represent the imaging input to the proposed model, where $M = 72$ is the number of ROIs. The token dimensions are then expanded through a fully connected layer to match the model dimension k. The imaging tokenization allows building an initial representation for each ROI rather than for each trait, leading to a smaller number of input features and a more biologically informative and interpretable input.
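A minimal PyTorch sketch of this imaging tokenization (the batch size is hypothetical; M = 72 ROIs with 4 traits each, projected to an assumed model dimension k = 256):

```python
import torch
import torch.nn as nn

M, k = 72, 256                      # number of ROIs and model dimension
x_img = torch.randn(16, M, 4)       # batch of 16 subjects, 4 imaging traits per ROI

token_proj = nn.Linear(4, k)        # expand each ROI token from 4 traits to k dims
img_tokens = token_proj(x_img)      # shape: (16, 72, 256), one token per ROI
```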
Genetic modality. The genotype branch takes single nucleotide polymorphism (SNP) data as input. After quality control and preprocessing of the genotype data as described in Section 3.1, tokens are built for each SNP. Each token is composed of the patient’s allele dosage, the corresponding odds ratio and rare allele frequency obtained from the most recent AD GWAS study, and a binary label indicating whether the SNP is within an intergenic (regulatory) region. The token dimensions are then expanded through a fully connected layer to $k/2$. Let $X_{\mathrm{SNP}} \in \mathbb{R}^{N \times k/2}$ represent the genotype input to the model, where $N = 70$ is the number of SNPs retained after filtering (see Section 3.1 for details). Moreover, based on the chromosome on which each SNP is located, an additional embedding for each SNP can be built. By including the chromosome embedding, location knowledge for each SNP can be incorporated. Using an embedding layer, an embedding for each chromosome is obtained, $X_{\mathrm{Chr}} \in \mathbb{R}^{N \times k/2}$. Finally, $X_{\mathrm{SNP}}$ and $X_{\mathrm{Chr}}$ are concatenated to obtain the final genotype embedding $X_G \in \mathbb{R}^{N \times k}$. Similarly to the imaging data encoding, the genetic tokenization allows building more informative input structures for the genetic encoder. By providing additional attributes for each SNP beyond the patient’s mutation status, the model can learn richer patterns from the characteristics that relate the SNPs.
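A minimal sketch of the genotype tokenization (the batch size and the 22-autosome vocabulary for the chromosome embedding are assumptions):

```python
import torch
import torch.nn as nn

N, k = 70, 256                              # number of SNPs and model dimension
# Each SNP token: allele dosage, GWAS odds ratio, rare allele frequency,
# and a binary intergenic-region flag (4 attributes per SNP).
x_snp = torch.randn(16, N, 4)
chrom = torch.randint(0, 22, (16, N))       # chromosome index of each SNP (0-based)

snp_proj = nn.Linear(4, k // 2)             # SNP attributes -> k/2 dims
chrom_emb = nn.Embedding(22, k // 2)        # chromosome (location) embedding -> k/2 dims

x_g = torch.cat([snp_proj(x_snp), chrom_emb(chrom)], dim=-1)   # (16, 70, 256)
```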
Clinical modality. The clinical data are already very closely related to the outcome of interest and contain only a few features; therefore, no further extensive tokenization is performed. As there is just one value per clinical assessment, the tokens are directly built with one dimension. Let $X_C \in \mathbb{R}^{B \times 1}$ represent the clinical input to our model, where $B = 7$ is the number of clinical features. Next, the token dimensions are expanded through a fully connected layer to match the model dimension k.
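The corresponding sketch for the clinical tokens (again with a hypothetical batch size):

```python
import torch
import torch.nn as nn

B, k = 7, 256                        # number of clinical features and model dimension
x_cli = torch.randn(16, B, 1)        # one value per assessment -> one-dimensional tokens
cli_proj = nn.Linear(1, k)
cli_tokens = cli_proj(x_cli)         # shape: (16, 7, 256)
```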
Single-modality encoders. After the tokenization of each modality, the tokens are fed into independent transformer encoders with L layers. The full process of the l-th layer in our transformer encoder is formulated as follows:
$F'_l = \mathrm{MHA}(\mathrm{LN}(F_{l-1})) + F_{l-1},$
and
$F_l = \mathrm{FF}(\mathrm{LN}(F'_l)) + F'_l,$
where $\mathrm{LN}(\cdot)$ is the normalization layer [34], $\mathrm{MHA}(\cdot)$ is multihead self-attention [35], and $\mathrm{FF}(\cdot)$ is the feed-forward layer.
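A minimal PyTorch sketch of one such pre-norm encoder layer, following the two equations above (the head count and dimensions are illustrative, chosen to match the settings reported in Section 3.2):

```python
import torch
import torch.nn as nn

class PreNormEncoderLayer(nn.Module):
    def __init__(self, k=256, n_heads=4):
        super().__init__()
        self.ln1, self.ln2 = nn.LayerNorm(k), nn.LayerNorm(k)
        self.mha = nn.MultiheadAttention(k, n_heads, batch_first=True)
        self.ff = nn.Sequential(nn.Linear(k, 4 * k), nn.GELU(), nn.Linear(4 * k, k))

    def forward(self, f):
        h = self.ln1(f)
        f = self.mha(h, h, h, need_weights=False)[0] + f   # F'_l = MHA(LN(F_{l-1})) + F_{l-1}
        f = self.ff(self.ln2(f)) + f                       # F_l  = FF(LN(F'_l)) + F'_l
        return f

tokens = torch.randn(16, 72, 256)          # e.g., the imaging tokens from above
encoded = PreNormEncoderLayer()(tokens)    # one of L stacked layers
```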

2.2. Tri-Modal Co-Attention

After each transformer encoder has learned a new representation for each modality, these are then used to learn the cross-modal feature relationships to guide the co-attention process on the clinical branch. In other words, the imaging and genomic features are employed to modulate the clinical learning process by highlighting the key hidden features that share relationships across modalities. The intuition behind the proposed approach is that as the clinical data are most closely related to the disease phenotype, this branch will carry most of the necessary information to classify the patients. Nevertheless, the imaging and genomic data also provide valuable information. The idea is analogous to the clinical data being the subject and verb in a sentence while the imaging and genomic data are the adjectives and adverbs. These two elements enrich the representation of the health status of a patient, analogous to enriching a sentence for a fuller meaning.
Let $X_{\mathrm{Emb}} \in \mathbb{R}^{\{M,N,B\} \times k}$ represent the learned representation from a given single-modality encoder. These become query matrices for the genetics ($Q_G$) and imaging ($Q_I$) data, and key ($K_C$) and value ($V_C$) matrices for the clinical data. Following an attention mechanism structure, the co-attention between two modalities is computed as follows:
$\mathrm{CoAttn}(\{G,I\},C) = \mathrm{softmax}\!\left(\dfrac{Q_{\{G,I\}}\,K_C^{\top}}{\sqrt{d_k}}\right) V_C.$
Next, the resulting co-attention filtered value matrices are concatenated to obtain a final joint representation. This joint representation is then flattened and used to classify the patients into the clusters through an MLP.
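A minimal sketch of this fusion step under the definitions above (hypothetical variable names; the projections and classifier head are illustrative stand-ins rather than the exact Tri-COAT layers):

```python
import torch
import torch.nn as nn

def co_attention(q_mod, k_cli, v_cli):
    """softmax(Q K^T / sqrt(d_k)) V, with the clinical branch providing key/value."""
    d_k = k_cli.size(-1)
    scores = q_mod @ k_cli.transpose(-2, -1) / d_k ** 0.5   # (batch, M or N, B)
    return scores.softmax(dim=-1) @ v_cli                   # (batch, M or N, k)

# Assumed learned representations from the three encoders (batch of 16, k = 256).
x_img, x_gen, x_cli = torch.randn(16, 72, 256), torch.randn(16, 70, 256), torch.randn(16, 7, 256)

q_proj_i, q_proj_g = nn.Linear(256, 256), nn.Linear(256, 256)   # query projections
k_proj_c, v_proj_c = nn.Linear(256, 256), nn.Linear(256, 256)   # clinical key/value projections

joint = torch.cat([co_attention(q_proj_i(x_img), k_proj_c(x_cli), v_proj_c(x_cli)),
                   co_attention(q_proj_g(x_gen), k_proj_c(x_cli), v_proj_c(x_cli))],
                  dim=1)                                         # (16, 142, 256)

classifier = nn.Sequential(nn.Flatten(), nn.Linear(142 * 256, 256),
                           nn.ReLU(), nn.Linear(256, 3))
logits = classifier(joint)                                       # subtype logits
```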

3. Materials and Experiments

3.1. Dataset

The Alzheimer’s Disease Neuroimaging Initiative (ADNI) [31] database is a landmark dataset for the advancement of our understanding of Alzheimer’s disease. ADNI [31] is composed of a wide range of data modalities including MRI and PET images, genetics, cognitive tests, and CSF and blood biomarkers. Longitudinal data for all subjects were selected for up to two years of progression after disease onset due to the very high missingness rate (percentage of data points missing across patients for a given time point) present for time points beyond two years. Subjects were clustered using k-means into three main groups based on their Mini-Mental State Examination (MMSE) scores, as shown in Figure 3. These clusters match the cognitive decline rate of patients over time. The MMSE score at each visit (baseline, 6 months, 12 months, and 24 months) was used to determine the cognitive decline for each patient. As each patient may have a different starting level at baseline, the baseline measurement is subtracted from each of the following time points; thus, all patients start at 0. Then, using k-means clustering with k = 3, slow, intermediate, and fast cognitive decline groups are defined. As seen in Table 1, the raw average MMSE score at baseline is comparable across all groups, with a steep decrease for the fast and intermediate groups at 24 months. The MMSE score of the slow (otherwise stable) group at 24 months is comparable to that at baseline. In addition, all three groups are age-matched. Similarly, the sex distribution across groups is maintained, with male subjects representing approximately 60% of the subjects in each group.
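A minimal sketch of this label-definition step (scikit-learn; the synthetic MMSE values stand in for the actual ADNI scores):

```python
import numpy as np
from sklearn.cluster import KMeans

# mmse: (n_subjects, 4) raw MMSE at baseline, 6, 12, and 24 months (synthetic here).
mmse = np.random.default_rng(0).integers(10, 31, size=(494, 4)).astype(float)

# Subtract each subject's baseline score so every trajectory starts at 0.
delta = mmse - mmse[:, [0]]

# Cluster the trajectories into slow, intermediate, and fast decline groups (k = 3).
labels = KMeans(n_clusters=3, random_state=0, n_init=10).fit_predict(delta)
```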
Data preprocessing: The data for the imaging and genotype modalities were processed prior to the tokenization process following best practices for the corresponding modality, as described below.
Imaging: The FreeSurfer image analysis suite [36] was used to conduct cortical reconstruction and volumetric segmentation. T1-weighted MRI scans were segmented based on the FreeSurfer atlas for cross-sectional processing, enabling group comparison at a specific time point [37]. For each reconstructed cortical region, the cortical volume, thickness average, thickness standard deviation, and surface area measurements were labeled according to the 2010 Desikan–Killiany atlas. The UCSF ADNI team conducted this process [38].
Genotype: The genotype variants were filtered using the intersection between the List of AD Loci and Genes with Genetic Evidence Compiled by the ADSP Gene Verification Committee and the most recent genome-wide association study (GWAS) on AD [39]. The odds ratios, rare allele frequency, and intergenic region binary trait were obtained from the most recent GWAS study with accession number (GCST90027158), accessed through the GWAS catalog [40]. Furthermore, the genotype variants were processed for sample and variant quality controls using PLINK1.9 [41].
Clinical: The clinical assessment features corresponded to seven different cognitive metrics available through ADNI [31]. These were Logical Memory-Delayed Recall (LDELTOTAL), Digit Symbol Substitution (DIGITSCOR), Trail Making Test B (TRABSCOR), and Rey Auditory Verbal Learning Test (RAVLT) scores: immediate, learning, forgetting, and percent forgetting. Values for the imaging and clinical modalities were normalized for each training set before they were used as inputs to the network.

3.2. Experimental Design

All models underwent training and evaluation using a 10-fold stratified cross-testing approach. Initially, the entire dataset was divided into 10 folds, with one fold reserved for testing and the remaining nine for training. Subsequently, this training set was further divided into 5 folds for a 5-fold stratified cross-validation process for hyperparameter tuning. This robust framework was designed to prevent any data leakage. The optimal hyperparameters were determined during each experimental run by selecting the best-performing model based on the validation set. Each of the 10 test sets was evaluated 5 times, using hyperparameters determined by the validation sets, resulting in a total of 50 evaluations for each method. Predictions were evaluated using the area under the receiver operating characteristic curve (AUROC). A one-vs-one strategy was employed in which the average AUROC over all possible pairwise combinations of classes was computed for a balanced metric. This was implemented using the scikit-learn API [42], which implements the method described in [43]. The mean AUROC and standard deviation across all 50 runs are reported for each method in Table 2. The model was compared against the stage-wise deep learning intermediate fusion model introduced in [44] and several well-established traditional ML models—random forest (RF) and support vector machine (SVM) with a radial-basis function (RBF) kernel. Similarly, each branch of the model was used as a single-modality comparison, consisting of a series of transformer encoder layers and an MLP head for classification.
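A minimal sketch of the nested cross-testing protocol and the one-vs-one AUROC metric (scikit-learn; synthetic data and a default random forest stand in for the actual features, models, and hyperparameter search):

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.metrics import roc_auc_score
from sklearn.ensemble import RandomForestClassifier

# X: (n_subjects, n_features) baseline features; y: subtype labels (0, 1, 2). Synthetic here.
rng = np.random.default_rng(0)
X, y = rng.normal(size=(494, 30)), rng.integers(0, 3, size=494)

outer = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
aurocs = []
for train_idx, test_idx in outer.split(X, y):
    inner = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
    for fit_idx, val_idx in inner.split(X[train_idx], y[train_idx]):
        # In the real protocol, hyperparameters are tuned on the inner validation split;
        # here a default model stands in, yielding 10 x 5 = 50 test evaluations per method.
        model = RandomForestClassifier(n_estimators=100, random_state=0)
        model.fit(X[train_idx][fit_idx], y[train_idx][fit_idx])
        prob = model.predict_proba(X[test_idx])
        # One-vs-one macro-averaged AUROC over all pairwise class combinations.
        aurocs.append(roc_auc_score(y[test_idx], prob, multi_class="ovo"))

print(f"AUROC: {np.mean(aurocs):.3f} +/- {np.std(aurocs):.3f}")
```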
Tri-COAT consists of four transformer layers for each of the single-modality encoders, with four attention heads per transformer layer. The tri-modal co-attention process uses a single-head attention mechanism. The classifying MLP has one hidden layer with 256 units. An embedding dimension of k = 256 was used for all modalities. The model dimensions for the single-modality encoders were kept at 256 throughout, as this combination achieved the best results on the validation set. The final MLP took as input the concatenated class tokens resulting from the tri-modal co-attention module and computed the output logits for each of the three possible classes. Adam was used to optimize both the Tri-COAT and stage-wise MLP models, with a learning rate of 0.0001 for both Tri-COAT and the stage-wise fusion benchmarking model. All deep learning models were trained using cross-entropy loss. All deep learning models were implemented using PyTorch, while the RF and SVM models were implemented using scikit-learn. All deep learning models were trained for 100 epochs, and the best checkpoint, i.e., the epoch with the highest AUROC on the validation set, was selected for model evaluation. The stage-wise deep learning fusion model had dimensions of 64 units for the single-modality layers, 32 for the second stage, and 16 for the final stage. The model dimensions were selected following the hyperparameters described in [44]. The SVM used an RBF kernel and a regularization parameter C of 1. The random forest used the Gini criterion for leaf splitting, 100 trees, and no maximum depth.
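A sketch of the training and checkpoint-selection loop described above (PyTorch; `model`, `train_loader`, `val_loader`, and `evaluate_auroc` are hypothetical placeholders for the actual implementation):

```python
import torch
import torch.nn as nn

def train(model, train_loader, val_loader, evaluate_auroc, epochs=100, lr=1e-4):
    """Train with Adam and cross-entropy; keep the checkpoint with the best validation AUROC."""
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    criterion = nn.CrossEntropyLoss()
    best_auroc, best_state = -1.0, None
    for epoch in range(epochs):
        model.train()
        for x, y in train_loader:           # x may bundle the imaging, genetic, and clinical inputs
            optimizer.zero_grad()
            loss = criterion(model(x), y)
            loss.backward()
            optimizer.step()
        auroc = evaluate_auroc(model, val_loader)
        if auroc > best_auroc:              # best checkpoint = highest validation AUROC
            best_auroc = auroc
            best_state = {name: p.clone() for name, p in model.state_dict().items()}
    model.load_state_dict(best_state)
    return model
```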

4. Results and Discussion

4.1. Clustering, Label Definition

Following the literature, the number of clusters was set to three main groups [45,46,47]. The MMSE score was used as an indicator of mental decline. Based on the speed of the progression of the mental decline over a period of two years, three groups were defined, i.e., the fast-, intermediate-, and slow-progressing subtypes. K-means clustering was used to assign each subject to one of the groups. Through this process, labels were defined for all subjects. Only baseline data were used as input to Tri-COAT and all competing models. Based on the baseline data, Tri-COAT was able to effectively classify the subjects into their corresponding progression-specific subtypes.

4.2. AD Progression-Specific Subtype Classification

As seen in Table 2, Tri-COAT outperformed the single-modality ablations and baseline models, achieving an average AUROC of 0.734 ± 0.076 across all test sets in the 10-fold cross-testing framework. For the single-modality ablation studies, each modality was used independently to classify the AD subtype. For Tri-COAT, a single-modality transformer encoder backbone and MLP head were used. The stage-wise fusion model was adapted to MLPs using the same number of hidden layers as the first plus last stage of the multimodal version. For the SVM and RF, no variations were required. Each modality was evaluated using the same 10-fold cross-testing and 5-fold cross-validation hyperparameter tuning framework. Moreover, the single-modality ablation models outperformed their baseline counterparts for the clinical and genetics modalities. In contrast, the baselines achieved better performance for the imaging-derived traits. Among the three modalities, the clinical modality achieved the best classification AUROC, followed by the imaging and genetics modalities. This is expected, as this order biologically reflects how closely each modality relates to the observed phenotype. Clinical (cognitive) assessments are the closest to the MMSE metric, followed by imaging (changes in brain morphology), which is directly related to the observed phenotype, with genetics being the farthest from the expressed symptoms. Both the comparative models and Tri-COAT achieve higher performance in their multimodal configurations than with single modalities, agreeing with the previous literature on multimodal approaches for the classification of AD and related disorders.
Furthermore, as seen in Table 3, Tri-COAT outperformed variations of itself using alternative fusion strategies. The early fusion model considerably underperforms, achieving an AUROC of 0.571 ± 0.053, because of its limited capability to simultaneously encode highly heterogeneous data with distinct biological-level relationships to the outcome. Similarly, the late-stage fusion model underperforms, as it is limited to joining the predictions from each branch and cannot learn the relationships between the different modalities in the latent space.

4.3. Biomarker Associations Learned by Co-Attention

One of the key advantages of Tri-COAT compared to the baseline models and traditional deep learning approaches is its ability to provide insights into the cross-modal feature associations it learns. To explore the learned relationships, the model with the highest test AUROC from the evaluation framework was selected for attention visualization, in which the learned attention scores were averaged across all test subjects. Chord plots were drawn using the circlize R library [48] to visualize the cross-modal attention. As seen in Figure 4, Tri-COAT identified key associations involving the Trail Making Test B score (TRABSCOR) in both the clinical–imaging and clinical–genetics associations. This score tests the patient’s working memory and, secondarily, task-switching ability [49]. The clinical literature shows a strong correlation between gyri structures—the temporal gyrus and parahippocampal gyrus (LTransTemp, LPH)—and the TRABSCOR [50]. Similarly, for the clinical–genotype association, the TRABSCOR was found to be associated with the CD2AP gene, which has been clinically identified as a driver of an AD hallmark—neurofibrillary tangles (NFTs)—in the temporal gyrus region [51]. This is a very exciting finding for our network, as it establishes a putative relationship between genetics (the CD2AP gene), brain ROIs (the temporal gyrus), and clinical symptoms (TRABSCOR).
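A minimal sketch of how such averaged attention scores could be extracted before plotting (hypothetical array shapes and feature names; the chord plots themselves were produced with the circlize R library):

```python
import numpy as np

# Hypothetical co-attention scores collected at test time:
# shape (n_test_subjects, n_query_tokens, n_clinical_tokens).
attn = np.random.default_rng(0).random((50, 70, 7))
clinical_names = ["LDELTOTAL", "DIGITSCOR", "TRABSCOR", "RAVLT_immediate",
                  "RAVLT_learning", "RAVLT_forgetting", "RAVLT_pct_forgetting"]

mean_attn = attn.mean(axis=0)                          # average across all test subjects
order = np.argsort(mean_attn, axis=None)[::-1][:5]     # five strongest cross-modal links
rows, cols = np.unravel_index(order, mean_attn.shape)
for q, c in zip(rows, cols):
    print(f"query token {q} <-> {clinical_names[c]}: {mean_attn[q, c]:.3f}")
```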

5. Conclusions

AD is the most prevalent neurodegenerative disease, and all current treatments are limited to slowing disease progression. Therefore, early diagnosis is essential. Furthermore, there are multiple subtypes with different rates of cognitive decline. In order to move closer to personalized medicine, it is essential to have a better understanding of the heterogeneity surrounding the development of this disease. However, early subtyping is a very challenging task. Our proposed model, Tri-COAT, was able to effectively classify AD patients into three main progression-specific subtypes using prodromal factors measured at baseline.
Moreover, the model was able to identify multiple putative cross-modal biomarker networks. The putative biomarkers provide enhanced interpretability for Tri-COAT and shed light on possible exciting therapeutic targets. Nevertheless, the generalizability of applying the learned features to other datasets remains to be tested.
The future directions are very exciting, as Tri-COAT could be extended to other heterogeneous neurodegenerative diseases such as Parkinson’s disease. Moreover, as shown in this work, multimodal approaches achieved the best results. A promising future direction is to incorporate further modalities such as PET imaging and transcriptomic data. PET imaging could provide further clarity on the accumulation of fluid biomarkers and their impact on neurodegeneration. Similarly, transcriptomic data could provide an intermediate biological step between the genotype and brain endophenotypes. These could lead to an enhanced understanding of the underlying mechanisms and provide further therapeutic targets.

Author Contributions

Conceptualization, D.M.R., H.C., L.S. and P.Y.; Formal analysis, D.M.R.; Funding acquisition, D.M.R., J.H. and P.Y.; Investigation, D.M.R.; Methodology, D.M.R.; Project administration, P.Y.; Resources, P.Y.; Supervision, P.Y.; Visualization, D.M.R.; Writing—original draft, D.M.R.; Writing—review & editing, H.C., J.H., L.S. and P.Y. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded in part by the training grant T32AG078123, the NSF CAREER award 2046708, NSF IIS 1837964, and NIH grants U01 AG066833, U01 AG068057, and RF1 AG063481.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

All data used in the preparation of this article were obtained from the Alzheimer’s Disease Neuroimaging Initiative (ADNI) database (adni.loni.usc.edu). The ADNI was launched in 2003 as a public-private partnership, led by Principal Investigator Michael W. Weiner, MD. The primary goal of ADNI has been to test whether serial magnetic resonance imaging (MRI), positron emission tomography (PET), other biological markers, and clinical and neuropsychological assessment can be combined to measure the progression of mild cognitive impairment (MCI) and early Alzheimer’s disease (AD).

Acknowledgments

Data collection and sharing for this project were funded by the Alzheimer’s Disease Neuroimaging Initiative (ADNI) (National Institutes of Health Grant U01 AG024904) and DOD ADNI (Department of Defense award number W81XWH-12-2-0012). ADNI is funded by the National Institute on Aging, the National Institute of Biomedical Imaging and Bioengineering, and through generous contributions from the following: AbbVie, Alzheimer’s Association; Alzheimer’s Drug Discovery Foundation; Araclon Biotech; BioClinica, Inc.; Biogen; Bristol-Myers Squibb Company; CereSpir, Inc.; Cogstate; Eisai Inc.; Elan Pharmaceuticals, Inc.; Eli Lilly and Company; EuroImmun; F. Hoffmann-La Roche Ltd. and its affiliated company Genentech, Inc.; Fujirebio; GE Healthcare; IXICO Ltd.; Janssen Alzheimer Immunotherapy Research & Development, LLC.; Johnson & Johnson Pharmaceutical Research & Development LLC.; Lumosity; Lundbeck; Merck & Co., Inc.; Meso Scale Diagnostics, LLC.; NeuroRx Research; Neurotrack Technologies; Novartis Pharmaceuticals Corporation; Pfizer Inc.; Piramal Imaging; Servier; Takeda Pharmaceutical Company; and Transition Therapeutics. The Canadian Institutes of Health Research is providing funds to support ADNI clinical sites in Canada. Private sector contributions are facilitated by the Foundation for the National Institutes of Health (www.fnih.org). The grantee organization is the Northern California Institute for Research and Education, and the study is coordinated by the Alzheimer’s Therapeutic Research Institute at the University of Southern California. ADNI data are disseminated by the Laboratory for Neuro Imaging at the University of Southern California.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Alzheimer’s Association. 2022 Alzheimer’s disease facts and figures. Alzheimer’s Dement. 2022, 18, 700–789. [Google Scholar] [CrossRef] [PubMed]
  2. Dhillon, S. Aducanumab: First Approval. Drugs 2021, 81, 1437–1443. [Google Scholar] [CrossRef] [PubMed]
  3. Swanson, C.J.; Zhang, Y.; Dhadda, S.; Wang, J.; Kaplow, J.; Lai, R.Y.K.; Lannfelt, L.; Bradley, H.; Rabe, M.; Koyama, A.; et al. A randomized, double-blind, phase 2b proof-of-concept clinical trial in early Alzheimer’s disease with lecanemab, an anti-Aβ protofibril antibody. Alzheimer’s Res. Ther. 2021, 13, 80. [Google Scholar] [CrossRef] [PubMed]
  4. Shcherbinin, S.; Evans, C.D.; Lu, M.; Andersen, S.W.; Pontecorvo, M.J.; Willis, B.A.; Gueorguieva, I.; Hauck, P.M.; Brooks, D.A.; Mintun, M.A.; et al. Association of Amyloid Reduction After Donanemab Treatment With Tau Pathology and Clinical Outcomes: The TRAILBLAZER-ALZ Randomized Clinical Trial. JAMA Neurol. 2022, 79, 1015–1024. [Google Scholar] [CrossRef] [PubMed]
  5. Foster, N.L.; Bondi, M.W.; Das, R.; Foss, M.; Hershey, L.A.; Koh, S.; Logan, R.; Poole, C.; Shega, J.W.; Sood, A.; et al. Quality improvement in neurology. Neurology 2019, 93, 705–713. [Google Scholar] [CrossRef] [PubMed]
  6. Hassen, S.B.; Neji, M.; Hussain, Z.; Hussain, A.; Alimi, A.M.; Frikha, M. Deep learning methods for early detection of Alzheimer’s disease using structural MR images: A survey. Neurocomputing 2024, 576, 127325. [Google Scholar] [CrossRef]
  7. McKhann, G.M.; Knopman, D.S.; Chertkow, H.; Hyman, B.T.; Jack, C.R., Jr.; Kawas, C.H.; Klunk, W.E.; Koroshetz, W.J.; Manly, J.J.; Mayeux, R.; et al. The diagnosis of dementia due to Alzheimer’s disease: Recommendations from the National Institute on Aging-Alzheimer’s Association workgroups on diagnostic guidelines for Alzheimer’s disease. Alzheimer’s Dement. 2011, 7, 263–269. [Google Scholar] [CrossRef] [PubMed]
  8. Dubois, B.; Feldman, H.H.; Jacova, C.; Hampel, H.; Molinuevo, J.L.; Blennow, K.; DeKosky, S.T.; Gauthier, S.; Selkoe, D.; Bateman, R.; et al. Advancing research diagnostic criteria for Alzheimer’s disease: The IWG-2 criteria. Lancet Neurol. 2014, 13, 614–629. [Google Scholar] [CrossRef] [PubMed]
  9. Tao, Q.Q.; Lin, R.R.; Wu, Z.Y. Early Diagnosis of Alzheimer’s Disease: Moving Toward a Blood-Based Biomarkers Era. Clin. Interv. Aging 2023, 18, 353–358. [Google Scholar] [CrossRef]
  10. Dubois, B.; von Arnim, C.A.F.; Burnie, N.; Bozeat, S.; Cummings, J. Biomarkers in Alzheimer’s disease: Role in early and differential diagnosis and recognition of atypical variants. Alzheimer’s Res. Ther. 2023, 15, 175. [Google Scholar] [CrossRef]
  11. Zhang, S.; Zhao, H.; Wang, W.; Wang, Z.; Luo, X.; Hramov, A.; Kurths, J. Edge-centric effective connection network based on muti-modal MRI for the diagnosis of Alzheimer’s disease. Neurocomputing 2023, 552, 126512. [Google Scholar] [CrossRef]
  12. Collij, L.E.; Salvadó, G.; Wottschel, V.; Mastenbroek, S.E.; Schoenmakers, P.; Heeman, F.; Aksman, L.; Wink, A.M.; Berckel, B.N.; van de Flier, W.M.; et al. Spatial-Temporal Patterns of β-Amyloid Accumulation. Neurology 2022, 98, e1692–e1703. [Google Scholar] [CrossRef] [PubMed]
  13. Bejanin, A.; Schonhaut, D.R.; La Joie, R.; Kramer, J.H.; Baker, S.L.; Sosa, N.; Ayakta, N.; Cantwell, A.; Janabi, M.; Lauriola, M.; et al. Tau pathology and neurodegeneration contribute to cognitive impairment in Alzheimer’s disease. Brain 2017, 140, 3286–3300. [Google Scholar] [CrossRef] [PubMed]
  14. Vogel, J.W.; Young, A.L.; Oxtoby, N.P.; Smith, R.; Ossenkoppele, R.; Strandberg, O.T.; La Joie, R.; Aksman, L.M.; Grothe, M.J.; Iturria-Medina, Y.; et al. Four distinct trajectories of tau deposition identified in Alzheimer’s disease. Nat. Med. 2021, 27, 871–881. [Google Scholar] [CrossRef]
  15. Mitelpunkt, A.; Galili, T.; Kozlovski, T.; Bregman, N.; Shachar, N.; Markus-Kalish, M.; Benjamini, Y. Novel Alzheimer’s disease subtypes identified using a data and knowledge driven strategy. Sci. Rep. 2020, 10, 1327. [Google Scholar] [CrossRef]
  16. Badhwar, A.; McFall, G.P.; Sapkota, S.; Black, S.E.; Chertkow, H.; Duchesne, S.; Masellis, M.; Li, L.; Dixon, R.A.; Bellec, P. A multiomics approach to heterogeneity in Alzheimer’s disease: Focused review and roadmap. Brain 2020, 143, 1315–1331. [Google Scholar] [CrossRef] [PubMed]
  17. Martí-Juan, G.; Sanroma, G.; Piella, G.; Alzheimer’s Disease Neuroimaging Initiative and the Alzheimer’s Disease Metabolomics Consortium. Revealing heterogeneity of brain imaging phenotypes in Alzheimer’s disease based on unsupervised clustering of blood marker profiles. PLoS ONE 2019, 14, e0211121. [Google Scholar] [CrossRef]
  18. Feng, Y.; Kim, M.; Yao, X.; Liu, K.; Long, Q.; Shen, L.; for the Alzheimer’s Disease Neuroimaging Initiative. Deep multiview learning to identify imaging-driven subtypes in mild cognitive impairment. BMC Bioinform. 2022, 23, 402. [Google Scholar] [CrossRef]
  19. El-Sappagh, S.; Ali, F.; Abuhmed, T.; Singh, J.; Alonso, J.M. Automatic detection of Alzheimer’s disease progression: An efficient information fusion approach with heterogeneous ensemble classifiers. Neurocomputing 2022, 512, 203–224. [Google Scholar] [CrossRef]
  20. Emon, M.A.; Heinson, A.; Wu, P.; Domingo-Fernández, D.; Sood, M.; Vrooman, H.; Corvol, J.C.; Scordis, P.; Hofmann-Apitius, M.; Fröhlich, H. Clustering of Alzheimer’s and Parkinson’s disease based on genetic burden of shared molecular mechanisms. Sci. Rep. 2020, 10, 19097. [Google Scholar] [CrossRef]
  21. Wen, J.; Varol, E.; Sotiras, A.; Yang, Z.; Chand, G.B.; Erus, G.; Shou, H.; Abdulkadir, A.; Hwang, G.; Dwyer, D.B.; et al. Multi-scale semi-supervised clustering of brain images: Deriving disease subtypes. Med. Image Anal. 2022, 75, 102304. [Google Scholar] [CrossRef]
  22. Poulakis, K.; Pereira, J.B.; Muehlboeck, J.S.; Wahlund, L.O.; Smedby, O.; Volpe, G.; Masters, C.L.; Ames, D.; Niimi, Y.; Iwatsubo, T.; et al. Multi-cohort and longitudinal Bayesian clustering study of stage and subtype in Alzheimer’s disease. Nat. Commun. 2022, 13, 4566. [Google Scholar] [CrossRef] [PubMed]
  23. Odusami, M.; Maskeliūnas, R.; Damaševičius, R. Optimized Convolutional Fusion for Multimodal Neuroimaging in Alzheimer’s Disease Diagnosis: Enhancing Data Integration and Feature Extraction. J. Pers. Med. 2023, 13, 1496. [Google Scholar] [CrossRef]
  24. Dadu, A.; Satone, V.; Kaur, R.; Hashemi, S.H.; Leonard, H.; Iwaki, H.; Makarious, M.B.; Billingsley, K.J.; Bandres-Ciga, S.; Sargent, L.J.; et al. Identification and prediction of Parkinson’s disease subtypes and progression using machine learning in two cohorts. NPJ Parkinson’s Dis. 2022, 8, 1–12. [Google Scholar] [CrossRef]
  25. Su, C.; Hou, Y.; Xu, J.; Xu, J.; Brendel, M.; Maasch, J.R.M.A.; Bai, Z.; Zhang, H.; Zhu, Y.; Henchcliffe, C.; et al. Parkinson’s Disease Progression. medRxiv 2022, 2021.07.18.21260731. [Google Scholar] [CrossRef]
  26. Nguyen, N.D.; Wang, D. Multiview learning for understanding functional multiomics. PLoS Comput. Biol. 2020, 16, e1007677. [Google Scholar] [CrossRef]
  27. Huang, S.C.; Pareek, A.; Seyyedi, S.; Banerjee, I.; Lungren, M.P. Fusion of medical imaging and electronic health records using deep learning: A systematic review and implementation guidelines. NPJ Digit. Med. 2020, 3, 136. [Google Scholar] [CrossRef] [PubMed]
  28. Alayrac, J.B.; Donahue, J.; Luc, P.; Miech, A.; Barr, I.; Hasson, Y.; Lenc, K.; Mensch, A.; Millican, K.; Reynolds, M.; et al. Flamingo: A Visual Language Model for Few-Shot Learning. arXiv 2022, arXiv:2204.14198. [Google Scholar] [CrossRef]
  29. Akbari, H.; Yuan, L.; Qian, R.; Chuang, W.H.; Chang, S.F.; Cui, Y.; Gong, B. VATT: Transformers for Multimodal Self-Supervised Learning from Raw Video, Audio and Text. arXiv 2021, arXiv:2104.11178. [Google Scholar] [CrossRef]
  30. Chen, R.J.; Lu, M.Y.; Weng, W.H.; Chen, T.Y.; Williamson, D.F.K.; Manz, T.; Shady, M.; Mahmood, F. Multimodal Co-Attention Transformer for Survival Prediction in Gigapixel Whole Slide Images. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), Montreal, BC, Canada, 11–17 October 2021; pp. 4015–4025. [Google Scholar]
  31. Jack, C.R., Jr.; Bernstein, M.A.; Fox, N.C.; Thompson, P.; Alexander, G.; Harvey, D.; Borowski, B.; Britson, P.J.; Whitwell, J.L.; Ward, C.; et al. The Alzheimer’s disease neuroimaging initiative (ADNI): MRI methods. J. Magn. Reson. Imaging 2008, 27, 685–691. [Google Scholar] [CrossRef]
  32. Machado-Reyes, D.; Kim, M.; Chao, H.; Shen, L.; Yan, P. Connectome transformer with anatomically inspired attention for Parkinson’s diagnosis. In Proceedings of the 13th ACM International Conference on Bioinformatics, Computational Biology and Health Informatics, New York, NY, USA, 7–10 August 2022; BCB ’22. pp. 1–4. [Google Scholar] [CrossRef]
  33. Machado-Reyes, D.; Kim, M.; Chaoh, H.; Hahn, J.; Shen, L.; Yan, P. Genomics transformer for diagnosing Parkinson’s disease. In Proceedings of the 2022 IEEE-EMBS International Conference on Biomedical and Health Informatics (BHI), Ioannina, Greece, 27–30 September 2022; pp. 1–4, ISSN 2641-3604. [Google Scholar] [CrossRef]
  34. Ba, J.L.; Kiros, J.R.; Hinton, G.E. Layer Normalization. arXiv 2016, arXiv:1607.06450. [Google Scholar]
  35. Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A.N.; Kaiser, L.; Polosukhin, I. Attention is All you Need. In Proceedings of the Advances in Neural Information Processing Systems; Curran Associates, Inc.: Red Hook, NY, USA, 2017; Volume 30. [Google Scholar]
  36. Fischl, B. FreeSurfer. NeuroImage 2012, 62, 774–781. [Google Scholar] [CrossRef] [PubMed]
  37. Fischl, B.; Dale, A.M. Measuring the thickness of the human cerebral cortex from magnetic resonance images. Proc. Natl. Acad. Sci. USA 2000, 97, 11050–11055. [Google Scholar] [CrossRef] [PubMed]
  38. Hartig, M.; Truran-Sacrey, D.; Raptentsetsang, S.; Simonson, A.; Mezher, A.; Schuff, N.; Weiner, M. UCSF Freesurfer Methods; ADNI Alzheimers Disease Neuroimaging Initiative: San Francisco, CA, USA, 2014. [Google Scholar]
  39. Bellenguez, C.; Küçükali, F.; Jansen, I.E.; Kleineidam, L.; Moreno-Grau, S.; Amin, N.; Naj, A.C.; Campos-Martin, R.; Grenier-Boley, B.; Andrade, V.; et al. New insights into the genetic etiology of Alzheimer’s disease and related dementias. Nat. Genet. 2022, 54, 412–436. [Google Scholar] [CrossRef] [PubMed]
  40. Sollis, E.; Mosaku, A.; Abid, A.; Buniello, A.; Cerezo, M.; Gil, L.; Groza, T.; Güneş, O.; Hall, P.; Hayhurst, J.; et al. The NHGRI-EBI GWAS Catalog: Knowledgebase and deposition resource. Nucleic Acids Res. 2023, 51, D977–D985. [Google Scholar] [CrossRef]
  41. Purcell, S.; Neale, B.; Todd-Brown, K.; Thomas, L.; Ferreira, M.A.R.; Bender, D.; Maller, J.; Sklar, P.; de Bakker, P.I.W.; Daly, M.J.; et al. PLINK: A tool set for whole-genome association and population-based linkage analyses. Am. J. Hum. Genet. 2007, 81, 559–575. [Google Scholar] [CrossRef] [PubMed]
  42. Pedregosa, F.; Varoquaux, G.; Gramfort, A.; Michel, V.; Thirion, B.; Grisel, O.; Blondel, M.; Prettenhofer, P.; Weiss, R.; Dubourg, V.; et al. Scikit-learn: Machine Learning in Python. J. Mach. Learn. Res. 2011, 12, 2825–2830. [Google Scholar]
  43. Hand, D.J.; Till, R.J. A Simple Generalisation of the Area Under the ROC Curve for Multiple Class Classification Problems. Mach. Learn. 2001, 45, 171–186. [Google Scholar] [CrossRef]
  44. Zhou, T.; Thung, K.H.; Zhu, X.; Shen, D. Effective feature learning and fusion of multimodality data using stage-wise deep neural network for dementia diagnosis. Hum. Brain Mapp. 2019, 40, 1001–1016. [Google Scholar] [CrossRef] [PubMed]
  45. Doody, R.S.; Massman, P.; Dunn, J.K. A Method for Estimating Progression Rates in Alzheimer Disease. Arch. Neurol. 2001, 58, 449–454. [Google Scholar] [CrossRef]
  46. Doody, R.S.; Pavlik, V.; Massman, P.; Rountree, S.; Darby, E.; Chan, W. Predicting progression of Alzheimer’s disease. Alzheimer’s Res. Ther. 2010, 2, 2. [Google Scholar] [CrossRef]
  47. Prosser, A.; Evenden, D.; Holmes, R.; Kipps, C. Progression modelling of cognitive decline and associated FDG-PET imaging features in Alzheimer’s disease. Alzheimer’s Dement. 2020, 16, e045900. [Google Scholar] [CrossRef]
  48. Gu, Z.; Gu, L.; Eils, R.; Schlesner, M.; Brors, B. Circlize implements and enhances circular visualization in R. Bioinformatics 2014, 30, 2811–2812. [Google Scholar] [CrossRef] [PubMed]
  49. Terada, S.; Sato, S.; Nagao, S.; Ikeda, C.; Shindo, A.; Hayashi, S.; Oshima, E.; Yokota, O.; Uchitomi, Y. Trail Making Test B and brain perfusion imaging in mild cognitive impairment and mild Alzheimer’s disease. Psychiatry Res. Neuroimaging 2013, 213, 249–255. [Google Scholar] [CrossRef] [PubMed]
  50. Matías-Guiu, J.A.; Cabrera-Martín, M.N.; Valles-Salgado, M.; Pérez-Pérez, A.; Rognoni, T.; Moreno-Ramos, T.; Carreras, J.L.; Matías-Guiu, J. Neural Basis of Cognitive Assessment in Alzheimer Disease, Amnestic Mild Cognitive Impairment, and Subjective Memory Complaints. Am. J. Geriatr. Psychiatry 2017, 25, 730–740. [Google Scholar] [CrossRef]
  51. Camacho, J.; Rábano, A.; Marazuela, P.; Bonaterra-Pastra, A.; Serna, G.; Moliné, T.; Ramón y Cajal, S.; Martínez-Sáez, E.; Hernández-Guillamon, M. Association of CD2AP neuronal deposits with Braak neurofibrillary stage in Alzheimer’s disease. Brain Pathol. 2021, 32, e13016. [Google Scholar] [CrossRef]
Figure 1. The three main multimodal fusion strategies, early, intermediate, and late fusion, for deep learning methods.
Figure 2. Illustration of the proposed framework for AD subtyping consisting of two main sections: single-modality encoding and tri-modal attention with joint encoding.
Figure 3. AD progression-specific subtype clusters based on a decrease in the MMSE at each visit. (a) Each line represents the average score across patients for each cluster, and the shadow represents one standard deviation. (b) Individual lines per patient are plotted.
Figure 4. Cross-modal associations of key AD biomarkers visualized from the learned co-attention.
Table 1. Subject data distribution reported as the mean ± standard deviation for the MMSE and age and as counts per category for the number of participants and sex.
                     Slow             Intermediate     Fast
Participants         177              302              15
MMSE (Baseline)      27.35 ± 2.51     27.66 ± 1.86     24.93 ± 3.55
MMSE (24 months)     28.15 ± 2.15     23.86 ± 3.68     15.9 ± 4.84
Age                  73.26 ± 7.82     72.44 ± 7.55     71.22 ± 3.92
Sex                  M: 102, F: 75    M: 185, F: 117   M: 9, F: 6
Table 2. Mean AUROC ± SD of 10-fold cross-testing results. The proposed model significantly outperformed all the baseline models. The statistical significance was evaluated by paired t-test with α = 0.005 , except for the entry where α = 0.05 (shown in italics).
Method               Full             Imaging          Genetics         Clinical
SVM                  0.705 ± 0.036    0.669 ± 0.060    0.525 ± 0.034    0.639 ± 0.078
RF                   0.684 ± 0.048    0.677 ± 0.052    0.505 ± 0.031    0.659 ± 0.087
Stage-wise fusion    0.641 ± 0.017    0.557 ± 0.096    0.562 ± 0.078    0.655 ± 0.057
Tri-COAT             0.734 ± 0.076    0.648 ± 0.056    0.539 ± 0.084    0.697 ± 0.063
Table 3. Mean AUROC ± SD of 10-fold cross-testing results. Tri-COAT significantly outperformed early and late fusion variants. The statistical significance was evaluated by paired t-test with α = 0.005 , except for the entry where α = 0.05 (shown in italics).
Method      AUROC
Early       0.571 ± 0.053
Late        0.604 ± 0.048
Tri-COAT    0.734 ± 0.076
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
