Article

Resting-State Functional MRI Adaptation with Attention Graph Convolution Network for Brain Disorder Identification

1 School of Mathematics Science, Liaocheng University, Liaocheng 252000, China
2 Department of Radiology and Biomedical Research Imaging Center, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
* Authors to whom correspondence should be addressed.
Brain Sci. 2022, 12(10), 1413; https://doi.org/10.3390/brainsci12101413
Submission received: 18 September 2022 / Revised: 13 October 2022 / Accepted: 17 October 2022 / Published: 20 October 2022
(This article belongs to the Section Computational Neuroscience and Neuroinformatics)

Abstract

Multi-site resting-state functional magnetic resonance imaging (rs-fMRI) data allow learning-based approaches to train reliable models on more data. However, significant data heterogeneity between imaging sites, caused by different scanners or protocols, can negatively impact the generalization ability of learned models. In addition, previous studies have shown that graph convolutional networks (GCNs) are effective in mining fMRI biomarkers. However, they generally ignore the potentially different contributions of brain regions-of-interest (ROIs) to automated disease diagnosis/prognosis. In this work, we propose a multi-site rs-fMRI adaptation framework with attention GCN (A2GCN) for brain disorder identification. Specifically, the proposed A2GCN consists of three major components: (1) a node representation learning module based on GCN to extract rs-fMRI features from functional connectivity networks, (2) a node attention mechanism module to capture the contributions of ROIs, and (3) a domain adaptation module to alleviate the differences in data distribution between sites through the constraint of mean absolute error and covariance. The A2GCN not only reduces data heterogeneity across sites, but also improves the interpretability of the learning algorithm by exploring important ROIs. Experimental results on the public ABIDE database demonstrate that our method achieves remarkable performance in fMRI-based recognition of autism spectrum disorders.

1. Introduction

Resting-state functional magnetic resonance imaging (rs-fMRI) is an imaging technique that uses blood-oxygen-level-dependent (BOLD) signals to map brain functional activity while subjects are at rest [1]. Compared with other fMRI techniques, rs-fMRI is attractive because it is non-invasive, offers high tissue resolution, and can sensitively detect differences between the functional activity networks of the brain under pathological and normal conditions [2]. Benefiting from advances in scanning hardware and protocols, as well as the rapid development of computer vision techniques, rs-fMRI has become one of the most effective means of studying the human brain in recent years. Relying on rs-fMRI, researchers have made remarkable progress in auxiliary diagnosis, pathogenesis research, and the search for objective biomarkers of mental disorders such as Autism Spectrum Disorder (ASD) and Major Depressive Disorder [3,4].
Machine learning and deep learning have been highly successful in natural image analysis. In contrast, their use in neuroimaging data analysis faces some unique problems, including the curse of dimensionality, small sample sizes, and limited ground-truth labels [5,6]. Through the continued efforts of researchers, public multi-site neuroimaging datasets, which increase the sample size and statistical power of the data, are helping to promote the adoption of data-driven machine/deep learning techniques. However, multi-site datasets introduce another important challenge: the data distributions often differ substantially across sites due to external factors such as different scanners or protocols [7,8]. This severely limits the generalization ability of machine/deep learning models, since such algorithms typically assume that all data are drawn from the same distribution [9,10,11].
Studies have shown that detecting abnormal low-frequency fluctuations in resting-state BOLD signals caused by pathological changes facilitates the analysis of brain connectivity and supports reliable pre- and post-operative treatment planning [12]. In neuroimaging studies, brain functional connectivity networks (FCNs) typically attempt to establish a potential link between two regions-of-interest (ROIs) based on linear temporal correlations [13]. Previous studies usually construct prediction models from statistical measures of FCNs (such as betweenness centrality and degree centrality) [14,15]. These practices often rely on extensive expert knowledge and are subjective, expensive, and time-consuming. An FCN is naturally modeled as a graph in a non-Euclidean space [16]. In recent years, graph neural networks, especially graph convolutional networks (GCNs), have become effective tools for handling such irregular graph data. The GCN is a natural extension of the convolutional neural network to the graph domain [17,18]. It can serve as a feature extractor that learns node features and structural information end-to-end simultaneously, making it one of the strongest current choices for graph learning tasks [19,20]. When GCNs are applied to rs-fMRI data, a comprehensive mapping of brain FC patterns can effectively describe the functional activity of the brain [21,22]. However, existing studies usually ignore the potentially different contributions of brain functional regions to the diagnosis of brain diseases, which limits the interpretability of GCN models.
As shown in Figure 1, we construct a domain adaptation model with attention GCN (A2GCN) for multi-site rs-fMRI-based ASD diagnosis. For convenience, we refer to a known (labeled) site as the source domain and define the site to be predicted as the target domain. In this paper, we focus on the graph classification task. Therefore, we first construct FCNs from the rs-fMRI data of subjects in the source/target domains and treat these FCNs as the corresponding source/target graphs. Then, we use a GCN as a feature extractor to capture node/ROI representations from the source and target graphs through graph convolution layers. In addition, a node attention mechanism is applied to automatically learn the contribution weight of each node/ROI. Finally, an objective function composed of multiple loss functions is jointly optimized to establish a cross-domain classification model with a wider application range. We use rs-fMRI data from three sites (NYU, UM, UCLA) of the public ABIDE database [23] to identify ASD patients from healthy controls (HCs) and evaluate the performance of our approach.
The remainder of this paper is organized as follows: In Section 2, we briefly review related work. In Section 3, we present our method and experimental setup. In Section 4, we introduce the data used in this work and the competing algorithms, and report the performance of the different algorithms; ablation experiments are also included to investigate the contribution of key components of our model. In Section 5, we discuss several extensions of this work and outline future directions. Finally, in Section 6, we summarize the proposed method.

2. Related Work

2.1. Graph Convolution Network for fMRI Analysis

The application of deep learning, especially graph convolutional network (GCN) models, to graph-structured data has attracted widespread attention [24,25]. A GCN advances network feature learning by integrating central node characteristics and graph topology information in its convolutional layers [26]. In particular, GCNs have achieved impressive results in helping researchers build mathematical models for computer-assisted diagnosis of brain diseases and in processing and analyzing neuroimaging data quickly and efficiently [27]. For example, Wang et al. [28] defined a GCN architecture based on fMRI features for brain disorder analysis. Based on the spatiotemporal information of rs-fMRI time series, Yao et al. [29] constructed a temporal-adaptive GCN architecture to study the periodic characteristics of the human brain. Gadgil et al. [30] focused on short subsequences of the BOLD signal, constructing a spatio-temporal GCN architecture to explore the non-stationary properties of FC. Traditional GCN studies usually treat the feature representation of each node as independent and equally important; that is, they do not consider the unique contribution of each specific node/ROI to rs-fMRI analysis. In this paper, we establish a node/ROI feature attention mechanism on top of the GCN to learn potential functional dependencies among brain regions, allowing us to identify the brain regions most informative for diagnosis. This significantly improves the interpretability of GCN models for automated fMRI analysis.

2.2. Domain Adaptation for Brain Disorder Diagnosis

Data acquired from multiple imaging sites are related but differently distributed, which is a classic domain adaptation problem [31,32]. Recent work roughly divides domain adaptation algorithms into two categories: (1) supervised domain adaptation, where the target domain samples contain a large or small amount of label information; and (2) unsupervised domain adaptation, where no labels are available for the target domain [33]. This work focuses on unsupervised domain adaptation, i.e., source domain samples are fully labeled while the target domain samples to be analyzed carry no label information, which is more valuable and more challenging in practice. To achieve domain alignment, many cross-domain classification algorithms have been proposed, including adaptation methods based on discrepancy measures, adversarial learning, and data reconstruction [34]. Domain adaptation has also achieved remarkable results in medical imaging. Ingalhalikar et al. [35] harmonized multi-site neuroimaging data with an empirical Bayes formulation to improve brain diagnostic classification accuracy. Guan et al. [32] defined a deep multi-site domain attention model for brain disease recognition. Zhang et al. [36] constructed an unsupervised domain adversarial network and established a brain disease prediction model with good classification performance. In this paper, we adopt a classical domain adaptation strategy: we simultaneously compute the mean absolute error (MAE) and the covariance of source- and target-domain features, so as to gradually align the node features learned from the two domains and alleviate the domain shift problem.

3. Methodology

In this section, we will first describe the concepts and notation related to the unsupervised domain adaptation problem (as shown in Table 1), and then introduce our approach in detail.

3.1. Notation and Problem Formulation

In general, a feature space $X$ and its marginal probability distribution $P(X)$ form a domain $D$. In this work, the source domain data drawn from distribution $P(X_s)$ are denoted $X_s \in \mathbb{R}^{M_s \times D_s}$, and the target domain data drawn from $P(X_t)$ are denoted $X_t \in \mathbb{R}^{M_t \times D_t}$, where $D_s$ and $D_t$ are the feature dimensions and $M_s$ and $M_t$ are the numbers of samples in the source and target domains, respectively. In unsupervised domain adaptation, the feature space and label space of the source and target domains are usually the same, but the data distributions differ, i.e., $P(X_s) \neq P(X_t)$. Our goal is to use the information learned from the labeled source domain to build a good graph classification model for a completely unlabeled target domain.
In this article, we focus on representation learning of nodes on a graph. Therefore, we first build a graph for each subject in the source and target domains. A subject from the source domain is represented as a graph $G_s = (V_s, A_s, X_s, Y_s)$, where $V_s$ is the set of nodes in $G_s$ and $A_s \in \mathbb{R}^{N_s \times N_s}$ is the weighted adjacency matrix quantifying the connection strength between nodes. $N_s = |V_s|$ is the number of nodes/ROIs in $G_s$. $X_s \in \mathbb{R}^{N_s \times D_s}$ is the feature matrix of graph $G_s$, whose $i$-th row is the feature vector of node $i$. $Y_s \in \mathbb{R}^{M_s}$ contains the labels of the source graphs; in this paper, healthy subjects are labeled 0 and patients are labeled 1. Similarly, each subject from the target domain is defined as a graph $G_t = (V_t, A_t, X_t)$, which is completely unlabeled. $V_t$ is the node set, $N_t = |V_t|$ is the number of nodes/ROIs in $G_t$, $A_t \in \mathbb{R}^{N_t \times N_t}$ is the weighted adjacency matrix, and $X_t \in \mathbb{R}^{N_t \times D_t}$ is the feature matrix of $G_t$.
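For concreteness, the per-subject graphs defined above can be stored in a simple container. The following is a minimal Python sketch; the class and field names (`SubjectGraph`, `adj`, `feat`, `label`) are illustrative and not taken from the authors' implementation.

```python
# A minimal sketch of the per-subject graph container described above.
# The class and field names are illustrative, not the authors' code.
from dataclasses import dataclass
from typing import Optional

import numpy as np


@dataclass
class SubjectGraph:
    adj: np.ndarray               # weighted adjacency matrix A, shape (N, N)
    feat: np.ndarray              # node feature matrix X, shape (N, D)
    label: Optional[int] = None   # 0 = healthy control, 1 = patient; None if unlabeled


# A source-domain subject carries a label; a target-domain subject does not.
source_graph = SubjectGraph(adj=np.eye(116), feat=np.eye(116), label=1)
target_graph = SubjectGraph(adj=np.eye(116), feat=np.eye(116))
```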

3.2. Proposed Method

The A2GCN model designed in this paper includes three modules, as shown in Figure 2: node representation learning, a node attention mechanism, and a domain adaptation module. Each module is described in detail below.

3.2.1. Node Representation Learning

To facilitate the downstream graph classification task, we use a GCN to capture node representations on each graph.
First, we use the preprocessed BOLD signals to compute the Pearson correlation coefficient (PC) between nodes on the graph and define it as the functional connectivity $e_{ij} \in [-1, 1]$ between the $i$-th and $j$-th brain regions:

$$e_{ij} = \frac{(v_i - \bar{v}_i)^\top (v_j - \bar{v}_j)}{\sqrt{(v_i - \bar{v}_i)^\top (v_i - \bar{v}_i)}\,\sqrt{(v_j - \bar{v}_j)^\top (v_j - \bar{v}_j)}} \quad (1)$$

where $v_i \in \mathbb{R}^{t_s}$ (with $v_i \in V_s$ or $V_t$) is the mean time series of the $i$-th ROI, $t_s$ is the number of time points, and $\bar{v}_i$ denotes the mean vector corresponding to $v_i$.
Thus, for each graph, the adjacency matrix $A_k \in \mathbb{R}^{N_k \times N_k}$ is defined as:

$$A_k^{ij} = \begin{cases} 1, & i = j \\ e_{ij}, & \text{otherwise} \end{cases} \quad (2)$$

where $k$ denotes the source domain $s$ or the target domain $t$. For simplicity, we describe the feature matrix $X_k \in \mathbb{R}^{N_k \times N_k}$ of each graph through the same correlation coefficients (i.e., $X_k^{ij} = e_{ij}$).
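As an illustration of Equations (1) and (2), the sketch below builds the adjacency and feature matrices from ROI mean time series with NumPy; the 116 ROIs follow the AAL parcellation used later in this paper, while the number of time points is arbitrary.

```python
# A minimal sketch of Equations (1)-(2): Pearson correlation between ROI
# mean time series, with the diagonal of the adjacency matrix set to 1.
# The same matrix also serves as the node feature matrix X_k, as in the text.
import numpy as np


def build_fcn(bold: np.ndarray) -> np.ndarray:
    """bold: (n_rois, n_timepoints) array of mean BOLD time series."""
    fcn = np.corrcoef(bold)      # pairwise Pearson correlations, in [-1, 1]
    np.fill_diagonal(fcn, 1.0)   # A_k^{ij} = 1 for i = j (Equation (2))
    return fcn


rng = np.random.default_rng(0)
bold = rng.standard_normal((116, 175))  # 116 AAL ROIs; time points illustrative
A_k = build_fcn(bold)                   # adjacency matrix, shape (116, 116)
X_k = A_k.copy()                        # feature matrix X_k^{ij} = e_{ij}
```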
Given the input feature matrix $X_k$ and adjacency matrix $A_k$, the traditional GCN model computes the output of the $(l+1)$-th hidden layer as:

$$H^{(l+1)} = \sigma\left(\tilde{D}^{-\frac{1}{2}} A_k \tilde{D}^{-\frac{1}{2}} H^{(l)} W^{(l)}\right) \quad (3)$$

where $\tilde{D}^{-\frac{1}{2}} A_k \tilde{D}^{-\frac{1}{2}}$ is the normalized adjacency matrix with $\tilde{D}_{ii} = \sum_j A_k^{ij}$, $W^{(l)}$ is the trainable weight matrix (i.e., the network parameters), and $\sigma(\cdot)$ is the activation function (ReLU is used here). $H^{(l)}$ is the feature matrix of layer $l$; when $l = 0$, $H^{(0)} = X_k$.
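The propagation rule in Equation (3) can be sketched in PyTorch as follows; this illustrates the normalized graph convolution, not the authors' exact layer.

```python
# A minimal PyTorch sketch of the propagation rule in Equation (3);
# it illustrates the normalized graph convolution, not the authors' exact layer.
import torch
import torch.nn as nn


class GraphConv(nn.Module):
    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.weight = nn.Linear(in_dim, out_dim, bias=False)  # trainable W^(l)

    def forward(self, h: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # Symmetric normalization D^{-1/2} A D^{-1/2}, with D_ii = sum_j A_ij.
        # Degrees are clamped because Pearson-based adjacencies can be negative.
        deg = adj.sum(dim=-1).clamp(min=1e-12)
        d_inv_sqrt = deg.pow(-0.5)
        adj_norm = d_inv_sqrt.unsqueeze(-1) * adj * d_inv_sqrt.unsqueeze(-2)
        return torch.relu(adj_norm @ self.weight(h))          # sigma = ReLU


# Example: a batch of 8 graphs with N = 116 nodes and D = 116 input features.
conv = GraphConv(116, 32)
out = conv(torch.randn(8, 116, 116), torch.rand(8, 116, 116))  # (8, 116, 32)
```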

3.2.2. Node Attention Mechanism

For each graph, the node/ROI features learned by the GCN module contribute differently to the related brain disease. Therefore, we propose a node attention mechanism module to automatically mine the weights of nodes on the graph (see Figure 2 for details). After the node representation learning module, we obtain new embedded representations of the source and target domains, namely $H_s \in \mathbb{R}^{N \times D}$ from the source graphs and $H_t \in \mathbb{R}^{N \times D}$ from the target graphs. Here, $N = N_s = N_t$, i.e., the brains of subjects from different domains are divided into the same number of functional regions, and $D = D_s = D_t$.
Then, max pooling is performed on $H_k$ to generate a comprehensive node representation $H_{max}^k$. This representation is fed into two fully connected layers to automatically generate the node attention scores $H_{att}^k$, defined as:

$$H_{att}^k = \sigma\left(W^k H_{max}^k + B^k\right) \quad (4)$$

where $B^k$ is the bias term and the hidden dimension of both fully connected layers is $N$. The sigmoid function is used as the nonlinear activation to constrain each element to the range $[0, 1]$. ROIs that contribute more to the model's predictions are assigned larger weights, while brain regions that contribute less are assigned smaller weights.
Therefore, the final node representation is:

$$Z_k = H_{att}^k \odot H_k + H_k \quad (5)$$

where $\odot$ denotes the element-wise (Hadamard) product, which reweights the features of each extracted node.
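Putting Equations (4) and (5) together, a minimal PyTorch sketch of the node attention module might look as follows; the per-node scalar score and the ReLU between the two fully connected layers are assumptions based on the description above and Figure 2.

```python
# A minimal PyTorch sketch of Equations (4)-(5). The per-node scalar score
# and the ReLU between the two fully connected layers are assumptions based
# on the description above and Figure 2.
import torch
import torch.nn as nn


class NodeAttention(nn.Module):
    def __init__(self, n_nodes: int):
        super().__init__()
        self.fc = nn.Sequential(            # two fully connected layers
            nn.Linear(n_nodes, n_nodes),
            nn.ReLU(),
            nn.Linear(n_nodes, n_nodes),
            nn.Sigmoid(),                   # scores constrained to [0, 1]
        )

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        # h: (M, N, D) node embeddings from the GCN feature extractor.
        h_max = h.max(dim=-1).values        # max pooling over features -> (M, N)
        att = self.fc(h_max).unsqueeze(-1)  # attention scores H_att -> (M, N, 1)
        return att * h + h                  # Z = H_att (Hadamard) H + H, Eq. (5)
```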

3.2.3. Domain Adaptation Module

For cross-domain classification, we jointly optimize three losses to reduce domain shift. Graph-level classification tasks typically use a readout operation to extract graph representations [37,38], which can discard important information and negatively affect feature alignment between domains. Therefore, we use a mean absolute error (MAE) loss ($L_M$) and the CORAL loss [39] ($L_A$) to align features before and after the readout operation, respectively.
MAE Loss $L_M$: We assume that, for the same disease and the same classification task, the node representations of graphs from different domains should exhibit a certain consistency:
$$L_M(Z_s, Z_t) = \frac{1}{N \times M \times D} \sum_{i=1}^{N} \left| Z_i^s - Z_i^t \right| \quad (6)$$

where $M = M_s = M_t$ is the number of samples from the source or target domain.
CORAL Loss $L_A$: First, graph-level representations are read out from the node features using average pooling and max pooling:

$$G_k = \frac{1}{N} \sum_{i=1}^{N} Z_i^k \;\Big\Vert\; \max_{i=1,\dots,N} Z_i^k \quad (7)$$

where $\Vert$ denotes concatenation.
The CORAL loss is then defined as the covariance distance between the source and target domain features:

$$L_A(G_s, G_t) = \frac{1}{4D^2} \left\Vert C_s - C_t \right\Vert_F^2 \quad (8)$$

where $\Vert \cdot \Vert_F$ denotes the Frobenius norm.
The covariance of the source domain ($C_s$) or target domain ($C_t$) is:

$$C_k = \frac{1}{M - 1} \left( G_k^\top G_k - \frac{(\mathbf{1}^\top G_k)^\top (\mathbf{1}^\top G_k)}{M} \right) \quad (9)$$

where $\mathbf{1}$ is a column vector with all elements equal to 1.
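The readout in Equation (7) and the two alignment losses in Equations (6), (8) and (9) translate directly into code; the sketch below is written from these formulas, not from the authors' implementation (here `d` is the dimension of the readout features).

```python
# A minimal sketch of the readout (Equation (7)) and the two alignment losses
# (Equations (6) and (8)-(9)), written from the formulas above rather than
# the authors' code. Here `d` is the readout feature dimension.
import torch


def readout(z: torch.Tensor) -> torch.Tensor:
    """Concatenate mean- and max-pooled node features: (M, N, D) -> (M, 2D)."""
    return torch.cat([z.mean(dim=1), z.max(dim=1).values], dim=-1)


def mae_loss(z_s: torch.Tensor, z_t: torch.Tensor) -> torch.Tensor:
    """Equation (6): mean absolute error between node representations."""
    return (z_s - z_t).abs().mean()


def coral_loss(g_s: torch.Tensor, g_t: torch.Tensor) -> torch.Tensor:
    """Equations (8)-(9): squared Frobenius distance between covariances."""
    d = g_s.size(1)

    def cov(g: torch.Tensor) -> torch.Tensor:   # Equation (9)
        m = g.size(0)
        ones = torch.ones(1, m, device=g.device)
        return (g.t() @ g - (ones @ g).t() @ (ones @ g) / m) / (m - 1)

    return ((cov(g_s) - cov(g_t)) ** 2).sum() / (4 * d ** 2)
```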
Cross Entropy Loss $L_C$: The cross entropy loss serves as the source domain classifier loss; its objective is to minimize the classification loss on the fully labeled source domain data:

$$L_C\left(f_C(G_s), Y_s\right) = -\frac{1}{M_s} \sum_{i=1}^{M_s} Y_i^s \log\left(\hat{Y}_i^s\right) \quad (10)$$

where $Y_i^s$ is the true category label of the $i$-th source domain graph and $\hat{Y}_i^s$ is its predicted label. Two fully connected layers $f_C$ serve as the label classifier for the source domain.
Finally, the overall objective function of A2GCN is:

$$L = L_C + \gamma_1 L_M + \gamma_2 L_A \quad (11)$$

where $\gamma_1$ and $\gamma_2$ are hyperparameters balancing the contributions of $L_C$, $L_M$, and $L_A$.

3.3. Implementation

The proposed A2GCN model is implemented on the PyTorch platform. For a fair comparison, we use the same number of epochs and the same learning rate for all domain adaptation learning tasks: the number of epochs is set to 150, the learning rate to 0.0001, and Adam is used as the optimizer. The A2GCN is composed of two graph convolution layers and two fully connected layers, with output feature dimensions set to 32→32→64→2. The convolution layers use ReLU activation, and the dropout rate of the fully connected layers is 0.4. To extract more discriminative pathological features and establish a well-performing cross-domain classification model, we divide model training into two stages. Following Equation (11), we first pre-train the node representation learning and attention mechanism modules for 50 epochs with $L_C$ set to 0 and both hyperparameters $\gamma_1$ and $\gamma_2$ set to 1. In the second stage, these modules and the category classifier are jointly trained for another 100 epochs via Equation (11), with both balance parameters $\gamma_1$ and $\gamma_2$ set to 0.5.
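The two-stage schedule above can be sketched as a training loop; `model.encode`, `model.classify`, `source_loader`, and `target_loader` are assumed names, and the loss functions are those sketched in Section 3.2.3.

```python
# A minimal sketch of the two-stage schedule described above. `model`,
# `source_loader`, and `target_loader` are assumed to exist; `readout`,
# `mae_loss`, and `coral_loss` are the functions sketched in Section 3.2.3.
import torch
import torch.nn.functional as F

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

for epoch in range(150):
    # Stage 1 (epochs 0-49): alignment only, L_C disabled, gamma_1 = gamma_2 = 1.
    # Stage 2 (epochs 50-149): joint training with gamma_1 = gamma_2 = 0.5.
    stage2 = epoch >= 50
    gamma = 0.5 if stage2 else 1.0

    for (x_s, a_s, y_s), (x_t, a_t) in zip(source_loader, target_loader):
        z_s = model.encode(x_s, a_s)   # GCN + node attention (assumed method)
        z_t = model.encode(x_t, a_t)
        g_s, g_t = readout(z_s), readout(z_t)

        loss = gamma * (mae_loss(z_s, z_t) + coral_loss(g_s, g_t))   # Eq. (11)
        if stage2:
            loss = loss + F.cross_entropy(model.classify(g_s), y_s)  # L_C

        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```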

4. Experiments

4.1. Data

To evaluate the effectiveness of our proposed approach, we use data from the NYU, UM, and UCLA sites of the public Autism Brain Imaging Data Exchange (ABIDE) repository (http://fcon_1000.projects.nitrc.org/indi/abide/ (accessed on 20 September 2022)) to validate our model. Data from these three sites were also used by Wang et al. [40]. Specifically, the NYU site includes 164 subjects (71 ASD, 93 HC), the UM site includes 113 subjects (48 ASD, 65 HC), and the UCLA site includes 74 subjects (36 ASD, 38 HC). We build the graphs from these three sites. The phenotypic information of the subjects involved in this study is shown in Table 2. The rs-fMRI data are from the Preprocessed Connectomes Project initiative (http://preprocessed-connectomes-project.org (accessed on 20 September 2022)).
The rs-fMRI data collected at the different sites were preprocessed with a widely accepted pipeline, the Configurable Pipeline for the Analysis of Connectomes (C-PAC) [41]. Preprocessing mainly includes: (1) slice timing and head motion correction, (2) nuisance signal regression (ventricular/cerebrospinal fluid (CSF) and white matter signals, etc.), (3) spatial normalization to the Montreal Neurological Institute (MNI) template [42], and (4) temporal filtering. We then use the classical AAL atlas to divide each subject's brain into 116 functional regions and extract their mean time series. Finally, a symmetric 116 × 116 functional connectivity matrix is generated for each subject from the extracted signals (according to Equation (2)), where each element is the PC between a pair of ROIs.

4.2. Experimental Settings

In this study, we establish classification models for four cross-site prediction tasks: NYU→UM, NYU→UCLA, UM→NYU, and UM→UCLA. The dataset before the arrow is the source domain, and the dataset after the arrow is the target domain. All source domain samples carry complete category labels, while target domain subjects have no label information. Given the limited number of samples, we use all source- and target-domain samples for training and evaluate on all target-domain subjects. To make the results more reliable, we repeat the training process 10 times and report the mean and standard deviation for each algorithm.
We use seven metrics to evaluate model performance: Accuracy (ACC), Precision (Pre), Recall (Rec), F1-Score (F1), Balanced accuracy (BAC), Negative predictive value (NPV), and Area under curve (AUC). Higher values indicate better classification performance. With TN, TP, FN, and FP denoting True Negatives, True Positives, False Negatives, and False Positives, respectively, the metrics are computed as: $\mathrm{ACC} = \frac{TP + TN}{TP + FN + FP + TN}$, $\mathrm{Pre} = \frac{TP}{TP + FP}$, $\mathrm{Rec} = \frac{TP}{TP + FN}$, $\mathrm{NPV} = \frac{TN}{TN + FN}$, $\mathrm{BAC} = \frac{TP}{2(TP + FN)} + \frac{TN}{2(TN + FP)}$, and $\mathrm{F1} = \frac{2\,\mathrm{Pre} \times \mathrm{Rec}}{\mathrm{Pre} + \mathrm{Rec}}$.
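For reference, these confusion-matrix metrics are straightforward to compute (AUC requires prediction scores and is omitted here); the counts in the usage line are illustrative only.

```python
# A sketch of the confusion-matrix metrics defined above; the counts passed
# in the usage line are illustrative only.
def classification_metrics(tp: int, tn: int, fp: int, fn: int) -> dict:
    pre = tp / (tp + fp)
    rec = tp / (tp + fn)
    return {
        "ACC": (tp + tn) / (tp + tn + fp + fn),
        "Pre": pre,
        "Rec": rec,
        "F1": 2 * pre * rec / (pre + rec),
        "BAC": tp / (2 * (tp + fn)) + tn / (2 * (tn + fp)),
        "NPV": tn / (tn + fn),
    }


print(classification_metrics(tp=40, tn=50, fp=15, fn=12))
```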

4.3. Competing Methods

In this work, we compare the proposed A2GCN with five single-domain models: (1) Degree centrality (DC), (2) Feature fusion using betweenness centrality and degree centrality (BD), (3) Feature fusion using betweenness centrality, degree centrality, and closeness centrality (BDC), (4) Deep neural networks (DNN), and (5) Graph convolutional networks (GCN). At the same time, we compare A2GCN with three state-of-the-art domain adaptation methods: (1) Cross-domain model based on multi-layer perceptron (DNNC), (2) Maximum Mean Discrepancy (MMD), and (3) Domain Adversarial Neural Network (DANN). More details of these competing methods are introduced below.
(1)
DC: This method uses the degree of nodes in the FCN as the features of subjects. Specifically, according to Equation (2), we generate a 116 × 116 FCN for each subject, where each element is the correlation coefficient between a node pair computed by PC. The degree centrality (DC) of each node in the FCN is calculated first, and the resulting 116 × 1-dimensional feature vector of each subject is used as the input of an SVM classifier.
(2)
BD: This method combines the betweenness centrality (BC) and DC of nodes as the features of subjects. Based on Equation (2), the FCN of each subject is obtained, and the BC and DC of its nodes are calculated. The BC and DC values are concatenated row-wise into a 232 × 1-dimensional vector, which is used as the input of the SVM.
(3)
BDC: To mitigate the loss of information or noise pollution caused by manually defined features, we further calculate the BC, DC, and closeness centrality (CC) of each node of each subject's FCN. The BDC model sequentially concatenates the DC, BC, and CC values of each subject into a 348 × 1-dimensional feature representation used as the input of the SVM classifier.
(4)
DNN: Following common practice, we flatten the upper triangle of each subject's FCN into a vector. To avoid the curse of dimensionality, principal component analysis (PCA) reduces the features to 64 dimensions. The reduced features are then used as the input of the DNN model, which is composed of two fully connected layers with output dimensions 16→2.
(5)
GCN: A GCN can exploit the topological structure of the graph to deeply mine the latent information of nodes, and our A2GCN is inspired by it. In fact, if we set $\gamma_1 = \gamma_2 = 0$, A2GCN degenerates to a plain GCN. Similar to our A2GCN, we first construct the source and target graphs from the subjects' FCNs. Then, based on the source graphs, the cross entropy loss is optimized to train the classification model. Finally, the trained GCN model is applied directly to the target graphs to make predictions. The GCN model consists of two convolutional layers and two fully connected layers with output dimensions 32→32→64→2.
(6)
DNNC: We replace the GCN feature extractor of our A2GCN with a multi-layer perceptron (MLP) to construct a simple cross-domain classification model. The model inputs are the same as for the DNN model above, and the output dimensions of the network are set to 32→2. The CORAL loss, defined on the covariances of the source- and target-domain sample features, is added to minimize domain shift without introducing additional parameters. This method is simple and efficient, and the CORAL loss is also one of the losses used in our A2GCN.
(7)
MMD: The Maximum Mean Discrepancy (MMD) method aims to reduce distribution differences between domains via the MMD criterion. This deep transfer model uses the GCN as a feature extractor, with the MAE and CORAL losses of our model replaced by the MMD loss [9]; a two-layer MLP then serves as the category classifier. The numbers of neurons in the convolutional and fully connected layers are consistent with our A2GCN. The reference code (https://github.com/jindongwang/transferlearning (accessed on 20 September 2022)) is publicly available.
(8)
DANN: The Domain Adversarial Neural Network (DANN) [43] is a domain adaptation method based on adversarial learning. DANN uses a gradient reversal layer (GRL), defined as $Q_\lambda(x) = x$ in the forward pass with a reversed gradient $\frac{\partial Q_\lambda}{\partial x} = -\lambda I$ in the backward pass, to train a domain classifier (a minimal GRL sketch is given after this list). The adaptation parameter $\lambda$ of the GRL follows [43,44]. Here, $x$ denotes the extracted graph representation. A two-layer fully connected network serves as the domain classifier of DANN to establish the adversarial loss; its hidden dimensions are set to 64→2, the dropout rate is 0.4, and ReLU provides the nonlinear activation. A two-layer MLP is then used as the category classifier. The dimensions of the convolutional and fully connected layers are consistent with A2GCN.
Note that the three conventional machine learning methods (i.e., DC, BD, and BDC) and two deep learning methods (i.e., DNN and GCN) are single-domain approaches, while the three deep learning methods (i.e., DNNC, MMD, and DANN) are state-of-the-art domain adaptation methods for cross-domain classification.
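For clarity, the gradient reversal layer used by DANN can be sketched in PyTorch as a custom autograd function: identity in the forward pass, gradient scaled by $-\lambda$ in the backward pass.

```python
# A minimal PyTorch sketch of the gradient reversal layer (GRL) used by DANN:
# identity in the forward pass, gradient scaled by -lambda in the backward pass.
import torch


class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x: torch.Tensor, lam: float) -> torch.Tensor:
        ctx.lam = lam
        return x.view_as(x)                  # Q_lambda(x) = x

    @staticmethod
    def backward(ctx, grad_output: torch.Tensor):
        return -ctx.lam * grad_output, None  # dQ_lambda/dx = -lambda * I


def grad_reverse(x: torch.Tensor, lam: float = 1.0) -> torch.Tensor:
    """Insert between the feature extractor and the domain classifier."""
    return GradReverse.apply(x, lam)
```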

4.4. Results

The quantitative results of A2GCN and the competing methods for ASD vs. HC classification are reported in Table 3. We observe the following interesting findings.
(1)
The four cross-domain classification models (i.e., DNNC, MMD, DANN, and A2GCN) achieve better results in most cases than the single-domain classification models (i.e., DC, BD, BDC, DNN, and GCN). This means that the domain adaptation module helps to enhance classification performance, likely because the model learns feature representations that transfer across sites.
(2)
Graph-based methods (i.e., GCN, MMD, DANN, and A2GCN) usually produce better classification results than methods based on manually defined node features (i.e., DC, BD, and BDC) or network embeddings (i.e., DNN and DNNC). The traditional methods consider only node-level characteristics, whereas methods that use a GCN as the feature extractor can update and aggregate node features end-to-end with the help of the underlying topology of the FCNs, learning more discriminative node representations that may be more beneficial for auxiliary ASD diagnosis.
(3)
The proposed A2GCN consistently outperforms all competing methods, indicating that it achieves effective domain adaptation and reduces data distribution differences, thereby improving the robustness of the model.
(4)
Compared with the three advanced cross-domain methods (i.e., DNNC, MMD, and DANN), our A2GCN is competitive across the various domain adaptation tasks. This may be because our method adds a node attention mechanism that exploits the different contributions of brain regions, and because it adopts the MAE and CORAL losses to align the domains step by step. These operations can partially alleviate the negative effects of noisy regions.

4.5. Ablation Study

The proposed A2GCN contains two key components: the node attention mechanism module and the domain adaptation module. To evaluate their contributions, we compare the proposed A2GCN with three variants:
(1)
A2GCN_A: Similar to A2GCN, the source and target graphs are first constructed from the subjects' FCNs, and node representations on the source graphs are learned with the GCN. The node attention mechanism of Section 3.2.2 is added to assign different weights to different nodes/brain regions of the source graphs, and cross entropy is used as the classification loss. Finally, the model trained on the source domain is applied directly to predict the target domain graphs.
(2)
A2GCN_M: The model first constructs the source and target graphs from the subjects' FCNs. Node features on both the source and target graphs are learned simultaneously with the GCN according to the node representation learning module in Section 3.2.1. The node attention mechanism module of Section 3.2.2 is then added, and the weighted node features are used to compute the MAE loss between domains (part of the domain adaptation module). Finally, cross entropy is used as the classification loss.
(3)
A2GCN_C: The model uses the FCNs to construct the source and target graphs. Like A2GCN, it learns node features of the two domains with the GCN according to the node representation learning module in Section 3.2.1. After the readout operation, the CORAL loss (part of the domain adaptation module) is computed on the extracted graph representation vectors, and cross entropy is used as the classification loss on the source domain.
Figure 3 reports the corresponding ACC and AUC values. We can see that the performance of the three variants A2GCN_A (without the domain adaptation module), A2GCN_M (with the attention mechanism module and part of the domain adaptation module), and A2GCN_C (without the node attention mechanism module) degrades significantly on the corresponding transfer learning tasks. In particular, A2GCN_A performs worst in most cases, presumably because the attention mechanism helps extract more discriminative features. The results also show that using the MAE and CORAL losses to align the learned features step by step during training can reduce the information loss caused by the readout pooling operations, thereby significantly improving the robustness and transfer performance of A2GCN. More results on the influence of parameters and model pre-training can be found in the Supplementary Materials.

5. Discussion

5.1. Visualization of Data Distribution

To visually demonstrate the features learned by the proposed A2GCN, we use the t-SNE [45] tool to visualize the data distributions of different imaging sites before and after domain adaptation. In Figure 4, the blue and red dots represent the source and target domains, respectively. To visualize inter-site heterogeneity before domain adaptation, we flattened the upper triangle of each sample's FCN matrix into a vector and reduced it to 64 dimensions with PCA as the original sample representation. From Figure 4a, we observe a clear domain shift between the source and target distributions. We then use t-SNE to visualize the feature distributions of the two domains after the GCN feature extractor in the different cross-site classification tasks (through A2GCN), with results reported in Figure 4b, where the red and blue dots are closely clustered together. This means that the node representations of the two domains learned by our method have similar distributions and the domain heterogeneity has been substantially reduced. We also calculated the Frobenius norm of the covariance (CF) between samples in the source domain and the target domain, which measures the difference in data distribution between sites; the CF between sites is markedly reduced after domain adaptation. These results show that A2GCN can effectively extract transferable features and reduce domain shift.
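Both diagnostics are easy to reproduce; the sketch below assumes CF denotes the Frobenius norm of the difference between the source and target feature covariances, and uses randomly generated features purely for illustration.

```python
# A sketch of the two diagnostics used in this subsection, with randomly
# generated features purely for illustration. CF is assumed to denote the
# Frobenius norm of the difference between source and target covariances.
import numpy as np
from sklearn.manifold import TSNE


def cf_distance(feats_s: np.ndarray, feats_t: np.ndarray) -> float:
    """Frobenius norm between source and target feature covariances."""
    return float(np.linalg.norm(np.cov(feats_s.T) - np.cov(feats_t.T), "fro"))


rng = np.random.default_rng(0)
feats_s = rng.normal(0.0, 1.0, (164, 64))   # e.g., 164 NYU subjects, 64-D features
feats_t = rng.normal(0.5, 1.0, (113, 64))   # e.g., 113 UM subjects, shifted domain
emb = TSNE(n_components=2, perplexity=30).fit_transform(np.vstack([feats_s, feats_t]))
print(f"CF = {cf_distance(feats_s, feats_t):.3f}")  # smaller = better aligned
```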

5.2. Most Informative Brain Regions

One of the main focuses of this work is to use interpretable deep learning algorithms to discover underlying differences between ASD and HC subjects. An interesting question is which brain regions are most informative for ASD detection. In the "NYU→UM" task, we randomly select 10 subjects from the UM site, extract their features after the attention mechanism module, select the 19 brain regions with the strongest correlations, and visualize them with the BrainNet Viewer [46], with results shown in Figure 5. In Figure 5, the colors of brain regions are randomly assigned, and the stick-like connections between brain regions indicate strong FC between them. For ASD vs. HC classification, the most informative brain regions include the hippocampus, parahippocampal gyrus, putamen of the lentiform nucleus, and the vicinity of the thalamus, which is consistent with previous studies [47,48]. This validates the potential of our model for discovering rs-fMRI biomarkers for ASD identification, thus helping to improve the interpretability of learning algorithms in automated brain disease detection.

5.3. Limitations and Future Work

Although the proposed A2GCN achieves good results in ASD prediction, several challenges remain for future work. First, our current work considers knowledge transfer only between a single source domain and a single target domain. It would be interesting to explore features shared across multiple source domains to reduce data heterogeneity and further improve learning performance on the target domain. Second, the training sample size is relatively small. We plan to add unlabeled samples from other public datasets to pre-train the proposed network in a semi-supervised manner, aiming to further improve its generalization capability [49].

6. Conclusions

In this paper, we propose a multi-site unsupervised rs-fMRI domain adaptation framework (A2GCN) with an attention mechanism for ASD diagnosis. The framework automatically extracts rs-fMRI features from brain FCNs with the help of a GCN. The attention mechanism captures the contributions of different brain regions to automated brain disease detection and yields interpretable region-level features. In addition, our method employs mean absolute error and covariance-based constraints to alleviate data distribution differences among imaging sites. We evaluated the proposed method on rs-fMRI data from a real multi-site dataset (ABIDE). Experimental results show that A2GCN offers significant advantages over several state-of-the-art methods.

Supplementary Materials

The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/brainsci12101413/s1, Figure S1: Classification performance by the proposed model based on different parametric values. The abscissa represents the ratio of MAE loss to CORAL loss (γ1:γ2) during model training; Figure S2: Impact of pre-training times on model classification results. The abscissa represents the epoch values set during the pre-training process.

Author Contributions

Conceptualization, M.L.; methodology, Y.C.; software, Y.C.; investigation, Y.C.; writing—original draft preparation, Y.C.; writing—review and editing, L.Q. and H.R.; supervision, M.L.; project administration, M.L. and L.Q. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

All data used in this study are available from the corresponding author on reasonable request.

Acknowledgments

Y.C., H.R., and L.Q. were partly supported by the National Natural Science Foundation of China (Nos. 62176112, 61976110 and 11931008), the Taishan Scholar Program of Shandong Province, and the Natural Science Foundation of Shandong Province (No. ZR202102270451).

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Buckner, R.L.; Krienen, F.M.; Yeo, B.T. Opportunities and limitations of intrinsic functional connectivity MRI. Nat. Neurosci. 2013, 16, 832–837.
2. McCarty, P.J.; Pines, A.R.; Sussman, B.L.; Wyckoff, S.N.; Jensen, A.; Bunch, R.; Boerwinkle, V.L.; Frye, R.E. Resting State Functional Magnetic Resonance Imaging Elucidates Neurotransmitter Deficiency in Autism Spectrum Disorder. J. Pers. Med. 2021, 11, 969.
3. Subah, F.Z.; Deb, K.; Dhar, P.K.; Koshiba, T. A deep learning approach to predict Autism Spectrum Disorder using multisite resting-state fMRI. Appl. Sci. 2021, 11, 3636.
4. Walsh, M.J.; Wallace, G.L.; Gallegos, S.M.; Braden, B.B. Brain-based sex differences in autism spectrum disorder across the lifespan: A systematic review of structural MRI, fMRI, and DTI findings. NeuroImage Clin. 2021, 31, 102719.
5. Shrivastava, S.; Mishra, U.; Singh, N.; Chandra, A.; Verma, S. Control or autism-classification using convolutional neural networks on functional MRI. In Proceedings of the 2020 11th International Conference on Computing, Communication and Networking Technologies (ICCCNT), Kharagpur, India, 1–3 July 2020; pp. 1–6.
6. Niu, K.; Guo, J.; Pan, Y.; Gao, X.; Peng, X.; Li, N.; Li, H. Multichannel deep attention neural networks for the classification of Autism Spectrum Disorder using neuroimaging and personal characteristic data. Complexity 2020, 2020.
7. Yamashita, A.; Yahata, N.; Itahashi, T.; Lisi, G.; Yamada, T.; Ichikawa, N.; Takamura, M.; Yoshihara, Y.; Kunimatsu, A.; Okada, N.; et al. Harmonization of resting-state functional MRI data across multiple imaging sites via the separation of site differences into sampling bias and measurement bias. PLoS Biol. 2019, 17, e3000042.
8. Lee, J.; Kang, E.; Jeon, E.; Suk, H.I. Meta-modulation Network for Domain Generalization in Multi-site fMRI Classification. In Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention, Virtual Event, 27 September–1 October 2021; Springer: Berlin, Germany, 2021; pp. 500–509.
9. Zhang, Y.; Liu, T.; Long, M.; Jordan, M. Bridging theory and algorithm for domain adaptation. In Proceedings of the International Conference on Machine Learning (PMLR), Long Beach, CA, USA, 9–15 June 2019; pp. 7404–7413.
10. Farahani, A.; Voghoei, S.; Rasheed, K.; Arabnia, H.R. A brief review of domain adaptation. Adv. Data Sci. Inf. Eng. 2021, 877–894.
11. You, K.; Long, M.; Cao, Z.; Wang, J.; Jordan, M.I. Universal domain adaptation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 2720–2729.
12. Jiang, X.; Zhang, L.; Qiao, L.; Shen, D. Estimating functional connectivity networks via low-rank tensor approximation with applications to MCI identification. IEEE Trans. Biomed. Eng. 2019, 67, 1912–1920.
13. Xing, X.; Li, Q.; Wei, H.; Zhang, M.; Zhan, Y.; Zhou, X.S.; Xue, Z.; Shi, F. Dynamic spectral graph convolution networks with assistant task training for early MCI diagnosis. In Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention, Shenzhen, China, 13–17 October 2019; Springer: Berlin, Germany, 2019; pp. 639–646.
14. Jie, B.; Wee, C.Y.; Shen, D.; Zhang, D. Hyper-connectivity of functional networks for brain disease diagnosis. Med. Image Anal. 2016, 32, 84–100.
15. Zhang, Y.; Jiang, X.; Qiao, L.; Liu, M. Modularity-Guided Functional Brain Network Analysis for Early-Stage Dementia Identification. Front. Neurosci. 2021, 15, 956.
16. Zhang, D.; Huang, J.; Jie, B.; Du, J.; Tu, L.; Liu, M. Ordinal pattern: A new descriptor for brain connectivity networks. IEEE Trans. Med. Imaging 2018, 37, 1711–1722.
17. Niepert, M.; Ahmed, M.; Kutzkov, K. Learning convolutional neural networks for graphs. In Proceedings of the International Conference on Machine Learning (PMLR), New York, NY, USA, 20–22 June 2016; pp. 2014–2023.
18. Kipf, T.N.; Welling, M. Semi-supervised classification with graph convolutional networks. arXiv 2016, arXiv:1609.02907.
19. Anirudh, R.; Thiagarajan, J.J. Bootstrapping graph convolutional neural networks for Autism spectrum disorder classification. In Proceedings of the ICASSP 2019-2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Brighton, UK, 12–17 May 2019; pp. 3197–3201.
20. Cao, M.; Yang, M.; Qin, C.; Zhu, X.; Chen, Y.; Wang, J.; Liu, T. Using DeepGCN to identify the Autism spectrum disorder from multi-site resting-state data. Biomed. Signal Process. Control 2021, 70, 103015.
21. Yu, S.; Wang, S.; Xiao, X.; Cao, J.; Yue, G.; Liu, D.; Wang, T.; Xu, Y.; Lei, B. Multi-scale enhanced graph convolutional network for early mild cognitive impairment detection. In Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention, Lima, Peru, 4–8 October 2020; Springer: Berlin, Germany, 2020; pp. 228–237.
22. Parisot, S.; Ktena, S.I.; Ferrante, E.; Lee, M.; Guerrero, R.; Glocker, B.; Rueckert, D. Disease Prediction Using Graph Convolutional Networks: Application to Autism Spectrum Disorder and Alzheimer's Disease. Med. Image Anal. 2018, 48, 117–130.
23. Di Martino, A.; Yan, C.G.; Li, Q.; Denio, E.; Castellanos, F.X.; Alaerts, K.; Anderson, J.S.; Assaf, M.; Bookheimer, S.Y.; Dapretto, M.; et al. The Autism brain imaging data exchange: Towards a large-scale evaluation of the intrinsic brain architecture in Autism. Mol. Psychiatry 2014, 19, 659–667.
24. Abu-El-Haija, S.; Kapoor, A.; Perozzi, B.; Lee, J. N-GCN: Multi-scale graph convolution for semi-supervised node classification. In Proceedings of the Uncertainty in Artificial Intelligence (PMLR), Virtual, 3–6 August 2020; pp. 841–851.
25. Zhang, M.; Cui, Z.; Neumann, M.; Chen, Y. An end-to-end deep learning architecture for graph classification. In Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence, New Orleans, LA, USA, 2–7 February 2018.
26. Chen, Y.; Ma, G.; Yuan, C.; Li, B.; Zhang, H.; Wang, F.; Hu, W. Graph convolutional network with structure pooling and joint-wise channel attention for action recognition. Pattern Recognit. 2020, 103, 107321.
27. Ktena, S.I.; Parisot, S.; Ferrante, E.; Rajchl, M.; Lee, M.; Glocker, B.; Rueckert, D. Metric learning with spectral graph convolutions on brain connectivity networks. NeuroImage 2018, 169, 431–442.
28. Wang, L.; Li, K.; Hu, X.P. Graph convolutional network for fMRI analysis based on connectivity neighborhood. Netw. Neurosci. 2021, 5, 83–95.
29. Yao, D.; Sui, J.; Yang, E.; Yap, P.T.; Shen, D.; Liu, M. Temporal-adaptive graph convolutional network for automated identification of major depressive disorder using resting-state fMRI. In Proceedings of the International Workshop on Machine Learning in Medical Imaging, Lima, Peru, 4 October 2020; Springer: Berlin, Germany, 2020; pp. 1–10.
30. Gadgil, S.; Zhao, Q.; Pfefferbaum, A.; Sullivan, E.V.; Adeli, E.; Pohl, K.M. Spatio-temporal graph convolution for resting-state fMRI analysis. In Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention, Lima, Peru, 4–8 October 2020; Springer: Berlin, Germany, 2020; pp. 528–538.
31. Csurka, G. A comprehensive survey on domain adaptation for visual applications. Domain Adapt. Comput. Vis. Appl. 2017, 1–35.
32. Guan, H.; Liu, Y.; Yang, E.; Yap, P.T.; Shen, D.; Liu, M. Multi-site MRI harmonization via attention-guided deep domain adaptation for brain disorder identification. Med. Image Anal. 2021, 71, 102076.
33. Guan, H.; Liu, M. Domain adaptation for medical image analysis: A survey. IEEE Trans. Biomed. Eng. 2021, 69, 1173–1185.
34. Wang, M.; Deng, W. Deep visual domain adaptation: A survey. Neurocomputing 2018, 312, 135–153.
35. Ingalhalikar, M.; Shinde, S.; Karmarkar, A.; Rajan, A.; Rangaprakash, D.; Deshpande, G. Functional connectivity-based prediction of Autism on site harmonized ABIDE dataset. IEEE Trans. Biomed. Eng. 2021, 68, 3628–3637.
36. Zhang, J.; Liu, M.; Pan, Y.; Shen, D. Unsupervised conditional consensus adversarial network for brain disease identification with structural MRI. In Proceedings of the International Workshop on Machine Learning in Medical Imaging, Shenzhen, China, 13–17 October 2019; Springer: Berlin, Germany, 2019; pp. 391–399.
37. Cangea, C.; Veličković, P.; Jovanović, N.; Kipf, T.; Liò, P. Towards sparse hierarchical graph classifiers. arXiv 2018, arXiv:1811.01287.
38. Lee, J.; Lee, I.; Kang, J. Self-attention graph pooling. In Proceedings of the International Conference on Machine Learning (PMLR), Long Beach, CA, USA, 9–15 June 2019; pp. 3734–3743.
39. Sun, B.; Saenko, K. Deep CORAL: Correlation alignment for deep domain adaptation. In Proceedings of the European Conference on Computer Vision, Amsterdam, The Netherlands, 11–14 October 2016; Springer: Berlin, Germany, 2016; pp. 443–450.
40. Wang, M.; Zhang, D.; Huang, J.; Yap, P.T.; Shen, D.; Liu, M. Identifying Autism Spectrum Disorder with multi-site fMRI via low-rank domain adaptation. IEEE Trans. Med. Imaging 2019, 39, 644–655.
41. Craddock, C.; Sikka, S.; Cheung, B.; Khanuja, R.; Ghosh, S.S.; Yan, C.; Li, Q.; Lurie, D.; Vogelstein, J.; Burns, R.; et al. Towards automated analysis of connectomes: The configurable pipeline for the analysis of connectomes (C-PAC). Front. Neuroinform. 2013, 7, 42.
42. Tzourio-Mazoyer, N.; Landeau, B.; Papathanassiou, D.; Crivello, F.; Etard, O.; Delcroix, N.; Mazoyer, B.; Joliot, M. Automated anatomical labeling of activations in SPM using a macroscopic anatomical parcellation of the MNI MRI single-subject brain. Neuroimage 2002, 15, 273–289.
43. Ganin, Y.; Ustinova, E.; Ajakan, H.; Germain, P.; Larochelle, H.; Laviolette, F.; Marchand, M.; Lempitsky, V. Domain-adversarial training of neural networks. J. Mach. Learn. Res. 2016, 17, 2096–2030.
44. Wu, M.; Pan, S.; Zhou, C.; Chang, X.; Zhu, X. Unsupervised domain adaptive graph convolutional networks. In Proceedings of the Web Conference 2020, Taipei, Taiwan, 20–24 April 2020; pp. 1457–1467.
45. Van der Maaten, L.; Hinton, G. Visualizing data using t-SNE. J. Mach. Learn. Res. 2008, 9, 2579–2605.
46. Xia, M.; Wang, J.; He, Y. BrainNet Viewer: A network visualization tool for human brain connectomics. PLoS ONE 2013, 8, e68910.
47. Sussman, D.; Leung, R.; Vogan, V.; Lee, W.; Trelle, S.; Lin, S.; Cassel, D.; Chakravarty, M.; Lerch, J.; Anagnostou, E.; et al. The Autism puzzle: Diffuse but not pervasive neuroanatomical abnormalities in children with ASD. NeuroImage Clin. 2015, 8, 170–179.
48. Sun, L.; Xue, Y.; Zhang, Y.; Qiao, L.; Zhang, L.; Liu, M. Estimating sparse functional connectivity networks via hyperparameter-free learning model. Artif. Intell. Med. 2021, 111, 102004.
49. He, K.; Fan, H.; Wu, Y.; Xie, S.; Girshick, R. Momentum contrast for unsupervised visual representation learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020; pp. 9729–9738.
Figure 1. Architecture of the proposed multi-site resting-state fMRI adaptation framework (A2GCN) with an attention-guided GCN for brain disorder identification. The A2GCN consists of three components: (1) a GCN that automatically extracts rs-fMRI features from the brain graphs of the source and target domains; (2) an attention mechanism that explores the potential contributions of different brain regions to automated brain disease detection; and (3) a domain adaptation objective (composed of the MAE loss, CORAL loss, and cross entropy loss) that, under mean absolute error and covariance constraints, enables knowledge transfer between domains.
Figure 2. Structure of the node attention mechanism module. $X_i \in \mathbb{R}^N$ and $H_i \in \mathbb{R}^D$ ($i = 1, \dots, N$) are the input and output of the convolutional layers, respectively. After the two graph convolution layers, the output tensor has dimensions $N \times D \times M$. Max pooling generates a global feature descriptor ($H_{max}$) of size $N \times 1 \times M$, which the fully connected layers map into an attention score ($H_{att}$) of the same size. This attention score is multiplied element-wise with the original $N \times D \times M$ tensor ($H = [S_1, S_2, \dots, S_M]$), and the result is added back to $H$, so that each node finally obtains attention-reweighted features. FC: Fully connected layers.
Figure 3. Ablation studies verifying the effect of different components of the proposed model. A2GCN_A (without the domain adaptation module), A2GCN_M (with the attention mechanism module and part of the domain adaptation module), and A2GCN_C (without the node attention mechanism module) are three variants of our model. ACC: Accuracy; AUC: Area under curve.
Figure 4. Visualization of (a) the original data distribution before domain adaptation and (b) the data distribution after adjustment through our proposed domain adaptation model for the ABIDE dataset. The blue dots are from the source domain and the red dots are from the target domain. CF: Frobenius norm of the covariance between the source and target domains.
Figure 5. Visualization of the 19 brain regions derived from 10 randomly selected subjects at the UM site (according to the results of A2GCN on the "NYU→UM" domain adaptation task). Colors of brain regions are randomly assigned for better visualization. The stick-like connections between brain regions indicate strong functional connectivity between them.
Figure 5. Visualization of the 19 brain regions generated by 10 randomly selected subjects from the UM site (according to the results of A2GCN in the domain adaptation task of “NYU→UM”). Colors of brain regions are randomly assigned, just for better visualization. The stick-like connections between brain regions indicate strong functional connectivity between them.
Table 1. Notations and descriptions used in this paper.
| Notation | Description |
| --- | --- |
| G_s = (V_s, A_s, X_s, Y_s) | Source graph |
| G_t = (V_t, A_t, X_t) | Target graph |
| V_s, V_t | Sets of nodes |
| Y_s ∈ R^{M_s} | Source data labels |
| A, A_s, A_t | Adjacency matrices |
| X_s ∈ R^{M_s × D_s} | Source feature matrix |
| X_t ∈ R^{M_t × D_t} | Target feature matrix |
| H_s, H_t | Learned features |
| Z_s, Z_t | Learned features |
| M, M_s, M_t | Numbers of samples |
| N, N_s, N_t | Numbers of nodes on the graph |
| D, D_s, D_t | Feature dimensions |
| f_C | Source domain classifier |
| L_C, L_M, L_A | Loss functions |
| γ_1, γ_2 | Balance parameters |
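To make the notation concrete, a toy container for the two graphs might look as follows; field names and shapes are ours, following Table 1, and the dimensions are placeholders rather than the paper's actual sizes:

```python
from dataclasses import dataclass
from typing import Optional
import numpy as np

@dataclass
class DomainGraph:
    """Illustrative container for the source/target graphs of Table 1."""
    A: np.ndarray            # adjacency matrix, (N, N)
    X: np.ndarray            # feature matrix, (M, D): samples x feature dimension
    Y: Optional[np.ndarray]  # labels, (M,); available for the source graph only

rng = np.random.default_rng(0)
# Toy shapes only: N = 100 nodes, D = 100 features, M_s = 164 / M_t = 113 samples
G_s = DomainGraph(A=rng.random((100, 100)), X=rng.random((164, 100)),
                  Y=rng.integers(0, 2, 164))   # labeled source domain
G_t = DomainGraph(A=rng.random((100, 100)), X=rng.random((113, 100)),
                  Y=None)                      # unlabeled target domain
```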
Table 2. Demographic information of the three sites (NYU, UM, UCLA) of the public ABIDE dataset. Values are reported as mean ± standard deviation. M/F: Male/Female; ASD: Autism Spectrum Disorder; HC: Healthy Controls.
| Site | Category | Gender (M/F) | Age |
| --- | --- | --- | --- |
| NYU | ASD (N = 71) | 66/5 | 17.59 ± 7.84 |
| | HC (N = 93) | 79/14 | 16.49 ± 7.68 |
| UM | ASD (N = 48) | 43/5 | 17.05 ± 8.36 |
| | HC (N = 65) | 56/9 | 17.35 ± 7.12 |
| UCLA | ASD (N = 36) | 28/8 | 16.27 ± 6.48 |
| | HC (N = 38) | 31/7 | 14.65 ± 4.97 |
Table 3. Results of different models in the ASD vs. NC classification task based on rs-fMRI data from the NYU, UM, and UCLA sites. The dataset before the arrow is the source domain; the dataset after the arrow is the target domain to be predicted. Values are reported as mean ± standard deviation, and the best results are shown in bold. DC: Degree centrality; BD: Feature fusion using betweenness centrality and degree centrality; BDC: Feature fusion using betweenness centrality, degree centrality, and closeness centrality; DNN: Deep neural networks; GCN: Graph convolutional networks; DNNC: Cross-domain model based on multi-layer perceptron; MMD: Maximum Mean Discrepancy; DANN: Domain Adversarial Neural Network; NC: Normal controls; ACC: Accuracy; Pre: Precision; Rec: Recall; F1: F1-Score; BAC: Balanced accuracy; NPV: Negative predictive value; AUC: Area under curve.
| Source→Target | Method | ACC (%) | Pre (%) | Rec (%) | F1 (%) | BAC (%) | NPV (%) | AUC (%) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| NYU→UM | DC | 53.54 ± 1.88 | 46.33 ± 0.25 | 54.60 ± 1.89 | 50.55 ± 9.32 | 54.17 ± 1.45 | 62.86 ± 4.04 | 54.60 ± 1.89 |
| | BD | 56.64 ± 1.25 | 49.29 ± 1.01 | 58.17 ± 0.23 | 57.00 ± 0.90 | 58.09 ± 0.52 | 67.06 ± 0.54 | 58.17 ± 0.23 |
| | BDC | 54.43 ± 1.87 | 47.48 ± 1.55 | 56.51 ± 2.12 | 56.17 ± 2.07 | 56.30 ± 2.02 | 65.54 ± 2.69 | 56.51 ± 2.12 |
| | DNN | 58.85 ± 0.62 | 58.67 ± 1.60 | 58.78 ± 1.70 | 58.39 ± 1.17 | 58.78 ± 1.70 | 65.99 ± 2.73 | 51.72 ± 4.19 |
| | GCN | 61.07 ± 1.25 | 60.65 ± 0.95 | 60.84 ± 0.89 | 60.61 ± 1.05 | 60.84 ± 0.89 | 67.49 ± 0.35 | 59.28 ± 0.02 |
| | DNNC | 61.07 ± 1.25 | 61.36 ± 3.49 | 61.11 ± 3.59 | 60.27 ± 2.35 | 61.11 ± 3.59 | 69.10 ± 6.80 | 59.89 ± 9.31 |
| | MMD | 66.82 ± 0.63 | 66.20 ± 0.69 | 66.04 ± 0.46 | 66.09 ± 0.45 | 66.12 ± 0.35 | 71.32 ± 0.16 | 65.77 ± 1.32 |
| | DANN | 66.82 ± 0.63 | 66.72 ± 0.20 | 67.07 ± 0.16 | 66.56 ± 0.45 | 65.19 ± 2.51 | 70.61 ± 5.57 | 64.35 ± 0.84 |
| | A2GCN (Ours) | **72.27 ± 0.51** | **71.94 ± 0.50** | **72.35 ± 0.52** | **71.97 ± 0.49** | **72.35 ± 0.52** | **78.23 ± 0.97** | **70.90 ± 1.53** |
| NYU→UCLA | DC | 58.79 ± 2.86 | 57.51 ± 2.76 | 58.77 ± 2.88 | 57.92 ± 3.33 | 58.78 ± 2.89 | 60.03 ± 3.02 | 58.77 ± 2.88 |
| | BD | 56.08 ± 2.87 | 55.04 ± 2.97 | 56.02 ± 2.89 | 53.89 ± 3.47 | 56.00 ± 2.89 | 56.99 ± 2.81 | 56.02 ± 2.89 |
| | BDC | 58.79 ± 0.95 | 57.74 ± 0.84 | 58.75 ± 0.97 | 57.34 ± 1.41 | 58.74 ± 0.98 | 59.75 ± 1.10 | 60.11 ± 0.96 |
| | DNN | 60.14 ± 0.95 | 60.11 ± 0.96 | 60.05 ± 0.88 | 60.03 ± 0.85 | 60.05 ± 0.88 | 60.76 ± 0.32 | 59.83 ± 1.91 |
| | GCN | 61.49 ± 0.95 | 61.50 ± 1.00 | 61.44 ± 1.09 | 61.40 ± 1.08 | 61.44 ± 1.09 | 62.44 ± 2.06 | 58.19 ± 1.76 |
| | DNNC | 60.81 ± 3.82 | 60.88 ± 3.92 | 60.60 ± 3.83 | 60.46 ± 3.85 | 60.60 ± 3.83 | 60.47 ± 3.29 | 53.77 ± 3.98 |
| | MMD | 66.89 ± 0.96 | 66.94 ± 0.85 | 66.92 ± 0.88 | 66.88 ± 0.94 | 66.92 ± 0.88 | 68.50 ± 0.11 | 64.51 ± 1.91 |
| | DANN | 66.90 ± 0.95 | 67.14 ± 1.34 | 66.96 ± 1.14 | 66.82 ± 0.93 | 66.96 ± 1.14 | 69.28 ± 3.68 | 65.87 ± 0.52 |
| | A2GCN (Ours) | **69.82 ± 1.56** | **70.09 ± 1.56** | **69.83 ± 1.56** | **69.71 ± 1.56** | **69.83 ± 1.56** | **71.38 ± 1.56** | **67.03 ± 1.56** |
| UM→NYU | DC | 53.66 ± 0.86 | 46.31 ± 1.41 | 52.66 ± 1.41 | 45.62 ± 3.76 | 52.65 ± 1.46 | 59.00 ± 1.41 | 52.66 ± 1.41 |
| | BD | 57.02 ± 0.43 | 50.33 ± 0.46 | 56.45 ± 0.68 | 51.53 ± 1.66 | 56.52 ± 0.73 | 62.59 ± 0.89 | 56.46 ± 0.67 |
| | BDC | 53.66 ± 0.86 | 47.23 ± 0.91 | 54.46 ± 1.27 | 53.06 ± 2.11 | 54.48 ± 1.24 | 61.70 ± 1.65 | 54.46 ± 1.27 |
| | DNN | 59.15 ± 1.73 | 58.57 ± 1.74 | 58.65 ± 1.75 | 58.59 ± 1.75 | 58.65 ± 1.75 | 64.45 ± 1.58 | 55.49 ± 2.08 |
| | GCN | 63.11 ± 0.43 | 62.96 ± 0.03 | 63.15 ± 0.09 | 62.83 ± 0.20 | 63.15 ± 0.09 | 69.27 ± 1.03 | 64.35 ± 0.24 |
| | DNNC | 60.68 ± 1.29 | 59.99 ± 1.65 | 60.00 ± 1.85 | 59.95 ± 1.73 | 60.00 ± 1.85 | 65.49 ± 2.21 | 62.68 ± 3.57 |
| | MMD | 66.16 ± 1.29 | 65.44 ± 1.34 | 65.08 ± 1.26 | 65.18 ± 1.27 | 65.08 ± 1.26 | 69.04 ± 0.94 | 66.18 ± 2.17 |
| | DANN | 66.16 ± 0.43 | 65.59 ± 0.67 | 65.50 ± 1.09 | 65.47 ± 0.90 | 65.50 ± 1.09 | 70.14 ± 2.05 | 65.34 ± 0.69 |
| | A2GCN (Ours) | **68.70 ± 0.70** | **68.73 ± 0.63** | **69.07 ± 0.65** | **68.56 ± 0.68** | **69.07 ± 0.65** | **75.52 ± 0.71** | **66.77 ± 0.43** |
| UM→UCLA | DC | 54.73 ± 0.95 | 53.81 ± 0.68 | 54.65 ± 1.00 | 51.00 ± 3.56 | 54.57 ± 1.09 | 55.48 ± 1.32 | 54.65 ± 1.00 |
| | BD | 54.73 ± 0.96 | 53.28 ± 1.10 | 54.80 ± 0.86 | 55.03 ± 0.33 | 54.79 ± 0.88 | 56.32 ± 0.62 | 54.80 ± 0.86 |
| | BDC | 56.08 ± 4.78 | 54.39 ± 4.40 | 56.21 ± 4.84 | 56.93 ± 5.09 | 56.18 ± 4.81 | 58.03 ± 5.28 | 56.21 ± 4.84 |
| | DNN | 56.76 ± 3.83 | 56.79 ± 4.02 | 56.69 ± 4.09 | 56.47 ± 4.19 | 56.69 ± 4.09 | 58.16 ± 5.10 | 52.31 ± 1.39 |
| | GCN | 61.49 ± 0.95 | 61.47 ± 0.93 | 61.44 ± 0.88 | 61.43 ± 0.88 | 61.44 ± 0.88 | 62.33 ± 0.24 | 58.52 ± 1.50 |
| | DNNC | 60.14 ± 0.95 | 60.13 ± 0.96 | 60.05 ± 1.09 | 60.00 ± 1.14 | 60.05 ± 1.09 | 60.84 ± 1.87 | 46.50 ± 4.86 |
| | MMD | 65.54 ± 0.96 | 65.54 ± 0.97 | 65.50 ± 1.03 | 65.49 ± 1.03 | 65.50 ± 1.03 | 66.29 ± 1.82 | 65.24 ± 1.60 |
| | DANN | 65.54 ± 0.96 | 65.57 ± 0.93 | 65.57 ± 0.93 | 65.54 ± 0.95 | 65.57 ± 0.93 | 67.12 ± 0.64 | 61.26 ± 4.45 |
| | A2GCN (Ours) | **70.61 ± 2.56** | **71.71 ± 3.42** | **70.65 ± 2.20** | **70.22 ± 2.23** | **70.52 ± 2.29** | **70.92 ± 3.09** | **71.29 ± 1.29** |
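For reference, the evaluation metrics reported in Table 3 can be computed from binary predictions and class-1 probabilities as follows; this is a generic scikit-learn sketch with variable names of our choosing, not the authors' evaluation code:

```python
import numpy as np
from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                             f1_score, balanced_accuracy_score, roc_auc_score,
                             confusion_matrix)

def evaluate(y_true, y_pred, y_score):
    # Confusion-matrix counts for the NPV, which sklearn has no helper for.
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
    return {
        "ACC": accuracy_score(y_true, y_pred),
        "Pre": precision_score(y_true, y_pred),
        "Rec": recall_score(y_true, y_pred),
        "F1":  f1_score(y_true, y_pred),
        "BAC": balanced_accuracy_score(y_true, y_pred),
        "NPV": tn / (tn + fn),                  # negative predictive value
        "AUC": roc_auc_score(y_true, y_score),  # expects class-1 probabilities
    }
```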