1. Introduction
At present, the clinical diagnosis of neurodegenerative diseases relies mainly on rating scales, which allow rapid evaluation and screening of disease severity but are not sensitive enough in the early stage of disease [1,2]. For these reasons, neuroimaging methods such as positron emission tomography (PET), structural magnetic resonance imaging (sMRI), and functional magnetic resonance imaging (fMRI) are widely used to supplement the study of changes in brain functional patterns. Among them, PET requires contrast agents, which are harmful to the body; sMRI can capture neuronal loss and gray matter atrophy at the microscopic level, but not all neurodegenerative diseases produce these changes. Neuroimaging data represented by fMRI, based on the blood oxygenation level dependent (BOLD) signal, have the advantages of being non-invasive and repeatable and of offering high spatiotemporal resolution; fMRI not only displays brain structure well but also reflects temporal changes in the brain's functional status [3]. It is therefore an ideal method for studying activity patterns and relationships across the brain, and it provides new insight into the functional status of the brain in patients with neurodegenerative diseases. Modern brain imaging techniques and statistical physics, especially complex network theory, provide the necessary basis and analysis methods for studying human brain functional networks (BFN) from neuroimaging data. Studies of BFN based on neuroimaging data such as fMRI are significant for the analysis of neurological diseases [4,5,6].
fMRI-based BFNs are constructed by taking brain regions or voxels as nodes and the correlations between their BOLD signal changes as connections; most of these are static brain functional networks [7,8]. Static networks can reflect the connectivity pattern of the brain but ignore the temporal information in the BOLD signals. With the development of temporal graphs, however, dynamic brain functional networks (D-BFN) have gradually entered the field of view of researchers and received more and more attention.
In the research on D-BFN, Wang et al. [9] explored the association between alterations in dynamic brain network trajectories and cognitive decline in the AD spectrum; the results show that all resting-state networks (RSNs) had increased within-network connectivity through enhanced inner cohesion, while 7 out of 10 RSNs showed decreased between-network connectivity, indicating weakened connectors among networks from the early stage to dementia. Jie et al. [10] defined a new measure to characterize the spatial variability of dynamic connectivity networks (DCNs) and proposed a novel learning framework integrating both temporal and spatial variabilities of DCNs for automatic brain disease diagnosis. The results of a study on 149 subjects with baseline rs-fMRI data from the Alzheimer's Disease Neuroimaging Initiative (ADNI) suggest that the method not only improves classification performance but also provides insights into the spatio-temporal interaction patterns of brain activity and their changes in brain disorders. Aldo et al. [11] estimated the spatial maps constituting the nodes of the functional brain network and their associated time series using spatial group independent component analysis and dual regression; whole-brain oscillatory activity was analyzed both globally (metastability) and locally (static and dynamic connectivity). Morin et al. [12] used a dynamic network analysis of fMRI data to identify changes in functional brain networks associated with context-dependent rule learning; the results support a framework in which a stable ventral attention community and a more flexible cognitive control community support sustained attention and the formation of rule representations in successful learners. Moguilner et al. [13] proposed a data-driven machine learning pipeline based on dynamic connectivity fluctuation analysis (DCFA) of rs-fMRI data from 300 participants; the results show that non-linear dynamical fluctuations surpass two traditional seed-based functional connectivity approaches and provide a pathophysiological characterization of global brain networks in neurodegenerative conditions (AD and bvFTD) across multicenter data. Wang et al. [14] investigated the effects of driving fatigue on the reorganization of dynamic functional connectivity through a newly developed temporal brain network analysis framework; the method provides new insights into the dynamic characteristics of functional connectivity during driving fatigue and demonstrates the potential of temporal network metrics as reliable biomarkers for driving fatigue detection. In contrast to conventional studies focusing on static descriptions of functional connectivity (FC) between brain regions in rs-fMRI, recent studies have resorted to D-BFN to characterize the dynamic changes of FC, since such changes may indicate changes in macroscopic neural activity patterns in cognitive and behavioral aspects.
D-BFNs are usually established using the sliding window method, which segments the BOLD signals into short pieces; consequently, the window size and the length of the segmented signal directly affect the effectiveness of the D-BFN. Moreover, because of the limited number of sampling points in the BOLD signals, the number of windows cannot be very large, so the instantaneous BFN of each data segment cannot be established well. As a result, the D-BFN established by the sliding window method is discrete, and because there are no transitions between network snapshots, some information about changing trends cannot be displayed. A problem that must be faced by fMRI-based D-BFN is that the BOLD signal does not last long, which is determined by the high spatial resolution but limited temporal resolution of fMRI data [15,16]. To establish a D-BFN that clearly reflects the dynamic changes of brain connectivity, it is necessary not only to build a BFN model that is continuous within the time range of data acquisition, but also to extend the temporal information beyond the existing time-series data [17].
Dynamic networks are widely used in social network analysis [18,19], recommendation systems, epidemiology, and other fields. Representing a complex network as a time-dependent structure allows the network model to exploit not only structural patterns but also temporal patterns. Learning continuous-time dynamics on complex networks is essential for understanding, predicting, and controlling complex systems in science and engineering. Nevertheless, this task is very challenging because of the combinatorial complexity of high-dimensional system structures, the elusive continuous-time nonlinear dynamics, and their dependence on the structural dynamics. A continuous-time brain functional network model should consider how to approximate the complete temporal dynamical system from the available imaging data. To meet these challenges, appropriate methods are needed to learn continuous temporal dynamics on complex brain functional networks in a data-driven manner. In recent years, graph neural networks (GNN) have attracted much attention for their excellent performance in network science tasks such as link prediction and node classification. Although graph neural networks are very popular and dynamic network models have demonstrated their advantages, little attention has been paid to graph neural networks for dynamic networks. Ordinary differential equation systems (ODEs) [20] are one of the most important mathematical tools for modeling in physics, biology, chemistry, engineering, and other fields, which has prompted researchers to develop effective numerical methods for solving such equations. Higher-order ordinary differential equations are commonly used to solve time-series problems; one may therefore consider combining ordinary differential equation systems and graph neural networks to learn continuous temporal dynamics on BFNs in a data-driven manner.
D-BFN is usually established by segmenting BOLD signals with time windows, so the window size and the length of the segmented signals directly affect the effectiveness of the dynamic brain functional network [13,21]. Moreover, owing to the limited number of sampling points of the BOLD signal, the number of windows cannot be very large, so the instantaneous brain functional network of each data segment cannot be established, and it is difficult to predict the trend of the BFN after data acquisition ends.
Since no previous research has considered the dynamic continuity of the network when modeling D-BFN, this paper attempts to compute this continuity information. Several deep learning methods can establish temporal dynamic responses on networks [22], among which the method of Zang et al. [23] is outstanding. The neural dynamics on complex networks (NDCN) model proposed by Zang et al. [23] combines ordinary differential equation systems (ODEs) and graph neural networks (GNNs) to learn continuous-time dynamics on complex networks in a data-driven way. In the view of Skarding et al. [22], there is as yet no continuous DGNN encoder for general-purpose dynamic networks; although this approach to modeling dynamics has been discussed in earlier work, to the best of our knowledge it has not been implemented in practice. For this reason, we discuss the possibility of using the NDCN method in the study of D-BFN.
An extensible dynamic brain functional network model is established using NDCN. NDCN integrates the GNN layer numerically in continuous time so as to capture the continuous-time dynamics on the network. This method mainly provides two important functions: one is to compute, from the existing data, the instantaneous network structure of the dynamic network at any moment within the BOLD signal length, that is, interpolation prediction; the other is to predict the continuous changes of the dynamic brain functional network beyond the acquired BOLD signal, that is, extrapolation prediction.
The contributions can be summarized as follows:
- 1. The NDCN-Brain gives the meaning of continuous-time network dynamics to the depth and hidden outputs of GNNs, respectively, and predicts continuous-time dynamics on the BFN.
- 2. An extended dynamic brain functional network model structure is established, which compensates for the length limitation of fMRI data and improves the temporal resolution of the network within a certain range.
- 3. The network is applied to the auxiliary diagnosis of cognitive impairment, and high diagnostic performance is obtained.
The remainder of the paper is organized as follows: the methods used in the model are introduced in Section 2, the model is tested in Section 3, and the paper is summarized in Section 4.
2. Methods
2.1. Overview
An extended dynamic brain functional network based on neural dynamics on complex networks (NDCN) is proposed, as shown in Figure 1.
For the D-BFN model described in this study, the BOLD signal is divided into several slices by window partition, and the network within each segment is calculated. In-snapshots within the signal length are then obtained from these segment networks through NDCN interpolation prediction, and out-snapshots beyond the signal are obtained by extrapolation prediction. Together, these in-snapshots and out-snapshots constitute a more complete dynamic brain functional network model: NDCN-Brain. Using this network dynamics analysis method, we can effectively capture the instantaneous network structures that cannot be obtained by window partition and predict the network in the unobserved signal range. Finally, the D-BFN model was tested. Most of the snapshots (both in and out) in the established BFN maintained the conventional attributes of a BFN well in the tests. The model was used to classify cognitive impairment data, namely Alzheimer's disease (AD), early mild cognitive impairment (EMCI), late mild cognitive impairment (LMCI), and normal control (NC), using dynamic network features. The proposed NDCN-Brain model was found to have a better classification effect than the static network model and the conventional dynamic network model.
As shown in Figure 1, our method is as follows. First, the fMRI data are preprocessed and registered to the Power-264 brain template, and the BOLD signals of the 264 corresponding brain regions are obtained. Then, using time-window sampling, multiple signal segments are extracted from the BOLD signals. The adjacency matrix of each signal segment is calculated, and the sequence of adjacency matrices is fed into the NDCN to obtain the continuous-time dynamical system. Interpolation prediction is used to obtain every instantaneous snapshot (i.e., in-snapshot) within the dynamical system, and extrapolation prediction is used to predict multiple future snapshots (i.e., out-snapshots). These snapshots are combined to obtain an extended dynamic brain functional network structure (in-snapshots and out-snapshots). This model can be used to diagnose cognitive impairment: dynamic network features are extracted first, and diseases are then classified with an SVM.
2.2. The Brain Functional Network Based on NDCN
In this paper, the neural dynamics on complex networks (NDCN) method proposed by Zang et al. [23] is used to construct a dynamic, extensible BFN. This method combines ordinary differential equation systems (ODEs) and graph neural networks (GNNs) to analyze the temporal dynamics of the network in a data-driven manner. Because the brain functional network can also be described as a temporally continuous dynamic network structure, the NDCN method can be used to construct it.
2.2.1. Neural Dynamics on Complex Networks
In the theory of Zang et al. [23], a graph structure with continuous-time dynamics can be described as:

$$\frac{dX(t)}{dt} = f\big(X(t), G, W, t\big), \qquad X(0) = X_0,$$

in which $X(t) \in \mathbb{R}^{n \times d}$ can be expressed as the states of $n$ interconnected nodes in a dynamic continuous system, each node being characterized by $d$-dimensional features; $G$ represents the network structure that captures how nodes interact; $W$ is a parameter that controls how the system evolves with time; and $X(0)$ is the initial state of the system at time $t = 0$. The equation represents the instantaneous rate of change of the dynamics on the graph. In addition, nodes may have various semantic labels $Y(X(t)) \in \{0, 1\}^{n \times k}$, encoded by one-hot encoding, and the parameter $\Theta$ represents this classification function.
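As a concrete illustration of such dynamics on a graph, the following minimal Python sketch integrates a simple diffusion dynamic, $f(X, G) = -LX$ with $L$ the graph Laplacian, on a toy four-node graph. This choice of $f$ and the toy graph are illustrative assumptions only; in NDCN, $f$ is parameterized by a graph neural network and learned from data.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Toy illustration of continuous-time dynamics on a graph: a simple
# diffusion dynamic f(X, G) = -L @ X (L = graph Laplacian) is used as the
# right-hand side; NDCN instead parameterizes f with a graph neural network.
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)      # adjacency of a 4-node graph
L = np.diag(A.sum(axis=1)) - A                 # combinatorial Laplacian
X0 = np.array([1.0, 0.0, 0.0, 0.0])            # initial node states (d = 1)

def f(t, x):
    return -L @ x                              # instantaneous change rate dX/dt

sol = solve_ivp(f, t_span=(0.0, 5.0), y0=X0, t_eval=np.linspace(0, 5, 11))
print(sol.y.T)                                 # node states X(t) at the requested times
```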
The basic framework of NDCN can be summarized as follows:

$$\begin{aligned}
\arg\min_{W, \Theta} \; \mathcal{L} &= \int_{0}^{T} R\big(X(t), G, W, t\big)\, dt + S\big(Y(X(T)), \Theta\big) \\
\text{s.t.} \quad \frac{dX(t)}{dt} &= f\big(X(t), G, W, t\big), \qquad X(0) = X_0,
\end{aligned}$$

where $R(\cdot)$ is the "running" loss of the continuous-time dynamics on the graph from $0$ to $T$, and $S(\cdot)$ is the "terminal" loss at time $T$. By integrating $\frac{dX(t)}{dt}$ over time $t$ from the initial state $X(0)$, also known as solving the initial value problem of this differential equation system, the continuous-time network dynamics $X(t) = X(0) + \int_{0}^{t} f\big(X(\tau), G, W, \tau\big)\, d\tau$ can be obtained at an arbitrary time moment $t$.
Moreover, to further increase the expressive ability of the model, NDCN encodes the network signal $X(t)$ from the original space to $X_h(t)$ in a hidden space and learns the dynamics in that hidden space. The NDCN model then becomes:

$$\begin{aligned}
\arg\min_{W_e, W, W_d, \Theta} \; \mathcal{L} &= \int_{0}^{T} R\big(X_h(t), G, W, t\big)\, dt + S\big(Y(X(T)), \Theta\big) \\
\text{s.t.} \quad X_h(0) &= f_e\big(X(0), W_e\big), \\
\frac{dX_h(t)}{dt} &= f\big(X_h(t), G, W, t\big), \\
X(t) &= f_d\big(X_h(t), W_d\big),
\end{aligned}$$

where the first constraint transforms $X(t)$ into the hidden space $X_h(t)$ through the encoding function $f_e$, the second constraint is the governing dynamics in the hidden space, and the third constraint decodes the hidden signal back to the original space with the decoding function $f_d$. The design of $f_e$, $f$, and $f_d$ is flexible and can adopt any deep neural structure, of which GNN is the best choice. NDCN can realize prediction in two directions, namely interpolation prediction and extrapolation prediction. For a system observed over $[0, T]$, at time $t \le T$, NDCN predicts the instantaneous network of the continuous-time system, that is, interpolation prediction, and the results are named 'in-snapshots'. At time $t > T$, NDCN infers the network dynamics outside the observed system, that is, extrapolation prediction, and the results are named 'out-snapshots'.
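To make the encode-integrate-decode structure concrete, the following minimal PyTorch sketch uses a single linear graph-convolution-style layer as the hidden dynamics $f$, a fixed-step Euler integrator, and arbitrary layer sizes; these are illustrative assumptions, not the configuration used in this paper.

```python
import torch
import torch.nn as nn

class NDCNSketch(nn.Module):
    """Minimal encode-integrate-decode sketch in the spirit of NDCN.
    The hidden dynamics f, layer sizes, and Euler step count are
    illustrative assumptions, not the authors' configuration."""

    def __init__(self, d_in, d_hidden, adj):
        super().__init__()
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1.0)
        self.A_norm = adj / deg                  # row-normalized adjacency
        self.encode = nn.Linear(d_in, d_hidden)  # encoding function f_e
        self.W = nn.Linear(d_hidden, d_hidden)   # weights of hidden dynamics f
        self.decode = nn.Linear(d_hidden, d_in)  # decoding function f_d

    def dynamics(self, Xh):
        # dX_h/dt = tanh(A_norm X_h W): a diffusion-like, GNN-style dynamic
        return torch.tanh(self.A_norm @ self.W(Xh))

    def forward(self, X0, t_end=1.0, n_steps=50):
        Xh = torch.tanh(self.encode(X0))         # encode into hidden space
        dt = t_end / n_steps
        decoded = []
        for _ in range(n_steps):                 # explicit Euler integration
            Xh = Xh + dt * self.dynamics(Xh)
            decoded.append(self.decode(Xh))      # decode back to original space
        return torch.stack(decoded)              # states at n_steps time points

# usage: 264 brain regions; each node's feature is its row of a snapshot matrix
A = torch.rand(264, 264)
A = (A + A.T) / 2
model = NDCNSketch(d_in=264, d_hidden=20, adj=A)
trajectory = model(X0=A)                         # shape (50, 264, 264)
```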
2.2.2. Dynamic Network Modeling Based on NDCN
Before constructing a continuous network, the original discrete-time system needs to be sampled, and the samples are used for learning the continuous-time system. For the fMRI-based brain functional network, the BOLD signals are sampled from front to back by the time window partition method, and the time series of each brain region is segmented. Specifically, for fMRI data with BOLD signal length $T$, the whole time series $X \in \mathbb{R}^{m \times T}$ (where $m$ is the number of nodes) is divided into several time windows of length $t$, and the interval between consecutive windows is $s$, that is, the moving step is $s$. Finally, $n$ time segments are obtained, and each segment is called a window time. The total number of windows generated by the time window method is calculated as follows:

$$n = \left\lfloor \frac{T - t}{s} \right\rfloor + 1.$$
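A minimal sketch of this segmentation is given below; the window length and step values are illustrative and do not imply the settings used in the experiments.

```python
import numpy as np

def sliding_windows(bold, t, s):
    """Segment an (m, T) BOLD array into windows of length t with step s.
    The number of windows is n = floor((T - t) / s) + 1."""
    m, T = bold.shape
    n = (T - t) // s + 1
    return [bold[:, i * s: i * s + t] for i in range(n)]

# example: 264 regions, 200 time points, window length 30, step 10 -> 18 windows
bold = np.random.randn(264, 200)
windows = sliding_windows(bold, t=30, s=10)
print(len(windows))        # 18
```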
Figure 2 shows a schematic diagram of the sliding time window, where $T$ is the length of the fMRI time series, $t$ is the size of the sliding time window, and $s$ is the step size of the sliding window movement.
The Pearson correlation coefficient [24] is used to calculate the correlation between the BOLD signals in each time window, as follows:

$$r_{ij} = \frac{\sum_{k=1}^{t}\big(x_i(k) - \bar{x}_i\big)\big(x_j(k) - \bar{x}_j\big)}{\sqrt{\sum_{k=1}^{t}\big(x_i(k) - \bar{x}_i\big)^2}\, \sqrt{\sum_{k=1}^{t}\big(x_j(k) - \bar{x}_j\big)^2}},$$

where $x_i$ and $x_j$ represent the BOLD signals of two brain regions $i$ and $j$ in each time slice, and $\bar{x}_i$ and $\bar{x}_j$ are their means. The Pearson correlation coefficient is calculated in each slice, and we obtain the correlation matrix $A_k \in \mathbb{R}^{m \times m}$, that is, an original snapshot of the network. We sample $n$ such snapshots in a continuous-time system over $[0, T]$ for NDCN training. After training is completed, for any time $t' \le T$, interpolation prediction can be used to obtain the in-snapshots; for $t' > T$, the out-snapshots can be obtained by extrapolation prediction. Finally, all the in-snapshots and out-snapshots together form an extended dynamic brain functional network model whose time dimension exceeds that of the original data. This model is called NDCN-Brain.
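A minimal sketch of constructing the observed snapshots from the windowed signals is given below; synthetic data are used purely for illustration.

```python
import numpy as np

def window_snapshots(windows):
    """One Pearson correlation matrix (network snapshot) per window.
    np.corrcoef treats each row (brain region) as a variable."""
    return [np.corrcoef(w) for w in windows]

# example with synthetic BOLD segments: 264 regions, window length 30
windows = [np.random.randn(264, 30) for _ in range(18)]
snapshots = window_snapshots(windows)
print(snapshots[0].shape)        # (264, 264): adjacency of the first snapshot
```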
2.3. Cognitive Impairment CAD Based on NDCN-Brain
Computer-aided diagnosis (CAD) [25,26] generally includes two parts, namely feature extraction and classification. Here, we use the dynamic aggregation coefficient as the feature of the dynamic network and then use support vector machines (SVM) to classify the feature data.
It is known that the brain usually performs multiple functions in a partitioned manner, and even ordinary physiological activities are completed through multi-regional cooperation, which is also reflected in the brain network. The aggregation coefficient describes the degree of clustering between vertices in a graph, and the global aggregation coefficient evaluates the clustering degree of the whole graph. Therefore, for a dynamic network structure, each snapshot yields one global aggregation coefficient as a feature, and these features form an aggregation coefficient sequence, which is used as the dynamic network feature for classification.
For a general graph structure $G = (V, E)$, that is, a snapshot in this paper, $V$ represents the set of vertices, $E$ denotes the set of edges, and $e_{ij} \in E$ denotes the connection between vertices $v_i$ and $v_j$. Each vertex is connected with a different number of other vertices, and $E_i$ is used to represent the set of edges connected with vertex $v_i$:

$$E_i = \{\, e_{ij} \mid e_{ij} \in E \,\}.$$

The number of edges in $E_i$ is the degree of vertex $v_i$, denoted as $k_i = |E_i|$.
If $C$ is used to denote the global aggregation coefficient of a snapshot, $N_{\triangle}$ is used to denote the number of closed three-point groups (closed triplets) in the graph, and $N_{\wedge}$ is used to denote the number of open three-point groups (open triplets), then

$$C = \frac{N_{\triangle}}{N_{\triangle} + N_{\wedge}}.$$

Expressed in terms of the connections $e_{ij}$, it can also be written as

$$C = \frac{\sum_{i \neq j \neq k} e_{ij}\, e_{jk}\, e_{ki}}{\sum_{i \neq j \neq k} e_{ij}\, e_{jk}}.$$
For the extended dynamic brain functional network model of a subject with cognitive impairment, the feature vector of aggregation coefficients can be expressed as the sequence of global aggregation coefficients of all snapshots, $F = [C_1, C_2, \ldots, C_{n'}]$, where $n'$ is the total number of in-snapshots and out-snapshots.
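A minimal sketch of computing this feature vector from a set of snapshots is given below, assuming a simple binarization threshold that is purely illustrative.

```python
import numpy as np
import networkx as nx

def global_clustering(snapshot, threshold=0.3):
    """Global aggregation (clustering) coefficient of one snapshot:
    closed triplets / all connected triplets, via nx.transitivity.
    The binarization threshold is an illustrative choice."""
    A = (np.abs(snapshot) > threshold).astype(int)
    np.fill_diagonal(A, 0)
    return nx.transitivity(nx.from_numpy_array(A))

# example: one feature per snapshot -> the dynamic feature vector F
snapshots = [np.corrcoef(np.random.randn(264, 30)) for _ in range(25)]
features = np.array([global_clustering(S) for S in snapshots])
print(features.shape)            # (25,)
```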
Then, support vector machines (SVM) are used to classify the feature vectors of the different cognitive impairment groups, realizing the auxiliary diagnosis of cognitive impairment diseases.
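A minimal sketch of this classification step with scikit-learn is shown below, using synthetic features and illustrative SVM settings rather than the configuration reported in this paper.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

# X: one aggregation-coefficient sequence per subject; y: diagnostic label.
# Subject counts, kernel, and C are illustrative, not the paper's settings.
rng = np.random.default_rng(0)
X = rng.normal(size=(40, 25))                    # 40 subjects, 25 snapshots each
y = np.repeat(["AD", "EMCI", "LMCI", "NC"], 10)  # 10 subjects per group

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
scores = cross_val_score(clf, X, y, cv=5)        # 5-fold cross-validation accuracy
print(scores.mean())
```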