Article

NDCN-Brain: An Extensible Dynamic Functional Brain Network Model

1
School of Computer Science & Engineering, Northeastern University, Shenyang 110819, China
2
Key Laboratory of Big Data Management and Analytics, Northeastern University, Shenyang 110819, China
3
College of Medicine and Biological Information Engineering, Northeastern University, Shenyang 110819, China
*
Author to whom correspondence should be addressed.
Diagnostics 2022, 12(5), 1298; https://doi.org/10.3390/diagnostics12051298
Submission received: 6 April 2022 / Revised: 17 May 2022 / Accepted: 23 May 2022 / Published: 23 May 2022
(This article belongs to the Special Issue Machine Extractable Knowledge from the Shape of Anatomical Structures)

Abstract:
As an extension of the static network, the dynamic functional brain network can show continuous changes in the brain’s connections. However, limited by the length of the fMRI signal, it is difficult to capture every instantaneous moment when constructing a dynamic network, and there is no effective way to predict the dynamic changes of the network after the signal ends. In this paper, an extensible dynamic brain functional network model is proposed. The model exploits the ability of neural dynamics on complex networks (NDCN) to extract and predict the instantaneous states of a dynamic network, and constructs a dynamic network structure that covers more than the original signal range. Experimental results show that every snapshot in the network obtained by the proposed method has a usable network structure, and that the model also achieves good classification results in the diagnosis of cognitive impairment diseases.

1. Introduction

At present, the clinical diagnosis of neurodegenerative diseases is mainly based on rating scales, which can quickly evaluate and screen the severity of a patient’s disease but are not sensitive enough for the early stage of disease [1,2]. For these reasons, neuroimaging methods such as positron emission tomography (PET), structural magnetic resonance imaging (sMRI), and functional magnetic resonance imaging (fMRI) are mostly used as supplements in the study of changes in brain functional patterns. Among them, PET requires contrast agents, which are harmful to the body; sMRI can capture microscopic levels of neuronal loss and gray matter atrophy, but not all neurodegenerative diseases produce these changes. Neuroimaging data represented by fMRI, based on the blood oxygenation level dependent (BOLD) signal, have the advantages of being non-invasive and repeatable with high spatiotemporal resolution; fMRI not only displays brain structure well but also reflects the temporal changes of the brain’s functional status [3]. It is therefore an ideal method for studying activity patterns and relationships across the brain, and it provides new insight into the functional status of the brain in patients with neurodegenerative diseases. Modern brain imaging techniques and statistical physics, especially complex network theory, provide the necessary basis and analysis methods for studying human brain functional networks (BFN) from neuroimaging data. Studies of BFN based on neuroimaging data such as fMRI are significant for the analysis of neurological diseases [4,5,6].
fMRI-based BFNs are constructed with brain regions or voxels as nodes and the correlations of BOLD changes between them as connections; most of them are static brain functional networks [7,8]. Static networks can reflect the connectivity pattern of the brain but ignore the temporal information in BOLD signals. With the development of temporal graphs, however, dynamic brain functional networks (D-BFN) have gradually entered the field of view of researchers and received more and more attention.
In the research of D-BFN, Wang et al. [9] explored the association between alterations in the trajectory of dynamic brain networks and cognitive decline in the AD spectrum; the results show that all resting-state networks (RSNs) had an increase in connectivity within networks through enhanced inner cohesion, while 7 out of 10 RSNs were characterized by a decrease in connectivity between networks, indicating weakened connectors among networks from the early stage to dementia. Jie et al. [10] defined a new measure to characterize the spatial variability of DCNs, and then proposed a novel learning framework integrating both temporal and spatial variabilities of DCNs for automatic brain disease diagnosis; the results of a study on 149 subjects with baseline rs-fMRI data from the Alzheimer’s disease neuroimaging initiative (ADNI) suggest that the method not only improves classification performance, but also provides insights into the spatio-temporal interaction patterns of brain activity and their changes in brain disorders. In Aldo et al. [11], the spatial maps constituting the nodes of the functional brain network and their associated time series were estimated using spatial group independent component analysis and dual regression; whole-brain oscillatory activity was analyzed both globally (metastability) and locally (static and dynamic connectivity). Morin et al. [12] used a dynamic network analysis of fMRI data to identify changes in functional brain networks associated with context-dependent rule learning; the results support a framework in which a stable ventral attention community and a more flexible cognitive control community support sustained attention and the formation of rule representations in successful learners. Moguilner et al.
[13] proposed a data-driven machine learning pipeline based on dynamic connectivity fluctuation analysis (DCFA) of RS-fMRI data from 300 participants; the results show that non-linear dynamical fluctuations surpass two traditional seed-based functional connectivity approaches and provide a pathophysiological characterization of global brain networks in neurodegenerative conditions (AD and bvFTD) across multicenter data. Wang et al. [14] investigated the effects of driving fatigue on the reorganization of dynamic functional connectivity through a newly developed temporal brain network analysis framework; the method provides new insights into the dynamic characteristics of functional connectivity during driving fatigue and demonstrates the potential of temporal network metrics as reliable biomarkers for driving fatigue detection. Different from conventional studies focusing on static descriptions of functional connectivity (FC) between brain regions in rs-fMRI, recent studies have resorted to D-BFN to characterize the dynamic changes of FC, since such changes may indicate changes in macroscopic neural activity patterns in cognitive and behavioral aspects.
D-BFNs are usually established using the sliding window method, which segments the BOLD signals into small pieces; as a result, the size of the window and the length of the split signal directly affect the effectiveness of the D-BFN. Moreover, due to the limited number of sampling points in the BOLD signals, the number of windows used to segment the signals cannot be large, so the instantaneous BFN of each data segment cannot be established well. This makes the D-BFN established by the sliding window method discrete, and because there is no transition between network snapshots, some trend information cannot be displayed. A problem that must be faced by any fMRI-based D-BFN is that the BOLD signal does not have a long duration, which is determined by the high spatial resolution but modest temporal resolution of fMRI data [15,16]. To establish a D-BFN that clearly reflects the dynamic changes of brain connectivity, it is necessary not only to build a BFN model that is continuous within the time range of data acquisition, but also to extrapolate more temporal information beyond the existing time series [17].
Dynamic networks are widely used in social network analysis [18,19], recommendation systems, epidemiology, and other fields. Representing a complex network as a time-dependent structure allows the network model to exploit not only structural patterns but also temporal patterns. Learning continuous-time dynamics on complex networks is essential for understanding, predicting, and controlling complex systems in science and engineering; nevertheless, this task is very challenging due to the combinatorial complexity of high-dimensional system structures, the elusive continuous-time nonlinear dynamics, and their dependence on the structural dynamics. A brain functional network model based on continuous time should consider how to simulate the complete temporal dynamical system from the existing image data. To meet these challenges, appropriate methods are needed to learn continuous temporal dynamics on complex brain functional networks in a data-driven manner. In recent years, graph neural networks (GNN) have attracted much attention for their excellent performance in network scientific tasks such as link prediction and node classification. Despite the popularity of graph neural networks and the demonstrated advantages of dynamic network models, little attention has been paid to graph neural networks for dynamic networks. Ordinary differential equation systems (ODEs) [20] are one of the most important mathematical tools used to build models in physics, biology, chemistry, engineering, and other fields, which prompts researchers to provide effective numerical methods to solve such equations. Higher-order ordinary differential equations are commonly used to solve time-series problems; therefore, one may consider combining ordinary differential equation systems and graph neural networks to learn continuous temporal dynamics on BFNs in a data-driven manner.
The D-BFN is usually established by using time window partition to segment the BOLD signals, so the size of the window and the length of the split signals directly affect the effectiveness of the dynamic brain functional network [13,21]. Moreover, due to the limited number of sampling points in the BOLD signal, the number of windows cannot be large, so there is no way to establish the instantaneous brain functional network of each data segment, and it is difficult to predict the trend of the BFN in the stage after data acquisition.
As no existing research considers the dynamic continuity of the network when establishing a D-BFN model, this paper attempts to compute this continuity information. There are some deep learning methods that can establish temporal dynamics on a network [22], among which the method of Zang et al. [23] is outstanding. The neural dynamics on complex networks (NDCN) model proposed by Zang et al. [23] combines ordinary differential equation systems (ODEs) and graph neural networks (GNNs) to learn continuous-time dynamics on complex networks in a data-driven way. In the view of Skarding et al. [22], there is as of yet no continuous DGNN encoder for any general-purpose dynamic network; while this approach to modeling dynamics has been discussed by earlier works, to the best of our knowledge it has not been implemented in practice. For this reason, we discuss the possibility of using the NDCN method in the study of D-BFN.
An extensible dynamic brain functional network model is established using NDCN. NDCN numerically integrates GNN layers in continuous time so as to capture the continuous-time dynamics on the network. The method provides two important functions. One is to compute the network structure at each instant of the dynamic network within the BOLD signal length using the existing data, that is, interpolation prediction. The other is to predict the continuous changes of the dynamic brain functional network after the BOLD signal ends, that is, extrapolation prediction.
The contributions can be summarized as follows:
1. The NDCN-Brain gives the meaning of continuous-time network dynamics to the depth and hidden outputs of GNNs, respectively, and predicts continuous-time dynamics on the BFN.
2. An extended dynamic brain functional network model structure is established, which makes up for the length limitation of fMRI data and improves the temporal resolution of the network within a certain range.
3. The network is applied to the auxiliary diagnosis of cognitive impairment and high diagnostic performance is obtained.
The remainder of the paper is organized as follows: the method used in the model is introduced in Section 2, the model is tested in Section 3, and the paper is summarized in Section 4.

2. Methods

2.1. Overview

An extended dynamic brain function network based on neural dynamics on complex networks (NDCN) is proposed as shown in Figure 1.
For the D-BFN model described in this study, the BOLD signal is divided into several slices by window partition, and the network within each segment is calculated. Then, in-snapshots within the signal length are obtained from these segment networks through NDCN interpolation prediction, and out-snapshots beyond the signal are obtained by extrapolation prediction. These in-snapshots and out-snapshots together constitute a more complete dynamic brain functional network model: NDCN-Brain. Using this network dynamics analysis method, we can effectively capture the instantaneous network structures that cannot be obtained by window partition and predict the network in the unknown signal space. Finally, the D-BFN model was tested. Most of the snapshots (both in and out) in the established BFN maintained the conventional attributes of the BFN well in the test. The model was used to classify cognitive impairment data, namely Alzheimer’s disease (AD), early mild cognitive impairment (EMCI), late mild cognitive impairment (LMCI), and normal control (NC), through dynamic network features. It was found that the proposed NDCN-Brain model achieves a better classification effect than the static network model and the conventional dynamic network model.
As shown in Figure 1, our method is as follows. First, the fMRI data are preprocessed, the preprocessed data are registered to the Power 264 brain template, and the BOLD signals of the 264 corresponding brain regions are obtained. Then, using time window interval sampling, multiple signal segments are collected from the BOLD signals. The adjacency matrix of each signal segment is calculated, and the sequence of adjacency matrices is fed into the NDCN to obtain the continuous-time dynamics system. Interpolation prediction is used to obtain every instantaneous snapshot (i.e., the in-snapshots) in the dynamic time system, and extrapolation prediction is used to predict multiple future snapshots (i.e., the out-snapshots). These snapshots are combined to obtain an extended dynamic brain functional network structure (in-snapshots, out-snapshots). We can use this model to diagnose cognitive impairment: dynamic network features are extracted first, and then diseases are classified with an SVM.

2.2. The Brain Functional Network Based on NDCN

In this paper, the neural dynamics on complex networks (NDCN) method proposed by Zang et al. [23] is used to construct a dynamic extensible BFN. This method combines ordinary differential equation systems (ODEs) and graph neural networks (GNNs) to analyze the temporal dynamics of the network in a data-driven manner. Because the brain functional network can also be described as a continuous dynamic network structure in time, the NDCN method can be used to construct the network.

2.2.1. Neural Dynamics on Complex Networks

In the theory of Zang et al. [23], a graph structure with dynamic continuity time can be described as:
$$\frac{dX(t)}{dt} = f(X, G, W, t)$$
In which $X(t) \in \mathbb{R}^{n \times d}$ represents the states of $n$ interconnected nodes in a dynamic continuous system, where each node is characterized by $d$ dimensions. $G = (V, E)$ represents the network structure that captures how nodes interact. $W(t)$ is a parameter that controls how the system evolves over time. $X(0) = X_0$ is the initial state of the system at time $t = 0$. The function $f: \mathbb{R}^{n \times d} \rightarrow \mathbb{R}^{n \times d}$ represents the instantaneous rate of change of the dynamics on the graph. In addition, nodes may have various semantic labels $Y(X, \Theta) \in \{0, 1\}^{n \times k}$, encoded as one-hot codes, where the parameter $\Theta$ represents this classification function.
The basic framework of NDCN can be summarized as follows:
$$\arg\min_{W(t),\,\Theta(T)} L = \int_0^T R(X, G, W, t)\,dt + S\big(Y(X(T), \Theta)\big) \quad \text{subject to} \quad \frac{dX(t)}{dt} = f(X, G, W, t),\quad X(0) = X_0$$
where $\int_0^T R(X, G, W, t)\,dt$ is the “running” loss of the continuous-time dynamics on the graph from $t = 0$ to $t = T$, and $S(Y(X(T), \Theta))$ is the “terminal” loss at time $T$. Integrating $\frac{dX}{dt} = f(X, G, W, t)$ over time from the initial state $X_0$, also known as solving the initial value problem for this differential equation system, yields the continuous-time network dynamics $X(t) = X(0) + \int_0^t f(X, G, W, \tau)\,d\tau$ at an arbitrary time moment $t > 0$.
Moreover, to further increase the expressive ability of the model, NDCN encodes the network signal $X(t)$ from the original space to $X_h(t)$ in a hidden space and learns the dynamics in that hidden space. The NDCN model then becomes:
$$\arg\min_{W(t),\,\Theta(T)} L = \int_0^T R(X, G, W, t)\,dt + S\big(Y(X(T), \Theta)\big) \quad \text{subject to} \quad X_h(t) = f_e(X(t), W_e),\quad X(0) = X_0,\quad \frac{dX_h(t)}{dt} = f(X_h, G, W_h, t),\quad X(t) = f_d(X_h(t), W_d)$$
where the first constraint transforms $X(t)$ into the hidden space $X_h(t)$ through the encoding function $f_e$. The second constraint is the governing dynamics in the hidden space. The third constraint decodes the hidden signal back to the original space with the decoding function $f_d$. The design of $f_e$, $f$, and $f_d$ is flexible and can adopt any deep neural structure, of which GNN is the best choice. NDCN can realize prediction in two directions, namely interpolation prediction and extrapolation prediction. For a system over $(0, T]$, at time $t < T$, NDCN predicts the instantaneous network in the continuous-time system, that is, interpolation prediction; the results are named ’in-snapshots’. At time $t > T$, NDCN infers the network dynamics outside the system, that is, extrapolation prediction; the results are named ’out-snapshots’.
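To make the encode–integrate–decode structure above concrete, the following is a minimal numerical sketch, not the authors’ implementation: it replaces the trained ODE solver with forward Euler integration, uses a single GCN-style term $\tanh(\hat{A} X_h W_h)$ as the hidden dynamics, and all weight matrices are stand-ins for learned parameters.

```python
import numpy as np

def normalized_adjacency(A):
    """Symmetrically normalized adjacency with self-loops: D^{-1/2} (A + I) D^{-1/2}."""
    A_hat = A + np.eye(A.shape[0])
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
    return A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

def ndcn_trajectory(X0, A, W_e, W_h, W_d, t_max, dt=0.1):
    """Encode X0 into a hidden space (f_e), Euler-integrate the hidden
    dynamics dX_h/dt = tanh(A_norm @ X_h @ W_h), and decode (f_d) each step.
    Returns the decoded state at every integration step."""
    A_norm = normalized_adjacency(A)
    Xh = np.tanh(X0 @ W_e)                         # encoder f_e
    states = []
    for _ in range(int(round(t_max / dt))):
        Xh = Xh + dt * np.tanh(A_norm @ Xh @ W_h)  # hidden dynamics f
        states.append(Xh @ W_d)                    # decoder f_d
    return states
```

Decoded states at times inside the observed horizon play the role of in-snapshots; continuing the integration loop past the last observed time yields out-snapshots.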

2.2.2. Dynamic Network Modeling Based on NDCN

Before constructing a continuous network, the original discrete-time system needs to be sampled and used for learning the continuous-time system. For the fMRI-based brain functional network, the BOLD signals are sampled from front to back by the time window partition method, and the time series of each brain region is segmented. Specifically, for fMRI data with BOLD signal length $T$, each whole time series $BOLD_i\ (i = 1, 2, \ldots, m)$, where $m$ is the number of nodes, is divided into several time windows of length $t$, with an interval of $s$ between windows, that is, a moving step of $s$. Finally, $n$ time segments are obtained, and each segment is called a window. The total number of windows generated by the time window method is calculated as follows.
$$n = \frac{T - t}{s} + 1$$
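For illustration, the window segmentation can be sketched as follows; this is a minimal example assuming `bold` is a NumPy array of shape (regions, T).

```python
import numpy as np

def sliding_windows(bold, t, s):
    """Split a (regions x T) BOLD array into windows of length t with step s.
    The number of windows matches n = (T - t) / s + 1 (rounded down)."""
    T = bold.shape[1]
    return [bold[:, i:i + t] for i in range(0, T - t + 1, s)]
```

With the settings used later in the experiments (window length 6, step 5), a 140-point signal yields 27 windows.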
Figure 2 shows a schematic diagram of the sliding time window, where T is the length of the fMRI time series, t is the size of the sliding time window, and s is the step size of the sliding time window movement.
The Pearson correlation coefficient [24] is used to calculate the correlation between BOLD signals in each time window, and the calculation method is as follows:
$$r = \frac{N\sum x_i y_i - \sum x_i \sum y_i}{\sqrt{N\sum x_i^2 - \left(\sum x_i\right)^2}\,\sqrt{N\sum y_i^2 - \left(\sum y_i\right)^2}}$$
where $x_i$, $y_i$ represent the BOLD signals of two brain regions in each time slice. The Pearson correlation coefficient is calculated in each slice to obtain the correlation matrix $X(\hat{t})$, that is, an original snapshot of the network. We sample $n$ snapshots in a continuous-time system, $\{X(t_1), \ldots, X(t_n)\}$ with $0 \le t_1 < \cdots < t_n \le T$, for NDCN training. After training, for any time $i < T$, interpolation prediction can be used to obtain the in-snapshot $X(i)$; for $j > T$, the out-snapshot $X(j)$ can be obtained by extrapolation prediction. Finally, all in-snapshots and out-snapshots form an extended dynamic brain functional network model whose time dimension exceeds $T$. This model $[X(1), \ldots, X(i), \ldots, X(T), \ldots, X(j), \ldots]$ is called NDCN-Brain.
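As a sketch, the per-window correlation matrix can be computed as below; the optional `threshold` parameter for binarizing the matrix is an assumption, since the text does not specify how the correlation matrix is turned into an adjacency matrix.

```python
import numpy as np

def window_adjacency(segment, threshold=None):
    """Pearson correlation matrix of one BOLD segment (regions x t).
    If a threshold is given, binarize by absolute correlation (assumed scheme)."""
    R = np.corrcoef(segment)
    np.fill_diagonal(R, 0.0)          # ignore self-connections
    if threshold is not None:
        return (np.abs(R) > threshold).astype(int)
    return R
```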

2.3. Cognitive Impairment CAD Based on NDCN-Brain

Computer-aided diagnosis (CAD) [25,26] generally includes two parts, namely feature extraction and classification. Here we use dynamic aggregation coefficient as the feature of dynamic network and then use support vector machines (SVM) to classify the feature data.
It is known that the brain usually performs multiple functions in the form of partitions, and even general physiological activities will be completed in a multi-regional cooperative way, which is also reflected in the brain network. Because the aggregation coefficient is used to describe the degree of aggregation between vertices in a graph, the global aggregation coefficient can give an evaluation of the aggregation degree of the whole graph; therefore, for a dynamic network structure, each snapshot can give a global aggregation coefficient as a feature, and these features can form an aggregation coefficient sequence, which is used as dynamic network features for classification.
For a general graph structure, that is, a snapshot $G_{snapshot} = (V, E)$ in this paper, $V = \{v_1, v_2, \ldots, v_n\}$ represents the set of vertices, $E = \{e_{ij} : (i, j) \in S \subseteq [1, \ldots, n]^2\}$ denotes the set of edges, and $e_{ij}$ denotes the connection between vertices $v_i$ and $v_j$.
Each vertex is connected with a different number of other vertices, and L ( i ) is used to represent the set of edges connected with vertex v i :
$$L(i) = \{v_j : e_{ij} \in E \;\vee\; e_{ji} \in E\}$$
The number of edges in $L(i)$ is the degree of vertex $v_i$, denoted $k_i = |L(i)|$.
If $G_{total}^s$ denotes the global aggregation coefficient of a snapshot, $G_\triangle$ denotes the number of closed three-point groups (triangles) in the graph, and $G_\wedge$ denotes the number of open three-point groups, then

$$G_{total}^s = \frac{3 \times G_\triangle}{3 \times G_\triangle + G_\wedge}$$
Expressed in terms of $k_i$, it can also be written as

$$G_{total}^s = \frac{3 \times G_\triangle}{\sum_{i=1}^n \binom{k_i}{2}}$$
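A direct NumPy sketch of this triplet-count formula, for an undirected 0/1 adjacency matrix:

```python
import numpy as np

def global_clustering(A):
    """Global aggregation (clustering) coefficient of an undirected 0/1
    adjacency matrix: 3 * triangles / total connected triplets."""
    A = np.asarray(A, dtype=float)
    triangles = np.trace(A @ A @ A) / 6.0      # each triangle counted 6 times
    k = A.sum(axis=1)
    triplets = (k * (k - 1) / 2.0).sum()       # sum over i of C(k_i, 2)
    return 3.0 * triangles / triplets if triplets else 0.0
```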
For a subject with cognitive impairment, given the extended dynamic brain functional network model $[X(1), \ldots, X(i), \ldots, X(T), \ldots, X(j), \ldots]$, the feature vector of aggregation coefficients can be expressed as $[G_{total}^1, \ldots, G_{total}^i, \ldots, G_{total}^T, \ldots, G_{total}^j, \ldots]$.
Then, support vector machines (SVM) are used to classify the feature vectors of the different cognitive impairment groups, which realizes the auxiliary diagnosis of cognitive impairment diseases.
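A minimal sketch of this classification step with scikit-learn; the subject counts, feature length, and SVM settings here are illustrative stand-ins, and the features are random rather than real aggregation coefficient sequences.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

# Hypothetical feature matrix: one row per subject, one column per snapshot's
# global aggregation coefficient (in- and out-snapshots concatenated).
rng = np.random.default_rng(0)
X = rng.normal(size=(40, 210))     # 40 subjects, 210 snapshot features (assumed)
y = np.repeat([0, 1], 20)          # e.g., NC vs. AD labels

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
scores = cross_val_score(clf, X, y, cv=5)
```

On real data, each row of `X` would hold the aggregation coefficient sequence of one subject’s NDCN-Brain model.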

3. Results

3.1. Experimental Data and Preprocessing

Alzheimer’s disease data were sourced from the Alzheimer’s Disease Neuroimaging Initiative database http://ADNI.oni.usc.edu/ (accessed on 18 September 2021). The data obtained include Alzheimer’s disease (AD), early mild cognitive impairment (EMCI), late mild cognitive impairment (LMCI), and normal control (NC) groups.
ADNI provides ethics statements for AD, EMCI, LMCI, and NC. For comparison, 95 samples were reserved in each group. Table 1 shows the detailed data.
For each subject, the resting-state fMRI data were preprocessed with the DPARSF toolbox [27] and the statistical parametric mapping software package (SPM12) [28], applying the standard preprocessing pipeline to the rs-fMRI data set. First, slice timing correction was carried out. The fMRI time series were then realigned using spatial transformations to compensate for the effects of head motion. All images were normalized to MNI (Montreal Neurological Institute) space and resampled to 3 mm voxels. Bandpass filtering (0.01–0.2 Hz) was used to preserve valid information.

3.2. Network Analysis

In order to verify the effectiveness of the network, the small-world attribute was used to validate each snapshot. Previous studies have found that the human brain is a complex network with efficient “small world” attributes [29,30]. If the brain network is in its normal working state, the network mode under most time snapshots of the dynamic network follows the small-world attribute. For the in-snapshots at times $t < T$ and out-snapshots at times $t > T$ in the model, we verified whether the small-world attribute was missing and compared the differences between the cognitive impairment groups.
The shortest path length of a small-world network is smaller than that of a regular network, and its clustering coefficient is larger than that of a random network. Studies have shown that brain functional networks are small-world networks, so the evaluation of a brain functional network can be described by its small-world attributes, which also reflect the activity ability of parts or the whole of the brain.
In order to judge the small-world characteristics, the characteristic path length $L$ and aggregation coefficient $C$ are compared with the same measures estimated in random networks. The number of nodes, average degree, and degree distribution of the random networks are the same as those of the networks of interest.
The random networks are expressed as $H_R$, their clustering coefficients as $C_R$, and their shortest path lengths as $L_R$. In a small-world network, it is usually expected that $\gamma = C/C_R > 1$ and $\lambda = L/L_R \approx 1$. The small-world index is $\sigma = \gamma/\lambda$, which is usually greater than 1 ($\sigma > 1$).
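Given the clustering coefficient and characteristic path length of an observed snapshot and of a degree-matched random network, the small-world index can be sketched as follows; the BFS-based path length assumes a connected, unweighted graph.

```python
import numpy as np
from collections import deque

def char_path_length(A):
    """Characteristic (average shortest) path length of a connected,
    unweighted graph given as a 0/1 adjacency matrix, via BFS."""
    n = len(A)
    neighbors = [np.flatnonzero(A[i]) for i in range(n)]
    total, pairs = 0, 0
    for src in range(n):
        dist = {src: 0}
        q = deque([src])
        while q:
            u = q.popleft()
            for v in neighbors[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        total += sum(d for node, d in dist.items() if node != src)
        pairs += len(dist) - 1
    return total / pairs

def small_world_sigma(C, L, C_rand, L_rand):
    """sigma = gamma / lambda = (C / C_rand) / (L / L_rand); sigma > 1
    indicates small-world organization."""
    return (C / C_rand) / (L / L_rand)
```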
In addition, multiple signals corresponding to the same time positions were intercepted from the original BOLD signals as comparison signals, and the $\ell_1$ loss and $\ell_2$ loss were used to verify the similarity with the actual signals. For position $i$, a network composed of BOLD signals with a starting time in $[i - 3, i + 3]$ and the same length as the time window used during training was used for comparison.

3.2.1. In-Snapshots Analysis

In the training process, starting from the beginning of the signal, a number of BOLD signal segments were obtained with five sampling points as the step size and six sampling points as the segment length; an instantaneous network was established in every window for training the model. The instantaneous snapshots in the signal time range were then extracted from the acquired model, and the usability of these in-snapshots was tested. The test consists of three parts: small-world attribute change, signal similarity measurement, and the influence of the sampled signal window size.
The first test used the $\ell_1$ and $\ell_2$ losses. In fMRI data, a volume is one time point. According to the length of the data signal, we can obtain 140 in-snapshots in the model. For each predicted in-snapshot $i$, a network composed of BOLD signals with a starting time in $[i - 3, i + 3]$ and the time window length used during training (the true value) was used for comparison. The comparison results are shown in Table 2; the means and errors over all subjects are shown in the table. The table compares the error between the predicted values of the NDCN model and the networks partitioned by traditional windows. In addition, some other deep learning methods are compared: we use a GNN [31] as a graph structure extractor and LSTM/GRU/RNN [32] to learn the temporal relationships between ordered structured sequences.
It can be seen from the table that the proposed NDCN method obtains the smallest error values for both the $\ell_1$ and $\ell_2$ losses.
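The paper does not state the exact form of the $\ell_1$ and $\ell_2$ losses; one common choice for comparing a predicted snapshot with its windowed reference is the mean-absolute and root-mean-square error, sketched here under that assumption:

```python
import numpy as np

def snapshot_losses(pred, true):
    """Mean-absolute (l1) and root-mean-square (l2) error between a
    predicted snapshot and its windowed reference (assumed definitions)."""
    diff = pred - true
    return np.abs(diff).mean(), np.sqrt((diff ** 2).mean())
```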
In addition, it is necessary to compare the influence of the sliding window size on the obtained results. It can be seen from Figure 3 and Figure 4 that when the size of the sliding window is 6, the loss is the smallest; all experiments were carried out with this window size.
After that, it was necessary to verify whether every time segment in the dynamic network obtained by the NDCN model still had the small-world attribute, which is an important basis for the usability of the model. As Table 3 shows, regardless of the type of cognitive impairment, the small-world attributes were preserved.
In the experiment, it was found that most snapshots retained the small-world attribute, but the proportion differed among the disease groups. According to the experimental results, the more serious the cognitive impairment, the more snapshots lose the small-world attribute in the dynamic network. Only 70.7% of the snapshots in the dynamic network of AD patients retain the small-world attribute, while healthy people have the largest proportion at 92.1%.

3.2.2. Out-Snapshots Analysis

The instantaneous snapshots outside the signal time range were extracted from the obtained model and the usability of these out-snapshots was tested; 70 out-snapshots were calculated. The test also includes two parts: small-world attribute change and signal similarity measurement. The same comparison method as for the in-snapshots was adopted. The $\ell_1$ and $\ell_2$ losses are shown in Table 4.
It can be seen that, compared to other deep learning methods, the out-snapshots predicted by the NDCN model also have small errors.
After that, it is still necessary to verify whether every time segment in the dynamic network obtained by the NDCN model has the small-world attribute, which is an important basis for the usability of the model. As Table 5 shows, regardless of the cognitive impairment group, the small-world attributes were preserved.
In the experiment, it was found that most out-snapshots retained the small-world attribute, but the proportion differed among the disease groups. According to the experimental results, it is again found that the more serious the cognitive impairment, the more snapshots lose the small-world attribute in the dynamic network. Only 65.7% of the snapshots in the dynamic network of AD patients retain the small-world attribute, while healthy people have the largest proportion at 88.6%.

3.2.3. Comparison with Continuous Dynamic Network Models

In existing research, there are some methods that can be used to model continuous-time dynamics, but these methods are specialized and not fully applicable to the brain functional network; therefore, we compare several of them here to demonstrate the superiority of the NDCN-Brain method, namely RNN-based, TPP-based, and time-embedding-based models, on NC subjects.
It can be seen from Table 6 that the NDCN model is more suitable for modeling a continuous dynamic brain functional network.
Table 6. Comparison with continuous dynamic network models.
Model TypeModel Name 1 Loss 2 Loss
RNN-basedStreaming GNN [33]34.6 ± 4.222.8 ± 3.9
JODIE [34]35.3 ± 2.823.6 ± 3.1
TTP-basedKnow-Evolve [35]38.6 ± 5.428.1 ±  2.8
DyREP [36]33.7 ± 4.142.6 ± 2.2
LDG [37]41.6 ± 3.427.5 ± 5.3
GHN [38]33.4 ± 4.225.9 ± 4.5
Time-embedding-basedTGAT [39]33.7 ± 2.726.5 ± 3.8
TGN [40]29.6 ± 4.216.7 ± 3.6
NDCNNDCN-brain25.7 ± 3.712.8 ± 3.7
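The extrapolation ability behind these results comes from NDCN's continuous-time formulation: the network state is the solution of an ODE, so it can be evaluated at any time point, inside or beyond the observed signal range. A toy sketch with hand-written linear diffusion dynamics standing in for the learned NDCN dynamics (all numbers are illustrative):

```python
import numpy as np

# Toy continuous-time dynamics on a 4-node graph: dx/dt = -L x (heat
# diffusion), standing in for the learned dynamics f(x, G). Because
# the state is defined by an ODE, it can be sampled at any time t.
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
L = np.diag(A.sum(axis=1)) - A        # graph Laplacian
x0 = np.array([1.0, 0.0, 0.0, 0.0])   # initial node states

def snapshot(t, steps=1000):
    """Forward-Euler integration of dx/dt = -L x from 0 to t."""
    x, h = x0.copy(), t / steps
    for _ in range(steps):
        x = x - h * (L @ x)
    return x

x_in = snapshot(1.0)    # a time inside the observed range (in-snapshot)
x_out = snapshot(3.0)   # extrapolated beyond the signal end (out-snapshot)
print(x_out)            # diffusion conserves the total signal mass
```

Discrete-time baselines such as RNNs can only step forward on the training grid, whereas an ODE-based model yields a snapshot at any requested time, which is what makes the out-snapshots of Section 3.2.2 possible.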

3.2.4. Auxiliary Diagnosis for Cognitive Impairment

The purpose of ADNI is to develop a multisite, longitudinal, prospective, and naturalistic study of normal cognitive aging, mild cognitive impairment (MCI), and early Alzheimer's disease as a public-domain research resource to facilitate the scientific evaluation of neuroimaging and other biomarkers for the onset and progression of MCI and Alzheimer's disease (AD) [41]. Early mild cognitive impairment (EMCI), late mild cognitive impairment (LMCI), and AD are successive stages in the development of AD (NC→EMCI→LMCI→AD). Identifying the different stages helps in understanding the characteristic changes of the brain network at each stage and provides a reference for disease research and prevention; therefore, the proposed method is used to distinguish the stages of Alzheimer's progression.
Auxiliary diagnosis classifies features drawn from the network structure, including both in-snapshots and out-snapshots. Here, the clustering coefficients of the snapshot networks [G_total^1, …, G_total^i, …, G_total^T, …, G_total^j, …] were used as the features, where snapshots up to time T are in-snapshots and those after T are out-snapshots. In the experimental comparison, the traditional window-partition method was used as the baseline, and three classifiers, KNN, naive Bayes (NB), and SVM, were compared [42,43]. The experimental results are shown in Table 7.
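A hypothetical sketch of this classification setup, with synthetic clustering-coefficient features (210 values per subject, matching 140 in-snapshots plus 70 out-snapshots) in place of real ADNI-derived features; the group means and spreads below are invented for illustration:

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC

# Synthetic feature matrix: one clustering-coefficient value per dynamic
# snapshot (140 in- + 70 out-snapshots = 210 features) for each subject.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.55, 0.05, (40, 210)),   # "patients": lower clustering
               rng.normal(0.65, 0.05, (40, 210))])  # "controls": higher clustering
y = np.array([1] * 40 + [0] * 40)

scores = {}
for name, clf in [("KNN", KNeighborsClassifier()),
                  ("NB", GaussianNB()),
                  ("SVM", SVC(kernel="rbf"))]:
    scores[name] = cross_val_score(clf, X, y, cv=5).mean()
    print(f"{name}: {scores[name]:.3f}")
```

The synthetic groups here are well separated by construction; on real data the separation is weaker, which is why the accuracies in Table 7 stay below 85%.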
Classification mainly focuses on distinguishing cognitive impairment from the normal population. The results show that the classification accuracy is highest for AD vs. NC (82.5%), followed by LMCI vs. NC (77.5%) and EMCI vs. NC (72.5%): the more severe the cognitive impairment, the more discriminable its dynamic network. Finally, the NDCN-based dynamic network shows clear advantages over direct window partition; it not only uses the information in the BOLD signal but also predicts the future evolution of the network and integrates those characteristics for classification.
In addition, the influence of in-snapshots and out-snapshots on classification was verified, using only the SVM classifier. In Table 8, the baseline is the window-partition method, and the results of in-snapshots and out-snapshots are compared separately. The results show that the full NDCN method has the best diagnostic performance.
It can be inferred that as cognitive impairment deepens, the clustering of the brain network decreases: the network of a normal brain shows higher clustering, while in patients with cognitive impairment clustering degrades and the realization of function is limited. This is why the clustering coefficient can be used as a classification feature. Moreover, because the dynamic network construction method used in this paper no longer applies a time window directly but instead infers the temporal evolution of the network, information is retained more completely, yielding better classification performance.

4. Conclusions

To solve the problems of extracting instantaneous networks and predicting network development beyond the end of the signal, an extensible dynamic brain functional network model is proposed. The model uses NDCN's ability to extract and predict the instantaneous states of a dynamic network and constructs a dynamic network model whose coverage extends beyond the original signal range. Experimental results show that every snapshot in the network obtained by the proposed method has a usable network structure, and the method also achieves good classification results in the diagnosis of cognitive impairment diseases.
The method proposed in this paper models the temporal dynamics of the brain functional network, and there is still room for improvement. The model computes internal network characteristics, but these are used only to fit the temporal dynamics, while the feature used for disease classification is the clustering coefficient of the extracted time segments, which cannot fully reflect the connection changes within each time slice. In addition, by changing the loss function, further characteristic information of the network could be extracted; if the connection changes of network nodes can be extracted, the model could also be applied to research on network integration and segregation, which will be verified in further experiments. Finally, the model has only been evaluated on the brain networks of Alzheimer's disease, and its application to other diseases remains to be developed.

Author Contributions

Conceptualization, J.X. and Z.W. (Zhongyang Wang); methodology, Z.W. (Zhongyang Wang); writing—original draft preparation, Z.W. (Zhongyang Wang); writing–review and editing, J.X. and X.W.; visualization, Z.W. (Zhiqiong Wang) and Q.C.; supervision, J.X.; project administration, Z.W. (Zhiqiong Wang); funding acquisition, Z.W. (Zhiqiong Wang). All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by National Key R&D Program of China (2021YFB3300900); National Natural Science Foundation of China (62072089); Fundamental Research Funds for the Central Universities of China (N2116016, N2104001, N2019007, N2224001-10); Neusoft Group Co., Ltd. Open Project support (NCBETOP2102).

Data Availability Statement

The data are from the Alzheimer's Disease Neuroimaging Initiative database: http://ADNI.oni.usc.edu/ (accessed on 18 September 2021).

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Patterson, C. World Alzheimer Report 2018: The State of the Art of Dementia Research: New Frontiers; Alzheimer’s Disease International (ADI): London, UK, 2018.
2. Atluri, G.; Iii, M.; Lim, K.O.; Kumar, V. The Brain-Network Paradigm: Using Functional Imaging Data to Study How the Brain Works. Computer 2016, 49, 65–71.
3. Bullock, M.; Jackson, G.D.; Abbott, D.F. Artifact Reduction in Simultaneous EEG-fMRI: A Systematic Review of Methods and Contemporary Usage. Front. Neurol. 2021, 12, 622719.
4. Wang, Z.; Xin, J.; Wang, Z.; Yao, Y.; Zhao, Y.; Qian, W. Brain functional network modeling and analysis based on fMRI: A systematic review. Cogn. Neurodyn. 2021, 15, 389–403.
5. Shi, M.; Liu, S.; Chen, H.; Geng, W.; Yin, X.; Chen, Y.C.; Wang, L. Disrupted brain functional network topology in unilateral acute brainstem ischemic stroke. Brain Imaging Behav. 2021, 15, 444–452.
6. Qi, L.; Yin, Y.; Bu, L.; Tang, Z.; Dong, G. Acute VR competitive cycling exercise enhanced cortical activations and brain functional network efficiency in MA-dependent individuals. Neurosci. Lett. 2021, 757, 135969.
7. Ivanoska, I.; Trivodaliev, K.; Kalajdziski, S.; Zanin, M. Statistical and Machine Learning Link Selection Methods for Brain Functional Networks: Review and Comparison. Brain Sci. 2021, 11, 735.
8. Kucyi, A.; Esterman, M.; Capella, J.; Green, A.; Whitfield-Gabrieli, S. Prediction of stimulus-independent and task-unrelated thought from functional brain networks. Nat. Commun. 2021, 12, 1793.
9. Wang, Q.; He, C.; Wang, Z.; Zhang, Z.; Xie, C. Dynamic Connectivity Alteration Facilitates Cognitive Decline in Alzheimer’s Disease Spectrum. Brain Connect. 2020, 11, 213–224.
10. Jie, B.; Liu, M.; Shen, D. Integration of temporal and spatial properties of dynamic connectivity networks for automatic diagnosis of brain disease. Med. Image Anal. 2018, 47, 81–94.
11. Córdova-Palomera, A.; Kaufmann, T.; Persson, K.; AlnaS, D.; Doan, N.T.; Moberget, T.; Lund, M.J.; Barca, M.L.; Engvig, A.; BraKhus, A. Disrupted global metastability and static and dynamic brain connectivity across individuals in the Alzheimer’s disease continuum. Sci. Rep. 2017, 7, 40268.
12. Morin, T.M.; Chang, A.E.; Ma, W.; Mcguire, J.T.; Stern, C.E. Dynamic Network Analysis Demonstrates the Formation of Stable Functional Networks During Rule Learning. Cereb. Cortex 2021, 31, 5511–5525.
13. Moguilner, S.; Garcia, A.; Perl, Y.S.; Tagliazucchi, E.; A, I. Dynamic brain fluctuations outperform connectivity measures and mirror pathophysiological profiles across dementia subtypes: A multicenter study. NeuroImage 2021, 225, 117522.
14. Wang, H.; Liu, X.; Hu, H.; Wan, F.; Li, T.; Gao, L.; Bezerianos, A.; Sun, Y.; Jung, T.P. Dynamic Reorganization of Functional Connectivity Unmasks Fatigue Related Performance Declines in Simulated Driving. IEEE Trans. Neural Syst. Rehabil. Eng. 2020, 28, 1790–1799.
15. Khajehpour, H.; Mohagheghian, F.; Ekhtiari, H.; Makkiabadi, B.; Harirchian, M.H. Computer-aided classifying and characterizing of methamphetamine use disorder using resting-state EEG. Cogn. Neurodyn. 2019, 13, 519–530.
16. Mulert, C.; JaGer, L.; Schmitt, R.; Bussfeld, P.; Pogarell, O.; Moller, H.J.; Juckel, G.; Hegerl, U. Integration of fMRI and simultaneous EEG: Towards a comprehensive understanding of localization and time-course of brain activity in target detection. Neuroimage 2004, 22, 83–94.
17. Strangman, G.; Culver, J.P.; Thompson, J.H.; Boas, D.A. A quantitative comparison of simultaneous BOLD fMRI and NIRS recordings during functional brain activation. Neuroimage 2002, 17, 719–731.
18. Cohen, J.; Eom, K.; Henry, T.; Bricken, C.; Sheridan, M. Dynamic Reconfiguration of Functional Brain Networks in ADHD After Methylphenidate Administration Relates to Improvements in Response Control. Biol. Psychiatry 2020, 87, S19.
19. Jiang, X.; Ma, X.; Geng, Y.; Zhao, Z.; Kendrick, K.M. Intrinsic, dynamic and effective connectivity among large-scale brain networks modulated by oxytocin. NeuroImage 2021, 87, 117668.
20. Dallas, S.; Machairas, K.; Papadopoulos, E. A Comparison of Ordinary Differential Equation Solvers for Dynamical Systems with Impacts. J. Comput. Nonlinear Dyn. 2017, 12, 061016.
21. Omidvarnia, A.; Pedersen, M.; Walz, J.M.; Vaughan, D.N.; Abbott, D.F.; Jackson, G.D. Dynamic regional phase synchrony (DRePS). Hum. Brain Mapp. 2016, 37, 1970–1985.
22. Skarding, J.; Gabrys, B.; Musial, K. Foundations and Modeling of Dynamic Networks Using Dynamic Graph Neural Networks: A Survey. IEEE Access 2021, 9, 79143–79168.
23. Zang, C.; Wang, F. Neural Dynamics on Complex Networks. In Proceedings of the KDD2020, Virtual Conference, 23–27 August 2020; pp. 892–902.
24. Wang, J. Pearson Correlation Coefficient. In Encyclopedia of Systems Biology; Springer: New York, NY, USA, 2013; p. 1671.
25. Ahmadi, A.; Kashefi, M.; Shahrokhi, H.; Nazari, M.A. Computer aided diagnosis system using deep convolutional neural networks for ADHD subtypes. Biomed. Signal Process. Control. 2021, 63, 102227.
26. Chatterjee, I.; Agarwal, M.; Rana, B.; Lakhyani, N.; Kumar, N. Bi-objective approach for computer-aided diagnosis of schizophrenia patients using fMRI data. Multimed. Tools Appl. 2018, 77, 26991–27015.
27. Yan, C.; Zang, Y. DPARSF: A MATLAB Toolbox for ‘Pipeline’ Data Analysis of Resting-State fMRI. Front. Syst. Neurosci. 2010, 4, 13.
28. Eickhoff, S.B.; Stephan, K.E.; Mohlberg, H.; Grefkes, C.; Fink, G.R.; Amunts, K.; Zilles, K. A new SPM toolbox for combining probabilistic cytoarchitectonic maps and functional imaging data. NeuroImage 2005, 25, 1325–1335.
29. Achard, S. A resilient, low-frequency, small-world human brain functional network with highly connected association cortical hubs. J. Neurosci. Off. J. Soc. Neurosci. 2006, 26, 63–72.
30. Wang, J.; Liang, W.; Zang, Y.; Hong, Y.; Yong, H. Parcellation-dependent small-world brain functional networks: A resting-state fMRI study. Hum. Brain Mapp. 2010, 30, 1511–1523.
31. Scarselli, F.; Gori, M.; Tsoi, A.C.; Hagenbuchner, M.; Monfardini, G. Computational capabilities of graph neural networks. IEEE Trans. Neural Netw. 2009, 20, 81–102.
32. Lipton, Z.C.; Berkowitz, J.; Elkan, C. A Critical Review of Recurrent Neural Networks for Sequence Learning. arXiv 2015, arXiv:1506.00019.
33. Ma, Y.; Guo, Z.; Ren, Z.; Zhao, E.; Tang, J.; Yin, D. Streaming Graph Neural Networks. arXiv 2018, arXiv:1810.10627.
34. Kumar, S.; Zhang, X.; Leskovec, J. Predicting Dynamic Embedding Trajectory in Temporal Interaction Networks. In Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, Anchorage, AK, USA, 4–8 August 2019.
35. Wu, J.; Cao, M.; Cheung, J.C.K.; Hamilton, W.L. TeMP: Temporal Message Passing for Temporal Knowledge Graph Completion. arXiv 2020, arXiv:2010.03526.
36. Trivedi, R.; Farajtabar, M.; Biswal, P.; Zha, H. Representation Learning over Dynamic Graphs. arXiv 2018, arXiv:1803.04051.
37. Knyazev, B.; Augusta, C.; Taylor, G.W. Learning Temporal Attention in Dynamic Graphs with Bilinear Interactions. arXiv 2019, arXiv:1909.10367.
38. Han, Z.; Ma, Y.; Wang, Y.; Gunnemann, S.; Tresp, V. Graph Hawkes Neural Network for Forecasting on Temporal Knowledge Graphs. arXiv 2020, arXiv:2003.13432.
39. Xu, D.; Ruan, C.; Korpeoglu, E.; Kumar, S.; Achan, K. Inductive Representation Learning on Temporal Graphs. arXiv 2020, arXiv:2002.07962.
40. Rossi, E.; Chamberlain, B.; Frasca, F.; Eynard, D.; Monti, F.; Bronstein, M. Temporal Graph Networks for Deep Learning on Dynamic Graphs. arXiv 2020, arXiv:2006.10637.
41. Mueller, S.G.; Weiner, M.W.; Thal, L.J.; Petersen, R.C.; Jack, C.; Jagust, W.; Trojanowski, J.Q.; Toga, A.W.; Beckett, L. Alzheimer’s Disease Neuroimaging Initiative; Springer: New York, NY, USA, 2008; pp. 182–189.
42. Sreemathy, J.; Balamurugan, P.S. An efficient text classification using knn and naive bayesian. Int. J. Comput. Sci. Eng. 2012, 4, 392–396.
43. Burges, C. A Tutorial on Support Vector Machines for Pattern Recognition. Data Min. Knowl. Discov. 1998, 2, 121–167.
Figure 1. Overview of dynamic brain function network based on NDCN.
Figure 2. Sliding time window schematic diagram.
Figure 3. Influence of sliding window on ℓ1 loss.
Figure 4. Influence of sliding window on ℓ2 loss.
Table 1. Details of ADNI data set.

Dataset | Subjects (Male/Female) | Ages | Slices | TE | TR | No. TRs
AD | 47/48 | 73.6 ± 15 | 3.31 mm | 30 ms | 3.0 s | 140
EMCI | 47/48 | 72.7 ± 12 | 3.31 mm | 30 ms | 3.0 s | 140
LMCI | 47/48 | 74.8 ± 14 | 3.31 mm | 30 ms | 3.0 s | 140
NC | 47/48 | 75.1 ± 13 | 3.31 mm | 30 ms | 3.0 s | 140
Table 2. The ℓ1 loss and ℓ2 loss comparison results of in-snapshots (each cell shows ℓ1 loss / ℓ2 loss).

Dataset | LSTM-GNN | GRU-GNN | RNN-GNN | NDCN-Brain
AD | 51.8 ± 3.1 / 33.1 ± 3.6 | 44.8 ± 3.0 / 35.3 ± 2.7 | 43.1 ± 3.4 / 32.6 ± 4.2 | 23.7 ± 3.6 / 12.7 ± 4.4
LMCI | 50.7 ± 2.8 / 34.4 ± 2.4 | 47.3 ± 2.8 / 32.6 ± 3.6 | 42.9 ± 3.6 / 31.4 ± 4.5 | 23.4 ± 2.8 / 14.6 ± 2.7
EMCI | 51.5 ± 3.4 / 36.7 ± 2.8 | 43.5 ± 3.3 / 31.8 ± 3.3 | 46.1 ± 3.3 / 29.7 ± 4.6 | 24.5 ± 4.3 / 13.8 ± 3.3
NC | 49.8 ± 2.4 / 37.9 ± 2.9 | 42.7 ± 3.6 / 37.1 ± 2.6 | 41.7 ± 3.2 / 33.2 ± 4.3 | 26.1 ± 4.7 / 14.3 ± 4.2
Table 3. Average small-world results of in-snapshots.

Dataset | C̄ | L̄ | C̄_R | L̄_R | γ̄ | λ̄ | σ̄ | σ > 1
AD | 0.6834 | 1.8903 | 0.3947 | 1.7542 | 1.7314 | 1.0775 | 1.6067 | 70.7% (99/140)
LMCI | 0.6543 | 1.7654 | 0.3876 | 1.6455 | 1.6880 | 1.0728 | 1.5734 | 83.5% (117/140)
EMCI | 0.5874 | 1.8763 | 0.4322 | 1.7355 | 1.3590 | 1.0811 | 1.2571 | 87.8% (123/140)
NC | 0.6446 | 1.9976 | 0.3872 | 1.7365 | 1.6647 | 1.1503 | 1.4471 | 92.1% (129/140)
Table 4. The ℓ1 loss and ℓ2 loss comparison results of out-snapshots (each cell shows ℓ1 loss / ℓ2 loss).

Dataset | LSTM-GNN | GRU-GNN | RNN-GNN | NDCN-Brain
AD | 52.6 ± 3.3 / 31.7 ± 3.4 | 43.2 ± 2.8 / 34.3 ± 2.2 | 44.2 ± 3.5 / 32.7 ± 4.3 | 22.6 ± 3.4 / 14.8 ± 4.2
LMCI | 51.2 ± 2.4 / 32.1 ± 3.3 | 45.2 ± 3.1 / 34.2 ± 3.4 | 43.6 ± 3.4 / 31.8 ± 4.1 | 22.3 ± 2.5 / 13.9 ± 4.7
EMCI | 49.5 ± 3.2 / 33.6 ± 3.1 | 45.3 ± 2.8 / 33.6 ± 3.4 | 42.2 ± 3.7 / 28.7 ± 4.3 | 27.4 ± 3.3 / 14.6 ± 4.3
NC | 52.0 ± 2.3 / 32.4 ± 3.9 | 43.6 ± 2.6 / 32.2 ± 3.6 | 43.1 ± 3.5 / 29.2 ± 4.1 | 25.7 ± 3.7 / 12.8 ± 3.7
Table 5. Average small-world results of out-snapshots.

Dataset | C̄ | L̄ | C̄_R | L̄_R | γ̄ | λ̄ | σ̄ | σ > 1
AD | 0.6907 | 1.7532 | 0.4533 | 1.6732 | 1.5237 | 1.0478 | 1.4541 | 65.7% (46/70)
LMCI | 0.6343 | 1.7344 | 0.3673 | 1.5433 | 1.7269 | 1.1238 | 1.5366 | 75.7% (53/70)
EMCI | 0.5874 | 1.5673 | 0.3576 | 1.5312 | 1.6426 | 1.0235 | 1.6047 | 84.3% (59/70)
NC | 0.6456 | 1.834 | 0.2398 | 1.6322 | 2.6922 | 1.1236 | 2.3960 | 88.6% (62/70)
Table 7. Auxiliary diagnosis for cognitive impairment.

Dataset | Classifier | Method | Accuracy % | Sensitivity % | Specificity % | AUC
AD vs. NC | KNN | Baseline | 65.0 | 70.0 | 60.0 | 0.65
AD vs. NC | KNN | NDCN-Brain | 79.5 | 82.0 | 79.0 | 0.81
AD vs. NC | NB | Baseline | 67.5 | 75.0 | 60.0 | 0.67
AD vs. NC | NB | NDCN-Brain | 81.4 | 81.0 | 82.5 | 0.84
AD vs. NC | SVM | Baseline | 70.0 | 75.0 | 65.0 | 0.69
AD vs. NC | SVM | NDCN-Brain | 82.5 | 85.0 | 80.0 | 0.87
LMCI vs. NC | KNN | Baseline | 62.5 | 75.0 | 50.0 | 0.59
LMCI vs. NC | KNN | NDCN-Brain | 74.5 | 78.0 | 73.0 | 0.78
LMCI vs. NC | NB | Baseline | 60.0 | 65.0 | 55.0 | 0.63
LMCI vs. NC | NB | NDCN-Brain | 76.5 | 77.5 | 73.0 | 0.75
LMCI vs. NC | SVM | Baseline | 62.5 | 70.0 | 55.0 | 0.63
LMCI vs. NC | SVM | NDCN-Brain | 77.5 | 80.0 | 75.0 | 0.78
EMCI vs. NC | KNN | Baseline | 57.5 | 60.0 | 55.0 | 0.56
EMCI vs. NC | KNN | NDCN-Brain | 71.5 | 72.5 | 65.0 | 0.70
EMCI vs. NC | NB | Baseline | 60.0 | 60.0 | 60.0 | 0.57
EMCI vs. NC | NB | NDCN-Brain | 70.5 | 75.0 | 70.0 | 0.74
EMCI vs. NC | SVM | Baseline | 62.5 | 75.0 | 50.0 | 0.59
EMCI vs. NC | SVM | NDCN-Brain | 72.5 | 75.0 | 70.0 | 0.74
Table 8. The influence of in-snapshots and out-snapshots.

Dataset | Method | Accuracy % | Sensitivity % | Specificity % | AUC
AD vs. NC | Baseline | 70.0 | 75.0 | 65.0 | 0.69
AD vs. NC | In-snapshots | 77.5 | 78.0 | 80.5 | 0.82
AD vs. NC | Out-snapshots | 74.5 | 75.0 | 78.0 | 0.78
AD vs. NC | NDCN-Brain (In + Out) | 82.5 | 85.0 | 80.0 | 0.87
LMCI vs. NC | Baseline | 62.5 | 70.0 | 55.0 | 0.63
LMCI vs. NC | In-snapshots | 72.0 | 73.0 | 65.0 | 0.75
LMCI vs. NC | Out-snapshots | 68.5 | 72.0 | 68.0 | 0.70
LMCI vs. NC | NDCN-Brain (In + Out) | 77.5 | 80.0 | 75.0 | 0.78
EMCI vs. NC | Baseline | 62.5 | 75.0 | 50.0 | 0.59
EMCI vs. NC | In-snapshots | 68.5 | 75.0 | 70.0 | 0.68
EMCI vs. NC | Out-snapshots | 65.5 | 70.0 | 65.0 | 0.64
EMCI vs. NC | NDCN-Brain (In + Out) | 72.5 | 75.0 | 70.0 | 0.74

MDPI and ACS Style

Wang, Z.; Xin, J.; Chen, Q.; Wang, Z.; Wang, X. NDCN-Brain: An Extensible Dynamic Functional Brain Network Model. Diagnostics 2022, 12, 1298. https://doi.org/10.3390/diagnostics12051298
