Article

PFT: A Novel Time-Frequency Decomposition of BOLD fMRI Signals for Autism Spectrum Disorder Detection

1 Division of Information and Computing Technology, College of Science and Engineering, Hamad Bin Khalifa University, Doha P.O. Box 3411, Qatar
2 Department of Electronics, College of Science and Technology, University of Saida, Saida 20000, Algeria
3 Institute of Computing, Kohat University of Science and Technology, Kohat 26000, Pakistan
* Author to whom correspondence should be addressed.
Sustainability 2023, 15(5), 4094; https://doi.org/10.3390/su15054094
Submission received: 26 January 2023 / Revised: 17 February 2023 / Accepted: 19 February 2023 / Published: 23 February 2023

Abstract

Diagnosing Autism Spectrum Disorder (ASD) is a challenging task for clinicians due to the inconsistencies in existing medical tests. The Internet of Things (IoT) has been used in several medical applications to realize advancements in the healthcare industry. Using machine learning in tandem with IoT can enhance the monitoring and detection of ASD. To date, most ASD studies have relied primarily on the functional connectivity and structural metrics of fMRI data processing while neglecting the temporal dynamics components. Our research proposes the Progressive Fourier Transform (PFT), a novel time-frequency decomposition, together with a Convolutional Neural Network (CNN), as a preferred alternative to available ASD detection systems. We use the Autism Brain Imaging Data Exchange (ABIDE) dataset for model validation, demonstrating better results of the proposed PFT model compared to existing models, including an increase in accuracy to 96.7%. These results show that the proposed technique is capable of analyzing rs-fMRI data from different brain diseases of the same type.

1. Introduction

The neurological abnormality known as Autism Spectrum Disorder (ASD) affects a person’s capacity for interaction and communication and is defined by a broad range of symptoms and degrees of impairment [1]. A major issue with ASD is that there is no medical test to determine the illness, making diagnosis challenging. Doctors diagnose ASD from the patient’s developmental history and behavior. There is no definitive cure for ASD; the only option is to control the disorder’s progression. ASD can be recognized at the age of eighteen months or younger in the majority of instances, and a diagnosis by an experienced expert can be deemed quite trustworthy by the age of two. However, many children do not receive a clear diagnosis until they are much older. As a result of this identification delay, children with ASD may miss out on essential early assistance. Research has reported that early diagnosis can yield remarkably positive long-term results in children’s development and well-being [2]. According to the US Centers for Disease Control and Prevention, nearly 1 in 44 8-year-old children have been diagnosed with ASD, while about 1 in 100 children worldwide have autism. The worldwide estimate is considered quite low, however, due to the limited availability of data from low- and middle-income countries.
Various studies have attempted to employ brain imaging approaches for the early identification and diagnosis of ASD. The most common brain imaging techniques examined during the resting state include electroencephalography (EEG), magnetoencephalography (MEG), magnetic resonance imaging (MRI), and functional MRI (fMRI) [3]. fMRI provides the best spatial resolution among these non-invasive approaches, and has adequate temporal resolution compared to other methods [4]. In fMRI, blood flow variations and blood oxygenation conditions in the brain regions are described by the blood oxygen level-dependent (BOLD) approach [1,5]. Recent improvements in the utilization of machine learning approaches for the classification of ASD have been reviewed in [6]. The progress of various machine and deep learning models has been reviewed in [7], specifically for the identification/detection and classification of ASD using fMRI data. Innovations in Internet of Things (IoT) technology have been used in several medical applications to realize advancements in the healthcare industry. Using machine learning with IoT can result in enhancements in the monitoring and detection of ASD. In [8], a Raspberry Pi system with IoT was used to analyze EEG data for early seizure detection, with wearable devices based on an IoT ASD detection model used for data collection and monitoring. The sensors in these devices can collect ASD patients’ body temperature, heart rate, facial gestures, eye tracking, hand activity, and motor motion. In a cloud environment, the collected data can be used with machine learning models to aid in decision-making. An application for ASD detection (AutBot) that uses face and emotion recognition and analyzes the patient’s performance on suggested activities was developed in [9], while machine learning with an assistive smart environment has been proposed to help children with ASD communicate with others more easily [10].
Resting-state fMRI (rs-fMRI) is a technique that is frequently used to investigate the intrinsic brain network at rest. Thanks to the availability of the ABIDE dataset, researchers have reported various models for classifying individuals with ASD using rs-fMRI data [1,11,12]. Abraham et al. [13] presented various machine-learning approaches to classify individuals with ASD; utilizing a Support Vector Machine (SVM) classifier, they achieved accuracy, sensitivity, and specificity rates of 66.9%, 53.2%, and 78.3%, respectively. To analyze patterns in the functional connectivity (FC) matrix, a classifier based on deep learning was employed by [14], achieving accuracy, sensitivity, and specificity of 70%, 74%, and 63%, respectively. In addition to these findings, a convolutional neural network (CNN) model for the automated detection of ASD was proposed in [15]. In [16], the ABIDE-I dataset and CC400 functional parcellation atlas of the brain were used to identify ASD with an accuracy of 70.22%. Automated ASD detection based on structural MRI scans and a CNN showed accuracy, sensitivity, and specificity of 72%, 71%, and 73%, respectively. Recent studies related to fMRI have assumed that brain activity remains constant while the scanning session is in process, neglecting the temporal dynamics [1]. This assumption might result in considerable data loss. Although static characteristics minimize computing complexity by assuming activity constancy across time, they may fail to account for changes within the scanning period. According to research, analyzing temporal dynamic aspects can help to distinguish between normal and diseased brain processes [17,18]. A systematic review of ASD diagnosis using rs-fMRI and machine learning algorithms suggested SVM as the best classifier [19].
Similarly, Brahim and Farrugia [20] proposed the Graph Fourier transform with SVM to analyze rs-fMRI, outperforming several similar approaches. Another study presented the Graph Fourier transform in combination with SVM in an approach based on graph signal processing for EEG data classification [21]. The authors of [22] presented automated identification of ASD utilizing hybrid deep lightweight features taken from EEG data. The short-time Fourier transform was used in this hybrid approach to generate spectrogram images; SVM was utilized as a classifier, achieving an accuracy of 96.44%. In [23], the authors extracted features from EEG and areas of interest for eye tracking using power spectrum analysis. Using the minimum redundancy maximum relevance method and an SVM classifier, features relevant to the diagnosis of ASD were selected. By combining both types of data, a classification accuracy rate of 85.44% was achieved.
Motivated by the above research works, this study proposes another method for ASD detection. The contributions of this research are as follows:
  • Progressive Fourier Transform (PFT) is utilized for simultaneous analysis of BOLD signals in both the time and frequency domains, thereby providing a better understanding of the signal’s characteristics over time;
  • PFT is simple to operate computationally, providing a wide variety of mathematical techniques for analyzing the resulting transformed signals;
  • PFT can eliminate image or audio noise from signals via thresholding of the Progressive Fourier coefficients.
The rest of this paper is organized as follows: a step-wise explanation of the research methodology is presented in Section 2; the results obtained during this research work are discussed in Section 3; finally, Section 4 concludes the research work presented in this paper.

2. Proposed Methodology Using PFT

The proposed model based on PFT for the diagnosis of ASD is described in this section. We first discuss the system architecture in detail. Figure 1 illustrates the flowchart of the proposed model for processing BOLD rs-fMRI signals using PFT and a pre-trained DenseNet201.

2.1. Data Preparation

The rs-fMRI data underwent a data preparation stage in which they were preprocessed, as discussed in detail below.

2.1.1. Rs-fMRI Data

The rs-fMRI dataset used in [1] was utilized in this work as the benchmark dataset. This dataset comprises 41 ASD and 41 neurotypical control (NC) cases from the ABIDE dataset, which was obtained from several separate neuro-imaging sites [1].

2.1.2. Preprocessing and ROI Extraction

Collected raw data must usually be preprocessed in order to reduce noise and artifacts. This step must be performed prior to the analysis. To preprocess and analyze our fMRI dataset, we utilized the DPARSF MATLAB toolkit [24]. Our fMRI experiment used functional T2* images, which have lower spatial but higher temporal resolution than structural images. These are made up of a series of discrete MR images that allow the monitoring of oxygenation changes in the brain over time, and are utilized to study the brain and its architecture. BOLD contrast is often used in fMRI, and allows the ratio of oxygenated to de-oxygenated hemoglobin in the blood to be calculated. It does not directly assess brain activity; rather, it assesses the metabolic needs (oxygen consumption) of the active neurons. Changes in the MR signal caused by immediate neural activity are referred to as the hemodynamic response function. As brain activity rises, the metabolic demand for oxygen and nutrients rises as well, necessitating the extraction of oxygen from the blood. As a result, the hemoglobin becomes paramagnetic, causing distortions in the magnetic field and a drop in T2*.
Spikes or ghosting in fMRI data can be caused by electrical instability in the MRI system. Thus, as an initial step, we rejected the first five volumes and only used volumes in which the MRI system demonstrated good equilibrium. Next, we performed slice-timing correction, which is necessary because discrepancies in the acquisition time of different slices can lead to difficulties in fMRI data processing. To eliminate mismatches in the image time series in terms of head location, we used realignment to compensate for differences in head motion [1].

2.1.3. Region-of-Interest

Spatial normalization is used to align the brain’s shape, size, and orientation across participants in order to transform brain images into a shared or standardized template space. The functional images were normalized via unified segmentation of the T1 images, followed by registration to a standard brain template known as the Montreal Neurological Institute (MNI) template. Then, for spatial smoothing, we applied a Gaussian kernel with a full width at half maximum (FWHM) of 8 mm. To split the brain into 116 ROIs, automated anatomical labeling (AAL) was chosen as the standard brain atlas [25]. Such a collection of ROIs functions together as a network that changes as different cognitive states arise. We selected the default mode network (DMN), as it is more active than other brain networks while a patient is awake and at rest [26]. The DMN provides a robust signal of brain neuronal activity in ASD individuals, which can be recovered in terms of temporal dynamic features. The DMN is made up of 22 areas spread over the human brain’s right and left hemispheres.

2.2. BOLD Dynamic Feature Extraction

In the BOLD signal feature extraction stage, the preprocessed data were converted into scalograms and a pre-trained CNN was applied to the data in order to learn the features.

2.2.1. BOLD Signal Extraction

The Fourier transform is a mathematical tool used to study and describe signals and systems. It provides a technique capable of breaking down a complex signal into its component frequencies and amplitudes, which can then be utilized to investigate and comprehend the features of the signal. The Fourier transform is a cornerstone of many disciplines of science and engineering, including signal processing, image processing, control systems, and telecommunications, with applications ranging from audio and image compression to the study of quantum physics and optics.
First, the time-frequency features of each signal were extracted using a new transformation method called the “Progressive Fourier Transform”. In this approach, the Fourier transform of the one-dimensional BOLD signal is computed as the observation endpoint slides along the time axis, resulting in a two-dimensional representation of the BOLD signal. As the signal is transmitted or received, its frequency decomposition is calculated to analyze how the frequencies change over time. Mathematically, this is represented as in Equation (1):
X(f, u) = \int_{-\infty}^{+\infty} e^{-j 2\pi f t} \, x(t) \, \mathbb{1}\{t < u\} \, dt \qquad (1)
where f denotes the frequency, u is the time instant along the BOLD signal x(t) up to which the transform is evaluated, e is the mathematical constant that is the base of the natural logarithm, and j is the imaginary unit, defined as the square root of −1.
To better explain this novel transform method, we can consider the following example.
First, consider a signal x(t) composed of three sinusoids with frequencies of 7.5 Hz, 15 Hz, and 25 Hz, each active over a different time interval, i.e.,
x(t) = 20 \sin(2\pi \cdot 7.5\, t) \, \mathbb{1}_{t \in [2, 2.5]} + 40 \sin(2\pi \cdot 15\, t) \, \mathbb{1}_{t \in [2, 3.2]} + 50 \sin(2\pi \cdot 25\, t) \, \mathbb{1}_{t \in [1, 3.7]},
where \mathbb{1}_{t \in I} equals one if t belongs to the interval I and zero otherwise.
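As a numerical illustration of Equation (1), the sketch below computes, for each time instant u, the Fourier transform of the signal truncated to t < u, yielding a 2D time-frequency map. This is our own NumPy reconstruction of the idea, not the authors' reference code; the function and variable names are assumptions.

```python
import numpy as np

def progressive_fourier_transform(x, fs):
    """Return (freqs, |X(f, u)|) for a 1-D signal x sampled at fs Hz:
    column i holds the spectrum of the signal observed up to sample i."""
    n = len(x)
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    pft = np.empty((len(freqs), n))
    for i in range(n):
        truncated = np.zeros(n)
        truncated[: i + 1] = x[: i + 1]          # x(t) * 1{t < u}
        pft[:, i] = np.abs(np.fft.rfft(truncated))
    return freqs, pft

# The example signal from the text: three sinusoids on different intervals.
fs = 100.0
t = np.arange(0.0, 4.0, 1.0 / fs)
x = (20 * np.sin(2 * np.pi * 7.5 * t) * ((t >= 2.0) & (t <= 2.5))
     + 40 * np.sin(2 * np.pi * 15.0 * t) * ((t >= 2.0) & (t <= 3.2))
     + 50 * np.sin(2 * np.pi * 25.0 * t) * ((t >= 1.0) & (t <= 3.7)))

freqs, pft = progressive_fourier_transform(x, fs)
# Once all components have appeared, the strongest peak of the final
# spectrum sits near 25 Hz (largest amplitude and longest duration).
peak_hz = freqs[np.argmax(pft[:, -1])]
```

Each column of `pft` is a full FFT of the truncated signal, so this naive sketch costs O(n²·log n); it is meant only to make the definition concrete.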
Figure 2 part (c) represents the PFT of the above signal x(t). From parts (b) and (c) of Figure 2, it is clear that the PFT localizes the frequencies more precisely than the wavelet transform; in comparison with the wavelet images, the lines in a PFT image are finer and more slender.

2.2.2. Scalogram Conversion

Figure 2 shows the corresponding time−frequency scalogram of the mixed signal. The new transform has high precision in differentiating between frequencies.
The scalogram plot illustrates the time−frequency features of the BOLD signals, with 22 images produced per individual based on the number of DMN regions. Thus, a total of 1804 images were generated for the 82 participants included in the study. The suggested technique uses scalogram images as input to the pre-trained CNN, which previous research has shown to be competitive in detecting ASD [1].
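To make the conversion step concrete, the sketch below (an assumption on our part, not the authors' pipeline) normalizes a PFT magnitude matrix into an 8-bit greyscale image of the kind a pre-trained CNN expects as input.

```python
import numpy as np

def pft_to_image(mag):
    """Map a non-negative PFT magnitude matrix to an 8-bit greyscale image."""
    mag = np.log1p(mag)                          # compress dynamic range
    span = mag.max() - mag.min()
    norm = (mag - mag.min()) / span if span > 0 else np.zeros_like(mag)
    return np.round(255 * norm).astype(np.uint8)

# Placeholder magnitude matrix standing in for a real PFT of a BOLD signal.
rng = np.random.default_rng(2)
img = pft_to_image(np.abs(rng.normal(size=(64, 64))))
```

In practice the image would also be resized to the CNN's expected input resolution and replicated across three channels; those details depend on the chosen architecture.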

2.2.3. Data Split

After the conversion of the BOLD data into scalogram images, the dataset was separated, with 70% used for training and 30% for testing.
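As an illustration, a 70/30 split of the 1804 scalogram images could be performed with scikit-learn as below. The feature array is a placeholder, and the paper does not state whether stratification was used; it is shown here as a common way to keep the ASD/NC ratio similar in both sets.

```python
import numpy as np
from sklearn.model_selection import train_test_split

# 1804 placeholder feature vectors: 82 participants x 22 DMN scalograms.
rng = np.random.default_rng(0)
features = rng.normal(size=(1804, 64))
labels = np.repeat([0, 1], 902)               # 0 = NC, 1 = ASD (balanced)

X_train, X_test, y_train, y_test = train_test_split(
    features, labels, test_size=0.30, stratify=labels, random_state=42)
```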

2.2.4. Feature Extraction

As a major component of deep learning architectures, CNNs use local convolution filters to capture regional information. CNNs are widely utilized in medical image processing, with many successful applications, particularly in biomedical research. In this research, we chose the CNN architecture with the highest accuracy in [1], namely DenseNet-201, which was used for feature extraction from scalogram images in [27]. A detailed description of the pre-trained models utilized in our research can be found in [1]; the feature extraction process is shown in Figure 3.

2.3. Classification

The extracted features were utilized to train the SVM and KNN classifier models for final assessment and diagnosis during the classification stage.

2.3.1. Classifiers

After being extracted by the activation layer, the pre-trained CNN features were fitted using a K-nearest neighbors (KNN) classifier.
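A minimal sketch of this stage, assuming feature vectors have already been extracted from the CNN activation layer, follows; the random features and their dimensions are placeholders (1920 is DenseNet-201's final feature width).

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# Placeholder CNN features: one 1920-dim vector per training image.
rng = np.random.default_rng(1)
train_feats = rng.normal(size=(120, 1920))
train_labels = rng.integers(0, 2, size=120)   # 0 = NC, 1 = ASD

knn = KNeighborsClassifier(n_neighbors=1)
knn.fit(train_feats, train_labels)
# With k = 1, each training point is its own nearest neighbour, so
# predicting on the training set reproduces the labels exactly; real
# evaluation must use the held-out test split.
train_pred = knn.predict(train_feats)
```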

2.3.2. Evaluation Metrics

Lastly, the effectiveness of each model was assessed using the evaluation metrics presented in Equations (2)–(5). Here, TP (True Positive) counts individuals with Autism Spectrum Disorder (ASD) correctly identified by the model, TN (True Negative) counts individuals correctly predicted as non-ASD, FN (False Negative) counts ASD cases mistakenly predicted as non-ASD, and FP (False Positive) counts non-ASD individuals improperly predicted as having ASD.
\mathrm{Accuracy} = \frac{TP + TN}{TP + TN + FP + FN} \qquad (2)
\mathrm{Precision} = \frac{TP}{TP + FP} \qquad (3)
\mathrm{Sensitivity} = \frac{TP}{TP + FN} \qquad (4)
\mathrm{Specificity} = \frac{TN}{TN + FP} \qquad (5)
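The four metrics can be computed directly from the confusion counts; the helper below and its example counts are illustrative only, not results from the paper.

```python
def binary_metrics(tp, tn, fp, fn):
    """Accuracy, precision, sensitivity, specificity per Equations (2)-(5)."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    precision = tp / (tp + fp)
    sensitivity = tp / (tp + fn)    # recall for the ASD (positive) class
    specificity = tn / (tn + fp)    # recall for the NC (negative) class
    return accuracy, precision, sensitivity, specificity

# Hypothetical confusion counts, chosen only for illustration.
acc, prec, sens, spec = binary_metrics(tp=45, tn=50, fp=2, fn=3)
```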

3. Results Discussion

We followed the research conducted in [1] so as to compare the results of our proposed model with the benchmark model. The scalogram features were extracted for signals from 22 sections of the DMN using pre-trained CNN architectures, then the performance was assessed using a KNN classifier.
The BOLD signals are derived from the DMN’s 22 regions. Figure 4 shows an example of the time−series plots of the BOLD signals and scalogram images for a single DMN region of an ASD patient and an NC patient.
Based on the performance results shown in [1] for the four pre-trained architectures, we selected the top-ranked alternative, DenseNet201. DenseNet201 was trained using the scalograms for the classification task, then the features from each layer were retrieved for use as input to the KNN classifier. Table 1 reports our findings based on the test dataset. The features extracted by the DenseNet-201 network were input to the KNN classifier, achieving results of 96.67%, 95.14%, and 98.43% for accuracy, sensitivity, and specificity, respectively.
Accuracy, sensitivity, and specificity were the performance measures considered in [1] as well. Our proposed model was able to realize enhancements in all three performance measures. The feature extraction layer of DenseNet201 is the key factor behind its superior performance compared to other CNN architectures; known as “conv3 block1 concat”, it is positioned at layer 58 out of 708 layers, which is deeper than in other CNN designs. As a consequence, DenseNet201 has improved feature representation, as can be seen in Table 1. The superior performance of DenseNet-201 is the result of its distinct dense design, in which individual layers accept feature maps from all preceding layers, yielding features with varying levels of complexity. The dense block design of DenseNet-201 permits it to be more compact while requiring fewer parameters than other CNN architectures.
Table 2 reports the performance measures for three different kernels used with the SVM classifier together with the results of different neighborhoods used for the KNN classifier. The KNN classifier was tested with K set to 1, 3, and 5; the respective accuracy, sensitivity, and specificity results are shown in Table 3, with k = 1 providing the best performance. Thus, DenseNet201 was subsequently evaluated with the KNN classifier and k = 1.
Figure 5 shows the confusion matrix when classifying the scalograms of BOLD rs-fMRI signals as belonging to either ASD or NC patients. It can be seen that DenseNet-201 with KNN is the most accurate model, with an accuracy of 96.7%. To anticipate the behavior of our system in an actual application, the model was assessed using k-fold cross-validation. Table 4 shows the performance of DenseNet-201 with KNN for 5, 10, 15, and 20 folds; the best results for accuracy, sensitivity, specificity, and precision were attained with ten folds (96.7%, 96.6%, 96.9%, and 96.8%, respectively). These results indicate that the model is not overfitted and has robust generalization capabilities with respect to novel data.
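The k-fold evaluation described above can be sketched with scikit-learn's `cross_val_score`; the data below are random placeholders, where in the paper's pipeline X would hold DenseNet-201 features of the PFT scalograms and y the ASD/NC labels.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

# Placeholder feature matrix and labels (random, for illustration only).
rng = np.random.default_rng(3)
X = rng.normal(size=(200, 32))
y = rng.integers(0, 2, size=200)

# 10-fold cross-validation of a 1-NN classifier, as in the paper's best setup.
scores = cross_val_score(KNeighborsClassifier(n_neighbors=1), X, y, cv=10)
mean_accuracy = scores.mean()
```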
Table 5 presents additional performance comparisons for the binary classification of ASD vs. NC. Methods based on the static FC of the Pearson correlation [13,14,16] and the covariance matrix [28] achieve an accuracy of at most 79.2%, lower than the dynamic FC. The approach proposed in [1], which used time−frequency decomposition and input the WCT of putamen-R and 115 brain regions to a CNN, achieved an accuracy of 89.9%. This was a significant improvement of 9.8% over the previous approach of Bernas et al. [29], which used the same WCT with only seven brain networks and only in-phase components input to the SVM classifier.
Previous ASD prediction models based on the Pearson correlation coefficients between BOLD signals [15] and structural MRI images [16] have achieved only 70.2% accuracy. Similarly, using the time−frequency components of the BOLD signals by converting them to scalograms of 22 DMN brain regions and inputting them to a CNN with KNN as a classifier [1] achieved an accuracy of only 89.8%. In contrast, our proposed model based on the PFT achieves an accuracy of 96.7%, which is 10.1% higher than the CWT scalogram-based approach demonstrated in [1]. The PFT thus provides a better time-frequency decomposition of BOLD signals, with frequencies more accurately localized in time than with the wavelet decomposition, as shown in Figure 2; for this purpose, the PFT represents a better feature extractor than the wavelet decomposition.
The proposed technique was tested with various pre-trained models and classifiers in order to improve its detection performance. Thus far, DenseNet-201 and 1-NN have yielded the best results, as indicated in Table 1, Table 2 and Table 3. Using only the pre-trained model to classify images extracted from the PFT leads to weak accuracy; the accuracy of our initial model was lower, primarily because our dataset was too small for end-to-end use of the pre-trained model. To achieve better results, it is necessary to use the pre-trained model as a feature extractor and then use another classifier to perform the feature classification, as shown in Figure 6.

4. Conclusions

In this study, we utilized the temporal dynamic characteristics of the BOLD data from particular brain areas for ASD categorization. The PFT, which is effectively a 2D representation of the time−frequency components, was employed to derive the temporal dynamic features of the BOLD signals. Our proposed model provided the best classification results, outperforming other recently published approaches. The new Progressive Fourier Transform is able to produce clearer scalogram images, yielding improved feature extraction and classification. Because collecting fMRI data in real time was not possible given the complexity and high cost of the required technology, this research used a previously available dataset for model evaluation. In our future research, we intend to focus on the use of IoT for ASD detection.

Author Contributions

Conceptualization, S.B.B. and A.T.; Methodology, S.B.B., A.T., D.A.-T. and M.Q.; Validation, A.T.; Formal analysis, D.A.-T.; Investigation, S.H., D.A.-T. and M.Q.; Resources, S.B.B.; Writing—original draft, A.T.; Writing—review and editing, S.H., D.A.-T. and M.Q.; Visualization, S.H.; Supervision, S.B.B.; Funding acquisition, S.B.B. All authors have read and agreed to the published version of the manuscript.

Funding

Open Access funding provided by the Qatar National Library.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data is available online at: https://doi.org/10.3389/conf.fninf.2013.09.00041 (accessed on 25 January 2023).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Al-Hiyali, I.; Yahya, N.; Faye, I.; Khan, Z. Autism spectrum disorder detection based on wavelet transform of BOLD fMRI signals using pre-trained convolution neural network. Int. J. Integr. Eng. 2021, 13, 49–56. [Google Scholar] [CrossRef]
  2. Grzadzinski, R.; Amso, D.; Landa, R.; Watson, L.; Guralnick, M.; Zwaigenbaum, L.; Deak, G.; Estes, A.; Brian, J.; Bath, K.; et al. Pre-symptomatic intervention for autism spectrum disorder (ASD): Defining a research agenda. J. Neurodev. Disord. 2021, 13, 1–23. [Google Scholar] [CrossRef] [PubMed]
  3. Hull, J.; Dokovna, L.; Jacokes, Z.; Torgerson, C.; Irimia, A.; Van Horn, J.D. Resting-state functional connectivity in autism spectrum disorders: A review. Front. Psychiatry 2017, 7, 205. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  4. Crosson, B.; Ford, A.; McGregor, K.M.; Meinzer, M.; Cheshkov, S.; Li, X.; Walker-Batson, D.; Briggs, R.W. Functional imaging and related techniques: An introduction for rehabilitation researchers. J. Rehabil. Res. Dev. 2010, 47, vii. [Google Scholar] [CrossRef] [PubMed]
  5. Fu, Z.; Tu, Y.; Di, X.; Biswal, B.B.; Calhoun, V.D.; Zhang, Z. Associations between functional connectivity dynamics and BOLD dynamics are heterogeneous across brain networks. Front. Hum. Neurosci. 2017, 11, 593. [Google Scholar] [CrossRef] [Green Version]
  6. Xu, M.; Calhoun, V.; Jiang, R.; Yan, W.; Sui, J. Brain imaging-based machine learning in autism spectrum disorder: Methods and applications. J. Neurosci. Methods 2021, 361, 109271. [Google Scholar] [CrossRef]
  7. Feng, W.; Liu, G.; Zeng, K.; Zeng, M.; Liu, Y. A review of methods for classification and recognition of ASD using fMRI data. J. Neurosci. Methods 2022, 368, 109456. [Google Scholar] [CrossRef]
  8. Khiani, S.; Mohamed Iqbal, M.; Dhakne, A.; Sai Thrinath, B.; Gayathri, P.; Thiagarajan, R. An effectual IOT coupled EEG analysing model for continuous patient monitoring. Meas. Sens. 2022, 24, 100597. [Google Scholar] [CrossRef]
  9. Shelke, N.A.; Rao, S.; Verma, A.K.; Kasana, S.S. Autism Spectrum Disorder Detection Using AI and IoT; Association for Computing Machinery: New York, NY, USA, 2022. [Google Scholar]
  10. Mohamed, A.H.; Mohamed, H.; Mosa, E.H.; Alqahtani, A. An AI-Enabled Internet of Things Based Autism Care System for Improving Cognitive Ability of Children with Autism Spectrum Disorders. Comput. Intell. Neurosci. 2022, 2022, 2247675. [Google Scholar]
  11. Yang, X.; Zhang, N.; Schrader, P. A study of brain networks for autism spectrum disorder classification using resting-state functional connectivity. Mach. Learn. Appl. 2022, 8, 100290. [Google Scholar] [CrossRef]
  12. Agastinose Ronicko, J.F.; Thomas, J.; Thangavel, P.; Koneru, V.; Langs, G.; Dauwels, J. Diagnostic classification of autism using resting-state fMRI data improves with full correlation functional brain connectivity compared to partial correlation. J. Neurosci. Methods 2020, 345, 108884. [Google Scholar] [CrossRef]
  13. Abraham, A.; Milham, M.P.; Di Martino, A.; Craddock, R.C.; Samaras, D.; Thirion, B.; Varoquaux, G. Deriving reproducible biomarkers from multi-site resting-state data: An autism-based example. NeuroImage 2017, 147, 736–745. [Google Scholar] [CrossRef] [Green Version]
  14. Heinsfeld, A.S.; Franco, A.R.; Craddock, C.R.; Buchweitz, A.; Meneguzzi, F. Identification of autism spectrum disorder using deep learning and the ABIDE dataset. NeuroImage Clin. 2018, 17, 16–23. [Google Scholar] [CrossRef]
  15. Sherkatghanad, Z.; Akhondzadeh, M.; Salari, S.; Zomorodi-Moghadam, M.; Abdar, M.; Acharya, R.U.; Khosrowabadi, R.; Salari, V. Automated detection of autism spectrum disorder using a convolutional neural network. Front. Neurosci. 2020, 13, 1325. [Google Scholar] [CrossRef] [Green Version]
  16. Aghdam, M.A.; Sharifi, A.; Pedram, M.M. Diagnosis of autism spectrum disorders in young children based on resting-state functional magnetic resonance imaging data using convolutional neural networks. J. Digit. Imaging 2019, 32, 899–918. [Google Scholar] [CrossRef]
  17. Deco, G.; Jirsa, V.; Friston, K.J. The dynamical structural basis of brain activity. In Principles of Brain Dynamics: Global State Interactions; MIT Press: Cambridge, MA, USA, 2012. [Google Scholar]
  18. Atasoy, S.; Deco, G.; Kringelbach, M.L.; Pearson, J. Harmonic brain modes: A unifying framework for linking space and time in brain dynamics. Neuroscientist 2018, 24, 277–293. [Google Scholar] [CrossRef]
  19. Santana, C.P.; de Carvalho, E.A.; Rodrigues, I.D.; Bastos, G.S.; de Souza, A.D.; de Brito, L.L. rs-fMRI and machine learning for ASD diagnosis: A systematic review and meta-analysis. Sci. Rep. 2022, 12, 6030. [Google Scholar] [CrossRef]
  20. Brahim, A.; Farrugia, N. Graph Fourier transform of fMRI temporal signals based on an averaged structural connectome for the classification of neuroimaging. Artif. Intell. Med. 2020, 106, 101870. [Google Scholar] [CrossRef]
  21. Miri, M.; Abootalebi, V.; Saeedi-Sourck, H.; Behjat, H. EEG-based Motor Imagery Decoding via Graph Signal Processing on Learned Graphs. bioRxiv 2022, 13, 1–16. [Google Scholar]
  22. Baygin, M.; Dogan, S.; Tuncer, T.; Datta Barua, P.; Faust, O.; Arunkumar, N.; Abdulhay, E.W.; Emma Palmer, E.; Rajendra Acharya, U. Automated ASD detection using hybrid deep lightweight features extracted from EEG signals. Comput. Biol. Med. 2021, 134, 104548. [Google Scholar] [CrossRef]
  23. Kang, J.; Han, X.; Song, J.; Niu, Z.; Li, X. The identification of children with autism spectrum disorder by SVM approach on EEG and eye-tracking data. Comput. Biol. Med. 2020, 120, 103722. [Google Scholar] [CrossRef] [PubMed]
  24. Yan, C.; Zang, Y. DPARSF: A MATLAB toolbox for “pipeline” data analysis of resting-state fMRI. Front. Syst. Neurosci. 2010, 4, 13. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  25. Tzourio-Mazoyer, N.; Landeau, B.; Papathanassiou, D.; Crivello, F.; Etard, O.; Delcroix, N.; Mazoyer, B.; Joliot, M. Automated anatomical labeling of activations in SPM using a macroscopic anatomical parcellation of the MNI MRI single. Neuroimage 2002, 15, 273–289. [Google Scholar] [CrossRef] [PubMed]
  26. Minyoung, J.; Hirotaka, K.; Daisuke, S.N.; Makoto, I.; Tomoyo, M.; Keisuke, I.; Mizuki, A.; Sumiyoshi, A.; Toshio, M.; Akemi, T.; et al. Default mode network in young male adults with autism spectrum disorder: Relationship with autism spectrum traits. Mol. Autism 2014, 5, 1–11. [Google Scholar]
  27. Yahya, N.; Musa, H.; Ong, Z.Y.; Elamvazuthi, I. Classification of motor functions from electroencephalogram EEG signals based on an integrated method comprised of common spatial pattern and wavelet transform framework. Sensors 2019, 19, 4878. [Google Scholar] [CrossRef] [Green Version]
  28. Chen, H.; Duan, X.; Liu, F.; Lu, F.; Ma, X.; Zhang, Y.; Uddin, L.Q.; Chen, H. Multivariate classification of autism spectrum disorder using frequency-specific resting-state functional connectivity—A multi-center study. Prog. Neuro-Psychopharmacol. Biol. Psychiatry 2016, 64, 1–9. [Google Scholar] [CrossRef]
  29. Bernas, A.; Aldenkamp, A.P.; Zinger, S. Wavelet coherence-based classifier: A resting-state functional MRI study on neurodynamics in adolescents with high-functioning autism. Comput. Methods Programs Biomed. 2018, 154, 143–151. [Google Scholar] [CrossRef]
Figure 1. Flowchart of the proposed model using PFT with pre-trained DenseNet201.
Figure 2. Time-frequency scalogram of mixed signal: (a) x(t) time series plot; (b) wavelet time-frequency scalogram; (c) progressive Fourier transform.
Figure 3. Feature extraction using the proposed pre-trained model.
Figure 4. Time-series graph, showing BOLD signals on the left and the related time-frequency scalogram on the right for NC (first row) and ASD (second row).
Figure 5. Confusion matrix based on the test dataset for the DenseNet201 with KNN model.
Figure 6. Confusion matrix for the classifier without feature extraction.
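The accuracy, sensitivity, and specificity columns in the tables below follow directly from confusion-matrix counts such as those shown in Figures 5 and 6. A minimal sketch of that computation (the counts here are illustrative placeholders, not the paper's actual test results):

```python
# Derive accuracy, sensitivity, and specificity from a 2x2 confusion matrix.
# tp/fn refer to the ASD (positive) class, fp/tn to the NC (negative) class.
def binary_metrics(tp, fn, fp, tn):
    accuracy = (tp + tn) / (tp + fn + fp + tn)
    sensitivity = tp / (tp + fn)   # true-positive rate: ASD correctly detected
    specificity = tn / (tn + fp)   # true-negative rate: NC correctly detected
    return accuracy, sensitivity, specificity

# Illustrative counts only (not the study's confusion matrix):
acc, sen, spe = binary_metrics(tp=58, fn=3, fp=1, tn=60)
print(f"accuracy={acc:.3f} sensitivity={sen:.3f} specificity={spe:.3f}")
```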
Table 1. Performance of the proposed PFT model on the test dataset.

Pre-Trained Model   Classifier   Accuracy %   Sensitivity %   Specificity %
DenseNet-201        KNN          96.7         95.1            98.4
DenseNet-201        SVM          93.0         96.1            92.8
Table 2. DenseNet-201 performance using SVM classifier with varied kernel functions.

Kernel       Accuracy %   Sensitivity %   Specificity %
Linear       94.4         97.4            94.1
Polynomial   83.49        78.4            70.5
Gaussian     83.3         93.9            72.4
Table 3. Performance measures for DenseNet-201 and KNN classifier with different values of k.

k   Accuracy %   Sensitivity %   Specificity %
1   96.7         95.1            98.4
3   94.5         97.6            91.9
5   93.7         98.3            90.1
Table 4. DenseNet201 with KNN model performance measurements (±standard deviation) utilizing k-fold cross-validation.

k-Fold    Accuracy %   Sensitivity %   Specificity %   Precision %
5-fold    96.4 ± 1.5   96.0 ± 1.6      96.6 ± 1.1      96.7 ± 1.5
10-fold   96.7 ± 1.6   96.6 ± 1.4      96.9 ± 1.5      96.8 ± 2.1
15-fold   96.6 ± 1.8   96.9 ± 2.1      97.1 ± 1.9      96.7 ± 2.3
20-fold   97.4 ± 1.8   97.0 ± 2.4      97.5 ± 2.5      97.0 ± 2.2
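The mean ± standard deviation figures in Table 4 come from repeating the train/test split k times and aggregating the per-fold scores. A self-contained sketch of that procedure on synthetic data (the data, the simple nearest-neighbour rule, and all names here are placeholders for the paper's DenseNet201 features and tuned KNN classifier):

```python
import numpy as np

# Synthetic stand-in for the extracted feature matrix and labels.
rng = np.random.default_rng(0)
X = rng.normal(size=(80, 16))
y = (X[:, 0] > 0).astype(int)

def knn_predict(train_X, train_y, test_X, k=1):
    # Majority vote among the k nearest training samples (Euclidean distance).
    d = np.linalg.norm(test_X[:, None] - train_X[None, :], axis=2)
    nearest = np.argsort(d, axis=1)[:, :k]
    return (train_y[nearest].mean(axis=1) >= 0.5).astype(int)

def cross_validate(X, y, folds=5):
    # Interleaved k-fold split; each sample is in the test set exactly once.
    idx = np.arange(len(y))
    accs = []
    for f in range(folds):
        test = idx[f::folds]
        train = np.setdiff1d(idx, test)
        pred = knn_predict(X[train], y[train], X[test])
        accs.append((pred == y[test]).mean())
    return np.mean(accs), np.std(accs)

mean_acc, std_acc = cross_validate(X, y)
print(f"5-fold accuracy: {mean_acc:.3f} +/- {std_acc:.3f}")
```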
Table 5. Comparison between the proposed model and previous studies.

Paper             Classifier   FC Modelling   Method                Subjects   Accuracy (%)
[28]              SVM          Static FC      Pearson correlation   240        79.2
[13]              SVM          Static FC      Covariance matrix     871        67
[14]              DNN          Static FC      Pearson correlation   1035       70
[29]              SVM          Dynamic FC     Wavelet coherence     54         80
[15]              DNN          Static FC      Pearson correlation   871        70.2
[16]              CNN          Dynamic FC     MRI                   116        70.2
[1]               CNN + KNN    Dynamic FC     Wavelet coherence     72         89.8
Proposed method   CNN          Dynamic FC     PFT                   82         68
Proposed method   CNN + KNN    Dynamic FC     PFT                   82         96.7
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

Belhaouari, S.B.; Talbi, A.; Hassan, S.; Al-Thani, D.; Qaraqe, M. PFT: A Novel Time-Frequency Decomposition of BOLD fMRI Signals for Autism Spectrum Disorder Detection. Sustainability 2023, 15, 4094. https://doi.org/10.3390/su15054094