Article

Detecting Coronary Artery Disease from Computed Tomography Images Using a Deep Learning Technique

by Abdulaziz Fahad AlOthman 1,*, Abdul Rahaman Wahab Sait 1 and Thamer Abdullah Alhussain 2
1 Department of Documents and Archive, Center of Documents and Administrative Communication, King Faisal University, P.O. Box 400, Al Hofuf 31982, Al-Ahsa, Saudi Arabia
2 Programming and Electronic Services Department, Admission and Registration Deanship, King Faisal University, P.O. Box 400, Al Hofuf 31982, Al-Ahsa, Saudi Arabia
* Author to whom correspondence should be addressed.
Diagnostics 2022, 12(9), 2073; https://doi.org/10.3390/diagnostics12092073
Submission received: 23 June 2022 / Revised: 13 August 2022 / Accepted: 23 August 2022 / Published: 26 August 2022

Abstract
In recent times, coronary artery disease (CAD) has become one of the leading causes of morbidity and mortality across the globe. Diagnosing the presence and severity of CAD in individuals is essential for choosing the best course of treatment. Computed tomography (CT) currently provides high-spatial-resolution images of the heart and coronary arteries in a short period and allows excellent visualization of the coronary arteries. On the other hand, analyzing cardiac CT scans for signs of CAD presents many challenges. Research studies apply machine learning (ML) to overcome these limitations with high accuracy and consistent performance. Convolutional neural networks (CNN) are widely applied in medical image processing to identify diseases. However, efficient feature extraction is needed to enhance the performance of ML techniques, as the feature extraction process is one of the factors determining their efficiency. Thus, this study intends to develop a method to detect CAD from CT angiography images. It proposes a feature extraction method and a CNN model for detecting CAD in minimum time with optimal accuracy. Two datasets are utilized to evaluate the performance of the proposed model. The present work is unique in applying a feature extraction model with a CNN for CAD detection. The experimental analysis shows that the proposed method achieves prediction accuracies of 99.2% and 98.73%, with F1 scores of 98.95 and 98.82, on the benchmark datasets. In addition, the proposed CNN model achieves areas under the receiver operating characteristic and precision–recall curves of 0.92 and 0.96, and 0.91 and 0.90, for datasets 1 and 2, respectively. The findings highlight that the performance of the proposed feature extraction and CNN model is superior to that of the existing models.

1. Introduction

Coronary artery disease (CAD) has recently come to be regarded as one of the most dangerous and life-threatening chronic diseases [1]. Blockage and narrowing of the coronary arteries are the primary causes of heart failure; the coronary arteries must remain open to supply the heart with adequate blood [2,3,4]. According to a recent survey, the United States has the highest heart disease prevalence and the highest ratio of heart disease patients [5]. The most frequent symptoms of heart disease include shortness of breath, swollen feet, and fatigue. CAD is the most common type of heart disease and can cause chest discomfort, stroke, and heart attack. Other cardiac conditions include heart rhythm disorders, congestive heart failure, congenital heart disease, and cardiovascular disease [6].
Traditional methods of investigating cardiac disease are complex [7,8,9,10]. The lack of medical diagnostic instruments and automated systems makes pulmonary heart disease detection and treatment challenging in developing nations. However, to reduce the impact of CAD, an accurate and timely diagnosis of cardiac disease is necessary. Developing countries are experiencing an alarming rise in the number of people dying from heart disease [11,12,13,14,15,16]. According to the WHO, CAD is the most frequent type of heart disease, claiming the lives of 360,900 individuals globally in 2019 [17]. This figure accounts for nearly 30% of all deaths worldwide, and the number of people affected is increasing exponentially. Multiple risk factors are involved in CAD prediction. Thus, healthcare centers require a tool to detect CAD at earlier stages. Recent developments in CNN models enable researchers to develop a prediction model for CAD. However, a CNN's structure is complex and needs a powerful graphical processing unit (GPU) to process complex images.
Among conventional approaches, angiography is considered one of the most accurate procedures for detecting heart abnormalities. Its disadvantages include the high cost, various side effects, and the need for a high level of technical competence [18]. Due to human error, conventional methods often yield inaccurate diagnoses and take longer to complete. In addition, angiography is a costly and time-consuming method for diagnosing disease and requires considerable processing.
Artificial intelligence (AI) applications have increasingly been incorporated into clinical diagnostic systems over the last three decades to improve their accuracy. Data-driven decision-making using AI algorithms has become increasingly common in the CAD field in recent years [19]. Diagnostic accuracy can be improved by automating and standardizing the interpretation and inference processes. AI-based systems can help speed up decision-making. Healthcare centers can obtain, evaluate, and interpret data from these emerging technologies and thereby provide better patient service [20]. The quality of the raw data can significantly affect the performance of AI approaches. As a result, extensive collaboration between AI engineers and clinical professionals is required to improve the quality of diagnosis [21]. Recent CAD detection techniques are image-based; removing irrelevant features enables faster predictions for clinicians and computer scientists. The key features representing the crucial characteristics of CAD determine the performance of AI techniques [22]. Many studies use deep learning (DL) to determine the existence of CAD.
Convolutional neural networks (CNN) are becoming increasingly popular in medical image processing. CNNs were first demonstrated in medical image analysis in the work of [23] for lung nodule diagnosis. Numerous medical imaging techniques are based on this concept [24,25,26,27]. Using a pre-trained network as a feature generator and fine-tuning a pre-trained network to categorize medical images are two strategies for transferring the knowledge stored in pre-trained CNNs. Standard networks can be divided into multiple classes as pre-trained medical image analysis models. Kernels with large receptive fields are used in the higher layers near the input, while smaller kernels are used in the deeper levels. Among the networks in this group, AlexNet is the most widely used and has many applications in medical image processing [28,29,30,31].
Deep learning networks are advanced AI techniques that have gained popularity in the medical field. The first network in this category was GoogleNet [32,33,34,35,36]. However, the existing methods have shortcomings, such as long computation times and the need for high-end systems. In addition, the performance of current CNN architectures is limited in terms of accuracy and F-measure, and the literature on integrating feature reduction with CAD detection techniques is scarce. Therefore, this study intends to develop a CNN-based classifier to predict CAD with high accuracy. The objectives of the study are as follows:
  • To build a CNN model to predict CAD from CT images.
  • To improve the performance of CNN by reducing the number of features.
The research questions of the proposed study are:
Research Question-1 (RQ1): How to improve the performance of a CAD detection technique?
Research Question-2 (RQ2): How to evaluate the performance of a CAD detection technique?
The structure of the study is organized as follows: Section 2 presents the recent literature related to CNN and CAD. Section 3 outlines the methodology of the proposed research. Results and discussion are highlighted in Section 4. Finally, Section 5 concludes the study with its future improvement.

2. Literature Review

High-accuracy data-mining techniques can identify risk factors for heart disease, and several existing studies address the diagnosis of CAD [1,2,3,4,5]. One such system, built with an artificial immune recognition system (AIRS), K-nearest neighbor (KNN), and clinical data, achieved an accuracy rate of 87% in diagnosing CAD.
The authors [1] developed and evaluated a deep-learning algorithm for diagnosing CAD based on facial photographs. Patients who underwent coronary angiography or CT angiography at nine Chinese locations participated in a multicenter cross-sectional study to train and evaluate a deep CNN to detect CAD using patient facial images. More than 5796 patients were included in the study and were randomly assigned to training and validation groups for algorithm development. According to the findings, a deep-learning algorithm based on facial photographs can help predict CAD.
According to a study [2], the combination of semi-upright and supine stress myocardial perfusion imaging with deep learning can be used to predict the presence of obstructive disease. The total perfusion deficit was calculated using standard gender and camera-type limits. A study [3] employed interferometric optical coherence tomography (OCT) in cardiology to characterize coronary artery tissues, yielding a resolution of between 10 and 20 μm. Using OCT, the authors [3] investigated various deep learning models for robust tissue characterization to learn the various intracoronary pathological formations induced by Kawasaki disease (KD). A total of 33 historical cases of intracoronary cross-sectional images from different pediatric patients with KD were used in the experimentation. The authors analyzed and compared in-depth features generated from three pre-trained convolutional networks. Moreover, voting was conducted to determine the final classification.
The authors [6] used deep-learning analysis of the left ventricular myocardium to identify individuals with functionally significant coronary stenosis in rest coronary CT angiography (CCTA). The study included 166 participants who had undergone sequential invasive fractional flow reserve (FFR) tests and CCTA scans. Analyses were carried out in stages to identify patients with functionally significant stenosis of the coronary arteries.
Using deep learning, the researchers [7] investigated the accuracy of automatically predicting obstructive disease from myocardial perfusion imaging compared to the total perfusion deficit. Single-photon emission computed tomography can be used to build deep convolutional neural networks that better predict coronary artery disease in individual patients and individual vessels. Obstructive disease was found in 1018 patients (62%) and in 1797 of 4914 (37%) arteries in this study. Deep learning yielded a larger area under the receiver operating characteristic curve for disease prediction than the total perfusion deficit. Thus, myocardial perfusion imaging can be improved using deep learning compared to existing clinical techniques.
In the study [8], several deep-learning algorithms were used to classify electrocardiogram (ECG) data into CAD, myocardial infarction, and congestive heart failure. For classification, CNNs and long short-term memory networks (LSTMs) tend to be the most effective architectures. This study built and verified a 16-layer LSTM model using a 10-fold cross-validation procedure, achieving a classification accuracy of 98.5%. The authors claimed their algorithm might be used in hospitals to identify and classify aberrant ECG patterns.
The authors [9] proposed an enhanced DenseNet algorithm based on transfer learning techniques for fundus medical imaging, conducting two separate experiments on fundus imaging data. A DenseNet model can be trained from scratch or fine-tuned using transfer learning; here, pre-trained models are transferred from a realistic image dataset to fundus medical images to improve performance. This method can improve fundus medical image categorization accuracy, which is critical for determining a patient's medical condition.
The study [10] developed and implemented a heterogeneous low-light image-enhancing approach based on DenseNet generative adversarial network. Initially, a generative adversarial network is implemented using the DenseNet framework. The generative adversarial network is employed to learn the feature map from low-light to normal-light images.
To overcome the gradient-vanishing problem in deep networks, DenseNet, a convolutional neural network with dense connections, combines the strengths of ResNet and Highway networks [11,12]. As a result, all network layers can be directly connected through DenseNet: each layer's input is derived from the outputs of all preceding layers. Weak information transmission in the deep network is the primary cause of the loss of gradients [13]. A more efficient way to reduce gradient disappearance and improve network convergence is to use the dense block design, in which each layer is directly coupled to the input and the loss [14].
The authors [15] employed a bright-pass filter and logarithmic transformation to improve image quality. The authors [16] proposed a weighted variational model for simultaneous reflectance and illumination estimation (SRIE) to deal with the issue of overly enhanced dark areas. The authors [17] developed low-light image enhancement via illumination map estimation (LIME), which estimates only the illumination component; the reflection component of the image was calculated using local consistency and structural perception constraints, and the output was based on this calculation.
The study [18] used the Doppler signal and a neural network to obtain the best possible CAD diagnosis. By combining exercise test data with a support vector machine (SVM), the authors [19] achieved an accuracy of 81.46% in the diagnosis of CAD. By employing multiple neural networks, the authors [20] achieved an accuracy of 89.01% for CAD diagnosis using the Cleveland dataset [21]. It is possible to predict artery stenosis disease using various feature selection approaches, including CBA, filter, genetic algorithm, wrapper, and numerical and nominal attribute selection. Also, Ref. [22] uses a new feature creation method to diagnose CAD.
Inception-v3 [24] is an enhanced version of GoogleNet and is applied in medical image analysis; for example, knee images have been categorized by training support vector machines on deep features extracted from CaffeNets. Adults' retinal fundus images were analyzed using a fine-tuned network to detect diabetic retinopathy [24]. Classification results using fine-tuned networks compete with human expert performance [25]. Recent research has focused on applying deep learning techniques to segment retinal optical coherence tomography (OCT) images [26,27,28]. Combining CNN and graph search methods, OCT retinal images are segmented; layer border classification probabilities are used in the Cifar-CNN architecture to partition the graph search layer [29,30].
The authors [31] proposed a deep learning technique to quantify and segment intraretinal cystoid fluid using a fuzzy CNN. Geographic atrophy (GA) segmentation using a deep network is the subject of another study [33]. An automated CAD detector was developed using a CNN with an encoder–decoder architecture [34]. In another study, researchers employed GoogleNet to identify retinal diseases in OCT images [35].
A computer-aided diagnosis approach based on several grayscale features collected from echocardiogram images of normal and CAD participants was proposed in [36]. In [24], heart rate (HR) signals from the ECG data of normal and CAD participants were evaluated. Various methods were used to examine the heart rate data, including non-linear, frequency-domain, and time-domain analyses. The authors found that CAD participants' heart rate signals were less erratic than those of normal subjects. Recent CNN models are widely applied in CAD diagnostics [36]. In [37], the authors proposed a model for identifying cardiovascular diseases and obtained a prediction accuracy of 96.75%. Ali Md Mamun et al. [38] argued that a simple supervised ML algorithm can predict heart disease with high accuracy. The authors [39] developed a biomedical electrocardiogram (ECG)-based ML technique for detecting heart disease. Jiely Yan et al. [40] proposed a model to predict ion channel peptides from images. Table 1 outlines the features and limitations of the existing CNN models.

3. Research Methodology

According to the research questions, the researchers developed a CNN architecture to predict positive CAD patients from CT images. Figure 1 presents the proposed architecture. Initially, the images are processed to extract the features. The CNN model treats the extracted features, generating output through an activation function. The following part of this section provides the information related to datasets, feature extraction, CNN construction, and evaluation metrics.
In this study, researchers employed two datasets of CT angiography images. The details of the datasets are as follows:
Dataset 1 [4] contains coronary artery image sets of 500 patients. Eighteen views of the same straightened coronary artery are shown in each mosaic projection view (MPV). The training–validation–test image sets have a 3/1/1 ratio (300/100/100), with 50% normal and 50% sick cases in each subset. To improve modeling and dataset balance, 2364 (i.e., 394 × 6) artery images were obtained from the 300 training instances. Of these, only 2304 training images were augmented; the standard component, all the validation images, and all the testing images were left unaugmented. Balance was maintained in the validation dataset by randomly selecting one artery per normal case (50 images) and per sick patient (50 images). Figure 2a,b outlines the CT images of positive and negative CAD patients.
Dataset 2 [5] consists of CT angiography images of 200 patients. This dataset used images from a multicenter registry of patients who had undergone clinically indicated coronary computed tomography angiography (CCTA). The annotated ground truth included the ascending and descending aortas (PAA, DA), superior and inferior vena cavae (SVC, IVC), pulmonary artery (PA), coronary sinus (CS), right ventricular wall (RVW), and left atrial wall (LAW). Figure 3 shows the CT images of dataset_2. Table 2 outlines the description of the datasets. Both datasets contain CT images of CAD and Non-CAD patients.
The study applies the following steps for identifying CAD using CNN architecture from datasets:
Step 1: Preprocess images
The CCTA images are processed to fit the feature extraction phase. All images are converted to 600 × 600 pixels. This image size suits the feature extraction process, which generates a reduced set of features without losing any valuable data.
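As a minimal sketch of this resizing step (the paper does not specify the interpolation method, so nearest-neighbor sampling on a NumPy array is an illustrative assumption here):

```python
import numpy as np

def resize_nearest(img: np.ndarray, out_h: int = 600, out_w: int = 600) -> np.ndarray:
    """Resize a 2-D grayscale image to out_h x out_w using nearest-neighbor sampling."""
    in_h, in_w = img.shape
    # Map each output row/column back to its nearest source row/column.
    rows = (np.arange(out_h) * in_h // out_h).clip(0, in_h - 1)
    cols = (np.arange(out_w) * in_w // out_w).clip(0, in_w - 1)
    return img[np.ix_(rows, cols)]
```

In practice a library resampler (e.g., Pillow or OpenCV) with anti-aliasing would likely be preferred; the sketch only shows where the fixed 600 × 600 target enters the pipeline.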
Step 2: Feature extraction
To answer RQ1, the proposed study applies an enhanced features-from-accelerated-segment-test (FAST) [6] algorithm to extract features that support the pooling layer of the CNN in producing effective feature maps. To reduce the processing time of the FAST algorithm, the researchers employed the enhanced FAST [5]. Figure 4 showcases the features extracted from a 4 × 4 image into a 2 × 2 image. In addition, it highlights that the actual image can be reconstructed from the 2 × 2 image back to the 4 × 4 image.
The extraction process is described as follows:
Let an image I of M1 × M2 pixels be divided into segments of S1 × Sn pixels. The number of segments is N1 × Nn, where N1 = M1/S1 and Nn = M2/Sn. The segments are represented in Equation (1).
$$I = \begin{bmatrix} Sd_{1,1} & Sd_{1,2} & \cdots & Sd_{1,N_n} \\ \vdots & \vdots & \ddots & \vdots \\ Sd_{N_1,1} & Sd_{N_1,2} & \cdots & Sd_{N_1,N_n} \end{bmatrix} \tag{1}$$
where Sdx,y referred to the image segment in the x and y direction and is described in Equation (2).
$$Sd_{x,y} = I(i,j) \tag{2}$$
where i and j index the pixels of the image segment $Sd_{x,y}$.
Both Equations (3) and (4) describe the pixel values of image segments.
$$i = (y-1)M_2,\ (y-1)M_2 + 1,\ \ldots,\ yM_2 - 1 \tag{3}$$
$$j = (x-1)M_1,\ (x-1)M_1 + 1,\ \ldots,\ xM_1 - 1 \tag{4}$$
The transformation function ensures that the image or segment can be reconstructed to its original form. It supports the proposed method to backtrack the CNN network to fine-tune its performance. The transformation function for each segment is mentioned in Equation (5) as follows:
$$\varphi_{Sd_{x,y}} = Z_{S_1}\, Sd_{x,y}\, Z_{S_n}^{T} \tag{5}$$
where $\varphi_{Sd_{x,y}}$ represents the extracted feature (transform coefficients) of the image segment, $x = 1, \ldots, N_1$, $y = 1, \ldots, N_n$, the superscript $T$ denotes the matrix transpose, and $Z$ is a transform matrix of order $O$. The segment can be reconstructed as in Equation (6).
$$Sd_{x,y} = Z_{S_1}^{T}\, \varphi_{Sd_{x,y}}\, Z_{S_n} \tag{6}$$
Sequentially, the process must be repeated N1 × Nn times to extract a set of features from the whole image. Thus, the transform coefficients of all image segments can be integrated using Equations (7)–(11).
$$\varphi = \begin{bmatrix} Z_{S_1} Sd_{1,1} Z_{S_n}^{T} & \cdots & Z_{S_1} Sd_{1,N_n} Z_{S_n}^{T} \\ \vdots & \ddots & \vdots \\ Z_{S_1} Sd_{N_1,1} Z_{S_n}^{T} & \cdots & Z_{S_1} Sd_{N_1,N_n} Z_{S_n}^{T} \end{bmatrix} \tag{7}$$
Equations (8) and (9) denote the features F S 1 and F S n , which represent the features that can be constructed using Z s 1 & Z s n , as follows:
$$F_{S_1} = \begin{bmatrix} Z_{S_1} & O & \cdots & O \\ O & Z_{S_1} & \cdots & O \\ \vdots & \vdots & \ddots & \vdots \\ O & O & \cdots & Z_{S_1} \end{bmatrix} \quad \text{of order } N_1 \tag{8}$$
$$F_{S_n} = \begin{bmatrix} Z_{S_n} & O & \cdots & O \\ O & Z_{S_n} & \cdots & O \\ \vdots & \vdots & \ddots & \vdots \\ O & O & \cdots & Z_{S_n} \end{bmatrix} \quad \text{of order } N_n \tag{9}$$
Equation (10) states the orthogonality of the feature matrix rows $n$ and $m$:
$$\sum_{x \in X} F_S(n,x)\, F_S(m,x) = \delta_{nm} \tag{10}$$
Equation (11) defines the reconstruction of the image using the extracted features.
$$I = F_{S_1}^{T}\, \varphi\, F_{S_n} \tag{11}$$
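The block-transform pipeline of Equations (1)–(11) can be sketched as follows. The paper does not name a specific transform matrix Z, so an orthonormal DCT-II matrix and square segments (S1 = Sn = S) are illustrative assumptions; the key property demonstrated is that Equation (6) exactly inverts Equation (5):

```python
import numpy as np

def dct_matrix(S: int) -> np.ndarray:
    """Orthonormal DCT-II matrix, standing in for the transform matrix Z."""
    n = np.arange(S)
    Z = np.cos(np.pi * (2 * n[None, :] + 1) * n[:, None] / (2 * S))
    Z *= np.sqrt(2.0 / S)
    Z[0, :] = np.sqrt(1.0 / S)  # first row has its own normalization
    return Z

def extract_features(img: np.ndarray, S: int) -> np.ndarray:
    """Block-wise transform: phi_{x,y} = Z Sd_{x,y} Z^T, as in Equation (5)."""
    Z = dct_matrix(S)
    h, w = img.shape
    phi = np.zeros_like(img, dtype=float)
    for i in range(0, h, S):
        for j in range(0, w, S):
            phi[i:i+S, j:j+S] = Z @ img[i:i+S, j:j+S] @ Z.T
    return phi

def reconstruct(phi: np.ndarray, S: int) -> np.ndarray:
    """Inverse transform: Sd_{x,y} = Z^T phi_{x,y} Z, as in Equation (6)."""
    Z = dct_matrix(S)
    h, w = phi.shape
    img = np.zeros_like(phi)
    for i in range(0, h, S):
        for j in range(0, w, S):
            img[i:i+S, j:j+S] = Z.T @ phi[i:i+S, j:j+S] @ Z
    return img
```

Because Z is orthonormal ($Z Z^T = I$), the reconstruction is exact, which is what allows the proposed method to backtrack and fine-tune the network as described above.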
Step 3: Processing features
The extracted features $F_{S_1} \ldots F_{S_n}$ are treated as input for the proposed CNN. DenseNet ensures the transmission of information between the layers; one of its characteristic features is the direct link between each layer, so a back-propagation method can be implemented in DenseNet. The feature extraction process reduces the number of blocks in DenseNet and improves its performance. Therefore, the modified DenseNet contains a smaller number of blocks and parameters. Research studies highlight that a complex network requires a greater number of samples. This study applies DenseNet-161 (K = 48), which includes three block modules. Figure 5 illustrates the proposed DenseNet model. Most CNN models depend on the features to make a decision; thus, the feature extraction process is crucial in disease detection techniques. The minimal set of features reduces the training time of the CNN model, and the features should support the CNN in generating effective results. In addition, the researchers applied an edge-detection technique.
Step 3.1: Pooling layer
Two-dimensional filters are used to integrate the features in the area covered by the two-dimensional filter as it slides over each feature map channel. The dimension of the pooling layer output is in Equation (12):
$$\left( \frac{I_h - f + 1}{l} \right) \times \left( \frac{I_w - f + 1}{l} \right) \times I_c \tag{12}$$
where $I_h$ is the height of the feature map, $I_w$ the width of the feature map, $I_c$ the number of channels in the map, $f$ the filter size, and $l$ the stride length.
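Equation (12) can be checked with a small helper. Rounding the division up (equivalent to the conventional $\lfloor (I - f)/l \rfloor + 1$ for valid padding) is assumed for non-integer results:

```python
def pooling_output_shape(i_h: int, i_w: int, i_c: int, f: int, l: int):
    """Output dimensions of a pooling layer per Equation (12):
    ((I_h - f + 1) / l) x ((I_w - f + 1) / l) x I_c, valid padding, stride l.
    Integer ceiling division handles strides that do not divide evenly."""
    out_h = (i_h - f + l) // l  # == ceil((i_h - f + 1) / l)
    out_w = (i_w - f + l) // l
    return out_h, out_w, i_c
```

For example, a 2 × 2 pool with stride 2 over a 600 × 600 × 3 map yields a 300 × 300 × 3 map.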
Step 3.2: Generating output
Transfer learning is adopted to alter the architecture of DenseNet, and Leaky ReLU is used as the activation function. Existing CNN architectures are employed for comparison; a GitHub repository (https://github.com/titu1994/DenseNet accessed on 7 December 2021) is utilized to implement them. The studies [10,18,21] are employed to evaluate the performance of the proposed CNN (PCNN) model. In addition, CNN models including GoogleNet and Inception V3 are used for performance evaluation. A sigmoid function is applied at the output of the modified DenseNet. Figure 6 represents the proposed feature extraction for pre-processing the CT images and extracting the valuable features. Furthermore, Figure 7 highlights the proposed CNN technique for predicting CAD from the CT images.
The study constructs a feed-forward back-propagation network. Thus, Leaky ReLU is employed in the study as the activation function to produce an outcome, as defined in Equation (13):

$$f(x) = \begin{cases} x, & x > 0 \\ 0.01x, & x \le 0 \end{cases} \tag{13}$$

Leaky ReLU treats a negative input as a minimal linear component of x. In code, it can be defined as follows:
def leaky_relu(x):
    if x < 0:
        return 0.01 * x
    else:
        return x
Step 4: Evaluation metrics
The study applies the benchmark evaluation metrics, including accuracy, recall, precision, and F-measure, to provide a solution for RQ2. The metrics are computed as shown in Equations (14)–(18):
True positive (TPCI) = predicting a valid positive CAD patient from CT images (CI).
True negative (TNCI) = predicting a valid negative CAD patient from CI.
False positive (FPCI) = predicting a negative CAD patient as positive from CI.
False negative (FNCI) = predicting a positive CAD patient as negative from CI.
$$\text{Recall} = \frac{TP_{CI}}{TP_{CI} + FN_{CI}} \tag{14}$$
$$\text{Precision} = \frac{TP_{CI}}{TP_{CI} + FP_{CI}} \tag{15}$$
$$F\text{-measure} = \frac{2 \times \text{Recall} \times \text{Precision}}{\text{Recall} + \text{Precision}} \tag{16}$$
$$\text{Accuracy} = \frac{TP_{CI} + TN_{CI}}{TP_{CI} + TN_{CI} + FP_{CI} + FN_{CI}} \tag{17}$$
$$\text{Specificity} = \frac{TN_{CI}}{TN_{CI} + FP_{CI}} \tag{18}$$
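Equations (14)–(18) can be computed directly from the four confusion-matrix counts; a minimal sketch:

```python
def classification_metrics(tp: int, tn: int, fp: int, fn: int) -> dict:
    """Evaluation metrics of Equations (14)-(18) from confusion-matrix counts."""
    recall = tp / (tp + fn)
    precision = tp / (tp + fp)
    f_measure = 2 * recall * precision / (recall + precision)
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    specificity = tn / (tn + fp)
    return {"recall": recall, "precision": precision, "f_measure": f_measure,
            "accuracy": accuracy, "specificity": specificity}
```

For instance, 90 true positives, 85 true negatives, 10 false positives, and 15 false negatives give an accuracy of 175/200 = 0.875.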
In addition, Matthews correlation coefficient (MCC) (Equation (19)) and Cohen’s Kappa (K) (Equation (20)) are employed to ensure the performance of the proposed method.
$$\text{MCC} = \frac{(TP_{CI} \times TN_{CI}) - (FP_{CI} \times FN_{CI})}{\sqrt{(TP_{CI}+FP_{CI})(TP_{CI}+FN_{CI})(TN_{CI}+FP_{CI})(TN_{CI}+FN_{CI})}} \tag{19}$$
The minimum MCC is −1, which indicates a wrong prediction, whereas the maximum MCC is +1, which denotes a perfect prediction.
$$K = \frac{2\left((TP_{CI} \times TN_{CI}) - (FP_{CI} \times FN_{CI})\right)}{(TP_{CI}+FP_{CI})(FP_{CI}+TN_{CI}) + (TP_{CI}+FN_{CI})(FN_{CI}+TN_{CI})} \tag{20}$$
MCC and K are class symmetric, reflecting the ML technologies’ classification accuracy. Finally, CNN technique computational complexity is presented to find the time and space complexities.
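The MCC and Cohen's Kappa of Equations (19) and (20) can be sketched as follows:

```python
import math

def mcc(tp: int, tn: int, fp: int, fn: int) -> float:
    """Matthews correlation coefficient, Equation (19)."""
    num = tp * tn - fp * fn
    den = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return num / den

def cohen_kappa(tp: int, tn: int, fp: int, fn: int) -> float:
    """Cohen's kappa for a binary confusion matrix, Equation (20)."""
    num = 2 * (tp * tn - fp * fn)
    den = (tp + fp) * (fp + tn) + (tp + fn) * (fn + tn)
    return num / den
```

A perfect classifier (no false positives or negatives) scores +1 on both measures, and a fully inverted one scores −1, matching the bounds stated above.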
In order to ensure the predictive uncertainty of the proposed CNN (PCNN), the researchers applied standard deviation (SD) and entropy (E). The mathematical expression of the confidence interval (CI) is defined in Equation (21).
$$CI = a \pm z \frac{\sigma}{\sqrt{N}} \tag{21}$$
where $a$ represents the mean of the predictive distribution of an image $a(i)$, $\sigma$ is its standard deviation, $N$ is the total number of predictions, and $z$ is the critical value of the distribution. The researchers computed the CI at 95% confidence; thus, the value of $z$ is 1.96.
Finally, the researchers followed E of the prediction to evaluate the uncertainty of the proposed model. It is calculated over the mean predictive distribution. The mathematical expression of E is defined in Equation (22).
$$E(P(y|a)) = -\sum_{i=1}^{C} P(y_i|a) \log P(y_i|a) \tag{22}$$
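The CI and entropy measures of Equations (21) and (22) can be sketched as follows (using the population standard deviation and a small epsilon to guard against log(0); both are implementation assumptions not fixed by the text):

```python
import math

def confidence_interval(preds: list, z: float = 1.96) -> tuple:
    """95% CI of the mean predictive probability, Equation (21)."""
    n = len(preds)
    mean = sum(preds) / n
    sd = math.sqrt(sum((p - mean) ** 2 for p in preds) / n)  # population SD
    half = z * sd / math.sqrt(n)
    return mean - half, mean + half

def predictive_entropy(mean_probs: list, eps: float = 1e-12) -> float:
    """Entropy of the mean predictive distribution over C classes, Equation (22)."""
    return -sum(p * math.log(p + eps) for p in mean_probs)
```

A perfectly confident model has zero SD (the CI collapses to a point) and near-zero entropy, while a uniform distribution over the two classes gives the maximum entropy of ln 2.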

4. Experiment and Results

The PCNN is implemented in Python on the Windows 10 Professional platform. The existing algorithms are developed using implementations obtained from GitHub. Both datasets are divided into training and testing sets. Accordingly, the CNN architectures are trained with the relevant training sets of dataset_1 and dataset_2.
To evaluate the performance of the PCNN, the datasets are utilized using 5-fold cross-validation. Statistical tests, including SD, CI using binary-class classification, and E, are applied accordingly to dataset_1 and dataset_2. Table 3 presents the performance of the PCNN during cross-validation using dataset_1; it highlights that the PCNN achieves more than 98% accuracy, precision, recall, F-measure, and specificity. Likewise, Table 4 denotes the cross-validation outcome for dataset_2.

4.1. Uncertainty Estimation

In this study, the researchers apply Monte Carlo dropout (MC dropout) to compute the model uncertainty. The dropout value is chosen so that the predictive distribution does not become overly diverse and the CI remains narrow. The researchers experimentally found that an MC dropout value of 0.379 is optimal for this model. The predictive distribution is obtained by evaluating the PCNN 200 times for each image. Furthermore, model uncertainty is computed using CI, SD, and E.
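The MC-dropout procedure can be illustrated with a toy stochastic model. The single-layer sigmoid "network" below is purely hypothetical (the actual PCNN is not reproduced here), but the sampling and summary steps mirror the description above: dropout stays active at inference, the model is evaluated 200 times, and the mean and SD of the resulting predictive distribution are reported:

```python
import numpy as np

rng = np.random.default_rng(42)

def forward_with_dropout(x: np.ndarray, w: np.ndarray, p: float = 0.379) -> float:
    """One stochastic forward pass; the dropout mask remains active at inference."""
    mask = rng.random(w.shape) >= p            # keep each weight with probability 1 - p
    logits = x @ (w * mask) / (1.0 - p)        # inverted-dropout rescaling
    return 1.0 / (1.0 + np.exp(-logits))       # sigmoid probability of the "CAD" class

def mc_dropout_predict(x: np.ndarray, w: np.ndarray, t: int = 200, p: float = 0.379):
    """Evaluate the stochastic model t times and summarize the predictive distribution."""
    samples = np.array([forward_with_dropout(x, w, p) for _ in range(t)])
    return samples.mean(), samples.std()
```

The per-image mean and SD returned here are the quantities that feed Equations (21) and (22); a low SD across the 200 stochastic passes corresponds to the narrow CIs reported in Tables 5 and 6.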
Table 5 and Table 6 highlight the model uncertainty for dataset_1 and dataset_2, respectively. The proposed model achieved a low entropy and SD for both datasets. It can be observed in Table 5 and Table 6 that the average CI of [98.55–98.61] and [98.45–98.51] for dataset_1 and dataset_2 indicate the proposed model has high confidence and minimum variance in its outcome.
Table 7 highlights the performance measures for dataset_1. Among the CNN architectures, the PCNN scored a superior accuracy, precision, recall, F-measure, and specificity of 98.96, 98.2, 98.52, 98.36, and 98.7, respectively. The performance of the Banerjee model [18] is lower than that of the other CNN architectures. The PCNN performs better than the existing CNN models for CAD prediction. Dataset_1 contains a greater number of images, and the feature mapping made the CNN architectures generate more features. However, the feature extraction process of the proposed method enabled the PCNN to produce a smaller number of features while maintaining better performance than the existing architectures. Figure 8 represents the comparative analysis outcome of the CNNs; it is evident from Figure 8 that the performance of the PCNN is higher than that of the current architectures.
Likewise, Table 8 outlines the performance of the CNN architectures with dataset_2. The values of accuracy, precision, recall, F-measure, and specificity are 98.96, 98.2, 98.52, 98.36, and 98.7, respectively. However, GoogleNet scored a low accuracy, precision, recall, F-measure, and specificity of 97.1, 96.7, 97.1, 96.9, and 96.4, respectively. The absence of temporary memory is one of the limitations of the Banerjee model that reduces its predictive performance. In addition, the outcomes in Tables 7 and 8 suggest that the performance of the PCNN is higher than that of the existing CNN architectures. Figure 9 shows the relevant graph of Table 8.
In addition to the initial comparative analysis, the researchers applied MCC and Kappa to evaluate the performance of the PCNN. Figure 10 and Figure 11 reveal that the PCNN achieved superior MCC and K scores compared to the existing models.
Table 9 outlines the memory size and computing time during the training phase. The PCNN consumes 121.45 MB and 118.45 MB for dataset_1 and dataset_2, respectively, and its computing time is 99.32 min and 99.21 min, respectively. Thus, the PCNN outperforms the existing CNNs in computing time while using less memory. Figure 12 highlights the CNNs' space and computation time for both dataset_1 and dataset_2.
Table 10 outlines the error rates of the CNN architectures during the testing phase. The error rate of the PCNN is 15.1 and 13.9 for dataset_1 and dataset_2, respectively. In contrast, the Jingsi model scores 20.5 and 19.6, which is higher than the other CNN models. The outcome emphasizes the efficiency of the feature extraction process of the PCNN. Figure 13 illustrates the error rates of the CNN models.
Figure 14 represents the receiver operating characteristic (ROC) and precision–recall (PR) curves for dataset_1 during the testing phase. It shows that the PCNN achieves a better area under the ROC curve (AUC) for both the CAD and No-CAD classes.
Similarly, Figure 15 reflects the ROC and PR curve for dataset_2. It outlines that PCNN achieves a better ROC AUC score of 0.93. Furthermore, the AUC score of the PR curve (0.91) indicates that PCNN predicts CAD better than the existing models.
Table 11 highlights the computational complexity of the CNN models for Dataset_1. It is evident from the outcome that PCNN requires fewer parameters (4.3 M) and FLOPs (563 M), a smaller learning rate (1 × 10−4), and a shorter computation time (1.92 s).
Likewise, Table 12 reflects the outcome for Dataset_2. It shows that PCNN generates an output with fewer parameters and FLOPs and a smaller learning rate than the existing CNN models.
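Parameter and FLOP counts of the kind reported in Tables 11 and 12 can be approximated analytically for a convolutional layer. The helper below is a rough sketch under common conventions (one FLOP per multiply and one per add), not the exact accounting used in the study; the example layer shape is assumed.

```python
def conv2d_cost(in_ch, out_ch, kernel, out_h, out_w):
    # Learnable parameters: one kernel per output channel plus a bias term.
    params = (kernel * kernel * in_ch + 1) * out_ch
    # Multiply-accumulate operations over every output position;
    # counting 2 FLOPs per MAC (one multiply, one add).
    macs = kernel * kernel * in_ch * out_ch * out_h * out_w
    return params, 2 * macs

# Example: a 3x3 convolution from 3 to 64 channels on a 224x224 feature map.
params, flops = conv2d_cost(3, 64, 3, 224, 224)
print(params, flops)  # 1792 parameters, 173408256 FLOPs (~173.4 M)
```

Summing such per-layer costs over a network yields the "Number of Parameters" and "Number of FLOPs" columns, which is why architectures with fewer and narrower layers, such as PCNN here, report smaller totals.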

4.2. Clinical Insights and Limitations

PCNN generates outcomes that are superior to those of the existing CNN models. It can be employed in real-time applications to support physicians in diagnosing CAD. In addition, it can be integrated with Internet of Things devices to support healthcare centers in identifying CAD at an earlier stage. The feature extraction and pooling layers of PCNN can detect CAD from complex CT images. The dropout layer randomly deactivates neurons to avoid overfitting. PCNN applies a loss function to update the kernels and weights of the model, which optimizes the model's performance and produces a meaningful outcome.
PCNN produces an effective result and supports the CAD diagnosis process. However, a few limitations need to be addressed in future studies. The multiple layers of the CNN increase the training time and require a powerful graphics processing unit. An imbalanced dataset may reduce the performance of the proposed method. The researchers introduced the concept of temporary storage to hold the intermediate results.
Nonetheless, there is a possibility of losing information due to the multiple features. The lack of coordinate frames may make the model vulnerable to adversarial images. The feature selection process can improve the images' internal representation. Finally, the structure of PCNN requires a considerable amount of data to produce a reliable result. To maintain its performance, data pre-processing is necessary to handle image rotation and scaling.

5. Conclusions

This study developed a CNN model for predicting CAD from CT images. The existing CNN architectures require a high-end hardware configuration to process complex images. A feature extraction technique is employed to support the proposed CNN model. The proposed method modifies the existing DenseNet architecture in order to implement a feed-forward back-propagation network. Two benchmark datasets are used for the performance evaluation. The outcome of the experimental analysis highlights the superior performance of the proposed CNN model in terms of accuracy, precision, recall, F-measure, and specificity. Moreover, the proposed CNN's memory consumption and computation time during the training phase are lower than those of the existing CNNs. In addition, the ROC and PR curve analyses suggest that the proposed method can predict CAD with a lower false positive rate and higher prediction accuracy. Thus, the proposed method can support physicians in detecting and preventing CAD. In the future, the proposed model can be extended to predict CAD from electronic health records.

Author Contributions

Conceptualization, A.F.A., A.R.W.S. and T.A.A.; Data curation, A.F.A. and A.R.W.S.; Formal analysis, A.R.W.S.; Investigation, A.R.W.S.; Methodology, A.F.A. and A.R.W.S.; Project administration, A.F.A. and A.R.W.S.; Resources, A.F.A. and A.R.W.S.; Software, A.R.W.S.; Validation, A.R.W.S.; Visualization, T.A.A.; Writing—original draft, A.F.A. and A.R.W.S.; Writing—review & editing, T.A.A. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the Deanship of Scientific Research, Vice Presidency for Graduate Studies and Scientific Research, King Faisal University, Saudi Arabia [Grant No. GRANT843].

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data presented in this study are available on request from the corresponding author.

Conflicts of Interest

The authors declare that they have no conflict of interest.

References

1. Lin, S.; Li, Z.; Fu, B.; Chen, S.; Li, X.; Wang, Y.; Wang, X.; Lv, B.; Xu, B.; Song, X.; et al. Feasibility of using deep learning to detect coronary artery disease based on facial photo. Eur. Heart J. 2020, 41, 4400–4411.
2. Betancur, J.; Hu, L.H.; Commandeur, F.; Sharir, T.; Einstein, A.J.; Fish, M.B.; Ruddy, T.D.; Kaufmann, P.A.; Sinusas, A.J.; Miller, E.J.; et al. Deep learning analysis of upright-supine high-efficiency SPECT myocardial perfusion imaging for prediction of obstructive coronary artery disease: A multicenter study. J. Nucl. Med. 2019, 60, 664–670.
3. Abdolmanafi, A.; Duong, L.; Dahdah, N.; Adib, I.R.; Cheriet, F. Characterization of coronary artery pathological formations from OCT imaging using deep learning. Biomed. Opt. Express 2018, 9, 4936–4960.
4. Demirer, M.; Gupta, V.; Bigelow, M.; Erdal, B.; Prevedello, L.; White, R. Image Dataset for a CNN Algorithm Development to Detect Coronary Atherosclerosis in Coronary CT Angiography. Mendeley Data, V1. Available online: https://data.mendeley.com/datasets/fk6rys63h9/1 (accessed on 2 November 2021).
5. Hong, Y.; Commandeur, F.; Cadet, S.; Goeller, M.; Doris, M.K.; Chen, X.; Kwiecinski, J.; Berman, D.S.; Slomka, P.J.; Chang, H.J.; et al. Deep learning-based stenosis quantification from coronary CT angiography. Proc. SPIE Int. Soc. Opt. Eng. 2019, 10949, 109492I.
6. Zreik, M.; Lessmann, N.; van Hamersvelt, R.W.; Wolterink, J.M.; Voskuil, M.; Viergever, M.A.; Leiner, T.; Išgum, I. Deep learning analysis of the myocardium in coronary CT angiography for identification of patients with functionally significant coronary artery stenosis. Med. Image Anal. 2018, 1, 72–85.
7. Lih, O.S.; Jahmunah, V.; San, T.R.; Ciaccio, E.J.; Yamakawa, T.; Tanabe, M.; Kobayashi, M.; Faust, O.; Acharya, U.R. Comprehensive electrocardiographic diagnosis based on deep learning. Artif. Intell. Med. 2020, 103, 101789.
8. Hampe, N.; Wolterink, J.M.; Van Velzen, S.G.M.; Leiner, T.; Išgum, I. Machine Learning for Assessment of Coronary Artery Disease in Cardiac CT: A Survey. Front. Cardiovasc. Med. 2019, 6, 172.
9. Xu, X.; Lin, J.; Tao, Y.; Wang, X. An Improved DenseNet Method Based on Transfer Learning for Fundus Medical Images. In Proceedings of the 7th International Conference on Digital Home (ICDH), Guilin, China, 30 November–1 December 2018; pp. 137–140.
10. Zhang, J.; Wu, C.; Yu, X.; Lei, X. A Novel DenseNet Generative Adversarial Network for Heterogenous Low-Light Image Enhancement. Front. Neurorobotics 2021, 15, 700011.
11. Wang, Z.Q.; Zhou, Y.J.; Zhao, Y.X.; Shi, D.M.; Liu, Y.Y.; Liu, W.; Liu, X.L.; Li, Y.P. Diagnostic accuracy of a deep learning approach to calculate FFR from coronary CT angiography. J. Geriatr. Cardiol. 2019, 16, 42–48.
12. Zreik, M.; van Hamersvelt, R.W.; Khalili, N.; Wolterink, J.M.; Voskuil, M.; Viergever, M.A. Deep learning analysis of coronary arteries in cardiac CT angiography for detection of patients requiring invasive coronary angiography. IEEE Trans. Med. Imaging 2019, 39, 1545–1557.
13. Abdar, M.; Książek, W.; Acharya, U.R.; Tan, R.; Makarenkov, V.; Pławiak, P. A new machine learning technique for an accurate diagnosis of coronary artery disease. Comput. Methods Programs Biomed. 2019, 179, 104992.
14. Huang, W.; Huang, L.; Lin, Z.; Huang, S.; Chi, Y.; Zhou, J.; Zhang, J.; Tan, R.S.; Zhong, L. Coronary artery segmentation by deep learning neural networks on computed tomographic coronary angiographic images. In Proceedings of the 40th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Honolulu, Hawaii, 18–21 July 2018; pp. 608–611.
15. Tatsugami, F.; Higaki, T.; Nakamura, Y.; Yu, Z.; Zhou, J.; Lu, Y.; Fujioka, C.; Kitagawa, T.; Kihara, Y.; Iida, M.; et al. Deep learning–based image restoration algorithm for coronary CT angiography. Eur. Radiol. 2019, 29, 5322–5329.
16. Yang, S.; Kweon, J.; Roh, J.H.; Lee, J.H.; Kang, H.; Park, L.J.; Kim, D.J.; Yang, H.; Hur, J.; Kang, D.Y.; et al. Deep learning segmentation of major vessels in X-ray coronary angiography. Sci. Rep. 2019, 9, 16897.
17. Cardiovascular Diseases. 2021. Available online: https://www.who.int/health-topics/cardiovascular-diseases (accessed on 1 November 2021).
18. Banerjee, R.; Ghose, A.; Mandana, K.M. A hybrid CNN-LSTM architecture for detection of coronary artery disease from ECG. In Proceedings of the International Joint Conference on Neural Networks (IJCNN), Glasgow, UK, 19–24 July 2020; pp. 1–8.
19. Zreik, M.; van Hamersvelt, R.W.; Wolterink, J.M.; Leiner, T.; Viergever, M.A.; Isgum, I. A recurrent CNN for automatic detection and classification of coronary artery plaque and stenosis in coronary CT angiography. IEEE Trans. Med. Imaging 2018, 38, 1588–1598.
20. Wolterink, J.M.; van Hamersvelt, R.W.; Viergever, M.A.; Leiner, T.; Išgum, I. Coronary artery centerline extraction in cardiac CT angiography using a CNN-based orientation classifier. Med. Image Anal. 2019, 51, 46–60.
21. Papandrianos, N.; Papageorgiou, E. Automatic Diagnosis of Coronary Artery Disease in SPECT Myocardial Perfusion Imaging Employing Deep Learning. Appl. Sci. 2021, 11, 6362.
22. Khan Mamun, M.M.R.; Alouani, A. FA-1D-CNN Implementation to Improve Diagnosis of Heart Disease Risk Level. In Proceedings of the 6th World Congress on Engineering and Computer Systems and Sciences, Virtual Conference, 13–15 August 2020; pp. 122-1–122-9.
23. Sharma, M.; Acharya, U.R. A new method to identify coronary artery disease with ECG signals and time-Frequency concentrated antisymmetric biorthogonal wavelet filter bank. Pattern Recognit. Lett. 2019, 125, 235–240.
24. Alizadehsani, R.; Abdar, M.; Roshanzamir, M.; Khosravi, A.; Kebria, P.M.; Khozeimeh, F.; Nahavandi, S.; Sarrafzadegan, N.; Acharya, U.R. Machine learning-based coronary artery disease diagnosis: A comprehensive review. Comput. Biol. Med. 2019, 111, 103346.
25. Gülsün, M.A.; Funka-Lea, G.; Sharma, P.; Rapaka, S.; Zheng, Y. Coronary centerline extraction via optimal flow paths and CNN path pruning. In Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention, Athens, Greece, 17–21 October 2016; pp. 317–325.
26. Liu, X.; Mo, X.; Zhang, H.; Yang, G.; Shi, C.; Hau, W.K. A 2-year investigation of the impact of the computed tomography–derived fractional flow reserve calculated using a deep learning algorithm on routine decision-making for coronary artery disease management. Eur. Radiol. 2021, 31, 7039–7046.
27. Nishi, T.; Yamashita, R.; Imura, S.; Tateishi, K.; Kitahara, H.; Kobayashi, Y.; Yock, P.G.; Fitzgerald, P.J.; Honda, Y. Deep learning-based intravascular ultrasound segmentation for the assessment of coronary artery disease. Int. J. Cardiol. 2021, 333, 55–59.
28. Liu, C.Y.; Tang, C.X.; Zhang, X.L.; Chen, S.; Xie, Y.; Zhang, X.Y.; Qiao, H.Y.; Zhou, C.S.; Xu, P.P.; Lu, M.J.; et al. Deep learning powered coronary CT angiography for detecting obstructive coronary artery disease: The effect of reader experience, calcification and image quality. Eur. J. Radiol. 2021, 142, 109835.
29. Lin, A.; Kolossváry, M.; Motwani, M.; Išgum, I.; Maurovich-Horvat, P.; Slomka, P.J.; Dey, D. Artificial Intelligence in Cardiovascular Imaging for Risk Stratification in Coronary Artery Disease. Radiol. Cardiothorac. Imaging 2021, 3, e200512.
30. Cho, H.; Kang, S.; Min, H.; Lee, J.; Kim, W.; Kang, S.H.; Kang, D.; Lee, P.H.; Ahn, J.; Park, D.; et al. Intravascular ultrasound-based deep learning for plaque characterization in coronary artery disease. Atherosclerosis 2021, 324, 69–75.
31. Alizadehsani, R.; Khosravi, A.; Roshanzamir, M.; Abdar, M.; Sarrafzadegan, N.; Shafie, D.; Khozeimeh, F.; Shoeibi, A.; Nahavandi, S.; Panahiazar, M.; et al. Coronary artery disease detection using artificial intelligence techniques: A survey of trends, geographical differences and diagnostic features 1991–2020. Comput. Biol. Med. 2021, 128, 104095.
32. Rim, T.H.; Lee, C.J.; Tham, Y.; Cheung, N.; Yu, M.; Lee, G.; Kim, Y.; Ting, D.S.W.; Chong, C.C.Y.; Choi, Y.S.; et al. Deep-learning-based cardiovascular risk stratification using coronary artery calcium scores predicted from retinal photographs. Lancet Digit. Health 2021, 3, e306–e316.
33. Morris, S.A.; Lopez, K.N. Deep learning for detecting congenital heart disease in the fetus. Nat. Med. 2021, 27, 764–765.
34. Cheung, W.K.; Bell, R.; Nair, A.; Menezes, L.J.; Patel, R.; Wan, S.; Chou, K.; Chen, J. A computationally efficient approach to segmentation of the aorta and coronary arteries using deep learning. IEEE Access 2021, 9, 108873–108888.
35. Li, G.; Wang, H.; Zhang, M.; Tupin, S.; Qiao, A.; Liu, Y.; Ohta, M.; Anzai, H. Prediction of 3D Cardiovascular hemodynamics before and after coronary artery bypass surgery via deep learning. Commun. Biol. 2021, 4, 99.
36. Krittanawong, C.; Virk, H.U.H.; Kumar, A.; Aydar, M.; Wang, Z.; Stewart, M.P.; Halperin, J.L. Machine learning and deep learning to predict mortality in patients with spontaneous coronary artery dissection. Sci. Rep. 2021, 11, 1–10.
37. Doppala, B.P.; Bhattacharyya, D.; Janarthanan, M.; Baik, N. A Reliable Machine Intelligence Model for Accurate Identification of Cardiovascular Diseases Using Ensemble Techniques. J. Healthc. Eng. 2022, 2022, 2585235.
38. Ali, M.M.; Paul, B.K.; Ahmed, K.; Bui, F.M.; Quinn, J.M.W.; Moni, M.A. Heart disease prediction using supervised machine learning algorithms: Performance analysis and comparison. Comput. Biol. Med. 2021, 136, 104672.
39. Khanna, A.; Selvaraj, P.; Gupta, D.; Sheikh, T.H.; Pareek, P.K.; Shankar, V. Internet of things and deep learning enabled healthcare disease diagnosis using biomedical electrocardiogram signals. Expert Syst. 2021, e12864.
40. Yan, J.; Zhang, B.; Zhou, M.; Kwok, H.F.; Siu, S.W.I. Multi-Branch-CNN: Classification of ion channel interacting peptides using multi-branch convolutional neural network. Comput. Biol. Med. 2022, 147, 105717.
Figure 1. Proposed CNN network for CAD.
Figure 2. (a): Positive individual, (b): negative individual.
Figure 3. Superior vena cava images of individuals.
Figure 4. Process of feature extraction.
Figure 5. Fine-tuned DenseNet architecture.
Figure 6. Proposed feature extraction algorithm.
Figure 7. Proposed CNN model.
Figure 8. Comparative analysis outcome: Dataset_1.
Figure 9. Comparative analysis outcome: Dataset_2.
Figure 10. MCC and Kappa: Dataset_1.
Figure 11. MCC and Kappa: Dataset_2.
Figure 12. Computation time of CNN models.
Figure 13. Error rates of CNN models.
Figure 14. Receiver operating characteristic (ROC) and precision–recall curve: dataset_1.
Figure 15. Receiver operating characteristic (ROC) and precision–recall curve: dataset_2.
Table 1. Features of the existing literature.

| Authors | Methodology | Features | Limitations |
| --- | --- | --- | --- |
| Lin, S. et al. [1] | Conducted a cross-sectional study of CAD patients for validating CNN-based CAD detection. | The findings showed that the deep learning algorithm could support physicians in detecting cardiovascular diseases. | The findings are based on a specific location; there is a lack of a benchmark dataset for evaluating the CNN model. |
| Jingsi Z. et al. [10] | Proposed a low-light image enhancement method. | The DenseNet framework reduced the noise in the images. | Lack of discussion of the application to bright images. |
| Abdar, M. et al. [13] | Integrated a genetic algorithm and support vector machine for feature extraction. | The outcome showed that N2Genetic-nuSVM achieved a better accuracy. | Lack of comparison with recent techniques. |
| Wolterink, J.M. et al. [20] | A 3D-dilated CNN is developed to predict the radius of an artery from CCTA images. | Results show that the method extracted 92% of clinically relevant coronary artery segments. | Trained with a small dataset; the outcome may vary with the size of the dataset. |
| Papandrianos, N. and Papageorgiou, E. [21] | Applied a CNN model for CAD detection from images. | The method can differentiate infarction from healthy patients. | The classification accuracy is better; however, there is a lack of benchmark evaluation techniques. |
| Nishi et al. [27] | Developed an image segmentation technique for predicting CAD. | The outcome highlighted that the method could produce effective results. | The performance is based on a single dataset. |
| Cho et al. [30] | Proposed an intravascular ultrasound-based algorithm for classifying attenuation and calcified plaques. | The results outlined that the model achieved 98% accuracy. | The model performance is based on a dataset of 598 patients. |
| Morris, S.A. and Lopez, K.N. [31] | Developed a detection model for congenital heart disease in the fetus. | The outcome showed that the model's performance is better than that of recent models. | The authors evaluated the model using 1326 fetal echocardiograms. |
| Cheung et al. [36] | Proposed an image segmentation approach using the U-Net model. | The model achieved a Dice similarity coefficient of 91.32%. | Lack of discussion of the image quality used in the study. |
| Doppala, B.P. et al. [37] | Developed an ensemble model for cardiovascular disease detection. | The model achieves an accuracy of 96.75%. | The model is based on voting mechanisms, which may lead to a larger computation time. |
| Ali, M.M. et al. [38] | Proposed an ML algorithm for heart disease detection. | The outcome shows that the model achieved 100% accuracy with the Kaggle dataset. | There is a lack of experimentation with different datasets. |
| Khanna, A. et al. [39] | Developed an ML technique for heart disease detection from ECG. | Employed a regression model to predict heart disease from ECG. | Limited discussion of model uncertainty. |
| Yan, J. et al. [40] | Proposed an ML technique for predicting ion channel peptides. | The outcome shows that the model achieves highly accurate results. | The dataset is relatively small. |
Table 2. Description of datasets.

| Dataset | Number of Patients | Number of Images | Classification |
| --- | --- | --- | --- |
| 1 | 500 | 2637 | 2 |
| 2 | 200 | 716 | 2 |
Table 3. Performance analysis of PCNN model for dataset_1.

| Fold(s) | Accuracy | Precision | Recall | F-Measure | Specificity |
| --- | --- | --- | --- | --- | --- |
| 1 | 98.6 | 97.4 | 98.4 | 97.9 | 98.5 |
| 2 | 98.2 | 98.2 | 97.9 | 98.05 | 97.8 |
| 3 | 99.1 | 97.7 | 98.3 | 98 | 98.8 |
| 4 | 99.3 | 98.6 | 98.7 | 98.65 | 98.8 |
| 5 | 99.6 | 99.1 | 99.3 | 99.2 | 99.6 |
| Average | 98.96 | 98.2 | 98.52 | 98.36 | 98.7 |
Table 4. Performance analysis of PCNN model for dataset_2.

| Fold(s) | Accuracy | Precision | Recall | F-Measure | Specificity |
| --- | --- | --- | --- | --- | --- |
| 1 | 98.4 | 97.8 | 98.2 | 98 | 98.1 |
| 2 | 97.8 | 99.3 | 99.1 | 99.2 | 99.3 |
| 3 | 99.1 | 98.7 | 98.7 | 98.7 | 98.6 |
| 4 | 98.9 | 98.2 | 98.6 | 98.4 | 98.2 |
| 5 | 99.1 | 99.3 | 98.7 | 99 | 98.9 |
| Average | 98.66 | 98.66 | 98.66 | 98.66 | 98.62 |
Table 5. Model uncertainty analysis outcome for dataset_1.

| Fold(s) | CI (%) @95% | SD | Entropy |
| --- | --- | --- | --- |
| 1 | [97.92–97.99] | 0.0012 | 0.0049 |
| 2 | [98.12–98.19] | 0.0019 | 0.0329 |
| 3 | [98.79–98.87] | 0.0021 | 0.0319 |
| 4 | [98.84–98.91] | 0.0020 | 0.0281 |
| 5 | [99.08–99.11] | 0.0017 | 0.0091 |
| Average | [98.55–98.61] | 0.0017 | 0.0213 |
Table 6. Model uncertainty analysis outcome for dataset_2.

| Fold(s) | CI (%) @95% | SD | Entropy |
| --- | --- | --- | --- |
| 1 | [98.11–98.18] | 0.0021 | 0.0041 |
| 2 | [97.41–97.49] | 0.0018 | 0.0312 |
| 3 | [98.42–98.46] | 0.0014 | 0.0187 |
| 4 | [99.12–99.17] | 0.0011 | 0.0093 |
| 5 | [99.21–99.26] | 0.0009 | 0.0089 |
| Average | [98.45–98.51] | 0.0014 | 0.0144 |
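The uncertainty quantities in Tables 5 and 6 (95% confidence interval, standard deviation, and entropy) follow standard definitions; below is a minimal sketch, assuming a normal-approximation interval over fold-wise accuracies and Shannon entropy over predicted class probabilities (the paper does not state its exact estimators).

```python
import math

def mean_sd(xs):
    # Sample mean and sample standard deviation (n - 1 denominator).
    m = sum(xs) / len(xs)
    var = sum((x - m) ** 2 for x in xs) / (len(xs) - 1)
    return m, math.sqrt(var)

def ci95(xs):
    # Normal-approximation 95% confidence interval for the mean.
    m, sd = mean_sd(xs)
    half = 1.96 * sd / math.sqrt(len(xs))
    return m - half, m + half

def entropy(probs):
    # Shannon entropy of a predictive distribution (natural log).
    return -sum(p * math.log(p) for p in probs if p > 0)

print(ci95([98.6, 98.2, 99.1, 99.3, 99.6]))  # interval around the fold-wise mean accuracy
print(entropy([0.5, 0.5]))                   # maximal binary uncertainty, ln 2
```

A narrow interval and low predictive entropy, as reported for PCNN, indicate that the model's fold-to-fold performance is stable and its class predictions are confident.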
Table 7. Comparative analysis outcome of CNN model for dataset_1.

| Methods/Measures | Accuracy | Precision | Recall | F-Measure | Specificity |
| --- | --- | --- | --- | --- | --- |
| Jingsi model [10] | 96.7 | 96.2 | 96.7 | 96.45 | 97.65 |
| GoogleNet | 96.9 | 97.1 | 97.4 | 97.25 | 96.5 |
| Inception V3 | 97.8 | 96.7 | 96.1 | 96.4 | 96.2 |
| Banerjee model [18] | 98.1 | 97.3 | 97.5 | 97.4 | 97.57 |
| Papandrianos model [21] | 98.3 | 97.6 | 97.1 | 97.35 | 97.69 |
| PCNN | 98.96 | 98.2 | 98.52 | 98.36 | 98.7 |
Table 8. Comparative analysis outcome of CNN model for dataset_2.

| Methods/Measures | Accuracy | Precision | Recall | F-Measure | Specificity |
| --- | --- | --- | --- | --- | --- |
| Jingsi model | 96.3 | 95.8 | 96.7 | 96.25 | 97.2 |
| GoogleNet | 97.1 | 96.7 | 97.1 | 96.9 | 96.4 |
| Inception V3 | 97.6 | 97.2 | 96.8 | 97 | 97.3 |
| Banerjee model | 98.1 | 97.6 | 97.5 | 97.55 | 97.1 |
| Papandrianos model | 98.3 | 98.2 | 97.9 | 98.05 | 97.8 |
| PCNN | 98.96 | 98.2 | 98.52 | 98.36 | 98.7 |
Table 9. Memory size and computing time of the CNN models for Dataset_1 and Dataset_2.

| Methods/Datasets | Dataset_1 (MB) | Dataset_2 (MB) | Dataset_1 Time (Minutes) | Dataset_2 Time (Minutes) |
| --- | --- | --- | --- | --- |
| Jingsi model | 279.21 | 189.32 | 105.26 | 101.25 |
| GoogleNet | 175.69 | 159.27 | 102.26 | 101.36 |
| Inception V3 | 138.14 | 142.58 | 134.56 | 129.71 |
| Banerjee model | 128.54 | 143.96 | 116.32 | 107.25 |
| Papandrianos model | 129.65 | 137.89 | 101.45 | 103.59 |
| PCNN | 119.25 | 124.26 | 100.56 | 98.89 |
Table 10. Error rates of CNN for Dataset_1 and Dataset_2.

| Methods/Measures | Dataset_1 (%) | Dataset_2 (%) |
| --- | --- | --- |
| Jingsi model | 20.5 | 19.6 |
| GoogleNet | 19.4 | 17.3 |
| Inception V3 | 18.94 | 17.1 |
| Banerjee model | 17.3 | 16.4 |
| Papandrianos model | 16.9 | 15.7 |
| PCNN | 15.1 | 13.9 |
Table 11. Computational complexities of CNN for Dataset_1.

| Methods/Measures | Number of Parameters | Learning Rate | Number of FLOPs | Testing Time (s) |
| --- | --- | --- | --- | --- |
| Jingsi model | 5.1 M | 1 × 10−3 | 565 M | 2.5 |
| GoogleNet | 6.7 M | 1 × 10−3 | 624 M | 2.36 |
| Inception V3 | 7.4 M | 1 × 10−4 | 594 M | 2.7 |
| Banerjee model | 14.6 M | 1 × 10−3 | 1421 M | 2.3 |
| Papandrianos model | 11.2 M | 1 × 10−2 | 1530 M | 2.1 |
| PCNN | 4.3 M | 1 × 10−4 | 563 M | 1.92 |
Table 12. Computational complexities of CNN for Dataset_2.

| Methods/Measures | Number of Parameters | Learning Rate | Number of FLOPs | Computation Time (s) |
| --- | --- | --- | --- | --- |
| Jingsi model | 4.3 M | 1 × 10−3 | 436 M | 1.91 |
| GoogleNet | 5.6 M | 1 × 10−3 | 512 M | 1.72 |
| Inception V3 | 6.3 M | 1 × 10−5 | 402 M | 1.86 |
| Banerjee model | 9.4 M | 1 × 10−4 | 921 M | 1.98 |
| Papandrianos model | 10.3 M | 1 × 10−3 | 430 M | 1.36 |
| PCNN | 3.7 M | 1 × 10−5 | 403 M | 1.15 |
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
