Article

Multisource Smart Computer-Aided System for Mining COVID-19 Infection Data

Mathematics Department, Faculty of Science, Al-Azhar University, Cairo 11651, Egypt
* Author to whom correspondence should be addressed.
Healthcare 2022, 10(1), 109; https://doi.org/10.3390/healthcare10010109
Submission received: 11 November 2021 / Revised: 25 December 2021 / Accepted: 29 December 2021 / Published: 6 January 2022

Abstract
In this paper, we approach the problem of detecting and diagnosing COVID-19 infections using multisource scan images, including CT and X-ray scans, to assist the healthcare system during the COVID-19 pandemic. A computer-aided diagnosis (CAD) system is proposed that analyzes CT or X-ray scans to diagnose the extent of damage to the respiratory system in each infected case. The CAD was optimized via hyper-parameter tuning for both shallow learning (e.g., SVM) and deep learning. For the deep learning, mini-batch stochastic gradient descent was used to overcome fitting problems during transfer learning, and the optimal parameter values were found using the naïve Bayes technique. Our contributions are (i) a comparison among the detection rates of pre-trained CNN models, (ii) a suggested hybrid of deep learning with shallow machine learning, (iii) an extensive analysis of the results of COVID-19 transition and informative conclusions through developing various transfer techniques, and (iv) a comparison of the accuracy of previous models with that of the systems of the present study. The effectiveness of the proposed CAD is demonstrated using three datasets, either using a deep learning model as a fully end-to-end solution or using a hybrid deep learning model. Six experiments were designed to illustrate the superior performance of our suggested CAD compared to other similar approaches. Our system achieves 99.94%, 99.6%, 100%, 97.41%, 99.23%, and 98.94% accuracy for the binary- and three-class labels of the CT dataset and the two CXR datasets.

1. Introduction to COVID-19 and Diagnosis

The widespread COVID-19 pandemic constitutes a severe threat to global health. Therefore, most new research has used tools and techniques for tracking COVID-19 and discovering infection areas to minimize the risk of its spread. Because of the massive quantity of data available every day on COVID-19 infection, spread, detection, deaths, etc., there is a need for big data analytics, storage, and security in NoSQL database management systems [1,2]. Machine learning and AI approaches can evaluate large quantities of COVID-19 data to create new models and techniques for diagnosing COVID-19. Big data analysis techniques are crucial for analyzing more data in less time, as time is a critical factor when treating COVID-19 infection cases. Furthermore, AI techniques enable a global visualization of the analyzed COVID-19 big data, presenting an overview of global health and confirmed cases of COVID-19. In addition, images of the lungs can indicate the presence of COVID-19. Tracking COVID-19 to enhance community health therefore requires comprehensive data and intelligent computational instruments. Numerous researchers have employed big data and AI tools in a variety of approaches to track the disease, as shown in Figure 1.
COVID-19 is an infectious disease caused by a coronavirus; coronaviruses are a large family of viruses that can affect both humans and animals and cause respiratory difficulties [1]. Historically, 2020 was considered a volatile year for humans worldwide compared to previous years because of COVID-19, as it posed a massive threat to global health. As of March 2021, there had been more than 128 million confirmed illnesses and approximately 3 million deaths worldwide [2]. The number of infected subjects continues to increase, with more than 150 countries reportedly having at least one case [3].
Image scanning is helpful for diagnosing COVID-19 in infected subjects. Patients who have been exposed and show severe symptoms of the virus may not be identified by RT-PCR tests [4,5,6], whose outcomes can be non-deterministic. Image scanning includes chest X-ray (CXR) and computed tomography (CT) images. CT scans have proven to be one of the most accurate methods of diagnosis for COVID-19 [7]. However, there are several significant drawbacks [8], such as the high cost and unsuitability for bedside testing [9]. Consequently, CT is not usually used in COVID-19 diagnosis, nor is it necessary for observing the progression of specific cases, especially in seriously ill patients [10]. In contrast, the X-ray technique is less sensitive than CT for COVID-19 detection, with a reported baseline sensitivity of 69 percent [11]. The X-ray is also a cheaper, faster option and is available in many healthcare centers. Positive X-ray results reduce the need for CT screening when there is a strong clinical suspicion of COVID-19 infection [11]. However, X-rays present limitations for some patients, including pregnant women, since the radiation can affect the fetus [12]. When analyzing X-ray images to diagnose COVID-19, radiologists also examine multiple patchy, segmental, or sub-segmental ground-glass-density shadows in both lungs [13]. This examination can be automated to assist experts in making a decision [14,15,16].
Therefore, big data and AI technology offer an essential role in the battle against COVID-19. Both tools might help doctors to diagnose COVID-19 cases more quickly and accurately. Accordingly, computer-based models for predicting, foretelling, analyzing, and distributing SARS-CoV-2 drugs have been designed and developed, allowing machine learning, computer vision, and robotic technology to be applied. In addition, AI and big data tools include visualization to illustrate information that supports regional transmission and risk allocation.
Different studies [17,18] were carried out based on deep learning technologies to diagnose and classify various diseases, such as viral pneumonia and organ tumors. Today, deep learning technologies are used widely in the healthcare domain. This study makes the following contributions:
  • A sample-efficient deep learning algorithm for the diagnosis of COVID-19 based on CXR and CT scans.
  • Three COVID-19 datasets were used to train and test the proposed CAD. The datasets include 4001 positive CT scans of COVID-19 clinical results and 3835 positive CXR images. As far as we are aware, this is the most widely accessible CT dataset for COVID-19.
  • An extensive analysis of the results of COVID-19 transition was planned and conducted; informative conclusions are presented through developing various transfer techniques.
  • Self-supervised learning with transfer learning was utilized to learn strong, unbiased feature representations, reducing the chance of over-fitting when learning from limited labeled information.
  • Detailed studies were carried out to show that our CAD is successful. The results, on average, were 99.18% accuracy, 99.69% recall, and 99.4% precision on the COVID-19 CXR and CT imaging datasets.
This paper discusses the most recent research in Section 2. Section 3 presents an overview of the methods and techniques used. The proposed model is presented in Section 4. Section 5 gives a brief description of the dataset used and explains the computer system configuration, parameter settings, and performance metrics. Section 6 presents the experiment and discussion. Finally, Section 7 concludes the paper with an outline of future work.

2. Background on Machine Learning and Deep Learning

Deep learning (DL) is a subset of the machine learning (ML) branch and represents the third generation of artificial neural networks. The principal objective of DL is the simulation of high-level data abstractions [19,20,21]. DL architectures use numerous layers to progressively extract higher-level features from the raw data, producing several neuron layers organized layer by layer.
For computer vision and image processing, there are numerous architectures of various types, such as generative adversarial networks (GANs) [22], convolutional neural networks (CNNs) [23,24], and deconvolutional networks [25].
CNNs are mainly utilized for images; an encoder-decoder CNN variant was suggested by Badrinarayanan et al. [24]. A CNN learns weights that distinguish among different artifacts in an image, and this approach needs less pre-processing compared with other shallow classification algorithms [26]. For input images, a CNN uses filters to capture spatial and temporal dependencies [27]. In a CNN, the input is an m × m × r volume, where m denotes the height and width and r corresponds to the channel number or depth. Every convolution layer has several kernels of size k [28]. As mentioned previously, filtering forms the basis of the connections, producing k feature maps, each of size (m, m, 1) and sharing the same parameters. The convolution layer calculates the dot product between weights and inputs, as in an MLP, but over only a small region of the original input volume, as shown in Equation (1). In addition, a non-linear activation function activates the output of the convolutional layers [27]:
$h_k = f(W_k \ast s + b_k)$ (1)
The output of the current layer k is denoted by $h_k$, the kernel or weights of the current layer by $W_k$, the output of the previous layer by $s$, and the bias of the current layer by $b_k$. The number of computational parameters is an essential indicator of a deep learning model's complexity. The output characteristic maps can be described according to the following formula [27]:
$M = \frac{N - F}{S} + 1$ (2)
The input map dimensions are denoted by N and the filter dimensions (receptive area) by F, while M refers to the output map dimensions and S to the stride length. Usually, padding is used to guarantee that the input and output of a convolution operation are the same size.
The padding number varies according to the kernel size. The number of rows and columns for padding is calculated in Equation (3) [29].
$P = \frac{F - 1}{2}$ (3)
where the amount of padding is symbolized by $P$, and $F$ represents the dimensions of the kernels.
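To make Equations (2) and (3) concrete, the following minimal Python sketch (our illustration, not part of the original MATLAB implementation) computes the output map size and the "same" padding for a convolution layer; note that, with padding, the general form of Equation (2) becomes M = (N − F + 2P)/S + 1:

```python
def conv_output_size(n: int, f: int, s: int = 1, p: int = 0) -> int:
    """Output map size M for input size N, filter size F, stride S, and
    padding P (Equation (2), extended with the padding term)."""
    return (n - f + 2 * p) // s + 1

def same_padding(f: int) -> int:
    """Padding P that preserves the spatial size at stride 1 (Equation (3))."""
    return (f - 1) // 2

# Example: a 224 x 224 input convolved with a 3 x 3 filter at stride 1 and
# "same" padding keeps its 224 x 224 spatial size.
assert conv_output_size(224, 3, s=1, p=same_padding(3)) == 224
```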
One of the most important principles in computer engineering is the reusability of components. In turn, many architectures have been introduced, including AlexNet, ResNet-50, ResNet-101, VGG-16, and VGG-19 [27,30]. Therefore, we intend to reuse these models in accordance with transfer-learning guidelines. The transfer-learning process reuses information from the source domain in the target domain [31]; see Figure 2 for further explanation. Parameter optimization, structural reformulation, regularization, etc., are different improvement categories that have attracted interest from many research communities. However, the main driver of CNN performance improvement appears to have come from the rearrangement of processing units and the design of new blocks. The majority of advancements in CNN designs have been in the areas of depth and spatial exploitation, aiming to develop an excellent internal representation from raw pixels without requiring extensive processing.
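As an illustration of this reuse, the sketch below loads an ImageNet-pre-trained VGG-16 and replaces its final fully connected layer for the COVID-19 task. The authors implemented their system in MATLAB, so this PyTorch version is an assumed equivalent rather than their code:

```python
import torch.nn as nn
from torchvision import models

num_classes = 2  # 2 for COVID-19 vs. normal; 3 when a pneumonia class is added

# Load VGG-16 pre-trained on ImageNet (the source domain).
model = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)

# Optionally freeze the convolutional feature extractor so that only the
# classifier head is fine-tuned on the target domain.
for param in model.features.parameters():
    param.requires_grad = False

# Replace the last fully connected layer (fc8) with a task-specific head.
model.classifier[6] = nn.Linear(model.classifier[6].in_features, num_classes)
```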
AlexNet is a feed-forward CNN with a depth of eight layers and a spatial-exploitation architecture [32]. It has five convolution layers (conv1 through conv5) as well as three fully connected layers (fc6, fc7, fc8) [33]. It was trained by classifying 1 million images into 1000 different categories [23].
VGG-16 was trained using the same training set used for AlexNet. It contains three fully connected layers (fc6, fc7, fc8) and five convolutional blocks comprising 13 convolutional layers [34]. On the other hand, VGG-19 comprises 19 layers, including five convolutional blocks of 16 convolutional layers and three fully connected layers (fc6, fc7, and fc8).
Each ResNet variant, such as ResNet-50 and ResNet-101, has its own residual block design. ResNet-50 is a 50-layer network cascading from a convolution layer through 16 residual blocks to a final fully connected layer. ResNet-101 has a total of 101 layers and 33 residual blocks [35]. Table 1 shows how the contemporary models compare in terms of error, network parameters, the maximum number of connections, and more.
Machine learning (ML) algorithms are known for learning underlying relationships in data and making decisions without the need for explicit instructions. The capacity of a CNN to utilize spatial or temporal correlation in data is one of its most appealing features. The support vector machine (SVM) is a shallow classification algorithm developed by Vapnik [36]. The SVM reduces learning steps and offers a quicker solution than other common algorithms [37,38]. The SVM classifier is built on the concept of an optimal separating hyperplane that differentiates between two classes, positive and negative, as shown in Equation (4) [39], where K is the kernel function:
$f(x) = \mathrm{sign}\left(\sum_{i=1}^{n} \alpha_i y_i K(x_i, x) + b\right)$ (4)
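For illustration, the sketch below fits an SVM with an RBF kernel on random placeholder features and evaluates the decision rule of Equation (4); scikit-learn's decision_function returns the summation term plus the bias b, and taking its sign yields the predicted class:

```python
import numpy as np
from sklearn.svm import SVC

# Placeholder training data standing in for extracted image features.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(100, 64))
y_train = rng.integers(0, 2, size=100)

clf = SVC(kernel="rbf", C=1.0).fit(X_train, y_train)

x = rng.normal(size=(1, 64))
score = clf.decision_function(x)  # sum_i alpha_i * y_i * K(x_i, x) + b
label = np.sign(score)            # the sign(.) of Equation (4)
```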

3. Brief Coverage of Previous Works

Many researchers are currently working to establish early detection models that can identify COVID-19 infection before an outbreak spreads:
Zhou Tao et al. [40] proposed EDL_COVID (an ensemble deep learning model) to detect COVID-19 disease from 2933 CT images. The proposed model depends on the three ensemble models AlexNet, GoogleNet, and ResNet.
An ensemble strategy was proposed by Rohit Kundu et al. [41] for detecting COVID-19 in CT scan images for human lungs. They employed two datasets of CT scan images to create decision scores for the proposed ensemble model utilizing three CNN models: VGG-11, ResNet-50-2, and Inception v3.
The authors of [15] proposed a 3D deep convolutional neural network called DeCoVNet to identify COVID-19 from CT images. However, when diagnosing COVID-19, the algorithm worked as a black box, because it focused on DL and was still at an early stage of explanatory ability.
COVNet [16] was developed and tested for the efficiency of COVID-19 detection utilizing chest CT. The researchers proposed a 3D deep learning system, and the robustness evaluation of the model included community-acquired pneumonia (CAP) and other non-pneumonia exams.
In comparison with the RT-PCR assay for COVID-19, Yang et al. [18] assessed the diagnostic value and consistency of chest CT. They suggested that chest CT should be considered for COVID-19 screening, comprehensive assessment, and follow-up, particularly in epidemic areas with a high preliminary probability of disease.
Horry et al. [32] used three different medical imaging modalities (X-ray, ultrasound, and CT) for diagnosing COVID-19 stably and automatically. They utilized a deep VGG transfer-learning network to refine their analysis. The accuracy of their classification was stated to be 86%, 84%, and 100% for the three different datasets.
Ying et al. obtained a 94% accuracy and a 99% AUC with CT images utilizing a deep model based on ResNet-50, known as DRE-Net [42]. They also considered an approach for target identification, i.e., indicating the areas of concern with bounding boxes [43]. The VGG architecture [44] has been used to diagnose symptomatic lung regions [34]. A suggested method distinguishes cases of community-acquired pneumonia (CAP) and non-pneumonia (NP) from COVID-19 in the population.
Jiang et al. [15] proposed an early screening strategy using pulmonary CT imaging to distinguish COVID-19 mutations from viral influenza pneumonia and stable cases. Several CNN models were suggested and utilized to identify the CT image datasets and quantify the risk of infection with COVID-19. The results may be beneficial in deep learning technologies for the early screening of COVID-19 patients. In the classic ResNet for feature extraction, the authors have proposed a location-attention mechanism.
The AIMDP model was suggested [42] for use with adaptable artificial intelligence techniques to improve the model's diagnostic and predictive roles. The authors of [32] developed a framework focused on deep learning for the detection of viral pneumonia in CT.
The authors of [44] also provide an overview of the most recent artificial intelligence systems for COVID-19 diagnostics, although their work was based on X-ray images only. To estimate COVID-19 diagnostics, Ghoshal et al. [45] presented a Bayesian convolutional neural network, differentiating between COVID-19 and non-COVID-19 cases with a 92.9% accuracy. Binary classification was carried out by Narin et al. [46] for detecting COVID-19, achieving the best accuracy of 98.0% with ResNet-50 among the various deep learning (DL) models. Zhang et al. [47] presented a ResNet-based COVID-19 model (0.952 AUC) that illustrates the affected pneumonia areas by applying the Grad-CAM gradient-activation approach.
Finally, Wang et al. [48] suggested a deep CNN rated as 83.5% accurate in distinguishing COVID-19, non-COVID-19, and uninfected cases.
These studies have provided detailed solutions for combating the COVID-19 pandemic. However, certain drawbacks must be taken into account. At best, researchers used small datasets of fewer than 400 COVID-19 images; in some cases, only 10 X-ray images were used for the COVID-19 class to validate the framework. Furthermore, there was no basis for comparison or medical surveillance of the obtained results, which could indicate not only COVID-19 identification but also the location of affected areas in the lungs. For COVID-19 identification using X-ray images, an iteratively pruned ensemble of deep learning models was proposed [49]. This research made use of a CNN and a set of pre-trained models; the proposed algorithm enhances memory efficiency while reducing complexity.

4. Architecture of the Smart CAD System

The proposed CAD system depends on deep learning, transfer learning, and shallow machine learning. In deep learning, multiple hidden layers are stacked for learning objects. These layers require a training process, including "fine-tuning", to slightly adjust the weights of the DNN found in pre-training during the backpropagation procedure. Hence, DL nets can extract and classify the features and effectively make a precise decision after an efficient training process. Transfer learning is used in the proposed CAD system to optimize multiple CNN architectures for the datasets. The transfer-learning methodology generates optimally fitted CNNs capable of classifying and diagnosing COVID-19 infection from scan images. In addition, these fine-tuned models can extract the feature set usable by the different shallow classifiers. Figure 3 shows a context diagram, from scanning the image of the inspected case to detecting the infection response, using the proposed smart CAD system, whose key components include:
  • Scanning: The source of the input image used to check the status of COVID-19 infection. The supported format of scans can be either CT or CXR images.
  • Pre-processing: A set of procedures performed on every newly scanned image before the diagnosis process begins. It comprises auto color correction, auto contrast enhancement, resizing the image to the standard size, and normalizing color channels (a minimal sketch of this step follows the list below).
  • Diagnosing: A key component of the medical CAD system to detect, assist, and advise the doctors in their inspection and symptom analysis during the examination process. It can be divided into:
    Classification: A vital component of the smart CAD system in which different architectures can be alternately used. These models are responsible for extracting the features and the classification.
    Decision Unit: This depends on the most common and powerful DL activation function, ReLU. Making the final decision is the subsequent responsibility of the classification component.
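The pre-processing component described above can be sketched as follows. This is a minimal Python illustration of the listed steps (auto contrast, resizing, channel normalization); the 224 × 224 target size is assumed from Table 1 rather than specified by the authors:

```python
from PIL import Image, ImageOps
import numpy as np

def preprocess(path: str, size: int = 224) -> np.ndarray:
    """Apply the pre-processing steps listed above to one scanned image."""
    img = Image.open(path).convert("RGB")   # unify the color channels
    img = ImageOps.autocontrast(img)        # auto contrast enhancement
    img = img.resize((size, size))          # resize to the standard input size
    arr = np.asarray(img, dtype=np.float32) / 255.0
    # Normalize each color channel to zero mean and unit variance.
    return (arr - arr.mean(axis=(0, 1))) / (arr.std(axis=(0, 1)) + 1e-8)
```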
Figure 4 shows the different phases of the proposed system in a layered, sub-black-box style, briefly describing the essential layers of the proposed smart CAD system. According to current knowledge, all COVID-19 detection systems for CXR or CT scan image analysis consist of a few significant layers: an input layer, a model layer, an activation layer, and a model output layer. In turn, the classification and decision stages of every CAD system using deep learning must include a collection of these different layers. Each group of these layers in a specific order, from input layer to output layer, is called a network architecture (e.g., AlexNet, VGG-16, VGG-19). Next, each layer's role and its importance for the medical CAD system are discussed in detail.

4.1. Input Layer

This layer reads the image data collection in advance. In other words, the CXR and CT scan images are pre-processed independently. In the pre-processing phase, the images are reconstructed and resized. The images are taken from various sources and their dimensions vary, since the images produced by medical instruments contain various letters, annotations, and medical symbols. Moreover, each model layer requires its own input image dimensions. Therefore, the input image size was adjusted to fit the models used in this analysis while preserving the lung and chest area as far as possible.

4.2. Model Layer

This layer represents the leading layer of the proposed smart CAD system, in which most calculations are carried out. The calculations include extracting image dataset features while preserving the spatial relationships between image pixels. The data are moved from the input layer to the model layer, which contains four sub-black boxes. The first sub-layer is the CNN-based AlexNet, used with the aim of utilizing AlexNet's pre-trained approach to diagnose COVID-19. The second sub-layer is the CNN-based ResNet, in two versions, ResNet-50 and ResNet-101, distinguished from other architectures by residual blocks that feed their values into the following layers; a block is added every two layers, between the linear transformation and the ReLU activation. ResNet-101 uses more layers than ResNet-50, whose blocks have three layers. The ResNet-50 model offers fast training and considerable benefit because image residuals are learned rather than features [35].
The third sub-layer is the CNN-based VGG sub-layer. Although VGG is a single model family, its main advantage over previous architectures is that its commonly used CNN blocks are organized more thoroughly, with two or three convolutional layers per block. VGG has a strong feature representation, and the model can serve as a helpful feature extractor for new images [34]. The last sub-layer is the SVM classifier. Since the SVM is a good classification algorithm, it can be used to classify features that have already been extracted; the features are derived from the previous sub-layers (see Figure 5).
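The hybrid pipeline of these sub-layers (a fine-tuned CNN as feature extractor followed by an SVM) can be sketched as below. This PyTorch/scikit-learn version is our assumed equivalent of the MATLAB implementation, and the choice of the pooled convolutional output as the feature vector is an assumption, since the exact layer is not stated:

```python
import torch
import numpy as np
from sklearn.svm import SVC

def extract_features(model: torch.nn.Module, images: torch.Tensor) -> np.ndarray:
    """Run a torchvision VGG/AlexNet-style model up to its pooled convolutional
    output and return flattened feature vectors for the SVM."""
    model.eval()
    with torch.no_grad():
        feats = model.avgpool(model.features(images))
        return torch.flatten(feats, 1).numpy()

# Usage sketch (model, train_images, train_labels, test_images assumed to exist):
# svm = SVC(kernel="rbf").fit(extract_features(model, train_images), train_labels)
# predictions = svm.predict(extract_features(model, test_images))
```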

4.3. Activation Layer

This layer is a non-linear map of CNN architectures that works at the end of the learning phase to replace negative pixel values with zero in the convolved functions.

4.4. Output Layer

Based on the output score of the activation layer, the final classification response is provided as an output label. The resulting label can be numerically categorized or encoded; for example, "0" denotes COVID-19 (i.e., the positive event), "1" denotes regular cases, and "2" denotes other cases of pneumonia, etc.
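A minimal illustration of this encoding, using hypothetical class names matching the example above:

```python
import numpy as np

# Hypothetical label encoding mirroring the description above.
CLASS_LABELS = {0: "COVID-19", 1: "normal", 2: "other pneumonia"}

def decode(scores: np.ndarray) -> str:
    """Map the activation-layer output scores to the encoded output label."""
    return CLASS_LABELS[int(np.argmax(scores))]

print(decode(np.array([0.91, 0.06, 0.03])))  # -> COVID-19
```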

5. Experimental Results

The proposed model was evaluated in depth to assess the efficiency of the solutions and examine the impact of transfer learning and self-supervised learning. In the following subsections, we describe the datasets utilized for the proposed CAD system, the experimental environment and settings, and the results in terms of the performance metrics.

5.1. Dataset Description

Three datasets were used in these experiments: two contain CXR images and the third contains CT images. The acquired CT scan dataset is divided into 4001 COVID-19 and 15,684 non-COVID-19 images, whereas the first CXR dataset consists of 219 COVID-19 and 2686 non-COVID-19 images. The second CXR dataset comprises 3616 COVID-19 and 17,549 non-COVID-19 images. The evaluation uses the holdout procedure, with an 80% training set and a 20% testing set. See Table 2 for a brief description of the dataset details. Figure 6 and Figure 7 show montage previews of the CT and CXR images.
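A hold-out split like the one described can be sketched as follows; the stratification by class and the random seed are our assumptions, added only so the example is reproducible:

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Placeholder arrays standing in for the scan images and their class labels.
images = np.random.rand(200, 32, 32, 3)
labels = np.random.randint(0, 2, size=200)

X_train, X_test, y_train, y_test = train_test_split(
    images, labels, test_size=0.20, stratify=labels, random_state=42
)
print(len(X_train), len(X_test))  # 160 40
```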

5.2. Experimental Details

5.2.1. Computer System Configuration

The proposed CAD system was implemented using MATLAB R2020a with the computer vision, image processing, neural networks, and deep learning toolboxes. The CAD system runs on an HP ZBook workstation with Windows 10 64-bit, an i7-6820HQ CPU, 32 GB DDR5 RAM, and an 8 GB GPU.

5.2.2. Parameter Settings

All networks were trained as follows: SGDM optimizer, an initial learning rate of 0.0001, and a validation frequency of 5. The dataset was shuffled every epoch (a complete cycle of training iterations), and the training process stopped early if the validation performance did not change significantly. For all networks, the dataset was divided into 80% training and 20% validation sets, and the same training and validation sets were used to facilitate the performance comparison across networks.
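The stated settings map onto the following hedged PyTorch sketch (the paper's implementation used MATLAB's SGDM training options; the momentum value 0.9 and the tiny placeholder model and data are our assumptions):

```python
import torch
import torch.nn as nn

# Placeholder model and dataset.
model = nn.Linear(10, 2)
data = torch.utils.data.TensorDataset(torch.randn(64, 10),
                                      torch.randint(0, 2, (64,)))
# shuffle=True reshuffles the dataset every epoch, as described above.
loader = torch.utils.data.DataLoader(data, batch_size=8, shuffle=True)

# SGDM with the stated initial learning rate of 0.0001.
optimizer = torch.optim.SGD(model.parameters(), lr=1e-4, momentum=0.9)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(3):
    for step, (x, y) in enumerate(loader):
        optimizer.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        optimizer.step()
        if step % 5 == 0:   # validation frequency of 5 iterations
            pass            # evaluate on the held-out validation set here
```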

5.2.3. Performance Metrics

For the proposed CAD system, there are different performance metrics for evaluating efficiency and effectiveness. Here, the negative and positive cases were assigned to the non-COVID-19 and COVID-19 infection groups, respectively. In sequence, the numbers of correctly detected COVID-19 and non-COVID-19 infections are represented by $N_{TP}$ and $N_{TN}$, respectively, whereas $N_{FP}$ and $N_{FN}$ indicate the numbers of incorrectly diagnosed COVID-19 and non-COVID-19 infections, respectively. Table 3 gives a brief description of the most common metrics used for evaluating the proposed CAD system.
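The four metrics in Table 3 follow directly from these four confusion counts; a small sketch with hypothetical counts:

```python
def metrics(n_tp: int, n_tn: int, n_fp: int, n_fn: int) -> dict:
    """Compute the Table 3 metrics from the confusion-matrix counts."""
    return {
        "accuracy": (n_tp + n_tn) / (n_tp + n_tn + n_fp + n_fn),
        "precision": n_tp / (n_tp + n_fp),
        "recall": n_tp / (n_tp + n_fn),
        "specificity": n_tn / (n_tn + n_fp),
    }

print(metrics(n_tp=95, n_tn=90, n_fp=5, n_fn=10))
```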

6. Experiment Design: Result Evaluation and Discussion

The proposed CAD system was evaluated using two scenarios per dataset; hence, six experiments were performed. The first scenario depends on optimizing parameters and fine-tuning pre-trained networks as an end-to-end CAD component. The second scenario employs the component developed in the first scenario as a feature-extraction engine, which passes the feature sets to an SVM classifier boosted by optimizing the kernel function, forming a hybrid-learning CAD component. The results were recorded per dataset for the two scenarios, and the most effective model per dataset was determined. In the following, the results are divided into three subsections, one for each dataset.

6.1. CT Scan Dataset

Firstly, the experiments started with the CT scan images and, as mentioned above, the two scenarios were applied per dataset. The first scenario is exhibited here with the two-class label (normal and COVID-19); the same scenario was then performed for the three-class dataset.
Table 4 and Table 5 show the numerical results of the first scenario for the CT scan dataset, reporting three metrics, namely accuracy, precision, and recall, for both the fully deep learning and the hybrid learning solutions. The experimental analysis shows the superiority of the proposed models across the various metrics. The same experiment was therefore repeated on the two X-ray image datasets, as outlined in the following two subsections.

6.2. The First X-ray Dataset

As in the previous subsection, our proposed models are either fully deep learning or hybrid models with the SVM, here applied to the first X-ray dataset. This experiment starts with the two-class label and then the three-class label, as discussed before. Table 6 and Table 7 show the results for the two-class label and three-class label, respectively.
In this experiment, on the two-class dataset, the hybrid VGG19-SVM shows the best performance measures compared to the other models. With the three-class dataset, however, the fully deep learning method (enhanced TL of VGG16) gives better results for accuracy and recall than the hybrid learning solutions.

6.3. The Second X-ray Dataset

Lastly, the two scenarios were applied for the second X-ray dataset. This experiment’s results are shown in Table 8 and Table 9 for the two-class label and the three-class label, respectively.

7. Discussion

This section discusses the superiority of the proposed models versus related models in recent literature. The proposed model handles multisource scan images in a modular fashion, including CT scan and X-ray images. First, for the CT scan, Table 10 compares our proposed model, selected from the comparative study in Table 4, with the literature for the same dataset and inputs; the results of this experiment are visualized in Figure 8. The second scenario was performed on the three-class label for the same dataset, and all comparative results were replicated accordingly, as shown in Table 11 and Figure 9. In these experiments, the end-to-end VGG16 with the binary class demonstrated its superiority over the hybrid model; with three classes, the hybrid model achieved better results, and both outperformed the comparative study from the literature.
Second, for the first X-ray dataset, the proposed model obtained an accuracy around 0.89% lower than that of Muhammed E.H. et al. [51], while achieving a much more reasonable recall rate. Consequently, our proposed model neither under-fits nor over-fits with regard to a specific label (see Table 12) and satisfies balanced classification rates between the different labels in the given dataset. Furthermore, the proposed model achieves a notable enhancement compared to others in terms of accuracy, precision, and recall, by a significant margin, as illustrated in Figure 10 for binary classification. Table 13 and Figure 11 demonstrate the superiority of the proposed model versus the models in the literature for three classes.
In the third dataset, the hybrid learning solution provided better results than fully deep learning. For the binary label, the proposed enhanced TL VGG16+SVM demonstrated its superiority (see Table 14). Figure 12 presents the visual analysis of the proposed model for the binary classifier in terms of accuracy, precision, and recall. The proposed enhanced TL VGG19+SVM showed its effectiveness for the three-class label dataset (see Table 15). Figure 13 shows a graphical bar-chart analysis of the proposed model versus the models in the literature; both the binary and multiclass models show improvements in accuracy compared to those in the literature.

8. Conclusions

This paper proposes a CAD system for detecting COVID-19 infection. Excellent diagnostic performance was demonstrated using both CT and CXR images, and the CAD system is superior to those found in the literature. The CAD system could serve as a supplementary, reliable analysis tool for diagnosing COVID-19 cases using CXR and CT images. Visible features in CT scan images, such as the intensity, shape, size, and nodule margins, may influence the diagnostic efficiency of the CAD system. Furthermore, junior radiologists lacking experience can use the helpful suggestions provided by the proposed CAD system.

Author Contributions

Conceptualization, H.K.Y., K.A.F. and K.A.E.; Investigation, K.A.F. and E.A.E.; Methodology, H.K.Y., E.A.E. and K.A.E.; Software, H.K.Y.; Supervision, M.T.A.-K., E.A.E. and K.A.E.; Writing—original draft, H.K.Y. and K.A.E.; Writing—review & editing, M.T.A.-K., H.K.Y., K.A.F., E.A.E. and K.A.E. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

The study was conducted according to the guidelines of the Faculty of Science, Al-Azhar University.

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

The data presented in this study are available online and on request from the corresponding author.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Culp, W.C., Jr. Coronavirus disease 2019: In-home isolation room construction. A&A Pract. 2020, 14, e01218. [Google Scholar] [CrossRef]
  2. Dash, S.; Chakraborty, C.; Giri, S.K.; Pani, S.K.; Frnda, J. BIFM: Big-Data Driven Intelligent Forecasting Model for COVID-19. IEEE Access 2021, 9, 97505–97517. [Google Scholar] [CrossRef]
  3. Fu, L.; Wang, B.; Yuan, T.; Chen, X.; Ao, Y.; Fitzpatrick, T.; Li, P.; Zhou, Y.; Lin, Y.-F.; Duan, Q.; et al. Clinical characteristics of coronavirus disease 2019 (COVID-19) in China: A systematic review and meta-analysis. J. Infect. 2020, 80, 656–665. [Google Scholar] [CrossRef]
  4. Li, Y.; Yao, L.; Li, J.; Chen, L.; Song, Y.; Cai, Z.; Yang, C. Stability issues of RT-PCR testing of SARS-CoV-2 for hospitalized patients clinically diagnosed with COVID-19. J. Med. Virol. 2020, 92, 903–908. [Google Scholar] [CrossRef] [Green Version]
  5. Zhai, P.; Ding, Y.; Wu, X.; Long, J.; Zhong, Y.; Li, Y. The epidemiology, diagnosis and treatment of COVID-19. Int. J. Antimicrob. Agents 2020, 55, 105955. [Google Scholar] [CrossRef]
  6. Kucirka, L.M.; Lauer, S.A.; Laeyendecker, O.; Boon, D.; Lessler, J. Variation in False-Negative Rate of Reverse Transcriptase Polymerase Chain Reaction–Based SARS-CoV-2 Tests by Time Since Exposure. Ann. Intern. Med. 2020, 173, 262–267. [Google Scholar] [CrossRef]
  7. Afshar, P.; Heidarian, S.; Enshaei, N.; Naderkhani, F.; Rafiee, M.J.; Oikonomou, A.; Fard, F.B.; Samimi, K.; Plataniotis, K.N.; Mohammadi, A. COVID-CT-MD, COVID-19 computed tomography scan dataset applicable in machine learning and deep learning. Sci. Data 2021, 8, 121. [Google Scholar] [CrossRef]
  8. Lin, E.C. Radiation Risk from Medical Imaging. Mayo Clin. Proc. 2010, 85, 1142–1146. [Google Scholar] [CrossRef] [Green Version]
  9. Axiaq, A.; Almohtadi, A.; Massias, S.A.; Ngemoh, D.; Harky, A. The role of computed tomography scan in the diagnosis of COVID-19 pneumonia. Curr. Opin. Pulm. Med. 2021, 27, 163–168. [Google Scholar] [CrossRef]
  10. Self, W.H.; Courtney, D.M.; McNaughton, C.D.; Wunderink, R.; Kline, J.A. High discordance of chest X-ray and computed tomography for detection of pulmonary opacities in ED patients: Implications for diagnosing pneumonia. Am. J. Emerg. Med. 2013, 31, 401–405. [Google Scholar] [CrossRef] [Green Version]
  11. Jacobi, A.; Chung, M.; Bernheim, A.; Eber, C. Portable chest X-ray in coronavirus disease-19 (COVID-19): A pictorial review. Clin. Imaging 2020, 64, 35–42. [Google Scholar] [CrossRef]
  12. Ratnapalan, S.; Bentur, Y.; Koren, G. Doctor, will that X-ray harm my unborn child? Can. Med. Assoc. J. 2008, 179, 1293–1296. [Google Scholar] [CrossRef] [Green Version]
  13. Jin, Y.-H.; Cai, L.; Cheng, Z.-S.; Cheng, H.; Deng, T.; Fan, Y.-P.; Fang, C.; Huang, D.; Huang, L.-Q.; Huang, Q.; et al. A rapid advice guideline for the diagnosis and treatment of 2019 novel coronavirus (2019-nCoV) infected pneumonia (standard version). Mil. Med. Res. 2020, 7, 4. [Google Scholar] [CrossRef] [Green Version]
  14. Civit-Masot, J.; Luna-Perejón, F.; Morales, M.D.; Civit, A. Deep Learning System for COVID-19 Diagnosis Aid Using X-ray Pulmonary Images. Appl. Sci. 2020, 10, 4640. [Google Scholar] [CrossRef]
  15. Duran-Lopez, L.; Dominguez-Morales, J.P.; Conde-Martin, A.F.; Vicente-Diaz, S.; Linares-Barranco, A. PROMETEO: A CNN-based computer-aided diagnosis system for WSI prostate cancer detection. IEEE Access 2020, 8, 128613–128628. [Google Scholar] [CrossRef]
  16. Dominguez-Morales, J.P.; Fernández, A.F.J.; Morales, M.J.D.; Jimenez-Moreno, G. Deep Neural Networks for the Recognition and Classification of Heart Murmurs Using Neuromorphic Auditory Sensors. IEEE Trans. Biomed. Circuits Syst. 2017, 12, 24–34. [Google Scholar] [CrossRef]
  17. Salehinejad, H.; Valaee, S.; Dowdell, T.; Colak, E.; Barfett, J. Generalization of deep neural networks for chest pathology classification in x-rays using generative adversarial networks. In Proceedings of the 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Calgary, AB, Canada, 15–20 April 2018; pp. 990–994. [Google Scholar] [CrossRef] [Green Version]
  18. Chen, C.; Dou, Q.; Chen, H.; Heng, P.-A. Semantic-aware generative adversarial nets for unsupervised domain adaptation in chest X-ray segmentation. In International Workshop on Machine Learning in Medical Imaging; Springer: Cham, Switzerland, 2018; pp. 143–151. [Google Scholar] [CrossRef] [Green Version]
  19. Bengio, Y. Learning Deep Architectures for AI (Found. Trends® Mach. Learn); Now Publishers Inc.: Norwell, MA, USA, 2009. [Google Scholar]
  20. Hinton, G.E.; Osindero, S.; Teh, Y.-W. A Fast-Learning Algorithm for Deep Belief Nets. Neural Comput. 2006, 18, 1527–1554. [Google Scholar] [CrossRef]
  21. Lan, K.; Wang, D.-T.; Fong, S.; Liu, L.-S.; Wong, K.K.L.; Dey, N. A Survey of Data Mining and Deep Learning in Bioinformatics. J. Med. Syst. 2018, 42, 139. [Google Scholar] [CrossRef]
  22. Radford, A.; Metz, L.; Chintala, S. Unsupervised representation learning with deep convolutional generative adversarial networks. arXiv 2015, arXiv:1511.06434. [Google Scholar]
  23. Krizhevsky, B.A.; Sutskever, I.; Hinton, G.E. ImageNet classification with deep convolutional neural networks. Commun. ACM 2017, 60, 84–90. [Google Scholar] [CrossRef]
  24. Badrinarayanan, V.; Kendall, A.; Cipolla, R. Segnet: A deep convolutional encoder-decoder architecture for image segmentation. IEEE Trans. Pattern Anal. Mach. Intell. 2017, 39, 2481–2495. [Google Scholar] [CrossRef]
  25. Zeiler, M.D.; Krishnan, D.; Taylor, G.W.; Fergus, R. Deconvolutional networks. In Proceedings of the 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, San Francisco, CA, USA, 13–18 June 2010; pp. 2528–2535. [Google Scholar] [CrossRef]
  26. John, M.M. Design Methods and Processes for ML/DL Models. Ph.D. Thesis, Malmö Universitet, Malmö, Sweden, 2021. [Google Scholar]
  27. Yamashita, R.; Nishio, M.; Do, R.K.G.; Togashi, K. Convolutional neural networks: An overview and application in radiology. Insights Imaging 2018, 9, 611–629. [Google Scholar] [CrossRef] [Green Version]
  28. Becker, B.; Vaccari, M.; Prescott, M.; Grobler, T. CNN architecture comparison for radio galaxy classification. Mon. Not. R. Astron. Soc. 2021, 503, 1828–1846. [Google Scholar] [CrossRef]
  29. Pathak, Y.; Shukla, P.; Tiwari, A.; Stalin, S.; Singh, S. Deep Transfer Learning Based Classification Model for COVID-19 Disease. IRBM 2020, in press. [CrossRef]
  30. Gu, J.; Wang, Z.; Kuen, J.; Ma, L.; Shahroudy, A.; Shuai, B.; Liu, T.; Wang, X.; Wang, G.; Cai, J.; et al. Recent advances in convolutional neural networks. Pattern Recognit. 2018, 77, 354–377. [Google Scholar] [CrossRef] [Green Version]
  31. Hira, S.; Bai, A.; Hira, S. An automatic approach based on CNN architecture to detect Covid-19 disease from chest X-ray images. Appl. Intell. 2020, 51, 2864–2889. [Google Scholar] [CrossRef]
  32. Horry, M.J.; Chakraborty, S.; Paul, M.; Ulhaq, A.; Pradhan, B.; Saha, M.; Shukla, N. COVID-19 Detection Through Transfer Learning Using Multimodal Imaging Data. IEEE Access 2020, 8, 149808–149824. [Google Scholar] [CrossRef]
  33. Dhar, P.; Dutta, S.; Mukherjee, V. Cross-wavelet assisted convolution neural network (AlexNet) approach for phonocardiogram signals classification. Biomed. Signal Process. Control. 2020, 63, 102142. [Google Scholar] [CrossRef]
  34. Simonyan, K.; Zisserman, A. Very deep convolutional networks for large-scale image recognition. arXiv 2014, arXiv:1409.1556. [Google Scholar]
  35. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 26 June–1 July 2016; pp. 770–778. [Google Scholar]
  36. Cortes, C.; Vapnik, V. Support-vector networks. Mach. Learn. 1995, 20, 273–297. [Google Scholar] [CrossRef]
  37. Osowski, S.; Siwek, K.; Markiewicz, T. MLP and SVM networks-a comparative study. In Proceedings of the 6th Nordic Signal Processing Symposium (NORSIG), Espoo, Finland, 11 June 2004; pp. 37–40. [Google Scholar]
  38. Bogawar, P.S.; Bhoyar, K. An improved multiclass support vector machine classifier using reduced hyper-plane with skewed binary tree. Appl. Intell. 2018, 48, 4382–4391. [Google Scholar] [CrossRef]
  39. Sun, L.; Bao, J.; Chen, Y.; Yang, M. Research on parameter selection method for support vector machines. Appl. Intell. 2018, 48, 331–342. [Google Scholar] [CrossRef]
  40. Zhou, T.; Lu, H.; Yang, Z.; Qiu, S.; Huo, B.; Dong, Y. The ensemble deep learning model for novel COVID-19 on CT images. Appl. Soft Comput. 2021, 98, 106885. [Google Scholar] [CrossRef]
  41. Kundu, R.; Basak, H.; Singh, P.K.; Ahmadian, A.; Ferrara, M.; Sarkar, R. Fuzzy rank-based fusion of CNN models using Gompertz function for screening COVID-19 CT scans. Sci. Rep. 2021, 11, 14133. [Google Scholar] [CrossRef]
  42. Song, Y.; Zheng, S.; Li, L.; Zhang, X.; Zhang, X.; Huang, Z.; Chen, J.; Wang, R.; Zhao, H.; Zha, Y.; et al. Deep learning Enables Accurate Diagnosis of Novel Coronavirus (COVID-19) with CT images. IEEE/ACM Trans. Comput. Biol. Bioinform. 2021, 18, 2775–2780. [Google Scholar] [CrossRef]
  43. Hu, S.; Gao, Y.; Niu, Z.; Jiang, Y.; Li, L.; Xiao, X.; Wang, M.; Fang, E.F.; Menpes-Smith, W.; Xia, J.; et al. Weakly Supervised Deep Learning for COVID-19 Infection Detection and Classification from CT Images. IEEE Access 2020, 8, 118869–118883. [Google Scholar] [CrossRef]
  44. Shi, F.; Wang, J.; Shi, J.; Wu, Z.; Wang, Q.; Tang, Z.; He, K.; Shi, Y.; Shen, D. Review of Artificial Intelligence Techniques in Imaging Data Acquisition, Segmentation, and Diagnosis for COVID-19. IEEE Rev. Biomed. Eng. 2020, 14, 4–15. [Google Scholar] [CrossRef] [Green Version]
  45. Ghoshal, B.; Tucker, A. Estimating uncertainty and interpretability in deep learning for coronavirus (COVID-19) detection. arXiv 2020, arXiv:2003.10769. [Google Scholar]
  46. Narin, A.; Kaya, C.; Pamuk, Z. Automatic Detection of Coronavirus Disease (COVID-19) Using X-ray Images and Deep Convolutional Neural Networks. arXiv 2020, arXiv:2003.10849. [Google Scholar]
  47. Zhang, J.; Xie, Y.; Pang, G.; Liao, Z.; Verjans, J.; Li, W.; Sun, Z.; He, J.; Li, Y.; Shen, C.; et al. Viral pneumonia screening on chest X-ray images using confidence-aware anomaly detection. arXiv 2020, arXiv:2003.12338. [Google Scholar]
  48. Gunraj, H.; Wang, L.; Wong, A. COVIDNet-CT: A Tailored Deep Convolutional Neural Network Design for Detection of COVID-19 Cases from Chest CT Images. Front. Med. 2020, 7, 608525. [Google Scholar] [CrossRef]
  49. Rajaraman, S.; Siegelman, J.; Alderson, P.O.; Folio, L.S.; Folio, L.R.; Antani, S.K. Iteratively Pruned Deep Learning Ensembles for COVID-19 Detection in Chest X-Rays. IEEE Access 2020, 8, 115041–115050. [Google Scholar] [CrossRef]
  50. de Vente, C.; Boulogne, L.H.; Venkadesh, K.V.; Sital, C.; Lessmann, N.; Jacobs, C.; Sánchez, C.I.; van Ginneken, B. Improving automated covid-19 grading with convolutional neural networks in computed tomography scans: An ablation study. arXiv 2020, arXiv:2009.09725. [Google Scholar]
  51. Chowdhury, M.E.; Rahman, T.; Khandakar, A.; Mazhar, R.; Kadir, M.A.; Mahbub, Z.B.; Islam, K.R.; Khan, M.S.; Iqbal, A.; Emadi, N.A.; et al. Can AI help in screening viral and COVID-19 pneumonia? IEEE Access 2020, 8, 132665–132676. [Google Scholar] [CrossRef]
  52. Cavallo, A.U.; Troisi, J.; Forcina, M.; Mari, P.; Forte, V.; Sperandio, M.; Pagano, S.; Cavallo, P.; Floris, R.; Garaci, F. Texture analysis in the evaluation of Covid-19 pneumonia in chest X-Ray images: A Proof of Concept Study. Curr. Med. Imaging 2021, 17, 1094–1102. [Google Scholar] [CrossRef]
  53. Rajpal, S.; Agarwal, M.; Rajpal, A.; Lakhyani, N.; Saggar, A.; Kumar, N. Cov-elm classifier: An extreme learning machine based identification of covid-19 using chest x-ray images. arXiv 2020, arXiv:2007.08637. [Google Scholar]
  54. Echtioui, A.; Zouch, W.; Ghorbel, M.; Mhiri, C.; Hamam, H. Detection Methods of COVID-19. SLAS Technol. Transl. Life Sci. Innov. 2020, 25, 566–572. [Google Scholar]
Figure 1. Big data analytics against COVID-19.
Figure 2. Simple overviews showing model forecasts of domain A and domain B transition (right) or without (left). The transfer learning extracts the features from domain A as common knowledge and then uses common knowledge to forecast model B.
Figure 3. Context overview of the proposed smart CAD system.
Figure 4. Layered phase of smart CAD system.
Figure 5. Flowchart of the hybrid learning models.
Figure 6. Sample of CT dataset collection: (a) normal cases (b) COVID-19 cases.
Figure 7. Sample of CXR dataset collection: (a) normal cases (b) COVID-19 cases.
Figure 8. Comparative study between proposed model and other literature models using CT dataset (two-class) [50].
Figure 9. Comparative study between proposed model and other literature models using CT dataset (three-class) [50].
Figure 10. Comparative study between proposed model and other literature models using X-ray dataset (two-class) [51,52,53,54].
Figure 11. Comparative study between proposed model and other literature models using X-ray dataset (three-class) [51,52,53,54].
Figure 12. Comparative study between proposed model and other literature models using X-ray dataset (two-class) [51,52,53,54].
Figure 13. Comparative study between proposed model and other literature models using X-ray dataset (three-class) [51,52,53,54].
Table 1. Characteristics of convolutional neural networks used in this study.
Characteristic | VGG | AlexNet | ResNet
Size of input | 224 × 224 | 227 × 227 | 224 × 224
Stride | 1 | 1.4 | 1.2
No. of FC layers | 3 | 3 | 1
Top-five error | 7.4 | 16.4 | 5.3
Number of MACs | 15.3 M | 666 M | 3.86 G
Number of feature maps | 3–512 | 3–256 | 3–1024
No. of conv. layers | 16 | 5 | 50
Number of weights | 14.7 M | 2.3 M | 23.5 M
Size of filter | 3 | 3, 5, 11 | 1, 3, 7
Table 2. Technical characteristics of the data for the COVID-19 and non-COVID-19 patient groups.
Data Type | Total No. of Images | No. of Classes | COVID-19 | Other Pneumonia | Normal | Non-Informative | Lung Opacity
CT | 19,685 | 3 | 4001 | - | 5705 | 9979 | -
CXR | 2905 | 3 | 219 | 1345 | 1341 | - | -
CXR | 21,165 | 4 | 3616 | 1345 | 10,192 | - | 6012
Table 3. Common performance metrics for CAD evaluation.
Metric | Formula | Description
Accuracy | (N_TP + N_TN)/(N_TP + N_TN + N_FP + N_FN) | Ratio of the number of correctly detected cases to the total number of cases.
Precision | N_TP/(N_TP + N_FP) | Number of correctly detected COVID-19 cases divided by the total number of cases detected as COVID-19.
Recall | N_TP/(N_TP + N_FN) | Proportion of COVID-19 cases correctly classified as COVID-19, with respect to all COVID-19 cases.
Specificity | N_TN/(N_TN + N_FP) | Proportion of negative data points correctly classified as normal, with respect to all normal cases.
Table 4. The performance measures in applying different learning models for the CT scan images dataset (two-class), where the bolded number indicates the best result among the classification models.
Feature Extraction | Classification | Accuracy | Precision | Recall
Enhanced TL AlexNet | AlexNet | 99.48 | 99.93 | 99.44
Enhanced TL ResNet-50 | ResNet-50 | 99.63 | 100 | 99.56
Enhanced TL ResNet-101 | ResNet-101 | 99.79 | 99.81 | 99.94
Enhanced TL VGG-19 | VGG-19 | 99.84 | 99.81 | 100
Enhanced TL VGG-16 | VGG-16 | 99.94 | 99.94 | 100
Enhanced TL AlexNet | SVM | 99.63 | 99.75 | 99.81
Enhanced TL ResNet-50 | SVM | 99.83 | 99.84 | 99.95
Enhanced TL ResNet-101 | SVM | 99.87 | 99.9 | 99.95
Enhanced TL VGG-19 | SVM | 99.79 | 100 | 99.75
Enhanced TL VGG-16 | SVM | 99.79 | 99.87 | 99.87
Table 5. The performance measures in applying different learning models for the CT images dataset (three-class), where the bolded number indicates the best result among the classification models.
Feature Extraction | Classification | Accuracy | Precision | Recall
Enhanced TL AlexNet | AlexNet | 99.365 | 99.55 | 99.78
Enhanced TL ResNet-50 | ResNet-50 | 99.54 | 99.65 | 99.83
Enhanced TL ResNet-101 | ResNet-101 | 99.41 | 99.45 | 99.9
Enhanced TL VGG-19 | VGG-19 | 99.49 | 99.57 | 99.88
Enhanced TL VGG-16 | VGG-16 | 99.2634 | 99.97 | 99.18
Enhanced TL AlexNet | SVM | 98.9 | 99.51 | 99.33
Enhanced TL ResNet-50 | SVM | 99.51 | 99.6 | 99.83
Enhanced TL ResNet-101 | SVM | 99.55 | 99.7 | 99.78
Enhanced TL VGG-19 | SVM | 99.6 | 99.91 | 99.66
Enhanced TL VGG-16 | SVM | 98.5 | 99.11 | 99.26
Table 6. The performance measures in applying different learning models for the X-ray dataset (two-class), where the bolded number indicates the best result among the classification models.
Learning Mode | Feature Extraction | Classification | Accuracy | Precision | Recall
Fully deep learning (E2E solution) | Enhanced TL AlexNet | AlexNet | 96.38 | 96.95 | 98.62
 | Enhanced TL ResNet-50 | ResNet-50 | 96.72 | 97.25 | 99.87
 | Enhanced TL ResNet-101 | ResNet-101 | 95.35 | 95.8 | 97.1
 | Enhanced TL VGG-19 | VGG-19 | 90.7 | 95.96 | 99.24
 | Enhanced TL VGG-16 | VGG-16 | 97.41 | 97.6 | 100
Hybrid learning solution | Enhanced TL AlexNet | SVM | 99.67 | 100 | 98.86
 | Enhanced TL ResNet-50 | SVM | 99.35 | 98.86 | 98.86
 | Enhanced TL ResNet-101 | SVM | 99.67 | 100 | 98.86
 | Enhanced TL VGG-19 | SVM | 100 | 100 | 100
 | Enhanced TL VGG-16 | SVM | 99.67 | 100 | 98.86
Table 7. The performance measures in applying different learning models for the X-ray dataset (three-class), where the bolded number indicates the best result among the classification models.
Learning Mode | Feature Extraction | Classification | Accuracy | Precision | Recall
Fully deep learning (E2E solution) | Enhanced TL AlexNet | AlexNet | 96.38 | 96.95 | 98.62
 | Enhanced TL ResNet-50 | ResNet-50 | 96.72 | 97.25 | 99.87
 | Enhanced TL ResNet-101 | ResNet-101 | 95.35 | 95.8 | 97.1
 | Enhanced TL VGG-19 | VGG-19 | 90.7 | 95.96 | 99.24
 | Enhanced TL VGG-16 | VGG-16 | 97.41 | 97.6 | 100
Hybrid learning solution | Enhanced TL AlexNet | SVM | 91.4 | 93.92 | 93.22
 | Enhanced TL ResNet-50 | SVM | 83.3 | 83.89 | 80.02
 | Enhanced TL ResNet-101 | SVM | 81.17 | 81.71 | 82.69
 | Enhanced TL VGG-19 | SVM | 97.2 | 98.37 | 99.01
 | Enhanced TL VGG-16 | SVM | 92.9 | 97.05 | 93.79
Table 8. The performance measures in applying different learning models for the X-ray dataset (two-class), where the bolded number indicates the best result among the classification models.
Learning Mode | Feature Extraction | Classification | Accuracy | Precision | Recall
Fully deep learning (E2E solution) | Enhanced TL AlexNet | AlexNet | 96.59 | 97.62 | 95.78
 | Enhanced TL ResNet-50 | ResNet-50 | 98.62 | 99.71 | 97.65
 | Enhanced TL ResNet-101 | ResNet-101 | 98.94 | 99.37 | 98.61
 | Enhanced TL VGG-19 | VGG-19 | 98.44 | 99.35 | 97.65
 | Enhanced TL VGG-16 | VGG-16 | 98.84 | 98.57 | 99.24
Hybrid learning solution | Enhanced TL AlexNet | SVM | 97.7 | 97.91 | 97.72
 | Enhanced TL ResNet-50 | SVM | 98.98 | 98.94 | 98.99
 | Enhanced TL ResNet-101 | SVM | 98.75 | 98.81 | 98.64
 | Enhanced TL VGG-19 | SVM | 98.98 | 99.3 | 98.75
 | Enhanced TL VGG-16 | SVM | 99.23 | 99.44 | 99.1
Table 9. The performance measures in applying different learning models for the X-ray dataset (three-class), where the bolded number indicates the best result among the classification models.
Learning Mode | Feature Extraction | Classification | Accuracy | Precision | Recall
Fully deep learning (E2E solution) | Enhanced TL AlexNet | AlexNet | 96.3 | 98.16 | 97.63
 | Enhanced TL ResNet-50 | ResNet-50 | 98.48 | 99.29 | 99
 | Enhanced TL ResNet-101 | ResNet-101 | 97.78 | 98.89 | 98.65
 | Enhanced TL VGG-19 | VGG-19 | 98.54 | 99.44 | 99.12
 | Enhanced TL VGG-16 | VGG-16 | 95.37 | 99.36 | 95.25
Hybrid learning solution | Enhanced TL AlexNet | SVM | 97.32 | 98.35 | 98.22
 | Enhanced TL ResNet-50 | SVM | 98.46 | 99.29 | 99.08
 | Enhanced TL ResNet-101 | SVM | 97.91 | 98.83 | 98.64
 | Enhanced TL VGG-19 | SVM | 98.94 | 99.59 | 99.38
 | Enhanced TL VGG-16 | SVM | 98.74 | 99.52 | 99.22
Table 10. Comparison between the proposed model and other related models for the CT scan dataset (two-class).
Algorithm | Accuracy | Precision | Recall
Proposed (Enhanced E2E of VGG16) | 99.94 | 99.94 | 100
Coen de Vente et al. [50] | 87.63 | 74.00 | 66.00
Table 11. Comparison between the proposed model and other related models for the CT scan dataset (three-class).
Algorithm | Accuracy | Precision | Recall
Proposed (Enhanced Hybrid of ResNet-101 and SVM) | 99.6 | 99.91 | 99.66
Coen de Vente et al. [50] | 87.63 | 74.00 | 66.00
Table 12. Comparison between the proposed model and other related models for the X-ray dataset (two-class).
Algorithm | Accuracy | Precision | Recall
Proposed (Enhanced TL of VGG19) | 100 | 100 | 100
Muhammed E.H. et al. [51] | 98.30 | 100.00 | 96.70
Armando Ugo Cavallo et al. [52] | 91.80 | -- | 93.00
Sheetal et al. [53] | 94.40 | -- | 94.50
Amira et al. [54] | 91.34 | 91.00 | 88.33
Table 13. Comparison between the proposed model and other related models for the X-ray dataset (three-class).
Algorithm | Accuracy | Precision | Recall
Proposed (Enhanced E2E of VGG16) | 97.41 | 97.6 | 100
Muhammed E.H. et al. [51] | 98.30 | 100.00 | 96.70
Armando Ugo Cavallo et al. [52] | 91.80 | -- | 93.00
Sheetal et al. [53] | 94.40 | -- | 94.50
Amira et al. [54] | 91.34 | 91.00 | 88.33
Table 14. Comparison between the proposed model and other related models for the X-ray dataset (two-class).
Algorithm | Accuracy | Precision | Recall
Proposed (Enhanced TL VGG16) | 99.23 | 99.44 | 99.1
Muhammed E.H. et al. [51] | 98.30 | 100.00 | 96.70
Armando Ugo Cavallo et al. [52] | 91.80 | -- | 93.00
Sheetal et al. [53] | 94.40 | -- | 94.50
Amira et al. [54] | 91.34 | 91.00 | 88.33
Table 15. Comparison between the proposed model and other related models for the X-ray dataset (three-class).
Algorithm | Accuracy | Precision | Recall
Proposed (Enhanced TL VGG-19+SVM) | 98.94 | 99.59 | 99.38
Muhammed E.H. et al. [51] | 98.30 | 100.00 | 96.70
Armando Ugo Cavallo et al. [52] | 91.80 | -- | 93.00
Sheetal et al. [53] | 94.40 | -- | 94.50
Amira et al. [54] | 91.34 | 91.00 | 88.33