Article

A Robust Hybrid Deep Convolutional Neural Network for COVID-19 Disease Identification from Chest X-ray Images

by Theodora Sanida 1,*, Irene-Maria Tabakis 1, Maria Vasiliki Sanida 2, Argyrios Sideris 1 and Minas Dasygenis 1

1 Department of Electrical and Computer Engineering, University of Western Macedonia, 50131 Kozani, Greece
2 Department of Digital Systems, University of Piraeus, 18534 Piraeus, Greece
* Author to whom correspondence should be addressed.
Information 2023, 14(6), 310; https://doi.org/10.3390/info14060310
Submission received: 28 February 2023 / Revised: 20 May 2023 / Accepted: 25 May 2023 / Published: 29 May 2023
(This article belongs to the Special Issue Artificial Intelligence and Big Data Applications)

Abstract

The prompt and accurate identification of the causes of pneumonia is necessary to implement rapid treatment and preventative approaches, reduce the burden of infections, and develop more successful intervention strategies. There has been an increase in the number of new pneumonia cases and of acute respiratory distress syndrome (ARDS) as a direct consequence of the spread of COVID-19. Chest radiography has evolved to the point that it is now an indispensable diagnostic tool for COVID-19 pneumonia in hospitals. To fully exploit the technique, it is crucial to design a computer-aided diagnostic (CAD) system to assist doctors and other medical professionals in establishing an accurate and rapid diagnosis of pneumonia. This article presents a robust hybrid deep convolutional neural network (DCNN) for rapidly identifying three categories (normal, COVID-19 and pneumonia (viral or bacterial)) using X-ray image data sourced from the COVID-QU-Ex dataset. On the test set, the proposed approach achieved 99.25% accuracy, a 99.10% Kappa score, 99.43% AUC, a 99.24% F1-score, 99.25% recall, and 99.23% precision. The outcomes of the experiments demonstrate that the presented hybrid DCNN mechanism for identifying three categories utilising X-ray images is robust and effective.

1. Introduction

The COVID-19 pandemic has led to a considerable increase in pneumonia patients worldwide. The most prominent signs and symptoms of COVID-19 are chest discomfort, cough, sore throat, fever, and shortness of breath, similar to other types of pneumonia. COVID-19 pneumonia presents unique challenges, as it can cause severe respiratory distress that can advance rapidly to acute respiratory distress syndrome (ARDS) [1]. To combat the disease successfully and implement preventative measures, it is therefore necessary to differentiate between COVID-19 infection and other bacterial or viral pneumonias. A delay in treatment may result in mortality or other problems, including impaired lung function and chronic non-communicable respiratory diseases, such as asthma or chronic obstructive pulmonary disease (COPD) [2,3].
In diagnosing a broad range of lung-related disorders, chest X-rays (CXR) are often used as one of the diagnostic techniques [4]. Chest X-rays are inexpensive, widely available, and quick to perform, making them an effective diagnostic tool for pneumonia. In contrast, computed tomography (CT) and magnetic resonance imaging (MRI) are more expensive and time-consuming techniques. For this reason, chest radiography has developed into an important diagnostic technique for COVID-19 pneumonia in hospitals. However, in the early stages of pneumonia, the radiographic features may not be distinct, making chest X-rays more difficult to interpret. As a result, especially during the COVID-19 pandemic, a computer-aided diagnostic (CAD) system for pneumonia diagnosis can help to manage the high volume of patients presenting with respiratory symptoms [5]. Consequently, designing such a CAD system has become essential in supporting medical practitioners in establishing an accurate and timely diagnosis of pneumonia [6,7].
Lately, increased attention has been paid to deep learning (DL) methods, particularly deep convolutional neural networks (DCNNs), in CAD systems based on computer vision methodologies so as to identify diseases using chest X-ray images. The use of DCNNs on chest X-rays for COVID-19 detection has gained significant traction due to their potential to provide a fast and accurate diagnosis, which is crucial for handling the COVID-19 pandemic [8]. DCNNs are a kind of artificial intelligence (AI) often utilised in image categorisation tasks because they can extract characteristics from images and categorise them based on these features. In the medical imaging domain [9], DCNNs applied to chest X-rays have been shown to play a crucial role in diagnosing bacterial pneumonia, viral pneumonia, and other chest disorders. Therefore, the development of such models to identify radiographs of patients infected with COVID-19 is urgently required to support suitable clinical decisions and assist radiologists, medical experts, practitioners and doctors [10,11].
This work proposes a CAD system with a hybrid identification strategy that uses chest X-ray image data to categorize three distinct conditions that might cause pneumonia. The hybrid DCNN consists of a combination of VGG [12] blocks and an inception [13,14] module. Compared to previously established methods, our network achieves a higher identification accuracy on a large collection. Thus, by employing X-ray images, this image-based pneumonia diagnosis method will assist medical professionals in the early and rapid identification of pneumonia.
The most significant contributions of this work are as follows:
  • The identification of pneumonia is performed using a hybrid DCNN mechanism: a modified VGG19 model with two inception blocks that exploit parallel, multi-scale feature extraction, equipping the network with powerful feature extraction capabilities.
  • We conducted exhaustive high-level simulations to assess the effectiveness of the presented hybrid DCNN. The proposed hybrid DCNN mechanism findings were compared to those obtained from the most current and advanced networks.
The remaining structure of this work is as follows: In Section 2, we review related articles on pneumonia and the diagnosis of COVID-19 that have been published in the literature. Section 3 analyzes the materials and methods used in our experimental work. The experiment outcomes are discussed in Section 4, along with assessment metrics, and the accuracy rate is compared with existing identification techniques. Finally, the study’s conclusion is presented in Section 5, which includes some predictions for the future.

2. Related Work

The automated investigation and analysis of an extensive collection of image data create new and exciting challenges that call for state-of-the-art computational strategies and classic machine learning (ML), deep learning (DL), or computational intelligence (CI) approaches that can provide high-performance and specialized medical services [15]. In the last two years, a significant number of investigators from all over the world have developed and published many studies to detect and slow the spread of the COVID-19 virus. A substantial number of these researchers have used a variety of AI methodologies to analyze and diagnose X-ray images to identify various diseases. The capacity of DL techniques [16] to generate better results than typical ML approaches has made them the most popular methods for identifying images. In this section, we concentrate on research that uses novel DL-based methodologies to identify COVID-19.
COVID-AleXception is proposed in [17], which concatenates the features from two pre-trained CNN methods, Xception and AlexNet. The dataset comprises 15,153 X-ray images (1345 pneumonia, 3616 COVID-19 and 10,192 normal). Each CNN method was trained for 100 epochs with the Adam optimisation algorithm. The COVID-AleXception method achieved an identification accuracy of 98.68%, compared with 95.63% for Xception and 94.86% for AlexNet. Hafeez et al. [18] designed a customised CNN prediction system for chest X-rays and compared it with two pre-trained CNN methods (VGG16 and AlexNet). The accuracy of the proposed system for the three categories (normal, COVID-19, and viral/bacterial pneumonia) is 89.855%, compared with 89.015% for VGG16 and 89.155% for AlexNet.
In [19], the authors suggested a lightweight CNN technique for COVID-19 identification utilising X-ray images and evaluated it against seven pre-trained CNN systems (InceptionV3, Xception, ResNet50V2, MobileNetV2, DenseNet121, EfficientNet-B0, and EfficientNetV2). The dataset comprised 600 COVID-19, 600 normal, and 600 pneumonia images. Each CNN method was trained for 50 epochs. The proposed method achieves 98.33% accuracy for the three categories, compared with 97.73% for EfficientNetV2, the best pre-trained network. CoroNet is proposed in [20], based on the Xception method. The utilised collection comprised 330 bacterial pneumonia, 327 viral pneumonia, 284 COVID-19, and 310 normal X-ray images. The CoroNet method was trained for 80 epochs and reached an accuracy of 89.60% for the four categories.
Ghose et al. [21] designed a customised CNN automatic diagnosis system. The dataset comprises 10,293 X-ray images, including 4200 pneumonia, 2875 COVID-19, and 3218 normal images. The customised CNN was trained for 25 epochs with the Adam optimisation algorithm. The proposed method attained 98.50% accuracy, a 98.30% F1-score, and 99.20% precision. In [22], the authors suggested a DL diagnosis system to quickly detect pneumonia using X-ray images. They compared the VGG19 and ResNet50 methods for detecting three distinct lung conditions. The dataset comprises 11,263 pneumonia, 11,956 COVID-19 and 10,701 normal images. Each CNN method was trained for 180 epochs. The accuracy of the proposed diagnosis system for the three categories is 96.60% for the VGG19 method and 95.80% for ResNet50.
Furthermore, in [23], the authors compared four DL methods (VGG16, ResNet50, DenseNet121, and VGG19) to diagnose X-ray images as COVID-19 or normal. The dataset comprises 1592 X-ray images (802 normal, 790 COVID-19). Each CNN method was trained for 30 epochs. For the two categories, the VGG16 method achieved an accuracy rate of 99.33%, ResNet50 achieved 97.00%, DenseNet121 achieved 96.66%, and VGG19 achieved 96.66%. In [24], the authors suggested a DL model based on MobileNetV2 to identify COVID-19 infection. The dataset comprises 1576 normal, 3616 COVID-19 and 4265 pneumonia X-ray images. Each CNN method was trained for 80 epochs. The accuracy rate of the suggested diagnosis approach for the three categories is 97.61%.
Nayak et al. [25] designed a CNN technique called LW-CORONet. The suggested method is evaluated on two datasets: dataset-1 has 2250 images (750 pneumonia, 750 normal, and 750 COVID-19) and dataset-2 has 15,999 images (5575 pneumonia, 8066 normal, and 2358 COVID-19). The customised CNN was trained for 100 epochs with the Adam optimisation algorithm. The identification accuracy obtained for the three-category case is 98.67% on dataset-1 and 95.67% on dataset-2. In [26], the authors suggested a CNN model for medical diagnostic image analysis to identify COVID-19. The proposed approach is based on the MobileNetV2 method. The dataset comprises 10,192 normal, 3616 COVID-19, 6012 lung opacity and 1345 viral pneumonia images. The proposed diagnosis method achieves an identification accuracy rate of 95.80%.
Most researchers fed their identification networks with data from relatively small collections. Consequently, most of the networks reached high levels of accuracy; however, the predictions of those networks cannot be generalised owing to the small number of images on which the networks were trained [27]. Table 1 summarises the above systems for identifying COVID-19, listing the model employed and the accuracy rate achieved.
In our work, the collection included 33,920 chest X-ray images, balanced with around 10,500 images in each category. Thus, a hybrid DCNN identification mechanism was created for diagnosing pneumonia and COVID-19 disease based on image evidence from a much larger collection. The primary goal of our design is to improve disease detection accuracy and reduce the frequency of inaccurate identifications. The hybrid DCNN network was trained and tested using X-ray image data that included three distinct types of pneumonia. According to the experiments’ findings, the model’s categorization accuracy is 99.25%. Given this high accuracy rate, the recommended strategy may be of assistance to medical professionals.

3. Materials and Methods

3.1. Dataset Collection

The COVID-QU-Ex collection [28] contains 33,920 chest X-ray images, all of which are publicly available.
The COVID-QU-Ex collection consists of three categories: normal, non-COVID infection, and COVID-19. Patients with normal (healthy) situations represent 32% of the total collection with 10,701 instances, non-COVID infection situations represent 33% with 11,263 instances, and COVID-19 situations represent 35% with 11,956 instances. These images represent two different diseases and one healthy state. Each image in the collection is stored in PNG format at a resolution of 256 × 256 pixels. Figure 1 illustrates a sample of a normal instance and two distinct disorders that may damage the lungs. Since the collection is already large and relatively well-balanced, as shown in Figure 2, there is no need to use data augmentation techniques to make it more balanced.
From the radiographic findings in Figure 1, a normal lung X-ray typically shows clear lung fields without any significant opacities or abnormalities. The lung markings appear normal, with the blood vessels and airway passages clearly visible. In cases of viral or bacterial pneumonia, the X-ray image often reveals areas of opacity or consolidation. These areas appear as dense, cloudy regions within the lung fields, indicating the presence of inflammation, fluid, or pus. The opacities can be patchy, focal, or lobar, depending on the severity and extent of the infection. X-ray findings in COVID-19 pneumonia show ground glass opacities (blurry areas) in multiple areas of the lungs. These opacities often have a peripheral distribution and can affect both lungs symmetrically [29,30].

3.2. Split Collection

In DCNN development, a collection is divided into three parts: training, validation, and testing. The training set optimizes the network’s weights by reducing the difference between predicted and actual outputs. The validation set is used during training to estimate the network’s effectiveness on unknown data and to tune its performance. The test set provides a final, objective assessment of the trained network’s performance on unseen data [31].
The COVID-QU-Ex collection was split 80:20 for training and testing. Additionally, 20% of the training images were used as validation data during the training phase. This ratio is typically used when the collection is large, adequate data is available to train the network, and sufficient image data remains for validation and testing [32]. The number of images per type used for training/validation/testing is outlined in Table 2.
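For illustration, a minimal sketch of this split in Python with scikit-learn is given below; `all_files` and `all_labels` are hypothetical lists of image paths and category labels, and stratification keeps the category proportions of Table 2 approximately intact:

```python
# Sketch of the 80:20 train/test split, with a further 20% of the
# training images held out for validation. 'all_files' and 'all_labels'
# are hypothetical lists of image paths and category labels.
from sklearn.model_selection import train_test_split

train_val_files, test_files, train_val_labels, test_labels = train_test_split(
    all_files, all_labels, test_size=0.20,
    stratify=all_labels, random_state=42)

train_files, val_files, train_labels, val_labels = train_test_split(
    train_val_files, train_val_labels, test_size=0.20,
    stratify=train_val_labels, random_state=42)
```

With 33,920 images, this yields roughly 21,715 training, 5417 validation, and 6788 test images, matching Table 2.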

3.3. Hybrid DCNN for Diagnosing Pneumonia and COVID-19 Disease

We developed a hybrid DCNN mechanism that is effective in distinguishing between the three distinct categories that can affect the lungs. The hybrid DCNN network combines VGG blocks with the inception module; this combination improves feature extraction and computing efficiency, and thereby increases accuracy.
The VGG19 [12] network comprises 19 layers in total: 16 convolutional layers and 3 fully connected layers. It was developed specifically to perform well on image categorization tasks, making its architecture a popular option for various computer vision applications. The 16 convolutional layers are organised into five blocks of two or four convolutional layers, each block followed by a max pooling layer. The blocks use small 3 × 3 filters with a stride of 1, and the number of filters gradually increases as the network deepens. The first two fully connected layers have 4096 neurons each, and the final layer performs the categorization using a softmax activation function.
The inception [13] module is composed of several parallel branches, each with a different filter size: convolutional layers with filters of varying sizes extract characteristics from the 224 × 224 × 3 input image; a max pooling layer reduces the spatial dimensionality of the feature maps generated by the convolutional layers; and a concatenation layer merges the outputs of the branches into a single multi-scale representation of the input. The inception module is widely employed in modern DCNN designs for computer vision and has demonstrated exemplary performance in various image categorization tasks.
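As an illustration, a simplified Keras sketch of such a module follows. The per-branch filter counts are our assumptions, chosen so that the concatenated output has 512 channels as described in this section; the 1 × 1 reduction convolutions of the original inception design are omitted for brevity:

```python
# A minimal inception-module sketch in Keras: parallel 1x1, 3x3 and 5x5
# convolutional branches plus a pooled branch, concatenated channel-wise.
from tensorflow.keras import layers

def inception_module(x, f1=128, f3=192, f5=96, fp=96):
    b1 = layers.Conv2D(f1, 1, padding="same", activation="relu")(x)
    b3 = layers.Conv2D(f3, 3, padding="same", activation="relu")(x)
    b5 = layers.Conv2D(f5, 5, padding="same", activation="relu")(x)
    bp = layers.MaxPooling2D(3, strides=1, padding="same")(x)
    bp = layers.Conv2D(fp, 1, padding="same", activation="relu")(bp)
    # 128 + 192 + 96 + 96 = 512 output channels (assumed split)
    return layers.Concatenate()([b1, b3, b5, bp])
```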
The hybrid DCNN mechanism for COVID-19 disease identification has the following elements: ten convolutional layers for feature extraction, four max-pooling layers that reduce the spatial dimensions of the feature maps, two inception modules, a global average pooling (GAP) layer, and a fully connected (FC) layer that conducts the categorization. The mechanism takes an input image of size (224, 224, 3) and passes it through the network to identify the disease category. The initial VGG block uses 64 filters, producing a (224, 224, 64) feature map and an output of shape (112, 112, 64). The second VGG block uses 128 filters, producing a (112, 112, 128) feature map and an output of shape (56, 56, 128). The third VGG block uses 256 filters, producing a (56, 56, 256) feature map and an output of shape (28, 28, 256). The final VGG block uses 512 filters, producing a (28, 28, 512) feature map and an output of shape (14, 14, 512). The first inception module uses 512 filters and produces an output of shape (7, 7, 512); the second inception module likewise uses 512 filters and produces an output of shape (7, 7, 512). The GAP layer yields an output of shape (1, 1, 512). Finally, the FC layer has an output of shape (1, 1, 3). Figure 3 depicts the diagram of the inception module, and Figure 4 illustrates the hybrid DCNN mechanism diagram.
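Putting the pieces together, a sketch of the full mechanism is given below, reusing `inception_module` from the previous sketch. The extra pooling step between the VGG backbone (14 × 14) and the first inception module (7 × 7) is our assumption, since the text does not state how this downsampling is performed:

```python
# Sketch of the hybrid DCNN: truncated-VGG19 blocks, two inception
# modules, GAP and a 3-way softmax. Uses inception_module from the
# sketch above.
from tensorflow.keras import layers, models

def vgg_block(x, filters, convs):
    # 'convs' 3x3 convolutions followed by 2x2 max pooling, as in VGG.
    for _ in range(convs):
        x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    return layers.MaxPooling2D(2, strides=2)(x)

inputs = layers.Input(shape=(224, 224, 3))
x = vgg_block(inputs, 64, 2)   # -> (112, 112, 64)
x = vgg_block(x, 128, 2)       # -> (56, 56, 128)
x = vgg_block(x, 256, 3)       # -> (28, 28, 256)
x = vgg_block(x, 512, 3)       # -> (14, 14, 512); 10 convolutions total
x = layers.MaxPooling2D(2)(x)  # -> (7, 7, 512); downsampling assumed
x = inception_module(x)        # -> (7, 7, 512)
x = inception_module(x)        # -> (7, 7, 512)
x = layers.GlobalAveragePooling2D()(x)              # -> (512,)
outputs = layers.Dense(3, activation="softmax")(x)  # normal / pneumonia / COVID-19
model = models.Model(inputs, outputs)
```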
Our hybrid DCNN approach combines the strengths of the VGG19 architecture and the inception module to achieve improved accuracy and speed while reducing computational complexity over existing methods. Specifically, the first four blocks of the VGG19 architecture form the backbone of our network; they are highly efficient at extracting low-level features from images. However, the later blocks of VGG19 are computationally expensive, particularly when working with large datasets. To address this, we append two inception modules to the truncated VGG19 backbone to improve the network’s ability to learn more valuable features, leading to an even higher accuracy. The two inception modules provide additional flexibility and complexity, enabling the network to capture a broader range of image features. By combining these two architectures, we leverage the strengths of both to achieve improved accuracy and speed while reducing computational complexity.

3.4. Implementation Description

All experiments were performed on a GPU (NVIDIA RTX 3050 with 8 GB RAM). Python 3, CUDA, the Keras package, CuDNN, Matplotlib and NumPy were the main libraries used to implement all networks. All networks were optimized using the Adam [33] optimizer with a learning rate of 0.0001, 30 epochs, and categorical cross-entropy as the loss function. Table 3 displays the specific training parameters for all networks.
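In code, the configuration of Table 3 corresponds to a compile/fit call along the following lines; this is a sketch assuming `model` from Section 3.3 and hypothetical Keras datasets `train_ds`/`val_ds` that yield batches of 32 images with one-hot labels:

```python
# Training setup matching Table 3. 'train_ds' and 'val_ds' are assumed
# tf.data datasets batched at 32 with one-hot encoded labels.
import tensorflow as tf

model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),
    loss="categorical_crossentropy",
    metrics=["accuracy"],
)
history = model.fit(train_ds, validation_data=val_ds, epochs=30)
```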

3.5. Performance Measures

Accuracy, precision, recall (sensitivity), and F1-score are the most popular measures for evaluating deep learning networks [34]. In addition, the Kappa coefficient [35] is used to assess the level of agreement between the predicted and actual labels on the test data. Consequently, these measures were selected for this work. All measures are based on the numbers of true negative (TN), true positive (TP), false positive (FP), and false negative (FN) cases. Furthermore, the confusion matrix is used to evaluate the performance of networks on categorization tasks. Finally, the ROC curve demonstrates how effectively the network discriminates between the various kinds of image data; the higher the area under the curve, the better the network distinguishes infected from non-infected cases. The formulas for the above measures are given in Equations (1)–(6):
$$\text{Accuracy} = \frac{TP + TN}{TP + FN + TN + FP} \times 100\% \qquad (1)$$
$$\text{Precision} = \frac{TP}{TP + FP} \times 100\% \qquad (2)$$
$$\text{Recall} = \frac{TP}{TP + FN} \times 100\% \qquad (3)$$
$$\text{F1-score} = 2 \times \frac{\text{Precision} \times \text{Recall}}{\text{Precision} + \text{Recall}} \times 100\% \qquad (4)$$
$$\text{Random Accuracy} = \frac{(TN + FP)(TN + FN) + (FN + TP)(FP + TP)}{(TP + FN + TN + FP)^2} \times 100\% \qquad (5)$$
$$\text{Kappa-score} = \frac{\text{Accuracy} - \text{Random Accuracy}}{1 - \text{Random Accuracy}} \times 100\% \qquad (6)$$
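In practice, these measures can be computed directly from the predicted and true test labels; a sketch with scikit-learn follows, where the label arrays `y_true` and `y_pred` are assumed:

```python
# Computing the reported measures from test-set predictions.
from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                             f1_score, cohen_kappa_score, confusion_matrix)

acc   = accuracy_score(y_true, y_pred) * 100
prec  = precision_score(y_true, y_pred, average="weighted") * 100
rec   = recall_score(y_true, y_pred, average="weighted") * 100
f1    = f1_score(y_true, y_pred, average="weighted") * 100
kappa = cohen_kappa_score(y_true, y_pred) * 100  # Equations (5)-(6)
cm    = confusion_matrix(y_true, y_pred)         # per-category breakdown
```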

4. Experimental Results

The immediate purpose of our suggested network is to improve the identification accuracy of COVID-19 disease and reduce miscategorization. Figure 5 and Figure 6 illustrate the accuracy and loss curves over 30 epochs of the training and validation stages. The hybrid DCNN shows the highest training and validation accuracy, at 99.32% and 97.60%, with losses of 0.1062 and 0.9260. In contrast, the lowest training and validation accuracy is obtained by the ResNet50 network, at 98.97% and 91.57%, with losses of 0.1548 and 0.9157. Analyzing the accuracy curve, the accuracy values of the hybrid DCNN are stable, without the overfitting shown by the other networks.
As can be seen in Figure 5, the hybrid DCNN outperforms the other popular CNN architectures, starting above 0.85 at epoch 0. The main reason for the superior performance is that the proposed approach uses the generic characteristics of images extracted from the ImageNet [36] dataset by the VGG19 backbone. Our model has thus already learned to recognize many visually useful elements, such as edges, textures, and shapes, and it learns features specific to the COVID-19 identification task through the two newly added inception modules. The other CNN architectures are also initialized with pre-trained ImageNet weights, but they do not achieve optimal results for the COVID-19 identification task. Thus, the hybrid DCNN is a powerful model that combines the advantages of both generic and specific feature extraction, resulting in top performance for the COVID-19 identification task.
Figure 7, Figure 8, Figure 9, Figure 10 and Figure 11 show the confusion matrix and ROC curve plots for all networks. Among the 6788 test instances, 51 were miscategorised by the proposed hybrid DCNN, 96 by the VGG19 with one inception module, 124 by the VGG19 network, 181 by the VGG16 network, and 241 by the ResNet50 network.
The performance of the five networks is outlined in Table 4; the standard deviation is included in parentheses. The proposed hybrid DCNN mechanism has the best performance, with 99.25% accuracy, 99.23% precision, 99.25% recall, a 99.24% F1-score, 99.43% AUC, and a 99.10% Kappa score. In contrast, the lowest performance was obtained by the ResNet50 network, with 96.45% accuracy, 96.41% precision, 96.41% recall, a 96.40% F1-score, 97.32% AUC, and a 95.17% Kappa score. The per-category results in Table 5 further confirm that the proposed hybrid DCNN is superior to the other networks.
Figure 12 shows results from the hybrid DCNN mechanism on some sample instances from the test set. For example, the instances shown in Figure 12a (top/bottom) are accurately diagnosed as “Normal” with probabilities greater than 98.82%. Moreover, the suggested strategy accurately identifies each instance in Figure 12b (top/bottom images). The proposed mechanism also identifies the instance in the top image of Figure 12c accurately. In contrast, the irregular opacity of the lungs affects the feature extraction process, so erroneous lung disease identifications may arise, as illustrated in Figure 12c (bottom image). Considering the outcomes, it can be deduced that the recommended hybrid DCNN mechanism enhances the accuracy of COVID-19 disease identification. Specifically, combining the highly effective first four blocks of the VGG19 architecture with the efficient inception modules allows our network to capture useful features missed by other methods, thereby reducing the frequency of inaccurate identifications.

5. Conclusions and Future Work

The COVID-19 pandemic has created a global health concern, with millions of individuals infected worldwide. The rapid spread of the disease has made early detection and accurate diagnosis crucial to prevent its further spread. This work presents a CAD system with a hybrid identification strategy that uses chest X-ray image data to categorize three distinct conditions. The hybrid DCNN identification mechanism consists of a combination of VGG blocks and two inception modules. Our network mechanism achieves 99.25% accuracy, a 99.10% Kappa score, 99.43% AUC, and a 99.24% F1-score. These results demonstrate that the proposed strategy can effectively distinguish between pneumonia, COVID-19, and normal chest X-ray images. In further research, the diagnostic accuracy on large-scale medical datasets has to be investigated, and appropriate experiments must be conducted to verify our hybrid DCNN identification strategy in specialized services such as service-oriented networks (SONs).

Author Contributions

Methodology, T.S.; conceptualization, T.S.; formal analysis, T.S., I.-M.T., M.V.S. and A.S.; investigation, T.S., M.V.S. and I.-M.T.; software, T.S.; project administration, T.S.; resources, T.S., I.-M.T. and A.S.; validation, T.S., M.V.S., I.-M.T. and A.S.; visualization, T.S. and A.S.; supervision, M.D.; writing—original draft preparation, T.S. and M.V.S.; writing—review and editing, M.D. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

This article uses the COVID-QU-Ex collection, which is fully available in [28].

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
AI: Artificial Intelligence
ARDS: Acute Respiratory Distress Syndrome
CAD: Computer-Aided Diagnostic
COPD: Chronic Obstructive Pulmonary Disease
CT: Computed Tomography
CXR: Chest X-Ray
DCNN: Deep Convolutional Neural Network
DL: Deep Learning
GPU: Graphics Processing Unit
ML: Machine Learning
MRI: Magnetic Resonance Imaging
ReLU: Rectified Linear Unit
SONs: Service-Oriented Networks
VGG: Visual Geometry Group

References

1. Lerner, D.K.; Garvey, K.L.; Arrighi-Allisan, A.E.; Filimonov, A.; Filip, P.; Shah, J.; Tweel, B.; Del Signore, A.; Schaberg, M.; Colley, P.; et al. Clinical features of parosmia associated with COVID-19 infection. Laryngoscope 2022, 132, 633–639.
2. Mollarasouli, F.; Zare-Shehneh, N.; Ghaedi, M. A review on corona virus disease 2019 (COVID-19): Current progress, clinical features and bioanalytical diagnostic methods. Microchim. Acta 2022, 189, 103.
3. Watanabe, A.; So, M.; Mitaka, H.; Ishisaka, Y.; Takagi, H.; Inokuchi, R.; Iwagami, M.; Kuno, T. Clinical features and mortality of COVID-19-associated mucormycosis: A systematic review and meta-analysis. Mycopathologia 2022, 187, 271–289.
4. Irmici, G.; Cè, M.; Caloro, E.; Khenkina, N.; Della Pepa, G.; Ascenti, V.; Martinenghi, C.; Papa, S.; Oliva, G.; Cellina, M. Chest X-ray in Emergency Radiology: What Artificial Intelligence Applications Are Available? Diagnostics 2023, 13, 216.
5. Taleghani, N.; Taghipour, F. Diagnosis of COVID-19 for controlling the pandemic: A review of the state-of-the-art. Biosens. Bioelectron. 2021, 174, 112830.
6. Ravi, V.; Narasimhan, H.; Pham, T.D. A cost-sensitive deep learning-based meta-classifier for pediatric pneumonia classification using chest X-rays. Expert Syst. 2022, 39, e12966.
7. Rajaraman, S.; Guo, P.; Xue, Z.; Antani, S.K. A Deep Modality-Specific Ensemble for Improving Pneumonia Detection in Chest X-rays. Diagnostics 2022, 12, 1442.
8. Hasan, M.M.; Islam, M.U.; Sadeq, M.J.; Fung, W.K.; Uddin, J. Review on the Evaluation and Development of Artificial Intelligence for COVID-19 Containment. Sensors 2023, 23, 527.
9. Soomro, T.A.; Zheng, L.; Afifi, A.J.; Ali, A.; Yin, M.; Gao, J. Artificial intelligence (AI) for medical imaging to combat coronavirus disease (COVID-19): A detailed review with direction for future research. Artif. Intell. Rev. 2022, 55, 1409–1439.
10. Pfaff, E.R.; Girvin, A.T.; Bennett, T.D.; Bhatia, A.; Brooks, I.M.; Deer, R.R.; Dekermanjian, J.P.; Jolley, S.E.; Kahn, M.G.; Kostka, K.; et al. Identifying who has long COVID in the USA: A machine learning approach using N3C data. Lancet Digit. Health 2022, 4, e532–e541.
11. Ahsan, M.M.; Luna, S.A.; Siddique, Z. Machine-learning-based disease diagnosis: A comprehensive review. Healthcare 2022, 10, 541.
12. Simonyan, K.; Zisserman, A. Very deep convolutional networks for large-scale image recognition. arXiv 2014, arXiv:1409.1556.
13. Szegedy, C.; Vanhoucke, V.; Ioffe, S.; Shlens, J.; Wojna, Z. Rethinking the inception architecture for computer vision. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 2818–2826.
14. Szegedy, C.; Liu, W.; Jia, Y.; Sermanet, P.; Reed, S.; Anguelov, D.; Erhan, D.; Vanhoucke, V.; Rabinovich, A. Going deeper with convolutions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Boston, MA, USA, 7–12 June 2015; pp. 1–9.
15. Conti, V.; Militello, C.; Rundo, L.; Vitabile, S. A novel bio-inspired approach for high-performance management in service-oriented networks. IEEE Trans. Emerg. Top. Comput. 2020, 9, 1709–1722.
16. Han, X.; Hu, Z.; Wang, S.; Zhang, Y. A Survey on Deep Learning in COVID-19 Diagnosis. J. Imaging 2022, 9, 1.
17. Ayadi, M.; Ksibi, A.; Al-Rasheed, A.; Soufiene, B.O. COVID-AleXception: A Deep Learning Model Based on a Deep Feature Concatenation Approach for the Detection of COVID-19 from Chest X-ray Images. Healthcare 2022, 10, 2072.
18. Hafeez, U.; Umer, M.; Hameed, A.; Mustafa, H.; Sohaib, A.; Nappi, M.; Madni, H.A. A CNN based coronavirus disease prediction system for chest X-rays. J. Ambient. Intell. Humaniz. Comput. 2022, 1–15.
19. Huang, M.L.; Liao, Y.C. A lightweight CNN-based network on COVID-19 detection using X-ray and CT images. Comput. Biol. Med. 2022, 146, 105604.
20. Khan, A.I.; Shah, J.L.; Bhat, M.M. CoroNet: A deep neural network for detection and diagnosis of COVID-19 from chest X-ray images. Comput. Methods Programs Biomed. 2020, 196, 105581.
21. Ghose, P.; Uddin, M.A.; Acharjee, U.K.; Sharmin, S. Deep viewing for the identification of COVID-19 infection status from chest X-ray image using CNN based architecture. Intell. Syst. Appl. 2022, 16, 200130.
22. Ibrokhimov, B.; Kang, J.Y. Deep Learning Model for COVID-19-Infected Pneumonia Diagnosis Using Chest Radiography Images. BioMedInformatics 2022, 2, 654–670.
23. Khan, I.U.; Aslam, N. A deep-learning-based framework for automated diagnosis of COVID-19 using X-ray images. Information 2020, 11, 419.
24. Kaya, Y.; Gürsoy, E. A MobileNet-based CNN model with a novel fine-tuning mechanism for COVID-19 infection detection. Soft Comput. 2023, 27, 5521–5535.
25. Nayak, S.R.; Nayak, D.R.; Sinha, U.; Arora, V.; Pachori, R.B. An Efficient Deep Learning Method for Detection of COVID-19 Infection Using Chest X-ray Images. Diagnostics 2023, 13, 131.
26. Sanida, T.; Sideris, A.; Tsiktsiris, D.; Dasygenis, M. Lightweight neural network for COVID-19 detection from chest X-ray images implemented on an embedded system. Technologies 2022, 10, 37.
27. Sanida, T.; Sideris, A.; Chatzisavvas, A.; Dossis, M.; Dasygenis, M. Radiography Images with Transfer Learning on Embedded System. In Proceedings of the 2022 7th South-East Europe Design Automation, Computer Engineering, Computer Networks and Social Media Conference (SEEDA-CECNSM), Ioannina, Greece, 23–25 September 2022; IEEE: Piscataway, NJ, USA, 2022; pp. 1–4.
28. Tahir, A.M.; Chowdhury, M.E.; Khandakar, A.; Rahman, T.; Qiblawey, Y.; Khurshid, U.; Kiranyaz, S.; Ibtehaz, N.; Rahman, M.S.; Al-Maadeed, S.; et al. COVID-19 infection localization and severity grading from chest X-ray images. Comput. Biol. Med. 2021, 139, 105002.
29. Yasin, R.; Gouda, W. Chest X-ray findings monitoring COVID-19 disease course and severity. Egypt. J. Radiol. Nucl. Med. 2020, 51, 193.
30. Rousan, L.A.; Elobeid, E.; Karrar, M.; Khader, Y. Chest X-ray findings and temporal lung changes in patients with COVID-19 pneumonia. BMC Pulm. Med. 2020, 20, 245.
31. Sanida, M.V.; Sanida, T.; Sideris, A.; Dasygenis, M. An Efficient Hybrid CNN Classification Model for Tomato Crop Disease. Technologies 2023, 11, 10.
32. Mohanty, S.P.; Hughes, D.P.; Salathé, M. Using deep learning for image-based plant disease detection. Front. Plant Sci. 2016, 7, 1419.
33. Sanida, T.; Tsiktsiris, D.; Sideris, A.; Dasygenis, M. A heterogeneous implementation for plant disease identification using deep learning. Multimed. Tools Appl. 2022, 81, 15041–15059.
34. Tharwat, A. Classification assessment methods. Appl. Comput. Inform. 2020, 17, 168–192.
35. Delgado, R.; Tibau, X.A. Why Cohen’s Kappa should be avoided as performance measure in classification. PLoS ONE 2019, 14, e0222916.
36. Deng, J.; Dong, W.; Socher, R.; Li, L.J.; Li, K.; Fei-Fei, L. Imagenet: A large-scale hierarchical image database. In Proceedings of the 2009 IEEE Conference on Computer Vision and Pattern Recognition, Miami, FL, USA, 20–25 June 2009; IEEE: Piscataway, NJ, USA, 2009; pp. 248–255.
Figure 1. X-ray samples by category from the COVID-QU-Ex collection (the white markers indicate infected areas).
Figure 2. The allocation of X-ray image data per category from the COVID-QU-Ex collection is balanced; thus, no augmentation techniques are required.
Figure 3. The block diagram of the inception module.
Figure 4. The block diagram of the proposed hybrid DCNN for COVID-19 disease identification.
Figure 5. Comparison plot of the accuracy curves of the training/validation for each network.
Figure 6. Comparison plot of the loss curves of the training/validation for each network.
Figure 7. Results of the confusion matrix and ROC curve for the hybrid DCNN on the test dataset.
Figure 8. Results of the confusion matrix and ROC curve for the VGG19 with one inception module on the test dataset.
Figure 9. Results of the confusion matrix and ROC curve for the VGG19 network on the test dataset.
Figure 10. Results of the confusion matrix and ROC curve for the VGG16 network on the test dataset.
Figure 11. Results of the confusion matrix and ROC curve for the ResNet50 network on the test dataset.
Figure 12. Indicative instances evaluated by the hybrid DCNN mechanism.
Table 1. A summary of studies using CNN methods for COVID-19 identification.

| Study | Best Method | Accuracy (%) |
|-------|-------------------|--------------|
| [17]  | COVID-AleXception | 98.68        |
| [18]  | Custom CNN        | 89.855       |
| [19]  | Lightweight CNN   | 98.33        |
| [20]  | CoroNet           | 89.60        |
| [21]  | Custom CNN        | 98.50        |
| [22]  | VGG19             | 96.60        |
| [23]  | VGG16             | 99.33        |
| [24]  | MobileNetV2       | 97.61        |
| [25]  | LW-CORONet        | 98.67        |
| [26]  | MobileNetV2       | 95.80        |
Table 2. Number of image data per type for training/validation/testing in the COVID-QU-Ex collection.

| Category | Number of Images | Training Images | Validation Images | Test Images |
|----------|------------------|-----------------|-------------------|-------------|
| Normal (Healthy) | 10,701 | 6849 | 1712 | 2140 |
| Non-COVID infections (Viral or Bacterial Pneumonia) | 11,263 | 7208 | 1802 | 2253 |
| COVID-19 | 11,956 | 7658 | 1903 | 2395 |
| Total | 33,920 | 21,715 | 5417 | 6788 |
Table 3. Configurations of the training parameters for all networks.

| Name of Parameter | Value for Training |
|-------------------|--------------------|
| Optimizer         | Adam               |
| Number of epochs  | 30                 |
| Learning rate     | 0.0001             |
| Mini batch size   | 32                 |
| Loss function     | Cross-entropy      |
Table 4. Performance measures of five networks, standard deviation included in parentheses.

| Network | Accuracy (%) | Precision (%) | Recall (%) | F1-Score (%) | AUC (%) | Kappa-Score (%) |
|---------|--------------|---------------|------------|--------------|---------|-----------------|
| Hybrid DCNN | 99.25 (0.0254) | 99.23 (0.0270) | 99.25 (0.0295) | 99.24 (0.0307) | 99.43 (0.0354) | 99.10 (0.0386) |
| VGG19 with one inception module | 98.59 (0.0427) | 98.55 (0.0454) | 98.59 (0.0458) | 98.56 (0.0507) | 98.94 (0.0492) | 98.45 (0.0471) |
| VGG19 | 98.17 (0.0474) | 98.13 (0.0432) | 98.18 (0.0481) | 98.15 (0.0507) | 98.63 (0.0531) | 97.84 (0.0516) |
| VGG16 | 97.33 (0.0706) | 97.31 (0.0732) | 97.28 (0.0713) | 97.30 (0.0634) | 97.97 (0.0642) | 96.61 (0.0770) |
| ResNet50 | 96.45 (0.0552) | 96.41 (0.0507) | 96.41 (0.0587) | 96.40 (0.0602) | 97.32 (0.0580) | 95.17 (0.0524) |
Table 5. Performance evaluation of five networks.

| Network | Categories | Precision | Recall | F1-Score |
|---------|------------|-----------|--------|----------|
| Hybrid DCNN | COVID-19 | 0.9983 | 0.9933 | 0.9958 |
| | Non-COVID | 0.9902 | 0.9916 | 0.9909 |
| | Normal | 0.9884 | 0.9925 | 0.9904 |
| VGG19 with one inception module | COVID-19 | 0.9972 | 0.9901 | 0.9948 |
| | Non-COVID | 0.9901 | 0.9734 | 0.9819 |
| | Normal | 0.9668 | 0.9919 | 0.9802 |
| VGG19 | COVID-19 | 0.9962 | 0.9833 | 0.9897 |
| | Non-COVID | 0.9804 | 0.9751 | 0.9777 |
| | Normal | 0.9675 | 0.9869 | 0.9771 |
| VGG16 | COVID-19 | 0.9777 | 0.9891 | 0.9834 |
| | Non-COVID | 0.9727 | 0.9654 | 0.9690 |
| | Normal | 0.9690 | 0.9640 | 0.9665 |
| ResNet50 | COVID-19 | 0.9827 | 0.9737 | 0.9782 |
| | Non-COVID | 0.9541 | 0.9685 | 0.9612 |
| | Normal | 0.9554 | 0.9500 | 0.9527 |
