Article

WMR-DepthwiseNet: A Wavelet Multi-Resolution Depthwise Separable Convolutional Neural Network for COVID-19 Diagnosis

1 School of Computer Science and Engineering, University of Electronic Science and Technology of China, Chengdu 611731, China
2 School of Information and Software Engineering, University of Electronic Science and Technology of China, Chengdu 611731, China
3 School of Management and Economics, University of Electronic Science and Technology of China, Chengdu 611731, China
4 Department of Information System and Technology, University of Missouri-St. Louis, St. Louis, MO 63121, USA
5 School of Information and Communication Engineering, University of Electronic Science and Technology of China, Chengdu 611731, China
* Author to whom correspondence should be addressed.
Diagnostics 2022, 12(3), 765; https://doi.org/10.3390/diagnostics12030765
Submission received: 1 February 2022 / Revised: 7 March 2022 / Accepted: 18 March 2022 / Published: 21 March 2022

Abstract

Timely discovery of COVID-19 could aid in formulating a suitable treatment plan for disease mitigation and containment decisions. The widely used COVID-19 test requires a laborious manual procedure and has low sensitivity. Computed tomography and chest X-ray are other methods utilized by numerous studies for detecting COVID-19. In this article, we propose a CNN called the depthwise separable convolution network with wavelet multiresolution analysis module (WMR-DepthwiseNet), which robustly learns both spatial and channel-wise details automatically for COVID-19 identification from a limited radiograph dataset, a critical capability given the rapid growth of COVID-19. The model utilizes an effective strategy to prevent the loss of spatial details, a prevalent issue in traditional convolutional neural networks, and its depthwise separable connectivity framework ensures reusability of feature maps by directly connecting each layer to all subsequent layers, allowing feature representations to be extracted from few data. We evaluate the proposed model on a public-domain dataset of confirmed COVID-19 cases and other pneumonia illnesses. The proposed method achieves 98.63% accuracy, 98.46% sensitivity, 97.99% specificity, and 98.69% precision on the chest X-ray dataset, whereas on the computed tomography dataset the model achieves 96.83% accuracy, 97.78% sensitivity, 96.22% specificity, and 97.02% precision. According to the results of our experiments, our model achieves state-of-the-art accuracy with only a few training cases available, which is useful for COVID-19 screening. This new paradigm is expected to contribute significantly to the battle against COVID-19 and other life-threatening diseases.

1. Introduction

The global COVID-19 epidemic has infected 347 million people around the globe, with over 5.5 million deaths confirmed by the World Health Organization (WHO) as of 25 January 2022 [1]. The main strategy for better managing this pandemic is to find, isolate, and care for patients as soon as possible. The ability to quickly, easily, affordably, and reliably identify COVID-19 pathology in a person is critical to abating the spread of the contagion. The traditional method for detecting COVID-19 is the reverse transcription polymerase chain reaction (RT-PCR) test [2]. During an RT-PCR test, small quantities of viral RNA are collected from a nasal swab, amplified, and evaluated, with virus detection generally signified by a fluorescent dye. Unfortunately, the RT-PCR procedure is time-consuming and manual, taking up to two days to complete. False-positive polymerase chain reaction (PCR) tests have also been recorded in some studies [3,4]. Imaging-based techniques such as computed tomography (CT) imaging, chest X-ray (CXR) imaging [5,6,7,8], and ultrasound imaging [9] are examples of other research methods. CT scanning machines are often troublesome to use for COVID-19 patients, since patients must often be moved to the CT room, the equipment must be thoroughly cleaned after each use, and there is a higher risk of radiation exposure [9].
CT has been successfully used as a supportive method for evaluating the condition of COVID-19 patients, despite the fact that it is not approved as a primary diagnostic tool [6]. The most common CT findings are considered to be ground-glass opacities (GGO) during the early and progressive stages and air-space consolidation during the peak stage, while bronchovascular thickening within lesions and traction bronchiectasis are both evident during the absorption stage. Machine learning algorithms have been reported to perform significantly well for the diagnosis of COVID-19 using CXR and CT scans. The multilayer perceptron (MLP), a common type of ANN, has shown promising capability for predicting COVID-19 cases with acceptable accuracy [10].
The application of DL frameworks to diagnose COVID-19 from CT images has shown promising results in several studies [6,11,12]. CT scans and RT-PCR tests are relatively expensive [13], and clinicians are compelled to restrict testing to vulnerable populations due to excessive demand. CXR imaging is a relatively low-cost means of detecting lung infections, and it can also be used to detect COVID-19 [14]. With relatively small and large datasets, convolutional neural networks (CNNs) have obtained state-of-the-art results in medical imaging research [15,16,17,18]. Due to their large number of parameters, CNNs can easily overfit on a small dataset; as a result, generalization performance scales with the size of the labeled data. Small datasets present the most difficult challenge in the medical imaging domain because of the restricted quantity and variety of samples [5,6,7].
A range of medical biomarkers and abnormalities have also been investigated as indicators of disease development, and there are some indications that imaging data could supplement these models [19,20,21]. While these methodologies have been utilized to examine COVID-19 in recent research, some have been applied to multi-institutional chest X-ray image samples [22,23]. The relationship of ground-glass opacities and lung consolidation on CXR with disease severity and progression has been qualitatively characterized in recent research [24]. DL algorithms combined with wavelet multiresolution analysis have also been used extensively in attack detection and ECG-based biometric identification [25,26].
The aim of this research is to establish a depthwise separable convolution network with a wavelet multiresolution analysis module for COVID-19 screening from chest X-ray (CXR) and computed tomography (CT) images. For a novel medical predicament such as COVID-19, obtaining a sufficiently large compilation of medical images for training deep learning (DL) algorithms is difficult due to the time and resources required to collect and label images.
Medical image mining is a time-consuming and costly procedure that necessitates the involvement of radiologists and researchers [6]. Furthermore, due to the recent nature of the COVID-19 outbreak, adequate CXR image data are difficult to come by. In AI-based COVID-19 screening systems using CT and CXR imaging, loss of spatial information is still a major concern, which in most cases results from the downsampling operation. The consequence is that the AI-based system learns incomplete information from the data, missing the distinct features needed for optimal classification.
To alleviate this drawback, we propose a novel depthwise separable convolution network with a wavelet multiresolution analysis module that optimizes the downsampling operation without losing spatial details for COVID-19 classification. The contributions of this work include: (1) We magnify the feature extraction robustness of the network by replacing the max-pooling layers with discrete wavelet transform (DWT) pooling, reducing the loss of spatial details and achieving dimensionality reduction without losing positional details by employing scaling and wavelet functions. (2) The depthwise separable connectivity framework ensures reusability of feature maps by directly connecting each layer to all subsequent layers, allowing feature representations to be extracted from few data. This enables the model to learn the spatial details needed for effective classification. (3) This paper is the first work to introduce a depthwise separable convolution network with a wavelet multiresolution analysis module for feature extraction from radiograph images. The proposed model is an end-to-end learning technique for COVID-19 classification that achieves much higher diagnostic accuracy.
The subsequent sections of this article are organized as follows. In Section 2, we survey related works. In Section 3, we give a detailed explanation of the methodology, descriptive information about the dataset, and the implementation details. The experimental outcomes are presented in Section 4. In Section 5, we shed more light on the evaluation and validation of our model. In Section 6, we discuss the relevance of our proposed scheme. Section 7 concludes the paper.

2. Related Works

COVID-19 investigations based on DL algorithms have been on the rise in recent research. In [27], an 18-layer custom ResNet architecture pretrained with ImageNet weights was evaluated on 100 COVID-19 and 1431 pneumonia instances from a CXR dataset.
Lu et al. [19], who adopted a neural network approach for predicting intensive care unit (ICU) admission, concluded that biomarkers such as creatinine and C-reactive protein showed momentary variations between admitted COVID-19 patients transferred from the ward to the ICU and patients who were not transferred. Li et al. [28] formulated a DL model and a risk rating algorithm for the outcomes of ICU admission and in-hospital death, with ROC-AUC as the metric for evaluating model performance. The authors discovered that these biomarkers were the leading ICU indicators, aside from age, cardiac troponin, and oxygen saturation, which were the main mortality indicators. Similarly, Hou et al. [29] formulated a machine learning (ML) algorithm to predict the leading ICU admission and mortality indicators, which are temperature, procalcitonin, age, lactate dehydrogenase, lymphocytes, pulse oxygen saturation, ferritin, and C-reactive protein.
Nneji et al. [30] suggested a scheme that combines a wavelet transform and a generative adversarial network (GAN) CNN to enhance low-quality radiograph images for COVID-19 identification. A custom residual CNN approach was suggested in [31,32] to accurately differentiate COVID-19 instances from healthy CXR images and other pneumonia-related ailments. COVIDX-Net is a compilation of DL frameworks that were trained on 25 verified COVID-19 instances [33]. Recent studies have focused on automatic coronavirus pneumonia investigation from CT scans with encouraging results [34,35,36].
A ResNet-50 transfer-learning-based CNN algorithm was proposed in [37] to identify COVID-19 on a private dataset, with an overall accuracy of 94% against a regular standard CT slice. In [38], a weakly supervised approach was suggested in which segmentation masks were produced automatically and the CT image and mask were supplied to the algorithm for classification. The authors of this study claimed that their procedure obtained 95.9% AUC. A combination of DL algorithms was suggested in [39] to achieve lung field segmentation by hybridizing a 3D ResNet-50 transfer learning model with a U-Net preprocessor in a single architecture to classify COVID-19 and distinguish it from non-COVID-19 instances on a broad range of nonpublic data extracted from six hospitals. The authors of this study claim that their algorithm obtained 87.1% sensitivity.
An ML approach was proposed in [40] to tackle the difficulty of automatically differentiating COVID-19 from other acquired pneumonia diseases. An infection-size-aware technique with a random forest classifier was proposed in [41] to extract infection and lung areas from segmented scans and categorize images based on infection size, using 1071 healthy and 182 COVID-19 instances. The authors of this study claimed that the algorithm obtained 87.9% accuracy when trained on public and private datasets. A joint feature-pyramid-network-based attention module with ResNet-50 proposed in [42] obtained 86.4% accuracy and 90.3% sensitivity when tested on a private dataset of 24 healthy and 27 COVID-19 individual instances. A DL-inspired random forest model was proposed in [43] to focus on extensive features for checking COVID-19 severity. The procedure achieved an overall accuracy of 87.5% on 176 instances.
In summary, most studies, including those that have utilized CXR and CT imaging, rely on an insufficient number of COVID-19 images from various sources with no standardized protocols. They appear to be simple applications of existing AI-based algorithms, leading to minimal AI innovation and clinical utility. The high data discrepancy across studies causes comparison perplexity, despite the fact that all models performed admirably [44]. Generally, models for COVID-19 examination and investigation based on CXR or CT images perform well.
Notwithstanding, a few models utilize only 10 COVID-19 test instances, and hardly any model utilizes external validation, owing to data scarcity. As a consequence, they may or may not be applicable to other contexts. A system that uses less data and attains high accuracy with fewer training instances is required. This will permit, to a greater extent, the inclusion of uncommon data classes in the testing set. The objective of this article is to formulate a scheme that can help to enhance previous models and achieve state-of-the-art results.

3. Materials and Methods

3.1. Datasets

Artificial intelligence (AI) has achieved a remarkable reputation in the field of clinical research. In the face of the current pandemic ravaging our world, AI can assist healthcare workers in the process of disease detection, boosting the accuracy of identification methods at a fast rate and perhaps saving lives. The scarcity of appropriate data is perhaps the most significant barrier facing AI-based approaches. Since AI-based approaches are data-driven, a large amount of data is needed, and the process of data collection is quite tedious, as there are many ethical concerns raised by experts. Bearing this in mind, we resorted to well-known and validated dataset repositories for the collection and compilation of the dataset. In this article, we collected CXR data of different pneumonia-related illnesses from several open sources [45,46,47,48]. As illustrated in Table 1, we collected 3029 scans of bacterial pneumonia, 8851 scans of healthy patients, and 2983 scans of viral pneumonia from the Kaggle database of the Radiological Society of North America (RSNA) [45]. We collected 74,999 scans of other pneumonia-related illnesses from the National Institutes of Health (NIH) [46]. We collected 3616 COVID-19 CXR scans from the COVID-19 radiography database [47], as illustrated in Table 1, for the purpose of validating our proposed architecture on the multiclass classification problem. The COVID-19 CT samples were obtained from the COVID-19 dataset [48], as depicted in Table 2, for binary classification. As indicated, there are approximately 93,627 CXR scans covering COVID-19, 10 other pneumonia-related illnesses, and healthy instances, as well as a total of 2482 CT scans of COVID-19 and non-COVID-19 samples. Since the number of scans varies per category, we selected 2000 CXR scans from each category, summing to 24,000 CXR images. Since the amount of CXR associated with each class is then balanced, the dataset is partitioned into three sets of 70%, 20%, and 10% for training, validation, and testing, respectively. Similarly, the CT dataset is partitioned in the same manner from a selection of 1230 scans from each category. Figure 1 gives a visual representation of the dataset distribution for the CXR scans, while Figure 2 displays the visual representation for the CT scans.

3.2. Proposed WMR-DepthwiseNet

In this article, we propose a deep convolutional neural network called the depthwise separable convolution network with wavelet multiresolution analysis module (WMR-DepthwiseNet) for the classification of COVID-19, healthy, and other pneumonia cases. As depicted in Figure 3, the core structure of WMR-DepthwiseNet is the depthwise separable convolution connectivity and the wavelet multiresolution analysis module. The parameters of the proposed depthwise separable convolution are presented in Table 3. To begin with, an initial 3 × 3 standard convolution is executed on the input radiograph image, and the feature maps are fed into the depthwise separable convolution block to obtain features with the help of the depthwise separable convolution connectivity structure. The input features are concatenated with the output features by the depthwise separable convolution connectivity structure in an iterative manner that enables each convolution layer to receive raw details from all prior layers, which achieves reusability of feature maps with the goal of extracting more features from fewer radiograph images.
Consequently, a pointwise convolution layer is introduced to accomplish subsampling. The pointwise convolution layer consists of a 1 × 1 convolution, a rectified linear unit (ReLU), and batch normalization (BN). A wavelet multiresolution analysis module is implemented to realize channel-wise concatenation of both the spatial and spectral details of the input and output feature maps, enabling the network to pay more attention to positional details without loss of spatial information. The structure of the wavelet multiresolution analysis consists of a 1 × 1 pointwise convolution, the detail and approximate coefficients, and channel-wise concatenation. At the tail end of the depthwise separable convolution block, global average pooling is applied to the feature maps before sending them to a 1 × 1 convolutional layer instead of the conventional fully connected layer, followed by a softmax layer for classifying the output of the prediction for the multiclass problem; for the binary class problem, we substituted the softmax layer with a sigmoid layer. To maintain a fixed size of the feature maps, zero-padding is applied to all the convolution layers.
The overall structure of the WMR-DepthwiseNet employs 3 × bottleneck modules of 3 × 3 depthwise separable convolution, 8 × bottleneck modules of 5 × 5 depthwise separable convolution, an efficient last stage serving as the classification head, and four levels of wavelet multiresolution decomposition. The bottleneck module of the depthwise separable convolution and the wavelet multiresolution decomposition are discussed in the following subsections.

3.2.1. Depthwise Separable Convolution Module

The depthwise separable convolution module is a factorized form of a conventional convolution that consists of a depthwise convolution and a pointwise layer of 1 × 1 convolution. The depthwise convolution applies a single convolution filter to every input channel to carry out lightweight filtering. The pointwise layer of 1 × 1 convolution ensures that new features are created by computing simple summations of the input channels. The depthwise separable convolution layer is depicted in Figure 4, while Figure 5 shows the transition from the regular convolution to the depthwise separable convolution, which is built with the following operations: 3 × 3 convolution, 5 × 5 convolution, batch normalization (BN), rectified linear unit (ReLU), 1 × 1 convolution, batch normalization (BN), and rectified linear unit (ReLU).
The 1 × 1 pointwise convolution is incorporated as a bottleneck layer to reduce the feature maps of the input before every 3 × 3 and 5 × 5 depthwise convolution, which enhances computational efficiency. Since the number of feature-map channels output by each depthwise separable convolution block contributes to the computational cost, the 1 × 1 pointwise convolution compresses the number of input feature-map channels to be equivalent to the number of output feature-map channels, while the 3 × 3 depthwise convolution extracts details from the feature maps and ensures the number of channels does not change.
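For illustration, this bottleneck arrangement can be sketched in Keras as follows. This is a minimal sketch: the channel widths and the helper name bottleneck_dws_block are illustrative assumptions, not the paper's exact configuration.

```python
import tensorflow as tf
from tensorflow.keras import layers

def bottleneck_dws_block(x, bottleneck_channels, out_channels, kernel_size=3):
    """One bottleneck depthwise separable convolution block (illustrative)."""
    # 1x1 pointwise bottleneck compresses the input feature-map channels.
    x = layers.Conv2D(bottleneck_channels, 1, padding="same", use_bias=False)(x)
    x = layers.BatchNormalization()(x)
    x = layers.ReLU()(x)
    # Depthwise convolution filters each channel independently (k x k).
    x = layers.DepthwiseConv2D(kernel_size, padding="same", use_bias=False)(x)
    x = layers.BatchNormalization()(x)
    x = layers.ReLU()(x)
    # 1x1 pointwise convolution mixes channels to create new features.
    x = layers.Conv2D(out_channels, 1, padding="same", use_bias=False)(x)
    x = layers.BatchNormalization()(x)
    return layers.ReLU()(x)

# Example: a 5x5 bottleneck block on a 112x112x64 feature map.
inp = layers.Input((112, 112, 64))
out = bottleneck_dws_block(inp, bottleneck_channels=32, out_channels=64,
                           kernel_size=5)
```

A 3 × 3 or 5 × 5 variant is obtained by passing the corresponding kernel_size, matching the two modules described in this section.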
The first depthwise separable convolutional module consists of three 3 × 3 bottleneck depthwise separable convolutional layers, and the second consists of eight 5 × 5 bottleneck depthwise separable convolutional layers. Let $G_d(\cdot)$ represent the transformation of the $d$-th depthwise separable convolution layer, and denote the output of the $d$-th layer as $Y_d$. To enhance information flow between the depthwise separable convolution layers within each module, the module uses direct connections from prior layers to all subsequent layers. That is, the $d$-th depthwise separable convolution layer receives the feature maps of all the preceding layers, $Y_0, \ldots, Y_{d-1}$, as depicted in Equation (1):
$Y_d = G_d([Y_0, Y_1, \ldots, Y_{d-1}])$   (1)
where $[Y_0, Y_1, \ldots, Y_{d-1}]$ denotes the concatenation of the feature maps generated in depthwise separable convolution layers $0, \ldots, d-1$. This type of depthwise separable connectivity framework realizes reusability of feature maps, which makes it possible to mine more features from the limited radiograph scans and enhance classification accuracy.
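A hedged sketch of this connectivity pattern follows; it reuses the bottleneck_dws_block helper from the previous sketch, and the number of layers and growth width are illustrative assumptions rather than the paper's exact settings.

```python
import tensorflow as tf
from tensorflow.keras import layers

def dense_dws_module(x, num_layers=3, kernel_size=3, growth=32):
    # Implements Equation (1): Y_d = G_d([Y_0, Y_1, ..., Y_{d-1}]).
    features = [x]
    for _ in range(num_layers):
        # Concatenate all earlier outputs channel-wise before layer G_d.
        y = features[0] if len(features) == 1 else layers.Concatenate()(features)
        # G_d is a bottleneck depthwise separable block (see previous sketch).
        y = bottleneck_dws_block(y, bottleneck_channels=4 * growth,
                                 out_channels=growth, kernel_size=kernel_size)
        features.append(y)  # Y_d becomes an input to every later layer
    return layers.Concatenate()(features)
```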

3.2.2. Discrete Wavelet Transform

To enhance identification performance without manually delineating infected lung areas, a natural approach is the following: if the model can unaidedly focus on infected areas without losing spatial information, it would enhance its ability to differentiate the interclass discrepancies between COVID-19 and other pneumonia, which would automatically enhance classification accuracy. Conventional CNNs mainly adopt pooling layers for the downsampling operation to achieve dimensionality reduction, which usually leads to loss of spatial information. Some advanced CNNs have adopted channel and spatial attention mechanisms to strengthen the network's adaptability to critical information by aggregating and unaidedly recalibrating feature maps through spatial-wise and channel-wise approaches. Several attention schemes have been proposed, such as squeeze-and-excitation [49], the bottleneck attention scheme [50], SCA-CNN [51], and so on. The major drawback of these attention schemes is that they sometimes lead to a decline in performance and accuracy when the feature map is multiplied by the two attention maps and the weight map produced during the early phase of the network's training, when the parameters are not yet well trained. To alleviate the above drawbacks, we introduce discrete wavelet transform pooling to replace the conventional pooling operation in standard CNNs, which enables the network to retain spatial information and achieve dimensionality reduction without loss of information and spatial details, thereby improving the network's performance and computational efficiency. Let $\psi(\cdot)$ denote the wavelet function defined over the real axis $(-\infty, \infty)$, where the integral of $\psi(\cdot)$ is zero, as presented in Equation (2). Figure 6 shows the operation of the discrete wavelet transform for downsizing.
$\int_{-\infty}^{\infty} \psi(u)\,du = 0$   (2)
The integral of the square of $\psi(\cdot)$ is unity, as presented in Equation (3):
$\int_{-\infty}^{\infty} \psi^{2}(u)\,du = 1$   (3)
Equation (4) explicitly expresses the admissibility condition.
$C_{\psi} = \int_{0}^{\infty} \frac{|\hat{\psi}(f)|^{2}}{f}\,df \quad \text{satisfies} \quad 0 < C_{\psi} < \infty$   (4)

where $\hat{\psi}(f)$ denotes the Fourier transform of $\psi$.
By translating and dilating this mother wavelet, as shown in Equation (5), a twofold-indexed family of wavelets can be formed.
$\psi_{\lambda, t}(u) = \frac{1}{\sqrt{\lambda}}\,\psi\!\left(\frac{u - t}{\lambda}\right)$   (5)
where $\lambda > 0$ and $t$ is real. The normalization on the right-hand side of Equation (5) is chosen such that $\|\psi_{\lambda,t}\| = \|\psi\|$ for all $\lambda, t$, with $1/\sqrt{\lambda}$ as the normalizing term.
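For intuition, the following small numerical check (an illustrative sketch, not taken from the paper) verifies that the Haar mother wavelet satisfies the zero-mean and unit-norm conditions of Equations (2) and (3):

```python
import numpy as np

u = np.linspace(0.0, 1.0, 100_000, endpoint=False)
# Haar wavelet: +1 on [0, 0.5), -1 on [0.5, 1), 0 elsewhere.
psi = np.where(u < 0.5, 1.0, -1.0)
du = u[1] - u[0]

print(np.sum(psi) * du)       # ~0.0, Equation (2): zero mean
print(np.sum(psi ** 2) * du)  # ~1.0, Equation (3): unit L2 norm
```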

3.2.3. Wavelet Multiresolution Analysis

Our proposed scheme connects the depthwise separable convolution with wavelet multiresolution analysis to achieve filtering and downsampling. The wavelet multiresolution analysis algorithm formulates pointwise convolution and pooling within the depthwise separable convolutional neural network as filtering and downsizing. The proposed scheme executes the pooling operation by carrying out a four-level decomposition of the two-dimensional discrete wavelet transform, as depicted in Figure 7. The structure of the wavelet multiresolution analysis consists of two filter banks of high-pass and low-pass filters, a scaling factor of 2 for the downsampling operation, and the generated detail and approximate components. Equations (6) and (7) depict the multiresolution analysis of the wavelet transform for the scaling and wavelet functions, respectively.
$W_{\psi}(k+1, m) = \left. h_{\psi}(-j) \star W_{\psi}(k, j) \right|_{j = 2m},\; m \ge 0$   (6)

$W_{\Psi}(k+1, m) = \left. h_{\Psi}(-j) \star W_{\Psi}(k, j) \right|_{j = 2m},\; m \ge 0$   (7)
where $W_{\psi}$ and $W_{\Psi}$ are the approximate and detail components, respectively. The scaling (approximation) function is denoted by $\psi$, and $\Psi$ represents the detail (wavelet) function. $h_{\psi}(-n)$ represents the time-reversed scaling filter, while the wavelet filter is denoted by $h_{\Psi}(j)$. The filter variable is denoted as $n$, whereas the resolution scale is depicted as $k$. First, the discrete wavelet transform (DWT) is applied on the rows, and second, it is applied on the columns to obtain the detail and approximate sub-bands. LH, HL, and HH are the sub-bands of the detail component at every level of decomposition, while LL is the approximate sub-band at the highest level of the decomposition analysis.
Subsequently, the fourth-level sub-bands are used to recapture the image characteristics after conducting the fourth-level decomposition. Using the inverse wavelet transform, which is based on the inverse DWT (IDWT), the image characteristics are pooled by a factor of 2, as depicted in Equation (8):
$W_{\psi}(k, m) = \left. \left[ h_{\psi}(j) \star W_{\psi}(k+1, j) + h_{\Psi}(j) \star W_{\Psi}(k+1, j) \right] \right|_{j = 2m},\; m \ge 0$   (8)
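The following PyWavelets snippet (an illustrative sketch, not the paper's implementation) shows the behavior these equations describe: one 2D DWT level halves each spatial dimension, so the LL sub-band can serve as a pooled feature map while LH, HL, and HH retain the detail that max pooling would discard, and the inverse transform of Equation (8) recovers the input.

```python
import numpy as np
import pywt

feature_map = np.random.rand(64, 64).astype(np.float32)

# Single-level 2D DWT: LL is the approximation, (LH, HL, HH) the details.
LL, (LH, HL, HH) = pywt.dwt2(feature_map, "haar")
print(LL.shape)  # (32, 32): downsampled by a factor of 2, like pooling

# Four-level decomposition, as in the multiresolution analysis module.
coeffs = pywt.wavedec2(feature_map, "haar", level=4)
print(coeffs[0].shape)  # (4, 4): LL sub-band after four levels

# Inverse DWT recovers the input up to numerical precision (Equation (8)).
recon = pywt.idwt2((LL, (LH, HL, HH)), "haar")
print(np.allclose(recon, feature_map, atol=1e-5))  # True
```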

4. Results

4.1. Experimental Details

The WMR-DepthwiseNet is composed of two segments: the first segment is the depthwise separable convolution block, with each layer of the depthwise separable convolution module densely connected. The second segment is the wavelet multiresolution analysis module connected to the depthwise separable convolution block with a concatenation channel connection linking each subsequent layer with all the prior layers in order to avoid the loss of input information and positional details as the information moves along the network. Many state-of-the-art models, such as ResNet and other residual models, have adopted similar techniques and achieved good results in several machine vision problems, but a great amount of computational resources is needed due to the huge number of parameters.
WMR-DepthwiseNet processes the input image through depthwise separable convolution layers. Precisely, the multiresolution analysis decomposes the input image via the low-pass and high-pass filters and concatenates the decomposed images into the depthwise separable convolution block channel-wise. The connection channels are executed using 1 × 1 pointwise convolutions. Finally, global average pooling is utilized to obtain a vectorized feature from the final output of the depthwise separable convolution block before feeding it to the 1 × 1 convolution, followed by the classifier for identification. In this work, we utilized a 1 × 1 convolution instead of the conventional fully connected layer.
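A minimal sketch of this classification head follows; the tensor shapes and class count are illustrative assumptions, with the 1 × 1 convolution taking the place of a dense layer as described above.

```python
import tensorflow as tf
from tensorflow.keras import layers

def classification_head(x, num_classes=12):
    # Vectorize the final feature maps with global average pooling.
    x = layers.GlobalAveragePooling2D()(x)  # (batch, channels)
    x = layers.Reshape((1, 1, -1))(x)       # restore spatial dims for conv
    # 1x1 convolution acts as the classifier instead of a dense layer.
    x = layers.Conv2D(num_classes, 1)(x)    # (batch, 1, 1, num_classes)
    x = layers.Flatten()(x)
    # Softmax for the multiclass task; a sigmoid is used for the binary task.
    return layers.Softmax()(x)
```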

4.2. Experimental Setup

To investigate the performance of our proposed model in screening COVID-19, we collected public datasets of both CXR and CT images from several open sources [45,46,47,48]. Since it is challenging to collect the different pneumonia-related illnesses, especially COVID-19 cases, from one data source, we put a dataset together from different open sources.
However, the viral pneumonia category consists of 2983 CXR scans, which is the smallest among the CXR categories. As a result, we selected 2000 scans from each category for this study, bringing the total to 24,000 CXR images, as presented in Table 1. Since the amount of CXR linked with each class is balanced, the dataset is split into three portions: the training partition holds 70% of the scans, the validation partition 20%, and the test partition 10%. In a similar manner, 1230 CT scans were selected per category to form balanced classes with the same split ratio, as presented in Table 2. During feature extraction, the model is trained on the 70% training set and validated simultaneously on the 20% validation set. The remaining 10% of the dataset is used to test the model's performance.
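For concreteness, the stratified 70/20/10 split can be sketched as follows (illustrative code with synthetic stand-ins for the image lists, not the paper's pipeline):

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Synthetic stand-ins: 24,000 samples across 12 balanced classes.
X = np.arange(24_000)
y = np.repeat(np.arange(12), 2_000)

X_train, X_rest, y_train, y_rest = train_test_split(
    X, y, test_size=0.30, stratify=y, random_state=42)        # 70% train
X_val, X_test, y_val, y_test = train_test_split(
    X_rest, y_rest, test_size=1 / 3, stratify=y_rest, random_state=42)
print(len(X_train), len(X_val), len(X_test))  # 16800 4800 2400
```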
We utilized a dropout of 0.5 to avoid overfitting. An Adam optimizer with a learning rate of 1 × 10⁻⁴ is used to train the proposed model for 30 epochs with a batch size of 32. We trained our proposed WMR-DepthwiseNet on an NVIDIA GTX 1080, and Keras is used for the construction of the WMR-DepthwiseNet scheme. The loss function utilized in this work is the cross-entropy presented in Equations (9) and (10).
$CE_{loss} = -\sum_{i=1}^{12} y_i \log(p_i)$   (9)

where $i$ indexes the 12 class categories, $y_i$ denotes the class label, and $p_i$ is the predicted class probability.

$CE_{loss} = -\sum_{i=1}^{2} y_i \log(p_i)$   (10)

where $i$ indexes the 2 class categories, $y_i$ denotes the class label, and $p_i$ is the predicted class probability.
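Under the stated settings, the training configuration can be sketched in Keras as follows. The backbone here is a hypothetical placeholder standing in for the WMR-DepthwiseNet graph; only the optimizer, loss, dropout, and schedule reflect the text, and the input size is an assumption.

```python
import tensorflow as tf
from tensorflow.keras import layers

# Placeholder stand-in for the WMR-DepthwiseNet graph (illustrative only).
inputs = layers.Input((224, 224, 3))          # assumed input resolution
x = layers.Conv2D(32, 3, padding="same", activation="relu")(inputs)
x = layers.GlobalAveragePooling2D()(x)
x = layers.Dropout(0.5)(x)                    # dropout of 0.5, as stated
outputs = layers.Dense(12, activation="softmax")(x)
model = tf.keras.Model(inputs, outputs)

model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),
    loss="categorical_crossentropy",  # Equation (9); Eq. (10) for binary
    metrics=["accuracy"],
)
# model.fit(train_ds, validation_data=val_ds, epochs=30, batch_size=32)
```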

5. Evaluation

In this section, we present an ablation study of the structural configuration of our proposed model with different depthwise bottleneck modules. We selected a few pretrained models and compared them with our proposed network in terms of classification performance using the same dataset; we only fine-tuned the last layer to correspond to the number of classes in our dataset. Another study was conducted to compare our proposed network with several state-of-the-art COVID-19 image-based screening methods.
In order to verify the effectiveness of our proposed model, we compared our designed WMR-DepthwiseNet with up-to-date models. For a fair comparison, we ran four state-of-the-art COVID-19 methods on the same dataset. From all indications, our proposed model outperforms the up-to-date methods and the deep learning pretrained models with a promising performance. The evaluation criteria adopted as the metrics to evaluate the diagnostic performance of our proposed WMR-DepthwiseNet are as follows: accuracy (ACC), precision (PRE), sensitivity (SEN), specificity (SPE), area under the curve (AUC), and F1-score.
$F1 = \frac{2 \times Precision \times Recall}{Precision + Recall}$

$Accuracy = \frac{TP + TN}{TP + TN + FP + FN}$

$Sensitivity = \frac{TP}{TP + FN}$

$Specificity = \frac{TN}{TN + FP}$
where $TP$, $TN$, $FP$, and $FN$ indicate the counts of true positives, true negatives, false positives, and false negatives, respectively.
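These definitions translate directly into code; the following is an illustrative helper (the counts in the example call are hypothetical, not results from the paper):

```python
def classification_metrics(tp: int, tn: int, fp: int, fn: int) -> dict:
    """Compute the listed metrics from a binary confusion matrix."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)  # also called sensitivity
    return {
        "accuracy": (tp + tn) / (tp + tn + fp + fn),
        "sensitivity": recall,
        "specificity": tn / (tn + fp),
        "precision": precision,
        "f1": 2 * precision * recall / (precision + recall),
    }

# Example with hypothetical counts:
print(classification_metrics(tp=90, tn=85, fp=5, fn=10))
```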

5.1. Ablation Study

First, we performed an ablation study on different configurations of our proposed WMR-DepthwiseNet with different depthwise separable convolution bottlenecks. In particular, we compared the following architectures.
  • WMR-DepthwiseNet-A: 3 × (bn 3 × 3) + 5 × (bn 5 × 5): This network employs 3 × bottleneck modules of 3 × 3 depthwise separable convolution and 5 × bottleneck modules of 5 × 5 depthwise separable convolution.
  • WMR-DepthwiseNet-B: 3 × (bn 3 × 3) + 6 × (bn 5 × 5): This network employs 3 × bottleneck modules of 3 × 3 depthwise separable convolution and 6 × bottleneck modules of 5 × 5 depthwise separable convolution.
  • WMR-DepthwiseNet-C: 3 × (bn 3 × 3) + 7 × (bn 5 × 5): This network employs 3 × bottleneck modules of 3 × 3 depthwise separable convolution and 7 × bottleneck modules of 5 × 5 depthwise separable convolution.
  • WMR-DepthwiseNet-D: 3 × (bn 3 × 3) + 8 × (bn 5 × 5): This network employs 3 × bottleneck modules of 3 × 3 depthwise separable convolution and 8 × bottleneck modules of 5 × 5 depthwise separable convolution.
The experimental results of the ablation study are summarized in Table 4 and Table 5. At first, we evaluated our proposed network using the same dataset to examine the effect of different depthwise separable bottleneck modules on the performance of the models. The number of 5 × 5 depthwise separable convolution bottlenecks was varied among 5×, 6×, 7×, and 8×.
From all indications, the WMR-DepthwiseNet-D with 3 × (bn 3 × 3) + 8 × (bn 5 × 5) achieved the highest performance across all the metrics. WMR-DepthwiseNet-A with 3 × (bn 3 × 3) + 5 × (bn 5 × 5) achieved the lowest scores of 92.71% sensitivity on the CXR dataset, as presented in Table 4, and 91.46% sensitivity on the CT dataset, as presented in Table 5. However, an average increment of 4.38% was achieved on both the CXR and CT datasets when 8 × (bn 5 × 5) depthwise separable bottleneck modules were adopted, as shown in Table 4 and Table 5. It is worth mentioning that the WMR-DepthwiseNet-D with 3 × (bn 3 × 3) + 8 × (bn 5 × 5) preserves more spatial details and hence improves model performance. To this end, our combined depthwise separable convolution network with wavelet multiresolution analysis module, called WMR-DepthwiseNet-D, achieves the best result across all evaluation metrics using both the CXR and CT datasets, as represented in Figure 8 and Figure 9. More to the point, the strategy of combining the depthwise separable convolution network with wavelet multiresolution analysis enhances the performance of the WMR-DepthwiseNet by a wide margin.

5.2. COVID-19 Classification Evaluation

We compare the findings of our proposed model with well-known pretrained CNN models and up-to-date COVID-19 screening methods. According to Table 6 and Table 7, our proposed WMR-DepthwiseNet outperforms all the selected pretrained models, yielding state-of-the-art results on the same CXR and CT datasets. Our proposed approach yields 98.46% sensitivity, 97.99% specificity, 98.63% accuracy, 98.72% AUC, 98.87% precision, and 98.92% F1-score on the CXR dataset, as shown in Table 6. Table 7 shows that our model yields 97.78% sensitivity, 96.22% specificity, 96.83% accuracy, 97.61% AUC, 97.02% precision, and 97.37% F1-score on the CT dataset.
Figure 8 and Figure 9 illustrate the stability and convergence of the proposed WMR-DepthwiseNet-D in the test curves of the accuracy graphs for the CXR and CT datasets, respectively. The performance of our proposed model can also be seen in the ROC-AUC curves illustrated in Figure 10 and Figure 11 for the CXR and CT datasets, respectively. The precision–recall curve is another important performance metric we adopted in our comparison, as presented in Figure 12 and Figure 13. From all indications, our proposed model outperforms all the other models across all the evaluation metrics. Owing to our depthwise separable convolution network with wavelet multiresolution analysis module, the model achieves 98.63% accuracy on the CXR dataset and 96.83% accuracy on the CT dataset. This result demonstrates the efficacy of integrating the wavelet multiresolution analysis module with the depthwise separable convolution network to modify the learning process. Our model achieves 98.72% AUC, which is significantly higher than the other approaches. These findings confirm the benefits of the wavelet multiresolution analysis module in our proposed model. Our proposed approach also achieves the highest specificity score of 97.99%, demonstrating the critical function of the WMR-DepthwiseNet.
As several attempts at COVID-19 classification have been made, we now compare the findings of our proposed WMR-DepthwiseNet with previous up-to-date COVID-19 screening methods. For detecting COVID-19 from CT exams, Chen et al. [11] used a CNN-based U-Net++ to extract attributes from high-resolution CT exams, reporting an accuracy of 95.2%. Shi et al. [41] used an infection-size-based random forest approach to obtain region-specific attributes from CT exams for COVID-19 classification, achieving 89.4% accuracy.
In separating COVID-19 from other viral pneumonia, the works of Xu et al. [52] and Wang et al. [33] are quite impressive, achieving overall accuracies of 86.7% and 82.9%, respectively. However, their biggest flaw was that they only calculated a few indicators, which was insufficient to adequately represent the classification's overall results. Song et al. [42] formulated a deep learning scheme called DeepPneumonia to distinguish COVID-19 instances using CT exams, achieving 86.1% accuracy. Using CT scans, Wang et al. [12] detected COVID-19 using a CNN, achieving 92.36% accuracy. Jin et al. [53] utilized a logistic regression scheme for detecting COVID-19; the authors claimed that their method achieved 96.5% accuracy. Jin et al. [54] formulated an AI-based scheme for detecting COVID-19, achieving 95.7% accuracy. Barstugan et al. [40] formulated an ML scheme for classifying COVID-19 using CT scans and achieved 90.7% accuracy. Table 8 presents a summary of the aforementioned methods in comparison with our proposed scheme. Tabrizchi et al. [55] suggested an enhanced densely connected convolutional network (DenseNet) technique for three-class classification based on transfer learning (TL). Their model obtained an overall accuracy of 95.7%, sensitivity of 87.4%, and specificity of 95.7%; the authors attributed the high performance of their TL model to the classifier's robustness in dealing with imbalanced data classes and fewer data. Tabrizchi et al. [56] conducted a review of previously published methods and used artificial-intelligence-based image diagnosis methods to detect coronavirus infection with zero or near-zero false positives and false negatives. The goal of their research was to identify the most accurate COVID-19 detection method among AI approaches, including machine learning (ML), artificial neural networks (ANN), and ensemble learning (EL). Their machine learning model with an SVM classifier surpassed the other models, with an accuracy of 99.2%, precision of 98.2%, and recall of 100%.
In another experiment, we compared the formulated WMR-DepthwiseNet with four selected COVID-19 models using the same dataset for fairness. Cov-Net [37] and DeCoVNet [53] show quite impressive results, followed by COVID-Net [33]. However, our proposed model outperforms the aforementioned COVID-19 models, including DeepPneumonia [42], which had previously yielded up-to-date results, and the other models, as depicted in Table 9 and Table 10, using the same CXR and CT datasets. Though the complex lung structures and indistinct infection areas pose unusual challenges, our proposed framework still achieves accurate results, demonstrating its robust strengths.
The proposed WMR-DepthwiseNet has competitive classification efficiency for COVID-19 recognition. The underlying explanation may be that the proposed WMR-DepthwiseNet can better utilize the extracted features of high-level discriminative representation. It is worth noting that the depthwise separable convolution network with wavelet multiresolution analysis module can handle small-scale data while using less computing power than conventional deep-learning-based approaches. To further examine the performance of the suggested scheme with different hyper-parameter tuning, we present a statistical report in Table 11 showing the results yielded by the formulated scheme using the CXR and CT datasets. With a learning rate of 0.1 and 25% dropout using the SGD optimizer, the model obtained the lowest score of 88.18% accuracy on the CXR dataset. For the CT dataset, the model obtained the lowest score of 89.33% accuracy using the RMSprop optimizer with a learning rate of 0.01 and 25% dropout. Utilizing 0.50 dropout and a learning rate of 0.0001, the model obtained the best accuracy scores of 98.63% and 96.83% with the Adam optimizer on the CXR and CT datasets, respectively.

5.3. Cross-Dataset Evaluation

Despite the outstanding results recorded by the proposed model, we also present a cross-dataset evaluation to investigate whether there is any discrepancy in the results obtained. In this experiment, we study the influence of training a model on one data distribution and evaluating it on another. This situation is more realistic, because training a model with images from all available sensors, environments, and persons is nearly impossible. We maintain the same manner of data split for training, validation, and test, with 50%, 25%, and 25%, respectively, as shown in Table 12 and Table 13. The COVID-CXR scans dataset [47] is utilized for training the system, while the COVID-CXR scans dataset [58] is utilized for testing. Similarly, for the CT data, the COVID-CT scans dataset [48] is utilized for training the system, while the COVID-CT scans dataset [59] is utilized for testing. We ensured that no images from the training dataset source are present in the test dataset source. We adopted the well-known datasets reported in [47,48,58,59] for the experiment because they have been used by various researchers in the literature.
We emphasize that the training images used to train the model and the test images are drawn from different distributions. Other test designs were also investigated, such as employing the COVID-CXR training partition as a test set and combining both COVID-CXR partitions into a bigger test set (see Table 12). We employed a similar approach for the COVID-CT dataset (see Table 13). We also examined the inverse scenario, in which the train and test sets from the COVID-CXR dataset [58] are used for training and the train set from the COVID-CXR dataset [47] is used for testing. Similarly, the train and test sets from the COVID-CT dataset [59] are used for training and the train set from the COVID-CT dataset [48] is used for testing.
When we compare the cross-dataset assessment to the intra-dataset evaluation, the model performance did not change much, which indicates that there is no significant bias in the results reported using the intra-dataset protocol. Table 14 presents the results of the cross-dataset evaluation of the proposed model on the CXR dataset, while Table 15 presents the results on the CT dataset. We believe that the slight changes in the performance of our model can be attributed to variation in data acquisition. Images from distinct datasets can be taken with different equipment and image sensors, causing relevant features in the images to change, yet the proposed model performed satisfactorily.

6. Discussion

It is important to make some remarks about the proposed depthwise separable convolution network with wavelet multiresolution analysis module. Manual detection of COVID-19 by an expert utilizing CXR and CT can have a high sensitivity but a low specificity of 25%. This inadequate specificity leads to false-positive predictions, which lead to ineffective therapy and wasted money. Our proposed WMR-DepthwiseNet has a high specificity of 96.22%, which can be used to help expert radiologists reduce the number of false-positive instances reported.
More importantly, the stated result in terms of the receiver operating characteristic (ROC) can aid expert radiologists in achieving a trade-off between specificity and sensitivity by indicating the overall accuracy, as illustrated in Figure 10 and Figure 11. The ROC analysis maximizes the true-positive rate while minimizing the false-positive rate. From the ROC curve, it is obvious that the formulated model outperforms the other algorithms, with an overall AUC of 98.72% on the CXR dataset and 97.61% on the CT dataset.
More interestingly, the precision–recall curve also shows that our proposed WMR-DepthwiseNet outperforms the other models, with an average precision of 98.69% on the CXR dataset and 97.02% on the CT dataset. The precision–recall graph demonstrates the trade-off between precision and sensitivity. It is obvious that the model performs better than the other up-to-date COVID-19 models, as shown in Figure 12 and Figure 13, which means our model attains higher precision together with higher sensitivity.
Furthermore, some comments on the computational cost and model complexity of WMR-DepthwiseNet are necessary. We combined the depthwise separable convolution network with the wavelet multiresolution analysis module for feature extraction. We adopted wavelet pooling instead of the usual max-pooling operator for the downsizing operation, which reduced model complexity and computation time. Another intriguing feature of our WMR-DepthwiseNet is its capacity to preserve high-level features without loss of spatial details. In terms of computing cost, the formulated algorithm was trained on an NVIDIA GTX 1080 and implemented with the Keras framework. In comparison to earlier up-to-date models, the complexity of the proposed scheme is much reduced, with fewer parameters as a result of the wavelet pooling strategy adopted. In all the assessment metrics, the proposed WMR-DepthwiseNet outperforms its counterparts, as depicted in Table 8. Our proposed strategy consistently produces better performance in terms of SEN, SPE, ACC, AUC, PRE, and F1-score. The explanation for this is that our proposed WMR-DepthwiseNet learns high-level discriminative details. Furthermore, the WMR-DepthwiseNet outperforms up-to-date approaches with better classification results.

7. Conclusions

We propose a CNN called the depthwise separable convolution network with wavelet multiresolution analysis module (WMR-DepthwiseNet) in this paper, with the objective of addressing the issue of low performance in COVID-19 screening from radiograph (CXR and CT) images, as well as the loss of spatial details during feature extraction. We implemented a depthwise separable convolution network with wavelet multiresolution analysis and discrete wavelet transform (DWT) pooling to replace the conventional max-pooling operation as a strategy to avoid the loss of spatial details, preserve high-level features, and learn distinctive representations for COVID-19 classification. We have demonstrated that our proposed model is effective and converges very fast with better classification performance. By a broad margin, our proposed approach outshone previous up-to-date COVID-19 diagnostic strategies. The limitation of our work is that we did not consider the class imbalance problem, which happens to be the case for newly discovered diseases: a lack of sufficient data usually leads to uneven class distributions. In our future work, we will therefore also focus on the class imbalance problem. We believe that a graph-based convolutional neural network can improve the quality of the results; therefore, part of our future work will consider the possibility of implementing a graph depthwise separable convolutional network.

Author Contributions

H.N.M., G.U.N.: Conceptualization, Methodology, Resources, Writing—original draft, Software, Validation, Visualization, Writing—review and editing. J.L.: Funding acquisition, Project administration, Supervision. S.N., M.A.H., J.J., I.A.C.: Data curation, Software, Formal analysis, Investigation, Validation, Writing—review and editing. All authors have read and agreed to the published version of the manuscript.

Funding

There is no external funding.

Institutional Review Board Statement

Ethical review and approval were waived for this study because it only makes use of publicly available data.

Informed Consent Statement

Not applicable.

Data Availability Statement

In this study, we collected chest X-ray data of different pneumonia-related illnesses from several open sources. We collected 3616 COVID-19 CXR scans from the COVID-19 radiography database. We collected 3029 scans of bacterial pneumonia, 8851 scans of healthy patients, and 2983 scans of viral pneumonia from the Kaggle database of the Radiological Society of North America (RSNA). Moreover, we collected 74,999 scans of other pneumonia-related illnesses from the National Institutes of Health (NIH). https://www.kaggle.com/tawsifurrahman/covid19-radiography-database (accessed on 1 June 2021). https://www.kaggle.com/c/rsna-pneumonia-detection-challenge/data (accessed on 2 June 2021). https://www.kaggle.com/nih-chest-xrays/data (accessed on 3 June 2021). https://github.com/ari-dasci/OD-covidgr (accessed on 2 June 2021). https://github.com/UCSD-AI4H/COVID-CT (accessed on 4 June 2021).

Conflicts of Interest

The authors declare no conflict of interest regarding this publication.

References

  1. WHO. WHO Coronavirus (COVID-19) Dashboard With Vaccination Data. Available online: https://covid19.who.int/ (accessed on 5 January 2022).
  2. Wang, W.; Xu, Y.; Gao, R.; Lu, R.; Han, K.; Wu, G.; Tan, W. Detection of SARS-CoV-2 in different types of clinical specimens. JAMA 2020, 323, 1843–1844.
  3. Ai, T.; Yang, Z.; Hou, H.; Zhan, C.; Chen, C.; Lv, W.; Tao, Q.; Sun, Z.; Xia, L. Correlation of chest CT and RT-PCR testing for coronavirus disease 2019 (COVID-19) in China: A report of 1014 cases. Radiology 2020, 296, E32–E40.
  4. Chen, C.; Gao, G.; Xu, Y.; Pu, L.; Wang, Q.; Wang, L.; Wang, W.; Song, Y.; Chen, M.; Wang, L.; et al. SARS-CoV-2–positive sputum and feces after conversion of pharyngeal samples in patients with COVID-19. Ann. Intern. Med. 2020, 172, 832–834.
  5. Hosseiny, M.; Kooraki, S.; Gholamrezanezhad, A.; Reddy, S.; Myers, L. Radiology perspective of coronavirus disease 2019 (COVID-19): Lessons from severe acute respiratory syndrome and Middle East respiratory syndrome. Am. J. Roentgenol. 2020, 214, 1078–1082.
  6. Ravishankar, H. Understanding the mechanisms of deep transfer learning for medical images. In Deep Learning and Data Labeling for Medical Applications; Springer: Berlin, Germany, 2016; pp. 188–196.
  7. Yu, Y.; Lin, H.; Meng, J.; Wei, X.; Guo, H.; Zhao, Z. Deep transfer learning for modality classification of medical images. Information 2017, 8, 91.
  8. Ulhaq, A.; Khan, A.; Gomes, D.; Pau, M. Computer vision for COVID-19 control: A survey. arXiv 2020, arXiv:2004.09420.
  9. Born, J.; Brändle, G.; Cossio, M.; Disdier, M.; Goulet, J.; Roulin, J.; Wiedemann, N. POCOVID-Net: Automatic detection of COVID-19 from a new lung ultrasound imaging dataset (POCUS). arXiv 2020, arXiv:2004.12084.
  10. Bell, D.J. COVID-19. Available online: https://radiopaedia.org/articles/covid-19-3 (accessed on 17 May 2021).
  11. Chen, J.; Wu, L.; Zhang, J.; Zhang, L.; Gong, D.; Zhao, Y.; Chen, Q.; Huang, S.; Yang, M.; Yang, X.; et al. Deep learning-based model for detecting 2019 novel coronavirus pneumonia on high-resolution computed tomography. Sci. Rep. 2020, 10, 19196.
  12. Wang, S.; Kang, B.; Ma, J.; Zeng, X.; Xiao, M.; Guo, J.; Cai, M.; Yang, J.; Li, Y.; Meng, X.; et al. A deep learning algorithm using CT images to screen for Corona Virus Disease (COVID-19). Eur. Radiol. 2021, 31, 6096–6104.
  13. Livingston, E.; Desai, A.; Berkwits, M. Sourcing personal protective equipment during the COVID-19 pandemic. JAMA 2020, 323, 1912–1914.
  14. Wong, H.Y.F.; Lam, H.Y.S.; Fong, A.H.T.; Leung, S.T.; Chin, T.W.Y.; Lo, C.S.Y.; Lui, M.M.S.; Lee, J.C.Y.; Chiu, K.W.H.; Chung, T.W.H.; et al. Frequency and distribution of chest radiographic findings in patients positive for COVID-19. Radiology 2020, 296, E72–E78.
  15. Nneji, G.U.; Cai, J.; Deng, J.; Nahar, S.; Mgbejime, G.T.; James, E.C.; Woldeyes, S.K. A Dual Weighted Shared Capsule Network for Diabetic Retinopathy Fundus Classification. In Proceedings of the 2021 International Conference on High Performance Big Data and Intelligent Systems (HPBD & IS), Macau, China, 5–7 December 2021; pp. 297–302.
  16. Nneji, G.U.; Cai, J.; Jianhua, D.; Monday, H.N.; Ejiyi, C.J.; James, E.C.; Mgbejime, G.T.; Oluwasanmi, A. A Super-Resolution Generative Adversarial Network with Siamese CNN Based on Low Quality for Breast Cancer Identification. In Proceedings of the 4th International Conference on Pattern Recognition and Artificial Intelligence, Yibin, China, 20–22 August 2021; pp. 218–223.
  17. Monday, H.N.; Li, J.P.; Nneji, G.U.; Oluwasanmi, A.; Mgbejime, G.T.; Ejiyi, C.J.; Chikwendu, I.A.; James, E.C. Improved Convolutional Neural Multi-Resolution Wavelet Network for COVID-19 Pneumonia Classification. In Proceedings of the 4th International Conference on Pattern Recognition and Artificial Intelligence, Yibin, China, 20–22 August 2021; pp. 267–273.
  18. Monday, H.N.; Li, J.P.; Nneji, G.U.; James, E.C.; Chikwendu, I.A.; Ejiyi, C.J.; Oluwasanmi, A.; Mgbejime, G.T. The Capability of Multi Resolution Analysis: A Case Study of COVID-19 Diagnosis. In Proceedings of the 4th International Conference on Pattern Recognition and Artificial Intelligence, Yibin, China, 20–22 August 2021; pp. 236–242.
  19. Monday, H.N.; Li, J.; Nneji, G.U.; Nahar, S.; Hossin, M.A.; Jackson, J.; Ejiyi, C.J. COVID-19 Diagnosis from Chest X-ray Images Using a Robust Multi-Resolution Analysis Siamese Neural Network with Super-Resolution Convolutional Neural Network. Diagnostics 2022, 12, 741.
  20. Nneji, G.U.; Cai, J.; Monday, H.N.; Hossin, M.A.; Nahar, S.; Mgbejime, G.T.; Deng, J. Fine-Tuned Siamese Network with Modified Enhanced Super-Resolution GAN Plus Based on Low-Quality Chest X-ray Images for COVID-19 Identification. Diagnostics 2022, 12, 717.
  21. Nneji, G.U.; Cai, J.; Deng, J.; Monday, H.N.; James, E.C.; Ukwuoma, C.C. Multi-Channel Based Image Processing Scheme for Pneumonia Identification. Diagnostics 2022, 12, 325.
  22. Shen, B.; Hoshmand-Kochi, M.; Abbasi, A.; Glass, S.; Jiang, Z.; Singer, A.J.; Thode, H.C.; Li, H.; Hou, W.; Duong, T.Q. Initial Chest Radiograph Scores Inform COVID-19 Status, Intensive Care Unit Admission and Need for Mechanical Ventilation. Clin. Radiol. 2021, 76, 473.e1–473.e7.
  23. Cohen, J.P.; Morrison, P.; Dao, L.; Roth, K.; Duong, T.Q.; Ghassemi, M. COVID-19 Image Data Collection: Prospective Predictions Are the Future. arXiv 2020, arXiv:006.11988.
  24. Wong, A.; Lin, Z.Q.; Wang, L.; Chung, A.G.; Shen, B.; Abbasi, A.; Hoshmand-Kochi, M.; Duong, T.Q. Towards Computer-Aided Severity Assessment via Deep Neural Networks for Geographic and Opacity Extent Scoring of SARS-CoV-2 Chest X-rays. Sci. Rep. 2021, 11, 9315.
  25. Nneji, G.U.; Cai, J.; Deng, J.; Monday, H.N.; James, E.C.; Lemessa, B.D.; Yutra, A.Z.; Leta, Y.B.; Nahar, S. COVID-19 Identification Using Deep Capsule Network: A Perspective of Super-Resolution CNN on Low-Quality CXR Images. In Proceedings of the 7th International Conference on Communication and Information Processing (ICCIP 2021), Beijing, China, 16–18 December 2021; pp. 96–102.
  26. Monday, H.N.; Li, J.P.; Nneji, G.U.; Yutra, A.Z.; Lemessa, B.D.; Nahar, S.; James, E.C.; Haq, A.U. The Capability of Wavelet Convolutional Neural Network for Detecting Cyber Attack of Distributed Denial of Service in Smart Grid. In Proceedings of the 2021 18th International Computer Conference on Wavelet Active Media Technology and Information Processing (ICCWAMTIP), Chengdu, China, 17–19 December 2021; pp. 413–418.
  27. Zhang, J.; Xie, Y.; Pang, G.; Liao, Z.; Verjans, J.; Li, W.; Sun, Z.; He, J.; Li, Y.; Shen, C.; et al. Viral Pneumonia Screening on Chest X-rays Using Confidence-Aware Anomaly Detection. IEEE Trans. Med. Imaging 2020, 40, 879–890.
  28. Li, X.; Ge, P.; Zhu, J.; Li, H.; Graham, J.; Singer, A.; Richman, P.S.; Duong, T.Q. Deep learning prediction of likelihood of ICU admission and mortality in COVID-19 patients using clinical variables. PeerJ 2020, 8, e10337.
  29. Hou, W.; Zhao, Z.; Chen, A.; Li, H.; Duong, T.Q. Machining learning predicts the need for escalated care and mortality in COVID-19 patients from clinical variables. Int. J. Med. Sci. 2021, 18, 1739–1745.
  30. Nneji, G.U.; Cai, J.; Jianhua, D.; Monday, H.N.; Chikwendu, I.A.; Oluwasanmi, A.; James, E.C.; Mgbejime, G.T. Enhancing Low Quality in Radiograph Datasets Using Wavelet Transform Convolutional Neural Network and Generative Adversarial Network for COVID-19 Identification. In Proceedings of the 2021 4th International Conference on Pattern Recognition and Artificial Intelligence, Xiamen, China, 24–26 September 2021; pp. 146–151.
  31. Hemdan, E.E.-D.; Shouman, M.A.; Karar, M.E. Covidx-net: A framework of deep learning classifiers to diagnose covid-19 in X-ray images. arXiv 2020, arXiv:2003.11055.
  32. Chowdhury, M.E.; Rahman, T.; Khandakar, A.; Mazhar, R.; Kadir, M.A.; Mahbub, Z.B.; Islam, K.R.; Khan, M.S.; Iqbal, A.; Al Emadi, N.; et al. Can AI help in screening Viral and COVID-19 pneumonia? arXiv 2020, arXiv:2003.13145.
  33. Wang, L.; Lin, Z.Q.; Wong, A. Covid-net: A tailored deep convolutional neural network design for detection of covid-19 cases from chest X-ray images. Sci. Rep. 2020, 10, 19549.
  34. Polsinelli, M.; Cinque, L.; Placidi, G. A light cnn for detecting covid-19 from ct scans of the chest. Pattern Recognit. Lett. 2020, 140, 95–100.
  35. Singh, D.; Kumar, V.; Kaur, M. Classification of COVID-19 patients from chest CT images using multi-objective differential evolution–based convolutional neural networks. Eur. J. Clin. Microbiol. Infect. Dis. 2020, 39, 1379–1389.
  36. Ouyang, X. Dual-sampling attention network for diagnosis of COVID-19 from community acquired pneumonia. IEEE Trans. Med. Imaging 2020, 39, 2595–2605.
  37. Li, L. Using artificial intelligence to detect COVID-19 and community-acquired pneumonia based on pulmonary CT: Evaluation of the diagnostic accuracy. Radiology 2020, 296, E65–E71.
  38. Zheng, C.; Deng, X.; Fu, Q.; Zhou, Q.; Feng, J.; Ma, H.; Liu, W.; Wang, X. Deep learning-based detection for COVID-19 from chest CT using weak label. MedRxiv 2020.
  39. Maghdid, H.S.; Asaad, A.T.; Ghafoor, K.Z.; Sadiq, A.S.; Mirjalili, S.; Khan, M.K. Diagnosing COVID-19 pneumonia from X-ray and CT images using deep learning and transfer learning algorithms. Multimodal Image Exploit. Learn. 2021, 11734, 117340E.
  40. Barstugan, M.; Ozkaya, U.; Ozturk, S. Coronavirus (covid-19) classification using ct images by machine learning methods. arXiv 2020, arXiv:2003.09424.
  41. Shi, F.; Xia, L.; Shan, F.; Song, B.; Wu, D.; Wei, Y.; Yuan, H.; Jiang, H.; He, Y.; Gao, Y.; et al. Large-Scale Screening of COVID-19 from Community Acquired Pneumonia using Infection Size-Aware Classification. Phys. Med. Biol. 2021, 66, 065031.
  42. Song, Y.; Zheng, S.; Li, L.; Zhang, X.; Zhang, X.; Huang, Z.; Chen, J.; Wang, R.; Zhao, H.; Chong, Y.; et al. Deep learning enables accurate diagnosis of novel coronavirus (COVID-19) with CT images. IEEE/ACM Trans. Comput. Biol. Bioinform. 2021, 18, 2775–2780.
  43. Tang, Z. Severity assessment of coronavirus disease 2019 (COVID-19) using quantitative features from chest CT images. arXiv 2020, arXiv:2003.11988. [Google Scholar]
  44. Wynants, L.; Van Calster, B.; Collins, G.S.; Riley, R.D.; Heinze, G.; Schuit, E.; Bonten, M.M.; Dahly, D.L.; Damen, J.A.; Debray, T.P.; et al. Prediction models for diagnosis and prognosis of covid-19: Systematic review and critical appraisal. BMJ 2020, 369, m1328. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  45. RSNA Pneumonia Detection Challenge | Kaggle. Available online: https://www.kaggle.com/c/rsna-pneumonia-detection-challenge/data (accessed on 1 May 2021).
  46. NIH Chest X-rays | Kaggle. Available online: https://www.kaggle.com/nih-chest-xrays/data (accessed on 9 June 2021).
  47. Rahman, T.; Khandakar, A.; Qiblawey, Y.; Tahir, A.; Kiranyaz, S.; Kashem, S.B.A.; Islam, M.T.; Al Maadeed, S.; Zughaier, S.M.; Khan, M.S.; et al. Exploring the effect of image enhancement techniques on COVID-19 detection using chest X-ray images. Comput. Biol. Med. 2021, 132, 104319. [Google Scholar] [CrossRef] [PubMed]
  48. Soares, E.; Angelov, P.; Biaso, S.; Froes, M.H.; Abe, D.K. SARS-CoV-2 ct-scan dataset: A large dataset of real patients ct scans for sars-cov-2 identification. medRxiv 2020. [Google Scholar] [CrossRef]
  49. Hu, J.; Shen, L.; Sun, G. Squeeze-and-excitation networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–20 June 2018; pp. 7132–7141. [Google Scholar]
  50. Park, J.; Woo, S.; Lee, J.-Y.; Kweon, I.S. BAM: Bottleneck attention module. arXiv 2018, arXiv:1807.06514. [Google Scholar]
  51. Chen, L.; Zhang, H.; Xiao, J.; Nie, L.; Shao, J.; Liu, W.; Chua, T.S. SCA-CNN: Spatial and channel-wise attention in convolutional networks for image captioning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 5659–5667. [Google Scholar]
  52. Xu, X. Deep learning system to screen coronavirus disease 2019 pneumonia. arXiv 2020, arXiv:2002.09334. [Google Scholar] [CrossRef]
  53. Jin, C.; Chen, W.; Cao, Y.; Xu, Z.; Tan, Z.; Zhang, X.; Deng, L.; Zheng, C.; Zhou, J.; Shi, H.; et al. Development and evaluation of an AI system for COVID-19 diagnosis. Nat. Commun. 2020, 11, 5088. [Google Scholar] [CrossRef]
  54. Jin, S. AI-assisted CT imaging analysis for COVID-19 screening: Building and deploying a medical AI system in four weeks. medRxiv 2020. [Google Scholar] [CrossRef] [Green Version]
  55. Tabrizchi, H.; Mosavi, A.; Vamossy, Z.; Varkonyi-Koczy, A.R. Densely Connected Convolutional Networks (DenseNet) for Diagnosing Coronavirus Disease (COVID-19) from Chest X-ray Imaging. In Proceedings of the 2021 IEEE International Symposium on Medical Measurements and Applications (MeMeA) 2021, Lausanne, Switzerland, 23–25 June 2021; pp. 1–5. [Google Scholar]
  56. Tabrizchi, H.; Mosavi, A.; Szabo-Gali, A.; Felde, I.; Nadai, L. Rapid COVID-19 Diagnosis Using Deep Learning of the Computerized Tomography Scans. In Proceedings of the 2020 IEEE 3rd International Conference and Workshop in Óbuda on Electrical and Power Engineering (CANDO-EPE), Online, 18–19 November 2020. [Google Scholar]
  57. Wang, X.; Deng, X.; Fu, Q.; Zhou, Q.; Feng, J.; Ma, H.; Liu, W.; Zheng, C. A Weakly-Supervised Framework for COVID-19 Classification and Lesion Localization From Chest CT. IEEE Trans. Med. Imaging 2020, 39, 2615–2625. [Google Scholar] [CrossRef] [PubMed]
  58. Tabik, S.; Gómez-Ríos, A.; Martín-Rodríguez, J.L.; Sevillano-García, I.; Rey-Area, M.; Charte, D.; Guirado, E.; Suárez, J.L.; Luengo, J.; Valero-González, M.A.; et al. COVIDGR Dataset and COVID-SDNet Methodology for Predicting COVID-19 Based on Chest X-Ray Images. IEEE J. Biomed. Health Inform. 2020, 24, 3595–3605. [Google Scholar] [CrossRef] [PubMed]
  59. Yang, X.; He, X.; Zhao, J.; Zhang, Y.; Zhang, S.; Xie, P. COVID-CT-dataset: A CT scan dataset about COVID-19. arXiv 2020, arXiv:2003.13865. [Google Scholar]
Figure 1. Data collection of chest X-ray images of different pneumonia-related illnesses including COVID-19.
Figure 2. Data collection of computed tomography (CT) images of COVID-19 and non-COVID-19.
Figure 3. Overall structure of our proposed WMR-DepthwiseNet.
Figure 4. Detailed structure of our proposed depthwise separable convolution module.
Figure 5. Transition of regular convolution to depthwise separable convolution module for both 3 × 3 and 5 × 5 depthwise convolutions.
Figure 6. Illustration of discrete wavelet transform operation for downsampling.
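As a concrete companion to Figure 6, the snippet below performs one level of 2-D discrete wavelet decomposition; the PyWavelets library and the Haar basis are our assumptions, since the figure does not fix an implementation. One decomposition level roughly halves each spatial dimension and yields one approximation and three detail subbands.

```python
# Minimal sketch of one DWT downsampling step (assumed: PyWavelets, Haar basis).
import numpy as np
import pywt

image = np.random.rand(224, 224)  # stand-in for a grayscale CXR/CT slice

# dwt2 returns the approximation subband and the three detail subbands
# (horizontal, vertical, diagonal); each is about half resolution (112 x 112).
LL, (LH, HL, HH) = pywt.dwt2(image, "haar")
print(LL.shape, LH.shape, HL.shape, HH.shape)
```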
Figure 7. (a)–(d) Detailed structure of the wavelet multiresolution analysis of four-level decomposition.
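Extending the single-level step above to the four-level multiresolution analysis of Figure 7, a multilevel transform could be obtained as follows (same library and basis assumptions):

```python
# Hedged sketch of a four-level multiresolution decomposition (assumed: PyWavelets).
import numpy as np
import pywt

image = np.random.rand(224, 224)
# wavedec2 returns [LL4, (details level 4), ..., (details level 1)].
coeffs = pywt.wavedec2(image, "haar", level=4)
print(coeffs[0].shape)  # coarsest approximation: (14, 14) for a 224 x 224 input
```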
Figure 8. Accuracy curves showing the performance of our proposed WMR-DepthwiseNet in comparison with some selected up-to-date COVID-19 models using the same CXR dataset.
Figure 9. Accuracy curves showing the performance of the formulated WMR-DepthwiseNet in comparison with some selected up-to-date COVID-19 models using the same CT dataset.
Figure 10. ROC-AUC curves of our proposed WMR-DepthwiseNet in comparison with some selected up-to-date models using the same CXR dataset.
Figure 11. ROC-AUC curves of our proposed WMR-DepthwiseNet in comparison with some selected up-to-date models using the same CT dataset.
Figure 12. Precision-Recall curves of the formulated WMR-DepthwiseNet in comparison with some selected up-to-date models using the same CXR dataset.
Figure 13. Precision-Recall curves of the formulated WMR-DepthwiseNet in comparison with some selected up-to-date models using the same CT dataset.
Table 1. Description of the chest X-ray (CXR) dataset showing different categories of pneumonia illnesses and the distribution of images per category as well as the number of selected images per category.

| Dataset | Category of Pneumonia | Data Count per Category | Selected No. of Data per Category | Training Set | Validation Set | Test Set |
|---|---|---|---|---|---|---|
| RSNA [45] | Bacteria | 3029 | 2000 | 1400 | 400 | 200 |
| | Viral | 2983 | 2000 | 1400 | 400 | 200 |
| | Healthy | 8851 | 2000 | 1400 | 400 | 200 |
| NIH [46] | Atelectasis | 4999 | 2000 | 1400 | 400 | 200 |
| | Cardiomegaly | 10,000 | 2000 | 1400 | 400 | 200 |
| | Consolidation | 10,000 | 2000 | 1400 | 400 | 200 |
| | Effusion | 10,000 | 2000 | 1400 | 400 | 200 |
| | Infiltration | 10,000 | 2000 | 1400 | 400 | 200 |
| | Mass | 10,000 | 2000 | 1400 | 400 | 200 |
| | Nodule | 10,000 | 2000 | 1400 | 400 | 200 |
| | Pneumothorax | 10,000 | 2000 | 1400 | 400 | 200 |
| Rahman et al. [47] | COVID-19 | 3616 | 2000 | 1400 | 400 | 200 |
| Total | | 93,627 | 24,000 | 16,800 | 4800 | 2400 |
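The selected images in Table 1 (and likewise Table 2 below) follow a 70/20/10 train/validation/test split: 1400/400/200 out of 2000 per CXR category. A minimal sketch of producing such a stratified split with scikit-learn is given below; the file lists, labels, and random seed are hypothetical placeholders, not the authors' pipeline.

```python
# Hedged sketch of the 70/20/10 per-category split used in Tables 1 and 2.
from sklearn.model_selection import train_test_split

def split_dataset(paths, labels, seed=42):
    # Carve out the 10% test portion first, then 400 of the remaining 1800
    # per category as validation; stratification preserves class proportions.
    x_tmp, x_test, y_tmp, y_test = train_test_split(
        paths, labels, test_size=0.10, stratify=labels, random_state=seed)
    x_train, x_val, y_train, y_val = train_test_split(
        x_tmp, y_tmp, test_size=400 / 1800, stratify=y_tmp, random_state=seed)
    return (x_train, y_train), (x_val, y_val), (x_test, y_test)
```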
Table 2. Description of the computed tomography (CT) dataset showing COVID-19 and non-COVID-19 categories including the distribution of images per category as well as the number of selected images per category.

| Dataset | Category | Data Count per Category | Selected No. of Data per Category | Training Set | Validation Set | Test Set |
|---|---|---|---|---|---|---|
| Soares et al. [48] | COVID-CT | 1252 | 1230 | 861 | 246 | 123 |
| | NON-COVID-CT | 1230 | 1230 | 861 | 246 | 123 |
| Total | | 2482 | 2460 | 1722 | 492 | 246 |
Table 3. Parameters of the proposed depthwise separable convolution. bnk denotes a bottleneck convolution; NLT, the kind of nonlinearity adopted; HSW, h-swish; REL, ReLU; SD, stride.

| Input | Operator | Expansion Size | Output | NLT | SD |
|---|---|---|---|---|---|
| 224 × 224 × 3 | Conv2d, 3 × 3 | – | 16 | HSW | 2 |
| 112 × 112 × 16 | bnk, 3 × 3 | 16 | 16 | REL | 2 |
| 56 × 56 × 16 | bnk, 3 × 3 | 72 | 24 | REL | 2 |
| 28 × 28 × 24 | bnk, 3 × 3 | 86 | 24 | REL | 1 |
| 28 × 28 × 24 | bnk, 5 × 5 | 96 | 40 | HSW | 2 |
| 14 × 14 × 40 | bnk, 5 × 5 | 240 | 40 | HSW | 1 |
| 14 × 14 × 40 | bnk, 5 × 5 | 240 | 40 | HSW | 1 |
| 14 × 14 × 40 | bnk, 5 × 5 | 120 | 48 | HSW | 1 |
| 14 × 14 × 48 | bnk, 5 × 5 | 144 | 48 | HSW | 1 |
| 14 × 14 × 48 | bnk, 5 × 5 | 288 | 96 | HSW | 2 |
| 7 × 7 × 96 | bnk, 5 × 5 | 576 | 96 | HSW | 1 |
| 7 × 7 × 96 | bnk, 5 × 5 | 576 | 96 | HSW | 1 |
| 7 × 7 × 96 | Conv2d, 1 × 1 | – | 256 | HSW | 1 |
| 7 × 7 × 256 | Avg pool, 7 × 7 | – | – | – | 1 |
| 1 × 1 × 256 | Conv2d, 1 × 1 | – | 1024 | HSW | 1 |
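To make the bnk rows in Table 3 concrete, the following is an illustrative PyTorch sketch of one depthwise separable bottleneck (1 × 1 expansion, k × k depthwise convolution, 1 × 1 linear projection). It is reconstructed from the table alone, not the authors' released implementation, and the batch-norm placement and residual rule are our assumptions.

```python
# Illustrative PyTorch sketch of a "bnk" row from Table 3 (not the authors' code).
import torch
import torch.nn as nn

class Bottleneck(nn.Module):
    def __init__(self, in_ch, exp_ch, out_ch, kernel, stride, use_hswish=True):
        super().__init__()
        act = nn.Hardswish if use_hswish else nn.ReLU  # HSW vs. REL in Table 3
        self.block = nn.Sequential(
            # 1x1 expansion convolution to "Expansion Size" channels
            nn.Conv2d(in_ch, exp_ch, 1, bias=False),
            nn.BatchNorm2d(exp_ch), act(),
            # kxk depthwise convolution: one filter per channel (groups == channels)
            nn.Conv2d(exp_ch, exp_ch, kernel, stride,
                      padding=kernel // 2, groups=exp_ch, bias=False),
            nn.BatchNorm2d(exp_ch), act(),
            # 1x1 pointwise projection back down to "Output" channels (linear)
            nn.Conv2d(exp_ch, out_ch, 1, bias=False),
            nn.BatchNorm2d(out_ch),
        )
        self.use_residual = stride == 1 and in_ch == out_ch

    def forward(self, x):
        out = self.block(x)
        return x + out if self.use_residual else out

# Example: the "28 x 28 x 24 -> bnk 5x5, exp 96, out 40, HSW, SD 2" row.
y = Bottleneck(24, 96, 40, kernel=5, stride=2)(torch.randn(1, 24, 28, 28))
print(y.shape)  # torch.Size([1, 40, 14, 14])
```

The depthwise stage applies a single filter per channel, which is what keeps the parameter and computation cost low relative to a regular convolution of the same kernel size.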
Table 4. Performance evaluation of the structural configurations of our proposed model with different depthwise bottleneck modules on the CXR dataset. bn denotes the bottleneck module.

| Structural Models | SEN (%) | SPE (%) | ACC (%) | AUC (%) | PRE (%) | F1-Score (%) | Time (min) |
|---|---|---|---|---|---|---|---|
| WMR-DepthwiseNet-A: 3 (bn 3 × 3) + 5 (bn 5 × 5) | 92.71 | 91.84 | 90.39 | 91.14 | 91.67 | 92.12 | 13.5 |
| WMR-DepthwiseNet-B: 3 (bn 3 × 3) + 6 (bn 5 × 5) | 97.50 | 96.22 | 93.57 | 96.93 | 95.42 | 96.15 | 14.2 |
| WMR-DepthwiseNet-C: 3 (bn 3 × 3) + 7 (bn 5 × 5) | 98.17 | 97.85 | 95.26 | 97.11 | 96.64 | 97.30 | 14.9 |
| WMR-DepthwiseNet-D: 3 (bn 3 × 3) + 8 (bn 5 × 5) | 98.46 | 97.99 | 98.63 | 98.72 | 98.69 | 98.92 | 15.6 |
| WMR-DepthwiseNet-E: 3 (bn 3 × 3) + 9 (bn 5 × 5) | 96.30 | 94.60 | 95.60 | 94.80 | 96.20 | 95.10 | 16.7 |
| WMR-DepthwiseNet-F: 3 (bn 3 × 3) + 10 (bn 5 × 5) | 95.20 | 94.40 | 93.70 | 94.10 | 95.80 | 96.80 | 17.1 |
Table 5. Performance evaluation of the structural configurations of our proposed model with different depthwise bottleneck modules on the CT dataset. bn denotes the bottleneck module.

| Structural Models | SEN (%) | SPE (%) | ACC (%) | AUC (%) | PRE (%) | F1-Score (%) | Time (min) |
|---|---|---|---|---|---|---|---|
| WMR-DepthwiseNet-A: 3 (bn 3 × 3) + 5 (bn 5 × 5) | 91.46 | 92.61 | 89.07 | 90.48 | 90.81 | 91.78 | 11.3 |
| WMR-DepthwiseNet-B: 3 (bn 3 × 3) + 6 (bn 5 × 5) | 94.67 | 95.12 | 91.31 | 95.73 | 94.28 | 95.58 | 12.7 |
| WMR-DepthwiseNet-C: 3 (bn 3 × 3) + 7 (bn 5 × 5) | 95.41 | 96.92 | 94.55 | 95.82 | 95.14 | 96.86 | 12.5 |
| WMR-DepthwiseNet-D: 3 (bn 3 × 3) + 8 (bn 5 × 5) | 97.78 | 96.22 | 96.83 | 97.61 | 97.02 | 97.37 | 13.9 |
| WMR-DepthwiseNet-E: 3 (bn 3 × 3) + 9 (bn 5 × 5) | 94.10 | 93.70 | 95.10 | 94.90 | 95.10 | 94.70 | 14.8 |
| WMR-DepthwiseNet-F: 3 (bn 3 × 3) + 10 (bn 5 × 5) | 94.80 | 94.10 | 94.00 | 95.80 | 94.30 | 93.90 | 15.5 |
Table 6. Comparison of our formulated WMR-DepthwiseNet with widely used pretrained models on the same CXR dataset. Only the last layer of each pretrained model was fine-tuned to match the number of classes.

| Models | SEN (%) | SPE (%) | ACC (%) | AUC (%) | PRE (%) | F1 Score (%) | Time (min) |
|---|---|---|---|---|---|---|---|
| VGG-19 | 92.71 | 91.84 | 92.39 | 91.14 | 91.67 | 92.12 | 26.2 |
| AlexNet | 90.37 | 89.72 | 89.95 | 90.61 | 89.75 | 90.18 | 16.4 |
| ResNet-50 | 95.73 | 96.18 | 94.23 | 95.76 | 93.92 | 94.86 | 25.9 |
| EfficientNet | 96.49 | 95.94 | 96.69 | 94.94 | 95.77 | 96.03 | 21.6 |
| DenseNet-121 | 93.74 | 92.31 | 92.85 | 93.31 | 92.95 | 93.48 | 22.1 |
| Inception-V3 | 91.88 | 90.75 | 91.31 | 90.96 | 90.21 | 91.66 | 19.7 |
| MobileNet-V2 | 94.83 | 95.27 | 94.14 | 93.57 | 92.63 | 93.78 | 17.3 |
| WMR-DepthwiseNet-D (Proposed) | 98.46 | 97.99 | 98.63 | 98.72 | 98.69 | 98.92 | 15.6 |
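As a rough illustration of the fine-tuning protocol named in the captions of Tables 6 and 7, the sketch below freezes a pretrained torchvision backbone and replaces only its final layer. The 12-class output matches the CXR categories of Table 1; the ResNet-50 backbone and weight tag are our assumptions for the example.

```python
# Hedged sketch of last-layer fine-tuning for the baselines in Tables 6 and 7.
import torch.nn as nn
from torchvision import models

model = models.resnet50(weights="IMAGENET1K_V2")  # pretrained backbone (assumed)
for p in model.parameters():
    p.requires_grad = False                       # freeze all pretrained layers
model.fc = nn.Linear(model.fc.in_features, 12)    # new head: 12 CXR classes (Table 1)
```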
Table 7. Comparison of our formulated WMR-DepthwiseNet with widely used pretrained models on the same CT dataset. Only the last layer of each pretrained model was fine-tuned to match the number of classes.

| Models | SEN (%) | SPE (%) | ACC (%) | AUC (%) | PRE (%) | F1 Score (%) | Time (min) |
|---|---|---|---|---|---|---|---|
| VGG-19 | 91.17 | 90.91 | 90.02 | 90.78 | 90.62 | 91.37 | 24.2 |
| AlexNet | 89.59 | 88.71 | 88.52 | 89.98 | 90.03 | 89.74 | 14.7 |
| ResNet-50 | 94.82 | 95.62 | 93.23 | 93.45 | 91.87 | 92.17 | 23.4 |
| EfficientNet | 94.67 | 94.81 | 94.13 | 92.81 | 93.08 | 93.89 | 19.8 |
| DenseNet-121 | 92.28 | 90.81 | 90.55 | 91.75 | 90.73 | 91.65 | 20.3 |
| Inception-V3 | 90.03 | 89.24 | 90.88 | 89.32 | 89.13 | 90.78 | 17.6 |
| MobileNet-V2 | 92.79 | 93.78 | 92.81 | 91.82 | 90.67 | 91.96 | 15.1 |
| WMR-DepthwiseNet-D (Proposed) | 97.78 | 96.22 | 96.83 | 97.61 | 97.02 | 97.37 | 13.9 |
Table 8. Performance evaluation of our proposed WMR-DepthwiseNet model in comparison with several COVID-19 image-based screening methods for both the CXR and CT datasets.

| Methods | SEN (%) | SPE (%) | ACC (%) |
|---|---|---|---|
| Chen et al. [11] | 100 | 93.6 | 95.2 |
| Barstugan et al. [40] | 91.8 | 92.3 | 90.7 |
| Wang et al. [12] | 90.4 | 89.5 | 92.3 |
| Li et al. [37] | 90.0 | 96.0 | 92.3 |
| Song et al. [42] | 96.0 | 77.0 | 86.1 |
| Shi et al. [41] | 90.7 | 87.2 | 89.4 |
| Wang et al. [33] | 85.9 | 89.4 | 82.9 |
| Jin et al. [53] | 94.1 | 95.5 | 96.5 |
| Xu et al. [52] | 87.9 | 90.7 | 86.7 |
| Jin et al. [54] | 97.4 | 92.2 | 95.7 |
| WMR-DepthwiseNet-D (CXR) | 98.46 | 97.99 | 98.63 |
| WMR-DepthwiseNet-D (CT) | 97.78 | 96.22 | 96.83 |
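For reference, the SEN, SPE, and ACC columns in Table 8 follow the standard confusion-matrix definitions, sketched below.

```python
# Standard confusion-matrix definitions behind the SEN/SPE/ACC columns.
def sen_spe_acc(tp, fp, tn, fn):
    sensitivity = tp / (tp + fn)               # recall on the positive (COVID-19) class
    specificity = tn / (tn + fp)               # recall on the negative class
    accuracy = (tp + tn) / (tp + fp + tn + fn) # overall fraction correct
    return sensitivity, specificity, accuracy
```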
Table 9. Comparison of our proposed WMR-DepthwiseNet with other selected state-of-the-art COVID-19 models using the same training data distribution for the CXR dataset.

| Model | SEN (%) | SPE (%) | ACC (%) | AUC (%) | PREC (%) | Time (min) |
|---|---|---|---|---|---|---|
| COVID-Net [33] | 94.20 | 93.99 | 94.86 | 94.32 | 95.56 | 26.4 |
| DeCoVNet [57] | 97.21 | 97.68 | 97.78 | 97.21 | 97.41 | 22.8 |
| Cov-Net [37] | 97.92 | 96.28 | 97.67 | 96.27 | 97.65 | 23.7 |
| DeepPneumonia [42] | 90.72 | 91.20 | 90.78 | 90.06 | 91.80 | 25.8 |
| WMR-DepthwiseNet-D (CXR) | 98.46 | 97.99 | 98.63 | 98.72 | 98.69 | 15.6 |
Table 10. Comparison of our proposed WMR-DepthwiseNet with other selected state-of-the-art COVID-19 models using the same training data distribution for the CT dataset.

| Model | SEN (%) | SPE (%) | ACC (%) | AUC (%) | PREC (%) | Time (min) |
|---|---|---|---|---|---|---|
| COVID-Net [33] | 92.37 | 92.54 | 93.81 | 92.65 | 93.16 | 24.9 |
| DeCoVNet [57] | 95.81 | 96.43 | 95.17 | 94.98 | 95.21 | 20.2 |
| Cov-Net [37] | 95.76 | 95.81 | 96.76 | 95.36 | 95.03 | 21.6 |
| DeepPneumonia [42] | 89.04 | 90.77 | 89.24 | 89.70 | 90.55 | 23.4 |
| WMR-DepthwiseNet-D (CT) | 97.78 | 96.22 | 96.83 | 97.61 | 97.02 | 13.9 |
Table 11. Evaluation of hyperparameter tuning on the overall performance of the proposed WMR-DepthwiseNet. All values are accuracy (ACC, %).

| Hyperparameter Tuning | SGD (CXR) | Adam (CXR) | RMSProp (CXR) | SGD (CT) | Adam (CT) | RMSProp (CT) |
|---|---|---|---|---|---|---|
| LR (0.1) + Dropout (0.25) | 88.18 | 90.73 | 91.14 | 89.91 | 90.77 | 91.89 |
| LR (0.1) + Dropout (0.50) | 90.56 | 91.26 | 89.72 | 90.26 | 91.37 | 89.43 |
| LR (0.1) + Dropout (0.75) | 89.88 | 90.14 | 89.02 | 91.72 | 92.74 | 90.19 |
| LR (0.01) + Dropout (0.25) | 92.51 | 91.85 | 90.18 | 91.78 | 90.42 | 89.33 |
| LR (0.01) + Dropout (0.50) | 91.04 | 90.28 | 91.22 | 90.80 | 92.25 | 91.66 |
| LR (0.01) + Dropout (0.75) | 90.55 | 92.83 | 92.76 | 91.08 | 91.81 | 90.71 |
| LR (0.001) + Dropout (0.25) | 90.33 | 91.18 | 93.18 | 92.46 | 91.52 | 90.59 |
| LR (0.001) + Dropout (0.50) | 91.77 | 92.15 | 91.13 | 92.89 | 92.79 | 92.77 |
| LR (0.001) + Dropout (0.75) | 92.66 | 93.78 | 92.99 | 94.02 | 93.68 | 92.16 |
| LR (0.0001) + Dropout (0.25) | 94.38 | 94.13 | 93.23 | 94.38 | 94.17 | 94.89 |
| LR (0.0001) + Dropout (0.50) | 95.61 | 97.26 | 95.81 | 94.27 | 96.83 | 95.33 |
| LR (0.0001) + Dropout (0.75) | 94.79 | 95.76 | 93.98 | 93.16 | 94.72 | 94.79 |
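Table 11 amounts to a grid search over learning rate, dropout rate, and optimizer. A minimal sketch of enumerating that grid is shown below; the training call itself is omitted and would be supplied by a hypothetical hook.

```python
# Hedged sketch of the LR x dropout x optimizer grid behind Table 11.
from itertools import product

# 4 learning rates x 3 dropout rates x 3 optimizers = 36 configurations
# per dataset (the 12 rows x 3 optimizer columns of Table 11).
learning_rates = [0.1, 0.01, 0.001, 0.0001]
dropouts = [0.25, 0.50, 0.75]
optimizers = ["SGD", "Adam", "RMSProp"]

for lr, dropout, opt in product(learning_rates, dropouts, optimizers):
    # A real run would call a train-and-evaluate routine here and record ACC.
    print(f"LR={lr}, Dropout={dropout}, Optimizer={opt}")
```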
Table 12. Dataset split for cross-evaluation on the CXR dataset.

| Dataset | Category | Data Count per Category | Selected No. of Data per Category | Training Set | Validation Set | Test Set |
|---|---|---|---|---|---|---|
| Tabik et al. [58] | COVID-CXR | 426 | 424 | 212 | 106 | 106 |
| | NON-COVID-CXR | 426 | 424 | 212 | 106 | 106 |
| Rahman et al. [47] | COVID-CXR | 3616 | 424 | 212 | 106 | 106 |
| | NON-COVID-CXR | 10,192 | 424 | 212 | 106 | 106 |
| Total | | 14,656 | 1696 | 848 | 424 | 424 |
Table 13. Dataset split for cross-evaluation on the CT dataset.

| Dataset | Category | Data Count per Category | Selected No. of Data per Category | Training Set | Validation Set | Test Set |
|---|---|---|---|---|---|---|
| Soares et al. [48] | COVID-CT | 1252 | 424 | 212 | 106 | 106 |
| | NON-COVID-CT | 1230 | 424 | 212 | 106 | 106 |
| Yang et al. [59] | COVID-CT | 349 | 424 | 212 | 106 | 106 |
| | NON-COVID-CT | 463 | 424 | 212 | 106 | 106 |
| Total | | 3294 | 1696 | 848 | 424 | 424 |
Table 14. Results of the cross-evaluation of the proposed model on the CXR dataset.

| Training Dataset | Test Dataset | ACC (%) | SEN (%) | SPE (%) |
|---|---|---|---|---|
| Rahman et al. [47] | Tabik et al. [58] (Train) | 98.17 | 97.98 | 97.00 |
| Rahman et al. [47] | Tabik et al. [58] (Test) | 97.92 | 97.13 | 96.87 |
| Rahman et al. [47] | Tabik et al. [58] (Train + Test) | 98.01 | 97.88 | 97.09 |
| Tabik et al. [58] (Train + Test) | Rahman et al. [47] | 97.87 | 97.23 | 96.46 |
Table 15. Results of the cross-evaluation of the proposed model on the CT dataset.

| Training Dataset | Test Dataset | ACC (%) | SEN (%) | SPE (%) |
|---|---|---|---|---|
| Soares et al. [48] | Yang et al. [59] (Train/Val) | 96.00 | 97.11 | 95.92 |
| Soares et al. [48] | Yang et al. [59] (Test) | 95.46 | 96.73 | 94.81 |
| Soares et al. [48] | Yang et al. [59] (Train/Val + Test) | 97.00 | 95.03 | 95.55 |
| Yang et al. [59] (Train/Val + Test) | Soares et al. [48] | 96.94 | 90.55 | 95.71 |
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
