Article

Automated Pneumonia Based Lung Diseases Classification with Robust Technique Based on a Customized Deep Learning Approach

Department of Software Engineering, Faculty of Technology, Firat University, Elazig 23200, Turkey
Diagnostics 2023, 13(2), 260; https://doi.org/10.3390/diagnostics13020260
Submission received: 2 December 2022 / Revised: 15 December 2022 / Accepted: 9 January 2023 / Published: 10 January 2023
(This article belongs to the Special Issue Artificial Intelligence in Medicine 2023)

Abstract

Many people have been affected by infectious lung diseases (ILD). With the outbreak of COVID-19 in the last few years, many patients have waited for weeks to recover in hospital intensive care wards. Early diagnosis of ILD is therefore of great importance for reducing the occupancy rates of health institutions and the treatment time of patients. Many artificial intelligence-based studies have detected and classified diseases from medical images, with the most important goals being higher classification performance and greater model reliability. In this study, a powerful algorithm based on a new customized deep learning model (the ACL model), in which attention, LSTM, and CNN structures are trained synchronously, was proposed to classify healthy, COVID-19, and pneumonia samples. The important stains and traces in chest X-ray (CX-R) images were emphasized with the marker-controlled watershed (MCW) segmentation algorithm. The ACL model was trained for different training-test ratios (90–10%, 80–20%, and 70–30%), yielding accuracy scores of 100%, 96%, and 96%, respectively. These results compare favorably with existing methods. In addition, the contribution of each strategy used in the proposed model to classification performance was analyzed in detail. Deep learning-based applications can serve as a useful decision support tool for physicians in the early diagnosis of ILD. However, for these applications to be reliable, they must be validated on many datasets.

1. Introduction

Around the world, acute infections of the lower respiratory tract have been a major source of illness and death [1]. Millions of people each year are affected by lung disease, which poses serious hazards to children, adults aged 65 and over, and people with conditions such as obesity, diabetes, and high blood pressure. Lung disease has many causes, the best known of which is viral infection [2].
COVID-19, a new member of the infectious lung disease (ILD) family, first appeared in Wuhan, China at the end of 2019. The International Committee on Taxonomy of Viruses (ICTV) initially designated the virus SARS-CoV-2 [3]. At the beginning of 2020, the World Health Organization (WHO) named the disease COVID-19, and in March 2020 the WHO declared it a pandemic. Cases and fatalities surged so quickly during the pandemic that they reached approximately 600 million and 6.5 million, respectively [4]. With this surge, the novel coronavirus spread throughout the world.
COVID-19 can cause different signs and symptoms of infection, including high fever, diarrhea, coughing, respiratory problems, and fatigue. In some active cases, it can lead to major complications such as breathing difficulties, multi-organ failure, pneumonia, sudden cardiac arrest, and even death. Because of the exponential rise in the number of active cases, healthcare services were overwhelmed even in many affluent nations. Until COVID-19 vaccines were created, most nations lacked testing supplies and adequate ventilators, which made the situation more urgent. Many nations therefore cut off access to other nations, pushed their citizens to stay at home, and discouraged domestic and international travel [5]. Although the COVID-19 vaccines appear to have brought the pandemic under control, the disease remains prevalent because fewer individuals wear masks and more people feel comfortable going out in public. It is therefore also of great importance that pneumonia, one of the most common ILDs, can be accurately distinguished from COVID-19.
Isolating infected patients from those who are not sick is one of the most crucial strategies in the fight against ILD. The most reliable and practical approach to diagnosis is chest X-ray (CX-R), which is a radiological imaging technique [6,7].
In recent times, ILD, and COVID-19 in particular, has been a hot topic among scientists from many academic fields around the world. Researchers have published artificial intelligence-based algorithms for automatic ILD categorization from computed tomography (CT) and CX-R images to assist radiologists and specialists in decision making [8,9,10].
This study aimed to improve classification performance for ILD, particularly COVID-19, given its severe impact on health systems. A customized deep-learning technique was therefore developed for automated classification. The contributions of the proposed approach are as follows:
  • Different regions in the images are marked using the MCW segmentation algorithm, which makes the distinctive information in the data stand out. This pre-processing step increased the classification accuracy.
  • An attention structure is added to the CNN model to increase the distinctiveness of the learned representation, and LSTM blocks are added to exploit their ability to retain information in their memory cells. The resulting attention-CNN LSTM (ACL) model, in which the attention structure, convolutional layers, and LSTM model are trained synchronously, improved classification performance compared to a CNN model without attention and LSTM structures.

2. Related Works

Particularly in the medical field, numerous computer-aided detection methods have advanced substantially over the past few decades. Several artificial intelligence (AI)-based deep learning algorithms have been used in numerous medical applications, most notably in detection and diagnosis. Recent years have seen AI succeed in identifying several conditions, including plant disease [11], osteoporosis [12], breast cancer [13], cardiovascular disease [14], and poultry disease [15]. Computer-aided, deep learning-based systems for identifying ILD, including COVID-19, are necessary because ILD is now a prominent clinical problem. Numerous researchers have therefore created AI applications employing both X-ray and CT images. Given that X-ray exams are less expensive than CT scans, identifying ILD from CX-R images is practical and cost-effective. On an X-ray dataset, Afshar et al. developed the COVIDCAPS framework, which achieved 95.7% accuracy and 95.8% specificity [16]; applications of this kind can handle even small datasets efficiently. In another study, models were built on ResNet50 and Inception variants, and the highest accuracy for binary classification, 99.7%, was obtained by the ResNet50 model [8]. Sethy et al. [17] obtained an accuracy of 95.38% when separating COVID-19-positive patients from the other cases using an SVM on learnable features extracted from X-ray images with ResNet50.
Additionally, several researchers have applied deep convolutional neural network designs to CX-R images, producing accurate and useful results [9]. Hemdan et al. [18] built a customized model for automated ILD classification composed of seven CNN architectures. For binary and multi-class (pneumonia, COVID-19, and healthy) categorization, Apostolopoulos et al. [19] attained accuracies of 98.75% and 93.48%, respectively; their deep learning model applied transfer learning to classify 1427 X-ray ILD samples. Utilizing multimodal imaging data, Horry et al. [20] conducted detection through transfer learning; with the right parameters, their VGG19-based transfer learning model achieved 86% precision for ILD from ultrasound (multi-class classification) and 84% precision for CT images (binary classification). Using the DarkCovidNet network, which contains 17 convolutional layers with unique filter sets, Ozturk et al. [21] achieved accuracies of 98.08% and 87.02% for binary- and multi-class ILD databases. In another hybrid model, created by Altan and Karasu [22], learning parameters were updated by the chaotic squirrel search algorithm and the prediction process was carried out with the EfficientNet-B0 network. Tsiknakis et al. [23] employed transfer learning to categorize COVID-19 and standard X-ray images, and reported an area under the Receiver Operating Characteristic (ROC) curve equal to 1. Demir [24] presented a hybrid deep learning model combining convolutional layers and the LSTM model to automatically classify ILD; the model, named DeepCoroNet, reached a classification accuracy of 96.54%. Ismael and Sengur [25] used a ResNet50-based transfer learning approach for binary ILD classification, extracting deep features from the ResNet50 model.
COVID-19 samples were classified with an overall accuracy of 94.7% by conveying these deep features to the SVM algorithm. Muralidharan et al. [26] utilized a new deep-learning approach for automated ILD detection from X-ray images: the X-ray images were first decomposed into seven modes with a wavelet transform-based algorithm, and the resulting multiscale images were passed to a multiscale deep CNN to classify healthy, COVID-19, and pneumonia samples, achieving an accuracy of 96%. Demir et al. [27] proposed a deep autoencoder consisting of convolutional layers and an autoencoder model for ILD classification. Features were extracted from the compressed (pooling) layer representation of the deep autoencoder network; a multilevel feature selection algorithm named serial data analysis and regression (SDAR) reduced the feature set size and boosted classification performance, and a classification accuracy of 97.33% was achieved by the SVM classifier.
When ILD-related approaches are examined in general, CNN-based models trained from scratch have not achieved good classification performance on their own, and a separate classifier such as SVM is often added to improve it, which increases the computational cost of classification. In the proposed study, superior performance was achieved without the need for a separate classifier by strengthening a CNN-based model with attention and LSTM structures.

3. Dataset

The ILD database used here contained 1061 CX-R samples in total, gathered from various publicly accessible sources. Radiologists and other specialists carried out the labeling. The CX-R images were reorganized into COVID-19, Normal, and Pneumonia folders, containing 361, 200, and 500 samples, respectively. Of the COVID-19 cases, 161 were female and 200 were male, and the average age of the individuals was above 45. The combined COVID-19 and normal (healthy) CX-R samples were collected from Kaggle database links [28,29]. Samples for the pneumonia class were added from the dataset created by Wang et al. [30]. Figure 1 displays CX-R image samples for each class; the first, second, and third columns show the normal, COVID-19, and pneumonia classes, respectively.

4. Proposed Methodology

In this study, a novel and efficient method for highly accurate ILD detection was developed. The suggested approach, shown in Figure 2, was evaluated on the dataset of CX-R samples. The methodology had two stages: pre-processing of the CX-R samples with marker-controlled watershed (MCW) segmentation, and classification with the attention-CNN LSTM (ACL) model. The CX-R images were first pre-processed to improve classification performance. The initial pre-processing step was a gradient operation using the Sobel operator, which highlighted the blob regions of the CX-R samples and thereby enhanced the performance of the MCW segmentation. The blobs on the gradient images were then segmented with MCW segmentation, which was used to reduce the gray regions in the CX-R samples. In the third pre-processing step, CX-R samples were resized to 100 (height) × 100 (width) pixels to standardize them and reduce the computational cost. Finally, the processed CX-R samples were fed to the ACL model, which consisted of an attention structure, convolutional layers, and an LSTM model. The attention structure increased the distinctive representation of the CX-R regions highlighted by the MCW segmentation algorithm, the convolutional layers extracted significant feature maps, and the LSTM blocks were added to exploit their ability to retain information in their memory cells. These three strategies in the ACL model operated synchronously during training.
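The first and third pre-processing steps above (Sobel gradient, then resizing to a fixed size such as 100 × 100) can be sketched as follows. This is an illustrative pure-Python version under our own function names, not the author's Matlab implementation; a real pipeline would use an image-processing library.

```python
def sobel_gradient(img):
    """Return the Sobel gradient-magnitude image (borders left at 0)."""
    kx = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]   # horizontal kernel
    ky = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]   # vertical kernel
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = sum(kx[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            gy = sum(ky[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            out[y][x] = (gx * gx + gy * gy) ** 0.5   # gradient magnitude
    return out

def resize_nearest(img, new_h, new_w):
    """Nearest-neighbour resize to (new_h, new_w), e.g. 100 x 100."""
    h, w = len(img), len(img[0])
    return [[img[y * h // new_h][x * w // new_w] for x in range(new_w)]
            for y in range(new_h)]
```

On a chest X-ray, the gradient image responds strongly at the borders of the stains and blobs that the MCW stage later segments, which is why the gradient is computed first.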

5. Methodology Techniques

5.1. Pre-Processing

The directional gradient is used to compute gradient magnitudes and directions for input images, employing a gradient operator such as Sobel, Roberts, or Prewitt [31]. In the watershed transform, bright surfaces correspond to high pixel density and dark surfaces to low pixel density. The watershed transformation can be used to identify catchment basins ($CatBas$) and watershed ridge lines in a sample [32]. For a function $f$ with minima $\{m_k\}_{k \in S}$ indexed by a set $S$, the catchment basin $CatBas(m_j)$ of a minimum $m_j$ is defined as the set of points $x$ that are topographically closer to $m_j$ than to any other local minimum $m_i$ (Equation (1)):
$$CatBas(m_j) = \left\{ x \in D \;\middle|\; \forall i \in S \setminus \{j\} : f(m_j) + T_d(x, m_j) < f(m_i) + T_d(x, m_i) \right\} \quad (1)$$
where $D$ is the domain and $T_d$ the topographical distance. The watershed of $f$, $Wshed(f)$, is the set of points that belong to no catchment basin (Equation (2)):
$$Wshed(f) = D \setminus \left( \bigcup_{j \in S} CatBas(m_j) \right) \quad (2)$$
Given a watershed label $Wshed \notin S$, the watershed transform of $f$ is the mapping $\beta : D \to S \cup \{Wshed\}$ that assigns each point either the index of its catchment basin or the watershed label.
MCW segmentation has been identified as a strong and reliable algorithm for separating touching objects whose boundaries are expressed as ridges. Markers are added to the associated objects: inner markers are assigned to the objects of interest and outer markers to the background. After segmentation, watershed zones separate each object from its neighbors along the detected ridges. As a result, the MCW segmentation algorithm can distinguish each distinctive small or large detail in a radiological image at the regional level. The MCW segmentation technique contains the following steps:
Step 1: Compute a segmentation function, an image whose dark regions correspond to the objects to be segmented.
Step 2: Determine the foreground markers, the connected pixel blobs inside each object.
Step 3: Determine the background markers, the pixels that are not part of any object.
Step 4: Modify the segmentation function so that it has minima only at the foreground and background marker locations.
Step 5: Compute the watershed transform of the modified segmentation function.
Step 6: Compute the learning parameters.
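The marker-driven flooding in Steps 2–5 can be sketched with a priority queue: pixels are flooded outward from the markers in order of increasing gradient value, so each pixel inherits the label of the basin that reaches it first, and basin boundaries fall on the watershed ridge lines. This is a simplified illustrative version (4-connectivity, no explicit ridge labels), not the routine used in the paper.

```python
import heapq

def marker_watershed(grad, markers):
    """Flood a gradient image from labelled markers.

    grad    : 2-D list of gradient magnitudes (the relief to flood)
    markers : 2-D list, 0 = unlabelled, >0 = foreground/background label
    Returns a label image in which every pixel carries the label of
    the marker whose flood reached it first (lowest gradient first).
    """
    h, w = len(grad), len(grad[0])
    labels = [row[:] for row in markers]
    heap = []
    for y in range(h):
        for x in range(w):
            if markers[y][x] > 0:                    # seed the flood
                heapq.heappush(heap, (grad[y][x], y, x))
    while heap:
        g, y, x = heapq.heappop(heap)
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w and labels[ny][nx] == 0:
                labels[ny][nx] = labels[y][x]        # inherit basin label
                heapq.heappush(heap, (grad[ny][nx], ny, nx))
    return labels
```

Because the flood climbs the gradient relief, two markers separated by a high-gradient ridge (an object boundary) end up owning the two sides of that ridge, which is exactly the separation the MCW steps describe.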

5.2. Machine Learning Technique

In the sequence folding layer, a batch of image sequences is converted into a batch of independent images so that convolution operations can be applied to each time step. The sequence unfolding layer then restores the sequence structure after the convolutional operations.
The fundamental structural layer for a CNN called the convolution layer uses the convolution operation [33]. In this layer, there are several learnable filters. Convolutional layers extract features from inputs that are present in local, related parts of the dataset and assign their perspective to a feature map.
The batch normalization (BN) layer is used to speed up network training and reduce sensitivity to initialization; BN operations also lessen the vanishing gradient problem [34]. The ReLU layer serves as the activation function and mitigates the gradient vanishing and explosion problems [35].
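As a sketch of what the BN layer computes, the following normalises a batch of activations to zero mean and unit variance, then applies the learnable scale `gamma` and shift `beta`. This is a 1-D illustrative version; in the network the same operation runs per channel over mini-batches.

```python
import math

def batch_norm(batch, gamma=1.0, beta=0.0, eps=1e-5):
    """Normalise a 1-D batch of activations to zero mean / unit
    variance, then scale by gamma and shift by beta."""
    m = sum(batch) / len(batch)
    var = sum((v - m) ** 2 for v in batch) / len(batch)
    return [gamma * (v - m) / math.sqrt(var + eps) + beta for v in batch]
```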
The flattening layer converts the 2-D output of the convolutional structure into 1-D data for use in the LSTM structure [36]. Classical LSTM layers consist of gated units with input, output, and forget gates [37]. Using these gates, LSTM layers retain information decided upon in previous time steps and regulate the data flow through the units; they also significantly reduce the gradient vanishing and explosion problems. The forget gate resembles a single-layer neural network; as Equation (3) shows, the gate is open when its output is one.
$$f_t = \sigma\left( W \left[ x_t, h_{t-1}, C_{t-1} \right] + b_f \right) \quad (3)$$
where $\sigma$ is the logistic sigmoid function, $W$ the weight matrix, $b_f$ the bias vector, $h_{t-1}$ the output vector of the preceding LSTM unit, $C_{t-1}$ the memory of the preceding LSTM unit, and $x_t$ the current LSTM unit input.
The input gate computes its activation with a single-layer neural network over the previous unit values, and the candidate memory is formed with the hyperbolic tangent function. Equations (4) and (5) present the respective formulae:
$$i_t = \sigma\left( W \left[ x_t, h_{t-1}, C_{t-1} \right] + b_i \right) \quad (4)$$
$$C_t = f_t \cdot C_{t-1} + i_t \cdot \tanh\left( W \left[ x_t, h_{t-1}, C_{t-1} \right] + b_c \right) \quad (5)$$
The output gate controls the data and information transmitted from the current LSTM layer. Equations (6) and (7) show the computations for the output gate:
$$o_t = \sigma\left( W \left[ x_t, h_{t-1}, C_{t-1} \right] + b_o \right) \quad (6)$$
$$h_t = o_t \cdot \tanh\left( C_t \right) \quad (7)$$
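One step of Equations (3)–(7) can be sketched as follows. This is a scalar, illustrative version in which every gate sees the concatenation $[x_t, h_{t-1}, C_{t-1}]$ through its own weights, matching the equations above; the weight layout and function names are our own simplification, not the paper's implementation.

```python
import math

def sigmoid(z):
    """Logistic sigmoid used by the gates."""
    return 1.0 / (1.0 + math.exp(-z))

def lstm_step(x_t, h_prev, c_prev, W, b):
    """One LSTM step following Equations (3)-(7), scalar version.

    W and b are dicts with keys 'f', 'i', 'c', 'o' (forget, input,
    candidate, output); each W[k] holds the three weights applied to
    [x_t, h_prev, c_prev].
    """
    z = lambda k: W[k][0] * x_t + W[k][1] * h_prev + W[k][2] * c_prev + b[k]
    f_t = sigmoid(z('f'))                          # Eq. (3): forget gate
    i_t = sigmoid(z('i'))                          # Eq. (4): input gate
    c_t = f_t * c_prev + i_t * math.tanh(z('c'))   # Eq. (5): cell state
    o_t = sigmoid(z('o'))                          # Eq. (6): output gate
    h_t = o_t * math.tanh(c_t)                     # Eq. (7): hidden state
    return h_t, c_t
```

The multiplicative gates are why the cell mitigates vanishing gradients: the cell state $C_t$ is carried forward additively, scaled only by the forget gate.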
The fully connected (FC) layer connects all neurons in adjacent layers; the neuron values are used to determine the compatibility between features and classes [38]. The softmax layer receives the final FC layer output, which contains the class scores. The drop-out layer prevents over-fitting by setting a subset of input values to zero with a specified probability during training [39]. The softmax function used for classification in CNNs is given in Equation (8):
$$S_k = \frac{e^{x_k}}{\sum_{i=1}^{N} e^{x_i}} \quad (8)$$
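A numerically stable way to evaluate Equation (8) subtracts the maximum score before exponentiating; this leaves $S_k$ unchanged (numerator and denominator are both scaled by $e^{-\max(x)}$) but avoids overflow for large scores. A minimal sketch:

```python
import math

def softmax(x):
    """Numerically stable softmax, Equation (8)."""
    m = max(x)                                 # shift for stability
    exps = [math.exp(v - m) for v in x]
    s = sum(exps)
    return [e / s for e in exps]
```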
The attention structure utilized in the proposed model is given in Figure 3, where g i depicts a gating signal vector acquired at a coarser scale and x i represents the output feature map of the ith layer, which subsequently sets the focus region for each pixel [40]. Equations (9) and (10) provide the computation of o u t using element-wise multiplication.
$$out = \alpha_i \cdot x_i \quad (9)$$
$$\alpha_i = \sigma\left( \varphi^{T} \left( w_x^{T} x_i + w_g^{T} g_i + b_g \right) + b_\varphi \right) \quad (10)$$
where $b_\varphi$ and $b_g$ are bias terms, and $w_x$, $w_g$, and $\varphi$ are linear transformations implemented with the $1 \times 1 \times 1$ convolution operator. The learnable parameters of the attention modules are initialized at random and optimized from scratch.
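Equations (9) and (10) can be sketched with scalars in place of the $1 \times 1 \times 1$ convolutions: the feature $x_i$ and gating signal $g_i$ are combined linearly, squashed through a sigmoid into a coefficient $\alpha_i \in (0, 1)$, and the feature is scaled by it. An illustrative sketch with hypothetical parameter values, not the trained module:

```python
import math

def attention_gate(x_i, g_i, w_x, w_g, phi, b_g, b_phi):
    """Additive attention gate per Equations (9)-(10), scalar sketch.

    x_i : feature from the convolutional layer
    g_i : gating signal from the coarser scale
    Returns (out, alpha) where out = alpha * x_i and alpha in (0, 1)
    controls how much of x_i passes through.
    """
    z = phi * (w_x * x_i + w_g * g_i + b_g) + b_phi   # Eq. (10), inner term
    alpha = 1.0 / (1.0 + math.exp(-z))                # sigmoid -> (0, 1)
    return alpha * x_i, alpha                         # Eq. (9)
```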

6. Experimental Studies

Coding procedures were run in Matlab R2021a on a Windows 10 Pro system equipped with an Intel Core i9 processor, 32 GB DDR5 RAM, and a 4 GB graphics card. Figure 4 shows the layer representation of the ACL network. The convolutional structure (six convolutional layers) in the ACL model starts with the convolutional layer named convlnp2d_1 and ends with the convolutional layer named convlnp2d_6. The attention structure in the ACL model was built from the convlnp2d_4 convolutional layer.
The detailed layer information of the 28-layer ACL model is given in Table 1 in a sequential layer architecture.
The initial learning rate, max epochs, validation frequency, and minimum batch size, which are training option parameters of the ACL model, are selected as 0.001, 5, 30 and 32, respectively. The training optimization solver was stochastic gradient descent with momentum (SGDM). More detailed information about the simulation parameters performed is given in Table A1 in Appendix A. The Matlab integrated development environment (IDE) containing the proposed approach coding was run for 70–30%, 80–20%, and 90–10% training-test ratios. Accuracy and loss graphs in training-test processes for these options are given in Figure 5.
As seen in Figure 5, training-test accuracy and training-test loss values are given for all training-test ratios. The training accuracies for all training-test ratios were 100%. The best test accuracy (100%) was obtained for the 90–10% training-test ratio, while the worst test accuracy (94.65%) was obtained for the 70–30% training-test ratio. The best training-test loss values (0.019–0.01) were obtained for the 90–10% training-test ratio, while the worst training-test loss values (0.12–0.16) were obtained for the 70–30% training-test ratio.
At the end of the training process, according to class names, the test confusion matrix results are given in Figure 6 for different training-test ratios.
As seen in Figure 6, pneumonia samples were predicted with 100% accuracy. The worst COVID-19 and Normal sample predictions were obtained for the 70–30% training-test ratio. The COVID-19 samples were predicted with 100% accuracy for the 80–20% and 90–10% training-test ratios. The best prediction for Normal samples was achieved with the 90–10% training-test ratio.
In Table 2, the results of performance metrics, which consisted of sensitivity (Se), specificity (Sp), precision (Pr), and F-score, are given for different training-test ratios of the proposed ACL model. Using true positive (TP), true negative (TN), false positive (FP), and false negative (FN) values, these performance metrics were calculated in Equations (11)–(14) as follows:
$$Se = \frac{TP}{TP + FN} \quad (11)$$
$$Sp = \frac{TN}{TN + FP} \quad (12)$$
$$Pr = \frac{TP}{TP + FP} \quad (13)$$
$$F\text{-}score = \frac{2 \times TP}{2 \times TP + FP + FN} \quad (14)$$
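For a multi-class problem, Equations (11)–(14) are computed per class in a one-vs-rest fashion directly from the confusion matrix. An illustrative sketch (the matrix values in the usage example below are made up, not the paper's results):

```python
def per_class_metrics(cm, k):
    """One-vs-rest Se, Sp, Pr, F-score for class k of a confusion
    matrix cm (rows = true class, columns = predicted class),
    following Equations (11)-(14)."""
    n = len(cm)
    tp = cm[k][k]
    fn = sum(cm[k][j] for j in range(n)) - tp        # missed class-k samples
    fp = sum(cm[i][k] for i in range(n)) - tp        # wrongly assigned to k
    tn = sum(cm[i][j] for i in range(n) for j in range(n)) - tp - fn - fp
    se = tp / (tp + fn)                              # Eq. (11), sensitivity
    sp = tn / (tn + fp)                              # Eq. (12), specificity
    pr = tp / (tp + fp)                              # Eq. (13), precision
    f = 2 * tp / (2 * tp + fp + fn)                  # Eq. (14), F-score
    return se, sp, pr, f
```

For example, with `cm = [[10, 0, 0], [0, 8, 2], [0, 1, 9]]`, class 0 scores 1.0 on all four metrics, while class 1 has Se = 0.8 because two of its ten samples were misclassified.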
In the 70–30% training-test ratio, the best Se was 1.0 for the Pneumonia class and the worst Se was 0.87 for the Normal class. The best Sp was 1.0 for the Pneumonia class and the worst Sp was 0.96 for the COVID-19 class. The best Pr was 1.0 for the Pneumonia class and the worst Pr was 0.90 for the Normal class. The best F-score was 1.0 for the Pneumonia class and the worst F-score was 0.88 for the Normal class. In the 80–20% training-test ratio, all metric results of the Pneumonia class were 1.0. The Se (1.0) of the COVID-19 class outperformed the Normal class. The worst Sp (0.94) and Pr (0.90) were obtained with the COVID-19 class while the worst F-score (0.89) was obtained with the Normal class. In the 90–10% training-test ratio, all metric results were 1.0 for all classes.
In Figure 7, ROC graphs and AUC results are given for all training-test ratios. The AUC values of the Pneumonia class were 1.0 in all training-test ratios. For the 70–30% training-test and 80–20% training-test, the COVID-19 class AUC results were 0.9532 and 0.9714, respectively. For the 70–30% training-test and 80–20% training-test, the Normal class AUC results were 0.9517 and 0.9000, respectively. In the 90–10% training-test ratio, the AUC values were 1.0 for the COVID-19 and Normal classes.

7. Discussion

In Figure 8, for all training-test ratios, confusion matrix results are given to evaluate the performance of the attention strategy and LSTM structure. In Table 3, the performance metrics results are calculated using TP, TN, FP, and FN values in these confusion matrices.
As seen in Table 3, attention strategy and LSTM structure, which synchronously operated in the ACL model, improved all performance metrics for all training-test ratios. The worst performance metrics results were obtained with the CNN model (Case 1) without the attention strategy and LSTM structure. The CNN model (Case 3) with only the attention strategy outperformed the CNN model (Case 2) with the only LSTM structure. According to the models in cases 1, 2, and 3, in the 70–30% training-test ratio, the Acc scores of the ACL model (Case 4) were improved by 4%, 3%, and 2%, respectively. In the 80–20% training-test ratio, the Acc scores of the ACL model were improved by 5%, 3%, and 1%, respectively. In the 90–10% training-test ratio, the Acc scores of the ACL model were improved by 15%, 10%, and 2%, respectively. For 70–30%, 80–20%, and 90–10% training-test ratios, the classification accuracies of MCW images compared to raw images were improved by 2%, 4%, and 5%, respectively.
To interpret the performance metrics in Table 3 more clearly, the graph in Figure 9 was created from the values in Table 3.
As seen in Figure 9, the slope is positive for most performance metrics, given that a curve is fitted from Case 1 to Case 4. This means that the proposed approach improves the classification performance in these metric values. However, the slope from Case 1 to Case 4 was zero for the Sp metric in the COVID-19 class at a training-test rate of 70–30%. In other words, classification performance was not improved for this metric and class. The slope from Case 3 to Case 4 was negative for the Sp, Pr, and F-score metrics in the COVID-19 class at the 80–20% training-test ratio. The proposed approach achieved worse classification performance for the COVID-19 class on these metrics than the model in Case 3. Contributions of the MCW segmentation algorithm, attention structure, and LSTM model in the proposed approach are given in Figure A1 of Appendix A.
In Table 4, the proposed approach was compared to the state-of-the-art techniques. These existing studies are included in Table 4 for two reasons. First, these studies have been popular in the COVID-19 field. Second, other methods were added due to their high performance. Acc, Se, and Sp metrics in Table 4 were taken into consideration as they are common metrics in all studies. The bar graph in Figure 10 was created using the data in Table 4 to better examine the performance results among existing studies. It cannot be said that the proposed approach and the existing studies are completely superior to each other. This is because the COVID-19 dataset is not standardized, and training-test ratios and model training parameters are different.
Ozturk et al. [21] used a deep CNN model with an end-to-end learning strategy for automated ILD classification. This model, named the DarkCovidNet, reached an accuracy of 87.02%. As one of the first studies published in the scope of COVID-19, it can be considered a baseline model. In Ref. [9], ILD was automatically detected from chest X-ray images using an end-to-end-trained CNN architecture with numerous residual blocks; this model was more effective than the ResNet-50 and VGG-19 CNN models, achieving Acc, Se, and Sp scores of 92.64%, 91.37%, and 95.76%, respectively. In Ref. [19], the Acc, Sp, and Se metrics were used to compare transfer learning models such as MobileNet v2, VGG19, and Inception, with the MobileNet v2 model producing the best results. For automated ILD diagnosis, a SqueezeNet model trained from scratch on an enhanced dataset was suggested in Ref. [41], with hyperparameter optimization by the Bayesian approach; the highest recorded Acc, Se, and Sp values were 98.26%, 98.33%, and 99.10%, respectively. In Ref. [42], deep features were extracted from chest X-ray images with an end-to-end-trained CNN model with five convolutional layers, and an SVM classifier with a radial basis function kernel achieved an Acc of 98.97%, an Se of 89.39%, and an Sp of 99.75% in the classification stage. In Ref. [43], deep features were retrieved from the fully connected and convolutional layers of the AlexNet model; the Relief algorithm reduced a total of 10,568 deep features to 1500, and the model achieved 99.18% Acc, 99.13% Se, and 99.21% Sp. In Ref. [44], integrated features were created from the MobileNet v2 and SqueezeNet models, and the SVM classifier achieved an Acc of 99.27%, an Se of 98.33%, and an Sp of 99.69% after the hyperparameters were tuned with the Social Mimic method.
In the DeepCov19Net model [27], seven convolutional layers of the compressed CA model were used to extract deep features, and three techniques were utilized in the pre-processing (Laplacian), feature selection (SDAR), and hyperparameter tuning (Bayesian) stages to improve classification performance; the approach performed well, with an accuracy of 99.75%, a sensitivity of 99.33%, and a specificity of 99.79%. In Demir's hybrid deep learning model [24], convolutional layers and the LSTM model were merged to automatically detect ILD, and the DeepCoroNet model achieved a classification accuracy of 96.54%. For ILD classification, Ismael and Sengur [25] employed a ResNet50-based transfer learning method in which deep features were extracted from the ResNet50 model; the SVM classified the ILD samples with these deep features with an overall accuracy of 94.7%. Muralidharan et al. [26] used an innovative deep learning technique to detect ILD automatically from X-ray images: the fixed boundary-based two-dimensional empirical wavelet transform (FB2DEWT) approach decomposed the X-ray images into seven modes, and the resulting multiscale images were sent to a multiscale deep CNN to classify healthy, COVID-19, and pneumonia samples, achieving an accuracy of 96%.
The accuracy (100%) of the proposed approach is valid for a 90–10% training-test ratio. As this ratio is decreased, it has been observed that the classification performance decreases. In addition, the limitation of sample input sizes to 100 × 100 also affected the classification performance. Classification performance can be improved by increasing the input size with more powerful hardware.
The dataset used in this study was assembled from three different sources, which limits a more realistic performance comparison with existing studies. Evaluations made with samples from a single, better-organized database would allow more reliable performance comparisons.

8. Conclusions

In this study, ILD classification was performed with a powerful customized deep learning-based method. In the proposed approach, the MCW segmentation algorithm, which emphasizes the spots and traces in CX-R images of the COVID-19 class, is used so that the attention structure in the ACL model operates more efficiently. As discussed in the previous section, the attention and LSTM architectures in the ACL model increased the classification performance. The classification performance of the model was evaluated for different training-test ratios: classification accuracy reached 100% at the 90–10% ratio and exceeded 96% at the 80–20% and 70–30% ratios. The model was compared with both baseline and high-performing classification methods. Although the proposed approach compares well against these methods, claiming outright superiority would not be correct because the datasets and evaluation protocols are not the same. The classification performance was obtained with low-resolution input data of 100 × 100 pixels; with more powerful hardware, larger inputs could increase the performance further. Additionally, hyperparameter selection proved very important for classification performance in the proposed deep learning model; here, the hyperparameters were tuned empirically. In future studies, the hyperparameters of deep learning models will be tuned automatically by optimization techniques such as the Bayesian optimization algorithm.

Funding

This research received no external funding.

Data Availability Statement

The dataset used in this paper is publicly available.

Conflicts of Interest

The author declares no conflict of interest.

Appendix A

Table A1. Training option parameters of the SGDM solver.
SGDM Options | Values
Momentum | 0.9
Initial Learn Rate | 0.001
Learn Rate Schedule | 'none'
Learn Rate Drop Factor | 0.1
Learn Rate Drop Period | 10
L2 Regularization | 0.0001
Gradient Threshold Method | 'l2norm'
Gradient Threshold | Inf
Max Epochs | 5
Mini-Batch Size | 32
Verbose | 0
Verbose Frequency | 50
Validation Frequency | 30
Validation Patience | Inf
Shuffle | 'every-epoch'
Execution Environment | 'auto'
Sequence Length | 'longest'
Sequence Padding Value | 0
Sequence Padding Direction | 'right'
Dispatch In Background | 0
Reset Input Normalization | 1
Batch Normalization Statistics | 'population'
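The solver in Table A1 corresponds to the classic stochastic gradient descent with momentum (SGDM) update. The pure-Python sketch below applies one such update for a single scalar weight using the tabulated momentum (0.9), initial learn rate (0.001), and L2 regularization (0.0001); it is a didactic sketch of the update rule, not the training code used in the paper.

```python
MOMENTUM = 0.9      # Table A1: Momentum
LEARN_RATE = 0.001  # Table A1: Initial Learn Rate
L2_REG = 0.0001     # Table A1: L2 Regularization

def sgdm_step(weight, grad, velocity):
    """One SGDM update: fold L2 weight decay into the gradient,
    blend it into the momentum buffer, then move the weight."""
    grad = grad + L2_REG * weight
    velocity = MOMENTUM * velocity - LEARN_RATE * grad
    return weight + velocity, velocity

w, v = 1.0, 0.0
w, v = sgdm_step(w, grad=0.5, velocity=v)
# First step from zero velocity: v = -0.001 * (0.5 + 0.0001) = -0.0005001
```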
Figure A1. The contribution chart of the proposed methodology strategies.

References

  1. European Lung Foundation. Acute Lower Respiratory Infections. Available online: https://europeanlung.org/en/information-hub/lung-conditions/acute-lower-respiratory-infections/ (accessed on 2 March 2022).
  2. American Lung Association. Learn about Pneumonia. Available online: https://www.lung.org/lung-health-diseases/lung-disease-lookup/pneumonia/learn-about-pneumonia (accessed on 2 March 2022).
  3. Wu, F.; Zhao, S.; Yu, B.; Chen, Y.M.; Wang, W.; Song, Z.G.; Hu, Y.; Tao, Z.W.; Tian, J.H.; Pei, Y.Y.; et al. A new coronavirus associated with human respiratory disease in China. Nature 2020, 579, 265–269. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  4. World Health Organization. WHO Coronavirus (COVID-19) Dashboard. Available online: https://covid19.who.int (accessed on 7 September 2022).
  5. Chen, N.; Zhou, M.; Dong, X.; Qu, J.; Gong, F.; Han, Y.; Qiu, Y.; Wang, J.; Liu, Y.; Wei, Y.; et al. Epidemiological and clinical characteristics of 99 cases of 2019 novel coronavirus pneumonia in Wuhan, China: A descriptive study. Lancet 2020, 395, 507–513. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  6. Bernheim, A.; Mei, X.; Huang, M.; Yang, Y.; Fayad, Z.A.; Zhang, N.; Diao, K.; Lin, B.; Zhu, X.; Li, K.; et al. Chest CT findings in coronavirus disease 2019 (COVID-19): Relationship to duration of infection. Radiology 2020, 295, 685–691. [Google Scholar] [CrossRef] [Green Version]
  7. Fang, Y.; Zhang, H.; Xie, J.; Lin, M.; Ying, L.; Pang, P.; Ji, W. Sensitivity of chest CT for COVID-19: Comparison to RT-PCR. Radiology 2020, 296, E115–E117. [Google Scholar] [CrossRef]
  8. Narin, A.; Kaya, C.; Pamuk, Z. Automatic detection of coronavirus disease (COVID-19) using X-ray images and deep convolutional neural networks. Pattern Anal. Appl. 2021, 24, 1207–1220. [Google Scholar] [CrossRef]
  9. Wang, L.; Lin, Z.Q.; Wong, A. COVID-Net: A tailored deep convolutional neural network design for detection of COVID-19 cases from chest X-ray images. Sci. Rep. 2020, 10, 19549. [Google Scholar] [CrossRef] [PubMed]
  10. Tuncer, T.; Dogan, S.; Ozyurt, F. An automated Residual Exemplar Local Binary Pattern and iterative ReliefF based corona detection method using lung X-ray image. Chemom. Intell. Lab. Syst. 2020, 203, 104054. [Google Scholar] [CrossRef]
  11. Singh, V.; Sharma, N.; Singh, S. A review of imaging techniques for plant disease detection. Artif. Intell. Agric. 2020, 4, 229–242. [Google Scholar] [CrossRef]
  12. Tu, K.N.; Lie, J.D.; Wan, C.K.V.; Cameron, M.; Austel, A.G.; Nguyen, J.K.; Van, K.; Hyun, D. Osteoporosis: A review of treatment options. P&T 2018, 43, 92–104. [Google Scholar]
  13. Waks, A.G.; Winer, E.P. Breast Cancer Treatment: A Review. JAMA—J. Am. Med. Assoc. 2019, 321, 288–300. [Google Scholar] [CrossRef]
  14. Dimmeler, S. Cardiovascular disease review series. EMBO Mol. Med. 2011, 3, 697. [Google Scholar] [CrossRef] [PubMed]
  15. Okinda, C.; Nyalala, I.; Korohou, T.; Okinda, C.; Wang, J.; Achieng, T.; Wamalwa, P.; Mang, T.; Shen, M. A review on computer vision systems in monitoring of poultry: A welfare perspective. Artif. Intell. Agric. 2020, 4, 184–208. [Google Scholar] [CrossRef]
  16. Afshar, P.; Heidarian, S.; Naderkhani, F.; Oikonomou, A.; Plataniotis, K.N.; Mohammadi, A. COVID-CAPS: A capsule network-based framework for identification of COVID-19 cases from X-ray images. Pattern Recognit. Lett. 2020, 138, 638–643. [Google Scholar] [CrossRef] [PubMed]
  17. Sethy, P.K.; Behera, S.K.; Ratha, P.K.; Biswas, P. Detection of coronavirus disease (COVID-19) based on deep features and support vector machine. Int. J. Math. Eng. Manag. Sci. 2020, 5, 643–651. [Google Scholar] [CrossRef]
  18. Hemdan, E.E.-D.; Shouman, M.A.; Karar, M.E. COVIDX-Net: A Framework of Deep Learning Classifiers to Diagnose COVID-19 in X-Ray Images. arXiv 2020, arXiv:2003.11055. [Google Scholar]
  19. Apostolopoulos, I.D.; Mpesiana, T.A. Covid-19: Automatic detection from X-ray images utilizing transfer learning with convolutional neural networks. Phys. Eng. Sci. Med. 2020, 43, 635–640. [Google Scholar] [CrossRef] [Green Version]
  20. Horry, M.J.; Chakraborty, S.; Paul, M.; Ulhaq, A.; Pradhan, B.; Saha, M.; Shukla, N. COVID-19 Detection through Transfer Learning Using Multimodal Imaging Data. IEEE Access 2020, 8, 149808–149824. [Google Scholar] [CrossRef]
  21. Ozturk, T.; Talo, M.; Yildirim, E.A.; Baloglu, U.B.; Yildirim, O.; Rajendra Acharya, U. Automated detection of COVID-19 cases using deep neural networks with X-ray images. Comput. Biol. Med. 2020, 121, 103792. [Google Scholar] [CrossRef]
  22. Altan, A.; Karasu, S. Recognition of COVID-19 disease from X-ray images by hybrid model consisting of 2D curvelet transform, chaotic salp swarm algorithm and deep learning technique. Chaos Solitons Fractals 2020, 140, 110071. [Google Scholar] [CrossRef]
  23. Tsiknakis, N.; Trivizakis, E.; Vassalou, E.; Papadakis, G.; Spandidos, D.; Tsatsakis, A.; Sánchez-García, J.; López-González, R.; Papanikolaou, N.; Karantanas, A.; et al. Interpretable artificial intelligence framework for COVID-19 screening on chest X-rays. Exp. Ther. Med. 2020, 20, 727–735. [Google Scholar] [CrossRef]
  24. Demir, F. DeepCoroNet: A deep LSTM approach for automated detection of COVID-19 cases from chest X-ray images. Appl. Soft Comput. 2021, 103, 107160. [Google Scholar] [CrossRef] [PubMed]
  25. Ismael, A.M.; Şengür, A. Deep learning approaches for COVID-19 detection based on chest X-ray images. Expert Syst. Appl. 2021, 164, 114054. [Google Scholar] [CrossRef] [PubMed]
  26. Muralidharan, N.; Gupta, S.; Prusty, M.R.; Tripathy, R.K. Detection of COVID19 from X-ray images using multiscale Deep Convolutional Neural Network. Appl. Soft Comput. 2022, 119, 108610. [Google Scholar] [CrossRef] [PubMed]
  27. Demir, F.; Demir, K.; Şengür, A. DeepCov19Net: Automated COVID-19 Disease Detection with a Robust and Effective Technique Deep Learning Approach. New Gener. Comput. 2022, 40, 1053–1075. [Google Scholar] [CrossRef] [PubMed]
  28. COVID-19 Chest X-Ray Images. Available online: https://www.kaggle.com/bachrr/covid-chest-xray (accessed on 1 March 2022).
  29. COVID-19 Chest X-Ray Images (Pneumonia). Available online: https://www.kaggle.com/paultimothymooney/chest-xray-pneumonia (accessed on 1 March 2022).
  30. Wang, X.; Peng, Y.; Lu, L.; Lu, Z.; Bagheri, M.; Summers, R.M. ChestX-ray: Hospital-Scale Chest X-ray Database and Benchmarks on Weakly Supervised Classification and Localization of Common Thorax Diseases. In Deep Learning and Convolutional Neural Networks for Medical Imaging and Clinical Informatics; Advances in Computer Vision and Pattern Recognition; Springer: Cham, Switzerland, 2019; pp. 369–392. [Google Scholar]
  31. Wang, L.; Gong, P.; Biging, G.S. Individual tree-crown delineation and treetop detection in high-spatial-resolution aerial imagery. Photogramm. Eng. Remote Sensing 2004, 70, 351–357. [Google Scholar] [CrossRef] [Green Version]
  32. Huang, H.; Li, X.; Chen, C. Individual tree crown detection and delineation from very-high-resolution UAV images based on bias field and marker-controlled watershed segmentation algorithms. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2018, 11, 2253–2262. [Google Scholar] [CrossRef]
  33. Alakwaa, W.; Nassef, M.; Badr, A. Lung cancer detection and classification with 3D convolutional neural network (3D-CNN). Int. J. Biol. Biomed. Eng. 2017, 11, 66–73. [Google Scholar] [CrossRef] [Green Version]
  34. Demir, F.; Siddique, K.; Alswaitti, M.; Demir, K.; Sengur, A. A Simple and Effective Approach Based on a Multi-Level Feature Selection for Automated Parkinson’s Disease Detection. J. Pers. Med. 2022, 12, 55. [Google Scholar] [CrossRef]
  35. Demir, F.; Akbulut, Y. A new deep technique using R-CNN model and L1NSR feature selection for brain MRI classification. Biomed. Signal Process. Control 2022, 75, 103625. [Google Scholar] [CrossRef]
  36. Petmezas, G.; Haris, K.; Stefanopoulos, L.; Kilintzis, V.; Tzavelis, A.; Rogers, J.A.; Katsaggelos, A.K.; Maglaveras, N. Automated Atrial Fibrillation Detection using a Hybrid CNN-LSTM Network on Imbalanced ECG Datasets. Biomed. Signal Process. Control 2021, 63, 102194. [Google Scholar] [CrossRef]
  37. Demir, F. DeepBreastNet: A novel and robust approach for automated breast cancer detection from histopathological images. Biocybern. Biomed. Eng. 2021, 41, 1123–1139. [Google Scholar] [CrossRef]
  38. Demir, F.; Tașcı, B. An Effective and Robust Approach Based on R-CNN+LSTM Model and NCAR Feature Selection for Ophthalmological Disease Detection from Fundus Images. J. Pers. Med. 2021, 11, 1276. [Google Scholar] [CrossRef] [PubMed]
  39. Srivastava, N.; Hinton, G.; Krizhevsky, A.; Sutskever, I.; Salakhutdinov, R. Dropout: A simple way to prevent neural networks from overfitting. J. Mach. Learn. Res. 2014, 15, 1929–1958. [Google Scholar]
  40. Atila, O.; Şengür, A. Attention guided 3D CNN-LSTM model for accurate speech based emotion recognition. Appl. Acoust. 2021, 182, 108260. [Google Scholar] [CrossRef]
  41. Ucar, F.; Korkmaz, D. COVIDiagnosis-Net: Deep Bayes-SqueezeNet based diagnosis of the coronavirus disease 2019 (COVID-19) from X-ray images. Med. Hypotheses 2020, 140, 109761. [Google Scholar] [CrossRef]
  42. Nour, M.; Cömert, Z.; Polat, K. A Novel Medical Diagnosis model for COVID-19 infection detection based on Deep Features and Bayesian Optimization. Appl. Soft Comput. 2020, 97, 106580. [Google Scholar] [CrossRef] [PubMed]
  43. Turkoglu, M. COVIDetectioNet: COVID-19 diagnosis system based on X-ray images using features selected from pre-learned deep features ensemble. Appl. Intell. 2021, 51, 1213–1226. [Google Scholar] [CrossRef]
  44. Toğaçar, M.; Ergen, B.; Cömert, Z. COVID-19 detection using deep learning models to exploit Social Mimic Optimization and structured chest X-ray images using fuzzy color and stacking approaches. Comput. Biol. Med. 2020, 121, 103805. [Google Scholar] [CrossRef]
Figure 1. The samples of the dataset for each class: (a) Normal, (b) COVID-19, and (c) Pneumonia.
Figure 2. The framework of the proposed approach.
Figure 3. The framework of attention strategy.
Figure 4. The representation of the proposed ACL model.
Figure 5. The accuracy and loss graphs for different training-test ratios of the proposed ACL model.
Figure 6. The confusion matrices for different training-test ratios of the proposed ACL model. The highest true and false values have a darker background.
Figure 7. The ROC curves and AUC values for different training-test ratios of the proposed ACL model.
Figure 8. The confusion matrices show the effects of Attention and LSTM structures on the proposed CNN model. The highest true and false values have a darker background.
Figure 9. The graphical analysis of performance metrics given in Table 3 for all training-test ratios.
Figure 10. The bar graph obtained from Table 4 [9,19,21,24,25,26,27,41,42,43,44].
Table 1. The architecture information of the proposed ACL model.
Layer # | Layer Name | Layer | Layer Info
1 | input | Sequence Input | Sequence input with 100 × 100 × 3 dimensions
2 | fold | Sequence Folding | Sequence folding
3 | convInp2d_1 | Convolution | 16 3 × 3 × 3 convolutions with stride: 1 and padding: 0
4 | batchnorm_1 | BN | BN with 16 channels
5 | relu1 | ReLU | ReLU
6 | maxpool2d_1 | Max Pooling | 3 × 3 max pooling with stride: 1 and padding: 0
7 | convInp2d_2 | Convolution | 16 3 × 3 × 16 convolutions with stride: 1 and padding: 0
8 | batchnorm_2 | BN | BN with 16 channels
9 | relu2 | ReLU | ReLU
10 | maxpool2d2 | Max Pooling | 3 × 3 max pooling with stride: 1 and padding: 0
11 | convInp2d_3 | Convolution | 16 3 × 3 × 16 convolutions with stride: 1 and padding: 0
12 | relu_2_1_1_5 | ReLU | ReLU
13 | convInp2d_4 | Convolution | 16 3 × 3 × 16 convolutions with stride: 1 and padding: 0
14 | maxpool2d_2 | Max Pooling | 3 × 3 max pooling with stride: 1 and padding: 0
15 | convInp2d_5 | Convolution | 16 3 × 3 × 16 convolutions with stride: 1 and padding: 0
16 | relu_2_1_1_6 | ReLU | ReLU
17 | convInp2d_6 | Convolution | 16 3 × 3 × 16 convolutions with stride: 1 and padding: 0
18 | sigmoid_1_1_1_3 | sigmoidLayer | sigmoidLayer
19 | mul_1_1_1_3 | ElementWiseMultiplication | Element-wise multiplication of 2 inputs
20 | unfold | Sequence Unfolding | Sequence unfolding
21 | flatten | Flatten | Flatten
22 | lstm | LSTM | LSTM with 100 hidden units
23 | fc0 | FC | 100 FC layer
24 | ReLu2 | ReLU | ReLU
25 | drop1 | Dropout | 40% dropout
26 | fc1 | FC | 3 FC layer
27 | softmax | Softmax | softmax
28 | class output | Classification Output | crossentropyex
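Every convolution and max-pooling layer in Table 1 uses a 3 × 3 kernel with stride 1 and padding 0, so each such layer trims 2 pixels from the spatial size. The sketch below traces the 100 × 100 input through nine size-reducing layers, which is how I read the main path of the table (layers 3, 6, 7, 10, 11, 13, 14, 15, and 17); this layer count is an interpretation, since the table does not show the branch topology explicitly.

```python
def out_size(in_size, kernel=3, stride=1, padding=0):
    # Standard convolution/pooling output-size formula.
    return (in_size + 2 * padding - kernel) // stride + 1

# Nine 3 x 3, stride-1, padding-0 conv/pool layers, per Table 1.
n_reducing_layers = 9
size = 100
for _ in range(n_reducing_layers):
    size = out_size(size)
print(size)  # 100 - 9 * 2 = 82
```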
Table 2. The performance metrics for different training-test ratios of the proposed ACL model.
Training-Test Ratios | Classes | Se | Sp | Pr | F-score
70–30% | COVID-19 | 0.94 | 0.96 | 0.93 | 0.94
 | Normal | 0.87 | 0.98 | 0.90 | 0.88
 | Pneumonia | 1.0 | 1.0 | 1.0 | 1.0
80–20% | COVID-19 | 1.0 | 0.94 | 0.90 | 0.95
 | Normal | 0.80 | 1.0 | 1.0 | 0.89
 | Pneumonia | 1.0 | 1.0 | 1.0 | 1.0
90–10% | COVID-19 | 1.0 | 1.0 | 1.0 | 1.0
 | Normal | 1.0 | 1.0 | 1.0 | 1.0
 | Pneumonia | 1.0 | 1.0 | 1.0 | 1.0
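The Se, Sp, Pr, and F-score columns of Table 2 follow the usual one-vs-rest definitions computed from a confusion matrix. The helper below reproduces those definitions; the 3-class matrix at the bottom is a made-up illustration, not the paper's actual test counts.

```python
def per_class_metrics(cm, cls):
    """One-vs-rest sensitivity, specificity, precision, and F-score
    for class index `cls` of a square confusion matrix cm[true][pred]."""
    n = len(cm)
    tp = cm[cls][cls]
    fn = sum(cm[cls][j] for j in range(n) if j != cls)
    fp = sum(cm[i][cls] for i in range(n) if i != cls)
    tn = sum(cm[i][j] for i in range(n) for j in range(n)
             if i != cls and j != cls)
    se = tp / (tp + fn)            # sensitivity (recall)
    sp = tn / (tn + fp)            # specificity
    pr = tp / (tp + fp)            # precision
    f1 = 2 * pr * se / (pr + se)   # F-score
    return se, sp, pr, f1

# Illustrative matrix; rows = true class, columns = predicted class,
# ordered COVID-19, Normal, Pneumonia.
cm = [[45, 3, 2],
      [4, 40, 1],
      [0, 0, 50]]
se, sp, pr, f1 = per_class_metrics(cm, 0)  # metrics for COVID-19
```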
Table 3. The performance scores that show the effects of Attention and LSTM structures on the proposed CNN model.
Case # | Models | Training-Test Ratios | Classes | Se | Sp | Pr | F-score | Acc
1 | No Attention and LSTM Structures | 70–30% | COVID-19 | 0.91 | 0.96 | 0.92 | 0.91 | 0.92
 |  |  | Normal | 0.82 | 0.97 | 0.86 | 0.84
 |  |  | Pneumonia | 0.98 | 0.95 | 0.95 | 0.97
 |  | 80–20% | COVID-19 | 0.93 | 0.93 | 0.88 | 0.91 | 0.91
 |  |  | Normal | 0.75 | 0.96 | 0.83 | 0.79
 |  |  | Pneumonia | 0.96 | 0.96 | 0.96 | 0.96
 |  | 90–10% | COVID-19 | 0.83 | 0.91 | 0.83 | 0.83 | 0.85
 |  |  | Normal | 0.75 | 0.91 | 0.68 | 0.71
 |  |  | Pneumonia | 0.9 | 0.94 | 0.94 | 0.92
2 | Only LSTM Structure | 70–30% | COVID-19 | 0.93 | 0.96 | 0.92 | 0.92 | 0.94
 |  |  | Normal | 0.83 | 0.98 | 0.91 | 0.87
 |  |  | Pneumonia | 0.99 | 0.96 | 0.96 | 0.97
 |  | 80–20% | COVID-19 | 0.96 | 0.93 | 0.88 | 0.92 | 0.93
 |  |  | Normal | 0.75 | 0.98 | 0.91 | 0.82
 |  |  | Pneumonia | 0.98 | 0.97 | 0.97 | 0.98
 |  | 90–10% | COVID-19 | 0.89 | 0.95 | 0.91 | 0.90 | 0.90
 |  |  | Normal | 0.85 | 0.94 | 0.77 | 0.81
 |  |  | Pneumonia | 0.92 | 0.94 | 0.94 | 0.93
3 | Only Attention Structure | 70–30% | COVID-19 | 0.93 | 0.96 | 0.93 | 0.93 | 0.95
 |  |  | Normal | 0.87 | 0.98 | 0.91 | 0.89
 |  |  | Pneumonia | 0.99 | 0.97 | 0.97 | 0.98
 |  | 80–20% | COVID-19 | 0.99 | 1.0 | 1.0 | 0.99 | 0.95
 |  |  | Normal | 0.78 | 1.0 | 1.0 | 0.87
 |  |  | Pneumonia | 1.0 | 0.97 | 0.97 | 0.99
 |  | 90–10% | COVID-19 | 0.97 | 0.99 | 0.97 | 0.97 | 0.98
 |  |  | Normal | 0.95 | 0.99 | 0.95 | 0.95
 |  |  | Pneumonia | 1.0 | 1.0 | 1.0 | 1.0
4 | Proposed Approach | 70–30% | COVID-19 | 0.94 | 0.96 | 0.93 | 0.94 | 0.96
 |  |  | Normal | 0.87 | 0.98 | 0.90 | 0.88
 |  |  | Pneumonia | 1.0 | 1.0 | 1.0 | 1.0
 |  | 80–20% | COVID-19 | 1.0 | 0.94 | 0.90 | 0.95 | 0.96
 |  |  | Normal | 0.80 | 1.0 | 1.0 | 0.89
 |  |  | Pneumonia | 1.0 | 1.0 | 1.0 | 1.0
 |  | 90–10% | COVID-19 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0
 |  |  | Normal | 1.0 | 1.0 | 1.0 | 1.0
 |  |  | Pneumonia | 1.0 | 1.0 | 1.0 | 1.0
Table 4. The comparison of the proposed approach compared to existing methodologies.
Authors | Methods | Dataset | Classes # | Acc (%) | Se (%) | Sp (%)
Ozturk et al. [21] | DarkCovidNet | Public | 3 | 87.02 | 92.18 | 89.96
Wang et al. [9] | COVID-Net | Public | 3 | 92.64 | 91.37 | 95.76
Apostolopoulos et al. [19] | The pre-trained CNNs | Public | 3 | 96.78 | 98.66 | 96.46
Ucar and Korkmaz [41] | COVIDiagnosis-Net | Public | 3 | 98.26 | 98.33 | 99.10
Nour et al. [42] | Deep CNN, SVM | Public | 3 | 98.97 | 89.39 | 99.75
Turkoglu [43] | AlexNet, Feature Selection, SVM | Public | 3 | 99.18 | 99.13 | 99.21
Toğaçar et al. [44] | Deep features, SqueezeNet, SVM | Public | 3 | 99.27 | 98.33 | 99.69
Demir et al. [27] | DeepCov19Net | Public | 3 | 99.75 | 99.33 | 99.79
Demir [24] | DeepCoroNet | Public | 3 | 100.00 | 100.00 | 100.00
Ismael and Şengür [25] | ResNet50 Features + SVM | Public | 2 | 94.74 | 91.00 | 98.89
Muralidharan et al. [26] | FB2DEWT + CNN | Public | 3 | 96.00 | 96.00 | 96.00
Proposed Method | Processed images, ACL model | Public | 3 | 100.00 | 100.00 | 100.00
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

MDPI and ACS Style

Akbulut, Y. Automated Pneumonia Based Lung Diseases Classification with Robust Technique Based on a Customized Deep Learning Approach. Diagnostics 2023, 13, 260. https://doi.org/10.3390/diagnostics13020260


Note that from the first issue of 2016, this journal uses article numbers instead of page numbers.
