Article

RiIG Modeled WCP Image-Based CNN Architecture and Feature-Based Approach in Breast Tumor Classification from B-Mode Ultrasound

by Shahriar Mahmud Kabir 1,2,*, Mohammed I. H. Bhuiyan 2, Md Sayed Tanveer 1 and ASM Shihavuddin 1

1 Department of Electrical and Electronic Engineering, Green University of Bangladesh, Dhaka 1207, Bangladesh
2 Department of Electrical and Electronic Engineering, Bangladesh University of Engineering and Technology, Dhaka 1000, Bangladesh
* Author to whom correspondence should be addressed.
Appl. Sci. 2021, 11(24), 12138; https://doi.org/10.3390/app112412138
Submission received: 10 October 2021 / Revised: 29 November 2021 / Accepted: 3 December 2021 / Published: 20 December 2021

Abstract:
This study presents two new approaches based on Weighted Contourlet Parametric (WCP) images for the classification of breast tumors from B-mode ultrasound images. The Rician Inverse Gaussian (RiIG) distribution is considered for modeling the statistics of ultrasound images in the Contourlet transform domain. The WCP images are obtained by weighting the RiIG modeled Contourlet sub-band coefficient images. In the feature-based approach, various geometrical, statistical, and texture features are shown to have low ANOVA p-value, thus indicating a good capacity for class discrimination. Using three publicly available datasets (Mendeley, UDIAT, and BUSI), it is shown that the classical feature-based approach can yield more than 97% accuracy across the datasets for breast tumor classification using WCP images while the custom-made convolutional neural network (CNN) can deliver more than 98% accuracy, sensitivity, specificity, NPV, and PPV values utilizing the same WCP images. Both methods provide superior classification performance, better than those of several existing techniques on the same datasets.

1. Introduction

Breast cancer in women is an important health problem in both developed and developing countries. A recent report by the Cancer Statistics Center of the American Cancer Society estimates 1,806,590 new cancer cases in the United States in 2020, of which breast cancer accounts for around 279,100 (approximately 15% of all new cases). The same report anticipates around 606,520 cancer deaths in the United States in 2020 [1].
Breast ultrasound (US) imaging is one of the most promising tools for detecting and classifying breast tumors among imaging techniques such as mammography and MRI. Ultrasonic images are constructed by transmitting pulses of ultrasound into human tissue using a probe. The pulses echo off body tissues with different reflection properties, and the echoes are recorded and displayed as an image. The B-mode, or brightness mode, image shows the acoustic impedance of a two-dimensional cross-section of tissue.
Plenty of studies have been carried out, and more are still ongoing, to achieve higher accuracy in automatically differentiating malignant breast tumors from benign ones. In 2002, K. Horsch et al. [2] used the depth-to-width ratio of the lesion region, the normalized radial gradient, the autocorrelation in depth of the lesion region, and the minimum side difference of the lesion boundary for the detection of breast tumors. In 2007, Wei-Chih Shen et al. [3] presented a computer-aided diagnostic (CAD) system using a few geometric features such as shape, orientation, margin, lesion boundary, echo pattern, and posterior acoustic features, and reported an accuracy of 91.7%. However, in their work, the segmentation of lesions from normal breast tissue was performed both manually and automatically, which is difficult to scale to large sets of US images. In recent years, multi-resolution transform domain-based methods using US images have shown greater promise in automatic breast tumor classification. In 2017, Sharmin R. Ara et al. [4] employed an empirical mode decomposition (EMD) method with the discrete wavelet transform (DWT), followed by a wrapper algorithm, to obtain a set of non-redundant features for classifying breast tumors and reported an accuracy of 98.01% on their database. Unfortunately, the traditional DWT provides limited directional information, with the directions restricted to horizontal, vertical, and diagonal. In 2019, P. Acevedo et al. [5] used a gray-level co-occurrence matrix (GLCM) algorithm with a linear SVM to classify benign and malignant tumors. Eltoukhy et al. [6] presented a comparative study between two multi-resolution transform domain-based techniques, namely wavelet and curvelet, for breast tumor diagnosis in digital mammogram images. The Contourlet transform, another multi-resolution transform domain-based technique [7], provides more directional information, with the number of directional decompositions increasing with the pyramidal decomposition level. It has also been shown to be a better descriptor of arbitrary shapes and contours than the wavelet transform. Contourlet-based mammography mass classification is reported in [8,9].
It is to be noted that there are many concerns about performing mammography, which utilizes low-energy X-ray radiation, for regular checkups. Moreover, owing to its limited specificity, mammography leads many women to undergo unnecessary breast biopsies; about 65–85% of the biopsies prompted by mammographic screening turn out to be benign [10]. These avoidable biopsies impose emotional, physical, and financial burdens on patients. For that reason, researchers have recently been putting their efforts into relatively safer approaches such as ultrasonography and elastography. In [11], the Contourlet transform is employed on ultrasound shear-wave elastography (SWE) images, where Contourlet-based texture features were used with a Fisher classifier, and an accuracy of 92.5% was reported. The Contourlet transform was also employed in [12] on B-mode US, shear-wave elastography (SWE), and contrast-enhanced US (CEUS) images, with reported accuracies of 67.57%, 81.08%, and 75%, respectively. Neither the DWT nor the curvelet transform is capable of providing as wide a variety of directions, or as good two-dimensional directional selectivity, as the Contourlet transform.
Rather than extracting various features directly from the original B-mode images, many researchers have used statistical models such as the Gaussian or Nakagami distributions to create parametric images [13,14], with satisfactory results. The primary inspiration behind these statistical methods is to mathematically model the scattering of sound waves through the tissues, which can provide more insight into the system and thus more accurate features. Moreover, statistical modeling can characterize false positives (FP) and false negatives (FN) more precisely than spatial-domain visual inspection of ultrasound images. Ming-Chih Ho et al. [15] used Nakagami modeling to detect liver fibrosis in rats, which, although a different problem from breast tumor classification, does provide some validation of the usefulness of parametric imaging. A recent trend in this field is the application of deep learning-based neural networks such as CNNs as a potential tool for the automated analysis of different types of medical images, allowing the easy and robust diagnosis of various medical conditions. Unlike traditional feature engineering-based techniques, whose accuracies depend on the robustness of the feature extraction algorithms, deep neural networks allow the implementation of extremely efficient and highly accurate automated medical tools, especially for the automated classification of breast tumors [16], provided enough data and computational resources are available. Zhou et al. [17] applied CNN and morphology information extraction methods on shear-wave elastography data for breast tumor classification. Zeimarani et al. [18] also employed a CNN for breast tumor classification, but applied it directly on breast ultrasound images. Singh et al. [19] used a generative adversarial network (GAN) along with a CNN for breast tumor segmentation and classification using ultrasound images, with satisfactory outcomes. Shivabalan et al. [20] used a simple neural network that is cheap and easy to use and obtained a satisfactory result on a small online dataset. Hou et al. [21] proposed an on-device AI pre-trained neural network model that can train the CNN classifier on a portable device without a cloud-based server. Shin et al. [22] illustrated a neural network with faster R-CNN and ResNet-101. Byra et al. [23] presented a method of US to RGB conversion and fine-tuning using back-propagation. Qi et al. [24] illustrated a novel approach of deep CNNs with multi-scale kernels and skip connections. However, these deep neural network methods do not take the statistical properties of the images into account.
In this work, the Rician Inverse Gaussian (RiIG) distribution [25] is shown to be highly suitable for modeling the statistics of the Contourlet coefficient images. It is shown that features (statistical, geometrical, and texture-based) extracted from the RiIG parametric images provide higher accuracy for breast tumor classification than features extracted from US B-mode images. Parametric (P) images are obtained by replacing each pixel with the RiIG parameter (δ) estimated over a local neighborhood of the corresponding pixel, with the estimate assigned to the pixel at the center of that neighborhood. Thus, the pixel values are mapped (δ-map) into parameter values, which yields the parametric image. To further incorporate the statistical characteristics into classification, WCP images are introduced. The WCP images are constructed by multiplying the Contourlet Parametric (CP) images (i.e., parametric images obtained from the Contourlet coefficients of images) with their corresponding Contourlet transformed coefficient images. The term “weighted” is used because all the parameter values of the CP images are weighted by multiplication with their corresponding Contourlet coefficient images. In our work, the WCP images are utilized for breast tumor classification in both a feature extraction-based approach and a convolutional neural network (CNN) based approach. In both approaches, the extracted features are subjected to various classifiers such as the support vector machine (SVM), k-nearest neighbors (KNN), fitted binary classification decision tree (BCT), fitted error-correcting output codes (ECOC) model, binary Gaussian kernel classification model (BGKC), linear classification model for two-class (binary) learning with high-dimensional data (BLHD), and the fitted ensemble of learners for classification (ELC).
From the results, it is shown that the features extracted from the WCP images provide the highest accuracy compared to the original US B-mode images, parametric (P) images, Contourlet transformed images, and CP images. It is to be noted that this work is the first to investigate the effectiveness of WCP images for breast tumor classification. For the CNN-based approach, the WCP images of six Contourlet sub-band coefficients are concatenated to form a six-channel 3D stack image and then fed to the neural network, with the same seven classifiers applied on the output side. A new neural network architecture is proposed instead of using available pre-trained networks, since pre-trained networks are built for 1-channel or 3-channel visual images with spatial dimensions and are thus not compatible with our 6-channel 3D stack of transform domain Contourlet sub-band coefficients. The performance of these classifiers is tested on three datasets of US images for breast tumor classification and compared with existing methods.
The main contributions of this work are listed below:
  • This paper demonstrates the suitability of the Rician Inverse Gaussian (RiIG) distribution [25] for statistical modeling of Contourlet transformed breast ultrasound images. Further, it shows that the RiIG distribution is better than the well-known Nakagami distribution at capturing the statistics of Contourlet transformed breast ultrasound images for breast tumor classification.
  • The suitability of WCP images for classifying breast tumors is investigated for the first time, employing three different publicly available datasets consisting of 1193 B-mode ultrasound images, and it is shown that a very high degree of accuracy can be obtained in breast tumor classification using traditional machine-learning-based classifiers as well as deep convolutional neural networks (CNN).
  • A new deep CNN architecture is proposed for the classification of breast tumors based on RiIG modeled WCP images for the first time. It is also shown that the CNN architecture is superior in efficacy to the classical feature-based method.

2. Materials and Methods

2.1. Datasets

A total of 996 clinical cases in 1060 US images are used in this study; 250 images are from Database-I (Mendeley dataset), 163 from Database-II (UDIAT dataset), and 647 from Database-III (BUSI dataset). Database-I was contributed by Rodrigues [26] and is available at https://data.mendeley.com/datasets/wmy84gzngw/1 (accessed on 6 January 2018). It contains 250 US images, of which 100 are fibroadenoma (benign) and 150 are malignant cases; all images are stored in *.bmp format. Database-II consists of 163 US images stored in *.png format, available at http://www2.docm.mmu.ac.uk/STAFF/m.yap/dataset.php (accessed on 7 January 2018) [27]. In this database, the lesion regions (i.e., tumor contours) of the 163 clinical cases were identified by a radiologist and stored as binary images in a separate folder, while the B-mode US images were stored in another folder. The pathological findings of these 163 lesions are categorized as fibroadenoma (FA), invasive ductal carcinoma (IDC), ductal carcinoma in situ (DCIS), papilloma (PAP), unknown (UNK), lymph node (LN), lymphoma (LP), etc. Among them, 110 are benign and 53 are malignant cases. Database-III consists of 780 US images stored in *.png format, available at https://scholar.cu.edu.eg/?q=afahmy/pages/dataset (accessed on 28 February 2021) [28]. This database contains baseline breast ultrasound images of 600 female patients between 25 and 75 years old. The dataset consists of 780 images, of which 437 are benign, 210 are malignant, and 133 are normal cases; binary mask images are provided with the corresponding B-mode images. The details of the three datasets are provided in Table 1. For classification purposes, only the benign and malignant cases (i.e., 647 of the 780 images) from this database are considered in this study. Deep neural networks in general require a large number of training samples. After augmentation, each of the three databases consisted of 1000 benign and 1000 malignant cases. Only translational augmentation of [−11, +11] pixels in both directions was performed on the base images, as any rotation or scaling would corrupt the size- and orientation-based features. The overall number of augmented images was thus 6000, with 2000 images per database. The primary motivations behind augmenting the data to equal numbers of benign and malignant cases were to increase the number of samples available for training the neural network and to remove the class imbalance. The images in the datasets had already been pre-processed (i.e., speckle reduction, edge enhancement, compressed dynamic range, persistence, etc.), as is typical of clinical scanner outputs; therefore, no further pre-processing is needed to remove noise, artifacts, or anomalies. The necessary steps for preparing the images for the feature-based approach and the CNN-based approach are described in the following sub-sections.
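As an illustration of the augmentation step, the following Python sketch generates randomly translated copies of a B-mode image within the ±11 pixel range described above; the function name, the number of copies per image, and the 'nearest' border padding are our assumptions for illustration, not details specified in the paper.

```python
import numpy as np
from scipy.ndimage import shift

def translate_augment(image, max_shift=11, n_copies=4, seed=0):
    """Return randomly translated copies of a 2-D B-mode image.

    Shifts are drawn uniformly from [-max_shift, +max_shift] pixels in both
    the row and column directions, mirroring the translational augmentation
    described above.
    """
    rng = np.random.default_rng(seed)
    copies = []
    for _ in range(n_copies):
        dy, dx = rng.integers(-max_shift, max_shift + 1, size=2)
        # 'nearest' border padding avoids introducing artificial dark edges
        copies.append(shift(image, (dy, dx), mode='nearest'))
    return copies
```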

2.1.1. Normalization

The normalization is performed on each image using the formula z = (x − μ)/σ to bring the pixel values to zero mean and unit variance, where x and z represent the image pixels before and after normalization, respectively, and μ and σ denote the mean and the standard deviation of the pixel values. The pixel values are then clipped to the range [−3, 3]. The range of −3 to 3 is retained because a few features, such as heterogeneity, are computed using those negative pixel values. Through this process, pixel intensities lying too far from the mean are treated as anomalies and clipped, as shown in Figure 1.
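A minimal Python sketch of this normalization and clipping step is given below; the function name is ours, and the per-image mean and standard deviation are assumed to be computed over all pixels of the image.

```python
import numpy as np

def normalize_bmode(x):
    """Zero-mean, unit-variance normalization followed by clipping to [-3, 3]."""
    x = x.astype(np.float64)
    z = (x - x.mean()) / x.std()
    return np.clip(z, -3.0, 3.0)
```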

2.1.2. Region of Interest (ROI) Segmentation

The B-mode images stored in database-I, database-II, and database-III come in various sizes, the highest resolution being 600 × 600 pixels. However, almost 50% of these images contain a large amount of shadowing. Therefore, a shadow reduction operation is performed using adaptive median filtering to minimize the unwanted portion of the image and allow smooth detection of the region of interest (ROI), as described in [29]. It should be noted that almost all the background information and the lesion size must be preserved, so that the shadow reduction operation does not suppress any features. Next, the lesion boundary (ROI) is outlined. This process requires a binary input image, specified as a 2-D logical or numeric matrix; for that, the normalized image is binarized using the MATLAB function ‘imbinarize’. Images after binarization and ROI segmentation are shown in Figure 2. The lesion boundary is outlined automatically using the MATLAB functions ‘bwboundaries’ and ‘visboundaries’, which implement the Moore–Neighbor tracing algorithm modified by Jacob’s stopping criteria [30]. The nonzero pixels of the binary image belong to an object, and the zero-valued pixels constitute the background.
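The following Python sketch is a rough analogue of the MATLAB imbinarize/bwboundaries pipeline described above, using scikit-image; Otsu thresholding and the contour-tracing routine are stand-ins chosen for illustration and are our assumptions, not the exact functions used in the paper.

```python
import numpy as np
from skimage.filters import threshold_otsu
from skimage.measure import find_contours

def outline_lesion(normalized_img):
    """Binarize a normalized B-mode image and trace object boundaries.

    Nonzero (True) pixels of the binary image belong to objects, zero (False)
    pixels to the background, mirroring the MATLAB convention above.
    """
    binary = normalized_img > threshold_otsu(normalized_img)
    # each contour is an (N, 2) array of (row, col) boundary coordinates
    contours = find_contours(binary.astype(float), level=0.5)
    return binary, contours
```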

2.1.3. Contourlet Transform

The traditional Discrete Wavelet Transform (DWT) provides limited directional information, restricted to the horizontal, vertical, and diagonal directions. The Contourlet transform, in contrast, can represent arbitrary shapes and contours and is not limited to three directions. The Contourlet transform is applied to the normalized B-mode images; it decouples the multiscale and the directional decompositions using a filter bank [7].
The conceptual theme of the Contourlet transform is this decoupling: a multiscale decomposition, executed as a pyramidal decomposition by a Laplacian pyramid, is followed by a directional decomposition performed by a directional filter bank. Fundamentally, the Contourlet transform is constructed by grouping nearby wavelet coefficients, since they are locally correlated, to ensure the smoothness of contours. A sparse expansion is therefore obtained for natural images by first applying a multi-scale transform and then a local directional transform that gathers the nearby basis functions at the same scale into linear structures. Thus, it establishes a wavelet-like transform for edge detection followed by a local directional transform for contour segment detection. The overall result resembles an image expansion using basis elements that are much like contour segments, hence the name Contourlets. A performance comparison of the DWT and the Contourlet transform as descriptors of contour segments is shown in Figure 3. It is observed that the DWT performs contour detection along only three directions, and the detection becomes fainter as the decomposition level increases, whereas the Contourlet transform detects contours across as many as 32 directions, and the detection becomes smoother as the pyramidal decomposition level increases. In the DWT coefficient images the tumor shadowing effect is not visualized, whereas in the Contourlet transformed coefficient images it is. Moreover, as reported in [7], the Contourlet transform provides a better description of arbitrary shapes and contours and more directional information. In addition, the directional decomposition levels contain a variable number of directions, and the number of directional sub-bands increases with the pyramidal decomposition level.
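To make the decoupling concrete, the sketch below implements only the first, multiscale half of the transform (a Laplacian pyramid); the subsequent directional filter bank applied to each bandpass image (e.g., via Do and Vetterli's Contourlet toolbox [7]) is not reproduced here, and the Gaussian filter width and interpolation order are our assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, zoom

def laplacian_pyramid(img, levels=4, sigma=1.0):
    """Multiscale (Laplacian pyramid) stage of the decoupled Contourlet scheme.

    Returns the bandpass (detail) image of each pyramidal level plus the
    coarsest approximation; a directional filter bank would then be applied
    to each bandpass image to obtain the directional sub-bands.
    """
    bandpass, current = [], np.asarray(img, dtype=np.float64)
    for _ in range(levels):
        low = gaussian_filter(current, sigma)         # lowpass filtering
        down = low[::2, ::2]                           # coarse approximation
        up = zoom(down, 2.0, order=1)[:current.shape[0], :current.shape[1]]
        bandpass.append(current - up)                  # bandpass (detail) image
        current = down
    return bandpass, current
```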

2.1.4. Contourlet Parametric (CP) Image

The Rician Inverse Gaussian (RiIG) distribution was proposed by Eltoft [25]. It is a mixture of the Rician and the Inverse Gaussian distributions. The PDF of the RiIG distribution is given by
$$P_{RiIG}(r) = \sqrt{\frac{2}{\pi}}\,\alpha^{3/2}\,\delta\,e^{\delta\gamma}\;\frac{r}{\left(\delta^{2}+r^{2}\right)^{3/4}}\;K_{3/2}\!\left(\alpha\sqrt{\delta^{2}+r^{2}}\right)I_{0}(\beta r)$$
where α, β, and δ are the three parameters of this PDF: α controls the steepness of the distribution; β regulates the skewness (β < 0 indicates skew to the left and β > 0 skew to the right); and δ is a dispersion parameter similar to the variance of the Gaussian distribution. The symbol r denotes the amplitude of the image being modeled by the RiIG distribution. Moreover, $\gamma = \sqrt{\alpha^{2}-\beta^{2}}$, $I_{0}(\cdot)$ is the modified Bessel function of the first kind (order zero), and $K_{3/2}(\cdot)$ is the modified Bessel function of the second kind (order 3/2). A few realizations of the RiIG PDF for selected values of the parameters are shown in Figure 4.
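For reference, the RiIG PDF above can be evaluated numerically as in the following sketch (valid for α > |β| and r ≥ 0); this SciPy-based implementation is ours, not from [25].

```python
import numpy as np
from scipy.special import kv, i0

def riig_pdf(r, alpha, beta, delta):
    """Evaluate the RiIG PDF given above for amplitudes r >= 0 (requires alpha > |beta|)."""
    gamma = np.sqrt(alpha**2 - beta**2)
    r = np.asarray(r, dtype=np.float64)
    s = np.sqrt(delta**2 + r**2)
    return (np.sqrt(2.0 / np.pi) * alpha**1.5 * delta * np.exp(delta * gamma)
            * r * s**-1.5                      # (delta^2 + r^2)^(-3/4)
            * kv(1.5, alpha * s)               # modified Bessel, second kind
            * i0(beta * r))                    # modified Bessel, first kind
```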
The Contourlet Parametric (CP) image is constructed from the RiIG parameter (δ) map, which is obtained by passing a square sliding window over the Contourlet coefficient image. This process is described in [14], where the authors used it to construct Nakagami parametric images, with the image parameter calculated for each window position. It should be noted that in [14,31,32] the parametric images are obtained in the spatial domain, whereas here they are generated in the Contourlet transform domain. Results from previous studies suggest that the most appropriate sliding window for constructing a parametric image is a square with a side length equal to three times the pulse length of the incident ultrasound. In this study, the parametric imaging employs a 13 × 13 pixel sliding window within the Contourlet sub-band coefficient image to estimate each local RiIG parameter (δ). The window size should be larger than the speckle while still resolving variations of the local structure in tumors. The window is moved through the entire Contourlet sub-band coefficient image in steps of 1 pixel, with the local RiIG parameter (δ) assigned to the new pixel located at the center of the window at each position. This process yields the RiIG parametric image as the map of RiIG parameter δ values. The suitability of the RiIG statistical model over the Nakagami statistical model is shown in Figure 5 by means of CP images and percentile probability plots (pp-plots) [33,34,35].
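A schematic Python version of this sliding-window parametric imaging step is shown below. The local δ estimator is left abstract (passed in as a callable), since the paper does not give its closed form; the reflect-mode border padding is likewise our assumption.

```python
import numpy as np

def parametric_image(coeff_img, estimate_delta, window=13):
    """Map a Contourlet sub-band to a CP image via a sliding-window delta map.

    estimate_delta: callable returning the local RiIG delta estimate from the
    flattened window samples. The window slides in 1-pixel steps and the
    estimate is written to the pixel at the window centre.
    """
    half = window // 2
    padded = np.pad(coeff_img, half, mode='reflect')
    out = np.empty(coeff_img.shape, dtype=np.float64)
    for i in range(coeff_img.shape[0]):
        for j in range(coeff_img.shape[1]):
            patch = padded[i:i + window, j:j + window]
            out[i, j] = estimate_delta(patch.ravel())
    return out
```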

2.1.5. Weighted Contourlet Parametric (WCP) Image

To obtain the WCP images, the CP images are multiplied element-wise by their corresponding Contourlet sub-band coefficients. Since every parameter value of a CP image is thereby weighted by the corresponding Contourlet coefficient, these images are denoted “Weighted Contourlet Parametric (WCP)” images. The region of interest (ROI) (i.e., the lesion region) is determined for the differently sized WCP images by employing the Unitarian Rule, so that the ROI coordinates correspond to those predetermined in the parent B-mode image [14]. To reduce the computational complexity of constructing WCP images, six Contourlet sub-bands are carefully chosen as the most suitable for feature extraction (their suitability is illustrated with ANOVA p-values in Table 2) from among the sub-bands of pyramidal decomposition levels 2, 3, and 4 of the Contourlet transform, which contain 8, 16, and 32 directional sub-bands, respectively. It should be noted that the number of directional sub-bands at pyramidal level n is $2^{(n+1)}$.
Considering pyramidal level 5 with its 64 directional sub-bands would increase the computational cost considerably, as shown in Table 3. Moreover, satisfactory results are obtained with pyramidal decomposition level 4; thus, in this paper, the image analysis is carried out up to level 4. The most suitable sub-bands are pyramidal level-2 directional level-4 (P2D4), pyramidal level-2 directional level-8 (P2D8), pyramidal level-3 directional level-8 (P3D8), pyramidal level-3 directional level-16 (P3D16), pyramidal level-4 directional level-16 (P4D16), and pyramidal level-4 directional level-32 (P4D32); these are shown in Figure 6. These sub-bands are selected mainly because they provide the highest resolution for the images, which is important both for feature extraction and for the CNN-based classification. From these six sub-bands, the six CP images are computed first; each CP image is then converted to a WCP image by the weighting operation described above. Figure 7 shows the Contourlet coefficients at decomposition level P4D32 for the normalized images of Figure 2A,D, together with the corresponding CP and WCP images; the tumor region is visualized more clearly in the WCP images than in the CP images.
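The weighting and stacking of the six selected sub-bands can be sketched as follows; the dictionary-based interface, the resize to a common 224 × 224 grid (used later as the CNN input size in Section 2.3), and the anti-aliasing option are illustrative assumptions rather than details from the paper.

```python
import numpy as np
from skimage.transform import resize

SELECTED_SUBBANDS = ["P2D4", "P2D8", "P3D8", "P3D16", "P4D16", "P4D32"]

def build_wcp_stack(cp_images, subbands, size=(224, 224)):
    """Weight each selected CP image by its Contourlet sub-band (WCP image)
    and stack the six channels into a single array for the CNN input.

    cp_images and subbands are dicts keyed by sub-band name, each holding
    arrays of matching shape for the same lesion.
    """
    channels = [resize(cp_images[k] * subbands[k], size, anti_aliasing=True)
                for k in SELECTED_SUBBANDS]
    return np.stack(channels, axis=-1)        # shape: size[0] x size[1] x 6
```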

2.2. Feature Extraction

A large set of ultrasound features does not necessarily guarantee precise classification of breast tumors; rather, it sometimes degrades the performance of the classifier, and it often demands a high-end system for the computation. In this work, several statistical, geometrical, and texture features are investigated on the B-mode US image, the B-mode parametric image, the Contourlet transformed image, the parametric version of the Contourlet transformed (CP) image, and the weighted parametric version of the Contourlet coefficient (WCP) image. These features have been employed on B-mode US images, shear-wave elastography images, parametric versions of US images, and mammogram images in various earlier works, but never on the weighted parametric version of Contourlet coefficient images. To ascertain the feasibility and assess the discriminative power of the extracted features, an ANOVA p-value analysis has been performed; the p-values are less than 0.1 for all the features utilized in this work, which indicates that the features are useful and non-redundant. The features are summarized in Table 4 with corresponding references and p-values.
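The ANOVA screening of candidate features can be reproduced along the lines of the sketch below, which computes a one-way ANOVA p-value per feature column for the benign and malignant groups; the array layout and the threshold handling are our assumptions.

```python
import numpy as np
from scipy.stats import f_oneway

def anova_p_values(features, labels, threshold=0.1):
    """One-way ANOVA p-value per feature column (benign vs. malignant).

    features: (n_samples, n_features) array; labels: 0 = benign, 1 = malignant.
    Returns the p-values and a boolean mask of features below the threshold.
    """
    benign, malignant = features[labels == 0], features[labels == 1]
    p = np.array([f_oneway(benign[:, j], malignant[:, j]).pvalue
                  for j in range(features.shape[1])])
    return p, p < threshold
```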

2.3. Proposed Classification Schemes

The proposed classification schemes, the WCP feature-based scheme and the WCP CNN-based scheme, are illustrated in Figure 8. To assess the performance of the algorithmically extracted WCP features, seven classifiers are considered, as shown in Figure 8. In the CNN-based classification process, the same seven classifiers are utilized at the last layer of the CNN. All the classifiers employed in this study are implemented in MATLAB using its toolbox functions with default parameters. From the results described in Section 3, it is seen that the RiIG modeled WCP images provide the highest accuracy with the SVM classifier. After determining that the RiIG based WCP images were the most suitable choice, they were provided to a CNN with all seven classifiers applied to the outermost layer, to determine which classifier provides even higher accuracy. For that reason, the proposed classification scheme consists of a CNN to which the RiIG based WCP images are provided as inputs. Neural networks generally require many more training samples than the 250 images of database-I, 163 images of database-II, and 647 images of database-III. For that reason, the number of samples was increased by augmentation to 2000 per database, with equal numbers of malignant and benign cases, forming three large databases consisting of 6000 images in total. Since six sub-bands are selected for each B-mode image, the total number of images increases to 6000 × 6 = 36,000 Contourlet coefficient images. As Figure 6 shows, the images obtained from the different Contourlet sub-band coefficients have different sizes. Since a CNN requires all inputs to have the same size, all the images were resized to 224 × 224, and the corresponding six sub-band images were stacked together to form 6000 3D stack images of size 224 × 224 × 6. The CNN employed in this work is a modified version of the custom CNN presented in [41]; the differences are that the proposed network takes a 224 × 224 × 6 3D image stack as input and that the features extracted from the outermost layer (the Global Average Pooling layer) are provided to seven different classifiers. The reason for not using a pre-trained network with the WCP images is, as noted earlier, that pre-trained networks are built for 3-channel visual images with spatial dimensions and are therefore not compatible with our 3D stack of transform domain coefficient images. The architecture of the proposed CNN configuration is given in Table 5 and shown in Figure 9.
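The sketch below renders the layer sequence of Table 5 in PyTorch for illustration. The kernel sizes, strides, and channel counts follow the table, but the padding choices are not specified in the paper, so the intermediate spatial sizes of this sketch differ slightly from those listed in Table 5; the pooled 128-dimensional feature vector would then be passed to the external classifiers (SVM, KNN, etc.).

```python
import torch
import torch.nn as nn

class WCPNet(nn.Module):
    """Feature extractor following the layer sequence of Table 5 (6-channel input)."""

    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(6, 64, kernel_size=7, stride=3), nn.ReLU(),
            nn.MaxPool2d(kernel_size=2, stride=2),
            nn.Conv2d(64, 128, kernel_size=5, stride=2), nn.ReLU(),
            nn.MaxPool2d(kernel_size=2, stride=2),
            nn.Conv2d(128, 128, kernel_size=3, stride=1), nn.ReLU(),
            nn.MaxPool2d(kernel_size=2, stride=2),
            nn.AdaptiveAvgPool2d(1),           # global average pooling
        )

    def forward(self, x):                       # x: (batch, 6, 224, 224)
        return self.features(x).flatten(1)      # (batch, 128) feature vector

# feats = WCPNet()(torch.randn(2, 6, 224, 224))   # feats.shape == (2, 128)
```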
For training, a 90–10% split is used: 10% of the un-augmented database images (i.e., only original database images) are randomly selected for blind testing, and the remaining 90% (i.e., the remaining original database images and their corresponding augmented images) are used for training, so that there is no overlap between the testing and training samples. If the test data were selected from the augmented data, the accuracy could be significantly biased by leakage and could appear higher than the true test accuracy. As there are not many original data, it is more appropriate to set the testing images aside from the whole database and use the rest to generate the training data through augmentation. A 10-fold cross-validation scheme is employed along with an exhaustive grid search, using the average validation accuracy as the metric, to determine the hyper-parameters of the neural network. The network employs the Adam optimization technique [42] with a batch size of 64 and a learning rate of 0.01. The training data are applied to the CNN through 40,000 iterations. The performance of the proposed method is measured using performance indices such as accuracy, sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV). The confusion matrices are then obtained by counting true positives (TP), true negatives (TN), false positives (FP), and false negatives (FN), where positive stands for a malignant tumor and negative for a benign tumor. The results are discussed in Section 3.
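The performance indices named above follow directly from the confusion-matrix counts; a small helper such as the following (our own formulation) computes them.

```python
def classification_metrics(tp, tn, fp, fn):
    """Performance indices from confusion-matrix counts (positive = malignant)."""
    accuracy    = (tp + tn) / (tp + tn + fp + fn)
    sensitivity = tp / (tp + fn)           # true positive rate
    specificity = tn / (tn + fp)           # true negative rate
    ppv         = tp / (tp + fp)           # positive predictive value
    npv         = tn / (tn + fn)           # negative predictive value
    f1          = 2 * ppv * sensitivity / (ppv + sensitivity)
    return {"accuracy": accuracy, "sensitivity": sensitivity,
            "specificity": specificity, "ppv": ppv, "npv": npv, "f1": f1}
```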

3. Experimental Results

In the proposed classification scheme, for the classical feature-based classification approach, the classification performances are investigated on B-mode US image, parametric (P) image, Contourlet transformed image, parametric version of Contourlet transformed image, weighted parametric version of Contourlet transformed image, etc. The results are shown in Table 6, where it is evident that the application of statistical modeling and Contourlet transform improves the accuracy of the classification. Here, it can be seen that the features extracted from the B-mode image provide the least amount of accuracy. For B-mode images without any statistical modeling or Contourlet transform applied on them, the highest accuracies obtained for database-I, database-II, and database-III were 92%, 92.05%, and 92.15%, respectively, all of them obtained using the KNN classifier. Applying Nakagami and RiIG statistical modeling on the B-mode images improves the accuracies of the classification, the highest being 93.5%, 93.25%, and 92.55% for databases I, II, and III, respectively, obtained from the SVM classifier. Applying Contourlet transform on the B-mode images also proved to be effective in increasing the accuracies, the highest being 93%, 92.65%, and 93.05%, for databases I, II, and III, respectively, using the SVM classifier.
From the results, it is seen that applying either technique to the B-mode images improves the classification performance. In the case of CP images, where both techniques are applied together, the highest classification accuracies increased to 93%, 93.15%, and 93.55% for databases I, II, and III, respectively, all obtained with the SVM classifier. In the case of the proposed WCP images, the highest accuracies increased further to 97.5%, 97.55%, and 97.95% for databases I, II, and III, respectively, again all obtained with the SVM classifier. It can also be seen that the RiIG statistical model provided better performance than the Nakagami statistical model for all seven classifiers and all types of images in databases I, II, and III, confirming that RiIG is more suitable for the statistical modeling of the B-mode images. As the RiIG modeled WCP images provided the best result for the feature engineering method, the CNN method was applied to the RiIG modeled WCP images only. From the results, it is seen that CNN-based feature extraction provides higher accuracy than algorithm-based feature extraction. For the CNN-based approach, the highest accuracies obtained for databases I, II, and III were 98.25%, 98.35%, and 98.55%, respectively, all obtained with the KNN classifier. From Table 6, it is evident that the proposed RiIG based WCP image is the most suitable choice for the classification of breast tumors in both the feature extraction-based approach and the CNN-based approach. Moreover, the CNN-based approach provides higher accuracy than the feature extraction-based approach. The confusion matrices of the 10-fold cross-validation results for the proposed CNN-based approach employing the KNN classifier are shown in Table 7, along with performance indices such as accuracy, sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV) computed from the true positives (TP), true negatives (TN), false positives (FP), and false negatives (FN), where positive stands for a malignant tumor and negative for a benign tumor. It is observed that for all three databases, the values of accuracy, sensitivity, specificity, PPV, and NPV are greater than 98%.

4. Discussion

In the previous section, it was shown that the best classification accuracy is achieved using the CNN-based approach with the RiIG-based WCP images. A comparison with other works is presented in Table 8. The work of P. Acevedo et al. [5] yielded an accuracy of 94% with an F1 score of 0.942 using Database-I. Shivabalan et al. [20] also used Database-I and reported an accuracy of 94.5% with an F1 score of 0.945. The accuracy obtained by the proposed method on the same Database-I, about 98.25% with an F1 score of 0.982, is therefore significantly better. In other work, Hou et al. [21] used Database-II and reported an accuracy of 94.8%. Shin et al. [22] reported an accuracy of 84.5% using the same Database-II combined with other databases. Byra et al. [23] reported an accuracy of 85.3% with an F1 score of 0.765 using Database-II. Qi et al. [24] reported an accuracy of 94.48% with an F1 score of 0.942 using Database-II. In contrast, the proposed method achieves an accuracy of 98.35% with an F1 score of 0.984 on Database-II, which is significantly better. The method of Ka Wing Wan et al. [43] provides accuracies of 91% (F1 score 0.87) with a CNN and 90% (F1 score 0.83) with a Random Forest classifier on Database-III. Moon et al. [44] reported an accuracy of 94.62% with an F1 score of 0.911 using the same Database-III. The accuracy and F1 score of the proposed method are again superior. Furthermore, the proposed CNN-based approach is applied to Database-III with an 80% training and 20% testing ratio and the same validation approach as in [43,44]. This experiment gives an accuracy of 96.45%, a sensitivity of 93.09%, and a specificity of 98.14% with an F1 score of 0.946, still superior to those of [43,44]. The box plots in Figure 10 compare the accuracies of Table 8 and indicate consistent performance across the various methods, including the proposed method.

5. Conclusions

In this paper, two new approaches to breast tumor classification are presented, employing RiIG statistical model-based Weighted Contourlet Parametric (WCP) images obtained from the Contourlet transformed breast US images. In the first approach, various statistical, geometrical, and texture-based features are extracted from the RiIG model-based WCP images and classified using different classifiers; it is shown that a very good degree of accuracy can be achieved with the SVM classifier. In the second approach, a new custom CNN architecture is proposed to classify the WCP images of breast tumors, which shows better performance than the first approach in terms of accuracy. The proposed CNN architecture also provides very high sensitivity, specificity, NPV, and PPV values when the KNN classifier is employed. Both approaches demonstrate better classification performance than existing methods on publicly available benchmark datasets. In addition, the RiIG distribution is shown to be highly suitable for modeling the statistics of the Contourlet transform coefficients of B-mode ultrasound images of breast tumors. There is scope for further improvement by applying the proposed approach in other multi-resolution transform domains and on other datasets.

Author Contributions

Simulation, validation, formal analysis, investigation, data curation, S.M.K.; conceptualization, methodology, S.M.K. and M.I.H.B.; writing—original draft preparation, S.M.K., and M.S.T.; writing—review and editing, M.I.H.B., M.S.T. and A.S.; supervision, M.I.H.B. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data presented in this study are openly available as follows. Dataset-I: Mendeley Data, at https://doi.org/10.17632/wmy84gzngw.1 (accessed on 6 January 2018), reference [26]. Dataset-II: Department of Computing and Mathematics, Manchester Metropolitan University, at https://doi.org/10.1109/JBHI.2017.2731873 (accessed on 7 January 2018), reference [27]. Dataset-III: Cairo University Scholars, at https://doi.org/10.1016/j.dib.2019.104863 (accessed on 28 February 2021), reference [28].

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Siegel, R.L.; Miller, K.D.; Jemal, A. Cancer statistics, 2020. Cancer J. Clin. 2020, 70, 7–30. [Google Scholar] [CrossRef] [PubMed]
  2. Horsch, K.; Giger, M.L.; Venta, L.A.; Vyborny, C.J. Computerized diagnosis of breast lesions on ultrasound. Med. Phys. 2002, 29, 157–164. [Google Scholar] [CrossRef] [PubMed]
  3. Shen, W.-C.; Chang, R.-F.; Moon, W.K.; Chou, Y.-H.; Huang, C.-S. Breast Ultrasound Computer-Aided Diagnosis Using BI-RADS Features. Acad. Radiol. 2007, 14, 928–939. [Google Scholar] [CrossRef] [PubMed]
  4. Ara, S.R.; Bashar, S.K.; Alam, F.; Hasan, M.K. EMD-DWT Based Transform Domain Feature Reduction Approach for Quantitative Multi-class Classification of Breast Tumours. Ultrasonics 2017, 80, 22–33. [Google Scholar] [CrossRef]
  5. Acevedo, P.; Vazquez, M. Classification of Tumors in Breast Echography Using a SVM Algorithm. In Proceedings of the International Conference on Computational Science and Computational Intelligence (CSCI), Las Vegas, NV, USA, 5–7 December 2019; pp. 686–689. [Google Scholar] [CrossRef]
  6. Eltoukhy, M.M.; Faye, I.; Samir, B.B. A comparison of wavelet and curvelet for breast cancer diagnosis in digital mammogram. Comput. Biol. Med. 2010, 40, 384–391. [Google Scholar] [CrossRef]
  7. Do, M.N.; Vetterli, M. The contourlet transform: An efficient directional multi-resolution image representation. IEEE Trans. Image Process. 2005, 14, 2091–2096. [Google Scholar] [CrossRef] [Green Version]
  8. Jesneck, J.L.; Lo, J.Y.; Baker, J.A. Breast mass lesions: Computer-aided diagnosis models with mammographic and sonographic descriptors 1. Radiology 2007, 244, 390–398. [Google Scholar] [CrossRef]
  9. Moayedi, F.; Azimifar, Z.; Boostani, R.; Katebi, S. Contourlet-based mammography mass classification using the SVM family. Comput. Biol. Med. 2010, 40, 373–383. [Google Scholar] [CrossRef]
  10. Dehghani, S.; Dezfooli, M.A. Breast Cancer Diagnosis System Based on Contourlet Analysis and Support Vector Machine. World Appl. Sci. J. 2011, 13, 1067–1076. [Google Scholar]
  11. Zhang, Q.; Xiao, Y.; Chen, S.; Wang, C.; Zheng, H. Quantification of Elastic Heterogeneity Using Contourlet-Based Texture Analysis in Shear-Wave Elastography for Breast Tumour Classification. Ultrasound Med. Biol. 2015, 41, 588–600. [Google Scholar] [CrossRef]
  12. Li, Y.; Liu, Y.; Zhang, M.; Zhang, G.; Wang, Z.; Luo, J. Radiomics with Attribute Bagging for Breast Tumour Classification Using Multimodal Ultrasound Images. J. Ultrasound Med. 2020, 39, 361–371. [Google Scholar] [CrossRef]
  13. Oelze, M.L.; Zachary, J.F.; O’Brien, W.D. Differentiation of Tumour Types In Vivo By Scatterer Property Estimates and Parametric Images Using Ultrasound Backscatter. In Proceedings of the IEEE Ultrasonics Symposium, Honolulu, HI, USA, 5–8 October 2003; pp. 1014–1017. [Google Scholar] [CrossRef]
  14. Liao, Y.-Y.; Tsui, P.-H.; Li, C.-H.; Chang, K.-J.; Kuo, W.-H.; Chang, C.-C.; Yeh, C.-K. Classification of scattering media within benign and malignant breast tumours based on ultrasound texture-feature-based and Nakagami-parameter images. J. Med. Phys. 2011, 38, 2198–2207. [Google Scholar] [CrossRef]
  15. Ho, M.-C.; Lin, J.-J.; Shu, Y.-C.; Chen, C.-N.; Chang, K.-J.; Chang, C.-C.; Tsui, P.-H. Using ultrasound Nakagami imaging to assess liver fibrosis in rats. Ultrasonics 2012, 52, 215–222. [Google Scholar] [CrossRef]
  16. Bharati, S.; Podder, P.; Mondal, M.R.H. Artificial Neural Network Based Breast Cancer Screening: A Comprehensive Review. Int. J. Comput. Inf. Syst. Ind. Manag. Appl. 2020, 12, 125–137. [Google Scholar]
  17. Zhou, Y. A Radiomics Approach with CNN for Shear-Wave Elastography Breast Tumor Classification. IEEE Trans. Biomed. Eng. 2018, 65, 1935–1942. [Google Scholar] [CrossRef]
  18. Zeimarani, B.; Costa, M.G.F.; Nurani, N.Z.; Filho, C.F.F.C. A Novel Breast Tumor Classification in Ultrasound Images, Using Deep Convolutional Neural Network. In Proceedings of the XXVI Brazilian Congress on Biomedical Engineering, Armação de Buzios, Brazil, 21–25 October 2019; pp. 70, 89–94. [Google Scholar] [CrossRef]
  19. Singh, V.K.; Rashwana, H.A.; Romania, S.; Akramb, F.; Pandeya, N.; Sarkera, M.M.K.; Saleha, A.; Arenasc, M.; Arquezc, M.; Puiga, D.; et al. Breast Tumor Segmentation and Shape Classification in Mammograms using Generative Adversarial and Convolutional Neural Network. Elsevier J. Expert Syst. Appl. 2020, 139, 1–14. [Google Scholar] [CrossRef]
  20. Ramachandran, A.; Ramu, S.K. Neural network pattern recognition of ultrasound image gray scale intensity histogram of breast lesions to differentiate between benign and malignant lesions: An analytical study. JMIR Biomed. Eng. 2021, 6, e23808. [Google Scholar] [CrossRef]
  21. Hou, D.; Hou, R.; Hou, J. On-device Training for Breast Ultrasound Image Classification. In Proceedings of the 10th Annual Computing and Communication Workshop and Conference (CCWC), Las Vegas, NV, USA, 6–8 January 2020; pp. 78–82. [Google Scholar] [CrossRef]
  22. Shin, S.Y.; Lee, S.; Yun, I.D.; Kim, S.M.; Lee, K.M. Joint Weakly and Semi-Supervised Deep Learning for Localization and Classification of Masses in Breast Ultrasound Images. IEEE Trans. Med. Imaging 2019, 38, 762–774. [Google Scholar] [CrossRef] [Green Version]
  23. Byra, M.; Galperin, M.; Fournier, H.O.; Olson, L.; O’Boyle, M.; Comstock, C.; Andre, M. Breast mass classification in sonography with transfer learning using a deep convolutional neural network and color conversion. Med. Phys. 2019, 46, 746–755. [Google Scholar] [CrossRef]
  24. Qi, X.; Zhang, L.; Chen, Y.; Pi, Y.; Chen, Y.; Lv, Q.; Yi, Z. Automated diagnosis of breast ultrasonography images using deep neural networks. Med. Image Anal. 2019, 52, 185–198. [Google Scholar] [CrossRef]
  25. Eltoft, T. The Rician Inverse Gaussian Distribution: A New Model for Non-Rayleigh Signal Amplitude Statistics. IEEE Trans. Image Process. 2005, 14, 1722–1735. [Google Scholar] [CrossRef]
  26. Rodrigues, S.P. Breast Ultrasound Image. Mendeley Data 2017, 1. [Google Scholar] [CrossRef]
  27. Yap, M.H.; Pons, G.; Marti, J.; Ganau, S.; Sentis, M.; Zwiggelaar, R.; Davison, A.K.; Marti, R. Automated Breast Ultrasound Lesions Detection using Convolutional Neural Networks. IEEE J. Biomed. Health Inform. 2017, 22, 1218–1226. [Google Scholar] [CrossRef] [Green Version]
  28. Al-Dhabyani, W.; Gomaa, M.; Khaled, H.; Fahmy, A. Dataset of breast ultrasound images. Data Brief. 2020, 28, 104863. [Google Scholar] [CrossRef]
  29. Nugroho, H.A.; Triyani, Y.; Rahmawaty, M.; Ardiyanto, I. Breast ultrasound image segmentation based on neutrosophic set and watershed method for classifying margin characteristics. In Proceedings of the 7th IEEE International Conference on System Engineering and Technology (ICSET), Shah Alam, Malaysia, 2–3 October 2017; pp. 43–47. [Google Scholar] [CrossRef]
  30. Gonzalez, R.C.; Woods, R.E.; Eddins, S.L. Digital Image Processing Using MATLAB, 3rd ed.; Gatesmark Publishing: Knoxville, TN, USA, 2020. [Google Scholar]
  31. Eltoft, T. Modeling the Amplitude Statistics of Ultrasonic Images. IEEE Trans. Med. Imaging 2006, 25, 229–240. [Google Scholar] [CrossRef]
  32. Tsui, P.H.; Chang, C.C. Imaging local scatterer concentrations by the Nakagami statistical model. Ultrasound Med. Biol. 2007, 33, 608–619. [Google Scholar] [CrossRef]
  33. Press, W.H.; Teukolsky, S.A.; Vellerling, W.T.; Flannery, B.P. Numerical recipes in C. In The Art of Scientific Computing; Cambridge University Press: Cambridge, UK, 1999. [Google Scholar]
  34. Achim, A.; Tsakalides, P.; Bezarianos, A. Novel Bayesian multiscale method for speckle removal in medical ultrasound images. IEEE Trans. Med. Imaging 2001, 20, 772–783. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  35. Cardoso, J. Infomax and maximum likelihood for blind source separation. IEEE Signal Process. Lett. 1997, 4, 112–114. [Google Scholar] [CrossRef]
  36. Baek, S.E.; Kim, M.J.; Kim, E.K.; Youk, J.H.; Lee, H.J.; Son, E.J. Effect of clinical information on diagnostic performance in breast sonography. J. Ultrasound Med. 2009, 28, 1349–1356. [Google Scholar] [CrossRef] [PubMed]
  37. Hazard, H.W.; Hansen, N.M. Image-guided procedures for breast masses. Adv. Surg. 2007, 41, 257–272. [Google Scholar] [CrossRef] [PubMed]
  38. Balleyguier, V.; Vanel, D.; Athanasiou, A.; Mathieu, M.C.; Sigal, R. Breast Radiological Cases: Training with BI-RADS Classification. Eur. J. Radiol. 2005, 54, 97–106. [Google Scholar] [CrossRef]
  39. Radi, M.J. Calcium oxalate crystals in breast biopsies. An overlooked form of microcalcification associated with benign breast disease. Arch. Pathol. Lab. Med. 1989, 113, 1367–1369. [Google Scholar]
  40. Chandrupatla, T.R.; Osler, T.J. The Perimeter of an Ellipse. Math. Sci. 2010, 35, 122–131. [Google Scholar]
  41. Kabir, S.M.; Tanveer, M.S.; Shihavuddin, A.; Bhuiyan, M.I.H. Parametric Image-based Breast Tumor Classification Using Convolutional Neural Network in the Contourlet Transform Domain. In Proceedings of the 11th International Conference on Electrical and Computer Engineering (ICECE), Dhaka, Bangladesh, 17–19 December 2020; pp. 439–442. [Google Scholar] [CrossRef]
  42. Kingma, D.P.; Ba, J. Adam: A method for stochastic optimization. In Proceedings of the 3rd International Conference for Learning Representations—ICLR 2015, San Diego, CA, USA, 7–9 May 2015. [Google Scholar]
  43. Wan, K.W.; Wong, C.H.; Ip, H.F.; Fan, D.; Yuen, P.L.; Fong, H.Y.; Ying, M. Evaluation of the performance of traditional machine learning algorithms. convolutional neural network and AutoML Vision in ultrasound breast lesions classification: A comparative study. Quant. Imaging Med. Surg. 2021, 11, 1381–1393. [Google Scholar] [CrossRef]
  44. Moon, W.K.; Lee, Y.-W.; Ke, H.-H.; Lee, S.H.; Huang, C.-S.; Chang, R.-F. Computer-aided diagnosis of breast ultrasound images using ensemble learning from convolutional neural networks. Comput. Methods Programs Biomed. 2020, 190, 105361. [Google Scholar] [CrossRef]
Figure 1. Example of normalization where the first row is the benign case and the second row is the malignant case. (A,C) US B-mode image; (B,D) Normalized image.
Figure 2. Example of binarization and ROI segmentation where the first row is the benign case, and the second row is the malignant case. (A,D) Normalized B-mode US images; (B,E) binary images, and (C,F) lesion boundary outlined automatically.
Figure 3. Discrete Wavelet Transform (DWT) versus Contourlet scheme: illustrating the successive refinement of these two different multi-resolution transform domains near a smooth contour.
Figure 4. Examples of the PDFs residing in the RiIG model with various α, β, and δ values.
Figure 5. Comparison of Nakagami and RiIG statistical modeling for image classification purposes by Nakagami and RiIG Contourlet Parametric (CP) images with percentile probability plot (pp-plot) portraying Nakagami, RiIG, and empirical Cumulative density functions (CDFs). Here, we can see that the Nakagami CP image has a lot of black spots which act as artifacts and obscure the tumor region, making it harder for the feature extraction algorithms to work properly. The RiIG CP image on the right side does not have any black spot, and thus, it is easier for the algorithms to extract the necessary features. From the pp-plots, it is seen that the RiIG CDF follows the empirical CDF more precisely than Nakagami CDF. It also indicates the RiIG distribution is more suitable for parametric modeling of the breast ultrasound images.
Figure 6. WCP images modeled by RiIG PDF at Contourlet sub-bands: pyramidal decomposition level-2 directional decomposition level-4 (P2D4), as well as P2D8, P3D8, P3D16, P4D16, and P4D32.
Figure 7. Example of Contourlet Coefficient at sub-band P4D32 (A,D); corresponding CP image (B,E); corresponding WCP image (C,F). The first row represents a benign case, and the second row represents a malignant case.
Figure 8. The proposed classification schemes.
Figure 9. The proposed CNN architecture.
Figure 10. Comparison of accuracies of the three databases from Table 8.
Table 1. Patient data summary.

Database-I
Tumor Type | No. of Patients | No. of Lesions | Method of Confirmation
Fibroadenoma (Benign) | 91 | 100 | Biopsy
Malignant | 142 | 150 | Biopsy

Database-II
Tumor Type | No. of Patients | No. of Lesions | Method of Confirmation
Cyst (Benign) | 65 | 65 | Biopsy
Fibroadenoma (Benign) | 39 | 39 | Biopsy
Invasive Ductal Carcinoma (Malignant) | 40 | 40 | Biopsy
Ductal Carcinoma in Situ (Malignant) | 4 | 4 | Biopsy
Papilloma (Benign) | 3 | 3 | Biopsy
Lymph Node (Benign) | 3 | 3 | Biopsy
Lymphoma (Malignant) | 1 | 1 | Biopsy
Unknown (Malignant) | 8 | 8 | Biopsy

Database-III
Tumor Type | No. of Patients | No. of Lesions | Method of Confirmation
Benign | 600 (all cases) | 437 | Reviewed by Special Radiologists
Malignant |  | 210 | Reviewed by Special Radiologists
Normal |  | 133 | Reviewed by Special Radiologists

Total patients = 996; Total lesions = 1193
Table 2. Suitability of six Contourlet sub-band coefficients considering ANOVA p-values (95% confidence), where PDL means Pyramidal Decomposition Level and DDL means Directional Decomposition Level.

PDL | DDL | ANOVA p-Value || PDL | DDL | ANOVA p-Value
2 | 1 | 0.077 || 4 | 5 | 0.071
2 | 2 | 0.054 || 4 | 6 | 0.052
2 | 3 | 0.078 || 4 | 7 | 0.073
2 | 4 | 0.022 || 4 | 8 | 0.078
2 | 5 | 0.066 || 4 | 9 | 0.062
2 | 6 | 0.066 || 4 | 10 | 0.071
2 | 7 | 0.062 || 4 | 11 | 0.054
2 | 8 | 0.018 || 4 | 12 | 0.046
3 | 1 | 0.076 || 4 | 13 | 0.072
3 | 2 | 0.081 || 4 | 14 | 0.065
3 | 3 | 0.074 || 4 | 15 | 0.073
3 | 4 | 0.045 || 4 | 16 | 0.013
3 | 5 | 0.054 || 4 | 17 | 0.071
3 | 6 | 0.063 || 4 | 18 | 0.058
3 | 7 | 0.058 || 4 | 19 | 0.062
3 | 8 | 0.008 || 4 | 20 | 0.062
3 | 9 | 0.055 || 4 | 21 | 0.07
3 | 10 | 0.065 || 4 | 22 | 0.082
3 | 11 | 0.079 || 4 | 23 | 0.082
3 | 12 | 0.067 || 4 | 24 | 0.043
3 | 13 | 0.071 || 4 | 25 | 0.08
3 | 14 | 0.065 || 4 | 26 | 0.054
3 | 15 | 0.062 || 4 | 27 | 0.069
3 | 16 | 0.025 || 4 | 28 | 0.058
4 | 1 | 0.073 || 4 | 29 | 0.063
4 | 2 | 0.058 || 4 | 30 | 0.074
4 | 3 | 0.068 || 4 | 31 | 0.058
4 | 4 | 0.048 || 4 | 32 | 0.005
Table 3. The computational time and RAM consumed in constructing Contourlet sub-band coefficients.

Pyramidal Decomposition Level | Overall Occupied RAM (Capacity 16 GB) | Overall Sub-band Image Development Time
2 | 7.92 GB | 3 min 54 s
3 | 10.89 GB | 6 min 11 s
4 | 13.32 GB | 32 min 36 s
5 | 15.92 GB | 1 h 20 min 43 s
Table 4. The features utilized on WCP images with corresponding ANOVA p-values.

Feature with Reference | p-Value
Hypoechogenicity [36,37] | 0.0022
Microlobulation [36,37] | 0.0031
Homogeneous Echoes [36,37] | 0.0032
Heterogeneous Echoes [36,37] | 0.0040
Taller Than Wide [36,37] | 0.0044
Microcalcification [38,39] | 0.0054
Texture [38,39] | 0.0069
Shape Class [3] | 0.0145
Echo Pattern Class [3] | 0.0155
Margin Class [3] | 0.0162
Orientation Class [3] | 0.0165
Lesion Boundary Class [3] | 0.0166
Tilted Ellipse Radius [40] | 0.0312
Tilted Ellipse Perimeter [40] | 0.0344
Tilted Ellipse Area [40] | 0.0347
Tilted Ellipse Compactness [40] | 0.0355
Table 5. The proposed CNN network configuration.

Layers | Input Size | Kernel Size | Stride | Output Size
Input | 224 × 224 × 6 | – | – | –
Conv 1 | 224 × 224 × 6 | 7 × 7 × 64 | 3 × 3 | 98 × 98 × 64
Relu 1 | 98 × 98 × 64 | – | – | 98 × 98 × 64
Maxpool 1 | 98 × 98 × 64 | 2 × 2 × 64 | 2 × 2 | 49 × 49 × 64
Conv 2 | 49 × 49 × 64 | 5 × 5 × 128 | 2 × 2 | 23 × 23 × 128
Relu 2 | 23 × 23 × 128 | – | – | 23 × 23 × 128
Maxpool 2 | 23 × 23 × 128 | 2 × 2 × 128 | 2 × 2 | 12 × 12 × 128
Conv 3 | 12 × 12 × 128 | 3 × 3 × 128 | 1 × 1 | 10 × 10 × 128
Relu 3 | 10 × 10 × 128 | – | – | 10 × 10 × 128
Maxpool 3 | 10 × 10 × 128 | 2 × 2 × 128 | 2 × 2 | 5 × 5 × 128
Global Avg. Pool | 5 × 5 × 128 | – | – | 1 × 1 × 128
Table 6. The classification performances for different types of images with databases I, II, and III.

Accuracy (%) with Database-I
Classifier | B-Mode | B-Mode Parametric (Nakagami) | B-Mode Parametric (RiIG) | Contourlet | CP (Nakagami) | CP (RiIG) | WCP (Nakagami) | WCP (RiIG) | WCP (RiIG, CNN)
SVM | 91.5 | 91.5 | 93.5 | 92 | 91.5 | 93 | 93 | 97.5 | 97.75
KNN | 92 | 91 | 92.5 | 93 | 90.5 | 92 | 92.5 | 95.5 | 98.25
BCT | 88.5 | 90.5 | 91 | 89.5 | 90 | 91.5 | 92.5 | 95 | 96.85
ECOC | 90.5 | 90.5 | 91.5 | 91.5 | 91.5 | 92.5 | 92.5 | 94.5 | 96.45
BGKC | 88 | 89.5 | 90.5 | 89.5 | 89.5 | 90 | 90 | 94 | 95.05
BLHD | 89.5 | 88.5 | 90 | 90.5 | 88.5 | 90.5 | 91.5 | 94.5 | 95.95
ELC | 91 | 92 | 92.5 | 93 | 91.5 | 93 | 93.5 | 96.5 | 97.05

Accuracy (%) with Database-II
Classifier | B-Mode | B-Mode Parametric (Nakagami) | B-Mode Parametric (RiIG) | Contourlet | CP (Nakagami) | CP (RiIG) | WCP (Nakagami) | WCP (RiIG) | WCP (RiIG, CNN)
SVM | 90.50 | 91.95 | 92.25 | 91.40 | 92.90 | 93.15 | 93.95 | 97.55 | 97.90
KNN | 92.05 | 91.85 | 92.00 | 92.65 | 91.10 | 93.05 | 93.55 | 96.30 | 98.35
BCT | 88.35 | 90.05 | 91.35 | 89.55 | 90.45 | 91.85 | 92.55 | 95.05 | 95.75
ECOC | 90.15 | 91.65 | 91.80 | 90.85 | 91.95 | 92.40 | 93.55 | 96.95 | 97.20
BGKC | 87.75 | 88.95 | 90.95 | 88.95 | 89.65 | 90.55 | 90.15 | 94.45 | 95.45
BLHD | 87.15 | 89.20 | 90.55 | 89.20 | 88.25 | 90.05 | 90.75 | 95.05 | 95.15
ELC | 90.20 | 91.45 | 92.05 | 90.55 | 91.55 | 92.15 | 93.10 | 96.95 | 97.65

Accuracy (%) with Database-III
Classifier | B-Mode | B-Mode Parametric (Nakagami) | B-Mode Parametric (RiIG) | Contourlet | CP (Nakagami) | CP (RiIG) | WCP (Nakagami) | WCP (RiIG) | WCP (RiIG, CNN)
SVM | 91.00 | 91.95 | 92.55 | 92.15 | 92.95 | 93.55 | 94.55 | 97.95 | 98.05
KNN | 92.15 | 92.15 | 92.50 | 93.05 | 92.55 | 93.15 | 94.95 | 97.50 | 98.55
BCT | 89.15 | 90.55 | 90.95 | 89.00 | 91.05 | 91.15 | 93.05 | 95.95 | 96.05
ECOC | 90.75 | 92.00 | 92.05 | 91.15 | 92.15 | 92.50 | 94.15 | 97.05 | 97.55
BGKC | 88.15 | 89.05 | 90.55 | 89.05 | 89.95 | 90.15 | 91.15 | 95.15 | 95.55
BLHD | 87.95 | 89.25 | 90.15 | 89.25 | 89.85 | 90.05 | 91.55 | 95.55 | 95.95
ELC | 90.75 | 91.15 | 92.15 | 91.55 | 92.15 | 92.45 | 93.15 | 97.05 | 97.95
Table 7. The confusion matrices of 10-fold cross-validation results of the three databases by the WCP image-based CNN network with the KNN classifier.

[Confusion matrix panels: WCP image analysis with Database-I, Database-II, and Database-III]
Table 8. A comparison of selected studies with the proposed classification scheme using databases I, II, and III.

Author (Year) | Major Contribution | Database | Classifier | Performance
P. Acevedo (2019) [5] | Gray-level co-occurrence matrix (GLCM) algorithm | Database-I [26] | SVM | ACC: 94%, F1 Score: 0.942
Shivabalan K. R. (2021) [20] | Simple convolutional neural network | Database-I [26] | CNN | ACC: 94.5%, SEN: 94.9%, SPEC: 94.1%, F1 Score: 0.945
D. Hou (2020) [21] | Portable device-based CNN architecture | Database-II [27] | CNN | ACC: 94.8%
S. Y. Shin (2019) [22] | Neural network with R-CNN and ResNet-101 | Database-II [27] | R-CNN | ACC: 84.5%
M. Byra (2019) [23] | US to RGB conversion and fine-tuning using back-propagation | Database-II [27] | VGG19 CNN | ACC: 85.3%, SEN: 79.6%, SPEC: 88%, F1 Score: 0.765
X. Qi (2019) [24] | Deep CNN with multi-scale kernels and skip connections | Database-II [27] | Deep CNN | ACC: 94.48%, SEN: 95.65%, SPEC: 93.88%, F1 Score: 0.942
Ka Wing Wan (2021) [43] | Automatic machine learning model (AutoML Vision) | Database-III [28] | CNN | ACC: 91%, SEN: 82%, SPEC: 96%, F1 Score: 0.87
 |  |  | Random Forest | ACC: 90%, SEN: 71%, SPEC: 100%, F1 Score: 0.83
Woo Kyung Moon (2020) [44] | CNN ensemble including VGGNet, ResNet, and DenseNet | Database-III [28] | Deep CNN | ACC: 94.62%, SEN: 92.31%, SPEC: 95.60%, F1 Score: 0.911
Proposed Method | WCP image, custom-made CNN architecture | Database-I [26] | Deep CNN | ACC: 98.25%, SEN: 98.49%, SPEC: 98.01%, F1 Score: 0.982
 |  | Database-II [27] | Deep CNN | ACC: 98.35%, SEN: 98.11%, SPEC: 98.59%, F1 Score: 0.984
 |  | Database-III [28] | Deep CNN | ACC: 98.55%, SEN: 98.21%, SPEC: 98.89%, F1 Score: 0.986
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.


