Article

Identification of Lacerations Caused by Cervical Cancer through a Comparative Study among Texture-Extraction Techniques

by Jorge Aguilar-Santiago 1, José Trinidad Guillen-Bonilla 2,*, Mario Alberto García-Ramírez 2 and Maricela Jiménez-Rodríguez 1,*

1 Departamento de Ciencias Básicas, Centro Universitario de la Ciénega, Universidad de Guadalajara, Ocotlán 47810, Jalisco, Mexico
2 Departamento de Electro-Fotónica, Centro Universitario de Ciencias Exactas e Ingeniería, Universidad de Guadalajara, Guadalajara 44430, Jalisco, Mexico
* Authors to whom correspondence should be addressed.
Appl. Sci. 2023, 13(14), 8292; https://doi.org/10.3390/app13148292
Submission received: 28 April 2023 / Revised: 5 July 2023 / Accepted: 13 July 2023 / Published: 18 July 2023

Abstract

Cervical cancer is a disease affecting a worrisomely large number of women worldwide. If not treated in a timely fashion, it can lead to death. For this reason, this research employed the LBP, OC_LBP, CS-LTP, ICS-TS, and CCR texture descriptors to extract features from 60 selected carcinogenic images classified as Types 1, 2, and 3 according to a database; afterward, a statistical multi-class classifier and a neural network (NN) were used for image classification. The resulting characteristic vectors of all five descriptors were used in four tests to identify the images by type. With the statistical multi-class classifier, the combination and classification of all images achieved a classification efficiency of 83–100%. With the NN, the LBP, OC_LBP, and CCR descriptors presented a classification efficiency of between 81.6 and 98.3%, in contrast to ICS_TS and CS_LTP, which ranged from 36.6 to 55%. Based on the tests performed regarding ablation, ROC curves, and the confusion matrix, we consider that an efficient expert system can be developed with the objective of detecting cervical cancer at early stages.

1. Introduction

At present, several computer vision techniques are employed to extract important characteristics from images by analyzing color intensity or different types of structures [1]. All of these studies can be implemented in software for the automated diagnosis of diseases that can lead to death if left undetected at early stages, such as the analysis of chest X-ray radiographs [2], the detection of colorectal cancer [3] and hepatic tumors [4], and the identification of health issues in individuals through electrocardiograms (ECG) [5]. According to the World Health Organization (WHO), cervical cancer (CC) is the fourth most frequent cancer among women, especially in countries with low and medium incomes; 50% of precancerous lacerations of the cervix derive from human papillomavirus (HPV) types 16 and 18 [6]. This cancer poses a great risk to women's health; thus, it is of great importance to detect it in due time in order to provide rapid and timely treatment, thereby preventing further complications. At present, advances in the area of artificial intelligence (AI), particularly neural networks and computer vision, have attracted many researchers to the field of disease detection. For example, Park et al. compared the effectiveness of machine learning and deep learning in detecting CC in cervicographies [7]. Furthermore, a predictive model has been employed to classify cancer images using deep learning and transfer learning [8]. The orthogonal combination of local binary patterns (OC-LBP) has been employed to identify cervical intraepithelial neoplasia (CIN) with the goal of predicting cervical cancer through deep learning, using ResNet50, ResNet152, and InceptionV3 models trained for this task [9]. Singh et al. implemented several machine learning algorithms (Decision Tree, Random Forest, Gaussian Naïve Bayes, and Gradient Boosting classifiers) to detect CC in Papanicolaou tests, comparing their performance in classifying cells into normal and abnormal patterns to determine the most suitable one(s) for automated learning [10]. Furthermore, a system to classify cervix types using deep learning through a convolutional neural network (CNN) was developed [11]. Additionally, an artificial neural network (ANN) was deployed to classify normal and abnormal cells in the cervical region of the uterus [12]. An image-classification algorithm for cervical images in three categories (Types 1, 2, and 3) was proposed, in which the cervix is detected, an R-CNN model is implemented, and the images with cropped cervixes are then classified [13]. Another research study detected cervical pre-cancer by employing a deep metric learning (DML) framework; using three neural networks (ResNet-50, MobileNet, and NasNet), a K-Nearest Neighbor classifier was implemented for this task [14]. A system based on a neural network was developed for CC detection and classification into normal and abnormal. This latter system applies the Oriented Local Histogram Technique (OLHT) to cervical images for border enhancement and the Dual-Tree Complex Wavelet Transform (DT-CWT) for better multi-resolution representation; afterward, Wavelet, Grey-Level Co-Occurrence Matrix (GLCM), and Local Binary Pattern (LBP) features are employed to extract image characteristics [15].
To detect CC, not only images have been analyzed; other studies have examined medical data stored in databases to predict which patients have a greater propensity to develop CC. Al-Wesabi et al. compared different machine learning classifiers and determined that information such as age, first sexual encounter, pregnancy rate, tobacco addiction, hormonal contraceptives, and genital herpes can assist in predicting which patients are at risk [16]. Others estimate the results of a cervical biopsy by employing data on what they consider personal risk factors; a Decision Tree (DT) is tasked with the classification, employing Recursive Feature Elimination (RFE) and the Least Absolute Shrinkage and Selection Operator (LASSO) to select important attributes [17]. Random Forest and machine learning have also been utilized to evaluate medical information [18]. Machine learning and Decision Trees are capable of predicting cervical cancer by taking several risk factors into consideration [19,20]. Another proposal classifies biopsied cervical tissue employing LASSO to select its characteristics, and an SVM classifier is implemented [21]. The Nominated Texture-Based Cervical Cancer Classification (NTCC) system extracts characteristics from the Papanicolaou (pap smear) test; first, pre-processing is required, and the nucleus and cytoplasm are segmented for training; immediately thereafter, the classification employs neural networks and SVM [22]. Another study implements machine learning with pap smear images for classification as normal or abnormal; pre-processing, segmentation, the extraction of twenty characteristics, and pattern classification are performed, combining Random Forest and ReliefF for cervical cell classification [23]. In a similar manner, Khamparia et al. developed an interactive diagnostic system based on the Internet of Health Things (IoHT), in which pap smear imaging data, a convolutional neural network (CNN), and machine learning techniques such as K-Nearest Neighbor, Naïve Bayes, Logistic Regression, Random Forest, and support vector machines were employed, and a web application to predict normal and abnormal carcinogenic cells was designed [24]. Another paper considers the texture characteristics extracted from 18F-FDG PET images of patients with cervical cancer, as they present the metabolic characteristics of the tumors [25].
The previously mentioned techniques were developed with the goal of detecting CC, which could aid medical workers attending to women in rural areas or those with fewer resources, enabling the disease to be detected in its early stages so that patients receive prompt treatment, thus preventing further complications. In this work, five texture-extraction techniques (LBP, OC_LBP, CS_LTP, ICS_TS, and CCR) are described and applied as feature vectors in a multi-class classifier and in a neural network, using 60 selected carcinogenic images and the two classification techniques. The 60 images are classified experimentally, and the efficiency of each texture-extraction technique is measured, so that, based on the results, it is possible to identify the most efficient technique(s) and classifier for the recognition of carcinogenic images. Therefore, by identifying and selecting the extraction technique together with the classifier, new and efficient artificial vision systems can be developed for the classification of cervical cancer images, reducing classification errors. It is important to mention that computer vision systems for detecting cervical cancer in its early stages are important in hospitals and medical oncology offices because they can improve patients' quality of life.
The rest of this paper has the following structure: Section 2 details some works related to this research; Section 3 describes the database employed for testing, a detailed report of the texture descriptors compared in this investigation, and the functioning of the statistical classifier and the neural network; Section 4 illustrates in detail the experimental results, while the discussion and contribution are presented in Section 5; and, last, but not least, the conclusions obtained are explained in Section 6.

2. Literature Review

Currently, an immense number of people lack the economic resources or access to quality medical services needed to undergo examinations within a periodic time frame; this causes the death of many people from diseases that could have been treated and prevented, as in the case of cervical cancer. To address this problem, some studies presented a method employing support vector machines (SVM) for multispectral pap smear image classification in the detection of cervical cancer, where statistics of pixel intensity, as well as asymmetric orthogonal wavelets, biorthogonal wavelets, and Gabor wavelets, were used for feature extraction [26]. Additionally, a system to detect and classify cervical cancer cells was developed implementing convolutional neural networks (CNN) and, for classification, an extreme learning machine (ELM)- or autoencoder (AE)-based classifier [27]. An automated diagnostic system was created to detect cervical pre-cancerous lacerations through pap smear images, performing feature extraction and a smart diagnosis considering nucleus size, nucleus grey levels, cytoplasm size, and cytoplasm grey levels; an algorithm called Region-Growing-Based Feature Extraction (RGBFE) was proposed, together with an Artificial Neural Network (ANN) with a Hierarchical Hybrid Multilayered Perceptron (H2MLP) architecture, to detect the different types of lacerations [28]. Furthermore, cervical cancer was diagnosed with an algorithm implementing K-means clustering, considering five regions of cancer biopsy images classified as pre-cancerous (Cervical Intraepithelial Neoplasia: CIN1, CIN2, CIN3) and malignant [29]. The pap test was employed to determine cervical cancer at early stages, implementing the Brightness Preserving Dynamic Fuzzy Histogram Equalization (BPDFHE) method and the Fuzzy C-Means technique to select the area of interest; feature extraction was performed through an Ant Colony Optimization (ACO) algorithm, and CNN, MLP, and ANN models were employed for classification. According to the classification tests, ACO-CNN possessed the greatest precision [30]. An Intelligent Deep Convolutional Neural Network for Cervical Cancer Detection and Classification (IDCNN-CDC) model was implemented to perform the pre-processing, segmentation, characterization, and classification of biomedical pap smear images; a Gaussian filter was deployed to enhance data collection, and the Extreme Learning Machine (ELM) was used to detect and classify the cells within the cervix [31]. A CNN classified cervical images as cancerous or non-cancerous using cervix images obtained through an Android device within 1 min after applying 3–5% acetic acid [32]. Another study employed a CNN to classify high-quality cervix images captured through a pocket colposcope; a ResNet-18 algorithm was deployed for characterization and disease classification, and a contrast between the acetic acid and the green light was applied to the images, accentuating aceto-whitening [33]. A fully automated pipeline was developed for cervix detection and cervical cancer classification from cervigram images, employing CNNs to detect the cervix region and classify tumors [34]. A system called CervDetect was designed to evaluate the risk of exposure to cervical cancer, implementing a self-learning algorithm with Random Forest (RF) and Neural Networks [18].
According to our review of these related works, it can be observed that the majority of studies deployed NNs to detect lacerations in the cervix through pap smear images, while others employed images captured through a colposcope. It can be determined that the CNN is the preferred method; therefore, this paper proposes a study in which the performance of the LBP, OC_LBP, CS_LTP, ICS_TS, and CCR texture descriptors is evaluated with a statistical classifier and an NN, with the following objectives:
  • Comparing a statistical method with an NN to verify whether there exist greater advantages when artificial intelligence (AI) is incorporated into image classification.
  • Determining which descriptors and classifiers present better performance regarding the detection of cervical cancer in images.
  • Identifying which descriptors provide better results for both a multi-class classifier and an NN, so that these can be subsequently employed for the development of an automated diagnostic software.

3. Materials and Methods

3.1. Database

Sixty images from the IARC colposcopy image bank were used. This database was generated by the International Agency for Research on Cancer (IARC/WHO) [35]. The database is composed of images classified as Types 1, 2, and 3; 20 images of each type were used in this project. The carcinogenic images were resized to a resolution of 640 × 480 pixels.

3.2. Texture-Extraction Techniques

So that their texture characteristics can be recognized in the gynecological images, Section 3.2.1, Section 3.2.2, Section 3.2.3, Section 3.2.4 and Section 3.2.5 describe the basic definition of the texture unit for five different techniques available in the literature, applied in areas such as security, quality control, and object detection, among others. Nonetheless, as reported to date, these techniques had not been applied to gynecological carcinogenic image recognition. Hence, in Section 4, the texture spectrum of carcinogenic images is determined, corroborating that a gynecological image can be characterized through its texture and that each technique generates its own spectrum.

3.2.1. Local Binary Pattern (LBP)

LBP is one of the most widely used texture descriptors due to the excellent quality of its characterization. It is currently employed in facial recognition research [36], in the facial detection and recognition of identical twins [37], and, combined with CNNs, to classify benign and malignant breast tumors [38]. Furthermore, LBP was employed with a CNN to classify hyperspectral images [39], and, after enhancing its precision and adapting it to a set of images, it was deployed in footwear design and production [40]. The original version of the Local Binary Pattern (LBP) technique (1996) is based on a local analysis of the grey levels of the image through a mobile observation window [22,41,42,43]. The window size is 3 × 3 pixels, and at each position it detects a pattern $P = \{p_{i,j}\} = [p_{1,1}\; p_{1,2}\; p_{1,3}\; p_{2,1}\; p_{2,2}\; p_{2,3}\; p_{3,1}\; p_{3,2}\; p_{3,3}]$, whose elements $p_{i,j}$ $(i = 1, 2, 3;\; j = 1, 2, 3)$ take values in the interval 0–255. Each pattern is then used to calculate the LBP texture unit as follows: the central pixel $p_{2,2}$ is used as the threshold reference, and the pattern is compared according to the expression $b_l = \begin{cases} 1, & I_l \ge I_c \\ 0, & I_l < I_c \end{cases}$, where $I_c$ is the intensity of the central pixel $p_{2,2}$, $I_l$ is the intensity of the $l$th $(l = 1, 2, \ldots, 3 \times 3 - 1)$ neighboring pixel, and $b_l$ is the $l$th bit according to the local threshold. From the resulting binary code $b_l$, the LBP texture unit $k_{LBP}$ is calculated based on a Binary-Coded-Decimal (BCD) codification,
$k_{LBP} = \sum_{l=0}^{L-1} b_l\, 2^l = b_{L-1} 2^{L-1} + \cdots + b_1 2^1 + b_0 2^0, \qquad k_{LBP} = 0, 1, 2, \ldots, K-1$ (1)
where $k_{LBP}$ is the texture unit value, 2 is the base, and $L$ is the binary code length ($L = 3 \times 3 - 1 = 8$). Figure 1 shows an example of how the unit $k_{LBP} = 193$ is calculated: the pattern is $p_{i,j} = [233\; 200\; 195\; 255\; 200\; 56\; 26\; 48\; 120]$, the central pixel $p_{2,2}$ has intensity $I_c = 200$, and the resulting binary code is $b = 11000001$. The process generating this binary value can be observed in Figure 1.
The $k_{LBP}$ unit is considered a discrete random variable and is employed as an index into the histogram $h(k_{LBP})$. The histogram records the occurrence frequency of the LBP units and contains 256 different elements.
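For concreteness, the following minimal Python sketch computes the LBP texture spectrum described above. The mapping of neighbors to bit positions is an assumption made for illustration, since the text fixes the ordering only through the example of Figure 1.

```python
import numpy as np

def lbp_spectrum(img):
    """256-bin LBP texture spectrum of a grey-scale image (Equation (1)).

    img: 2-D uint8 array. A 3 x 3 window slides over the image; each of
    the 8 neighbors is thresholded against the central pixel I_c.
    """
    h, w = img.shape
    hist = np.zeros(256, dtype=np.int64)
    # Neighbor offsets; the bit assigned to each neighbor is assumed.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            c = img[i, j]                      # central pixel p_{2,2}
            code = 0
            for bit, (di, dj) in enumerate(offsets):
                if img[i + di, j + dj] >= c:   # b_l = 1 when I_l >= I_c
                    code |= 1 << bit
            hist[code] += 1                    # k_LBP indexes the histogram
    return hist
```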

3.2.2. Orthogonal Combination of Local Binary Patterns (OC-LBP)

The Orthogonal Combination of Local Binary Patterns (OC-LBP) is a modification of the LBP technique, also developed for texture analysis in grey-scale images [9,42]. As with LBP, OC-LBP is based on a local analysis of the image through a mobile window (3 × 3 pixels), which detects the pattern $P = \{p_{i,j}\} = [p_{1,1}\; p_{1,2}\; p_{1,3}\; p_{2,1}\; p_{2,2}\; p_{2,3}\; p_{3,1}\; p_{3,2}\; p_{3,3}]$ at each position of the image under study. The elements $p_{i,j}$ also take values in the interval 0–255. Here, the OC-LBP texture unit is calculated as follows: the pattern $P$ is divided into two sub-patterns, $P_1 = [0\; p_{1,2}\; 0\; p_{2,1}\; p_{2,2}\; p_{2,3}\; 0\; p_{3,2}\; 0]$ and $P_2 = [p_{1,1}\; 0\; p_{1,3}\; 0\; p_{2,2}\; 0\; p_{3,1}\; 0\; p_{3,3}]$. Employing the central pixel $p_{2,2}$ as the threshold reference, the sub-patterns $P_1$ and $P_2$ are processed according to the criteria $b_{l_1} = \begin{cases} 1, & I_{l_1} \ge I_c \\ 0, & I_{l_1} < I_c \end{cases}$ and $b_{l_2} = \begin{cases} 1, & I_{l_2} \ge I_c \\ 0, & I_{l_2} < I_c \end{cases}$, where $b_{l_1}$ $(l_1 = 1, 2, 3, 4)$ is the first binary code, obtained from pattern $P_1$; $b_{l_2}$ $(l_2 = 1, 2, 3, 4)$ is the second binary code, obtained from pattern $P_2$; $I_c$ is the intensity of the central element $p_{2,2}$; and $I_{l_1}$ and $I_{l_2}$ are the intensities of the neighbors within each of the sub-patterns ($P_1$ and $P_2$). With the first binary code $b_{l_1}$, the term $k^{1}_{OC\text{-}LBP}$ is determined based on the BCD conversion
$k^{1}_{OC\text{-}LBP} = \sum_{l_1=0}^{3} b_{l_1}\, 2^{l_1}$ (2)
Continuing with the process, the term $k^{2}_{OC\text{-}LBP}$ is calculated through the expression
$k^{2}_{OC\text{-}LBP} = \sum_{l_2=0}^{3} b_{l_2}\, 2^{l_2}$ (3)
It can be noted that both bit chains are composed of only four elements; hence, each term has $\mathbb{R}^{16}$ dimensions. Figure 2 illustrates an example of calculating $k^{1}_{OC\text{-}LBP} = 12$ and $k^{2}_{OC\text{-}LBP} = 4$: the pattern $P$ is $p_{i,j} = [233\; 200\; 195\; 255\; 200\; 56\; 26\; 48\; 120]$, the sub-patterns are $P_1 = [0\; 200\; 0\; 255\; 200\; 56\; 0\; 48\; 0]$ and $P_2 = [233\; 0\; 195\; 0\; 200\; 0\; 26\; 0\; 120]$, and the binary codes are $b_{l_1} = 1100$ and $b_{l_2} = 0100$. Figure 2 presents this procedure graphically.
Thus, $k^{1}_{OC\text{-}LBP}$ and $k^{2}_{OC\text{-}LBP}$ are deployed as indexes into the discrete histograms $h(k^{1}_{OC\text{-}LBP})$ and $h(k^{2}_{OC\text{-}LBP})$; finally, both histograms are concatenated to create a single texture spectrum $h(k_{OC\text{-}LBP})$. This spectrum has $\mathbb{R}^{32}$ dimensions, a smaller dimensional space than that of the LBP histogram ($\mathbb{R}^{256}$).
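Under the same assumptions as the LBP sketch, a sketch of the OC-LBP spectrum follows: the thresholding is identical, but two 4-bit codes are accumulated into 16-bin histograms that are concatenated into the 32-dimensional spectrum.

```python
import numpy as np

def oclbp_spectrum(img):
    """32-dim OC-LBP spectrum: two concatenated 16-bin histograms
    (Equations (2) and (3))."""
    h, w = img.shape
    h1 = np.zeros(16, dtype=np.int64)   # sub-pattern P1 (plus-shaped)
    h2 = np.zeros(16, dtype=np.int64)   # sub-pattern P2 (diagonal)
    plus = [(-1, 0), (0, 1), (1, 0), (0, -1)]
    diag = [(-1, -1), (-1, 1), (1, 1), (1, -1)]
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            c = img[i, j]
            k1 = sum(int(img[i + di, j + dj] >= c) << b
                     for b, (di, dj) in enumerate(plus))
            k2 = sum(int(img[i + di, j + dj] >= c) << b
                     for b, (di, dj) in enumerate(diag))
            h1[k1] += 1
            h2[k2] += 1
    return np.concatenate([h1, h2])
```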

3.2.3. Center-Symmetric Local Ternary Patterns (CS-LTP)

Another texture-extraction technique available in the literature is the Center-Symmetric Local Ternary Pattern (CS-LTP). This technique is also employed in the analysis of grey-scale images; it is based on the Local Ternary Pattern (LTP) technique, and it performs a local analysis of the studied image [41,42,44]. For each pixel in the image, a mobile window (3 × 3 pixels) detects the pattern $P = \{p_{i,j}\} = [p_{1,1}\; p_{1,2}\; p_{1,3}\; p_{2,1}\; p_{2,2}\; p_{2,3}\; p_{3,1}\; p_{3,2}\; p_{3,3}]$, which is employed to calculate the texture unit through the following procedure. From pattern $P$, the sub-pattern $P_1 = [0\; p_{1,2}\; 0\; p_{2,1}\; 0\; p_{2,3}\; 0\; p_{3,2}\; 0]$ is extracted. Then, a threshold is applied to $P_1$ with the criterion $t_l = \begin{cases} 0, & x < -T \\ 1, & -T \le x \le T \\ 2, & x > T \end{cases}$, where $T$ is the threshold value (typically $T = 3$) and $t_l$ $(l = 0, 1)$ is a ternary code: if $t_0$ is calculated, then $x = p_{2,1} - p_{2,3}$, and if $t_1$ is calculated, then $x = p_{3,2} - p_{1,2}$. In this case, the CS-LTP texture unit is calculated through the numerical conversion
$k_{CS\text{-}LTP} = \sum_{l=0}^{1} t_l\, 3^l$ (4)
where 3 is the conversion base and $k_{CS\text{-}LTP}$ is the Center-Symmetric Local Ternary Pattern unit. Figure 3 illustrates the procedure to calculate the $k_{CS\text{-}LTP}$ unit.
Once again, the $k_{CS\text{-}LTP}$ unit is considered a discrete random variable and is employed as an index into the discrete histogram $h(k_{CS\text{-}LTP})$. As in the previous cases, this histogram records the occurrence frequency of the units, but with a dimensional space of $\mathbb{R}^{9}$.
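A minimal sketch of the CS-LTP spectrum is shown below, using the centre-symmetric differences and the ternary criterion defined above ($T = 3$ by default, as the text suggests).

```python
import numpy as np

def csltp_spectrum(img, T=3):
    """9-bin CS-LTP spectrum (Equation (4)); T is the ternary threshold."""
    h, w = img.shape
    hist = np.zeros(9, dtype=np.int64)

    def tern(x):
        # Ternary criterion: 0 if x < -T, 2 if x > T, 1 otherwise.
        return 0 if x < -T else (2 if x > T else 1)

    for i in range(1, h - 1):
        for j in range(1, w - 1):
            t0 = tern(int(img[i, j - 1]) - int(img[i, j + 1]))  # p21 - p23
            t1 = tern(int(img[i + 1, j]) - int(img[i - 1, j]))  # p32 - p12
            hist[t1 * 3 + t0] += 1       # k = t1*3^1 + t0*3^0
    return hist
```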

3.2.4. Improved Center-Symmetric Texture Spectrum (ICS-TS)

The Improved Center-Symmetric Texture Spectrum (ICS-TS) is also derived from the Local Ternary Patterns (LTP) technique and has proven effective in image classification tasks [42]. Its texture unit is determined as follows. With a mobile 3 × 3 window, a pattern $P = \{p_{i,j}\} = [p_{1,1}\; p_{1,2}\; p_{1,3}\; p_{2,1}\; p_{2,2}\; p_{2,3}\; p_{3,1}\; p_{3,2}\; p_{3,3}]$ is detected and, from $P$, the sub-patterns $P_1 = [0\; p_{1,2}\; 0\; p_{2,1}\; 0\; p_{2,3}\; 0\; p_{3,2}\; 0]$ and $P_2 = [p_{1,1}\; 0\; p_{1,3}\; 0\; 0\; 0\; p_{3,1}\; 0\; p_{3,3}]$ are defined. Both $P_1$ and $P_2$ are thresholded with the criterion $t_{l_1, l_2} = \begin{cases} 0, & x < -T \\ 1, & -T \le x \le T \\ 2, & x > T \end{cases}$, where $T$ is the threshold value, $t_{l_1}$ and $t_{l_2}$ are the ternary values, and $x$ is a difference of intensities. The ternary code $t_{l_1}$ is obtained exactly as in CS-LTP (see Section 3.2.3), i.e., if $t_0$ is calculated, then $x = p_{2,1} - p_{2,3}$, and if $t_1$ is determined, then $x = p_{3,2} - p_{1,2}$; in this case, the first term $k^{1}_{ICS\text{-}TS}$ is computed by
$k^{1}_{ICS\text{-}TS} = \sum_{l_1=0}^{1} t_{l_1}\, 3^{l_1}$ (5)
For the second ternary value $t_{l_2}$: if $t_0$ is calculated, then $x = p_{3,1} - p_{1,3}$, and for $t_1$, $x = p_{3,3} - p_{1,1}$; finally, the second term $k^{2}_{ICS\text{-}TS}$ is given by
$k^{2}_{ICS\text{-}TS} = \sum_{l_2=0}^{1} t_{l_2}\, 3^{l_2} = t_1 3^1 + t_0 3^0$ (6)
Figure 4 illustrates the steps for the procedure to calculate both terms of the ICS-TS texture unit in detail.
As with the previous texture units, $k^{1}_{ICS\text{-}TS}$ and $k^{2}_{ICS\text{-}TS}$ are employed as indexes for the histograms $h(k^{1}_{ICS\text{-}TS})$ and $h(k^{2}_{ICS\text{-}TS})$; concatenating both, the ICS-TS texture spectrum is obtained, achieving an $\mathbb{R}^{18}$ dimensional space.
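The ICS-TS spectrum can be sketched as an extension of CS-LTP that adds the diagonal ternary term of Equation (6); the two 9-bin histograms are concatenated into the 18-dimensional spectrum.

```python
import numpy as np

def icsts_spectrum(img, T=3):
    """18-dim ICS-TS spectrum: CS-LTP term plus a diagonal ternary term
    (Equations (5) and (6))."""
    h, w = img.shape
    h1 = np.zeros(9, dtype=np.int64)
    h2 = np.zeros(9, dtype=np.int64)

    def tern(x):
        return 0 if x < -T else (2 if x > T else 1)

    for i in range(1, h - 1):
        for j in range(1, w - 1):
            # First term: plus-shaped centre-symmetric differences.
            t0 = tern(int(img[i, j - 1]) - int(img[i, j + 1]))          # p21 - p23
            t1 = tern(int(img[i + 1, j]) - int(img[i - 1, j]))          # p32 - p12
            h1[t1 * 3 + t0] += 1
            # Second term: diagonal centre-symmetric differences.
            s0 = tern(int(img[i + 1, j - 1]) - int(img[i - 1, j + 1]))  # p31 - p13
            s1 = tern(int(img[i + 1, j + 1]) - int(img[i - 1, j - 1]))  # p33 - p11
            h2[s1 * 3 + s0] += 1
    return np.concatenate([h1, h2])
```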

3.2.5. Coordinated Cluster Representation (CCR)

The fundamental version of the Coordinated Cluster Representation (CCR) was developed for binary images; nonetheless, the CCR transform has been applied to grey-scale and color images by performing an image binarization as pre-processing. The matrix representation of the CCR was reported in references [42,45,46,47], and the transformation is founded on two theorems. The first theorem establishes the structure of the CCR in periodic images.
Theorem 1. 
If a binary image $S^{\alpha}$, invariant to translation, has a period of $\tau_1 \cdot \tau_2$ pixels, $\tau_1$ in one direction and $\tau_2$ in the other, then its CCR distribution function $F_{I,J}^{\alpha}(b)$ bears no more than $T = \tau_1 \cdot \tau_2$ non-zero values. If the CCR inspection window has a size equal to or greater than the period, $I \ge \tau_1$ and $J \ge \tau_2$, then $F_{I,J}^{\alpha}(b)$ takes exactly $T = \tau_1 \cdot \tau_2$ non-zero values.
The second theorem establishes the relation between $H_{I,J}^{\alpha}(b)$ and the $n$th correlation moments of a binary image $S^{\alpha}$. According to the second theorem, the histogram $H_{I,J}^{\alpha}(b)$ contains all of the data regarding the $n$-point correlation moments of the image $S^{\alpha}$. The distribution function $F_{I,J}(b)$ provides sufficient data about the binary image $S$ because of the thorough correlation analysis it performs.
Theorem 2. 
Given a binary image matrix $S^{\alpha} = \|s^{\alpha}(l, m)\|$ and a CCR distribution function $F_{I,J}^{\alpha}(b)$ of the image, if $\max\{l_i\} \le I$ and $\max\{m_i\} \le J$ $(i = 1, 2, \ldots, k-1)$ and $K = I \cdot J$, then the $k$th correlation function $\langle s^{\alpha}(l, m)\, s^{\alpha}(l + l_1, m + m_1) \cdots s^{\alpha}(l + l_{k-1}, m + m_{k-1}) \rangle = \frac{1}{N'} \sum_{l,m=1}^{L', M'} s^{\alpha}(l, m)\, s^{\alpha}(l + l_1, m + m_1) \cdots s^{\alpha}(l + l_{k-1}, m + m_{k-1})$ can be reconstructed from $F_{I,J}^{\alpha}(b)$, where $N' = L' \cdot M'$ is the size of the image, $L' = L - \max\{l_i\}$, and $M' = M - \max\{m_i\}$.
In CCR, a mobile window moves pixel-by-pixel through the entire binary image under study; at each position, the window detects a binary pattern $P = \{p_{i,j}\} = [p_{1,1}\; p_{1,2}\; p_{1,3}\; p_{2,1}\; p_{2,2}\; p_{2,3}\; p_{3,1}\; p_{3,2}\; p_{3,3}]$ and employs it to calculate the CCR texture unit as follows. An observation window of size $I \times J = 3 \times 3$ detects the binary pattern $P$, which is considered a binary matrix with $I$ rows and $J$ columns, where white pixels are ones and black pixels are zeros. The rows of the matrix $P$ are concatenated, forming a binary code $b = [p_{1,1}\; p_{1,2}\; p_{1,3}\; p_{2,1}\; p_{2,2}\; p_{2,3}\; p_{3,1}\; p_{3,2}\; p_{3,3}]$. Afterward, the CCR texture unit is estimated through a Binary-Coded-Decimal (BCD) conversion,
$k_{CCR} = \sum_{l=0}^{8} b_l\, 2^l$ (7)
where $k_{CCR}$ is the CCR texture unit, $b_l$ is the $l$th bit ($p_{1,1} \to b_8$, $p_{1,2} \to b_7$, $p_{1,3} \to b_6$, $p_{2,1} \to b_5$, $p_{2,2} \to b_4$, $p_{2,3} \to b_3$, $p_{3,1} \to b_2$, $p_{3,2} \to b_1$, $p_{3,3} \to b_0$), and 2 is the working base. Figure 5 illustrates in detail the procedure for calculating the $k_{CCR}$ texture unit.
The histogram $h(k_{CCR})$ is generated by interpreting the $k_{CCR}$ unit as a discrete variable. Finally, a probability density function of the texture units, $F(k_{CCR})$, is obtained by dividing the $h(k_{CCR})$ histogram by the total number of texture units. In this case, the texture spectrum has $\mathbb{R}^{512}$ dimensions.
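A sketch of the CCR spectrum under these definitions follows. The binarization method is an assumption (a global mean threshold), since the text states only that binarization is applied as pre-processing.

```python
import numpy as np

def ccr_spectrum(img, thresh=None):
    """512-bin CCR probability spectrum F(k_CCR) (Equation (7)).

    The grey-scale image is binarized first; a global mean threshold is
    assumed here. Each 3 x 3 binary pattern is read row-wise as a 9-bit
    code (p11 -> b8, ..., p33 -> b0) and histogrammed.
    """
    if thresh is None:
        thresh = img.mean()
    binary = (img >= thresh).astype(np.uint8)
    h, w = binary.shape
    hist = np.zeros(512, dtype=np.float64)
    weights = 2 ** np.arange(8, -1, -1)      # row-wise bit weights
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            bits = binary[i - 1:i + 2, j - 1:j + 2].ravel()
            hist[int(bits @ weights)] += 1
    return hist / hist.sum()                 # probability density
```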

3.3. Classifiers

3.3.1. Statistical Classifier

In references [45,46,48], a multi-class classifier was proposed, applied, and optimized for the recognition of ornamental granites in digital images. Its experimental results were very high, making it a promising classifier for image recognition. This research proposes its use in the identification of cervical cancer images. The classifier is statistically based and consists of two phases: learning and recognition. In the learning phase, a prototype vector is generated for each known carcinogenic image. In the recognition phase, a vector is generated for a test carcinogenic image; this vector is compared against all of the known prototype vectors, and the carcinogenic image is assigned to the class whose textural characteristics are most similar.
To train the classifier, the following process is employed, considering that the task is the classification of carcinogenic images into $M$ classes, where each image is a class. From a carcinogenic image of non-homogeneous texture, a series of $P$ sub-images $S_m^{\alpha}$ $(m = 1, 2, 3, \ldots, M;\; \alpha = 1, 2, 3, \ldots, P)$ is sampled at random, and the characteristic vector $F_m^{\alpha}[k]$ is obtained for each sub-image. Subsequently, each carcinogenic image (class $S_m$) is characterized through its spectra $F_m^{\alpha}[k]$:
$F_m[k] = \frac{1}{P} \sum_{\alpha=1}^{P} F_m^{\alpha}[k]$ (8)
$F_m[k]$ is considered the prototype vector of the cervical cancer image (class $m$) in the textural spectrum employed: LBP, OC-LBP, CS-LTP, ICS-TS, or CCR. In the recognition phase, a characteristic vector from a test carcinogenic image is estimated, and the minimal distance is used to assign it to the class with the most similar prototype. For the characterization of the test cervical cancer image $S_T$, a series of $K$ sub-images $S_T^{\beta}$ $(\beta = 1, 2, 3, \ldots, K)$ of equal size is employed; as in the training phase, the characteristic vector $F_T^{\beta}[k]$ of each sub-image $S_T^{\beta}$ is obtained. The carcinogenic image $S_T$ is characterized through the $F_T^{\beta}[k]$ functions as follows:
$F_T[k] = \frac{1}{K} \sum_{\beta=1}^{K} F_T^{\beta}[k]$ (9)
The test carcinogenic image is assigned to class $m$ if and only if the distance between its vector and the prototype vector of class $m$ is minimal; the distance is calculated with
$d\left(F_T[k], F_m[k]\right) = \sum_{k} \left| F_T[k] - F_m[k] \right|$ (10)
The classification result is illustrated in a confusion matrix $C = \|C_{mm'}\|$, whose rows correspond to the test images and whose columns correspond to the classes; the main diagonal presents the correct predictions, and the off-diagonal elements represent the classification errors or incorrect predictions [49].
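To make the two phases concrete, the following Python sketch condenses Equations (8)–(10). The sub-image size is an assumption made for illustration (Section 4.1 fixes only the counts $P = K = 1000$), and `descriptor` stands for any of the five spectrum functions.

```python
import numpy as np

def train_prototypes(classes, descriptor, P=1000, sub=(64, 64), seed=0):
    """Learning phase: prototype F_m[k] per class as the mean spectrum of
    P random sub-images (Equation (8)). `classes` maps a label m to its
    grey-scale training image; the sub-image size `sub` is assumed."""
    rng = np.random.default_rng(seed)
    prototypes = {}
    for m, img in classes.items():
        H, W = img.shape
        acc = np.zeros_like(descriptor(img[:sub[0], :sub[1]]), dtype=float)
        for _ in range(P):
            i = rng.integers(0, H - sub[0])
            j = rng.integers(0, W - sub[1])
            acc += descriptor(img[i:i + sub[0], j:j + sub[1]])
        prototypes[m] = acc / P
    return prototypes

def classify(test_img, prototypes, descriptor, K=1000, sub=(64, 64), seed=1):
    """Recognition phase: characterize the test image (Equation (9)) and
    assign it to the class with minimal distance (Equation (10))."""
    rng = np.random.default_rng(seed)
    H, W = test_img.shape
    acc = np.zeros_like(next(iter(prototypes.values())))
    for _ in range(K):
        i = rng.integers(0, H - sub[0])
        j = rng.integers(0, W - sub[1])
        acc += descriptor(test_img[i:i + sub[0], j:j + sub[1]])
    f_t = acc / K
    return min(prototypes, key=lambda m: np.abs(f_t - prototypes[m]).sum())
```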

3.3.2. Neural Networks

It is known that neural networks have practical application in digital image recognition tasks, and they have been widely used in previous works with outstanding efficacy [50,51,52]. Their efficiency depends on the number of layers, the number of neurons, and the coefficients obtained during the training phase. In this paper, the LBP, OC-LBP, CS-LTP, ICS-TS, and CCR histograms are deployed as input signals of the neural network (see Figure 6) with the objective of classifying the carcinogenic images selected from the database.
Figure 6 illustrates the neural network architecture proposed for image recognition. Initially, the laceration caused by the cervical cancer is transformed into a texture spectrum (the histogram $h[k]$); subsequently, the spectrum $h[k]$ is employed as the input signal of the neural network, where $k$ is the index of the dimensional space and $K$ is the maximal dimension of the descriptor (LBP, OC-LBP, CS_LTP, ICS-TS, or CCR). The input signal is then processed by the $x$ hidden layers of the neural network, where $x = 1, 2, 3, \ldots$. Finally, the signal obtained through the layers reaches the output layer, represented by $O_1, O_2, O_3, \ldots, O_m, \ldots, O_M$, where $M$ corresponds to the number of classes to identify.
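As an illustration, a PyTorch sketch of this architecture (PyTorch is the library used in Section 4) is given below. The hidden-layer activation is an assumption; the text specifies only that the input and hidden layers match the spectrum dimension and that the output layer is sigmoid (Section 4.2).

```python
import torch.nn as nn

class TextureNet(nn.Module):
    """MLP following Figure 6: the texture spectrum h[k] (dimension K) is
    the input, the hidden layers keep the spectrum dimension, and the
    output layer has one sigmoid unit per class (M classes). The number
    of hidden layers is a tunable choice (see the ablation study)."""
    def __init__(self, K, M, hidden_layers=2):
        super().__init__()
        layers = []
        for _ in range(hidden_layers):
            layers += [nn.Linear(K, K), nn.ReLU()]  # activation assumed
        layers += [nn.Linear(K, M), nn.Sigmoid()]   # outputs O_1 ... O_M
        self.net = nn.Sequential(*layers)

    def forward(self, h):
        return self.net(h)
```

For example, the LBP spectrum gives $K = 256$, OC-LBP $K = 32$, CS-LTP $K = 9$, ICS-TS $K = 18$, and CCR $K = 512$.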

4. Results

The objective of this experimental work is to achieve the correct recognition of the database generated by the International Agency for Research on Cancer (IARC/WHO), which consists of 60 RGB-color images divided into three types (1, 2, and 3), with 20 different images for each cancer type. Four series of experiments were developed in which the classifiers described in Section 3 were applied, and their characteristic vectors were the LBP, OC-LBP, CS-LTP, ICS-TS, and CCR texture spectra. For our experiments, a computer with an Intel Core i7-8750H 2.20 GHz processor, 16 GB of DDR4 RAM, a 512 GB WD Black SSD, an Nvidia GeForce GTX 1050, and 64-bit Windows 10 Home was used; the programming languages utilized for development were Python (with the PyTorch library), Java, and MATLAB R2021a [53].

4.1. Tests with the Statistical Classifier

The LBP, OC-LBP, CS-LTP, ICS-TS, and CCR texture spectra were calculated through a 3 × 3 window and subsequently employed as characteristic vectors with the classifier described in Section 3.3.1. The classifier consists of two phases: training and recognition. During the training phase, each carcinogenic image $S_m$ is considered a class; thus, the number of classes is $M = 20$ (in experiment 4, $M = 60$). For each class $S_m$, 1000 sub-images $S_m^{\alpha}$ $(\alpha = 1, 2, \ldots, P = 1000)$ are randomly extracted, and the texture spectrum $F_m^{\alpha}[k]$ of every sub-image is extracted. Subsequently, the class $S_m$ is characterized according to Equation (8), where the vector $F_m[k]$ is the prototype of class $m$. In the recognition phase, the test images are the same as those in the training phase. A set of 1000 sub-images $S_t^{\beta}$ $(\beta = 1, 2, \ldots, K = 1000)$ is randomly extracted from the test carcinogenic image $S_t$, and the texture spectrum $F_T^{\beta}[k]$ of each sub-image $S_t^{\beta}$ is calculated. For the characterization of the carcinogenic laceration test image $S_t$, Equation (9) is employed. As a final step, the $S_t$ images are identified through Equation (10).
Four series of experiments are developed for each texture descriptor. In the first series, Type 1 carcinogenic images are identified; in the second and third series, Types 2 and 3 are identified. To conclude, in the fourth series of experiments, all carcinogenic images are combined and subsequently identified. The results are expressed through a confusion matrix $C_{mm'}$, where the main-diagonal elements denote the correct classification predictions, the off-diagonal elements describe the identification errors, the columns correspond to the carcinogenic images, and the rows correspond to the prototypes [49]. Figure 7a–d illustrates the four confusion matrices obtained when the LBP histogram is employed as the characteristic vector and the carcinogenic images are classified with it.
The identification efficiency of the carcinogenic images, in percentage terms, was estimated through the following expression:
$E(\%) = \frac{\mathrm{diag}\left(C_{mm'}\right)}{\sum_{m,m'} C_{mm'}} \times 100$ (11)
where $C_{mm'}$ is the confusion matrix, $\mathrm{diag}(C_{mm'})$ is the sum of the main diagonal of the matrix (the sum of correct classification predictions), $\sum_{m,m'} C_{mm'}$ is the sum of all elements of the matrix, and $E(\%)$ is the classification efficiency of the carcinogenic images in percentage terms. Based on Figure 8, the efficiency results with LBP were as follows: $E(\%) = 100\%$ (Type 1), $E(\%) = 100\%$ (Type 2), $E(\%) = 100\%$ (Type 3), and $E(\%) = 100\%$ (all types combined). Figure 8 illustrates the experimental results obtained in the classification of the carcinogenic images employing the five texture spectra.
Analyzing Figure 8, the carcinogenic images are recognized efficiently (between 80% and 100%) when the statistical classifier is employed and its characteristic vector is any of the texture spectra: LBP, OC-LBP, CS-LTP, ICS-TS, or CCR. The five techniques exhibit high efficiency in the first three experiments ($E(\%)$ between 80% and 100%). Nonetheless, in the fourth experiment, when the three types are combined, OC-LBP, CS-LTP, and ICS-TS achieve $E(\%)$ between 96% and 98%, while LBP achieves $E(\%) = 100\%$ and CCR falls to $E(\%) = 83\%$. The error increase in the fourth experiment can be attributed to the existing similarity among the three different types of cancer.

4.2. Neural Network Tests

For these experiments, the neural network processes the LBP, OC-LBP, CS-LTP, ICS-TS, and CCR histograms as input data; each contains the textural information of the carcinogenic image. The number of neurons in the input and hidden layers is equal to the texture spectrum dimension. The parameters deployed included a Sigmoid activation function in the output layer (see Figure 9a), Binary Cross Entropy Loss (BCELoss) as the loss function, and a learning rate of 0.00001. Adam, an extension of Stochastic Gradient Descent, was employed as the optimizer (it performs efficient stochastic optimization with only first-order gradients and few memory requirements) [54], and the epoch number was set at 20,000, yielding a loss equal or close to zero (see Figure 9b).
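The following minimal PyTorch sketch reproduces this training configuration. The loss, optimizer, learning rate, and epoch count are those reported above; the placeholder data and the single hidden layer are assumptions made so the sketch runs standalone.

```python
import torch
import torch.nn as nn

# Placeholder data: 60 LBP spectra (K = 256) and one-hot targets, one
# class per image (Section 4.1). Real spectra would replace these.
spectra = torch.rand(60, 256)
targets = torch.eye(60)

model = nn.Sequential(                    # simplified form of Figure 6
    nn.Linear(256, 256), nn.ReLU(),       # hidden layer; activation assumed
    nn.Linear(256, 60), nn.Sigmoid())     # sigmoid output layer (Figure 9a)

criterion = nn.BCELoss()                  # Binary Cross Entropy Loss
optimizer = torch.optim.Adam(model.parameters(), lr=0.00001)  # Adam [54]

for epoch in range(20000):                # epoch count set at 20,000
    optimizer.zero_grad()
    loss = criterion(model(spectra), targets)
    loss.backward()
    optimizer.step()
```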
The results obtained through the neural network were represented in a confusion matrix $C_{mm'}$, and the classification efficiency $E(\%)$ of the carcinogenic images was calculated employing Equation (11). Figure 10 presents the results obtained when the neural network was applied to the identification of cervical cancer; the input signal is a histogram (LBP, OC-LBP, CS-LTP, ICS-TS, or CCR). As can be observed in Figure 10, the three most efficient texture-extraction techniques are LBP, OC_LBP, and CCR, with an efficiency within the 81–98% range. These three texture spectra extract sufficient textural information from the carcinogenic image, and the neural network is successfully calibrated by them. On the other hand, the least efficient techniques were CS_LTP and ICS_TS, with efficiencies of between 25 and 55% and from 40 to 80%, respectively. Their deficient performance can be attributed to the neural network and to the small dimensional size of their texture spectra.

4.2.1. Ablation Study

In neural networks, ablation is employed to evaluate the importance of the neurons or filters in the classification performance [55]. Therefore, the non-linear layers comprising the network were modified, with the number of layers for each descriptor chosen according to its dimensional space. Tests were applied to determine the number of layers and neurons necessary for the best performance. These results are displayed in Table 1.
Table 2 presents a comparison between accuracy and the techniques employed for the classification of cervical cancer images.

4.2.2. ROC Curves

ROC curves were employed to determine the accuracy of the neural network as well as that of the statistical classifier with the LBP, OC_LBP, CS-LTP, ICS-TS, and CCR descriptors. With this tool, the number of correct and incorrect assignments in the classification of each class can be measured; curves closer to the upper-left vertex indicate greater precision. Figure 11 shows the ROC curves of the statistical classifier, and Figure 12 displays the ROC curves generated for the NN.

4.2.3. Confusion Matrices

Classification efficiency was tested with the database containing 60 carcinogenic images classified as Type I, Type II, and Type III, recording the classification errors.
The LBP confusion matrix is not presented because no classification errors were detected. Similarly, OC_LBP and ICS_TS each had only one error, in which an image was classified as the correct type but assigned to the incorrect class (image); CS_LTP had two errors of the same kind as OC_LBP and ICS_TS. Table 3 corresponds to the CCR confusion matrix; this descriptor also classified images in the correct type but in the incorrect class, and it additionally classified images into incorrect types.
Confusion matrices were also generated to evaluate the classification efficiency of the neural networks; likewise, the images were classified as Type I, Type II, or Type III, recording the classification errors. LBP and CCR had only one error each: the type was correct, but the class or image assigned was incorrect. Table 4, Table 5 and Table 6 present the results of the descriptors with the most errors; these classified images in the correct type but in the incorrect class, and also classified images into incorrect types.

5. Discussion and Contribution

5.1. Discussion

This paper evaluated the effectiveness of identifying images with carcinogenic lacerations using a statistical classifier and neural networks through five different texture spectra (LBP, OC-LBP, CS-LTP, ICS-TS, and CCR), thus determining the most efficient techniques for the classification of carcinogenic images.
When comparing the results obtained by the statistical classifier (Figure 8) against those of the neural network when the three types are combined (Figure 10), the statistical classifier was more efficient. To illustrate this, the mean value for each technique was calculated, and Table 7 was generated.
From the data in Table 7, the following can be inferred: the five texture-extraction techniques are efficient with the statistical classifier, since the parameter $E(\%)$ lies within the 83.3333–100% range. This high efficiency can be attributed to three reasons [45,46,48]: (1) combining the classifier with the textural data extracted by any of the five techniques renders highly efficient classification of carcinogenic images possible; (2) the number of sub-images allows the correct characterization of each class; and (3) the sub-image size is appropriate for the adequate characterization of the classes. On the other hand, with the neural network, the three efficient techniques were LBP [$E(\%) = 98.3333\%$], OC_LBP [$E(\%) = 81.6667\%$], and CCR [$E(\%) = 98.3333\%$], and the two least efficient ones were CS_LTP [$E(\%) = 36.6667\%$] and ICS_TS [$E(\%) = 55\%$]. This difference in efficiency is due to the definition of the texture spectrum for each technique; consequently, the histograms exhibit different behaviors (see Figure 13), and their content is distinct. This causes each texture-extraction technique to have its own efficiency in the task of identifying carcinogenic images.
In agreement with the results of the performed tests, to achieve good classification efficiency of carcinogenic images, LBP, OC_LBP, and CCR are recommended; their efficiency was corroborated with both classifiers. Additionally, CS_LTP and ICS_TS could be employed if their histograms are used as characteristic vectors with the statistical classifier (see Figure 8). Hence, our future work is to develop an artificial vision system for identifying the type of cervical cancer.

5.2. Contribution

In our research work, the classification efficiency of cervical cancer images was measured by applying five texture-extraction techniques and two classifiers. Based on our experimental results, new and efficient expert systems for the early detection of cervical cancer can be developed by applying any of the texture-extraction techniques (LBP, OC_LBP, CS_LTP, ICS_TS, or CCR) with the statistical classifier, as the image classification efficiency is from 80 to 100%, as shown in Figure 8. However, if the classifier used for image recognition is the neural network, then only three texture spectra allow the development of efficient expert systems: LBP, OC_LBP, and CCR, whose efficiencies are within the range of 81.6667 to 98.3333% (see Figure 10).
Therefore, this research study provides valuable information on the use of texture descriptors and classifiers for cervical cancer image classification. The findings provide a foundation for further exploration and development of efficient expert systems for the early detection and treatment of this life-threatening disease.

6. Conclusions

In view of the goal of providing a technique for the detection of lacerations caused by cervical cancer through the classification of digital images, a series of experiments was performed to identify 60 carcinogenic images according to the type of laceration. These tests determined that LBP provided 98.3333–100% efficiency with both classifiers in identifying the images, although, when all types were combined and classified, its efficiency decreased toward the lower end of this range. Meanwhile, the tests performed with the neural network showed that characteristic vectors with a smaller dimensional space provide less information; therefore, a deficient classification is obtained compared with descriptors of a larger dimensional space.
In our experimental work, a public database was used; as a consequence, the classification efficiency is very high because natural conditions and noise did not affect image recognition. However, in practical applications, natural lighting conditions, image detection noise, and device noise are some limitations that could reduce the efficiency of our proposal, as well as reducing the efficiency of any expert system applied in the field of cancer image recognition.

Author Contributions

Conceptualization, M.J.-R. and J.T.G.-B.; methodology, M.J.-R. and J.T.G.-B.; software, J.A.-S.; formal analysis, J.A.-S.; investigation, M.A.G.-R.; writing—review and editing, J.A.-S., M.J.-R. and J.T.G.-B. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

All the data used in this paper can be traced back to the cited references.

Acknowledgments

The authors wish to thank Mexico’s National Council of Science and Technology (CONACyT) for the support granted. J. Aguilar-Santiago expresses his gratitude to CONACyT for the scholarship provided. The authors thank all of the students for their collaboration.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Chaki, J.; Dey, N. Texture Feature Extraction Techniques for Image Recognition; Springer Briefs in Applied Sciences and Technology; Springer: Singapore, 2020; ISBN 9789811508523. [Google Scholar]
  2. Devnath, L.; Summons, P.; Luo, S.; Wang, D.; Shaukat, K.; Hameed, I.A.; Aljuaid, H. Computer-Aided Diagnosis of Coal Workers’ Pneumoconiosis in Chest X-ray Radiographs Using Machine Learning: A Systematic Literature Review. Int. J. Environ. Res. Public Health 2022, 19, 6439. [Google Scholar] [CrossRef] [PubMed]
  3. González-Castro, V.; Cernadas, E.; Huelga, E.; Fernández-Delgado, M.; Porto, J.; Antunez, J.R.; Souto-Bayarri, M. CT Radiomics in Colorectal Cancer: Detection of KRAS Mutation Using Texture Analysis and Machine Learning. Appl. Sci. 2020, 10, 6214. [Google Scholar] [CrossRef]
  4. Krishan, A.; Mittal, D. Effective segmentation and classification of tumor on liver MRI and CT images using multi-kernel K-means clustering. Biomed. Eng. Biomed. Tech. 2020, 65, 301–313. [Google Scholar] [CrossRef]
  5. Prakash, A.J. Capsule Network for the Identification of Individuals Using Quantized ECG Signal Images. IEEE Sens. Lett. 2022, 6, 1–4. [Google Scholar] [CrossRef]
  6. World Health Organization. Available online: https://www.who.int/es/news-room/fact-sheets/detail/cervical-cancer (accessed on 11 November 2022).
  7. Park, Y.R.; Kim, Y.J.; Ju, W.; Nam, K.; Kim, S.; Kim, K.G. Comparison of machine and deep learning for the classification of cervical cancer based on cervicography images. Sci. Rep. 2021, 11, 16143. [Google Scholar] [CrossRef]
  8. Dhawan, S.; Singh, K.; Arora, M. Cervix Image Classification for Prognosis of Cervical Cancer using Deep Neural Network with Transfer Learning. EAI Endorsed Trans. Pervasive Health Technol. 2017, 7, 169183. [Google Scholar] [CrossRef]
  9. Zhu, C.; Bichot, C.-E.; Chen, L. Image region description using orthogonal combination of local binary patterns enhanced with color information. Pattern Recognit. 2013, 46, 1949–1963. [Google Scholar] [CrossRef]
  10. Singh, S.K.; Anjali, G. Performance analysis of machine learning algorithms for cervical cancer detection. Int. J. Healthc. Inf. Syst. Inform. 2020, 15, 1–21. [Google Scholar] [CrossRef]
  11. Payette, J.; Rachleff, J.; de Graaf, C.V. Intel and MobileODT Cervical Cancer Screening Kaggle Competition: Cervix Type Classification Using Deep Learning and Image Classification. Available online: https://www.semanticscholar.org/paper/Intel-and-MobileODT-Cervical-Cancer-Screening-%3A-and-Payette/fb75bbd2ffd384dc0ff5bd25bdd43e5051810d90 (accessed on 28 June 2023).
  12. Devi, M.A.; Ravi, S.; Vaishnavi, J.; Punitha, S. Classification of Cervical Cancer Using Artificial Neural Networks. Procedia Comput. Sci. 2016, 89, 465–472. [Google Scholar] [CrossRef] [Green Version]
  13. Yang, X.; Zeng, Z.; Teo, S.G.; Wang, L.; Chandrasekhar, V.; Hoi, S. Deep Learning for Practical Image Recognition: Case Study on Kaggle Competitions. In Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, London, UK, 19–23 August 2018; ACM: London, UK, 2018; pp. 923–931. [Google Scholar]
  14. Pal, A.; Xue, Z.; Befano, B.; Rodriguez, A.C.; Long, L.R.; Schiffman, M.; Antani, S. Deep Metric Learning for Cervical Image Classification. IEEE Access 2021, 9, 53266–53275. [Google Scholar] [CrossRef]
  15. Elayaraja, P.; Suganthi, M. Automatic Approach for Cervical Cancer Detection and Segmentation Using Neural Network Classifier. Asian Pac. J. Cancer Prev. 2018, 19, 3571–3580. [Google Scholar] [CrossRef] [Green Version]
  16. Al-Wesabi, Y.M.S.; Choudhury, A.; Won, D. Classification of Cervical Cancer Dataset. arXiv 2018, arXiv:1812.10383. [Google Scholar]
  17. Tanimu, J.J.; Hamada, M.; Hassan, M.; Kakudi, H.; Abiodun, J.O. A Machine Learning Method for Classification of Cervical Cancer. Electronics 2022, 11, 463. [Google Scholar] [CrossRef]
  18. Mehmood, M.; Rizwan, M.; Gregus Ml, M.; Abbas, S. Machine Learning Assisted Cervical Cancer Detection. Front. Public Health 2021, 9, 788376. [Google Scholar] [CrossRef]
  19. Parikh, D.; Menon, V. Machine Learning Applied to Cervical Cancer Data. Int. J. Math. Sci. Comput. 2019, 5, 53–64. [Google Scholar] [CrossRef]
  20. Asadi, F.; Salehnasab, C.; Ajori, L. Supervised Algorithms of Machine Learning for the Prediction of Cervical Cancer. J. Biomed. Phys. Eng. 2020, 10, 513–522. [Google Scholar] [CrossRef]
  21. Huang, P.; Zhang, S.; Li, M.; Wang, J.; Ma, C.; Wang, B.; Lv, X. Classification of Cervical Biopsy Images Based on LASSO and EL-SVM. IEEE Access 2020, 8, 24219–24228. [Google Scholar] [CrossRef]
  22. Mariarputham, E.J.; Stephen, A. Nominated Texture Based Cervical Cancer Classification. Comput. Math. Methods Med. 2015, 2015, 586928. [Google Scholar] [CrossRef] [PubMed]
  23. Sun, G. Cervical Cancer Diagnosis based on Random Forest. Int. J. Perform. Eng. 2017, 17, 446. [Google Scholar] [CrossRef]
  24. Khamparia, A.; Gupta, D.; De Albuquerque, V.H.C.; Sangaiah, A.K.; Jhaveri, R.H. Internet of health things-driven deep learning system for detection and classification of cervical cells using transfer learning. J. Supercomput. 2020, 76, 8590–8608. [Google Scholar] [CrossRef]
  25. Wei, M.; Zhe, C.; Wei, S.; Feng, Y.; Ruwei, D.; Ning, W.; Jie, T. Staging of cervical cancer based on tumor heterogeneity characterized by texture features on 18F-FDG PET images. Phys. Med. Biol. 2015, 60, 5123–5139. [Google Scholar] [CrossRef] [Green Version]
  26. Zhang, J.; Liu, Y. Cervical Cancer Detection Using SVM Based Feature Screening. In Medical Image Computing and Computer-Assisted Intervention—MICCAI 2004; Barillot, C., Haynor, D.R., Hellier, P., Eds.; Lecture Notes in Computer Science; Springer: Berlin/Heidelberg, Germany, 2004; Volume 3217, pp. 873–880. ISBN 978-3-540-22977-3. [Google Scholar] [CrossRef] [Green Version]
  27. Ghoneim, A.; Muhammad, G.; Hossain, M.S. Cervical cancer classification using convolutional neural networks and extreme learning machines. Future Gener. Comput. Syst. 2020, 102, 643–649. [Google Scholar] [CrossRef]
  28. Mat-Isa, N.A.; Mashor, M.Y.; Othman, N.H. An automated cervical pre-cancerous diagnostic system. Artif. Intell. Med. 2008, 42, 1–11. [Google Scholar] [CrossRef] [PubMed]
  29. Rahmadwati; Naghdy, G.; Ros, M.; Todd, C. Morphological Characteristics of Cervical Cells for Cervical Cancer Diagnosis. In Proceedings of the 2011 2nd International Congress on Computer Applications and Computational Science, Jakarta, Indonesia, 15–17 November 2011; Gaol, F.L., Nguyen, Q.V., Eds.; Advances in Intelligent and Soft Computing; Springer: Berlin/Heidelberg, Germany, 2012; Volume 145, pp. 235–243. ISBN 978-3-642-28307-9. [Google Scholar]
  30. Kavitha, R.; Jothi, D.K.; Saravanan, K.; Swain, M.P.; Gonzáles, J.L.A.; Bhardwaj, R.J.; Adomako, E. Ant Colony Optimization-Enabled CNN Deep Learning Technique for Accurate Detection of Cervical Cancer. BioMed Res. Int. 2023, 2023, 1742891. [Google Scholar] [CrossRef] [PubMed]
  31. Ibrahim Waly, M.; Yacin Sikkandar, M.; Abdelkader Aboamer, M.; Kadry, S.; Thinnukool, O. Optimal Deep Convolution Neural Network for Cervical Cancer Diagnosis Model. Comput. Mater. Contin. 2022, 70, 3295–3309. [Google Scholar] [CrossRef]
  32. Kudva, V.; Prasad, K.; Guruvare, S. Automation of Detection of Cervical Cancer Using Convolutional Neural Networks. Crit. Rev. Biomed. Eng. 2018, 46, 135–145. [Google Scholar] [CrossRef]
  33. Skerrett, E.; Miao, Z.; Asiedu, M.N.; Richards, M.; Crouch, B.; Sapiro, G.; Qiu, Q.; Ramanujam, N. Multicontrast Pocket Colposcopy Cervical Cancer Diagnostic Algorithm for Referral Populations. BME Front. 2022, 2022, 9823184. [Google Scholar] [CrossRef]
  34. Alyafeai, Z.; Ghouti, L. A fully-automated deep learning pipeline for cervical cancer classification. Expert. Syst. Appl. 2020, 141, 112951. [Google Scholar] [CrossRef]
  35. Kaggle. [En línea]. Available online: https://www.kaggle.com/c/intel-mobileodt-cervical-cancer-screening (accessed on 30 June 2023).
  36. Liu, L.; Fieguth, P.; Zhao, G.; Pietikäinen, M.; Hu, D. Extended local binary patterns for face recognition. Inf. Sci. 2016, 358–359, 56–72. [Google Scholar] [CrossRef]
  37. Priya, T.V.; Sanchez, G.V.; Raajan, N.R. Facial Recognition System Using Local Binary Patterns(LBP). Int. J. Pure Appl. Math. 2018, 119, 1895–1899. [Google Scholar]
  38. Touahri, R.; AzizI, N.; Hammami, N.E.; Aldwairi, M.; Benaida, F. Automated Breast Tumor Diagnosis Using Local Binary Patterns (LBP) Based on Deep Learning Classification. In Proceedings of the 2019 International Conference on Computer and Information Sciences (ICCIS), Sakaka, Saudi Arabia, 3–4 April 2019; IEEE: Sakaka, Saudi Arabia, 2019; pp. 1–5. [Google Scholar] [CrossRef]
  39. Wei, X.; Yu, X.; Liu, B.; Zhi, L. Convolutional neural networks and local binary patterns for hyperspectral image classification. Eur. J. Remote Sens. 2019, 52, 448–462. [Google Scholar] [CrossRef] [Green Version]
  40. Wang, C.; Li, D.; Li, Z.; Wang, D.; Dey, N.; Biswas, A.; Moraru, L.; Sherratt, R.S.; Shi, F. An efficient local binary pattern based plantar pressure optical sensor image classification using convolutional neural networks. Optik 2019, 185, 543–557. [Google Scholar] [CrossRef] [Green Version]
  41. Wu, X.; Sun, J. An Extended Center-Symmetric Local Ternary Patterns for Image Retrieval. In Proceedings of the Advances in Computer Science, Environment, Ecoinformatics, and Education: International Conference, CSEE 2011, Wuhan, China, 21–22 August 2011; Springer: Berlin/Heidelberg, Germany, 2011; Volume 214, pp. 359–364. [Google Scholar]
  42. Fernández, A.; Álvarez, M.X.; Bianconi, F. Texture Description Through Histograms of Equivalent Patterns. J. Math. Imaging Vis. 2013, 45, 76–102. [Google Scholar] [CrossRef] [Green Version]
  43. Fernández, A.; Ghita, O.; González, E.; Bianconi, F.; Whelan, P.F. Evaluation of robustness against rotation of LBP, CCR and ILBP features in granite texture classification. Mach. Vis. Appl. 2011, 22, 913–926. [Google Scholar] [CrossRef]
  44. Gupta, R.; Patil, H.; Mittal, A. Robust order-based methods for feature description. In Proceedings of the 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, San Francisco, CA, USA, 13–18 June 2010; IEEE: San Francisco, CA, USA, 2010; pp. 334–341. [Google Scholar] [CrossRef]
  45. Sánchez-Yáñez, R.E.; Kurmyshev, E.V.; Cuevas, F.J. A framework for texture classification using the coordinated clusters representation. Pattern Recognit. Lett. 2003, 24, 21–31. [Google Scholar] [CrossRef]
  46. Kurmyshev, E.V.; Poterasu, M.; Guillen-Bonilla, J.T. Image scale determination for optimal texture classification using coordinated clusters representation. Appl. Opt. 2007, 46, 1467–1476. [Google Scholar] [CrossRef]
  47. Kurmyshev, E.V.; Sanchez-Yanez, R.E. Comparative experiment with colour texture classifiers using the CCR feature space. Pattern Recognit. Lett. 2005, 26, 1346–1353. [Google Scholar] [CrossRef]
  48. Guillen-Bonilla, J.T.; Kurmyshev, E.; Fernández, A. Quantifying a similarity of classes of texture images. Appl. Opt. 2007, 46, 5562–5570. [Google Scholar] [CrossRef]
  49. Fajardo Sigüenza, E.D. Sistema de Clasificación de Textura y Color Mediante Visión por Computador [Texture and Colour Classification System Using Computer Vision]. RediUMH, Universidad Miguel Hernández, Spain, 2020. Available online: http://dspace.umh.es/handle/11000/7683 (accessed on 30 June 2023).
  50. Pointer, I. Programming PyTorch for Deep Learning: Creating and Deploying Deep Learning Applications, 1st ed.; O’Reilly Media: Sebastopol, CA, USA, 2019. [Google Scholar]
  51. Brownlee, J. Deep Learning for Computer Vision: Image Classification, Object Detection and Face Recognition in Python; Machine Learning Mastery: Vermont, Australia, 2020. [Google Scholar]
  52. Tao, L.; Mughees, A. Deep Learning for Hyperspectral Image Analysis and Classification; Springer: Singapore, 2021; ISBN 978-981-334-420-4. [Google Scholar]
  53. Zhang, A.; Lipton, Z.C.; Li, M.; Smola, A.J. Dive into Deep Learning. arXiv 2022, arXiv:2106.11342. [Google Scholar] [CrossRef]
  54. Kingma, D.P.; Ba, J. Adam: A Method for Stochastic Optimization. arXiv 2017, arXiv:1412.6980. Available online: http://arxiv.org/abs/1412.6980 (accessed on 30 June 2023).
  55. Dhamdhere, K.; Sundararajan, M.; Yan, Q. How Important Is a Neuron? arXiv 2018, arXiv:1805.12233. Available online: http://arxiv.org/abs/1805.12233 (accessed on 28 June 2023).
Figure 1. LBP texture unit.
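As an illustration of the operator in Figure 1, the following minimal sketch computes a 256-bin LBP histogram over 3 × 3 neighborhoods. It assumes an 8-bit grayscale image stored as a 2-D NumPy array; the clockwise neighbor ordering and the normalization are illustrative choices, not necessarily the exact conventions used in this work.

```python
import numpy as np

def lbp_histogram(img):
    """256-bin LBP histogram over 3x3 neighborhoods (illustrative sketch)."""
    h, w = img.shape
    # Offsets of the 8 neighbors, clockwise from the top-left pixel
    # (the ordering is an arbitrary choice here).
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    hist = np.zeros(256, dtype=np.int64)
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            center = img[i, j]
            code = 0
            # Each neighbor >= center sets one bit of the 8-bit texture code.
            for k, (di, dj) in enumerate(offsets):
                if img[i + di, j + dj] >= center:
                    code |= 1 << k
            hist[code] += 1
    return hist / hist.sum()  # normalized 256-dim feature vector
```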
Figure 2. OC_LBP texture unit.
Figure 3. CS_LTP texture unit.
Figure 4. ICS_TS texture unit.
Figure 5. CCR texture unit.
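The CCR unit of Figure 5 operates on a binarized image: each 3 × 3 block of the binary image is read as a 9-bit code, which yields the 512-bin histogram dimension reported in Table 1. A minimal sketch follows; the global-mean binarization threshold is an assumption, as the thresholding step used in this work may differ.

```python
import numpy as np

def ccr_histogram(img, thresh=None):
    """512-bin CCR histogram: binarize, then count 3x3 binary patterns
    (illustrative sketch; the binarization rule is an assumption)."""
    if thresh is None:
        thresh = img.mean()  # assumed global-mean threshold
    b = (img >= thresh).astype(np.int64)
    h, w = b.shape
    weights = (2 ** np.arange(9)).reshape(3, 3)  # bit weight of each cell
    hist = np.zeros(512, dtype=np.int64)
    for i in range(h - 2):
        for j in range(w - 2):
            code = int((b[i:i + 3, j:j + 3] * weights).sum())  # 0..511
            hist[code] += 1
    return hist / hist.sum()  # normalized 512-dim feature vector
```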
Figure 6. Neural network structure.
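A PyTorch sketch of a network with the general structure of Figure 6 (input layer, one hidden layer, output layer, as used for LBP and CCR in Table 1) is shown below. The hidden width of 64 is an assumed value; only the input dimension (e.g., 256 for LBP) and the three output classes are fixed by the tables.

```python
import torch.nn as nn

class TextureClassifier(nn.Module):
    """Input layer -> hidden layer -> 3-class output (hidden width assumed)."""
    def __init__(self, in_dim=256, hidden=64, n_classes=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden),
            nn.Sigmoid(),                 # sigmoid activation, as in Figure 9a
            nn.Linear(hidden, n_classes), # one score per cervix type
        )

    def forward(self, x):
        return self.net(x)
```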
Figure 7. Confusion matrices obtained during the recognition of carcinogenic images when the LBP histogram is applied for characterization with the multi-class statistical classifier: (a) Type 1, E = 100%; (b) Type 2, E = 100%; (c) Type 3, E = 100%; and (d) Types 1, 2, and 3 carcinogenic images combined, E = 100%.
Figure 8. Efficiency results obtained in the numerical experiments performed for the classification of carcinogenic images, deploying the multi-class statistical classifier and the LBP, OC_LBP, CS_LTP, ICS_TS, and CCR histograms as characteristic vectors.
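The multi-class statistical classifier itself is defined in the methods section. Purely as a stand-in to show how histogram feature vectors feed a multi-class decision of the kind summarized in Figure 8, the sketch below uses a generic nearest-class-mean rule; this is not the classifier of this work, only an illustration of the pipeline shape.

```python
import numpy as np

def fit_centroids(X, y, n_classes=3):
    """Mean histogram of each class; X is (n_samples, dim), y in {0,1,2}."""
    return np.stack([X[y == c].mean(axis=0) for c in range(n_classes)])

def predict(X, centroids):
    """Assign each sample to the nearest class mean (Euclidean distance)."""
    d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    return d.argmin(axis=1)
```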
Figure 9. (a) Sigmoid activation function; (b) gradient-descent calculation on the loss function.
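A self-contained training-step sketch combining Figure 9a (sigmoid activation) and Figure 9b (descent on the loss surface) is given below, using the Adam optimizer [54]. The hidden width (64), learning rate, and cross-entropy loss are assumptions, not values reported here.

```python
import torch
import torch.nn as nn

# Assumed architecture: 256-dim histogram in, 3 cervix types out.
model = nn.Sequential(nn.Linear(256, 64), nn.Sigmoid(), nn.Linear(64, 3))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)  # Adam [54]
loss_fn = nn.CrossEntropyLoss()

def train_step(x, y):
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)  # forward pass and loss value
    loss.backward()              # gradients of the loss (Figure 9b)
    optimizer.step()             # one descent step along -gradient
    return loss.item()
```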
Figure 10. Classification efficiency of carcinogenic laceration images, deploying neural networks and the LBP, OC_LBP, CS_LTP, ICS_TS, and CCR spectra as input signals.
Figure 11. ROC curves for the statistical classifier with texture descriptors: (a) LBP, (b) OC_LBP, (c) CS_LTP, (d) ICS_TS, and (e) CCR.
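ROC curves such as those in Figures 11 and 12 can be produced one-vs-rest for the three cervix types. A scikit-learn sketch follows; `scores` is assumed to be an (n_samples × 3) array of per-class classifier scores and `y` the true integer labels.

```python
from sklearn.metrics import roc_curve, auc
from sklearn.preprocessing import label_binarize

def roc_per_class(y, scores, n_classes=3):
    """One-vs-rest ROC curve and AUC for each of the three cervix types."""
    y_bin = label_binarize(y, classes=list(range(n_classes)))
    curves = {}
    for c in range(n_classes):
        fpr, tpr, _ = roc_curve(y_bin[:, c], scores[:, c])
        curves[c] = (fpr, tpr, auc(fpr, tpr))
    return curves
```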
Figure 12. ROC curves for the NN with the following texture descriptors: (a) LBP, (b) OC_LBP, (c) CS_LTP, (d) ICS_TS, and (e) CCR.
Figure 13. Cervical cancer image and its LBP, OC_LBP, CS_LTP, ICS_TS, and CCR histograms calculated through a 3 × 3 window. (a) Input image. (b) LBP histogram. (c) OC_LBP histogram. (d) CS_LTP histogram. (e) ICS_TS histogram. (f) CCR histogram.
Table 1. Ablation test.

| Descriptor | Characteristic Vector Dimension | Structure | Accuracy (%) |
|---|---|---|---|
| LBP | 256 | Input layer, 1 hidden layer, output layer | 98.3333 |
| OC_LBP | 32 | Input layer, 3 hidden layers, output layer | 81.6667 |
| CS_LTP | 9 | Input layer, 5 hidden layers, output layer | 36.6667 |
| ICS_TS | 18 | Input layer, 3 hidden layers, output layer | 55 |
| CCR | 512 | Input layer, 1 hidden layer, output layer | 98.3333 |
Table 2. Comparison of techniques.

| Employed Technique | Accuracy | Reference |
|---|---|---|
| LBP/Multi-class | 100% | Proposed |
| OC_LBP/Multi-class | 98.3333% | Proposed |
| CS_LTP/Multi-class | 96.6667% | Proposed |
| ICS_TS/Multi-class | 98.3333% | Proposed |
| CCR/Multi-class | 83.3333% | Proposed |
| LBP/Neural network | 98.3333% | Proposed |
| OC_LBP/Neural network | 81.6667% | Proposed |
| CS_LTP/Neural network | 36.6667% | Proposed |
| ICS_TS/Neural network | 55% | Proposed |
| CCR/Neural network | 98.3333% | Proposed |
| Random Forest, Neural Network | 93.6% | [18] |
| CNN | 100% | [32] |
| CNN | 99.7% | [27] |
Table 3. Confusion matrix for the CCR descriptor with the statistical classifier.

| Category | Type I | Type II | Type III | Total | Errors |
|---|---|---|---|---|---|
| Type I | 18 | | | 20 | 2 |
| Type II | | 16 | | 20 | 4 |
| Type III | | | 16 | 20 | 4 |
| Total | | | | 60 | |
Table 4. Confusion matrix employing OC_LBP with the neural network.

| Category | Type I | Type II | Type III | Total | Errors |
|---|---|---|---|---|---|
| Type I | 15 | | | 20 | 5 |
| Type II | | 18 | | 20 | 2 |
| Type III | | | 16 | 20 | 4 |
| Total | | | | 60 | |
Table 5. Confusion matrix employing CS_LTP with the neural network.

| Category | Type I | Type II | Type III | Total | Errors |
|---|---|---|---|---|---|
| Type I | 6 | | | 20 | 14 |
| Type II | | 11 | | 20 | 9 |
| Type III | | | 5 | 20 | 15 |
| Total | | | | 60 | |
Table 6. Confusion matrix employing ICS_TS with the neural network.

| Category | Type I | Type II | Type III | Total | Errors |
|---|---|---|---|---|---|
| Type I | 9 | | | 20 | 11 |
| Type II | | 16 | | 20 | 4 |
| Type III | | | 8 | 20 | 12 |
| Total | | | | 60 | |
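The efficiency figures of Tables 3–7 follow directly from the confusion matrices as E% = correct classifications / total images. The sketch below reproduces the Table 3 diagonal (CCR with the statistical classifier); the off-diagonal placement of the errors is hypothetical, since the tables report only per-class totals and error counts.

```python
import numpy as np

def efficiency(cm):
    """E% = trace (correct) over the grand total of classified images."""
    return 100.0 * np.trace(cm) / cm.sum()

cm_ccr = np.array([[18, 1, 1],    # row sums are the 20 images per class;
                   [2, 16, 2],    # off-diagonal values are hypothetical
                   [2, 2, 16]])
print(f"E = {efficiency(cm_ccr):.4f}%")  # 83.3333%, matching Table 7
```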
Table 7. Mean value of classification efficiency of carcinogenic images.

| Texture Spectrum (Histogram) | Statistical Classifier, E (%) | Neural Network, E (%) |
|---|---|---|
| LBP | 100 | 98.3333 |
| OC_LBP | 98.3333 | 81.6667 |
| CS_LTP | 96.6667 | 36.6667 |
| ICS_TS | 98.3333 | 55 |
| CCR | 83.3333 | 98.3333 |

Note: Green denotes the most efficient techniques, and blue highlights the least efficient ones.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
