Article

RGB Channel Superposition Algorithm with Acetowhite Mask Images in a Cervical Cancer Classification Deep Learning Model

1 Department of Biomedical Engineering, Gil Medical Center, College of Medicine, Gachon University, 21 Namdong-daero 774 Beon-gil, Namdong-gu, Incheon 21565, Korea
2 Department of Obstetrics & Gynecology, Seoul Hospital, Ewha Womans University, Seoul 07804, Korea
3 Department of Obstetrics & Gynecology, Bucheon Hospital, Soonchunhyang University, Bucheon-si 14584, Korea
4 R & D Center, NTL Medical Institute, Seongnam-si 13449, Korea
5 Department of Health Sciences and Technology, Gachon Advanced Institute for Health Sciences and Technology (GAIHST), Gachon University, Incheon 21565, Korea
* Author to whom correspondence should be addressed.
Sensors 2022, 22(9), 3564; https://doi.org/10.3390/s22093564
Submission received: 9 April 2022 / Revised: 4 May 2022 / Accepted: 5 May 2022 / Published: 7 May 2022
(This article belongs to the Section Sensing and Imaging)

Abstract

Cervical cancer is one of the main causes of cancer death in women; however, it can be treated successfully when detected at an early stage. This study proposes an image processing algorithm based on acetowhite, an important criterion for diagnosing cervical cancer, to increase the accuracy of a deep learning classification model. We compared the performance of models trained on three inputs: the original images without image processing, mask images made with acetowhite as the region of interest, and images produced by the proposed algorithm. The deep learning classification model trained on images generated by the proposed algorithm achieved an accuracy of 81.31%, approximately 9% higher than the model trained on original images and approximately 5% higher than the model trained on acetowhite mask images. Our results suggest that the proposed acetowhite-based algorithm can outperform other image processing approaches for classifying the stages of cervical images.

1. Introduction

Cervical cancer is the fourth most common cancer in women worldwide. In 2018, approximately 570,000 women were diagnosed with cervical cancer, and an estimated 311,000 women, representing 7.5% of all female cancer deaths, died from the disease worldwide [1]. Cervical lesions are divided into two large groups: atypical (A) and positive (P), the precancerous stage. A is graded into atypical 1 (A1) and atypical 2 (A2), while P, denoting human papillomavirus-infected tissue in which cervical cancer progresses, is classified into positive 1A (P1A), positive 1B (P1B), and positive 2 (P2) depending on the state of the lesions [2]. Cervical cancer is one of the cancers that can be treated successfully, provided it is detected early and managed effectively [3,4,5]. Therefore, early diagnosis through regular screening is important. Distinguishing between A and P is particularly important because treatment is required for case P, the initial state of the lesion. Furthermore, in actual cases, A1 accounts for the overwhelming majority of atypical cases, and P1B accounts for the largest number of dysplasia cases [6]. Therefore, a clear distinction between A1 and P1B cases is necessary.
Tests for the early diagnosis of cervical cancer include the Pap smear, colposcopy, and cervicography. The Pap smear, one of the established methods for diagnosing cervical cancer, is based on cytology and requires electricity for a microscope, consumables for examination, and experts to interpret the results, making it difficult to implement in resource-limited settings [7]. In addition, its sensitivity for detecting invasive cancer is low, so repeated screening is required [8]. To compensate for the high false-negative rate of the Pap smear, colposcopy is performed to diagnose lesions by directly observing the changed cervix after applying 3–5% acetic acid. Colposcopy shows high accuracy, but depending on the experience of the examining specialist, small lesions may be missed, leading to inconsistent results [9,10]. Cervicography diagnoses the lesion by photographing the cervix coated with acetic acid and enlarging the image. This test has the advantage of being simple and objective [11]. However, technical defects may occur when the visual field is obstructed, and metaplasia and similar changes can cause a relatively high false-positive rate [12,13].
When it is difficult to distinguish normal tissue from lesions in any of these diagnostic tests, one of the important abnormal findings for lesion diagnosis is the appearance of white areas on the cervix after acetic acid application, that is, the expression of acetowhite [14,15,16,17]. The density of acetowhite areas generally increases with lesion severity. Clear and thin acetowhite areas are most likely due to immature metaplasia or inflammation, whereas thin but opaque acetowhite areas are more likely to indicate subclinical papillomavirus infection (SPI) or CIN1. Distinctly opaque acetowhite areas after acetic acid application suggest high-grade squamous intraepithelial lesions (HSILs). In addition, if the edge of the acetowhite region is unclear or angled, it can be attributed to metaplasia, SPI, or CIN1, and the more regular the boundary of the acetowhite region, the more likely an HSIL is [18].
Because the color and shape of acetowhite are important criteria for diagnosing lesions, acetowhite is gaining attention as a tool to overcome the limitations in accuracy and diagnostic efficiency of conventional colposcopy [19]. In 2015, Huang et al. achieved an average Dice similarity coefficient (DSC) of 0.7765 by eliminating unbalanced labels through class-averaging graph-based transduction on superpixels generated with the k-means clustering algorithm [20]. In 2021, Yue et al. performed acetowhite segmentation with a 0.835 Dice coefficient and 93.40% precision using a deep attention network applied to images processed with a specular reflection removal algorithm [21]. Also in 2021, Liu et al. extracted the cervical region using the k-means clustering algorithm and then segmented acetowhite with an accuracy of 90.36% using a pretrained ResNet101-based DeepLab V3+ network [22]. To date, deep learning has been used as a tool for the automatic segmentation of acetowhite [23,24,25]. However, to the best of our knowledge, no classification model based on acetowhite has been reported.
Therefore, in this paper, we compared model performance for three inputs: the original image without image processing, the mask image with acetowhite as the region of interest, and the RGB channel superposition image built from the original image and the mask image. We propose an image processing algorithm that accurately classifies images of the A1 and P1B states, which lie on the boundary between normal and abnormal, through RGB channel superposition using the acetowhite mask image. This method enables the model to learn intensively the characteristics of acetowhite according to the state of each lesion.

2. Methods

2.1. Data

The data in this study consisted of simple atypical (A1) and dysplastic (P1B) images: 438 A1 and 477 P1B images were used. Of the cervical images, 670 were obtained using the Dr. Cervicam+ camera (https://ntlhealthcare.com (accessed on 17 March 2021), Seongnam-si, Korea), and 245 were obtained using the Dr. Cervicam camera (https://ntlhealthcare.com (accessed on 17 March 2021), Seongnam-si, Korea). Regarding the age composition of the patients, 190 of the 915 cases were sampled and surveyed: 46.32% were in their 20s or younger, 28.95% in their 30s, 17.37% in their 40s, and 7.37% in their 50s or older. The training set used for training the deep learning model and the test set used for evaluating it were split at a ratio of 8:2. The training set comprised 350 A1 and 382 P1B images, and the test set comprised 108 A1 and 95 P1B images.

2.2. Image Preprocessing

For the collected cervical images, an expert manually annotated the acetowhite region as a rectangle using the NTL AI Data Manager system (https://ntlhealthcare.com (accessed on 23 May 2021), Seongnam-si, Korea). The background outside the annotated area was set to black to create a mask image of the acetowhite area. All captured cervical images had a constant size of 1504 × 1000 pixels. Besides the external os located in the center of the image, the vaginal wall and the colposcope appeared on both sides, so parts unnecessary for learning were included. Accordingly, both sides were cropped around the center so that the aspect ratio of all images was the same, and the images were resized to 256 × 256 pixels. To generate an RGB channel superposition image, the acetowhite mask image and the original image were each separated into their individual RGB channel images.
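As a rough illustration, the following is a minimal preprocessing sketch using OpenCV; the function name and the rectangular annotation format (x, y, w, h) are our assumptions, not the authors' exact pipeline.

```python
# A minimal preprocessing sketch, assuming OpenCV (cv2); the annotation
# format (x, y, w, h, in resized coordinates) is hypothetical.
import cv2
import numpy as np

def preprocess(image_path, box):
    """Center-crop a 1504 x 1000 capture to a square, resize to 256 x 256,
    and build the acetowhite mask image (background set to black)."""
    img = cv2.imread(image_path)              # BGR array, shape (1000, 1504, 3)
    h, w = img.shape[:2]
    left = (w - h) // 2                       # crop both sides around the center
    img = img[:, left:left + h]               # square crop keeps the external os
    img = cv2.resize(img, (256, 256))

    x, y, bw, bh = box                        # expert-annotated rectangle
    mask = np.zeros_like(img)                 # black background
    mask[y:y + bh, x:x + bw] = img[y:y + bh, x:x + bw]
    return img, mask
```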

2.3. RGB Channel Superposition

The RGB channel superposition image highlights the acetowhite region of interest while still allowing the model to learn the pixel values of the acetowhite periphery.
To create an RGB channel superposition image, the original image was split into its single-channel R, G, and B images, defined as OR, OG, and OB, respectively. Likewise, the acetowhite mask image created during preprocessing was split into single-channel images defined as MR, MG, and MB. The RGB channel superposition image was created by selecting two channels from the three channels of the original image and one channel from the three channels of the mask image, placing each single-channel image into one of the three RGB channels, and merging them. Figure 1 shows a schematic of the RGB channel superposition process.
For example, if MR is selected from the three channels of the mask image and OG and OB from the three channels of the original image, the acetowhite region appears purplish-red while the remaining areas appear blue. Because one image has three RGB channels, selecting one of the three mask channels and two of the three original channels in this way yields a total of nine cases. Figure 2 shows each of the nine cases created by RGB channel superposition.
Each set of RGB channel superposition images consisted of 438 A1 and 477 P1B images, identical in composition to the original dataset, and was used for training and testing each model. The OpenCV library (version 4.5.0) was used to superpose the RGB channels of the original image and the acetowhite mask image.
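The nine combinations can be generated in a few lines with OpenCV; the sketch below is an illustrative assumption of the procedure (the function name, the channel-to-slot assignment, and the dictionary keys are ours, not from the paper).

```python
# A sketch of RGB channel superposition with OpenCV: one channel from the
# acetowhite mask image plus two channels from the original image. The exact
# slot ordering used by the authors is not specified, so this is an assumption.
import itertools
import cv2

def superpose_all(original, mask):
    # OpenCV stores images as B, G, R: index 2 = R, 1 = G, 0 = B.
    O = {'R': original[:, :, 2], 'G': original[:, :, 1], 'B': original[:, :, 0]}
    M = {'R': mask[:, :, 2], 'G': mask[:, :, 1], 'B': mask[:, :, 0]}

    cases = {}
    for m in 'RGB':                                      # one mask channel
        for o1, o2 in itertools.combinations('RGB', 2):  # two original channels
            # Merge three single-channel images back into a 3-channel image.
            cases[f'M{m} + O{o1} + O{o2}'] = cv2.merge([M[m], O[o1], O[o2]])
    return cases                                         # nine cases in total
```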

2.4. Classification Deep Learning Model

The image classification deep learning model used in this study is ResNet. Building on the VGGNet structure, ResNet places shortcut connections between convolutional layers: the input value x is added to the output value F(x) of the trained layers, so that the network learns the residual mapping and passes F(x) + x to the next block. By learning the optimal F(x) + x, classification performance keeps increasing as the layers become deeper, and the error rate is lower than that of VGGNet or GoogLeNet [26]. ResNet 50 is a model with 50 layers in the ResNet structure. Figure 3 shows the ResNet 50 model structure, with shortcuts connected every three layers.
A training process requiring a large amount of data and time is essential for a deep learning model to achieve high performance. Therefore, transfer learning was used, which enables high performance with a small amount of data by fine-tuning the weights of a pretrained model on the prepared data [27,28]. In this study, a ResNet 50 model pretrained on ImageNet was trained using the Adam optimizer; the batch size was 40, the number of epochs was 200, and the learning rate was set to 0.0001.
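The paper does not name its deep learning framework, so the following fine-tuning sketch uses PyTorch/torchvision as an assumption; only the hyperparameters (Adam, batch size 40, 200 epochs, learning rate 0.0001) come from the text, and the random tensors stand in for the actual cervical images.

```python
# A hedged transfer learning sketch in PyTorch; the framework choice and the
# placeholder data are assumptions, the hyperparameters come from the paper.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset
from torchvision import models

model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, 2)        # two classes: A1 vs. P1B

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

# Placeholder loader; in practice this holds the 256 x 256 superposition images.
train_loader = DataLoader(
    TensorDataset(torch.randn(80, 3, 256, 256), torch.randint(0, 2, (80,))),
    batch_size=40, shuffle=True)

model.train()
for epoch in range(200):
    for images, labels in train_loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)      # cross-entropy on logits
        loss.backward()
        optimizer.step()
```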

2.5. Evaluation of the Deep Learning Model Performance

In this study, to evaluate the performance of the deep learning classification model, the precision, recall, F1-score, accuracy, and area under the curve (AUC) score were calculated by comparing the ground truth of the data with the deep learning classification results.
True negative (TN) designates cases in which a normal cervical image was classified as normal, whereas true positive (TP) designates cases in which a lesioned cervical image was classified as abnormal. A case in which a normal cervical image was classified as abnormal is defined as a false positive (FP), and a case in which an abnormal cervical image was classified as normal is defined as a false negative (FN). The four indicators used to evaluate the performance of the deep learning classification model were calculated using Equations (1)–(4).
$$\mathrm{Precision} = \frac{TP}{TP + FP} \times 100 \tag{1}$$

$$\mathrm{Recall} = \frac{TP}{TP + FN} \times 100 \tag{2}$$

$$\mathrm{F1\text{-}score} = 2 \times \frac{\mathrm{Precision} \times \mathrm{Recall}}{\mathrm{Precision} + \mathrm{Recall}} \tag{3}$$

$$\mathrm{Accuracy} = \frac{TP + TN}{TP + TN + FP + FN} \times 100 \tag{4}$$
In addition, a receiver operating characteristic (ROC) curve for the performance of each model was drawn, and AUC, which is the area under the ROC curve, was calculated. The closer the AUC is to 1, the better the model’s performance.
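These measures are standard, so a short sketch with scikit-learn suffices to reproduce them; the toy arrays below are illustrative, not data from the study.

```python
# Computing the five evaluation measures with scikit-learn; the label and
# score arrays are toy values, with 1 = abnormal (P1B) as the positive class.
from sklearn.metrics import (precision_score, recall_score, f1_score,
                             accuracy_score, roc_auc_score)

y_true  = [0, 0, 1, 1, 1, 0]                 # ground truth
y_pred  = [0, 1, 1, 1, 0, 0]                 # model's hard predictions
y_score = [0.2, 0.6, 0.9, 0.8, 0.4, 0.1]     # probability of the positive class

print(precision_score(y_true, y_pred) * 100) # Equation (1)
print(recall_score(y_true, y_pred) * 100)    # Equation (2)
print(f1_score(y_true, y_pred) * 100)        # Equation (3)
print(accuracy_score(y_true, y_pred) * 100)  # Equation (4)
print(roc_auc_score(y_true, y_score))        # area under the ROC curve
```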

2.6. Statistical Analysis

Statistical analysis was performed to confirm the statistical significance of the study results using MedCalc (version 8.2.1.0, MedCalc Software, Ostend, Belgium). The precision, recall, F1-score, accuracy, and AUC of the original image model, the acetowhite mask model, and the RGB channel superposition model were compared using the Friedman–Nemenyi test. A p-value less than 0.05 was considered statistically significant. For the RGB channel superposition model, the model showing the highest performance among the nine models was selected, and the statistical significance of the results was checked. We further confirmed the statistical significance using a critical difference diagram, which shows the mean rank of each model across the five performance evaluation measures. The lower the rank (further to the left), the better a model performs compared with the others [29].
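Although the authors used MedCalc, the same comparison can be reproduced in Python; the sketch below assumes the scipy and scikit-posthocs packages, with the five rows taken from the reported precision, recall, F1-score, accuracy, and AUC of the three models.

```python
# Friedman test with a Nemenyi post hoc; a reproduction sketch, assuming the
# scipy and scikit-posthocs packages (the paper itself used MedCalc).
import numpy as np
from scipy.stats import friedmanchisquare
import scikit_posthocs as sp

# Rows: precision, recall, F1-score, accuracy, AUC (x100).
# Columns: original image, acetowhite mask, RGB channel superposition.
scores = np.array([[84.73, 84.70, 90.05],
                   [57.45, 66.45, 72.55],
                   [68.25, 74.41, 79.94],
                   [72.46, 76.28, 81.31],
                   [73.1,  76.7,  81.7]])

stat, p = friedmanchisquare(scores[:, 0], scores[:, 1], scores[:, 2])
print(p)                                        # p < 0.05 -> significant
print(sp.posthoc_nemenyi_friedman(scores))      # pairwise model comparisons
```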

3. Results

The precision, recall, F1-score, accuracy, and AUC were calculated to evaluate the classification performance of each model trained on the original images, the acetowhite mask images, and the RGB channel superposition images. To prevent overfitting and increase the reliability of the performance evaluation, the entire dataset was divided into five parts and evaluated through five-fold cross-validation, using each part as the test set once. Table 1 shows the average performance evaluation scores of each RGB channel superposition case calculated over the five cross-validation folds.
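A five-fold split of this kind can be expressed with scikit-learn's KFold, as in the minimal sketch below; the shuffling and seed are our assumptions.

```python
# A minimal five-fold cross-validation sketch with scikit-learn's KFold;
# each of the five parts serves as the test set exactly once.
import numpy as np
from sklearn.model_selection import KFold

indices = np.arange(915)                  # indices of the 915 cervical images
kf = KFold(n_splits=5, shuffle=True, random_state=0)

for fold, (train_idx, test_idx) in enumerate(kf.split(indices)):
    # Train on indices[train_idx], evaluate on indices[test_idx], then
    # average the five fold scores as reported in Table 1.
    print(fold, len(train_idx), len(test_idx))
```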
The original image model showed a precision of 84.73%, a recall of 57.45%, an F1-score of 68.25%, and an accuracy of 72.46%. The model trained on the mask images using acetowhite as the region of interest showed a precision of 84.70%, a recall of 66.45%, an F1-score of 74.41%, and an accuracy of 76.28%. Among the models trained on images with the RGB channel superposition algorithm applied to the original and acetowhite mask images, the best-performing model showed a precision of 90.05%, a recall of 72.55%, an F1-score of 79.94%, and an accuracy of 81.31%. Table 2 shows the performance evaluation scores of each model according to the applied algorithm.
We compared the precision, recall, F1-score, accuracy, and AUC of the original image model, the acetowhite mask model, and the RGB channel superposition model using the Friedman–Nemenyi test, which yielded a p-value of 0.0388. Figure 4 shows the critical difference diagram of the models.
The ROC graph and AUC of the original image model, acetowhite mask model, and RGB channel superposition model are shown in Figure 5. The AUC values were 0.731 in the original image model, 0.767 in the acetowhite mask image model, and 0.817 in the RGB channel superposition model.

4. Conclusions

In this study, we compared the performance of deep learning classification models for cervical cancer trained on three inputs: the original images without image processing, mask images made with acetowhite as the region of interest, and RGB channel superposition images created by selecting channels from the original and acetowhite mask images. Our aim was to propose an image processing algorithm that improves classification performance. In the evaluation of classification performance, the original image model showed 72.46% accuracy and 0.731 AUC, and the acetowhite mask image model showed 76.28% accuracy and 0.767 AUC, an improvement of approximately 4% over the original image model. The model with the highest performance among the nine RGB channel superposition cases combined the R channel of the acetowhite mask with the R and B channels of the original image. This model showed an accuracy of 81.31% and an AUC of 0.817, approximately 9% higher than the original image model and approximately 5% higher than the acetowhite mask image model.
The Friedman–Nemenyi test, which verifies statistical significance, yielded a p-value of 0.0388, indicating a statistically significant difference. In addition, the critical difference diagram places the RGB channel superposition model leftmost of the three models, showing that it performs best.
The model trained on the acetowhite mask images performed better than the original image model because acetowhite, the white area that appears on the cervix after acetic acid treatment, is an important criterion for diagnosing the stage of the lesion [30]. The mask image made with acetowhite as the region of interest reduces the influence of extraneous elements, such as the vaginal wall and the colposcope, and enables the deep learning model to train efficiently on the features of acetowhite.
RGB channel superposition is an algorithm that creates an image by taking one channel from the acetowhite mask image and two channels from the original image. The performance of the RGB channel superposition model was superior to that of the original and acetowhite mask image models. For the acetowhite region of interest, the pixels of all three channels were trained, whereas for the parts outside the region of interest, only the pixels of two channels were trained. We attribute the improvement to this balance: the model uses less data from areas outside the region of interest and more information from the acetowhite region of interest during training.
Among the nine models produced by the RGB channel superposition algorithm, the one showing the highest performance combined the R channel of the acetowhite mask with the R and B channels of the original image. When the histogram of the cervical image was analyzed, the pixel counts were highest in the R channel, followed by the B and G channels. Consequently, this model trained on the images with the highest number of pixels among the nine superposition cases. It therefore achieved the highest performance, as it obtained the largest amount of pixel information for acetowhite together with peripheral pixel information from the cervical image.
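The per-channel histogram analysis can be sketched with OpenCV's calcHist; the file name is hypothetical, and the non-black pixel count is our reading of the comparison described above.

```python
# A sketch of the per-channel histogram analysis with OpenCV's calcHist;
# 'cervix.png' is a hypothetical cervical image, loaded in B, G, R order.
import cv2

img = cv2.imread('cervix.png')
for name, ch in zip('BGR', range(3)):
    hist = cv2.calcHist([img], [ch], None, [256], [0, 256])
    # Summing the non-zero intensity bins gives the channel's non-black
    # pixel count, one way to compare information across R, G, and B.
    print(name, int(hist[1:].sum()))
```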
Various methods of utilizing the acetowhite region of interest for deep learning were proposed and systematically compared in this work. However, a more advanced deep learning classification model could be developed through further research. In this study, the mask treated the acetowhite region of interest as a rectangle. If a polygonal region mask that traces a clear acetowhite boundary were created instead, the characteristics of that boundary could be trained more precisely, and the influence of areas unnecessarily included in the rectangular region would be reduced. In addition, this study used acetowhite ROI data manually annotated by specialists, so the amount of cervical data available for training was limited. The classification performance of the deep learning model could be further improved by using a sufficient amount of cervical data annotated with acetowhite, or by applying data augmentation to meet the amount of data required for training.
According to the results of this study, applying the RGB channel superposition algorithm to cervical classification images can improve the performance of a deep learning model for cervical cancer by training the acetowhite region with more pixel information than its periphery. Therefore, the diagnostic efficiency and accuracy of professional personnel in cervical cancer screening and diagnosis are expected to increase in the future. The results are also expected to aid the development of computer-aided diagnosis (CAD) systems for cervical cancer by providing various evaluation indicators for the use of acetowhite in deep learning.

Author Contributions

Conceptualization, Y.J.K. (Young Jae Kim) and K.G.K.; Resources, W.J., K.H.N. and S.N.K.; Supervision, Y.J.K. (Young Jae Kim) and K.G.K.; Writing—original draft, Y.J.K. (Yoon Ji Kim). All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by the MSIT (Ministry of Science and ICT), Korea, under the ITRC (Information Technology Research Center) support program (IITP-2022-2017-0-01630) supervised by the IITP (Institute for Information & Communications Technology Promotion); by the GRRC program of Gyeonggi province [GRRC-Gachon2020(B01), AI-based Medical Image Analysis]; and by the Gachon University Gil Medical Center (FRD2019-08).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Stelzle, D.; Tanaka, L.F.; Lee, K.K.; Ibrahim Khalil, A.; Baussano, I.; Shah, A.S.V.; McAllister, D.A.; Gottlieb, S.L.; Klug, S.J.; Winkler, A.S.; et al. Estimates of the global burden of cervical cancer associated with HIV. Lancet Glob. Health 2021, 9, e161–e169. [Google Scholar] [CrossRef]
  2. Schneider, D.L.; Herrero, R.; Bratti, C.; Greenberg, M.D.; Hildesheim, A.; Sherman, M.E.; Morales, J.; Hutchinson, M.L.; Sedlacek, T.V.; Lorincz, A.; et al. Cervicography screening for cervical cancer among 8460 women in a high- risk population. Am. J. Obstet. Gynecol. 1999, 180, 290–298. [Google Scholar] [CrossRef]
  3. World Health Organization. Global Strategy to Accelerate the Elimination of Cervical Cancer as a Public Health Problem and Its Associated Goals and Targets for the Period 2020–2030. In United Nations General Assembly; United Nations: New York, NY, USA, 2020; p. 2. [Google Scholar]
  4. Saslow, D.; Solomon, D.; Lawson, H.W.; Killackey, M.; Kulasingam, S.L.; Cain, J.; Garcia, F.A.R.; Moriarty, A.T.; Waxman, A.G.; Wilbur, D.C.; et al. American Cancer Society, American Society for Colposcopy and Cervical Pathology, and American Society for Clinical Pathology screening guidelines for the prevention and early detection of cervical cancer. CA Cancer J. Clin. 2012, 62, 147–172. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  5. Panici, P.B.; Angioli, R.; Penalver, M.; Pecorelli, S. Cervical cancer. Chemother. Gynecol. Neoplasms Curr. Ther. Nov. Approaches 2004, 361, 547–554. [Google Scholar]
  6. Cronjé, H.S.; Cooreman, B.F.; Beyer, E.; Bam, R.H.; Middlecote, B.D.; Divall, P.D.J. Screening for cervical neoplasia in a developing country utilizing cytology, cervicography and the acetic acid test. Int. J. Gynecol. Obstet. 2001, 72, 151–157. [Google Scholar] [CrossRef]
  7. Goldie, S.J.; Gaffikin, L.; Goldhaber-Fiebert, J.D.; Gordillo-Tobar, A.; Levin, C.; Mahé, C.; Wright, T.C. Cost-Effectiveness of Cervical-Cancer Screening in Five Developing Countries. N. Engl. J. Med. 2005, 353, 2158–2168. [Google Scholar] [CrossRef] [Green Version]
  8. Bedell, S.L.; Goldstein, L.S.; Goldstein, A.R.; Goldstein, A.T. Cervical Cancer Screening: Past, Present, and Future. Sex Med. Rev. 2020, 8, 28–37. [Google Scholar] [CrossRef]
  9. Jeronimo, J.; Schiffman, M. Colposcopy at a crossroads. Am. J. Obstet. Gynecol. 2006, 195, 349–353. [Google Scholar] [CrossRef]
  10. Mitchell, M.F.; Schottenfeld, D.; Tortolero-Luna, G.; Cantor, S.B.; Richards-Kortum, R. Colposcopy for the diagnosis of squamous intraepithelial lesions: A meta-analysis. Obstet. Gynecol. 1998, 91, 626–631. [Google Scholar] [CrossRef]
  11. Stafl, A. Cervicography: A new method for cervical cancer detection. Am. J. Obstet. Gynecol. 1981, 139, 815–821. [Google Scholar] [CrossRef]
  12. Szarewski, A.; Cuzick, J.; Edwards, R.; Butler, B.; Singer, A. The use of cervicography in a primary screening service. Obstet. Gynecol. Surv. 1991, 46, 632–633. [Google Scholar] [CrossRef]
  13. Kim, Y.T.; Kim, J.W.; Kim, S.H.; Kim, Y.R.; Kim, J.H.; Yoon, B.S.; Park, Y.W. Clinical usefulness of cervicogram as a primary screening test for cervical neoplasia. Yonsei Med. J. 2005, 46, 213–220. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  14. Li, W.; Venkataraman, S.; Gustafsson, U.; Oyama, J.C.; Ferris, D.G.; Lieberman, R.W. Using acetowhite opacity index for detecting cervical intraepithelial neoplasia. J. Biomed. Opt. 2009, 14, 014020. [Google Scholar] [CrossRef] [PubMed]
  15. Shaw, E.; Sellors, J.; Kaczorowski, J. Prospective evaluation of colposcopic features in predicting cervical intraepithelial neoplasia: Degree of acetowhite change most important. J. Low. Genit. Tract Dis. 2003, 7, 6–10. [Google Scholar] [CrossRef] [PubMed]
  16. Wu, T.; Cheung, T.-H.; Yim, S.-F.; Qu, J.Y. Clinical study of quantitative diagnosis of early cervical cancer based on the classification of acetowhitening kinetics. J. Biomed. Opt. 2010, 15, 026001. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  17. Balas, C.J.; Themelis, G.C.; Prokopakis, E.P.; Orfanudaki, I.; Koumantakis, E.; Helidonis, E.S. In vivo detection and staging of epithelial dysplasias and malignancies based on the quantitative assessment of acetic acid-tissue interaction kinetics. J. Photochem. Photobiol. B Biol. 1999, 53, 153–157. [Google Scholar] [CrossRef]
  18. Basu, P.; Sankaranarayanan, R. Atlas of Colposcopy—Principles and Practice: IARC CancerBase No. 13 [Internet]; International Agency for Research on Cancer: Lyon, France, 2017; Available online: https://screening.iarc.fr/atlascolpo.php (accessed on 5 October 2021).
  19. Acosta-Mesa, H.G.; Cruz-Ramírez, N.; Hernández-Jiménez, R. Aceto-White temporal pattern classification using k-NN to identify precancerous cervical lesion in colposcopic images. Comput. Biol. Med. 2009, 39, 778–784. [Google Scholar] [CrossRef]
  20. Huang, S.; Gao, M.; Yang, D.; Huang, X.; Elgammal, A.; Zhang, X. Unbalanced graph-based transduction on superpixels for automatic cervigram image segmentation. In Proceedings of the International Symposium on Biomedical Imaging, New York, NY, USA, 16–19 April 2015; pp. 1556–1559. [Google Scholar]
  21. Yue, Z.; Ding, S.; Li, X.; Yang, S.; Zhang, Y. Automatic Acetowhite Lesion Segmentation via Specular Reflection Removal and Deep Attention Network. IEEE J. Biomed. Health Inform. 2021, 25, 3529–3540. [Google Scholar] [CrossRef]
  22. Liu, J.; Liang, T.; Peng, Y.; Peng, G.; Sun, L.; Li, L.; Dong, H. Segmentation of acetowhite region in uterine cervical image based on deep learning. Technol. Health Care 2021, 30, 469–482. [Google Scholar] [CrossRef]
  23. Xiong, J.; Wang, L.; Gu, J. Image segmentation of the acetowhite region in cervix images based on chromaticity. In Final Program and Abstract Book, Proceedings of the 9th International Conference on Information Technology and Applications in Biomedicine, ITAB, Larnaca, Cyprus, 5–7 November 2009; IEEE: Piscataway, NJ, USA, 2009. [Google Scholar]
  24. Marquez-Grajales, A.; Acosta-Mesa, H.G.; Mezura-Montes, E.; Hernandez-Jimenez, R. Cervical image segmentation using active contours and evolutionary programming over temporary acetowhite patterns. In Proceedings of the 2016 IEEE Congress on Evolutionary Computation, CEC 2016, Vancouver, BC, Canada, 24–29 July 2016; pp. 3863–3870. [Google Scholar]
  25. Kudva, V.; Prasad, K.; Guruvare, S. Detection of Specular Reflection and Segmentation of Cervix Region in Uterine Cervix Images for Cervical Cancer Screening. IRBM 2017, 38, 281–291. [Google Scholar] [CrossRef]
  26. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep Residual Learning for Image Recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778. [Google Scholar] [CrossRef]
  27. Zhuang, F.; Qi, Z.; Duan, K.; Xi, D.; Zhu, Y.; Zhu, H.; Xiong, H.; He, Q. A Comprehensive Survey on Transfer Learning. Proc. IEEE 2021, 109, 43–76. [Google Scholar] [CrossRef]
  28. Rehman, A.; Naz, S.; Imran, M.; Akram, F.; Imran, M. A Deep Learning-Based Framework for Automatic Brain Tumors Classification Using Transfer Learning. Circuits Syst. Signal Process. 2020, 39, 757–775. [Google Scholar] [CrossRef]
  29. Demšar, J. Statistical comparisons of classifiers over multiple data sets. J. Mach. Learn. Res. 2006, 7, 1–30. [Google Scholar]
  30. Pogue, B.W.; Kaufman, H.B.; Zelenchuk, A.; Harper, W.; Burke, G.C.; Burke, E.E.; Harper, D.M. Analysis of acetic acid-induced whitening of high-grade squamous intraepithelial lesions. J. Biomed. Opt. 2001, 6, 397. [Google Scholar] [CrossRef] [PubMed]
Figure 1. A schematic diagram of the RGB channel superposition process: (a) acetowhite mask image; (b) original image; (c) RGB channel superposition image.
Figure 2. Nine cases of cervical images made through RGB channel superposition.
Figure 3. Diagram of the ResNet 50 model architecture.
Figure 4. Critical difference diagram of the Friedman–Nemenyi test for deep learning model performance comparison. The numbers show the ranks of the three models; the lower the rank, the better the model's performance.
Figure 5. ROC graph and AUC of the original, acetowhite mask, and RGB channel superposition models.
Table 1. Deep learning model performance evaluation score of each RGB channel superposition case.

| Case | Precision (%) | Recall (%) | F1-Score (%) | Accuracy (%) |
|---|---|---|---|---|
| MR + OG + OB | 90.18 | 68.51 | 77.51 | 79.56 |
| MR + OG + OR | 89.60 | 69.59 | 77.96 | 79.89 |
| MR + OB + OR | 90.05 | 72.55 | 79.94 | 81.31 |
| MG + OG + OB | 89.18 | 70.46 | 78.61 | 80.22 |
| MG + OG + OR | 89.75 | 70.03 | 78.37 | 80.22 |
| MG + OB + OR | 89.86 | 68.96 | 77.85 | 79.67 |
| MB + OG + OB | 91.19 | 68.53 | 77.58 | 79.89 |
| MB + OG + OR | 87.88 | 70.65 | 78.10 | 79.67 |
| MB + OB + OR | 88.77 | 68.35 | 77.26 | 78.80 |
Table 2. Performance evaluation score of each deep learning model according to the applied algorithm.

| Model | Precision (%) | Recall (%) | F1-Score (%) | Accuracy (%) |
|---|---|---|---|---|
| Original image | 84.73 | 57.45 | 68.25 | 72.46 |
| Acetowhite mask | 84.70 | 66.45 | 74.41 | 76.28 |
| RGB channel superposition | 90.05 | 72.55 | 79.94 | 81.31 |
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
