Development of a Machine Learning Model for the Classification of Enterobius vermicularis Egg
Abstract
1. Introduction
2. Materials and Methods
2.1. Sample Preparation and Image Capturing
2.2. Model Training and Image Recognition
2.2.1. Data Preparation
2.2.2. Data Augmentation
2.2.3. Our Publicly Accessible Dataset of E. vermicularis Egg Images
2.3. Model Selection and Comparative Analysis
2.3.1. Criteria for Comparative Analysis
2.3.2. A Proposed Convolutional Neural Network Model for E. vermicularis
2.4. Model Training and Validation
2.5. Object Detection and Bounding Box
2.5.1. Image Reading and Preprocessing
- Loading images: Each image is imported using OpenCV’s cv2.imread() function, which loads it into the pipeline as an array while preserving its integrity and original format.
- Image conversion: To simplify processing, images are converted to grayscale with cv2.cvtColor(image, cv2.COLOR_BGR2GRAY). This reduces the data’s dimensionality and emphasizes textural and shape-based features, which are more informative for this task than color.
- Resizing images: Uniform input dimensions are critical for the CNN’s performance. Images are resized to a standard size (e.g., 256 × 256 pixels) with cv2.resize(), meeting the model’s input specification and ensuring consistency across all data.
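The three preprocessing steps above can be sketched without the OpenCV dependency. This minimal NumPy version mirrors the BGR-to-grayscale luma weights used by cv2.COLOR_BGR2GRAY and substitutes nearest-neighbour resizing for cv2.resize’s default bilinear interpolation; both substitutions are assumptions made only to keep the sketch self-contained:

```python
import numpy as np

def preprocess(image_bgr, size=256):
    """Mirror the pipeline's steps: grayscale conversion, then resizing.

    image_bgr: H x W x 3 uint8 array in BGR channel order, as cv2.imread returns.
    """
    # Grayscale conversion with the same luma weights as cv2.COLOR_BGR2GRAY
    # (B, G, R -> 0.114, 0.587, 0.299).
    weights = np.array([0.114, 0.587, 0.299], dtype=np.float32)
    gray = image_bgr.astype(np.float32) @ weights

    # Nearest-neighbour resize to size x size (cv2.resize would default to
    # bilinear; nearest keeps this sketch dependency-free).
    h, w = gray.shape
    rows = np.arange(size) * h // size
    cols = np.arange(size) * w // size
    resized = gray[rows][:, cols]
    return resized.astype(np.uint8)
```

In the actual pipeline, the cv2 calls named above would replace the NumPy arithmetic; the output shape and dtype are the same either way.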
2.5.2. Object Detection and Analysis
- Initialization: Key parameters, such as the bounding box size and step size, are set. These parameters define the sliding-window mechanism used in our object detection approach.
- Patching and searching: Patches are extracted from the original image, each resized to 370 × 370 pixels, with a step size of 50 pixels. Each patch is then processed by the pre-trained CNN to predict the presence of E. vermicularis eggs.
- Heatmap generation: A heatmap is generated from the model’s predictions, highlighting areas whose scores exceed a set threshold (e.g., 0.8). These areas indicate potential E. vermicularis egg locations within the image.
- Object annotation: Regions identified with high confidence are marked with bounding boxes and automatically annotated by the model, indicating the detected regions of interest within the image.
- Output analysis: The results are compiled into two outputs: a heatmap of predicted object locations and the original image annotated with bounding boxes, class labels, and confidence scores around the detected objects.
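The patching, heatmap, and annotation steps above can be condensed into a short sketch. The window size (370), step size (50), and threshold (0.8) follow the text; the scoring function is a stand-in for the trained CNN’s forward pass, and the toy brightness scorer is purely illustrative:

```python
import numpy as np

def sliding_window_detect(image, score_fn, win=370, step=50, threshold=0.8):
    """Slide a win x win window over a grayscale image, score each patch,
    and return a heatmap plus bounding boxes for high-confidence hits."""
    h, w = image.shape
    heatmap = np.zeros((h, w), dtype=np.float32)
    boxes = []
    for y in range(0, h - win + 1, step):
        for x in range(0, w - win + 1, step):
            patch = image[y:y + win, x:x + win]
            score = score_fn(patch)  # stand-in for the CNN prediction
            # Overlapping windows keep the highest score seen per pixel.
            heatmap[y:y + win, x:x + win] = np.maximum(
                heatmap[y:y + win, x:x + win], score)
            if score >= threshold:   # e.g., 0.8 as in the text
                boxes.append((x, y, win, win, score))
    return heatmap, boxes

# Toy scorer that "detects" bright patches; a real pipeline would call the
# trained model here instead.
bright = lambda p: float(p.mean() / 255.0)
```

The returned heatmap and box list correspond to the two outputs described above: the heatmap for visual inspection, and the boxes for drawing annotations on the original image.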
2.5.3. Evaluation
3. Results
3.1. Outcomes of Image Recognition
3.2. Results of Image Identification
4. Discussion
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
| Models | Augmented Data | Accuracy | Precision | Recall | F1-Score | ROC-AUC | File Size (KB) |
|---|---|---|---|---|---|---|---|
| Proposed CNN | No | 0.71 | 0.71 | 0.71 | 0.70 | 0.77 | 2804 |
| Proposed CNN | Yes | 0.90 | 0.90 | 0.90 | 0.89 | 0.97 | 2804 |
| MobileNet | Yes | 0.96 | 0.96 | 0.96 | 0.95 | 1.00 | 50,439 |
| EfficientNetB1 | Yes | 0.89 | 0.90 | 0.89 | 0.88 | 0.94 | 93,516 |
| DenseNet121 | Yes | 0.97 | 0.97 | 0.97 | 0.96 | 1.00 | 96,155 |
| Xception | Yes | 0.99 | 0.99 | 0.99 | 0.99 | 1.00 | 269,365 |
| InceptionV3 | Yes | 0.98 | 0.98 | 0.98 | 0.97 | 1.00 | 281,301 |
| ResNet50 | Yes | 0.99 | 0.99 | 0.99 | 0.99 | 1.00 | 375,244 |
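The accuracy, precision, recall, and F1-score columns in the table derive from confusion-matrix counts. A minimal sketch for the binary egg/non-egg case (the counts in the usage note are illustrative only, not taken from the study):

```python
def binary_metrics(tp, fp, fn, tn):
    """Accuracy, precision, recall, and F1 from binary confusion-matrix counts."""
    total = tp + fp + fn + tn
    accuracy = (tp + tn) / total
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    # F1 is the harmonic mean of precision and recall.
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return accuracy, precision, recall, f1
```

For example, binary_metrics(9, 1, 1, 9) gives roughly 0.9 for all four metrics, the pattern seen when false positives and false negatives are balanced and rare.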
| IoU | Image Count | Average IoU | %Average IoU |
|---|---|---|---|
| <0.5 | 3 | 0.6136 | 61.36 |
| >0.5 | 95 | 0.3241 | 32.41 |
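The IoU values above compare predicted bounding boxes against ground truth. A minimal sketch for axis-aligned boxes, assuming the (x, y, width, height) format used for the sliding-window boxes (the box format is an assumption, not stated in the text):

```python
def iou(box_a, box_b):
    """Intersection over Union of two axis-aligned boxes given as (x, y, w, h)."""
    ax, ay, aw, ah = box_a
    bx, by, bw, bh = box_b
    # Overlap extents along each axis; zero if the boxes are disjoint.
    ix = max(0, min(ax + aw, bx + bw) - max(ax, bx))
    iy = max(0, min(ay + ah, by + bh) - max(ay, by))
    inter = ix * iy
    union = aw * ah + bw * bh - inter
    return inter / union if union else 0.0
```

Identical boxes give 1.0, disjoint boxes give 0.0, and a detection is conventionally counted as correct when IoU exceeds 0.5, the cut-off used in the table.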
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Chaibutr, N.; Pongpanitanont, P.; Laymanivong, S.; Thanchomnang, T.; Janwan, P. Development of a Machine Learning Model for the Classification of Enterobius vermicularis Egg. J. Imaging 2024, 10, 212. https://doi.org/10.3390/jimaging10090212