Systematic Review

Artificial Intelligence on Diagnostic Aid of Leprosy: A Systematic Literature Review

by Jacks Renan Neves Fernandes 1, Ariel Soares Teles 2,3,*, Thayaná Ribeiro Silva Fernandes 2, Lucas Daniel Batista Lima 2, Surjeet Balhara 4, Nishu Gupta 5 and Silmar Teixeira 2
1 PhD Program in Biotechnology—Northeast Biotechnology Network, Federal University of Piauí, Teresina 64049-550, Brazil
2 Postgraduate Program in Biotechnology, Parnaíba Delta Federal University, Parnaíba 64202-020, Brazil
3 Federal Institute of Maranhão, Araioses 65570-000, Brazil
4 Department of Electronics & Communication Engineering, Bharati Vidyapeeth’s College of Engineering, New Delhi 110063, India
5 Department of Electronic Systems, Faculty of Information Technology and Electrical Engineering, Norwegian University of Science and Technology, 2815 Gjøvik, Norway
* Author to whom correspondence should be addressed.
J. Clin. Med. 2024, 13(1), 180; https://doi.org/10.3390/jcm13010180
Submission received: 2 November 2023 / Revised: 20 December 2023 / Accepted: 25 December 2023 / Published: 28 December 2023
(This article belongs to the Section Dermatology)

Abstract
Leprosy is a neglected tropical disease that can cause physical injury and mental disability. Diagnosis is primarily clinical but can be inconclusive due to the absence of initial symptoms and similarity to other dermatological diseases. Artificial intelligence (AI) techniques have been used in dermatology, assisting clinical procedures and diagnostics. In particular, AI-supported solutions have been proposed in the literature to aid in the diagnosis of leprosy, and this Systematic Literature Review (SLR) aims to characterize the state of the art. This SLR followed the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) framework and was conducted in the following databases: ACM Digital Library, IEEE Digital Library, ISI Web of Science, Scopus, and PubMed. Potentially relevant research articles were retrieved, and the researchers applied criteria to select the studies, assess their quality, and perform data extraction. In total, 1659 studies were retrieved, of which 21 were included in the review after selection. Most of the studies used images of skin lesions, classical machine learning algorithms, and multi-class classification tasks to develop models to diagnose dermatological diseases. Most of the reviewed articles did not target leprosy as the study’s primary objective but rather the classification of different skin diseases (among them, leprosy). Although AI-supported leprosy diagnosis is constantly evolving, research in this area is still in its early stages, so further studies are required to make AI solutions mature enough to be translated into clinical practice. Expanding research efforts on leprosy diagnosis, coupled with the advocacy of open science in leveraging AI for diagnostic support, can yield robust and influential outcomes.

1. Introduction

Neglected tropical diseases (NTDs) can compromise people’s quality of life, leading to physical and psychological disabilities. Such diseases are caused by infectious agents or parasites and affect more than one billion people worldwide [1]. They are more prevalent in populations in Latin America, Africa, and Asia [2,3], and are considered endemic in 13 low- and middle-income countries [4]. Leprosy is an NTD and is considered one of the oldest diseases in human history [5]. The infectious agent of the disease is the intracellular parasite Mycobacterium leprae (M. leprae), or Hansen’s bacillus, which may affect the skin, peripheral nerves, eyes, endothelial cells, bones, and mucous membranes, and may result in physical injuries and mental disabilities [6,7]. Approximately 210,000 new cases are reported annually, with 15,000 cases identified in children [8]. Leprosy is present in more than 150 countries, with 80% of cases concentrated in India, Brazil, and Indonesia, and is considered a public health problem [1].
Despite its high incidence in some regions, estimates show that only 5% of people exposed to the leprosy pathogen are actually infected, and only 20% of those develop the disease [9]. Even people who do not develop the disease because they have innate immunity may experience a period in which the bacillus is released through the upper respiratory tract, which is the most common route of transmission [10,11]. Diagnosing leprosy is challenging since symptoms take from two months to 20 years to appear and there is no gold standard diagnostic test [12]. The diagnosis is predominantly made by analyzing clinical and dermato-neurological signs; complementary tests, such as the heat sensitivity test, the Mitsuda intradermal reaction test, and serology, may also be used [13]. However, the absence of early symptomatology and similarity to other dermatological conditions can lead to an inconclusive diagnosis and, consequently, a lack of appropriate treatment [14].
Artificial intelligence (AI) is a growing technological area and, thanks to machine/deep learning (ML/DL), has gained increasing prominence in medicine. ML/DL techniques encompass statistical models and algorithms capable of progressively learning from data, predicting features, and executing a task [15]. In particular, DL systems can process complex, high-dimensional data such as images [16,17]. In recent years, ML/DL applications have increased exponentially as a diagnostic aid in dermatology [18,19]. Methods for the analysis and classification of dermatological lesions may involve steps such as image acquisition, pre-processing, segmentation, feature extraction, and lesion classification [5,20].
In recent years, secondary studies have addressed the use of AI in the health area, especially in dermatology, given the need for recognition and analysis of images with high speed and accuracy [21]. For example, Brinker et al. [22] reviewed studies focused on the development of skin lesion classifiers using Convolutional Neural Networks (CNNs). Popescu et al. [23] presented the advances in the detection of melanoma using artificial neural networks (ANNs). Wu et al. [24] provided an overview of the algorithms based on DL for skin cancer classification, while Kumar et al. [25] gathered data related to AI techniques for diagnosing various diseases, including skin diseases. Yu et al. [26] summarized a set of ML applications for psoriasis assessment and management. Different from the previous secondary studies, this Systematic Literature Review (SLR) aims to identify, analyze and characterize the state-of-the-art AI techniques for diagnostic aid of leprosy.
The remaining article is organized as follows. Section 2 addresses relevant concepts related to leprosy and its diagnosis methods, as well as concepts and application of AI in clinical medicine, for a better understanding of the review. Section 3 describes the research methodology. In Section 4, the selected studies are detailed to answer the research questions, while we discuss trends, open issues and limitations in Section 5. Finally, Section 6 concludes the review.

2. Background

2.1. Leprosy

Leprosy, one of humanity’s oldest diseases, remains a significant public health problem worldwide despite being treatable [7]. This chronic infectious condition, caused by Mycobacterium leprae, can affect the cells lining blood and lymph vessels, sensory, motor and autonomic nerves, eyes, bones, and the upper respiratory tract [27,28]. Exploring the epidemiology of this disease unveils the complex dynamics underlying its occurrence [9]. In addition to the medical complexities, individuals diagnosed with leprosy endure social discrimination, face social exclusion, suffer from a diminished quality of life and often struggle with permanent disfigurement [29].
Regarding transmissibility, even people who do not develop the condition may experience a period in which the bacillus is released through the upper respiratory tract, which is the most common route of transmission [10,11]. This transmission occurs through close and prolonged contact between a susceptible individual and an infected, bacillus-shedding individual. In addition, less common transmission can also occur through skin erosion and vertical transmission. Thus, an infected bacillus carrier is essential in transmitting leprosy [30,31,32]. The infected individual who develops the disease may present characteristic symptoms that determine the classification and, consequently, the treatment.
According to the World Health Organization (WHO), leprosy can be categorized as paucibacillary (PB) or multibacillary (MB), depending on the individual’s immune response to M. leprae. This classification, known as the operational classification, is based on the clinical appearance and bacterial index of the lesions and guides the appropriate therapeutic regimen for the patient. Individuals who have up to five skin lesions and negative intradermal smears are considered PB, and those with six or more skin lesions and positive intradermal smears are classified as MB [33,34].
M. leprae leads to loss of sensation, loss of innervation, damage within the epidermis, and lesions, which are associated with loss of myelin in Schwann cells. In Brazil, one of the countries where the disease is most endemic, the Madrid Classification (1953) [35] is used; it was later adapted by Ridley and Jopling (1966). The Madrid classification determines the type of leprosy according to the characteristics of the lesions, neural involvement, and sensitivity, subdividing it into indeterminate, tuberculoid, borderline, and virchowian leprosy, and is widely used for the differential diagnosis of leprosy [36,37]. Ridley and Jopling [38] classified leprosy into five clinical forms based on clinical, histopathological, immunological, and bacilloscopic characteristics: tuberculoid–tuberculoid, borderline–tuberculoid, borderline–borderline, borderline–lepromatous, and lepromatous–lepromatous.

2.2. Diagnosis Methods of Leprosy

Approximately 70% of leprosy cases are clinically diagnosed through clinical and epidemiological history, anamnesis, and dermatological and neurological evaluation [39]. Clinical diagnosis is based on three cardinal signs: (1) definite loss of sensation in a hypopigmented or reddened skin patch; (2) peripheral nerve thickening, with loss of sensation and weakness of the muscles innervated by the affected nerve; and (3) microscopic detection of bacilli [1,40,41]. However, 30% of patients do not have the typical characteristics of the disease, requiring additional tests, such as the Mitsuda reaction test, serological tests, and molecular biology tests [37,42]. Specifically, in the clinical diagnosis performed by health professionals, the patient’s dermatological and neurological signs and symptoms are evaluated. Skin lesions are identified when present, and a thermal sensitivity test is performed to assess sensitivity changes in the lesions [27,37].
The neurological field also needs to be assessed. Tests for analysis of irritation or itching in the eyes and bleeding or wounds in the nose, palpation of peripheral nerve trunks, and assessment of muscle strength and joint mobility in hands and feet are necessary to verify neural involvement [39,43,44]. Hand and foot mobility may be assessed by the Graded Sensory Test [45]. Neuropathic pain affects more than 60% of leprosy patients and is caused by primary damage to fine fibers, unmyelinated fibers or dysfunction of the nervous system, and can be assessed by electroneuromyography [46].
The clinical diagnosis is often insufficient, requiring additional tests, such as laboratory tests [37,41]. Bacilloscopy, for example, allows the detection of alcohol-acid-resistant bacilli, such as Hansen’s bacillus, and has a specificity of 100% and a sensitivity that varies between 34.4% and 50%. For its application, it is necessary to collect smears of intradermal scrapings from regions such as right and left earlobes, right and left elbows, and skin lesions [47,48]. Another critical test is the histopathological one, which is performed using samples from the edges of more active lesions and nerves that can help diagnose cases with atypical clinical manifestations and then direct a more accurate treatment. The specificity of this exam is from 70% to 72%; on the other hand, its sensitivity is low, ranging from 49% to 70% [33,48,49].
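The practical meaning of sensitivity and specificity figures like these depends on disease prevalence in the tested population. As an illustrative sketch (the 10% prevalence is a hypothetical value, not taken from the reviewed studies), positive and negative predictive values can be derived from the bacilloscopy figures above:

```python
def predictive_values(sens, spec, prev):
    """PPV/NPV from sensitivity, specificity, and disease prevalence."""
    tp = sens * prev            # true positives (per unit population)
    fp = (1 - spec) * (1 - prev)  # false positives
    fn = (1 - sens) * prev        # false negatives
    tn = spec * (1 - prev)        # true negatives
    ppv = tp / (tp + fp) if tp + fp else float("nan")
    npv = tn / (tn + fn)
    return ppv, npv

# Bacilloscopy figures from the text (specificity 100%, sensitivity ~50%)
# with a hypothetical 10% prevalence among tested patients.
ppv, npv = predictive_values(0.50, 1.00, 0.10)
print(round(ppv, 2), round(npv, 2))  # 1.0 0.95
```

With perfect specificity every positive smear is a true case, but the low sensitivity means roughly one in twenty tested patients would be a missed case at this assumed prevalence, which is why a negative bacilloscopy cannot rule leprosy out.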
The Mitsuda intradermal reaction test is a delayed-type cellular skin reaction against the bacillus M. leprae with high sensitivity and high specificity. Fernández’s intradermal reaction test, on the other hand, produces an early reaction with low sensitivity and thus presents a risk of cross-reactivity with other bacteria; therefore, the Mitsuda test is the most widely used [37]. A positive result indicates that the individual’s macrophages can destroy the bacillus M. leprae; when the result is negative, the individual, if exposed to M. leprae, is at greater risk of becoming ill and developing the virchowian form of leprosy. In addition, the test helps to classify leprosy as indeterminate or borderline [50].
Another widely used test is the serological test, which is important for evaluating and quantifying the bacterial load of M. leprae. Phenolic glycolipid-I (PGL-I) is the major antigenic glycolipid of M. leprae and allows the detection of anti-PGL-I immunoglobulin G (IgG) and immunoglobulin M (IgM) antibodies. The presence of IgM antibodies in response to PGL-I, which is present in the cell wall of M. leprae, helps to classify leprosy (low bacterial load for PB and high bacterial load for MB). Among the numerous methodologies, two are widely used: the enzyme-linked immunosorbent assay (ELISA) and ML-Flow, an alternative to ELISA with a lateral flow format [36]. In addition, serology is essential to identify household contacts at higher risk of developing the disease, as well as in the follow-up of cases to assess the risk of relapse [36,51].
Among the most specific diagnostic methods are molecular tests, which target species-specific deoxyribonucleic acid (DNA) and ribonucleic acid (RNA) sequences, such as the polymerase chain reaction (PCR) [47,52]. This methodology presents high specificity and sensitivity because it can detect Hansen’s bacillus DNA even when few bacilli are present [53,54,55].

2.3. Artificial Intelligence in Clinical Medicine

AI refers to the ability of intelligent agents to learn and solve problems in automated processes that impact the quality of life in society, from the automation of industrial processes and communication by smartphones to diagnostic help in medicine [21]. Machine Learning (ML) is a subfield of AI, which uses autonomous algorithms and statistical models that identify patterns and have the potential to help diagnose diseases and other medical approaches [56]. Deep learning (DL) is a sub-area of ML and uses the concept of artificial neuron layers for pattern extraction, and representation of complex and unstructured data [57,58].
Computer vision (CV), combined with AI models, enables the analysis of medical images, assisting in the diagnostic process and potentially reducing human errors [59]. Some diagnostic models based on CV have shown significant evidence of improving disease detection, such as in the early diagnostic aid of skin cancer [60].
In healthcare, AI can provide suggestions and recommendations that direct the decision-making process in clinical practice, facilitated by evaluation and testing, notwithstanding barriers, such as data availability and quality [61,62,63]. Disease prediction models use AI techniques (i.e., ML/DL) associated with data mining approaches [64]. Cardiology, pulmonary medicine, endocrinology, nephrology, gastroenterology, and neurology are some application areas of AI in the medical practice [65,66].
The diagnostic method involves detecting a disease or health condition by analyzing the individual’s clinical signs and symptoms. AI models have been used to facilitate the diagnostic process, with the development of methods capable of analyzing, classifying, and predicting an outcome using a dataset related to the various existing pathologies, such as cancer, diabetes, dengue, malaria, tuberculosis [17], and mental health [67,68].
Moreover, the improvements that AI has brought to healthcare have also reached the dermatological field [69]. ML techniques are helpful for diagnosis through skin image analysis, and personalized treatment is a future trend in this area [21,70,71]. Advances are not restricted to the analysis of melanomas and pigmented skin lesions; other dermatological conditions are also analyzed, such as psoriasis, acne, autoimmune disorders, and allergic contact dermatitis [20]. Promising results have been demonstrated in the detection of monkeypox lesions using the MobileNetV2 architecture [72] and in the early detection of skin cancer [15], where CNNs show high accuracy in disease recognition [73].

3. Methodology

This SLR followed the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) framework [74] (see the PRISMA checklist in Supplementary File S1). We addressed three distinct phases: (i) planning the conduct of the review by elaborating a review protocol; (ii) collaboratively performing the activities contained in the protocol using the online tool Parsif.al [75]; and (iii) extracting data from the selected articles, then analyzing and synthesizing the relevant information on the research topic. We registered the review protocol on PROSPERO (registration number CRD42023400323).

3.1. Research Questions

The following Research Questions (RQs) were defined:
  • (RQ1) What types of leprosy are targeted by AI models?
  • (RQ2) What data types were used to develop AI models?
  • (RQ3) What preprocessing techniques were used on the datasets?
  • (RQ4) What AI algorithms/architectures were applied to diagnose leprosy?
  • (RQ5) How well do the models perform?

3.2. Search Strategy and Selection Criteria

The following digital libraries were used to search for primary studies: ACM Digital Library, IEEE Digital Library, Web of Science, Scopus, and PubMed. The search, conducted by one researcher on 31 August 2023, was verified by two other researchers. To retrieve relevant studies, we combined two search terms with their synonyms to design the search string presented in Box 1. Three control articles were selected to guide the searches in the digital libraries [12,14,76]. The search string was validated in the databases by confirming its ability to find studies suitable for this SLR, including the control articles.
Box 1. Search string used for this SLR.
“((“Artificial Intelligence” OR “Data Science” OR “Deep Learning” OR “Machine Learning” OR “Algorithm*” OR “Predict* Model*” OR “Big Data” OR “Transfer Learning” OR “Computer Vision” OR “Text Mining” OR “Dataset” OR “Support Vector Machine” OR “Artificial Neural Network” OR “Backpropagation Neural Network” OR “Convolutional Neural Network” OR “Neural network” OR “Pattern recognition” OR “Supervised Learning” OR “Generative Adversarial Network” OR “Feature Learning” OR “Meta Data” OR “Image Segmentation” OR “Image Classifiers” OR “Image Processing” OR “Fuzzy Logic” OR “Decision Tree” OR “Decision Support System” OR “Support Vector Regression” OR “Regression” OR “Bayesian” OR “K-nearest Neighbors” OR “K-means”) AND (“Leprosy” OR “Hansen’s Disease”))”.
Inclusion and exclusion criteria were defined for selecting articles, as listed in Table 1. Initially, we retrieved documents and compared them to remove duplicate records. We screened articles for eligibility based on their title, abstract, and keywords. In a second stage, the researchers read and analyzed the full text of the screened studies to identify those within the scope of this review. We then evaluated the selection process by applying Cohen’s Kappa coefficient [77], which measures the level of agreement between researchers’ analyses. When there was no consensus among the researchers, the co-authors held discussions to resolve selection conflicts. Finally, we performed the snowballing technique [78,79] to maximize results in the selection process.
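Cohen’s Kappa compares the observed agreement between two raters with the agreement expected by chance. A minimal sketch, using hypothetical include/exclude decisions for ten screened articles (the decisions below are fabricated for illustration):

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters labeling the same items."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Observed agreement: fraction of items both raters labeled identically.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected agreement if the raters labeled independently at their
    # observed marginal rates.
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    labels = set(rater_a) | set(rater_b)
    p_e = sum((counts_a[l] / n) * (counts_b[l] / n) for l in labels)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical screening decisions: the raters disagree on one article.
a = ["inc", "exc", "exc", "inc", "exc", "inc", "exc", "exc", "inc", "exc"]
b = ["inc", "exc", "exc", "inc", "exc", "exc", "exc", "exc", "inc", "exc"]
print(round(cohens_kappa(a, b), 3))  # → 0.783
```

A value of 0.84, as reported later for this review, falls in the “almost perfect agreement” band of the usual interpretation scale.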

3.3. Quality Assessment

Two independent researchers evaluated the selected studies using a quality assessment tool adapted from Cabitza and Campagner [80] to qualitatively evaluate the methodological robustness of medical machine learning studies and the reproducibility of their findings. The checklist contains 30 items, which are quality criteria (QC), organized into six phases: problem understanding, data understanding, data preparation, modeling, validation, and deployment. Each item represents a requirement and is associated with three possible options: adequately addressed (OK); sufficiently addressed, minor revision required (mR); and inadequately addressed, major revision required (MR). The studies were individually classified on a trichotomous scale associated with the tool’s options, with scores of 1 (OK), 0.5 (mR), and 0 (MR). The quality assessment score is calculated as the sum of the scores assigned to the items. The two researchers who analyzed the studies resolved evaluation conflicts through discussions; when there was no agreement, a third researcher acted as a judge and resolved the conflicts. The QCs used are shown in Table 2.
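The scoring scheme above can be sketched as follows; the ratings in the example are hypothetical and only illustrate how per-item scores aggregate into a study’s total and its percentage of the 30-item maximum:

```python
# Per-item scores on the trichotomous scale described in the text.
SCORES = {"OK": 1.0, "mR": 0.5, "MR": 0.0}

def quality_score(ratings, n_items=30):
    """Sum the per-item scores and report the percentage of the maximum."""
    total = sum(SCORES[r] for r in ratings)
    return total, 100 * total / n_items

# Hypothetical ratings for one study across the 30 checklist items.
ratings = ["OK"] * 12 + ["mR"] * 10 + ["MR"] * 8
score, pct = quality_score(ratings)
print(score, round(pct, 1))  # 17.0 points, 56.7% of the maximum
```

Under this scheme, the 40% cutoff discussed in Section 4.4 corresponds to a total score of 12 points.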

3.4. Data Extraction

The researchers read each selected article to extract the necessary information to answer the research questions, characterize the studies, and outline opportunities for future work. Table 3 presents the items of the data extraction form and their respective research question.

4. Results

4.1. Study Selection

Figure 1 presents the PRISMA flow diagram with the study selection process. We initially retrieved 1659 articles from the digital libraries. We then identified and removed 355 duplicate studies. Among the remaining 1304 articles, 51 were selected by reading the title and abstract. After a complete reading, and after resolving conflicts through discussion, 18 papers were eligible; Cohen’s Kappa between the researchers’ analyses was 0.84 (p < 0.001), considered “almost perfect agreement” [81]. The snowballing technique resulted in the addition of three studies meeting the selection criteria. A total of 21 studies were included in the review for qualitative analysis.

4.2. Study Characterization

Table 4 presents the data extracted from the selected articles to answer the RQs.

4.3. Answering the Research Questions

4.3.1. Leprosy Types Targeted by AI Models (RQ1)

No study classified the types of leprosy according to the Madrid or Ridley–Jopling classifications. Some papers classified leprosy according to the operational classification, which is recommended by the WHO: the studies in [14,87,94] classified leprosy as either paucibacillary or multibacillary. Binary classification occurred in 43% of the selected articles [12,14,76,82,83,87,94,95,96]. The remaining 57% of the articles used multiclass classification tasks, in which the models classified different skin diseases, among them leprosy [84,85,86,88,89,90,91,92,93,97,98,99]. Also, the results revealed that in 24% of the studies the task was to classify cases as leprosy or not. Most papers do not prioritize leprosy when proposing an AI model to aid in diagnosis; rather, the works are directed at classifying skin diseases in general, including leprosy.

4.3.2. Data Types (RQ2)

The data types used in most models were images of skin lesions, and models classified leprosy against other dermatological diseases [12,82,83,84,85,86,88,89,90,91,92,93,97,98]. However, Barbieri et al. [12] used images of skin lesions combined with clinical information to develop an AI model. The clinical information was the loss of thermal sensation, nodules and papules, paresthesia in the feet, number of lesions, sex, flaking surface, itching, trunk, absence of symptoms in the skin lesion, and diffuse infiltration.
The studies [14,76,87,94,96] used test outcomes as input data for AI models, such as results from the RNA sequencing technique (RNA-Seq) and real-time reverse transcription polymerase chain reaction (RT-qPCR) used in molecular and cellular biology. The RNA-Seq technique extracts total RNA from a biological sample, converts it into complementary DNA (cDNA), and performs next-generation sequencing (NGS) [100]. The RT-qPCR and RNA-Seq techniques are used to quantify gene expression [101]. The study by Tió-Coma et al. [76] used the results of gene expression analyses from RNA-Seq and RT-qPCR as input to develop an AI model. The study by Pillai and Chouhan [96] analyzed the H37Rv strain to study the immunology and pathogenesis of tuberculosis; H37Rv is a strain of Mycobacterium tuberculosis [102] that shares characteristics with M. leprae.
To create an AI model, the study by Gama et al. [94] used age, sex, treatment time, qPCR test result (M. leprae DNA level), IgG/IgM serology level, and sputum smear index as input data. IgG/IgM serology levels indicate the amount of IgG/IgM antibodies present in a person’s blood sample and whether the person has been exposed to a pathogen, such as a virus or bacterium [103]. The sputum smear index is an indicator used to assess the bacillary load of a Mycobacterium in a sputum sample [104].
Cytokines are a group of signaling molecules produced by the immune system, such as tumor necrosis factor (TNF), interferon-gamma (IFN-y), interleukin 4 (IL-4), and interleukin 10 (IL-10), and their presence can indicate a specific disease [105]. Marçal et al. [14] used results from an in vitro assay model of the M. leprae antigen and measurements of the cytokines TNF, IFN-y, IL-4, and IL-10 as input to develop an AI model for the operational classification of leprosy.

4.3.3. Preprocessing Techniques (RQ3)

Preprocessing techniques to prepare the dataset may depend on the chosen data type and on the algorithm or architecture used to train the model. Some authors used numerical data, thus requiring the data to be normalized [12]. Researchers who used image datasets with classical ML algorithms in [86,89,92,93,97,98,99] had to apply various image preparation and feature extraction techniques. The features most explored by the authors were related to texture and edges, with applications of spatial filters aiming to correct, smooth, or enhance specific regions. In addition, image compression techniques (e.g., the YCbCr algorithm [97] and DCT [89,98]), segmentation-related techniques (e.g., binary mask [92,97], histogram [90,97], Otsu [86], global thresholding [93]), and image noise reduction techniques (e.g., median filter, smoothing filter [92,97]) were explored by the studies. Jin et al. [90] used the ResNet-50 and VGG16 architectures and the HOG technique for feature extraction. Mondal et al. [85,91] used techniques for image normalization and augmentation; [83,84] used data augmentation.
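As an illustration of one of the segmentation techniques cited above, the following sketch implements Otsu’s global thresholding from scratch on a synthetic grayscale image; the image and its brighter “lesion” patch are fabricated for the example and do not come from any reviewed dataset:

```python
import numpy as np

def otsu_threshold(img):
    """Otsu's method: pick the threshold maximizing between-class variance."""
    hist, _ = np.histogram(img, bins=256, range=(0, 256))
    p = hist / hist.sum()
    cum_w = np.cumsum(p)                    # weight of class 0 up to t
    cum_mu = np.cumsum(p * np.arange(256))  # cumulative mean intensity
    mu_total = cum_mu[-1]
    best_t, best_var = 0, 0.0
    for t in range(255):
        w0, w1 = cum_w[t], 1 - cum_w[t]
        if w0 == 0 or w1 == 0:
            continue
        mu0 = cum_mu[t] / w0
        mu1 = (mu_total - cum_mu[t]) / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

# Synthetic image: dark background with a brighter 20x20 "lesion" patch.
rng = np.random.default_rng(0)
img = rng.normal(50, 10, (64, 64))
img[20:40, 20:40] = rng.normal(180, 10, (20, 20))
img = np.clip(img, 0, 255).astype(np.uint8)

t = otsu_threshold(img)
mask = img > t  # binary mask separating the patch from the background
print(t, mask.sum())
```

The resulting binary mask is the kind of intermediate representation from which texture and edge features were then extracted in the reviewed studies.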

4.3.4. Algorithms and Architectures (RQ4)

Most of the authors chose classical ML techniques. Researchers in [12,14,76,87,94] used the classical algorithms MLP, LR, RF, and DT to train models on numerical datasets. Researchers in [86,89,90,92,93,97,98,99] used the classical ML algorithms DT, SVM, FFBPN, and ANN to build models with image datasets. Researchers in [82,83,84,85,88,91,95] used the MobileNet-V2, DenseNet-121, Inception-V3, ResNet-50, EfficientNet-B2, LeprosyNet, and Siamese Network architectures to develop models with image datasets. Refer to [106,107] for a comprehensive understanding of the ML algorithms and ANN architectures utilized in the studies.
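To make the classical pipeline concrete, the sketch below trains a logistic regression model (LR, one of the algorithms cited) with batch gradient descent on a synthetic, linearly separable numerical dataset; the two features are hypothetical stand-ins for clinical variables and the data is fabricated:

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic numerical dataset: two features with a linearly separable trend.
n = 200
X = rng.normal(size=(n, 2))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float)

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

# Logistic regression trained via batch gradient descent.
w, b, lr = np.zeros(2), 0.0, 0.5
for _ in range(500):
    p = sigmoid(X @ w + b)          # predicted probabilities
    grad_w = X.T @ (p - y) / n      # gradient of the log-loss w.r.t. weights
    grad_b = (p - y).mean()         # gradient w.r.t. the bias
    w -= lr * grad_w
    b -= lr * grad_b

acc = ((sigmoid(X @ w + b) > 0.5) == y).mean()
print(round(acc, 2))
```

The studies that used numerical datasets trained models of essentially this family (LR, MLP, RF, DT), typically via library implementations rather than hand-rolled gradient descent.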

4.3.5. Performance of the Models (RQ5)

The most common metrics used in the selected papers to measure the performance of AI models were accuracy, precision, sensitivity/recall, specificity, F1 score, and AUC. The review revealed that accuracy was used in 90% of the studies. In addition, accuracy was the only metric used to measure model performance in 52% of the articles [14,82,85,89,90,93,95,96,97,98,99]. Accuracy can be misleading in multiclass classification tasks with imbalanced datasets, in which one class may have more samples than the others. In such cases, other metrics, such as precision, recall, and F1 score, should be used together with accuracy for a clearer picture of performance [108].
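The pitfall just described is easy to demonstrate: on an imbalanced toy dataset, a degenerate classifier that never predicts leprosy still attains high accuracy while precision, recall, and F1 are all zero. A minimal sketch with fabricated labels:

```python
def metrics(y_true, y_pred, positive=1):
    """Accuracy, precision, recall, and F1 for a binary positive class."""
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    acc = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
    prec = tp / (tp + fp) if tp + fp else 0.0
    rec = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
    return acc, prec, rec, f1

# Imbalanced toy set: 95 negatives, 5 leprosy positives. A classifier
# that always predicts "negative" still scores 95% accuracy.
y_true = [0] * 95 + [1] * 5
y_pred = [0] * 100
print(metrics(y_true, y_pred))  # (0.95, 0.0, 0.0, 0.0)
```

This is why the 52% of articles reporting accuracy alone give an incomplete picture of how well their models detect the minority (leprosy) class.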
Figure 2 shows the metrics, the data types, the algorithms/architectures, and the performance of the models developed by the selected studies. A study implementing an architecture with a binary classification task called LeprosyNet obtained the best performance when considering accuracy. The most used algorithm among the selected studies is the SVM. In addition, the research revealed that the best DL techniques that used multiclass classification tasks of leprosy against other dermatoses were the CNN MobileNet-V2 and DenseNet-121 architectures. The models developed from image datasets had an average accuracy in the classification of leprosy of 89.97%. In comparison, the models created from numerical datasets reached an average accuracy of 87.98%.

4.4. Study Quality

Evaluating the quality of the selected studies allowed us to qualitatively analyze the methodological rigor of ML studies in the medical area, their contributions, and the reproducibility of their results (see the detailed quality assessment per study in Supplementary File S2). In 11 articles [83,86,90,92,93,94,95,96,97,98,99], the scores reached less than 40% of the maximum tool score. Other studies attained results above 40% of the quality criteria [76,82,87], and the study in [12] achieved 63.3%, the best evaluation. Figure 3 depicts an overview of the quality assessment result per phase of the checklist.
The modeling phase was adequately addressed by all studies evaluated. In the problem-understanding phase, the studies in [12,14,76,82,85,87,88,89,90,94,97] demonstrated satisfactory quality. In contrast, the remaining studies could have provided more robust information in the problem-understanding phase. In the phases of data understanding, data preparation, validation and deployment, the studies failed to address the quality criteria assessed.

5. Discussion

To the best of our knowledge, this is the first SLR to focus on leprosy diagnostic aid supported by AI techniques. Therefore, this work can help researchers in AI and health informatics by characterizing the studies regarding datasets, preprocessing techniques, and AI algorithms/architectures, and by comparing the performance of different ML/DL models. In this section, we identify trends and open issues in current research, which represent opportunities for future research, and acknowledge the limitations of this SLR.

5.1. Trends

We recognized several trends in the studies on AI-supported diagnostic aid for leprosy. First, we identified a trend of using image datasets (n = 16) for developing AI models for leprosy classification. We also recognized a trend toward classical supervised ML algorithms (n = 14), with SVM (n = 8), RF (n = 5), and DT (n = 4) as the most used ones. Moreover, most models were developed for multiclass classification tasks (n = 12), and the most used metric was accuracy (n = 19).
Figure 4 displays three bar charts with the number of papers published by year, categorized by the type of model (i.e., classical machine learning vs. deep learning), the type of task modeled (i.e., multiclass classification task vs. binary classification task), and the type of dataset (image data vs. numerical data).

5.2. Open Issues

Studies identified in our SLR present promising AI-based solutions for aiding leprosy diagnosis. However, we recognize that open issues remain to be addressed by further research.

5.2.1. Open Science

Open science promotes openness and accessibility of research results, including data and methods. Reproducibility is an essential component of open science, as it aims to ensure that the results of a study can be reproduced and validated by other researchers [80]. Open science adopts and promotes the Findable, Accessible, Interoperable, Reusable (FAIR) principles, which are guidelines that aim to make scientific data auditable [109]. In this regard, accountability in AI governance ensures that research is conducted ethically, transparently, and responsibly [110].
In most studies (n = 17) identified in our SLR, the dataset, code, and methods used to implement the AI models were not shared in public repositories, which negatively impacted the quality assessment of the studies (see Section 4.4). This open issue can (and should) be addressed by following open science principles: sharing the data, code, and methods of AI models for leprosy diagnosis in a public repository under a permissive license, so that the work can undergo external validation. Free online repository services, such as GitHub and Zenodo, enable this practice.

5.2.2. Data Fusion

Data fusion in AI refers to combining and integrating information from various data sources that may include different types of data, such as text, images, audio, and databases [111]. Data fusion aims to leverage the complementary information from each data source to improve the accuracy, reliability, and understanding of the results obtained. The fusion process can involve data integration, alignment, aggregation techniques, and the application of ML algorithms to explore and extract knowledge from the combined data [112].
Barbieri et al. [12] combined skin lesion images with clinical data from leprosy patients to train disease classification models. An open issue is the in-depth exploration of combining multimodal data originating from different sources (e.g., personal information, clinical signs and symptoms, skin lesion images, and information on reactions to polychemotherapy) to implement AI models that contribute to the diagnosis of leprosy.
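A minimal sketch of one common fusion strategy, early fusion by feature concatenation, is given below. The feature names, dimensions, and labels are hypothetical, synthetic assumptions; this is not a reproduction of the pipeline in [12].

```python
# Hypothetical early-fusion sketch: concatenate image-derived features with
# tabular clinical variables per patient, then train a single classifier.
# All data below are synthetic; dimensions and variables are assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(7)
n_patients = 200

image_features = rng.normal(size=(n_patients, 32))   # e.g., lesion-image descriptors
clinical_features = np.column_stack([                # e.g., age, lesion count, sensory-loss flag
    rng.integers(10, 80, n_patients),
    rng.integers(1, 10, n_patients),
    rng.integers(0, 2, n_patients),
])
# Synthetic label driven mostly by one image feature plus a small clinical signal.
y = (image_features[:, 0] + 0.05 * clinical_features[:, 1] > 0.3).astype(int)

# Early fusion: align records patient-by-patient, then concatenate the vectors.
X_fused = np.hstack([image_features, clinical_features])

scores = cross_val_score(RandomForestClassifier(random_state=0), X_fused, y, cv=5)
print(f"mean cross-validated accuracy: {scores.mean():.2f}")
```

Late fusion, i.e., training separate models per modality and combining their outputs, is an alternative design when the modalities differ strongly in scale or missingness patterns.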

5.2.3. Differential Diagnostic

Several diseases produce skin lesions with characteristics similar to those of leprosy, which can significantly increase the rate of misdiagnosis and, consequently, the stigma associated with the disease [113]. The results revealed that the proposed AI models classified leprosy according to different schemes (e.g., paucibacillary vs. multibacillary; binary classification to detect the presence of leprosy; and leprosy classified against other skin diseases). However, a research gap remains: building AI models that analyze images to classify leprosy skin lesions according to the disease's clinical forms (e.g., the Madrid or Ridley and Jopling classifications), thus facilitating the differential diagnosis, that is, the distinction between leprosy and other dermatological conditions that may present similar symptoms.

5.2.4. External Validation

External validation refers to testing a model's ability to make accurate and useful predictions on datasets not used during training, thereby providing evidence of the model's generalizability. It can improve the reliability of models, allowing them to be applied safely in different populations and clinical environments [114,115,116]. None of the studies identified in our review externally validated their models (see the detailed quality assessment per study in Supplementary File S2). Therefore, this remains an open issue that needs attention in future studies to ensure the reliability and utility of prediction models.
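The idea can be sketched with two synthetic cohorts: the model is fit only on a development cohort and then scored once on an independent cohort whose feature distribution is shifted, mimicking a different clinical site. Cohort sizes, the number of features, and the shift are assumptions made purely for illustration.

```python
# External-validation sketch with synthetic cohorts: train on the development
# cohort only; evaluate once on an external cohort with a shifted distribution.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

def make_cohort(n, shift):
    """Simulate one cohort; `shift` mimics site-specific covariate shift."""
    X = rng.normal(loc=shift, size=(n, 10))
    logits = X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=n)
    y = (logits > 1.5 * shift).astype(int)
    return X, y

X_dev, y_dev = make_cohort(300, shift=0.0)   # internal development cohort
X_ext, y_ext = make_cohort(150, shift=0.4)   # external cohort (different "site")

# Fit on the development cohort only; the external cohort is never touched
# during training or tuning.
model = LogisticRegression(max_iter=1000).fit(X_dev, y_dev)

auc_internal = roc_auc_score(y_dev, model.predict_proba(X_dev)[:, 1])
auc_external = roc_auc_score(y_ext, model.predict_proba(X_ext)[:, 1])
print(f"internal AUC: {auc_internal:.2f}  external AUC: {auc_external:.2f}")
```

A drop from internal to external performance is the typical warning sign that a model has not yet generalized beyond its development data.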

5.3. Limitations of the SLR

This SLR has limitations that should be acknowledged and considered in future research. First, we did not review the gray literature, so articles such as research reports, theses, dissertations, government reports, and tutorials were not included. Future work may therefore extend this SLR to cover the gray literature on this research topic. Second, we searched for articles only in the leading digital libraries; future work may also extend the search to additional databases.

6. Conclusions

The results of this SLR provided new insights into the literature on AI techniques for aiding leprosy diagnosis. Key trends were identified: classical supervised ML algorithms prevail, and most models are developed for multiclass classification tasks using dermatological images as a non-invasive data source. Most of the articles did not consider leprosy as the study's primary objective but rather the classification of different skin diseases, among them leprosy. In addition, most of the selected papers did not adhere to open science principles, showing low quality regarding transparency, data sharing, and responsibility. These findings highlight the need for more research on leprosy diagnosis and for promoting open science in the application of AI in healthcare to ensure reliable and impactful results. In summary, AI-supported leprosy diagnosis is constantly evolving, but research in the area is still at an early stage and not yet mature enough to be translated into clinical practice.

Supplementary Materials

The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/jcm13010180/s1, Supplementary File S1: PRISMA 2020 Checklist; Supplementary File S2: Quality Assessment—Cabitza and Campagner’s checklist (2021)—Artificial Intelligence on Diagnostic Aid of Leprosy: A Systematic Literature Review.

Author Contributions

Conceptualization, J.R.N.F. and T.R.S.F.; methodology, T.R.S.F. and A.S.T.; validation, J.R.N.F., T.R.S.F., L.D.B.L. and A.S.T.; formal analysis, J.R.N.F., T.R.S.F., L.D.B.L. and A.S.T.; investigation, J.R.N.F., L.D.B.L., and A.S.T.; writing—original draft preparation, J.R.N.F., T.R.S.F. and A.S.T.; writing—review and editing, J.R.N.F., T.R.S.F., L.D.B.L., S.B., N.G., A.S.T. and S.T.; supervision, A.S.T. and S.T.; funding acquisition, A.S.T. and S.T. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the Coordination for the Improvement of Higher Education Personnel (CAPES) [Finance Code 001], and the Brazilian National Council for Scientific and Technological Development (CNPq) [grants 308059/2022-0 and 308736/2022-2].

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

All relevant data are within this manuscript and its Supplementary Files.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
AI: Artificial Intelligence
ML: Machine Learning
DL: Deep Learning
CV: Computer Vision
NTD: Neglected Tropical Disease
CNN: Convolutional Neural Network
ANN: Artificial Neural Network
WHO: World Health Organization
PB: Paucibacillary
MB: Multibacillary
PGL-1: Phenolic glycolipid-I
ELISA: Enzyme-linked immunosorbent assay
PCR: Polymerase Chain Reaction
SVM: Support Vector Machine
RQ: Research Questions
QC: Quality Criteria
mR: Minor revision required
MR: Major revision required
ADAM: Adaptive Moment Estimation
SGD: Stochastic Gradient Descent
LR: Logistic Regression
XGB: XGBoost
RF: Random Forest
AUC: Area Under Curve
DT: Decision Trees
LOOCV: Leave-one-out cross-validation
KNN: K-Nearest Neighbors
LBP: Local Binary Pattern
WLD: Weber Local Descriptor
GLCM: Gray-Level Co-Occurrence Matrix
riLBP: Rotation invariant LBP
HOG: Histogram of Oriented Gradients
ROI: Region of Interest
GCN: Global Contrast Normalization
GAN: Generative Adversarial Network
ID3: Iterative Dichotomiser 3
SMO: Sequential Minimal Optimization
MLP: Multilayer Perceptron
FFBPN: Feed-Forward Back Propagation Network
DCT: Discrete Cosine Transform
DFT: Discrete Fourier Transform
RNA-Seq: RNA sequencing technique
RT-qPCR: Real-time Reverse Transcription Polymerase Chain Reaction
NGS: Next-generation sequencing
TNF: Tumor Necrosis Factor
IFN-γ: Interferon-gamma
IL-4: Interleukin 4
IL-10: Interleukin 10
IgG: Immunoglobulin G
IgM: Immunoglobulin M

References

  1. WHO. Leprosy. 2023. Available online: https://www.who.int/news-room/fact-sheets/detail/leprosy (accessed on 13 March 2023).
  2. Martins-Melo, F.R.; Carneiro, M.; Ramos, A.N., Jr.; Heukelbach, J.; Ribeiro, A.L.P.; Werneck, G.L. The burden of Neglected Tropical Diseases in Brazil, 1990–2016: A subnational analysis from the Global Burden of Disease Study 2016. PLoS Neglected Trop. Dis. 2018, 12, e0006559. [Google Scholar] [CrossRef] [PubMed]
  3. Ochola, E.A.; Elliott, S.J.; Karanja, D.M.S. The Impact of Neglected Tropical Diseases (NTDs) on Women’s Health and Wellbeing in Sub-Saharan Africa (SSA): A Case Study of Kenya. Int. J. Environ. Res. Public Health 2021, 18, 2180. [Google Scholar] [CrossRef] [PubMed]
  4. Pescarini, J.M.; Strina, A.; Nery, J.S.; Skalinski, L.M.; Andrade, K.V.F.d.; Penna, M.L.F.; Brickley, E.B.; Rodrigues, L.C.; Barreto, M.L.; Penna, G.O. Socioeconomic risk markers of leprosy in high-burden countries: A systematic review and meta-analysis. PLoS Neglected Trop. Dis. 2018, 12, e0006622. [Google Scholar] [CrossRef] [PubMed]
  5. Santacroce, L.; Del Prete, R.; Charitos, I.A.; Bottalico, L. Mycobacterium leprae: A historical study on the origins of leprosy and its social stigma. Infez. Med. 2021, 29, 623–632. [Google Scholar] [CrossRef] [PubMed]
  6. Han, X.Y.; Silva, F.J. On the Age of Leprosy. PLoS Neglected Trop. Dis. 2014, 8, e2544. [Google Scholar] [CrossRef] [PubMed]
  7. Mi, Z.; Liu, H.; Zhang, F. Advances in the immunology and genetics of leprosy. Front. Immunol. 2020, 11, 567. [Google Scholar] [CrossRef]
  8. Vieira, M.C.A.; Nery, J.S.; Paixão, E.S.; de Andrade, K.V.F.; Penna, G.O.; Teixeira, M.G. Leprosy in children under 15 years of age in Brazil: A systematic review of the literature. PLoS Neglected Trop. Dis. 2018, 12, e0006788. [Google Scholar] [CrossRef]
  9. Nunzi, E.; Massone, C. Leprosy: A Practical Guide; Springer: Berlin/Heidelberg, Germany, 2012. [Google Scholar] [CrossRef]
  10. Ghorpade, A. Inoculation (tattoo) leprosy: A report of 31 cases. J. Eur. Acad. Dermatol. Venereol. 2002, 16, 494–499. [Google Scholar] [CrossRef]
  11. Patrocínio, L.G.; Goulart, I.M.B.; Goulart, L.R.; Patrocínio, J.A.; Ferreira, F.R.; Fleury, R.N. Detection of Mycobacterium leprae in nasal mucosa biopsies by the polymerase chain reaction. FEMS Immunol. Med. Microbiol. 2005, 44, 311–316. [Google Scholar] [CrossRef]
  12. Barbieri, R.R.; Xu, Y.; Setian, L.; Souza-Santos, P.T.; Trivedi, A.; Cristofono, J.; Bhering, R.; White, K.; Anna M, S.; Miller, G.; et al. Reimagining leprosy elimination with AI analysis of a combination of skin lesion images with demographic and clinical data. Lancet Reg. Health-Am. 2022, 9, 100192. [Google Scholar] [CrossRef]
  13. Martins, P.V.; Iriart, J.A.B. Itinerários terapêuticos de pacientes com diagnóstico de hanseníase em Salvador, Bahia. Physis Rev. Saúde Coletiva 2014, 24, 273–289. [Google Scholar] [CrossRef]
  14. Marçal, P.H.F.; de Souza, M.L.M.; Gama, R.S.; de Oliveira, L.B.P.; Gomes, M.d.S.; do Amaral, L.R.; Pinheiro, R.O.; Sarno, E.N.; Moraes, M.O.; Fairley, J.K.; et al. Algorithm Design for a Cytokine Release Assay of Antigen-Specific In Vitro Stimuli of Circulating Leukocytes to Classify Leprosy Patients and Household Contacts. Open Forum Infect. Dis. 2022, 9, ofac036. [Google Scholar] [CrossRef] [PubMed]
  15. Das, K.; Cockerell, C.J.; Patil, A.; Pietkiewicz, P.; Giulini, M.; Grabbe, S.; Goldust, M. Machine Learning and Its Application in Skin Cancer. Int. J. Environ. Res. Public Health 2021, 18, 13409. [Google Scholar] [CrossRef]
  16. Young, A.T.; Xiong, M.; Pfau, J.; Keiser, M.J.; Wei, M.L. Artificial Intelligence in Dermatology: A Primer. J. Investig. Dermatol. 2020, 140, 1504–1512. [Google Scholar] [CrossRef] [PubMed]
  17. Dildar, M.; Akram, S.; Irfan, M.; Khan, H.U.; Ramzan, M.; Mahmood, A.R.; Alsaiari, S.A.; Saeed, A.H.M.; Alraddadi, M.O.; Mahnashi, M.H. Skin Cancer Detection: A Review Using Deep Learning Techniques. Int. J. Environ. Res. Public Health 2021, 18, 5479. [Google Scholar] [CrossRef] [PubMed]
  18. Korotkov, K.; Garcia, R. Computerized analysis of pigmented skin lesions: A review. Artif. Intell. Med. 2012, 56, 69–90. [Google Scholar] [CrossRef]
  19. Oliveira, R.B.; Filho, M.E.; Ma, Z.; Papa, J.P.; Pereira, A.S.; Tavares, J.M.R. Computational methods for the image segmentation of pigmented skin lesions: A review. Comput. Methods Programs Biomed. 2016, 131, 127–141. [Google Scholar] [CrossRef]
  20. Pai, V.V.; Pai, R.B. Artificial intelligence in dermatology and healthcare: An overview. Indian J. Dermatol. Venereol. Leprol. 2021, 87, 457–467. [Google Scholar] [CrossRef]
  21. Li, Z.; Koban, K.C.; Schenck, T.L.; Giunta, R.E.; Li, Q.; Sun, Y. Artificial Intelligence in Dermatology Image Analysis: Current Developments and Future Trends. J. Clin. Med. 2022, 11, 6826. [Google Scholar] [CrossRef]
  22. Brinker, T.J.; Hekler, A.; Utikal, J.S.; Grabe, N.; Schadendorf, D.; Klode, J.; Berking, C.; Steeb, T.; Enk, A.H.; von Kalle, C. Skin cancer classification using convolutional neural networks: Systematic review. J. Med. Internet Res. 2018, 20, e11936. [Google Scholar] [CrossRef]
  23. Popescu, D.; El-Khatib, M.; El-Khatib, H.; Ichim, L. New trends in melanoma detection using neural networks: A systematic review. Sensors 2022, 22, 496. [Google Scholar] [CrossRef]
  24. Wu, Y.; Chen, B.; Zeng, A.; Pan, D.; Wang, R.; Zhao, S. Skin cancer classification with deep learning: A systematic review. Front. Oncol. 2022, 12, 893972. [Google Scholar] [CrossRef] [PubMed]
  25. Kumar, Y.; Koul, A.; Singla, R.; Ijaz, M.F. Artificial intelligence in disease diagnosis: A systematic literature review, synthesizing framework and future research agenda. J. Ambient Intell. Humaniz. Comput. 2022, 14, 8459–8486. [Google Scholar] [CrossRef] [PubMed]
  26. Yu, K.; Syed, M.N.; Bernardis, E.; Gelfand, J.M. Machine learning applications in the evaluation and management of psoriasis: A systematic review. J. Psoriasis Psoriatic Arthritis 2020, 5, 147–159. [Google Scholar] [CrossRef] [PubMed]
  27. White, C.; Franco-Paredes, C. Leprosy in the 21st century. Clin. Microbiol. Rev. 2015, 28, 80–94. [Google Scholar] [CrossRef] [PubMed]
  28. Manta, F.S.d.N.; Leal-Calvo, T.; Moreira, S.J.M.; Marques, B.L.C.; Ribeiro-Alves, M.; Rosa, P.S.; Nery, J.A.C.; Rampazzo, R.d.C.P.; Costa, A.D.T.; Krieger, M.A.; et al. Ultra-sensitive detection of Mycobacterium leprae: “DNA” extraction and “PCR” assays. PLoS Neglected Trop. Dis. 2020, 14, e0008325. [Google Scholar] [CrossRef] [PubMed]
  29. Makhakhe, L. Leprosy review. S. Afr. Fam. Pract. 2021, 63, e1–e6. [Google Scholar] [CrossRef]
  30. Moet, F.J.; Meima, A.; Oskam, L.; Richardus, J.H. Risk factors for the development of clinical leprosy among contacts, and their relevance for targeted interventions. Lepr. Rev. 2004, 75, 310–326. [Google Scholar] [CrossRef]
  31. Job, C.K.; Jayakumar, J.; Kearney, M.; Gillis, T.P. Transmission of leprosy: A study of skin and nasal secretions of household contacts of leprosy patients using PCR. Am. J. Trop. Med. Hyg. 2008, 78, 518–521. [Google Scholar] [CrossRef]
  32. Hambridge, T.; Nanjan Chandran, S.L.; Geluk, A.; Saunderson, P.; Richardus, J.H. Mycobacterium leprae transmission characteristics during the declining stages of leprosy incidence: A systematic review. PLoS Neglected Trop. Dis. 2021, 15, e0009436. [Google Scholar] [CrossRef]
  33. Lockwood, D.N.J.; Nicholls, P.; Smith, W.C.S.; Das, L.; Barkataki, P.; van Brakel, W.; Suneetha, S. Comparing the Clinical and Histological Diagnosis of Leprosy and Leprosy Reactions in the INFIR Cohort of Indian Patients with Multibacillary Leprosy. PLoS Neglected Trop. Dis. 2012, 6, e1702. [Google Scholar] [CrossRef] [PubMed]
  34. Moura, R.S.; Penna, G.O.; Cardoso, L.P.V.; de Andrade Pontes, M.A.; Cruz, R.; de Sá Gonçalves, H.; Fernandes Penna, M.L.; de Araújo Stefani, M.M.; Bührer-Sékula, S. Description of leprosy classification at baseline among patients enrolled at the uniform multidrug therapy clinical trial for leprosy patients in Brazil. Am. J. Trop. Med. Hyg. 2015, 92, 1280–1284. [Google Scholar] [CrossRef] [PubMed]
  35. Araújo, M.G. Hanseníase no Brasil. Rev. Soc. Bras. Med. Trop. 2003, 36, 373–382. [Google Scholar] [CrossRef] [PubMed]
  36. Bührer-Sékula, S.; Visschedijk, J.; Grossi, M.A.F.; Dhakal, K.P.; Namadi, A.U.; Klatser, P.R.; Oskam, L. The ML flow test as a point of care test for leprosy control programmes: Potential effects on classification of leprosy patients. Lepr. Rev. 2007, 78, 70–79. [Google Scholar] [CrossRef]
  37. Eichelmann, K.; González, S.G.; Salas-Alanis, J.; Ocampo-Candiani, J. Leprosy. An Update: Definition, Pathogenesis, Classification, Diagnosis, and Treatment. Actas Dermo-Sifiliográficas Engl. Ed. 2013, 104, 554–563. [Google Scholar] [CrossRef]
  38. Ridley, D.S.; Jopling, W.H. Classification of leprosy according to immunity. A five-group system. Int. J. Lepr. Other Mycobact. Dis. 1966, 34, 255–273. Available online: https://pubmed.ncbi.nlm.nih.gov/5950347/ (accessed on 1 November 2023).
  39. Santana, J.S.; Silva, R.A.N.; Lima, T.O.S.; Basso, N.; Machado, L.B.; dos Santos, D.S.; Reginaldo, M.P.; de Sá Junior, J.X.; Bandeira, M.; Abrão, R.K. The role of nurses in leprosy control in primary care. Res. Soc. Dev. 2022, 11, e51811427664. [Google Scholar] [CrossRef]
  40. Britton, W.J.; Lockwood, D.N. Leprosy. Lancet 2004, 363, 1209–1219. [Google Scholar] [CrossRef]
  41. Moschella, S.L. An update on the diagnosis and treatment of leprosy. J. Am. Acad. Dermatol. 2004, 51, 417–426. [Google Scholar] [CrossRef]
  42. Saunderson, P.; Groenen, G. Which physical signs help most in the diagnosis of leprosy? A proposal based on experience in the AMFES project, ALERT, Ethiopia. Lepr. Rev. 2000, 71, 34–42. [Google Scholar] [CrossRef]
  43. Naaz, F.; Mohanty, P.; Bansal, A.; Kumar, D.; Gupta, U. Challenges beyond elimination in leprosy. Int. J. Mycobacteriol. 2017, 6, 222. [Google Scholar] [CrossRef] [PubMed]
  44. Chen, X.; Zha, S.; Shui, T.J. Presenting symptoms of leprosy at diagnosis: Clinical evidence from a cross-sectional, population-based study. PLoS Neglected Trop. Dis. 2021, 15, e0009913. [Google Scholar] [CrossRef] [PubMed]
  45. Wexler, R.; Melchior, H. Dorsal sensory impairment in hands and feet of people affected by Hansen’s disease in Israel. Lepr. Rev. 2007, 78, 362–368. [Google Scholar] [CrossRef] [PubMed]
  46. Somensi, D.N.; de Jesus Soares de Sousa, E.; Lopes, G.L.; de Sousa, G.C.; Xavier, M.B. Clinical and electrophysiological characteristics of neuropathic pain in leprosy patients: A prospective cross-sectional study. Indian J. Dermatol. Venereol. Leprol. 2021, 88, 641–644. [Google Scholar] [CrossRef] [PubMed]
  47. Rao, P.N. Leprosy: The challenges ahead for India. J. Ski. Sex. Transm. Dis. 2021, 3, 106–110. [Google Scholar] [CrossRef]
  48. Maymone, M.B.; Laughter, M.; Venkatesh, S.; Dacso, M.M.; Rao, P.N.; Stryjewska, B.M.; Hugh, J.; Dellavalle, R.P.; Dunnick, C.A. Leprosy: Clinical aspects and diagnostic techniques. J. Am. Acad. Dermatol. 2020, 83, 1–14. [Google Scholar] [CrossRef]
  49. Antunes, S.L.G.; Jardim, M.R.; Vital, R.T.; Pascarelli, B.M.d.O.; Nery, J.A.d.C.; Amadeu, T.P.; Sales, A.M.; da Costa, E.A.F.; Sarno, E.N. Fibrosis: A distinguishing feature in the pathology of neural leprosy. Mem. Inst. Oswaldo Cruz 2019, 114, e190056. [Google Scholar] [CrossRef]
  50. Alecrim, E.S.d.; Chaves, A.T.; Pôrto, L.A.B.; Grossi, M.A.d.F.; Lyon, S.; Rocha, M.O.d.C. Reading of the Mitsuda test: Comparison between diameter and total area by means of a computerized method. Rev. Inst. Med. Trop. Sao Paulo 2019, 61, e5. [Google Scholar] [CrossRef]
  51. Young, D.B.; Buchanan, T.M. A Serological Test for Leprosy with a Glycolipid Specific for Mycobacterium leprae. Science 1983, 221, 1057–1059. [Google Scholar] [CrossRef]
  52. Barbieri, R.R.; Manta, F.S.N.; Moreira, S.J.M.; Sales, A.M.; Nery, J.A.C.; Nascimento, L.P.R.; Hacker, M.A.; Pacheco, A.G.; Machado, A.M.; Sarno, E.M.; et al. Quantitative polymerase chain reaction in paucibacillary leprosy diagnosis: A follow-up study. PLoS Neglected Trop. Dis. 2019, 13, e0007147. [Google Scholar] [CrossRef]
  53. Martinez, A.N.; Talhari, C.; Moraes, M.O.; Talhari, S. PCR-Based Techniques for Leprosy Diagnosis: From the Laboratory to the Clinic. PLoS Neglected Trop. Dis. 2014, 8, e2655. [Google Scholar] [CrossRef]
  54. Araujo, S.; Freitas, L.O.; Goulart, L.R.; Goulart, I.M.B. Molecular Evidence for the Aerial Route of Infection of Mycobacterium leprae and the Role of Asymptomatic Carriers in the Persistence of Leprosy. Clin. Infect. Dis. 2016, 63, 1412–1420. [Google Scholar] [CrossRef] [PubMed]
  55. Manta, F.S.N.; Barbieri, R.R.; Moreira, S.J.M.; Santos, P.T.S.; Nery, J.A.C.; Duppre, N.C.; Sales, A.M.; Pacheco, A.G.; Hacker, M.A.; Machado, A.M.; et al. Quantitative PCR for leprosy diagnosis and monitoring in household contacts: A follow-up study, 2011–2018. Sci. Rep. 2019, 9, 16675. [Google Scholar] [CrossRef] [PubMed]
  56. Rayan, Z.; Alfonse, M.; Salem, A.B.M. Machine Learning Approaches in Smart Health. Procedia Comput. Sci. 2019, 154, 361–368. [Google Scholar] [CrossRef]
  57. Fourcade, A.; Khonsari, R. Deep learning in medical image analysis: A third eye for doctors. J. Stomatol. Oral Maxillofac. Surg. 2019, 120, 279–288. [Google Scholar] [CrossRef]
  58. Yamashita, R.; Nishio, M.; Do, R.K.G.; Togashi, K. Convolutional neural networks: An overview and application in radiology. Insights Imaging 2018, 9, 611–629. [Google Scholar] [CrossRef] [PubMed]
  59. Yunchao, G.; Jiayao, Y. Application of Computer Vision and Deep Learning in Breast Cancer Assisted Diagnosis. In Proceedings of the ICMLSC 2019: 3rd International Conference on Machine Learning and Soft Computing, New York, NY, USA, 25–28 January 2019; pp. 186–191. [Google Scholar] [CrossRef]
  60. Maglogiannis, I.; Doukas, C.N. Overview of Advanced Computer Vision Systems for Skin Lesions Characterization. IEEE Trans. Inf. Technol. Biomed. 2009, 13, 721–733. [Google Scholar] [CrossRef] [PubMed]
  61. Chomutare, T.; Tejedor, M.; Svenning, T.O.; Marco-Ruiz, L.; Tayefi, M.; Lind, K.; Godtliebsen, F.; Moen, A.; Ismail, L.; Makhlysheva, A.; et al. Artificial Intelligence Implementation in Healthcare: A Theory-Based Scoping Review of Barriers and Facilitators. Int. J. Environ. Res. Public Health 2022, 19, 16359. [Google Scholar] [CrossRef]
  62. Abbas, A.; Afzal, M.; Hussain, J.; Ali, T.; Bilal, H.S.M.; Lee, S.; Jeon, S. Clinical Concept Extraction with Lexical Semantics to Support Automatic Annotation. Int. J. Environ. Res. Public Health 2021, 18, 10564. [Google Scholar] [CrossRef]
  63. Rajkomar, A.; Dean, J.; Kohane, I. Machine Learning in Medicine. N. Engl. J. Med. 2019, 380, 1347–1358. [Google Scholar] [CrossRef]
  64. Hao, Z.; Ma, J.; Sun, W. The Technology-Oriented Pathway for Auxiliary Diagnosis in the Digital Health Age: A Self-Adaptive Disease Prediction Model. Int. J. Environ. Res. Public Health 2022, 19, 12509. [Google Scholar] [CrossRef] [PubMed]
  65. Briganti, G.; Le Moine, O. Artificial Intelligence in Medicine: Today and Tomorrow. Front. Med. 2020, 7, 27. [Google Scholar] [CrossRef] [PubMed]
  66. Jayatilake, S.M.D.A.C.; Ganegoda, G.U. Involvement of machine learning tools in healthcare decision making. J. Healthc. Eng. 2021, 2021, 6679512. [Google Scholar] [CrossRef] [PubMed]
  67. Diniz, E.J.S.; Fontenele, J.E.; de Oliveira, A.C.; Bastos, V.H.; Teixeira, S.; Rabêlo, R.L.; Calçada, D.B.; dos Santos, R.M.; de Oliveira, A.K.; Teles, A.S. Boamente: A Natural Language Processing-Based Digital Phenotyping Tool for Smart Monitoring of Suicidal Ideation. Healthcare 2022, 10, 698. [Google Scholar] [CrossRef] [PubMed]
  68. Moura, I.; Teles, A.; Viana, D.; Marques, J.; Coutinho, L.; Silva, F. Digital Phenotyping of Mental Health using multimodal sensing of multiple situations of interest: A Systematic Literature Review. J. Biomed. Inform. 2023, 138, 104278. [Google Scholar] [CrossRef]
  69. Du-Harpur, X.; Watt, F.; Luscombe, N.; Lynch, M. What is AI? Applications of artificial intelligence to dermatology. Br. J. Dermatol. 2020, 183, 423–430. [Google Scholar] [CrossRef] [PubMed]
  70. Yassin, N.I.; Omran, S.; El Houby, E.M.; Allam, H. Machine learning techniques for breast cancer computer aided diagnosis using different image modalities: A systematic review. Comput. Methods Programs Biomed. 2018, 156, 25–45. [Google Scholar] [CrossRef] [PubMed]
  71. Chan, S.; Reddy, V.; Myers, B.; Thibodeaux, Q.; Brownstone, N.; Liao, W. Machine learning in dermatology: Current applications, opportunities, and limitations. Dermatol. Ther. 2020, 10, 365–386. [Google Scholar] [CrossRef]
  72. Jaradat, A.S.; Al Mamlook, R.E.; Almakayeel, N.; Alharbe, N.; Almuflih, A.S.; Nasayreh, A.; Gharaibeh, H.; Gharaibeh, M.; Gharaibeh, A.; Bzizi, H. Automated Monkeypox Skin Lesion Detection Using Deep Learning and Transfer Learning Techniques. Int. J. Environ. Res. Public Health 2023, 20, 4422. [Google Scholar] [CrossRef]
  73. Zafar, M.; Sharif, M.I.; Sharif, M.I.; Kadry, S.; Bukhari, S.A.C.; Rauf, H.T. Skin Lesion Analysis and Cancer Detection Based on Machine/Deep Learning Techniques: A Comprehensive Survey. Life 2023, 13, 146. [Google Scholar] [CrossRef]
  74. Page, M.J.; McKenzie, J.E.; Bossuyt, P.M.; Boutron, I.; Hoffmann, T.C.; Mulrow, C.D.; Shamseer, L.; Tetzlaff, J.M.; Akl, E.A.; Brennan, S.E.; et al. The PRISMA 2020 statement: An updated guideline for reporting systematic reviews. BMJ 2021, 372. [Google Scholar] [CrossRef]
  75. Parsifal, v2.2.0. 2022. Available online: https://parsif.al/ (accessed on 2 January 2023).
  76. Tió-Coma, M.; Kiełbasa, S.M.; van den Eeden, S.J.; Mei, H.; Roy, J.C.; Wallinga, J.; Khatun, M.; Soren, S.; Chowdhury, A.S.; Alam, K.; et al. Blood RNA signature RISK4LEP predicts leprosy years before clinical onset. eBioMedicine 2021, 68, 103379. [Google Scholar] [CrossRef] [PubMed]
  77. Cohen, J. Weighted kappa: Nominal scale agreement with provision for scaled disagreement or partial credit. Psychol. Bull. 1968, 70, 213–220. [Google Scholar] [CrossRef] [PubMed]
  78. Wohlin, C. Guidelines for Snowballing in Systematic Literature Studies and a Replication in Software Engineering. In Proceedings of the 18th International Conference on Evaluation and Assessment in Software Engineering, New York, NY, USA, 13–14 May 2014; Number 38 in EASE ’14. p. 10. [Google Scholar] [CrossRef]
  79. Kitchenham, B.; Brereton, P. A systematic review of systematic review process research in software engineering. Inf. Softw. Technol. 2013, 55, 2049–2075. [Google Scholar] [CrossRef]
  80. Cabitza, F.; Campagner, A. The need to separate the wheat from the chaff in medical informatics: Introducing a comprehensive checklist for the (self)-assessment of medical AI studies. Int. J. Med. Inform. 2021, 153, 104510. [Google Scholar] [CrossRef]
  81. Viera, A.J.; Garrett, J.M. Understanding interobserver agreement: The kappa statistic. Fam. Med. 2005, 37, 360–363. [Google Scholar] [PubMed]
  82. Beesetty, R.; Reddy, S.A.; Modali, S.; Sunkara, G.; Dalal, J.; Damagathla, J.; Banerjee, D.; Venkatachalapathy, M. Leprosy Skin Lesion Detection: An AI Approach Using Few Shot Learning in a Small Clinical Dataset. Indian J. Lepr. 2023, 95, 89–102. [Google Scholar]
  83. Baweja, A.K.; Aditya, S.; Kanchana, M. Leprosy Diagnosis using Explainable Artificial Intelligence Techniques. In Proceedings of the 2023 International Conference on Sustainable Computing and Data Communication Systems (ICSCDS), Erode, India, 23–25 March 2023; pp. 551–556. [Google Scholar] [CrossRef]
  84. Rafay, A.; Hussain, W. EfficientSkinDis: An EfficientNet-based classification model for a large manually curated dataset of 31 skin diseases. Biomed. Signal Process. Control 2023, 85, 104869. [Google Scholar] [CrossRef]
  85. Yotsu, R.R.; Ding, Z.; Hamm, J.; Blanton, R.E. Deep learning for AI-based diagnosis of skin-related neglected tropical diseases: A pilot study. PLoS Neglected Trop. Dis. 2023, 17, e0011230. [Google Scholar] [CrossRef]
  86. Steyve, N.; Steve, P.; Ghislain, M.; Ndjakomo, S.; pierre, E. Optimized real-time diagnosis of neglected tropical diseases by automatic recognition of skin lesions. Inform. Med. Unlocked 2022, 33, 101078. [Google Scholar] [CrossRef]
  87. De Souza, M.L.M.; Lopes, G.A.; Branco, A.C.; Fairley, J.K.; Fraga, L.A.D.O. Leprosy screening based on artificial intelligence: Development of a cross-platform app. JMIR MHealth UHealth 2021, 9, e23718. [Google Scholar] [CrossRef] [PubMed]
  88. Jaikishore, C.; Udutalapally, V.; Das, D. AI Driven Edge Device for Screening Skin Lesion and Its Severity in Peripheral Communities. In Proceedings of the 2021 IEEE 18th India Council International Conference (INDICON), Guwahati, India, 19–21 December 2021; pp. 1–6. [Google Scholar] [CrossRef]
  89. Banerjee, A.; Das, N.; Nasipuri, M. Skin Diseases Detection using LBP and WLD- An Ensembling Approach. arXiv 2020, arXiv:2004.04122. [Google Scholar]
  90. Jin, B.; Cruz, L.; Gonçalves, N. Deep Facial Diagnosis: Deep Transfer Learning From Face Recognition to Facial Diagnosis. IEEE Access 2020, 8, 123649–123661. [Google Scholar] [CrossRef]
  91. Mondal, B.; Das, N.; Santosh, K.; Nasipuri, M. Improved Skin Disease Classification Using Generative Adversarial Network. In Proceedings of the 2020 IEEE 33rd International Symposium on Computer-Based Medical Systems (CBMS), Rochester, MN, USA, 28–30 July 2020; pp. 520–525. [Google Scholar] [CrossRef]
  92. Casuayan de Goma, J.; Devaraj, M. Recognizing Common Skin Diseases in the Philippines Using Image Processing and Machine Learning Classification. In Proceedings of the ICCBD ’20: 2020 the 3rd International Conference on Computing and Big Data, New York, NY, USA, 5–7 August 2020; pp. 68–72. [Google Scholar] [CrossRef]
  93. Joshi, A.D.; Manerkar, S.S.; Nagvekar, V.U.; Naik, K.P.; Palekar, C.G.; Pugazhenthi, V.; Naik, S. Skin disease detection and classification. Int. J. Adv. Eng. Res. Sci. 2019, 6, 396–400. [Google Scholar] [CrossRef]
  94. Gama, R.S.; Souza, M.L.M.d.; Sarno, E.N.; Moraes, M.O.d.; Gonçalves, A.; Stefani, M.M.A.; Garcia, R.M.G.; Fraga, L.A.d.O. A novel integrated molecular and serological analysis method to predict new cases of leprosy amongst household contacts. PLoS Neglected Trop. Dis. 2019, 13, e0007400. [Google Scholar] [CrossRef] [PubMed]
  95. Baweja, H.S.; Parhar, T. Leprosy lesion recognition using convolutional neural networks. In Proceedings of the 2016 International Conference on Machine Learning and Cybernetics (ICMLC), Jeju, Republic of Korea, 10–13 July 2016; Volume 1, pp. 141–145. [Google Scholar] [CrossRef]
  96. Pillai, L.; Chouhan, U. Comparative analysis of machine learning algorithms for mycobacterium tuberculosis protein sequences on the basis of physicochemical parameters. J. Med. Imaging Health Inform. 2014, 4, 212–219. [Google Scholar] [CrossRef]
  97. Yasir, R.; Rahman, M.A.; Ahmed, N. Dermatological disease detection using image processing and artificial neural network. In Proceedings of the 8th International Conference on Electrical and Computer Engineering, Dhaka, Bangladesh, 20–22 December 2014; pp. 687–690. [Google Scholar] [CrossRef]
  98. Das, N.; Pal, A.; Mazumder, S.; Sarkar, S.; Gangopadhyay, D.; Nasipuri, M. An SVM Based Skin Disease Identification Using Local Binary Patterns. In Proceedings of the 2013 Third International Conference on Advances in Computing and Communications, Cochin, India, 29–31 August 2013; pp. 208–211. [Google Scholar] [CrossRef]
  99. Pal, A.; Das, N.; Sarkar, S.; Gangopadhyay, D.; Nasipuri, M. A New Rotation Invariant Weber Local Descriptor for Recognition of Skin Diseases. In Proceedings of the Pattern Recognition and Machine Intelligence, Kolkata, India, 10–14 December 2013; Maji, P., Ghosh, A., Murty, M.N., Ghosh, K., Pal, S.K., Eds.; Springer: Berlin/Heidelberg, Germany, 2013; pp. 355–360. [Google Scholar] [CrossRef]
  100. Wang, Z.; Gerstein, M.; Snyder, M. RNA-Seq: A revolutionary tool for transcriptomics. Nat. Rev. Genet. 2009, 10, 57–63. [Google Scholar] [CrossRef]
  101. Bolstad, B.M.; Irizarry, R.A.; Astrand, M.; Speed, T. A comparison of normalization methods for high density oligonucleotide array data based on variance and bias. Bioinformatics 2003, 19, 185–193. [Google Scholar] [CrossRef]
  102. Deng, J.; Bi, L.; Zhou, L.; Guo, S.J.; Fleming, J.; Jiang, H.W.; Zhou, Y.; Gu, J.; Zhong, Q.; Wang, Z.X.; et al. Mycobacterium tuberculosis proteome microarray for global studies of protein function and immunogenicity. Cell Rep. 2014, 9, 2317–2329. [Google Scholar] [CrossRef]
  103. van Hooij, A.; Tjon Kon Fat, E.M.; Batista da Silva, M.; Carvalho Bouth, R.; Cunha Messias, A.C.; Gobbo, A.R.; Lema, T.; Bobosha, K.; Li, J.; Weng, X.; et al. Evaluation of immunodiagnostic tests for leprosy in Brazil, China and Ethiopia. Sci. Rep. 2018, 8, 17920. [Google Scholar] [CrossRef]
  104. Minion, J.; Pai, M.; Ramsay, A.; Menzies, D.; Greenaway, C. Comparison of LED and Conventional Fluorescence Microscopy for Detection of Acid Fast Bacilli in a Low-Incidence Setting. PLoS ONE 2011, 6, e22495. [Google Scholar] [CrossRef] [PubMed]
  105. Chen, L.; Deng, H.; Cui, H.; Fang, J.; Zuo, Z.; Deng, J.; Li, Y.; Wang, X.; Zhao, L. Inflammatory responses and inflammation-associated diseases in organs. Oncotarget 2018, 9, 7204–7218. [Google Scholar] [CrossRef] [PubMed]
  106. Raschka, S.; Liu, Y.; Mirjalili, V. Machine Learning with PyTorch and Scikit-Learn: Develop Machine Learning and Deep Learning Models with Python, 1st ed.; Packt Publishing: Birmingham, UK, 2022. [Google Scholar]
  107. Russell, S.J.; Norvig, P. Artificial Intelligence: A Modern Approach, 4th ed.; Pearson Education: London, UK, 2021. [Google Scholar]
  108. Dinga, R.; Penninx, B.W.; Veltman, D.J.; Schmaal, L.; Marquand, A.F. Beyond accuracy: Measures for assessing machine learning models, pitfalls and guidelines. bioRxiv 2019. [Google Scholar] [CrossRef]
  109. Gundersen, O.E.; Gil, Y.; Aha, D.W. On Reproducible AI: Towards Reproducible Research, Open Science, and Digital Scholarship in AI Publications. AI Mag. 2018, 39, 56–68. [Google Scholar] [CrossRef]
  110. Novelli, C.; Taddeo, M.; Floridi, L. Accountability in artificial intelligence: What it is and how it works. AI Soc. 2023. [Google Scholar] [CrossRef]
  111. Meng, T.; Jing, X.; Yan, Z.; Pedrycz, W. A survey on machine learning for data fusion. Inf. Fusion 2020, 57, 115–129. [Google Scholar] [CrossRef]
  112. Zheng, Y. Methodologies for Cross-Domain Data Fusion: An Overview. IEEE Trans. Big Data 2015, 1, 16–34. [Google Scholar] [CrossRef]
  113. Stingl, P. Die Differentialdiagnose der Lepra in Entwicklungsländern–Haut und Mundhöhle [Differential diagnosis of leprosy in developing countries–the skin and oral cavity]. Z. Hautkrankh. 1987, 62, 227–231. [Google Scholar]
  114. Ramspek, C.L.; Jager, K.J.; Dekker, F.W.; Zoccali, C.; van Diepen, M. External validation of prognostic models: What, why, how, when and where? Clin. Kidney J. 2021, 14, 49–58. [Google Scholar] [CrossRef]
  115. Collins, G.S.; de Groot, J.A.; Dutton, S.; Omar, O.; Shanyinde, M.; Tajar, A.; Voysey, M.; Wharton, R.; Yu, L.M.; Moons, K.G.; et al. External validation of multivariable prediction models: A systematic review of methodological conduct and reporting. BMC Med. Res. Methodol. 2014, 14, 40. [Google Scholar] [CrossRef]
  116. Bleeker, S.; Moll, H.; Steyerberg, E.; Donders, A.; Derksen-Lubsen, G.; Grobbee, D.; Moons, K. External validation is necessary in prediction research: A clinical example. J. Clin. Epidemiol. 2003, 56, 826–832. [Google Scholar] [CrossRef] [PubMed]
Figure 1. PRISMA flow diagram.
Figure 2. Data types, algorithms/architectures, performance metrics and results of the selected studies [12,14,76,82,83,84,85,86,87,88,89,90,91,92,93,94,95,96,97,98,99].
Figure 3. Quality assessment results for the six phases of the checklist [80].
Figure 4. Number of articles published by year, organized according to different characteristics.
Table 1. Selection criteria.
Inclusion Criteria (IC):
(IC1) Studies that used AI to aid in the diagnosis of leprosy.
(IC2) Full articles.
(IC3) Articles in English.
(IC4) Peer-reviewed articles.
Exclusion Criteria (EC):
(EC1) Studies that did not use AI in the diagnosis of leprosy.
(EC2) Studies that used AI to diagnose other skin diseases.
(EC3) Gray literature: reviews, reports, short papers, conference abstracts, communications, theses, and dissertations.
(EC4) Articles in a language other than English.
Table 2. Quality criteria [80].
Problem Understanding
(QC1) Is the study population described, also in terms of inclusion/exclusion criteria?
(QC2) Is the study design described?
(QC3) Is the study setting described?
(QC4) Is the source of data described?
(QC5) Is the medical task reported?
(QC6) Is the data collection process described, also in terms of setting-specific data collection strategies?
Data Understanding
(QC7) Are the subject demographics described in terms of average age, age variability, gender breakdown, main comorbidities, ethnic group, and socioeconomic status?
(QC8) If the task is supervised, is the gold standard described?
(QC9) In the case of tabular data, are the features described?
Data Preparation
(QC10) Are outlier detection and analysis performed and reported?
(QC11) Is missing-value management described?
(QC12) Is feature pre-processing performed and described?
(QC13) Are data imbalance analysis and adjustment performed and reported?
Modeling
(QC14) Is the model task reported?
(QC15) Is the model output specified?
(QC16) Is the model architecture or type described?
Validation
(QC17) Is the data splitting described (e.g., no data splitting; k-fold cross-validation (CV); nested k-fold CV; repeated CV; bootstrap validation; leave-one-out CV; 80%/10%/10% train/validation/test)?
(QC18) Are the model training and selection described?
(QC19) (Classification models) Is the model calibration described?
(QC20) Is the internal/internal-external model validation procedure described (e.g., internal 10-fold CV, time-based cross-validation)?
(QC21) Has the model been externally validated?
(QC22) Are the main error-based metrics used?
(QC23) Are some relevant errors described?
Deployment
(QC24) Is the target user indicated?
(QC25) (Classification models) Is the utility of the model discussed?
(QC26) Is information regarding model interpretability and explainability available?
(QC27) Is there any discussion regarding model fairness, ethical concerns, or bias risks (for a list of clinically relevant biases, refer to)?
(QC28) Is any point made about the environmental sustainability of the model (the carbon footprint of either the training phase or the inference phase)?
(QC29) Are code and data shared with the community?
(QC30) Is the system already adopted in daily practice?
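The validation schemes enumerated in QC17 (a holdout split such as 80/20, k-fold CV, and leave-one-out CV as the special case k = n) can be sketched in plain Python. This is an illustrative sketch only — the function names are ours and do not come from any reviewed study:

```python
import random

def holdout_split(n_samples, test_frac=0.2, seed=42):
    """Shuffle indices and split them into train/test lists (e.g., the 80/20 scheme)."""
    idx = list(range(n_samples))
    random.Random(seed).shuffle(idx)  # fixed seed for a reproducible split
    n_test = int(round(n_samples * test_frac))
    return idx[n_test:], idx[:n_test]

def k_fold_indices(n_samples, k=10):
    """Yield (train, validation) index lists for k-fold cross-validation.

    With k == n_samples this reduces to leave-one-out CV (LOOCV).
    """
    folds = [list(range(i, n_samples, k)) for i in range(k)]
    for i in range(k):
        validation = folds[i]
        train = [j for fold in folds[:i] + folds[i + 1:] for j in fold]
        yield train, validation
```

Nested or repeated CV, as the checklist notes, would wrap another loop around `k_fold_indices`; in practice one would use a library such as scikit-learn rather than hand-rolled splits.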
Table 3. Data extraction form.
Research Questions and Form Questions
(RQ1) What types of leprosy were targeted?
(RQ2) What data types are used in the dataset?
(RQ3) What data preparation techniques were used?
(RQ3) What was the data preparation process?
(RQ4) What algorithm/architecture was used to develop models?
(RQ5) How was the model evaluated?
(RQ5) What performance metrics were used?
(RQ5) How well did the models perform?
Table 4. Data extracted from the selected articles.
Study | Diseases | Data Types | Data Preparation | Algorithm/Architecture | Model Evaluation | Performance Metrics
Beesetty et al. (2023) [82] | Leprosy and other skin lesions | Images | Not Available | Siamese Network and Inception-V3, Adaptive Moment Estimation (ADAM) | Not Available | Accuracy (73.12%)
Baweja et al. (2023) [83] | Leprosy and other skin lesions | Images | Data augmented by rotation, scale transformation, blurring | AlexNet, ResNet, and LeprosyNet, optimized by ADAM | 80/20 | Accuracy (98.00%)
Rafay and Hussain (2023) [84] | Leprosy Borderline, Leprosy Lepromatous, Leprosy Tuberculoid, Basal Cell Carcinoma, Darier's Disease, Epidermolysis Bullosa Pruriginosa, Hailey-Hailey Disease, Herpes Simplex, Impetigo, Larva Migrans, Lichen Planus, Lupus, Melanoma, Molluscum Contagiosum, Mycosis Fungoides, Neurofibromatosis, Papilomatosis Confluentes and Reticulate, Pediculosis Capitis, Pityriasis Rosea, Porokeratosis Actinic, Psoriasis, Tinea Corporis, Tinea Nigra, Tungiasis, Actinic Keratosis, Dermatofibroma, Nevus, Pigmented Benign Keratosis, Squamous Cell Carcinoma, and Vascular Lesion | Images | Data augmented by rotation, shear, center zoom, horizontal flip, vertical flip, brightness | ResNet, VGG and EfficientNet-B2 | 80/20; 10-fold cross-validation | Accuracy (87.15%), Precision (87.00%), Recall (87.00%), and F1 score (87.00%)
Yotsu et al. (2023) [85] | Leprosy, Buruli Ulcer, Mycetoma, Scabies, and Yaws | Images | Images resized to 224 × 224, data augmentation and normalization | ResNet-50 and VGG-16, optimized by Stochastic Gradient Descent (SGD) | 70/30 | Accuracy (84.63%)
Barbieri et al. (2022) [12] | Leprosy and other skin diseases | Images, Numerical Data | Numeric data: normalization. Images: tuning strategy or freeze | Inception-V4, ResNet-50, Elastic-net Logistic Regression (LR), XGBoost (XGB) and Random Forest (RF) | 80/20; 5-fold and 10-fold cross-validation | Accuracy (90.00%), Area Under Curve (AUC) (96.46%), Sensitivity (89.00%), and Specificity (91.00%)
Marçal et al. (2022) [14] | Paucibacillary or Multibacillary Leprosy | Numerical Data | Not Available | Decision Trees (DT) | Leave-one-out cross-validation (LOOCV) | Accuracy (84.00%)
Steyve et al. (2022) [86] | Leprosy, Leishmaniasis, Buruli Ulcer | Images | OTSU thresholding and Canny, Sobel, Gabor, and Roberts filters | Support Vector Machine (SVM), SVM optimized by Black Hole Algorithm (BHO), K-Nearest Neighbors (KNN), DT | Not Available | Accuracy (96.00%), Specificity (94.00%), F-score (89.00%), Recall (90.00%), and Sensitivity (92.00%)
De Souza et al. (2021) [87] | Paucibacillary or Multibacillary Leprosy | Numerical Data | Not Available | RF | 10-fold cross-validation | Accuracy (92.38%), Sensitivity (93.97%), and Specificity (87.09%)
Tió-Coma et al. (2021) [76] | Leprosy | Numerical Data | Not Available | RF | 80/20; LOOCV | Accuracy (87.50%), Sensitivity (100.0%), Specificity (80.0%), and AUC (96.70%)
Jaikishore et al. (2021) [88] | Leprosy, Eczema, and Measles | Images | Re-scaling to normalize the image, zoom in and zoom out, width shift, height shift, and rotation angle of 45° | MobileNet-V2, VGG16, Inception-V3, Xception | 80/20 | Accuracy (94.32%), F1 score (93.02%), Precision (93.53%), and Recall (92.76%)
Banerjee et al. (2020) [89] | Leprosy, Vitiligo, and Tinea versicolor | Images | Local Binary Pattern (LBP), Weber Local Descriptor (WLD), Gray-Level Co-occurrence Matrix (GLCM), riLBP (rotation-invariant LBP) and WLDRI (rotation-invariant WLD) | GoogLeNet, MobileNet-V1, ResNet-152, DenseNet-121, ResNet-101 and SVM | 80/20 | Accuracy (91.38%)
Jin et al. (2020) [90] | Leprosy, Thalassemia, Hyperthyroidism, and Down's syndrome | Images | OpenCV, Histogram of Oriented Gradients (HOG), Dlib library, ResNet50, VGG16 | Linear SVM | 80/20 | Accuracy (93.30%)
Mondal et al. (2020) [91] | Leprosy, Tinea versicolor, and Vitiligo | Images | Images cropped with a manually centered Region of Interest (ROI), Global Contrast Normalization (GCN), Generative Adversarial Network (GAN), Wasserstein GAN with gradient penalty (WGAN-GP) | ResNet-101, DenseNet-169 and DenseNet-121 | 80/20 | Accuracy (94.00%), Recall (90.00%), and F1 score (92.00%)
Casuayan and Devaraj (2020) [92] | Leprosy, Acne Vulgaris, Atopic Dermatitis, Keratosis Pilaris (Chicken Skin), Psoriasis and Warts | Images | Dull Razor algorithm, GLCM, sharpening filter, median filter, smoothing filter, binary mask, Sobel operator | ANN and SVM | 70/30; 10-fold cross-validation | Precision (96.55%) and Recall (93.33%)
Amruta et al. (2019) [93] | Leprosy, Melanoma, Eczema | Images | Histogram equalization, global thresholding, thresholding, GLCM | Iterative Dichotomiser 3 (ID3) | Not Available | Accuracy (87.00%)
Gama et al. (2019) [94] | Paucibacillary or Multibacillary Leprosy | Numerical Data | Not Available | RF | Not Available | Sensitivity (90.50%) and Specificity (92.50%)
Baweja and Parhar (2016) [95] | Leprosy | Images | Not Available | Inception-V3 | 80/20; dataset split into 50% positive and 50% negative images | Accuracy (91.60%)
Pillai and Chouhan (2014) [96] | Leprosy | Numerical Data | Not Available | Sequential Minimal Optimization (SMO), LibSVM and Multilayer Perceptron (MLP) | 80/20; stratified 5-fold and 10-fold cross-validation | Accuracy (85.00%)
Yasir et al. (2014) [97] | Leprosy, Eczema, Acne, Psoriasis, Scabies, Foot ulcer, Vitiligo, Tinea Corporis, Pityriasis rosea | Images | Sharpening, median, and smoothing filters, binary mask, histogram, YCbCr algorithm, Sobel operator | Feed-Forward Back-Propagation Network (FFBPN) | 85/15; 10-fold cross-validation | Accuracy (90.00%)
Das et al. (2013) [98] | Leprosy, Tinea versicolor, Vitiligo | Images | LBP, GLCM, Discrete Cosine Transform (DCT), and Discrete Fourier Transform (DFT) | LibSVM | 80/20 | Accuracy (89.66%)
Pal et al. (2013) [99] | Leprosy, Tinea versicolor, Vitiligo | Images | Differential Excitation, WLD Histogram, WLD, WLDRI | SVM | 80/20 | Accuracy (86.78%)
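The performance metrics reported across the studies above (accuracy, sensitivity/recall, specificity, precision, and F1 score) all derive from the binary confusion matrix. As a generic illustration of those definitions — not code from any reviewed study — the metrics can be computed as:

```python
def binary_metrics(y_true, y_pred):
    """Compute common classification metrics from binary labels (1 = disease, 0 = not)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)  # true positives
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)  # true negatives
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)  # false positives
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)  # false negatives

    accuracy = (tp + tn) / len(y_true)
    sensitivity = tp / (tp + fn) if (tp + fn) else 0.0  # a.k.a. recall
    specificity = tn / (tn + fp) if (tn + fp) else 0.0
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    f1 = (2 * precision * sensitivity / (precision + sensitivity)
          if (precision + sensitivity) else 0.0)  # harmonic mean of precision and recall

    return {"accuracy": accuracy, "sensitivity": sensitivity,
            "specificity": specificity, "precision": precision, "f1": f1}
```

For the multi-class settings common in the reviewed studies, these per-class quantities are typically averaged (macro or weighted), and AUC additionally requires predicted scores rather than hard labels.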
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

MDPI and ACS Style

Fernandes, J.R.N.; Teles, A.S.; Fernandes, T.R.S.; Lima, L.D.B.; Balhara, S.; Gupta, N.; Teixeira, S. Artificial Intelligence on Diagnostic Aid of Leprosy: A Systematic Literature Review. J. Clin. Med. 2024, 13, 180. https://doi.org/10.3390/jcm13010180
