Article

Feasibility Study on the Use of Infrared Cameras for Skin Cancer Detection under a Proposed Data Degradation Model

by Ricardo F. Soto and Sebastián E. Godoy *

Departamento de Ingeniería Eléctrica, Facultad de Ingeniería, Universidad de Concepción, Concepción 4070409, Chile

* Author to whom correspondence should be addressed.
Sensors 2024, 24(16), 5152; https://doi.org/10.3390/s24165152
Submission received: 29 June 2024 / Revised: 31 July 2024 / Accepted: 6 August 2024 / Published: 9 August 2024
(This article belongs to the Special Issue Sensors and Devices for Biomedical Image Processing)

Abstract:
Infrared thermography is considered a useful technique for diagnosing several skin pathologies, but it has not been widely adopted, mainly due to its high cost. Here, we investigate the feasibility of using low-cost infrared cameras with microbolometer technology for detecting skin cancer. For this purpose, we collected infrared data from volunteer subjects using a high-cost/high-quality infrared camera. We propose a degradation model to assess the use of lower-cost imagers in such a task. The degradation model was validated by mimicking video acquisition with the low-cost cameras, using data originally captured with a medium-cost camera. The outcome of the proposed model was then compared with the infrared video obtained with actual cameras, achieving an average Pearson correlation coefficient of more than 0.9271. Therefore, the model successfully transfers the behavior of cameras with poorer characteristics to videos acquired with higher-quality cameras. Using the proposed model, we simulated the acquisition of patient data with three different lower-cost cameras, namely, the Xenics Gobi-640, Opgal Therm-App, and Seek Thermal CompactPRO. The degraded data were used to evaluate the performance of a skin cancer detection algorithm. The Xenics and Opgal cameras achieved accuracies of 84.33% and 84.20%, respectively, and sensitivities of 83.03% and 83.23%, respectively. These values closely matched those from the non-degraded data, indicating that employing these lower-cost cameras is appropriate for skin cancer detection. The Seek camera achieved an accuracy of 82.13% and a sensitivity of 79.77%. Based on these results, we conclude that this camera is appropriate for less critical applications.

1. Introduction

Temperature is a key indicator for identifying anomalies in both living and inert systems. Infrared thermography (IRT) is a mature and widely accepted technique used as a non-contact temperature monitoring tool; it is used in the early detection of equipment failures and process anomalies in industrial operations [1], as well as in medical diagnoses [2].
Concerning medical applications of IRT, its use in diagnosing breast cancer, diabetes, neuropathy, and peripheral vascular disease has been highlighted [2]. Similarly, IRT has been successfully used in detecting skin cancer [3,4], monitoring skin burns [5], detecting problems associated with rheumatoid arthritis [6,7], detecting necrotizing enterocolitis [8], evaluating infectious diseases (such as coronavirus disease 2019 (COVID-19)), and detecting fever conditions by working together with algorithms based on artificial intelligence [9,10], among others.
Regarding the use of IRT for skin cancer detection, the performance of active/dynamic IRT stands out over that of passive IRT [11]. Buzug et al. [12] evidenced differences in thermoregulation curves between healthy and cancerous skin areas. Çetingül and Herman [13] proposed a methodology that quantifies the difference between the thermal responses of healthy skin and melanoma. Di Carlo et al. [14] showed that active thermography, unlike dermoscopy, reveals a clear pattern for differentiating basal cell carcinoma (BCC) tumors from actinic keratosis (AK). Godoy et al. [3] proposed a standardized analysis of dynamic thermography to discriminate malignant from benign lesions; the study included more than 100 patients, who presented benign, BCC, squamous cell carcinoma (SCC), or malignant melanoma (MM) lesions. A more sophisticated analysis of dynamic thermography was proposed by Godoy et al. [4]; it combines thermoregulation curve (TRC) modeling with a detection-theory scheme, achieving a sensitivity and specificity of 99%. Magalhaes et al. [15] used support vector machine classifiers to distinguish benign lesions (melanocytic nevi) from malignant melanoma lesions; this study explored features extracted from the steady state (passive thermography) and from dynamic thermography at a frequency of one frame per minute, and the steady-state features proved more relevant for solving this problem. Magalhaes et al. [16] proposed a deep learning classifier for processing passive thermography, achieving an accuracy of 96.91% in discriminating melanoma from benign lesions. However, when the malignant class was broadened to include BCC and SCC in addition to MM, its performance declined considerably, highlighting the potential of active thermography to address the challenge of increased inter-class variability. Soto and Godoy [17] proposed key features and a scheme for skin cancer detection using active thermography and machine learning; using a support vector machine (SVM) classifier with a radial basis function (RBF) kernel resulted in an accuracy close to 85%. Unlike other studies, Bu et al. [18] used active thermography with warm excitation to propose a 3D model of heat evolution in skin tumors; the simulations revealed a dependency between tumor thickness and the maximum contrast parameter, which serves as a discriminant in tumor classification. Similarly, using a cold stimulus, Cardoso and Azevedo [19] proposed a 3D model to analyze breast tumor sizes, considerably improving the contrast with the proposed methodology.
Despite decades of research and development of medical applications based on IRT, its widespread adoption has been limited mainly by the high cost of infrared (IR) cameras. According to Narayanamurthy et al. [20], IRT needs optimum instrumentation for recording purposes, given that it is widely affected by external noise. Due to the continuous development of electronic technology, new detector array structures and new semiconductor alloys are available. This has reduced the cost and size and increased the resolution and precision of IR devices [21,22]. The development of longwave infrared (LWIR) microbolometer technology has led to the creation of multipurpose, low-cost cameras compatible with smartphones [23], among several other applications.
Recently, there has been high interest in the development of medical applications based on low-cost IR cameras, with tests being conducted mainly to support the evaluation of diabetic foot conditions [24,25,26,27,28,29,30,31]. Studies have also evaluated skin burns [32] and assessed the healing progress of thoracic surgical incisions [33]. Villa et al. [28] characterized and compared the low-cost cameras Seek Thermal CompactPRO and Thermal Expert TE-Q1 Plus (TE-Q1), using the high-end camera INO IRXCAM-640 as a reference. Their findings suggest that the TE-Q1 is suitable for e-health applications, particularly for assessing diabetic foot ulcers, because its measured noise equivalent temperature difference (NETD) and its residual non-uniformity (RNU), validated at 25 °C, are comparable to those of the IRXCAM-640 camera.
To determine whether a camera is useful for a given task, a dataset should be acquired with that camera, the data should be processed, and the resulting performance should be calculated. Medical studies involve requesting permission from the clinical unit, coordinating work teams, recruiting volunteers, and preparing informative documentation for them, among other tasks. Given that space and time are limited in a clinical environment, it is relevant to know the feasibility of using such equipment before starting any clinical trial. Moreover, most investigations are initially performed with high-quality IR cameras, so it is relevant to properly model the decrease in performance with lower-quality imagers. In this paper, we propose an IR video degradation model that degrades videos captured with high-quality cameras to simulate the performance of lower-quality cameras. Our case study evaluates the feasibility of using the Xenics Gobi-640, Opgal Therm-App, and Seek Thermal CompactPRO cameras for skin cancer detection through active thermography. This evaluation is based on patient data previously acquired with a higher-quality imager [4]. However, the proposed degradation model can be applied to any kind of infrared camera and to any type of application, in both passive and active thermography.

2. Theoretical Background

2.1. Skin Cancer

The skin is a large organ, covering approximately 16% of body mass. It is organized into two primary layers—the epidermis and dermis. The epidermis, which is considered the body’s armor, is a peripheral layer of skin that interacts with the environment and works as a physiochemical barrier against environmental stressors such as pathogens, chemicals, and ultraviolet (UV) radiation. The dermis, originating from the mesoderm, underlies the epidermis and serves as the anchorage for cutaneous structures such as hair follicles, nerves, sebaceous glands, and sweat glands. The dermis also contains abundant immune cells and fibroblasts, which actively participate in many physiological responses of the skin [34].
Thermal regulation is a relevant skin function used to maintain the core temperature within a small range, between 36 and 38 °C. Vasodilatation and vasoconstriction, which refer to changes in blood vessel diameter, are among the main factors that control thermal regulation; they affect skin temperature by changing the rate of blood exchange with the body's interior. If the ambient temperature rises, then increased conductance below the skin surface (due to increased blood flow) facilitates heat transfer from the body interior to the skin. In the cold, muscle tensing and shivering increase heat production and body temperature. As blood flow decreases, thermal conductance also decreases, reducing heat loss from the body to the environment [35].
Skin—as any other organ—may be affected by solid malignant tumors. Solid malignant tumors require a blood supply in order to grow larger than a few millimeters in diameter. Tumors induce the growth of new capillary blood vessels (a process called angiogenesis) by producing specific angiogenesis-promoting growth factors [36].
With the presence of these new blood vessels and the increase in blood supply, the thermal response of an area containing a tumor changes with respect to that of tumor-free tissue. By thermal response, we mean the recovery of the skin temperature after a stimulus is applied. Based on this principle, many studies have highlighted the potential of analyzing the thermal recovery curves obtained from dynamic thermography to distinguish malignant from benign lesions [3,4,12,13,37].

2.2. Active Thermography

IRT is a fast, non-contact, and non-invasive technique used to analyze the temperature of an area of interest. It has been used to detect arthritis, allergies, breast cancer, and burns, among others. It allows a high-resolution view of the temperature of the lesion, from the epidermis to the depth of the tissue, in addition to measuring the variations in body temperature [2,38].
There are two types of IRT: passive and active. Passive IRT is used to evaluate the temperature of an object in a steady state, while active IRT is used to evaluate the transient state after a thermal stimulus [1]. Active IRT is considered more powerful than passive IRT because it provides quantitative information related to the thermal properties of the sample [37]. When active IRT is used, one can plot the temperature measured by each pixel with respect to time, obtaining one TRC per pixel.

2.3. Microbolometer Technology

IR technology is divided according to the wavelength response range: 0.75–1.4 μm near IR (NIR), 1.4–3 μm short wave IR (SWIR), 3–8 μm medium wave IR (MWIR), 8–15 μm LWIR, and 15–1000 μm far IR (FIR) [39].
Because skin emissivity is close to 1, the skin can be considered a blackbody. Based on Wien's displacement law, it is possible to define the maximum radiation wavelength of the skin, $\lambda_{max}$ [37]. Typically, the superficial skin temperature is 27 °C, so $\lambda_{max}$ is approximately 10 μm. Therefore, LWIR technology, which is the most appropriate for skin screening, is the most widely used [2].
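As a quick check of this value (not given explicitly in the text), Wien's displacement law with the displacement constant $b \approx 2898~\mu\mathrm{m\,K}$ gives, for a skin surface temperature of 27 °C,

    \lambda_{\max} = \frac{b}{T} = \frac{2898~\mu\mathrm{m\,K}}{(27 + 273.15)~\mathrm{K}} \approx 9.7~\mu\mathrm{m} \approx 10~\mu\mathrm{m}.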
The most popular LWIR commercial detectors are made from an alloy of $\mathrm{Hg}_{1-x}\mathrm{Cd}_{x}\mathrm{Te}$, corresponding to quantum well and non-refrigerated bolometer technologies [40]. Quantum well-infrared photodetector (QWIP) technology is preferred in research and development for its high thermal sensitivity (approximately 30 mK), higher temporal resolution (100 to 200 Hz), and higher spatial resolution (up to 1280 × 1024 pixels) [37]. However, cameras based on QWIP technology are refrigerated, which implies a high cost. As an alternative, non-refrigerated microbolometer technology has emerged, which is now the most popular, especially in its low-cost commercial versions [23].
Recently, low-cost microbolometers have entered the market, with lower sensitivity (about 70 mK), lower spatial resolution (about 320 × 240 pixels), and lower temporal resolution (8 to 25 Hz). To tackle the lower sensitivity of these cameras, Bonmarin et al. [41] proposed using dynamic thermography, specifically the lock-in thermal imaging technique. The lower spatial resolution should not be a problem because it can be improved by choosing a suitable optical setup. Concerning temporal resolution, according to [37], dermatological applications do not require a high temporal resolution because the conduction of heat through human skin is relatively slow (with respect to metals, for example). Based on these fundamentals, we propose using low-cost microbolometer cameras for medical applications, specifically those related to skin cancer.

3. Materials and Methods

This section describes the equipment and dataset used, detailing the capture process. We then explain the proposed data adaptation scheme and the skin cancer detection scheme proposed by Soto and Godoy [17]. Combining both schemes allows us to evaluate the feasibility of using microbolometer cameras for skin cancer detection.

3.1. Infrared Imagers

This study utilized data captured by a high-quality IR camera, QmagiQ [42], which uses quantum dots in a well (DWELL) technology. The data were adapted in terms of thermal, spatial, and temporal resolution to the characteristics of the following microbolometer cameras: Xenics Gobi-640 [43], Opgal Therm-App [44], and Seek Thermal CompactPRO [45]. Table 1 summarizes the technical features of these cameras. All cameras have a similar spectral range; the QmagiQ presents the best features in terms of field of view (FOV), temporal resolution, and NETD, followed by the Xenics Gobi-640, Seek Thermal CompactPRO, and Opgal Therm-App. The focal plane array (FPA) size of the QmagiQ camera is, however, smaller than that of the other cameras.

3.2. Dataset

The dataset is composed of 144 videos acquired by S. E. Godoy in his doctoral research [46] with the support of the University of New Mexico's Dermatology Clinic staff. The data were captured with informed consent, which precludes their public release; the confidentiality of patient information is ensured through an identification system accessible only to the medical staff nurses. The study mainly included Caucasian and Hispanic populations from New Mexico, with participants aged between 26 and 96 years. The main inclusion criterion was that the patient had a lesion suspected of being skin cancer, which was subsequently diagnosed via a biopsy. In summary, the dataset comprises 87 lesions diagnosed as benign and 57 as malignant. Of the malignant lesions, 41 were diagnosed as BCC, 9 as SCC, and 7 as MM.
The data acquisition process and its basic processing for subsequent analysis are described below.

3.2.1. Data Acquisition Process

The data acquisition was carried out using active thermography in a controlled environment, with a room temperature in the range of 20 °C to 22 °C. The acquisition procedure consisted of selecting a region of interest (ROI) with a plastic marker, as shown in Figure 1. We then took a visible image and a 15 s IR sequence of the ROI, used as a reference. Subsequently, using an air conditioning unit, the skin registered in the ROI was cooled for 30 s. After the cooling stage, the skin was allowed to recover its temperature via thermoregulation at room temperature. During the cooling and thermal recovery stages, a 1.5 min IR sequence was captured.

3.2.2. Image Registration Algorithm

For the correct analysis of the acquired images, it is necessary to apply an image registration algorithm, which is responsible for correcting involuntary movements of the patient and aligning the data sequence. In addition, it is useful to differentiate the lesion (pigmented area) from healthy skin in the IR images.
The registration algorithm used corresponds to the one presented in Díaz et al. [47], which consists of the following stages:
  • Manually select the corners of the plastic marker in the visible image and the first IR image. For subsequent images in the IR sequence, the corners are automatically detected using the selections from the previous image as references.
  • Estimate an affine transformation matrix that maps motion between consecutive images (one matrix is estimated for each pair of images).
  • Apply the inverse transformation to each image to align the image sequence relative to the first IR image.
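For illustration only, the following Python sketch shows how such a marker-based alignment could be implemented with OpenCV. It aligns every frame directly to the first one (rather than chaining consecutive-pair transforms, as in [47]), and the function name and inputs are assumptions of this sketch, not the authors' original code.

```python
import cv2
import numpy as np

def register_sequence(frames, corners_per_frame):
    """Align an IR sequence to its first frame using the plastic-marker corners.

    frames            : list of 2D arrays (IR images); frames[0] is the reference.
    corners_per_frame : list of (4, 2) float32 arrays with the marker corners
                        selected/detected in each frame.
    """
    reference_corners = corners_per_frame[0]
    aligned = [frames[0]]
    for frame, corners in zip(frames[1:], corners_per_frame[1:]):
        # Affine transform mapping the current marker corners onto the reference
        # corners (i.e., the inverse of the patient motion), estimated by least squares.
        M, _ = cv2.estimateAffine2D(corners, reference_corners)
        h, w = frame.shape
        aligned.append(cv2.warpAffine(frame.astype(np.float32), M, (w, h),
                                      flags=cv2.INTER_LINEAR))
    return aligned
```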
Once the image registration process is finished, a 3D array $\mathbf{u} \in \mathbb{R}^{I \times J \times K}$ is obtained, where $I$ and $J$ represent the number of horizontal and vertical pixels, respectively, within the ROI, and $K$ denotes the number of images that compose the array (1 visible and $K-1$ IR images). In what follows, we utilize the term $u_k$ to denote the $k$-th frame within the video sequence, i.e., $u_k = \mathbf{u}(\cdot,\cdot,k) \in \mathbb{R}^{I \times J}$. Additionally, we refer to the $(i,j)$-th pixel within the $k$-th frame as $u_{i,j,k}$.
This registration algorithm can shift the images by up to three pixels. Because of this, once the data cube is registered, a 3-pixel border is removed to avoid processing TRCs containing temperature measurements of the plastic marker. A set of nearby TRCs is highly correlated, so in general this does not affect the data processing, thanks to the TRC selection stage of the detection algorithm explained in Section 3.3. With poorer-quality images, the image motion may be greater; however, this can be corrected with digital image processing techniques such as smoothing filters and contrast adjustment.

3.3. Skin Cancer Screening

To evaluate the effect of using lower-quality IR videos in skin cancer detection, we used the machine learning algorithm proposed in a previous work [17]. As shown in Figure 2, the detection scheme is composed of six stages, which are described as follows:
  • Lesion selection. A mask is created over the visible image to delineate the lesion area, generated manually by outlining the pigmented region. Consequently, two sets of TRCs are established: one designated as L, comprising TRCs within the lesion area delineated by the mask, and the other denoted as N, consisting of TRCs within the non-lesion area.
  • Initial temperature estimation. The subsequent step involves the selection of TRCs, which depends on the initial temperature of each TRC. In order to select curves whose initial temperature is less affected by non-uniformities in the cooling process, each TRC is modeled using a double exponential function, defined as follows:
    f_{i,j}(t) = \theta_{i,j}^{(1)} + \theta_{i,j}^{(2)} \exp\left(\theta_{i,j}^{(3)} t\right) + \theta_{i,j}^{(4)} \exp\left(\theta_{i,j}^{(5)} t\right),   (1)
    whose parameters $\theta_{i,j} = [\theta_{i,j}^{(1)}, \ldots, \theta_{i,j}^{(5)}]$ are computed with a nonlinear least-squares fit, such that $f_{i,j}(k T_s) = u_{i,j,k}$. Here, $T_s$ is defined by the inverse of the camera frame rate in frames per second (fps). With this model, the initial temperature is estimated by simply setting $t = 0$ in (1); a fitting sketch in Python is provided after this list.
  • TRCs selection. A reference temperature is calculated as $T_{ref} = E\{f_L(0)\}$, where $E$ is the expectation operator and $f_L(0)$ is the vector of initial temperatures of the modeled TRCs of the lesion area within the $L$ set.
    The selection of points to process considers a margin of error of $p \cdot 100\%$ with respect to $T_{ref}$. In this way, the set of points to be used is defined as follows:
    S = \{ (i,j) : |f_{i,j}(0) - T_{ref}| \le p \cdot T_{ref} \},   (2)
    where, in this case, we experimentally define $p = 0.01$ as a good value. Using the set $S$, the sets of TRCs with similar initial temperatures from the lesion and non-lesion areas are defined as $L^* = L \cap S$ and $N^* = N \cap S$, respectively.
  • Representative TRCs. For each set $L^*$ and $N^*$, a representative TRC is computed as the average of the TRCs among the selected pixels within each set, generating the average curves $\overline{TRC}_{L^*}$ and $\overline{TRC}_{N^*}$.
  • Feature extraction. From the representative TRCs, a combination of features is extracted (the feature vector). In this case, the feature vector is $[E_d, \sigma_{\rho_{BL}}, \sigma_{proj_{BN}}, \sigma_{d_{MN}}]$; these features are detailed in Section 3.3.1.
  • Classifier. As the final step, the feature vector is processed by a classifier, which defines if the lesion is suspicious or not. In this study, we evaluated the performance of several machine learning techniques, including K-nearest neighbors (KNN), SVM with RBF kernel, random forest, and eXtreme Gradient Boosting (XGBoost). The random forest classifier achieved the best results, as detailed in Section 4.2. The results for the other classification techniques are provided in Appendix B.
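As an illustration of the initial temperature estimation stage above, the following Python sketch fits the double exponential model of Equation (1) to a single TRC using SciPy; the initial-guess values and the function names are assumptions made for this sketch, not the original implementation.

```python
import numpy as np
from scipy.optimize import curve_fit

def double_exponential(t, th1, th2, th3, th4, th5):
    # Double exponential TRC model of Equation (1).
    return th1 + th2 * np.exp(th3 * t) + th4 * np.exp(th5 * t)

def fit_trc(trc, fps):
    """Fit one thermoregulation curve sampled every T_s = 1/fps seconds and
    return the five parameters together with the estimated initial temperature f(0)."""
    trc = np.asarray(trc, dtype=float)
    t = np.arange(len(trc)) / fps                          # sample times
    p0 = [trc[-1], trc[0] - trc[-1], -0.1, 0.0, -0.01]     # rough initial guess (assumption)
    theta, _ = curve_fit(double_exponential, t, trc, p0=p0, maxfev=10000)
    return theta, double_exponential(0.0, *theta)
```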

3.3.1. Feature Extraction Techniques

From the representative TRCs of the lesion area ($\overline{TRC}_{L^*}$) and the non-lesion area ($\overline{TRC}_{N^*}$), the following features are extracted:
  • Euclidean distance ($d$). This feature is calculated as the norm of the difference $\overline{TRC}_{L^*} - \overline{TRC}_{N^*}$, normalized by the number of points, i.e., as shown in the following equation:
    d = \frac{1}{K-1} \left\| \overline{TRC}_{L^*} - \overline{TRC}_{N^*} \right\|.   (3)
    The concept behind this feature is as follows: A small Euclidean distance indicates similar curves, making it highly likely that the lesion is benign. Conversely, a large Euclidean distance indicates significant differences in thermal recovery between the curves. This suggests that the lesion is likely malignant, as its thermal recovery behavior deviates from that of normal tissue [3].
  • Energy difference ($E_d$). Let $\overline{\overline{TRC}}_{X^*} = \overline{TRC}_{X^*} - \min(\overline{TRC}_{X^*})$, i.e., the unbiased TRC of the $X^*$ area. The energy difference $E_d$ is calculated as follows:
    E_d = \left\| \overline{\overline{TRC}}_{L^*} \right\|^2 - \left\| \overline{\overline{TRC}}_{N^*} \right\|^2.   (4)
    This feature is closely related to $d$. However, because of the triangle inequality, $E_d \le d$, so $E_d$ quantifies smaller differences than $d$.
  • Statistical similitude features. Here, six base features are defined to measure the similarity of a set of TRCs to a normalized modeled TRC, using the dual exponential model shown in (1). It is assumed here that the modeled TRC has $K$ sample points and, thus, $f_m \in \mathbb{R}^K$. With this, the inner product is computed as $\langle f, g \rangle = f^T g$, where $f, g \in \mathbb{R}^K$ and $T$ denotes the column-vector transpose. $f_m$ is obtained by computing the five model parameters $\theta_{i,j}$ using a nonlinear least-squares fitting approach; these parameters are averaged to obtain the descriptive TRC $f_M$ for each class (namely, cancerous TRCs and non-cancerous TRCs). Then, $f_m$ is forced to have a unit norm, i.e., $\|f_m\| = \sqrt{\langle f_m, f_m \rangle} = 1$.
    Let $f_m$ and $f_n$ denote a model TRC and an arbitrary TRC, respectively, both modeled and normalized (here, it is assumed that $\|f_m\| = 1$). The characteristics of projection, correlation, and Euclidean distance are described below.
    (a)
    Projection. The projection of $f_n$ onto $f_m$ is calculated as $proj_n = \langle f_n, f_m \rangle$. This operation is performed for a set of at least 10 TRCs, and then the mean projection, $\overline{proj}$, and the standard deviation of the projection, $\sigma_{proj}$, are calculated over this set of projections.
    (b)
    Correlation. The correlation of $f_n$ with $f_m$ is calculated using the following expression:
    \rho = \frac{\mathrm{cov}(f_m, f_n)}{\sigma_{f_m} \sigma_{f_n}},   (5)
    where $\mathrm{cov}(f_m, f_n)$ is the covariance between $f_m$ and $f_n$, and $\sigma_{f_m}$ and $\sigma_{f_n}$ are the standard deviations (over the time samples) of $f_m$ and $f_n$, respectively. This operation is performed for a set of at least 10 TRCs, and then $\bar{\rho}$ (the mean correlation) and $\sigma_{\rho}$ (the standard deviation of the correlation) are calculated.
    (c)
    Distance. This parameter is calculated according to (3), but using $f_m$ as a reference and a family of at least 10 TRCs, $f_n$. The mean value $\bar{d}$ and standard deviation $\sigma_d$ are calculated over this set of distance values.
    Considering a model of malignant TRCs and a model of benign TRCs, $f_M$ and $f_B$, respectively, for each data cube, the projection characteristics $\overline{proj}$ and $\sigma_{proj}$, the correlations $\bar{\rho}$ and $\sigma_{\rho}$, and the distances $\bar{d}$ and $\sigma_d$ are calculated for the sets of TRCs $L^*$ and $N^*$. In this way, 24 features are extracted and grouped into four categories based on the origin of the model curve and the analysis group. For example, the malignant model $f_M$ over the non-lesion area $N^*$ allows us to extract the features $\overline{proj}_{MN}$, $\sigma_{proj_{MN}}$, $\bar{\rho}_{MN}$, $\sigma_{\rho_{MN}}$, $\bar{d}_{MN}$, and $\sigma_{d_{MN}}$.
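For concreteness, the statistics above could be computed as in the following NumPy sketch; the function name, input layout, and output keys are illustrative assumptions rather than the paper's code.

```python
import numpy as np

def similitude_features(trcs, f_model):
    """Projection, correlation, and distance statistics of a set of TRCs with
    respect to a model curve f_model (Section 3.3.1).

    trcs    : (N, K) array, one modeled-and-normalized TRC per row (N >= 10).
    f_model : (K,) model TRC of one class (malignant f_M or benign f_B).
    """
    f_model = f_model / np.linalg.norm(f_model)             # enforce ||f_m|| = 1
    projections = trcs @ f_model                            # <f_n, f_m> for each curve
    correlations = np.array([np.corrcoef(f_model, f_n)[0, 1] for f_n in trcs])
    # Normalized Euclidean distance, analogous to Equation (3).
    distances = np.linalg.norm(trcs - f_model, axis=1) / (trcs.shape[1] - 1)
    return {
        "proj_mean": projections.mean(), "proj_std": projections.std(),
        "rho_mean": correlations.mean(), "rho_std": correlations.std(),
        "d_mean": distances.mean(),      "d_std": distances.std(),
    }
```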

3.4. Proposed Data Adaptation Process

The proposed data adaptation scheme was designed to adapt IR videos acquired with a high-quality imager, e.g., as described in Section 3.2, to the characteristics of lower-quality cameras. In this study, we intend to simulate the data acquisition with Xenics Gobi-640, Opgal Therm-App, and Seek Thermal CompactPRO, but the proposed approach can be utilized in any pair of cameras. In order to define the scheme, the temporal, spatial, and temperature resolutions of the cameras were analyzed.
In Figure 3, the proposed model for IR video degradation is presented; as mentioned earlier, it addresses the following three aspects through five stages (a minimal code sketch of the complete pipeline is provided after this list):
  • Temporal resolution. This aspect involves only the temporal downsampling stage. As the temporal resolution of the camera to be mimicked is lower than that of the camera that captured the data, the sequence of IR images is downsampled following the proportion $fps_{HQ} : fps_{cam}$, where $fps_{HQ}$ and $fps_{cam}$ denote the frame rates of the high-quality camera and the camera to be mimicked, respectively.
  • Spatial resolution. This aspect addresses two areas: spatial downsampling and optical distortion.
    (a)
    Spatial downsampling. The QmagiQ camera has a high instantaneous field of view (IFOV); however, the size of its FPA is smaller than the FPA of the cameras to model, in terms of the number of pixels. To avoid introducing artificial elements with unknown effects, the spatial dimensions of the images were not modified.
    (b)
    Point spread function (PSF). Images captured by each camera are usually affected by blurring due to the optical distortions of the camera's lens. The PSF applied corresponds to a 2D model, which was calculated as described by Jara et al. [48]; this model considers both optical and electronic aberrations.
    In order to transfer the optical response of the camera to be simulated to the high-quality videos, the PSF obtained from the lower-quality camera is applied to each frame of the higher-quality camera. Thus, the modified frame is the result of the spatial convolution of the image $u_k$ with the PSF of the camera to simulate, $\mathrm{PSF}_{cam}$:
    \tilde{u}_k = u_k * \mathrm{PSF}_{cam}.   (6)
    It is important to mention that the PSF is usually used to correct optical distortions. In this study, it is used to degrade the images, since such a worst-case scenario is what we want to evaluate. Therefore, when using the real camera, the performance of the detection algorithm is expected to improve if the optical distortion is corrected.
  • Thermal resolution. In order to simulate the thermal sensitivity of a camera, adding the characteristic noise of the camera to be modeled is proposed.
    The IR images contain spatial and temporal noise. According to Feng et al. [49], the noise affecting images captured by microbolometer IR cameras contains low-, medium-, and high-frequency spatial noise, along with horizontal and vertical component noise and non-uniformities.
    For applications utilizing active thermography, temporal noise is equally or more important than spatial noise affecting the camera.
    (a)
    Spatial noise. The spatial noise characteristics of the camera to be simulated are added according to the following expression: $\tilde{u}_k \leftarrow \tilde{u}_k + \beta$, where $\beta$ is the spatial noise resulting from blackbody radiator measurements, as described in Section 3.5.1. This spatial noise already contains the low-, medium-, and high-frequency components proposed by Feng et al. [49], since we extracted them from actual measurements.
    (b)
    Temporal noise. The temporal noise is composed of two components: a low-frequency component ($N^{LF}$) and a high-frequency component ($N^{HF}$). The low-frequency noise is associated with ambient temperature fluctuations, while the high-frequency noise is related to the chemical and electrical characteristics of each camera component.
    Therefore, the response of each detector over time is degraded according to the following expression:
    \tilde{u}_{i,j,k}^{cam} = \tilde{u}_{i,j,k} + N_{i,j,k}^{LF} + N_{i,j,k}^{HF}.   (7)
    To reiterate, the subscripts $i, j, k$ mean that we are evaluating the noise component at the $(i,j)$-th pixel of the $k$-th frame.
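As a summary of the stages above, the following Python sketch outlines one possible implementation of the degradation pipeline. All camera-specific inputs (the PSF, the spatial noise $\beta$, and the temporal noise components) are assumed to have been characterized beforehand as described in Section 3.5; the function signature is an assumption of this sketch, not the authors' implementation.

```python
import numpy as np
from scipy.signal import convolve2d

def degrade_video(cube_hq, fps_hq, fps_cam, psf_cam, beta_cam, noise_lf, noise_hf_std):
    """Sketch of the proposed degradation model (Figure 3).

    cube_hq      : (I, J, K) high-quality IR video.
    psf_cam      : 2D PSF of the camera to simulate.
    beta_cam     : (I, J) characteristic spatial noise of that camera.
    noise_lf     : (I, J, K') low-frequency temporal noise (Fourier-series model).
    noise_hf_std : (I, J) per-detector std of the high-frequency Gaussian noise.
    """
    # 1. Temporal downsampling in the proportion fps_HQ : fps_cam.
    step = max(1, int(round(fps_hq / fps_cam)))
    cube = cube_hq[:, :, ::step].astype(float)

    # 2. Optical distortion: convolve each frame with the target camera's PSF.
    for k in range(cube.shape[2]):
        cube[:, :, k] = convolve2d(cube[:, :, k], psf_cam, mode="same", boundary="symm")

    # 3. Spatial noise: add the blackbody-derived fixed-pattern noise beta.
    cube += beta_cam[:, :, None]

    # 4. Temporal noise: low-frequency drift plus per-detector white Gaussian noise.
    K = cube.shape[2]
    cube += noise_lf[:, :, :K]
    cube += np.random.normal(0.0, noise_hf_std[:, :, None], size=cube.shape)
    return cube
```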

3.5. Noise Characterization of Imagers

3.5.1. Spatial Noise Characterization

As described earlier, the spatial noise of microbolometer IR cameras consists of low-, medium-, and high-frequency spatial noise, horizontal and vertical component noises, and non-uniformities. To transfer these noise features, a characteristic noise β of the camera to be modeled was extracted from measurements on a blackbody radiator.
To extract spatial noise, a sequence of 180 frames was captured. In our study, we considered setting the room temperature to 20 °C because the clinical data acquisition protocol stipulates that the ambient temperature should be in the range of 20 °C to 22 °C [3]. Each camera was positioned in front of the blackbody, ensuring that the entire width of it was visible so that all cameras covered the same field of view at the same time.
Using the cube $\mathbf{u}$, an image $\bar{u}$ was generated by averaging all 180 frames. This effectively removed the temporal noise present in each frame, leaving only the spatial noise components. Then, the spatial noise of a camera, $\beta$, is defined as $\bar{u}$ minus its two-dimensional mean value. Thus, $\beta$ contains the horizontal, vertical, and high-, medium-, and low-frequency noises, as well as the non-uniformities of each camera to be simulated, as previously discussed.
In this study, $\beta$ was calculated for each camera from six measurements on the blackbody, at temperatures between 15 °C and 40 °C in steps of 5 °C. This temperature range was selected to model the camera noise within the range of body temperatures (recall that we want to assess the feasibility of these cameras in medical applications).
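A minimal NumPy sketch of this spatial-noise extraction, assuming the 180 blackbody frames are stacked in a single array (the function name and input layout are assumptions):

```python
import numpy as np

def spatial_noise(blackbody_cube):
    """Estimate the characteristic spatial noise beta of a camera from a
    blackbody sequence (e.g., 180 frames at a fixed blackbody temperature).

    blackbody_cube : (I, J, 180) array of raw frames.
    """
    u_mean = blackbody_cube.mean(axis=2)     # temporal average removes the temporal noise
    beta = u_mean - u_mean.mean()            # subtract the two-dimensional mean value
    return beta
```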

3.5.2. Temporal Noise Characterization

Usually, applications utilizing active thermography require a controlled temperature environment. In our skin cancer detection application, according to the data acquisition protocol, the room was set to a temperature between 20 °C and 22 °C using an air conditioning (AC) unit [3]. In our laboratory experiments with these cameras, we observed that due to the on–off control performed by the AC, fluctuations in ambient temperature affected the camera measurements.
To understand and model the ambient temperature fluctuations generated by the AC affecting the cameras to be simulated, measurements were taken using a Mikron M345 blackbody (manufactured by LumaSense Technologies, Inc., Santa Clara, CA, USA) with the Xenics Gobi-640, Opgal Therm-App, and Seek Thermal CompactPRO microbolometer cameras. All these elements were placed in a 5 m² room, which was conditioned with a 10,000 BTU AC unit. The ambient temperature fluctuations were measured using an Elitech RC-4HC thermometer, manufactured by Elitech Technology, Inc. (Milpitas, CA, USA).
Similar to the spatial noise characterization, measurements were conducted by stabilizing the blackbody between 15 °C and 40 °C with a 5 °C step. These measurements were performed by setting the ambient temperature of the AC to 15 °C, 20 °C, and 25 °C.
In Figure 4a–c, a sample of measurements over 5 min on the blackbody with the Xenics, Opgal, and Seek cameras, respectively, is presented. It can be observed that the Xenics and Seek cameras exhibit jumps, which are effects of the Non-Uniformity Correction (NUC). It is also noticeable that all cameras exhibit low-frequency oscillatory behavior related to ambient temperature fluctuations. For the Opgal camera, a highly correlated behavior with ambient temperature fluctuations is observed, demonstrating a linear relationship during periods of increasing or decreasing ambient temperatures. Changes in ambient temperature are reflected in the camera’s response with a variable delay, and this response is attenuated.
On the other hand, the Xenics and Seek cameras exhibited fluctuations in their measurements over time; their behaviors were not easy to comprehend because the NUC modified the gain of the measurements to correct errors originating from changes in ambient temperature. Applications utilizing active thermography ideally require smoothness over time in the temperature measurements of each detector in a camera. Therefore, jump correction produced by the NUC was applied to the Xenics and Seek cameras, as shown in Figure 5 and Figure 6, respectively.
According to these observations, it is not possible to accurately model low-frequency temporal noise as a function dependent on ambient temperature variations. This is because the NUC makes the camera’s corrective actions unpredictable. However, low-frequency oscillations are observed, which we suggest modeling using a Fourier series for each detector, according to the following equation:
N_{i,j,k}^{LF} = \sum_{z=0}^{Z} \left[ a_{i,j}^{z} \cos\left( z\,\omega_{i,j} \frac{k}{f_s} \right) + b_{i,j}^{z} \sin\left( z\,\omega_{i,j} \frac{k}{f_s} \right) \right].   (8)
Then, the high-frequency component $N_{i,j}^{HF}$ corresponds to white Gaussian noise with standard deviation $\sigma_{i,j}$, whose value is calculated over the difference between the measurement of detector $(i,j)$ over time and its low-frequency component $N_{i,j}^{LF}$.
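One way to realize this per-detector model is sketched below in Python. Estimating the base frequency $\omega_{i,j}$ from the dominant FFT peak and fixing the number of harmonics $Z$ are assumptions of this sketch; the paper does not specify how these quantities are obtained.

```python
import numpy as np

def fit_temporal_noise(residual, fs, n_harmonics=3):
    """Fit the low-frequency Fourier-series model of Equation (8) to the temporal-noise
    residual of one detector, and characterize the remaining high-frequency component
    as white Gaussian noise.

    residual : (K,) temporal noise of detector (i, j), i.e., u_{i,j,.} - f_{i,j}.
    fs       : camera frame rate (frames per second).
    """
    residual = np.asarray(residual, dtype=float)
    k = np.arange(len(residual))

    # Base angular frequency taken from the dominant low-frequency FFT peak (assumption).
    spectrum = np.abs(np.fft.rfft(residual - residual.mean()))
    freqs = np.fft.rfftfreq(len(residual), d=1.0 / fs)
    omega = 2.0 * np.pi * freqs[1:][np.argmax(spectrum[1:])]

    # Least-squares fit of the truncated Fourier series (z = 0 .. Z).
    cols = [np.cos(z * omega * k / fs) for z in range(n_harmonics + 1)]
    cols += [np.sin(z * omega * k / fs) for z in range(n_harmonics + 1)]
    A = np.stack(cols, axis=1)
    coeffs, *_ = np.linalg.lstsq(A, residual, rcond=None)

    n_lf = A @ coeffs                           # low-frequency component N^LF
    sigma_hf = (residual - n_lf).std()          # std of the white Gaussian N^HF
    return n_lf, sigma_hf
```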

4. Results

Two main experiments were conducted and are shown here. The first aims to validate the proposed degradation model, and the second evaluates the feasibility of using low-cost IR cameras in skin cancer detection using active thermography.

4.1. Validation of the Degradation Model

To validate the degradation model, simultaneous videos were acquired with the three cameras positioned over a lesion on a volunteer. The videos were captured according to the protocol described in Section 3.2.1, and each video was then registered using the registration algorithm described in Section 3.2.2.
To validate the model, the video acquired with the Xenics Gobi-640 camera was considered the high-quality video; thus, the videos captured with the Opgal Therm-App and Seek Thermal CompactPRO cameras, which have lower quality and cost, were mimicked. Following the methodology described in Section 3.4, the video captured with the Xenics camera was adapted to the characteristics of the other cameras. First, the frame rate and image dimensions were reduced to match those of the camera being modeled. Second, the PSF of each camera was applied to the reduced-size imagery. Third, the characteristic spatial noise was added to the degraded imagery. Fourth, the temporal noise specific to the camera being modeled was also added.
Thus, let $u_{i,j,\cdot}$ be a TRC captured at position $(i,j)$; the dot indicates that we are considering all time samples of the video. The temporal noise present in $u_{i,j,\cdot}$ is $N_{T_{i,j}} = u_{i,j,\cdot} - f_{i,j}$, where $f_{i,j}$ corresponds to the curve $u_{i,j,\cdot}$ modeled as a double exponential, as defined in (1). Then, the low-frequency component is obtained by modeling $N_{T_{i,j}}$ as a Fourier series, as defined in Section 3.5.2, to obtain $N_{i,j}^{LF}$. Meanwhile, the high-frequency component corresponds to white Gaussian noise, whose standard deviation is calculated over the residual noise, as described above.
Now, with the data cube $\mathbf{u}^c$ acquired using camera $c$ and the data cube $\tilde{\mathbf{u}}^c$ from a high-quality camera degraded to mimic camera $c$, the Pearson correlation coefficient $\rho(u_{i,j,\cdot}^{c}, \tilde{u}_{i,j,\cdot}^{c})$ was calculated to estimate the level of similarity between the actual and simulated TRCs. Mimicking the Opgal Therm-App camera, which included 6889 TRCs, achieved an average correlation of $0.9756 \pm 0.0104$. Meanwhile, mimicking the Seek Thermal CompactPRO camera, which included 7031 TRCs, achieved an average correlation of $0.9271 \pm 0.0861$.
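This per-pixel comparison could be reproduced, for example, with the following NumPy sketch (the function name and input layout are assumptions of the sketch):

```python
import numpy as np

def mean_pearson(cube_real, cube_sim):
    """Average Pearson correlation between real and simulated TRCs,
    computed pixel by pixel over the registered ROI.

    cube_real, cube_sim : (I, J, K) data cubes with matching dimensions.
    """
    I, J, _ = cube_real.shape
    rho = np.empty((I, J))
    for i in range(I):
        for j in range(J):
            rho[i, j] = np.corrcoef(cube_real[i, j, :], cube_sim[i, j, :])[0, 1]
    return rho.mean(), rho.std()
```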
Given that the average Pearson correlation coefficient exceeded 0.9, the correlation between the real and simulated curves was deemed very strong [50]; this indicates a high degree of similarity between the data collected with the actual camera and the data that mimicked it. Therefore, we consider the proposed degradation model to be valid within the scope of the TRC measurements. Clearly, more evidence must be collected to validate our model to mimic more cameras and different case studies.
A sample of the degradation model applied to the data cube acquired with the Xenics camera, adapted to the characteristics of the Opgal camera, is presented in Figure 7. Figure 7a corresponds to an image of the IR video acquired with the Xenics camera, and Figure 7d to a TRC of that cube. Figure 7b corresponds to an image from the video acquired with the Opgal camera, and Figure 7e to a TRC of that cube. Figure 7c corresponds to an image of the video acquired with the Xenics camera modified to the features of the Opgal camera according to the proposed model, and Figure 7f to a TRC of the degraded cube.
Similarly, a sample of the degradation model applied to the same data cube, modified to mimic the features of the Seek camera, is presented in Figure 8. Figure 8a corresponds to an image of the IR video acquired with the Xenics camera, and Figure 8d to a TRC of that cube. Figure 8b corresponds to an image from the video acquired with the Seek camera, and Figure 8e to a TRC of that cube. Figure 8c corresponds to an image of the video acquired with the Xenics camera modified to the features of the Seek camera according to the proposed model, and Figure 8f to a TRC of the degraded cube.
The proposed method does not aim to mimic the TRCs exactly at every sample point, but rather to transfer all the noise characteristics from one camera to the other. Even though the TRCs are not visually identical, we observe that the low-frequency oscillations and high-frequency noise of the low-cost cameras are included in the modeled data (see Figure 8e,f, for example). Some of the differences that are not transferred are due to the NUC correction shutter that these cameras keep constantly operating.
A larger graphical sample is presented in Appendix A, where the results of mimicking each camera for the four types of lesions studied in this work are shown.

4.2. Feasibility Study of Using Low-Cost IR Cameras in Skin Cancer Detection

We now utilize the degraded data cubes to assess the performance of the skin cancer detection algorithm described in Section 3.3. In this way, we can evaluate the performance one may expect when low-cost infrared imagers are used for this task.
To understand the variability of the classifier performance, the classifier was trained on each of the modeled datasets using the bootstrap method, which involves taking multiple samples with replacement from the original dataset to generate a new dataset of the same size. The resampled dataset was then divided, allocating 80% of the data for training and 20% for testing. This process was repeated 2000 times. These results were previously reported by our group.
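The bootstrap protocol described above could be sketched with scikit-learn as follows; the random forest hyperparameters and function names are illustrative assumptions, not the values used in the study.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, recall_score

def bootstrap_evaluation(X, y, n_rounds=2000, seed=0):
    """Bootstrap evaluation following the protocol in the text: resample the dataset
    with replacement, split 80/20, train, and score.

    X : (n_samples, n_features) feature vectors; y : binary labels (1 = malignant).
    """
    rng = np.random.default_rng(seed)
    accs, tprs = [], []
    n = len(y)
    for _ in range(n_rounds):
        idx = rng.integers(0, n, size=n)            # sample with replacement
        Xb, yb = X[idx], y[idx]
        split = int(0.8 * n)                        # 80% train / 20% test
        clf = RandomForestClassifier(n_estimators=200, random_state=0)
        clf.fit(Xb[:split], yb[:split])
        y_pred = clf.predict(Xb[split:])
        accs.append(accuracy_score(yb[split:], y_pred))
        tprs.append(recall_score(yb[split:], y_pred))   # sensitivity (TPR)
    return np.mean(accs), np.std(accs), np.mean(tprs), np.std(tprs)
```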
The algorithm performance with the original dataset acquired with the QmagiQ camera is shown in the second column of Table 2. The results obtained by degrading these videos to match the characteristics of the Xenics, Opgal, and Seek cameras are presented in the following columns of the same table. The reported indices include accuracy, true positive rate (TPR), true negative rate (TNR), and positive predictive value (PPV), indicating the minimum, maximum, average (AVG), and standard deviation (SD) values for the evaluation.
As expected, the best performance is achieved with the QmagiQ camera, with an average accuracy of 87.29%, sensitivity (TPR) of 87.26%, specificity of 87.15%, and precision (PPV) of 87.39%. The adaptations to the Xenics and Opgal cameras offer a similar performance to that of the QmagiQ camera. The Xenics camera reached an accuracy of 84.33% and a sensitivity of 83.03%, while the Opgal camera achieved an accuracy of 84.20% and a sensitivity of 83.23%. Based on these results, both the Xenics and Opgal cameras are suitable for skin cancer detection, as they achieve similar levels of performance to the QmagiQ camera, with approximately a 3% difference in accuracy.
The worst performance was observed when adapting to the characteristics of the Seek camera, with an average accuracy of 82.13%, sensitivity of 79.77%, and specificity (TNR) of 83.74%. The sensitivity was approximately 8% lower than that of the QmagiQ camera. Consequently, this camera is not really suitable for the skin cancer application we are investigating.
Regarding the system’s performance in detecting the different types of skin cancer, the highest sensitivity was obtained in detecting SCC lesions, with values of 87.66%, 90.92%, 94.74%, and 95.94%, with the Seek, Opgal, Xenics, and QmagiQ cameras, respectively. Concerning BCC lesions, sensitivity values of 78.68%, 83.60%, 83.18%, and 87.23% were achieved with the Seek, Opgal, Xenics, and QmagiQ cameras, respectively. The worst performance was obtained in detecting MM lesions, with sensitivity values of 74.98%, 71.68%, 66.91%, and 76.42%, with the Seek, Opgal, Xenics, and QmagiQ cameras, respectively.
Table 3 presents the performances in terms of the sensitivity and specificity of the proposed methods using different cameras, along with the performances of highly trained dermatologists using naked eye evaluations and dermoscopy. The results obtained by Magalhaes et al. [16] using passive thermography and deep learning are also presented. Dermoscopy significantly outperformed naked eye evaluation, demonstrating very high performance. However, it is important to note that this level of performance is achieved by dermatologists with years of clinical training. Along with this, the reported performances correspond to different detection problems. The passive thermography analysis method using deep learning proposed by Magalhaes et al. [16] outperforms dermatologists in this detection problem. However, when distinguishing between malignant and benign lesions (with the malignant class including MM, BCC, and SCC), the algorithm’s performance drops considerably. This is where the relevance of active thermography analysis becomes evident, achieving over 80% sensitivity and 85% specificity. This work demonstrates that it is possible to significantly reduce equipment costs while maintaining similar performance.

5. Discussion and Conclusions

In this work, a degradation model of IR videos is proposed and evaluated to mimic the performances of different cameras in medical applications under laboratory conditions. Based on this model, videos captured with a high-quality camera were degraded to the characteristics of three low-cost imagers, namely, the Xenics Gobi-640, Opgal Therm-App, and Seek Thermal CompactPRO cameras. These synthetic datasets were then used to evaluate the feasibility of using the modeled cameras over the skin cancer detection algorithm proposed by Soto and Godoy [17].
The proposed degradation model focuses on three key aspects to accurately mimic the performance of any camera: temporal, spatial, and thermal resolution. It has been demonstrated that the model achieves a high level of similarity in the TRCs, which is the most important aspect in applications that use active thermography. Moreover, qualitative inspection shows that, spatially, the model manages to transfer the texture of the images captured by the modeled cameras.
The proposed model may not fully transfer the characteristics of a lower-quality camera to images captured with a higher-quality camera. The main characteristics that the model does not consider are as follows:
  • Adjustment of the image size. The model does not adjust the size of high-quality images to match those captured with lower-quality cameras when the FPA of the higher-quality camera is larger. This approach was not considered because it requires an interpolation process, which may introduce noise that is not characteristic of the camera being simulated.
  • Shape and size of the detector. This feature is critical for determining the minimum size of detectable objects. However, since the size of skin lesions is significantly larger than the detector size, this characteristic was not considered. The size of a typical detector in microbolometer technology is approximately 20 μm; with the right optics, it would be possible to detect a 40 × 40 μm lesion.
  • Temporal noise introduced by the shutter. When analyzing the temporal variations in the cameras due to changes in ambient temperature, the camera adjusts the offset and modifies the gain. However, incorporating this feature is problematic because the camera’s logic for applying these adjustments is unknown and cannot be determined.
Despite not considering the points mentioned above, the proposed model achieves a high similarity between the degraded videos and the original ones. We anticipate that when these cameras are used in a clinical environment, their behavior will mirror what was described in this study. This is because the cameras were evaluated in one of the worst scenarios in terms of ambient temperature fluctuations. The air-conditioning unit aggressively controlled the room temperature, and when activated, it decreased the temperature by approximately 5 °C, which translated into a drift in the temperature measurements. However, we do not rule out the possibility that the camera measurements might be influenced by other issues that we have not yet considered.
We believe that this model can be useful for evaluating the use of an IR camera in different applications, without the need to spend time, space, and other resources generating a dataset with a camera that does not fit the requirements of the application. However, as we have shown, IR cameras are highly affected by the ambient temperature, so it is important to characterize the behavior of the camera in the environment in which it will be used, so that the simulation is as close as possible to its real performance. Nevertheless, some nuances may not be captured by our model, and addressing them is the focus of our current research.
We showed that the model manages to simulate the behaviors of the three cameras studied, achieving a high similarity of TRCs, with a Pearson correlation coefficient higher than 0.9. We believe that this model can be applied to most microbolometer-technology cameras. However, we cannot be certain, especially for cameras with worse characteristics than those studied.
For our case study, we demonstrated that—within certain boundaries—the Xenics Gobi-640 and Opgal Therm-App are the most suitable IR cameras for skin cancer detection using active thermography. This is because their characteristics allow the evaluated algorithm to achieve similar performance to the high-quality camera. Moreover, the NUC approach in these two cameras is less aggressive, allowing discontinuities to be easily corrected over time. Meanwhile, the Seek camera presents an average decline of 5% in accuracy and 7% in TPR, so we do not consider it suitable for skin cancer detection using active thermography.
It is relevant to analyze the performance of the tool in detecting different types of skin cancer to observe its robustness in different scenarios. The performance of the system is outstanding when processing SCC-type lesions, reaching an average sensitivity between 87.66% and 95.94%; with BCC-type lesions, an average sensitivity between 78.68% and 87.23% is achieved, whereas, when analyzing MM lesions, the average sensitivity is in the range of 66.91% to 76.42%. It is important to note that the dataset used is small, with only seven cases of MM, which implies that the tests performed with the bootstrap technique contain at most two IR cubes of MM lesions. This is a disadvantage when training the detection models, because the data cannot represent the general behavior of this type of lesion, and at evaluation time it implies that missing a single case reduces the sensitivity to 50%. Because of this, it is very important to increase the dataset to make the system more robust.
This study was conducted using a dataset captured from the New Mexican population, primarily involving Caucasian and Hispanic individuals. Although we consider the dataset small, it is representative of this demographic. We strongly believe that the malignancy of the lesions is encoded in the thermal recovery of the skin, so we consider it important to increase the dataset to ensure it represents all types of populations. We are currently working to conduct a similar study in the Chilean population of the Biobío region. Conducting a clinical study involves many challenges, starting with the authorization of the clinical field, which must ensure the confidentiality of the patients' data and that the patients are not exposed to any treatment that is harmful to their physical or mental health. For this study, we obtained informed consent from all subjects, and the clinical staff was responsible for anonymizing the data, ensuring that we could never correlate patient identities with their data. Another challenge is adjusting to the limited space and time available to capture the data in a real public-health clinical setting. This was a main motivator for developing the video degradation model, which allowed us to select the right equipment for the research. This model reduced the time required to use multiple devices simultaneously and improved our ability to work comfortably in constrained spaces.
Most algorithms attempt to find a good approach to detecting skin cancer with the best available sensor; our research attempts to change this paradigm by evaluating the feasibility of these algorithms when applied to data acquired with low-cost sensors. The similarity of the results between the QmagiQ, Xenics, and Opgal cameras is due to the robustness of the detection algorithm. As evidenced by the results presented in Section 4.2 and Appendix B, more advanced classification techniques such as random forest, SVM, and XGBoost allow for similar performance in skin cancer detection when using high- or low-quality technology, as opposed to simpler algorithms such as KNN. Thus, we can conclude that, thanks to advances in machine learning and feature extraction techniques, it is possible to utilize lower-quality technology. We hope to report better results soon; we are developing new statistical tools to extract the spatial thermal information of a process that is hidden within noisy TRCs. For this, the degradation model will play a key role in understanding the way actual data are acquired with low-cost imagers.
As with any other type of cancer, early detection is key for skin cancer patients. The tool presented in this work uses non-invasive, non-contact, and low-cost technology that can screen a suspicious lesion within a few minutes. The portability and low cost of our tool allow its rapid adoption, making it accessible in primary care centers, even in difficult-to-access villages where it is complex for a patient to be evaluated by a trained dermatologist. Our tool also aims to support specialists in deciding whether or not to perform a biopsy; with this, we hope to contribute to the optimization of resources by avoiding unnecessary biopsies. In this way, our tool contributes to public health by aiding in the early detection of skin cancer. It serves as an initial screening to refer patients to a dermatologist, supports the decision to perform a biopsy, and helps reduce the costs associated with more severe diseases caused by the late detection of cancerous lesions.

Author Contributions

Conceptualization, methodology, investigation, software, validation, formal analysis, writing—original draft and visualization, R.F.S.; resources, conceptualization, methodology, supervision, writing—review and editing, S.E.G. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Agency of Research and Development (ANID) of Chile under the doctoral scholarship PCHA/Doctorado Nacional Folio 2019-21191485.

Institutional Review Board Statement

The dataset used was captured by the project “Functional infrared imaging for melanoma in patients”, which was conducted in accordance with the Declaration of Helsinki, and approved by the Human Research Review Committee of the University of New Mexico (protocol code 10-304 approved on 15 October 2010 and renewed annually until the year 2013).

Informed Consent Statement

In this work, we used the active thermography dataset prepared by S. E. Godoy in his doctoral research with the support of the University of New Mexico Dermatology Clinic staff. The volunteers participating in S.E. Godoy’s research provided their written, informed consent prior to participation.

Data Availability Statement

The dataset used is private due to the confidentiality with which it was acquired.

Acknowledgments

We acknowledge the following graduates of the University of Concepción: Luis de la Cerda, Constanza Santibañez, Paulina Vejar, Silvana Díaz, Fabián Quiroz, and Kathyana Perez, whose undergraduate theses were instrumental in conducting this study. Their contributions included providing algorithms, presenting preliminary results, preparing equipment, and managing the permits required to acquire data at the Regional Hospital of Concepción, among other tasks.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
BCC: basal cell carcinoma
SCC: squamous cell carcinoma
AK: actinic keratosis
MM: malignant melanoma
fps: frames per second
UV: ultraviolet
FOV: field of view
FPA: focal plane array
LWIR: longwave infrared
IR: infrared
IRT: infrared thermography
QWIP: quantum well-infrared photodetector
NUC: non-uniformity correction
ROI: region of interest
SVM: support vector machine
RBF: radial basis function
KNN: K-nearest neighbors
XGBoost: eXtreme Gradient Boosting
TRC: thermoregulation curve
DWELL: quantum dots in a well
NETD: noise equivalent temperature difference
PSF: point spread function
TPR: true positive rate
TNR: true negative rate
PPV: positive predictive value

Appendix A. IR Video Degradation Sample

In this section, we present graphical samples of the results from degrading an active thermography IR video for cases of benign and malignant lesions, specifically MM, BCC, and SCC, corresponding to Figure A1, Figure A2, Figure A3 and Figure A4, respectively.
The TRCs from the QmagiQ camera exhibit a low noise level, as do its images. Among the degraded images, the best quality is achieved with the Xenics camera, followed by the Opgal and Seek cameras. It is evident that the model successfully transfers non-uniformities, as well as vertical and horizontal noise, transferring the characteristic texture of each camera. Among the TRCs, the model successfully transfers the low-frequency noise associated with ambient temperature fluctuations, with variations in the noise affecting each detector.
Figure A1. Sample results of the degradation of an IR video to the characteristics of different simulated cameras in the case of a benign lesion. The top row shows an image from the video and the bottom row shows characteristic TRCs of the mole and non-mole areas. (a,e) Correspond to high-quality data captured with the QmagiQ camera; (b,f) data adapted to Xenics Gobi-640; (c,g) data adapted to Opgal Therm-App; (d,h) data adapted to Seek Thermal CompactPRO.
Figure A2. Sample results of the degradation of an IR video to the characteristics of different simulated cameras in a case of a malignant lesion, diagnosed as MM. The top row shows an image from the video and the bottom row shows characteristic TRCs of the mole and non-mole areas. (a,e) Correspond to high-quality data captured with the QmagiQ camera; (b,f) data adapted to Xenics Gobi-640; (c,g) data adapted to Opgal Therm-App; (d,h) data adapted to Seek Thermal CompactPRO.
Figure A3. Sample results of the degradation of an IR video to the characteristics of different simulated cameras in the case of a malignant lesion, diagnosed as BCC. The top row shows an image from the video and the bottom row shows characteristic TRCs of the mole and non-mole areas. (a,e) Correspond to high-quality data captured with the QmagiQ camera; (b,f) data adapted to Xenics Gobi-640; (c,g) data adapted to Opgal Therm-App; (d,h) data adapted to Seek Thermal CompactPRO.
Figure A4. Sample results of the degradation of an IR video to the characteristics of different simulated cameras in the case of a malignant lesion, diagnosed as SCC. The top row shows an image from the video and the bottom row shows characteristic TRCs of the mole and non-mole areas. (a,e) Correspond to high-quality data captured with the QmagiQ camera; (b,f) data adapted to Xenics Gobi-640; (c,g) data adapted to Opgal Therm-App; (d,h) data adapted to Seek Thermal CompactPRO.

Appendix B. Performance of the Detection Algorithm Using Different Machine Learning Techniques

This appendix presents the results of the detection system for the different cameras analyzed, using KNN, SVM, and XGBoost classifiers (Table A1, Table A2, and Table A3, respectively). The worst performance is obtained with the KNN classifier, followed by XGBoost and SVM. Note that both XGBoost and SVM perform slightly worse than the random forest classifier reported in Section 4.2.
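As an illustration of how such a comparison can be set up, the following sketch evaluates KNN, SVM (RBF kernel), XGBoost, and random forest classifiers with cross-validated accuracy. It is a generic scikit-learn/xgboost recipe under our own assumptions (the feature extraction, repetitions, and hyperparameters are not those used in this study); `X` and `y` stand for hypothetical TRC-derived features and benign/malignant labels.

```python
from sklearn.model_selection import cross_val_score, StratifiedKFold
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from xgboost import XGBClassifier

def compare_classifiers(X, y, seed=0):
    """Cross-validated accuracy for the four classifier families (illustrative)."""
    models = {
        "KNN": make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=5)),
        "SVM (RBF)": make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale")),
        "XGBoost": XGBClassifier(n_estimators=200, max_depth=4,
                                 learning_rate=0.1, eval_metric="logloss"),
        "Random forest": RandomForestClassifier(n_estimators=300, random_state=seed),
    }
    cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=seed)
    return {name: cross_val_score(model, X, y, cv=cv, scoring="accuracy").mean()
            for name, model in models.items()}

# Example usage with a hypothetical feature matrix X (lesions x features) and labels y:
# for name, acc in compare_classifiers(X, y).items():
#     print(f"{name}: {100 * acc:.2f}% mean accuracy")
```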
Table A1. Performance of the skin cancer detection algorithm using a KNN classifier, processing data captured with a high-quality QmagiQ camera and adapted to features of the Xenics, Opgal, and Seek cameras.
Index          | QmagiQ (Original)             | Xenics                        | Opgal                         | Seek
               | Min.   Max.    AVG ± SD       | Min.   Max.    AVG ± SD       | Min.   Max.    AVG ± SD       | Min.   Max.    AVG ± SD
Accuracy (%)   | 48.28  100.00  78.39 ± 8.13   | 44.83  100.00  76.59 ± 8.15   | 41.38  96.55   75.37 ± 8.40   | 41.38  96.55   70.85 ± 8.83
TPR (%)        | 11.11  100.00  70.87 ± 15.39  | 9.09   100.00  66.57 ± 15.92  | 5.88   100.00  65.80 ± 16.19  | 7.69   100.00  58.97 ± 16.33
TPR MM (%)     | 0.00   100.00  53.41 ± 42.83  | 0.00   100.00  45.45 ± 42.45  | 0.00   100.00  52.73 ± 42.87  | 0.00   100.00  58.18 ± 42.16
TPR BCC (%)    | 0.00   100.00  71.44 ± 17.98  | 0.00   100.00  66.77 ± 19.28  | 0.00   100.00  65.87 ± 19.11  | 0.00   100.00  59.03 ± 19.56
TPR SCC (%)    | 0.00   100.00  82.25 ± 31.04  | 0.00   100.00  82.26 ± 31.26  | 0.00   100.00  75.85 ± 35.20  | 0.00   100.00  60.65 ± 39.37
TNR (%)        | 38.89  100.00  83.44 ± 9.97   | 41.18  100.00  83.25 ± 10.18  | 40.00  100.00  81.72 ± 10.48  | 26.67  100.00  78.75 ± 11.41
PPV (%)        | 11.11  100.00  74.26 ± 14.24  | 16.67  100.00  73.00 ± 15.00  | 14.29  100.00  70.83 ± 14.79  | 14.29  100.00  65.21 ± 15.92
Table A2. Performance of the skin cancer detection algorithm using an SVM classifier, processing data captured with a high-quality QmagiQ camera and adapted to features of the Xenics, Opgal, and Seek cameras.
Index          | QmagiQ (Original)             | Xenics                        | Opgal                         | Seek
               | Min.   Max.    AVG ± SD       | Min.   Max.    AVG ± SD       | Min.   Max.    AVG ± SD       | Min.   Max.    AVG ± SD
Accuracy (%)   | 51.72  100.00  85.03 ± 7.31   | 51.72  100.00  84.12 ± 7.28   | 58.62  100.00  84.55 ± 7.33   | 48.28  100.00  79.73 ± 7.99
TPR (%)        | 11.11  100.00  82.09 ± 15.60  | 12.50  100.00  80.33 ± 15.45  | 22.22  100.00  80.25 ± 13.86  | 15.38  100.00  73.15 ± 15.12
TPR MM (%)     | 0.00   100.00  73.57 ± 38.00  | 0.00   100.00  69.63 ± 39.45  | 0.00   100.00  72.96 ± 38.23  | 0.00   100.00  68.69 ± 39.78
TPR BCC (%)    | 0.00   100.00  81.77 ± 17.07  | 0.00   100.00  81.26 ± 17.11  | 0.00   100.00  81.23 ± 15.71  | 11.11  100.00  74.02 ± 17.31
TPR SCC (%)    | 0.00   100.00  90.72 ± 25.04  | 0.00   100.00  85.83 ± 28.51  | 0.00   100.00  82.03 ± 31.62  | 0.00   100.00  73.18 ± 36.24
TNR (%)        | 43.75  100.00  87.12 ± 9.47   | 44.44  100.00  86.65 ± 9.43   | 52.94  100.00  87.42 ± 8.77   | 29.41  100.00  84.20 ± 10.04
PPV (%)        | 14.29  100.00  81.47 ± 12.70  | 25.00  100.00  80.53 ± 12.89  | 20.00  100.00  81.15 ± 12.48  | 25.00  100.00  75.86 ± 14.15
Table A3. Performance of the skin cancer detection algorithm with an XGBoost classifier, processing data captured with a high-quality QmagiQ camera and adapted to features of the Xenics, Opgal, and Seek cameras.
Index          | QmagiQ (Original)             | Xenics                        | Opgal                         | Seek
               | Min.   Max.    AVG ± SD       | Min.   Max.    AVG ± SD       | Min.   Max.    AVG ± SD       | Min.   Max.    AVG ± SD
Accuracy (%)   | 55.17  100.00  85.69 ± 6.76   | 51.72  100.00  82.66 ± 7.41   | 55.17  100.00  82.64 ± 7.36   | 48.28  100.00  80.25 ± 7.75
TPR (%)        | 25.00  100.00  85.18 ± 12.31  | 28.57  100.00  81.26 ± 13.17  | 21.43  100.00  80.19 ± 13.59  | 14.29  100.00  79.19 ± 14.02
TPR MM (%)     | 0.00   100.00  76.21 ± 36.24  | 0.00   100.00  64.17 ± 40.81  | 0.00   100.00  67.43 ± 39.53  | 0.00   100.00  71.17 ± 38.85
TPR BCC (%)    | 16.67  100.00  84.39 ± 14.50  | 14.29  100.00  81.38 ± 15.42  | 0.00   100.00  80.01 ± 16.06  | 11.11  100.00  78.19 ± 16.61
TPR SCC (%)    | 0.00   100.00  96.36 ± 15.48  | 0.00   100.00  94.48 ± 18.60  | 0.00   100.00  91.22 ± 23.38  | 0.00   100.00  88.92 ± 25.86
TNR (%)        | 35.00  100.00  86.05 ± 9.18   | 50.00  100.00  83.67 ± 9.95   | 47.37  100.00  84.34 ± 9.91   | 38.46  100.00  80.93 ± 10.92
PPV (%)        | 22.22  100.00  80.45 ± 12.24  | 25.00  100.00  77.05 ± 13.21  | 20.00  100.00  77.62 ± 13.11  | 22.22  100.00  73.85 ± 13.40

References

  1. Bagavathiappan, S.; Lahiri, B.; Saravanan, T.; Philip, J.; Jayakumar, T. Infrared thermography for condition monitoring—A review. Infrared Phys. Technol. 2013, 60, 35–55. [Google Scholar] [CrossRef]
  2. Lahiri, B.; Bagavathiappan, S.; Jayakumar, T.; Philip, J. Medical applications of infrared thermography: A review. Infrared Phys. Technol. 2012, 55, 221–235. [Google Scholar] [CrossRef] [PubMed]
  3. Godoy, S.E.; Ramirez, D.A.; Myers, S.A.; von Winckel, G.; Krishna, S.; Berwick, M.; Padilla, R.S.; Sen, P.; Krishna, S. Dynamic infrared imaging for skin cancer screening. Infrared Phys. Technol. 2015, 70, 147–152. [Google Scholar] [CrossRef]
  4. Godoy, S.E.; Hayat, M.M.; Ramirez, D.A.; Myers, S.A.; Padilla, R.S.; Krishna, S. Detection theory for accurate and non-invasive skin cancer diagnosis using dynamic thermal imaging. Biomed. Opt. Express 2017, 8, 2301–2323. [Google Scholar] [CrossRef] [PubMed]
  5. Kaczmarek, M.; Nowakowski, A. Active dynamic thermography in medical diagnostics. In Application of Infrared to Biomedical Sciences; Springer: Singapore, 2017; pp. 291–310. [Google Scholar] [CrossRef]
  6. Pauk, J.; Ihnatouski, M.; Wasilewska, A. Detection of inflammation from finger temperature profile in rheumatoid arthritis. Med. Biol. Eng. Comput. 2019, 57, 2629–2639. [Google Scholar] [CrossRef] [PubMed]
  7. Frize, M.; Adéa, C.; Payeur, P.; Gina Di Primio, M.D.; Karsh, J.; Ogungbemile, A. Detection of rheumatoid arthritis using infrared imaging. In Proceedings of the Medical Imaging 2011: Image Processing, Lake Buena Vista, FL, USA, 14–16 February 2011; Dawant, B.M., Haynor, D.R., Eds.; International Society for Optics and Photonics, SPIE: Bellingham, WA, USA, 2011; Volume 7962, p. 79620M. [Google Scholar] [CrossRef]
  8. Nur, R.; Frize, M. Image processing of infrared thermal images for the detection of necrotizing enterocolitis. In Proceedings of the Medical Imaging 2013: Image Processing, Lake Buena Vista, FL, USA, 10–13 February 2013; International Society for Optics and Photonics: Bellingham, WA, USA, 2013; Volume 8669, p. 86692M. [Google Scholar] [CrossRef]
  9. Alwashmi, M.F. The Use of Digital Health in the Detection and Management of COVID-19. Int. J. Environ. Res. Public Health 2020, 17, 2906. [Google Scholar] [CrossRef] [PubMed]
  10. Taylor, W.; Abbasi, Q.H.; Dashtipour, K.; Ansari, S.; Shah, S.A.; Khalid, A.; Imran, M.A. A Review of the State of the Art in Non-Contact Sensing for COVID-19. Sensors 2020, 20, 5665. [Google Scholar] [CrossRef] [PubMed]
  11. Verstockt, J.; Verspeek, S.; Thiessen, F.; Tjalma, W.A.; Brochez, L.; Steenackers, G. Skin Cancer Detection Using Infrared Thermography: Measurement Setup, Procedure and Equipment. Sensors 2022, 22, 3327. [Google Scholar] [CrossRef] [PubMed]
  12. Buzug, T.M.; Schumann, S.; Pfaffmann, L.; Reinhold, U.; Ruhlmann, J. Functional infrared imaging for skin-cancer screening. In Proceedings of the 2006 International Conference of the IEEE Engineering in Medicine and Biology Society, New York, NY, USA, 30 August–3 September 2006; pp. 2766–2769. [Google Scholar] [CrossRef]
  13. Çetingül, M.P.; Herman, C. Quantification of the thermal signature of a melanoma lesion. Int. J. Therm. Sci. 2011, 50, 421–431. [Google Scholar] [CrossRef]
  14. Di Carlo, A.; Elia, F.; Desiderio, F.; Catricalà, C.; Solivetti, F.M.; Laino, L. Can video thermography improve differential diagnosis and therapy between basal cell carcinoma and actinic keratosis? Dermatol. Ther. 2014, 27, 290–297. [Google Scholar] [CrossRef]
  15. Magalhaes, C.; Vardasca, R.; Rebelo, M.; Valenca-Filipe, R.; Ribeiro, M.; Mendes, J. Distinguishing melanocytic nevi from melanomas using static and dynamic infrared thermal imaging. J. Eur. Acad. Dermatol. Venereol. 2019, 33, 1700–1705. [Google Scholar] [CrossRef] [PubMed]
  16. Magalhaes, C.; Tavares, J.M.R.; Mendes, J.; Vardasca, R. Comparison of machine learning strategies for infrared thermography of skin cancer. Biomed. Signal Process. Control 2021, 69, 102872. [Google Scholar] [CrossRef]
  17. Soto, R.F.; Godoy, S.E. A novel feature extraction approach for skin cancer screening using active thermography. In Proceedings of the 2023 IEEE Latin American Conference on Computational Intelligence (LA-CCI), Recife, Brazil, 29 October–1 November 2023; pp. 1–6. [Google Scholar] [CrossRef]
  18. Bu, C.; Xu, H.; Mao, Z.; Zhang, D.; Pu, C. Non-destructive testing theoretical study on skin tumor detection using long-pulsed infrared thermal wave testing technology. Therm. Sci. 2019, 23, 1401–1408. [Google Scholar] [CrossRef]
  19. Barros, T.C.; Figueiredo, A.A.A. Three-dimensional numerical evaluation of skin surface thermal contrast by application of hypothermia at different depths and sizes of the breast tumor. Comput. Methods Programs Biomed. 2023, 236, 107562. [Google Scholar] [CrossRef]
  20. Narayanamurthy, V.; Padmapriya, P.; Noorasafrin, A.; Pooja, B.; Hema, K.; Nithyakalyani, K.; Samsuri, F. Skin cancer detection using non-invasive techniques. RSC Adv. 2018, 8, 28095–28130. [Google Scholar] [CrossRef]
  21. Dhar, N.K.; Dat, R.; Sood, A.K. Advances in infrared detector array technology. Optoelectron.-Adv. Mater. Devices 2013, 7, 149–188. [Google Scholar] [CrossRef]
  22. Martyniuk, P.; Antoszewski, J.; Martyniuk, M.; Faraone, L.; Rogalski, A. New concepts in infrared photodetector designs. Appl. Phys. Rev. 2014, 1, 041102. [Google Scholar] [CrossRef]
  23. Khodayar, F.; Sojasi, S.; Maldague, X. Infrared thermography and NDT: 2050 horizon. Quant. InfraRed Thermogr. J. 2016, 13, 210–231. [Google Scholar] [CrossRef]
  24. Fraiwan, L.; AlKhodari, M.; Ninan, J.; Mustafa, B.; Saleh, A.; Ghazal, M. Diabetic foot ulcer mobile detection system using smart phone thermal camera: A feasibility study. Biomed. Eng. Online 2017, 16, 117. [Google Scholar] [CrossRef] [PubMed]
  25. Fraiwan, L.; Ninan, J.; Al-Khodari, M. Mobile application for ulcer detection. Open Biomed. Eng. J. 2018, 12, 16. [Google Scholar] [CrossRef]
  26. Van Doremalen, R.; Van Netten, J.; Van Baal, J.; Vollenbroek-Hutten, M.; Van der Heijden, F. Validation of low-cost smartphone-based thermal camera for diabetic foot assessment. Diabetes Res. Clin. Pract. 2019, 149, 132–139. [Google Scholar] [CrossRef]
  27. Niri, R.; Lucas, Y.; Treuillet, S.; Douzi, H. Smartphone-Based Thermal Imaging System for Diabetic Foot Ulcer Assessment. 2019. Available online: https://hal.science/hal-02161044 (accessed on 25 May 2024).
  28. Villa, E.; Arteaga-Marrero, N.; Ruiz-Alzola, J. Performance Assessment of Low-Cost Thermal Cameras for Medical Applications. Sensors 2020, 20, 1321. [Google Scholar] [CrossRef]
  29. Kirimtat, A.; Krejcar, O.; Selamat, A.; Herrera-Viedma, E. FLIR vs SEEK thermal cameras in biomedicine: Comparative diagnosis through infrared thermography. BMC Bioinform. 2020, 21, 88. [Google Scholar] [CrossRef] [PubMed]
  30. Arteaga-Marrero, N.; Bodson, L.C.; Hernández, A.; Villa, E.; Ruiz-Alzola, J. Morphological Foot Model for Temperature Pattern Analysis Proposed for Diabetic Foot Disorders. Appl. Sci. 2021, 11, 7369. [Google Scholar] [CrossRef]
  31. Gulshan; Arora, A.S. Automated prediction of diabetes mellitus using infrared thermal foot images: Recurrent neural network approach. Biomed. Phys. Eng. Express 2024, 10, 025025. [Google Scholar] [CrossRef]
  32. Xue, E.Y.; Chandler, L.K.; Viviano, S.L.; Keith, J.D. Use of FLIR ONE smartphone thermography in burn wound assessment. Ann. Plast. Surg. 2018, 80, S236–S238. [Google Scholar] [CrossRef]
  33. Li, F.; Wang, M.; Wang, T.; Wang, X.; Ma, X.; He, H.; Ma, G.; Zhao, D.; Yue, Q.; Wang, P.; et al. Smartphone-based infrared thermography to assess progress in thoracic surgical incision healing: A preliminary study. Int. Wound J. 2023, 20, 2000–2009. [Google Scholar] [CrossRef]
  34. Simões, M.F.; Sousa, J.S.; Pais, A.C. Skin cancer and new treatment perspectives: A review. Cancer Lett. 2015, 357, 8–42. [Google Scholar] [CrossRef]
  35. Arens, E.A.; Zhang, H. The Skin’s Role in Human Thermoregulation and Comfort. 2006. Available online: https://escholarship.org/uc/item/3f4599hx (accessed on 25 May 2024).
  36. Nishida, N.; Yano, H.; Nishida, T.; Kamura, T.; Kojiro, M. Angiogenesis in cancer. Vasc. Health Risk Manag. 2006, 2, 213. [Google Scholar] [CrossRef]
  37. Bonmarin, M.; Le Gal, F.A. Thermal Imaging in Dermatology. In Imaging in Dermatology; Academic Press: Boston, MA, USA, 2016; pp. 437–454. [Google Scholar] [CrossRef]
  38. Gurjarpadhye, A.A.; Parekh, M.B.; Dubnika, A.; Rajadas, J.; Inayathullah, M. Infrared imaging tools for diagnostic applications in dermatology. SM J. Clin. Med Imaging 2015, 1, 1. [Google Scholar]
  39. AlZubaidi, A.; Ethawi, Y.; Schmölzer, G.; Sherif, S.; Narvey, M.; Seshia, M. Review of Biomedical Applications of Contactless Imaging of Neonates Using Infrared Thermography and Beyond. Methods Protoc. 2018, 1, 39. [Google Scholar] [CrossRef] [PubMed]
  40. Diakides, M.; Bronzino, J.D.; Peterson, D.R. Medical Infrared Imaging: Principles and Practices; CRC Press: Boca Raton, FL, USA, 2013. [Google Scholar]
  41. Bonmarin, M.; Le Gal, F.A. Lock-in thermal imaging for the early-stage detection of cutaneous melanoma: A feasibility study. Comput. Biol. Med. 2014, 47, 36–43. [Google Scholar] [CrossRef] [PubMed]
  42. QmagiQ, LLC. Datasheet QmagiQ Falcon, 2024. Available online: http://www.qmagiq.com/falcon256.html (accessed on 25 May 2024).
  43. Xenics. Datasheet Xenics Gobi-640, 2024. Available online: https://www.exosens.com/products/gobi (accessed on 25 May 2024).
  44. Nax-Instruments. Datasheet Opgal Therm-App, 2024. Available online: https://www.naxsg.com/product/therm-app-th/#tab-downloads (accessed on 25 May 2024).
  45. Seek-Thermal-Inc. Datasheet Seek Thermal CompactPRO, 2024. Available online: https://www.thermal.com/uploads/1/0/1/3/101388544/compactpro-sellsheet-usav1.pdf (accessed on 25 May 2024).
  46. Godoy, S.E. Communication-Theoretic Approach for Skin Cancer Detection Using Dynamic Thermal Imaging; The University of New Mexico: Albuquerque, NM, USA, 2015. [Google Scholar]
  47. Díaz, S.; Krohmer, T.; Moreira, Á.; Godoy, S.E.; Figueroa, M. An Instrument for Accurate and Non-Invasive Screening of Skin Cancer Based on Multimodal Imaging. IEEE Access 2019, 7, 176646–176657. [Google Scholar] [CrossRef]
  48. Jara, A.; Torres, S.N.; Machuca, G.; Coelho, P.; Viafora, L.A. Three-dimensional point spread function estimation method for mid-wave infrared microscope imaging. Appl. Opt. 2022, 61, 8467–8474. [Google Scholar] [CrossRef]
  49. Feng, T.; Jin, W.; Si, J. Spatial-noise subdivision evaluation model of uncooled infrared detector. Infrared Phys. Technol. 2021, 119, 103954. [Google Scholar] [CrossRef]
  50. Schober, P.; Boer, C.; Schwarte, L.A. Correlation coefficients: Appropriate use and interpretation. Anesth. Analg. 2018, 126, 1763–1768. [Google Scholar] [CrossRef]
  51. Yélamos, O.; Braun, R.P.; Liopyris, K.; Wolner, Z.J.; Kerl, K.; Gerami, P.; Marghoob, A.A. Usefulness of dermoscopy to improve the clinical and histopathologic diagnosis of skin cancers. J. Am. Acad. Dermatol. 2019, 80, 365–377. [Google Scholar] [CrossRef]
Figure 1. Example of a plastic marker used to select the region of interest. The region of interest is indicated in red, and the suspicious lesion is highlighted in blue.
Figure 2. Detection scheme used to evaluate the feasibility of using different IR cameras in detecting skin cancer using active thermography.
Figure 3. Schematic of the proposed degradation model, which addresses three areas (temporal, spatial, and thermal resolution), giving rise to a process composed of five stages.
Figure 4. Samples of the measurement behavior of different IR cameras viewing a blackbody stabilized at 40 °C in a room whose ambient temperature was held at 20 °C by an AC unit: (a) Xenics Gobi-640, (b) Opgal Therm-App, and (c) Seek Thermal CompactPRO.
Figure 5. Sample of the results of the jump correction produced by the NUC in the Xenics Gobi-640 camera. (a) Uncorrected measurements; (b) corrected measurements.
Figure 6. Sample of the results of the jump correction produced by the NUC in the Seek Thermal CompactPRO camera. (a) Uncorrected measurements; (b) corrected measurements.
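Figures 5 and 6 illustrate the removal of the step-like jumps introduced by the cameras' periodic NUC events. As a purely illustrative sketch (not necessarily the correction procedure applied in this work), such jumps can be removed from a TRC by detecting abrupt frame-to-frame offsets and subtracting their cumulative sum; the threshold value below is an assumption.

```python
import numpy as np

def remove_nuc_jumps(trc, jump_threshold=0.15):
    """Illustrative step-discontinuity removal for a thermoregulation curve (TRC).

    trc            : 1-D array of temperatures over time for one pixel or region
    jump_threshold : minimum frame-to-frame change [°C] treated as a NUC jump
                     (assumed value; genuine fast thermal transients must stay below it)
    """
    diffs = np.diff(trc)
    # Treat differences larger than the threshold as NUC-induced offsets and
    # accumulate them so that every later sample is shifted back accordingly.
    jumps = np.where(np.abs(diffs) > jump_threshold, diffs, 0.0)
    correction = np.concatenate(([0.0], np.cumsum(jumps)))
    return trc - correction
```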
Figure 7. Sample of the degradation performed on a high-quality video to mimic Opgal Therm-App camera features. (a) Image captured with the Xenics Gobi-640 camera, (b) image captured at the same instant of time and same area with the Opgal Therm-App camera, (c) Xenics image adapted to Opgal camera features, (d,e) correspond to representative TRCs of the video acquired with the Xenics and Opgal cameras, respectively. (f) TRC from the adaptation.
Figure 8. Sample of the degradation performed on a high-quality video to mimic Seek Thermal CompactPRO camera features. (a) Image captured with the Xenics Gobi-640 camera, (b) image captured at the same instant of time and same area with the Seek Thermal CompactPRO camera, (c) Xenics image adapted to Seek camera features, (d,e) correspond to representative TRCs of the video acquired with the Xenics and Seek cameras, respectively. (f) TRC from the adaptation.
Table 1. Technical features of the involved cameras.
Characteristic             | QmagiQ                        | Xenics Gobi-640             | Opgal Therm-App                                  | Seek Thermal Compact PRO
Focal distance [mm]        | 50                            | 25                          | 6.8                                              | 12.5
Spectral range [μm]        | 8–14                          | 8–14                        | 7.5–14                                           | 7.5–14
FOV [°]                    | 5.5 × 4.4                     | 25.84 × 19.38               | 55.56 × 41.67                                    | 33.40 × 24.81
Temporal resolution [fps]  | 60                            | 50                          | 8.7                                              | 15
FPA size [pixels]          | 320 × 256                     | 640 × 480                   | 384 × 288                                        | 320 × 240
NETD [mK]                  | 20                            | 50                          | 70                                               | 70
Manufacturer               | QmagiQ, LLC (Nashua, NH, USA) | Xenics nv (Leuven, Belgium) | Opgal Optronic Industries Ltd. (Karmiel, Israel) | Seek Thermal Inc. (Santa Barbara, CA, USA)
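To make the link between Table 1 and the degradation model explicit, the camera specifications can be collected in a small configuration structure from which the temporal (fps), spatial (FPA size), and thermal (NETD) degradation parameters are drawn. This is merely an illustrative data layout under our own naming, not code from the study.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class CameraSpec:
    """Subset of Table 1 relevant to the three degradation areas (illustrative names)."""
    fps: float             # temporal resolution [frames per second]
    fpa: tuple[int, int]   # focal plane array size (width, height) [pixels]
    netd_mk: float         # noise equivalent temperature difference [mK]

CAMERAS = {
    "QmagiQ":                  CameraSpec(fps=60,  fpa=(320, 256), netd_mk=20),
    "Xenics Gobi-640":         CameraSpec(fps=50,  fpa=(640, 480), netd_mk=50),
    "Opgal Therm-App":         CameraSpec(fps=8.7, fpa=(384, 288), netd_mk=70),
    "Seek Thermal CompactPRO": CameraSpec(fps=15,  fpa=(320, 240), netd_mk=70),
}
```

A simulated acquisition with, for example, the Seek camera would then take its frame rate, FPA size, and NETD from `CAMERAS["Seek Thermal CompactPRO"]`.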
Table 2. Performance of the skin cancer detection algorithm with a random forest classifier, processing data captured with a high-quality QmagiQ camera and adapted to features of the Xenics, Opgal, and Seek cameras.
Index          | QmagiQ (Original)             | Xenics                        | Opgal                         | Seek
               | Min.   Max.    AVG ± SD       | Min.   Max.    AVG ± SD       | Min.   Max.    AVG ± SD       | Min.   Max.    AVG ± SD
Accuracy (%)   | 62.07  100.00  87.29 ± 6.60   | 51.72  100.00  84.33 ± 7.11   | 55.17  100.00  84.20 ± 7.09   | 51.72  100.00  82.13 ± 7.43
TPR (%)        | 33.33  100.00  87.26 ± 11.49  | 28.57  100.00  83.03 ± 12.62  | 27.27  100.00  83.23 ± 12.58  | 28.57  100.00  79.77 ± 13.79
TPR MM (%)     | 0.00   100.00  76.42 ± 36.29  | 0.00   100.00  66.91 ± 40.11  | 0.00   100.00  71.68 ± 38.14  | 0.00   100.00  74.98 ± 37.30
TPR BCC (%)    | 20.00  100.00  87.23 ± 13.60  | 20.00  100.00  83.18 ± 14.96  | 0.00   100.00  83.60 ± 14.63  | 16.67  100.00  78.68 ± 16.40
TPR SCC (%)    | 0.00   100.00  95.94 ± 15.81  | 0.00   100.00  94.74 ± 18.37  | 0.00   100.00  90.92 ± 24.09  | 0.00   100.00  87.66 ± 26.62
TNR (%)        | 47.62  100.00  87.39 ± 8.98   | 47.62  100.00  85.28 ± 9.50   | 44.44  100.00  84.94 ± 9.75   | 36.84  100.00  83.74 ± 10.23
PPV (%)        | 28.57  100.00  82.45 ± 11.94  | 28.57  100.00  79.11 ± 12.90  | 25.00  100.00  78.91 ± 12.74  | 26.67  100.00  76.96 ± 13.25
Table 3. Performances of classical skin cancer detection methods and automatic passive and active thermography methods.
Methodology                                | Detection Problem    | TPR (%) | TNR (%)
Naked eye evaluation [51]                  | MM vs. benign        | 71.00   | 81.00
Dermoscopy evaluation [51]                 | MM vs. benign        | 90.00   | 90.00
Passive thermography + deep learning [16]  | MM vs. benign        | 94.12   | 98.41
Passive thermography + deep learning [16]  | Malignant vs. benign | 62.81   | 57.58
Active thermography − QmagiQ camera        | Malignant vs. benign | 87.26   | 87.39
Active thermography − Xenics camera        | Malignant vs. benign | 83.03   | 85.28
Active thermography − Opgal camera         | Malignant vs. benign | 83.23   | 84.94
Active thermography − Seek camera          | Malignant vs. benign | 79.77   | 83.74
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
