Article

Color Standardization of Chemical Solution Images Using Template-Based Histogram Matching in Deep Learning Regression

by
Patrycja Kwiek
and
Małgorzata Jakubowska
*
Faculty of Materials Science and Ceramics, AGH University of Krakow, al. Mickiewicza 30, 30-059 Kraków, Poland
*
Author to whom correspondence should be addressed.
Algorithms 2024, 17(8), 335; https://doi.org/10.3390/a17080335
Submission received: 11 July 2024 / Revised: 29 July 2024 / Accepted: 30 July 2024 / Published: 1 August 2024
(This article belongs to the Special Issue Machine Learning Models and Algorithms for Image Processing)

Abstract:
Color distortion in images presents a challenge for machine learning classification and regression when the input data consists of pictures. As a result, a new algorithm for the color standardization of photos is proposed, forming the foundation for a deep neural network regression model. This approach utilizes a self-designed color template that was developed based on an initial series of studies and digital imaging. Using the equalized histograms of the R, G, B channels of the digital template and its photo, a color mapping strategy was computed. By applying this approach, the histograms were adjusted and the colors of photos taken with a smartphone were standardized. The proposed algorithm was developed for a series of images where the entire surface roughly maintained a uniform color and the differences in color between the photographs of individual objects were minor. This optimized approach was validated in the colorimetric determination procedure of vitamin C. The dataset for the deep neural network in the regression variant was formed from photos of samples under two separate lighting conditions. For the vitamin C concentration range from 0 to 87.72 µg·mL−1, the RMSE for the test set ranged between 0.75 and 1.95 µg·mL−1, compared to the non-standardized variant, where this indicator was at the level of 1.48–2.29 µg·mL−1. The consistency of the predicted concentration results with actual data, expressed as R2, ranged between 0.9956 and 0.9999 for each of the standardized variants. This approach allows for the removal of light reflections on the shiny surfaces of solutions, which is a common problem in liquid samples. The proposed color-matching algorithm is universal in character, and its scope of application is not limited to the task presented here.

1. Introduction

The development of measurement methods and research strategies across various fields of science and technology is closely linked to the potential support offered by machine learning (ML) and deep learning (DL) techniques. If the information in the problem being solved is communicated through images, drawings, sketches, or photographs, the use of DL in data modeling is often the optimal approach. The application of neural networks in typical supervised learning requires access to large data sets, which makes the method of obtaining and labeling data a significant limitation. Today’s technology provides many electronic devices, such as cameras or scanners, which can be used to obtain data for constructing models quickly and without special qualifications. Information can often be encoded simply in the form of objects, the relationship between objects in the analyzed area, textures, the quantity of small components, and colors.
Several color spaces are widely used across different industries for various applications [1]. RGB space, based on primary colors of light (red, green, and blue), is extensively used in digital displays and imaging. CMYK (cyan, magenta, yellow, and black) is used in color printing, relying on the subtractive color model to produce a wide range of colors on paper. HSI/HSV (hue, saturation, intensity/value) is often used in image analysis and computer vision due to its alignment with human perception of colors. YUV and YCbCr are used in video compression and broadcasting, separating the image into luminance and chrominance components to optimize data storage and transmission. The CIE Lab color space, developed by the International Commission on Illumination, provides a perceptually uniform color representation, making it ideal for color differentiation and transformation tasks. RAL CLASSIC is a widely used color matching system in Europe, consisting of 213 standardized colors, while RAL DESIGN offers a more extensive palette of 1625 colors for designers and architects. RAL DIGITAL facilitates precise color communication in digital applications through RGB and CMYK values. The PANTONE Matching System (PMS) is globally recognized in graphic design, printing, and fashion, providing standardized color codes to ensure consistency across different media and materials. Each of these color spaces has its own advantages and applications, catering to the specific needs of various industries.
If color is the main, decisive criterion in image analysis that codes the information we are looking for, it is crucial to take photos in such a way as to maintain the appropriate quality of this parameter. The true color of the image allows for the correct interpretation of various features of the photographed objects. Undesirable variability may arise from the use of multiple devices to capture images or from the method of acquiring images with a single device under uncontrolled or changing conditions. Significant factors include changes in lighting conditions, reflections on shiny surfaces, and variations such as the time of day. These elements may not directly impact the photo’s quality but can influence its informational value. Therefore, the importance of color standardization as a crucial step in data preprocessing should be considered. Various computational approaches for preserving or restoring the true color of images within a dataset for deep learning are described in numerous works, offering solutions to this problem.
This study presents an original algorithm for standardizing the colors of photos captured with a single smartphone camera under various lighting conditions. It also considers variability resulting from incidental factors associated with the photography process and the inherent lack of perfect repeatability of the camera. The research was focused on a homogeneous liquid that showed minimal color variation among the tested objects. The entirety of the photo used in the modeling process shared the same color, although with a certain level of random variability that could not be entirely eliminated. In this context, even slight changes in color held significant information value, emphasizing the importance of accurate color interpretation for enhancing the quality of the deep learning regression model and minimizing prediction errors.
A standardization algorithm was proposed that matches the histogram of a specially designed template, photographed under controlled conditions together with the tested object, to the histogram of the corresponding digital template prepared in a graphics application. This study also presents the verification of the proposed approach for the colorimetric determination of vitamin C using a regression model defined by deep neural networks (DNNs).

2. Related Works

2.1. Color Standardization

There are numerous categories of color calibration problems, with two fundamental groups pertaining to either the use of different devices within a series of photos or variations in the lighting conditions affecting the photographed subjects. Hardware color compensation can be achieved by employing camera sensors that automatically adjust settings based on the lighting of the scene being captured. The alternative approach involves numerical processing of photos using various strategies that ultimately aim to achieve consistent color tones across multiple images. To accomplish this, calibration charts are initially employed, which correlate and map the individual pixels of captured photos with the standardized values of the corresponding color patches on the chart [2,3]. A standard color calibration chart serves as a reference tool in photography, film creation, video recording, or graphic design, ensuring accurate and consistent color reproduction in images. Typically, it comprises a grid of colored patches with known and standardized colors. These charts are applied in calibrating equipment and adjusting during post-processing, thereby ensuring consistency in color representation across various devices and conditions. Commercially, several calibration charts are available, ranging from those with six to 100 color patches. However, the 24-color patch calibration chart, often referred to by its original name as the Macbeth ColorChecker, has found widespread use in numerous applications [4]. Additionally, the development of customized calibration charts plays an important role in standardizing image colors.
Color standardization plays an important role in many fields of science and technology, including medical diagnostics, bioanalysis, agriculture, analysis of evidence in forensic sciences, analytical chemistry, etc. In agricultural image processing, the focus lies in linking alterations in the color of images, in designated areas of interest, with particular quality attributes such as plant health, crop stress, and maturity level. In [5], a technique was proposed to calibrate digital images, ensuring their uniformity and improving phenological comparisons in agriculture. The color transformation of the images was achieved using a standard 24-color calibration chart. Calibration delivers typical and expected results by adjusting the RGB values of the color patches in the input image to standard RGB values. In [6], a digital shade-matching device was developed for determining tooth color, as accurate shade assessment is crucial in dental restoration procedures. Image recordings made with an intraoral camera served as the basis for the calculations and transformations. The support vector machine (SVM) algorithm and the CIEL*a*b* color space were used to evaluate tooth color. The color correction was based on the calculation of the Euclidean distance between the shades. The authors created a dedicated template, distinct from the typical one with a broad spectrum of colors, specifically adapted to the range of tooth color variations addressed in the study. The next proposition concerns improving the accuracy of color reproduction in digital cameras and smartphones. The first method uses white balance correction followed by a color space transformation. The second approach, applicable to images processed offline, is based on performing full color balance instead of white balance and requires the use of machine learning to predict the color balance matrix [7]. The review paper [8] discusses the use of various histopathological image recognition algorithms. 
The differences in color in histopathology result from the use of various equipment or tissue staining techniques. Typically, color normalization involves transferring the average color of the target image to the source image and separating the stain present in the source image. The article describes an extensive set of color standardization methods that have been verified using histopathological images, including those obtained in the diagnosis of breast and colon cancer.
Due to the development of digital imaging technology and consequently the emergence of many completely new applications, solutions based on smartphone images are being proposed in various fields. Table 1 contains a list of selected articles in which color images serve as important information carriers, and where color correction and standardization significantly impact modeling results and the quality of the proposed solutions.

2.2. Vitamin C Determination

Color standardization techniques are crucial in analytical chemistry, especially in the colorimetric determination of different substances. This is particularly evident in methods where solutions of tested compounds are photographed, and concentration information is derived from these photos using various statistical tools and multivariate analysis. An illustration of such a methodology is the colorimetric determination of vitamin C.
Ascorbic acid (AA), i.e., vitamin C, is a natural, highly water-soluble compound that plays a dominant role in human health and is essential for a variety of metabolic reactions and biological processes, with strong antioxidant activity limiting free-radical damage. Insufficient concentration of vitamin C in the body causes, among other conditions, scurvy and anemia, and is associated with some psychological abnormalities, for example, depression. Overconsumption of vitamin C can have negative consequences, such as impeding vitamin B12 absorption and potentially causing gastrointestinal problems or kidney stones [19,20]. As humans are unable to produce vitamin C internally, it is essential to supplement with a regulated amount of AA, typically from fresh fruits and vegetables [21,22].
Analytical chemistry provides many strategies to determine AA in food products and vitamin C supplements. Traditionally, instrumental analysis techniques have been utilized to measure ascorbic acid levels, offering a wide concentration range with a low detection limit. These methods include direct titration [23], ultraperformance liquid chromatography (UPLC), high-performance liquid chromatography (HPLC) [24,25], Fourier transform infrared spectrometry (FT-IR) [26], chemiluminescence [27], Raman spectroscopy [28], near-infrared (NIR), FT-Raman spectroscopy [29], and capillary electrophoresis [30]. Furthermore, a variety of electrochemical methods have been used to determine AA, offering extremely high sensitivity, selectivity, reproducibility, and reliability [31,32,33]. However, the mentioned methods of chemical instrumental analysis require specialized laboratory rooms, expensive instrumentation, many auxiliary reagents, time-consuming extraction steps, and complicated measurement procedures that require highly skilled staff for tasks such as preparing reagents and samples, conducting research, or interpreting results.
The development of novel methodologies that either substitute or support traditional laboratory strategies with cost-effective instruments fulfills the need for highly efficient and adaptable approaches in food, pharmaceutical, and environmental analysis. Popular and widely available smartphones offer a promising analytical platform characterized by full automation, large local memory capacity, access to external cloud storage, and especially high-quality cameras, making them suitable for wide use as portable measuring devices. Compared to conventional methods, smartphone-based measurements use digital imaging, offering many applications in various fields and significant advantages. These include low cost, simple operation, the possibility of installing various dedicated sensors, precise measurements in real time, a non-destructive nature of testing in many cases, fast analysis, and portability to other similar devices. In analytical chemistry, commonly available devices, such as cameras, webcams, scanners, and smartphones, are applied to register images from colorimetric reactions in which the intensity of the resulting color is directly correlated to the concentration of the analyte. The literature provides various descriptions of the use of digital imaging methods for determining AA. The work [34] presents a straightforward, rapid, and cost-effective technique employing smartphone image analysis to quantify AA in an aqueous solution. The study was based on the evaluation of the disturbing effect of AA in the enzymatic colorimetric detection of glucose. Another approach has been suggested, involving digital image colorimetry to determine the vitamin C content in natural fruit juices. This method involves measuring the color of the iron(II)—o-phenanthroline complex [35]. 
In the next article, the authors developed a new circular dichroism spectrometer based on smartphones for highly sensitive and ultraportable colorimetric analysis, offering the advantages of cost-effectiveness and simplicity. This spectrometer utilized an HSV color model and determined absorbance by summing the V value of adjacent positions to calculate the overall intensity. The TMB-MnO2 nanosheet reaction was also used and, in effect, a highly sensitive and specific system was proposed for the detection of vitamin C [36]. In [37] a point-of-care sensor was developed, consisting of a fluorescent paper chip, 3D printed parts, and a smartphone. This system enabled quantitative and highly sensitive real-time visual detection of AA. The design of this device involved placing a fluorescent sensor on a portable sensing platform connected to a smartphone.
The analytical measurement method used in this work was based on the reducing properties of vitamin C. AA reduces Fe(III) to Fe(II) and then forms a complex with o-phenanthroline. The strong red complex of Fe(II) and o-phenanthroline is widely used for colorimetric determination and as an indicator [38]. This reaction is characterized by high speed, stability, and reproducibility. Utilizing this camera-based approach coupled with digital imaging of colored compounds enhances accuracy and reduces the cost of measuring vitamin C content.

2.3. Multivariate Regression Algorithms

Regression techniques are extensively used as a supervised learning method, involving modeling the relationship between features and the target variable to address tasks that aim to predict continuous values. Typical algorithms used for this purpose are PCR (principal component regression) [39], PLSR (partial least squares regression) [40,41], and SVR (support vector regression) [42]. DNNs, with various architectures that map the nonlinear structure of multivariate data, can also be applied to regression problems [43,44]. Convolutional neural networks (CNNs) are a type of deep learning algorithm whose operation is based on a mathematical operation called convolution. They consist of multiple layers of convolutional filters that automatically learn hierarchical representations of features directly from raw pixel data, making them highly effective for tasks such as image classification or segmentation and object detection. Typically, the quality of regression models is evaluated using the mean squared error (MSE) between real values and predictions, or the mean absolute error (MAE).
MSE and MAE are different error metrics used to evaluate the performance of predictive models. MSE is more sensitive to large errors because they are squared, while MAE provides a more intuitive understanding of the average error and is expressed in the unit of the target variable. When used as loss functions during neural network training, these scores minimize the mean difference between the predicted and the actual values. The training and prediction performance of the calibration model is also evaluated based on the following scores: the root mean square error of calibration (RMSEC) and prediction (RMSEP), and the coefficient of determination R2 for calibration and prediction [45]. Since the prediction result is expected to match the actual value, the points with coordinates (actual, predicted) should ideally lie on the line y = x. However, due to random variability and imperfect model fit to the data, two conditions should be met: the value of one should fall within the confidence interval of the slope, and the value of zero should fall within the confidence interval of the intercept [46].
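As a concrete illustration of the scores described above, RMSE and R2 can be computed in a few lines of NumPy. The sketch below uses hypothetical toy data and is not the paper's evaluation code:

```python
import numpy as np

def rmse(actual, predicted):
    """Root mean square error, expressed in the units of the target variable."""
    actual, predicted = np.asarray(actual, float), np.asarray(predicted, float)
    return float(np.sqrt(np.mean((actual - predicted) ** 2)))

def r2(actual, predicted):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    actual, predicted = np.asarray(actual, float), np.asarray(predicted, float)
    ss_res = np.sum((actual - predicted) ** 2)
    ss_tot = np.sum((actual - actual.mean()) ** 2)
    return float(1.0 - ss_res / ss_tot)

# Toy example: predictions close to the actual concentrations (µg/mL)
actual = np.array([0.0, 10.0, 20.0, 30.0, 40.0])
predicted = np.array([0.5, 9.5, 20.5, 29.0, 40.5])
print(rmse(actual, predicted))  # ~0.632
print(r2(actual, predicted))    # 0.998
```

An R2 close to 1 together with a slope near one and an intercept near zero for the (actual, predicted) regression line indicates a well-calibrated model.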

3. Proposed Method

3.1. Color Matching Algorithm

In this work, the main objective was to develop an approach for determining vitamin C based on the color change of the solution influenced by its concentration. Therefore, a crucial step in achieving a high-quality regression model was performing the color standardization procedure on the photographed vitamin C samples. The proposed color standardization algorithm was divided into two parts. In the first step, standard histogram matching was performed between a digital color template and a photograph of a printed template obtained under a given lighting condition. In the second step, histogram matching was performed between a photo of the solution (tested sample) taken under the same illumination as the pattern and the reference template. In the end, 512 smaller random squares were cut from the reconstructed image to obtain a sufficiently large dataset for the subsequent training of the regression neural network. The number of randomly selected small squares was optimized, as preliminary calculations showed that insufficient data resulted in lower model quality. It was also necessary to take an additional step of adapting the self-designed reference template to the appropriate value range. This operation resulted in the development of an adapted histogram-matching algorithm.
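The final cropping step can be sketched as follows. This is a minimal NumPy illustration assuming 50 × 50 px squares (the network input size reported in the architecture section); the `random_crops` helper is hypothetical, not the authors' code:

```python
import numpy as np

def random_crops(image, n_crops=512, size=50, rng=None):
    """Cut n_crops random size x size squares from an H x W x C image."""
    rng = np.random.default_rng(rng)
    h, w = image.shape[:2]
    crops = np.empty((n_crops, size, size, image.shape[2]), dtype=image.dtype)
    for i in range(n_crops):
        y = rng.integers(0, h - size + 1)  # random top-left corner
        x = rng.integers(0, w - size + 1)
        crops[i] = image[y:y + size, x:x + size]
    return crops

# Hypothetical standardized photo of a solution
photo = np.random.default_rng(0).integers(0, 256, (400, 600, 3), dtype=np.uint8)
dataset = random_crops(photo, n_crops=512, size=50, rng=0)
print(dataset.shape)  # (512, 50, 50, 3)
```

Because the solution color is roughly uniform across the photo, each random square carries essentially the same color information, so this step multiplies the dataset size without distorting the target variable.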
Histogram matching is an operation in which the pixels of an input image are manipulated in a way that allows its histogram to match the histogram of the reference image. In this work, the RGB color space was chosen for the calculations. It is widely available and popular, as most cameras and image capture devices record data directly in RGB format, eliminating the need for additional color space transformations. The RGB space is also simple to implement, being intuitive and directly related to the physical properties of light, which simplifies algorithm development. Additionally, in our specific application, RGB has proven to give satisfactory results. Therefore, despite the advantages of other color spaces, RGB was the most suitable for practical and technical reasons. In the case of color images, each channel (RGB) was considered independently. In the context of image processing, this was a normalization task, associated with, for example, changing illumination (Figure 1).
In mathematical terms, histogram matching can be represented as an image transformation such that the cumulative distribution function (CDF) of the values in each band corresponds to the CDF of bands in another image.
To match the histograms of two images, it is necessary to equalize their histograms. After that, each pixel of the first image is modified and mapped to the second image. This operation was performed for each of the channels (R, G, B) of a photographed template under a given lighting condition (natural light and artificial light) and for the channels of a digital template. The whole process of picture color transformation is represented in Figure 2.
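A minimal per-channel implementation of this CDF-based matching, written in plain NumPy as a sketch (the study's own pipeline may differ in details such as value rounding), could look like:

```python
import numpy as np

def match_channel(source, reference):
    """Map source pixel values so that their CDF matches the reference CDF."""
    src_vals, src_idx, src_counts = np.unique(source.ravel(),
                                              return_inverse=True,
                                              return_counts=True)
    ref_vals, ref_counts = np.unique(reference.ravel(), return_counts=True)
    src_cdf = np.cumsum(src_counts) / source.size
    ref_cdf = np.cumsum(ref_counts) / reference.size
    # For each source intensity, pick the reference intensity at the same CDF level
    mapped = np.interp(src_cdf, ref_cdf, ref_vals)
    out = np.rint(mapped[src_idx]).reshape(source.shape)
    return out.astype(reference.dtype)

def match_histograms_rgb(source, reference):
    """Apply histogram matching independently to the R, G, and B channels."""
    return np.stack([match_channel(source[..., c], reference[..., c])
                     for c in range(3)], axis=-1)

rng = np.random.default_rng(1)
photo = rng.integers(0, 256, (64, 64, 3), dtype=np.uint8)
digital_template = rng.integers(0, 256, (64, 64, 3), dtype=np.uint8)
matched = match_histograms_rgb(photo, digital_template)
print(matched.shape, matched.dtype)  # (64, 64, 3) uint8
```

The same operation is also available as `skimage.exposure.match_histograms` in scikit-image, one of the libraries listed in the software section.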
In order to standardize the images of the tested chemical solutions, the correct range of RGB values was selected from the histograms of the photo of the tested sample. If the histogram value for a sample was greater than or equal to 1% of the maximum value of the chemical solution histogram, the histogram value was not modified, otherwise it was set to 0. The discrepancy in the number of pixels was neutralized by their uniform random distribution across the range of values. The final step in the standardization algorithm was to match the histograms of an image of a solution with the adapted template histograms. This was done in the same way as described in Figure 3, followed by the reconstruction of an image from the R, G, and B channels.
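The 1% thresholding and uniform redistribution of the removed pixel counts might be sketched as follows; the `adapt_histogram` helper and its toy histogram are illustrative assumptions, not the published code:

```python
import numpy as np

def adapt_histogram(hist, threshold_frac=0.01, rng=None):
    """Zero histogram bins below threshold_frac of the maximum, then
    redistribute the removed pixel counts uniformly over the kept bins."""
    rng = np.random.default_rng(rng)
    hist = np.asarray(hist, dtype=np.int64).copy()
    keep = hist >= threshold_frac * hist.max()
    removed = int(hist[~keep].sum())
    hist[~keep] = 0
    kept_idx = np.flatnonzero(keep)
    if removed and kept_idx.size:
        # Spread the removed pixels uniformly at random over the surviving bins
        probs = np.ones(kept_idx.size) / kept_idx.size
        hist[kept_idx] += rng.multinomial(removed, probs)
    return hist

# Toy channel histogram: bins with fewer than 1% of the peak count are noise
solution_hist = np.array([1000, 5, 800, 2, 0, 600])
adapted = adapt_histogram(solution_hist, rng=0)
print(adapted.sum())  # 2407 - the total pixel count is preserved
```

Preserving the total pixel count matters because the subsequent histogram matching works on cumulative distributions, which assume all pixels are accounted for.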

3.2. Software

All calculations were performed in Python 3.10.0. The following libraries and packages were used: opencv 4.8.1, scikit-image 0.22.0, numpy 1.26.1, matplotlib 3.5.3, scikit-learn 1.0.2, tensorflow 2.9.1, and keras 2.10.0.

4. Results and Discussion

4.1. Reagents, Laboratory Equipment and Measurement Procedure

In this work, the colorimetric reaction used to determine ascorbic acid (AA) was based on the reduction of Fe(III) to Fe(II) by AA, which forms a red color complex with orthophenanthroline. In this experiment, L(+)-ascorbic acid pure p.a., CAS: 50-81-7 (POCH, Gliwice, Poland), 1,10-phenanthroline p.a., CAS: 66-71-7 (POCH, Poland), and ammonium iron(III) sulfate dodecahydrate pure p.a., CAS: 7783-83-7 (POCH, Poland) were used. The 1 M acetate buffer was prepared in our laboratory by mixing 1.0 mol·L−1 sodium acetate (CH3COONa) with 1.0 mol·L−1 acetic acid (CH3COOH) (both reagents purchased from Avantor Performance Materials, Gliwice, Poland) and adjusting to the desired pH using 10 N HCl. All solutions, including AA at a concentration of 2.5 gL−1, phenanthroline at a concentration of 4.0 gL−1, ammonium iron(III) sulfate at a concentration of 2.5 gL−1, and 1 M acetate buffer (pH 4.6), were prepared with double-distilled water. Small laboratory equipment was used in the experiments, including a precise analytical balance (RADWAG, model AS 60/220.XS, Radom, Poland), a magnetic stirrer (WIGO, Toszek, Poland), automatic pipettes of various volumes (Eppendorf, Germany), and glassware, i.e., beakers and Petri dishes. To adjust the pH values of the buffer solution, we used the SevenCompact S210 laboratory pH meter (Mettler Toledo, Greifensee, Switzerland). All experiments were performed at room temperature.
Solutions with different concentrations of vitamin C for direct color assessment were prepared by first adding 9 mL of distilled water to the beakers, then 1 mL of acetate buffer, then the appropriate volume of ascorbic acid, and 0.5 mL of ammonium iron(III) sulfate. After waiting for 3 min, 0.5 mL of phenanthroline was added. The additions of AA were 0, 20, 50, 80, 100, 150, 200, 300, and 400 µL. Figure 3 shows the photo of the samples prepared for further testing. The parameters studied for optimization were reagent concentration, volume, and the time between the individual stages (the influence of chemical kinetics). After waiting about 20 min, 10 mL of each colored solution was poured into Petri dishes with flat bottoms and photos were taken with a smartphone camera.

4.2. Preparation of a Color Template and Picture Acquisition

The experiment was divided into several steps and the procedure is shown in Figure 4.
The first step was the preparation of the template for color standardization. To determine which colors should be used as a reference, a series of vitamin C solutions was prepared covering the concentration range considered in this study. This was done to estimate the range of RGB values significant for these calculations. Pictures of the tested solutions were taken, and using dedicated homemade software, 12 configurations of RGB values (colors) were chosen to be used on the template. The averaged RGB values formed the basis for determining the color range in the template dedicated to the problem being solved. To slightly expand the color palette, additional upper and lower values were estimated, and the template was created to contain 12 colors. The template, with each color represented by one square, was printed in several copies with one printer. The size of each square was 180 × 180 pixels, and the total size of the template was 600 × 790 pixels.
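A template of this kind can be generated programmatically. The sketch below lays the 12 squares out on a 3 × 4 grid; the RGB values and the 15 px margin are illustrative assumptions (the paper's template measured 600 × 790 px in total, so the exact layout may differ slightly):

```python
import numpy as np

# Hypothetical 12 reference colors; the paper derives its palette from
# averaged RGB values of photographed vitamin C solutions
COLORS = [
    (250, 180, 160), (245, 150, 130), (240, 120, 105), (230, 95, 85),
    (215, 75, 70), (200, 60, 60), (185, 50, 55), (170, 40, 50),
    (150, 35, 45), (130, 30, 40), (110, 25, 35), (90, 20, 30),
]

def build_template(colors, square=180, cols=3, margin=15,
                   background=(0, 0, 0)):
    """Lay out one square x square px patch per color on a grid."""
    rows = -(-len(colors) // cols)  # ceiling division
    h = rows * square + (rows + 1) * margin
    w = cols * square + (cols + 1) * margin
    img = np.full((h, w, 3), background, dtype=np.uint8)
    for i, color in enumerate(colors):
        r, c = divmod(i, cols)
        y = margin + r * (square + margin)
        x = margin + c * (square + margin)
        img[y:y + square, x:x + square] = color
    return img

template = build_template(COLORS)
print(template.shape)  # (795, 600, 3) with these assumed margins
```

The resulting array can be saved to a file (for example, with opencv's `cv2.imwrite`) and printed to produce the physical reference chart.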
The process of taking the pictures involved placing 10 mL of the solution in a Petri dish on a white background (highlighting the contrast between the background and the Petri dish with solution), next to the template (which served as a color control set) on a black surface. There were two conditions in which the pictures were taken, natural lighting and the light from the torch of a smartphone. In the second variant, a box completely isolating the photographed objects from the surroundings was placed onto the experimental setup to cut off access to the natural light. Differences in the colors of solutions and the photographed color charts were observed under these two illuminations (Figure 5). The process was repeated 11 times for each sample with a different concentration of vitamin C. All of the pictures were taken with a Xiaomi smartphone (Xiaomi Redmi Note 9, Xiaomi, Beijing, China), which has a 48 MP camera with an f/1.79 aperture. The resolution of the pictures was 3984 × 1840 px.
To histogram-match the templates used in standardization, additional pictures were taken of the template under both conditions. Templates were cut from the images, and histogram matching was performed between the resulting templates and the original one. The process with the results can be seen in Figure 6. Each template is presented as an RGB image, along with its histograms for each of the R, G, and B channels. This figure clearly illustrates the operation of our algorithm. The first column shows the ideal histogram of the digital color template designed for this task. The color histograms of the template photos are expected to match the ideal characteristics. However, under real conditions, there are significant differences (second and fourth columns) between the histograms of the photos and those of the digital version. However, after applying the algorithm proposed in this work, the expected shape of the color histogram for all three components, R, G, and B, can be reproduced. The graphs in the third and fifth columns match those in the first column.
The cumulative histograms of the matched images are the same as the cumulative histogram of the digital template (which serves as the reference) for each channel. Each spike on a given histogram corresponds to one component of a certain color on the template. The less discrete appearance of the histograms of the original template images is caused by variance from printing and different variables that affect how the camera perceives the object with certain changes (e.g., shadows and highlights).
The dataset for the calculations was prepared as follows: 198 pictures were taken for nine concentrations of vitamin C under two different lighting conditions. This means that for a given solution, for each of the conditions, 11 pictures were taken. Each of the images underwent a process of standardization, as described in Section 3.1. An exemplary effect is illustrated in Figure 7. In Figure 8, the effect of applying the algorithm to various images of the vitamin C solution with a concentration of 33.63 µg/mL is presented. This resulted in the generation of datasets for two lighting conditions. The combination of both sets resulted in the preparation of a mixed condition set, which consisted of:
  • training set: 60,416 images
  • validation set: 20,480 images
  • test set: 20,480 images.
The prepared image datasets served as input data for training a regression neural network, which was used to obtain a model for determining AA concentration based on the photos taken.

4.3. Network Architecture

A multivariate regression model was defined using deep neural networks with an unconventional architecture specifically dedicated to this task. The network design considered essential information such as color-based modeling, the similarity of colors corresponding to successive values of the target variable, and the approximately uniform color of each photo.
The neural network’s input layer accepted images with dimensions of 50 × 50 and three color channels (RGB), randomly selected from each photo. Subsequently, Conv2D convolutional layers with small 1 × 1 filters and L2 regularization were applied, each followed by the GELU (Gaussian error linear unit) activation function. A normalization layer was also used, operating along the feature axis for each sample to stabilize and improve the learning process. The architecture also included pooling layers that reduced the spatial size of the feature map, including AveragePooling2D and GlobalMaxPooling2D, the latter of which reduced the spatial dimension to one value per feature map. The combination of global pooling and flattening is a standard technique for dimension reduction before fully connected layers. In the next stage, the data was flattened to a 1D vector, which was passed to fully connected dense layers with GELU activation and L2 regularization, while the addition of a Dropout layer prevented overfitting. The output was a single value corresponding to the predicted concentration of the solution in the image. A detailed description of the network architecture is provided in Table 2 and an overview is illustrated in Figure 9. The total number of parameters was 23,879, and training in a single cycle took from a few to several seconds (depending on the computation variant).
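From this description, a Keras sketch of such an architecture might look as follows. The filter counts, regularization strength, and dropout rate are illustrative assumptions and do not reproduce the paper's exact 23,879-parameter configuration:

```python
import tensorflow as tf
from tensorflow.keras import layers, regularizers

def build_regression_model(l2=1e-4):
    """Sketch of the described architecture with illustrative layer sizes."""
    model = tf.keras.Sequential([
        layers.Input(shape=(50, 50, 3)),               # 50 x 50 RGB crops
        layers.Conv2D(32, kernel_size=1, activation="gelu",
                      kernel_regularizer=regularizers.l2(l2)),
        layers.LayerNormalization(axis=-1),            # normalize along features
        layers.AveragePooling2D(pool_size=2),
        layers.Conv2D(64, kernel_size=1, activation="gelu",
                      kernel_regularizer=regularizers.l2(l2)),
        layers.LayerNormalization(axis=-1),
        layers.GlobalMaxPooling2D(),                   # one value per feature map
        layers.Flatten(),
        layers.Dense(64, activation="gelu",
                     kernel_regularizer=regularizers.l2(l2)),
        layers.Dropout(0.2),
        layers.Dense(1),                               # predicted concentration
    ])
    model.compile(optimizer="adam", loss="mse", metrics=["mae"])
    return model

model = build_regression_model()
print(model.output_shape)  # (None, 1)
```

Compiling with an MSE loss and an MAE metric matches the error measures discussed in the regression section.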
In a convolutional layer such as Conv2D, L2 regularization acts on the convolutional filters (kernels), which are responsible for extracting features from the input images. With L2 regularization, the loss function the model minimizes during optimization includes an additional term that penalizes large filter weights. This helps prevent overfitting, stabilizes the optimization process, and promotes more uniform and stable solutions, leading to better model generalization.
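The penalty term can be illustrated with a minimal sketch (the kernel values, data loss, and regularization coefficient below are arbitrary toy numbers, not values from the study):

```python
# Illustrative sketch of the L2 penalty: the optimizer minimizes the data
# loss plus lambda times the sum of squared kernel weights, so large filter
# weights are discouraged.

def l2_penalty(weights, lam=0.01):
    return lam * sum(w * w for w in weights)

kernel = [0.5, -1.2, 2.0]           # toy 1x1x3 convolution kernel
data_loss = 0.8                     # placeholder data-fit loss
total_loss = data_loss + l2_penalty(kernel)
print(round(total_loss, 4))  # 0.8 + 0.01 * (0.25 + 1.44 + 4.0) = 0.8569
```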
The kernel size of 1 × 1 in a convolutional layer has specific and useful properties. The 1 × 1 filters are typically used to transform the channel dimension without spatial aggregation, which is useful for reducing the number of feature channels or introducing nonlinearity without changing the spatial size. This configuration is often called a “pointwise convolution”. A 1 × 1 kernel changes the number of output channels (depth) without changing the width and height of the input, combining information contained in the different input channels without integrating spatial information. Each output point is a linear combination of the input channel values at a given spatial point. For example, if at a given input point (x, y) the channels have values (v1, v2, v3), the 1 × 1 kernel applies different weights to these values and combines them, creating new output channels. Such a kernel has fewer parameters than larger convolutional kernels, resulting in fewer computational operations, which is efficient in terms of memory and computation time. In modeling based on RGB components, using a 1 × 1 kernel means examining the relationships between the individual color components rather than the neighborhood of each pixel. In our task, there was no need to detect small and then progressively larger details, as is the case in shape recognition.
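The pointwise nature of a 1 × 1 convolution can be demonstrated with a small NumPy sketch (illustrative only; the channel counts and weights are arbitrary): the operation reduces to a matrix product over the channel axis, so each output pixel depends only on the RGB values at the same location.

```python
# Sketch: a 1x1 convolution is a per-pixel linear combination of the input
# channels -- no spatial neighborhood is involved.
import numpy as np

rng = np.random.default_rng(0)
image = rng.random((50, 50, 3))    # H x W x 3 (RGB patch)
weights = rng.random((3, 8))       # 3 input channels -> 8 output channels
bias = rng.random(8)

# Convolving with a 1x1 kernel is a matrix product over the channel axis
out = image @ weights + bias       # shape (50, 50, 8)

# Each output pixel depends only on the RGB values at the same (x, y)
x, y = 10, 20
assert np.allclose(out[x, y], image[x, y] @ weights + bias)
print(out.shape)  # (50, 50, 8)
```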
The GELU activation function [47], applied as an alternative to ReLU (Rectified Linear Unit) and other activation functions in neural networks, ensures better gradient behavior and more efficient learning in deep neural networks (Figure 10). Its continuity, differentiability, and ability to preserve nonlinear data characteristics make it a valuable tool in the design of deep learning models. Unlike ReLU, which can suffer from gradient flow issues during training (known as the vanishing gradient problem), GELU provides better gradient throughput due to its structure. GELU preserves nonlinear features that are crucial for learning data representations, helping models better capture complex dependencies in training data.
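The exact definition, GELU(x) = x·Φ(x) with Φ the standard normal CDF, can be evaluated directly (a minimal sketch for comparison with ReLU; deep learning frameworks provide built-in versions):

```python
# GELU(x) = x * Phi(x), where Phi is the standard normal CDF; unlike ReLU,
# small negative inputs yield small (nonzero) negative outputs.
import math

def gelu(x):
    return 0.5 * x * (1.0 + math.erf(x / math.sqrt(2.0)))

def relu(x):
    return max(0.0, x)

print(gelu(0.0))    # 0.0
print(gelu(3.0))    # ~2.996 (close to identity for large positive x)
print(gelu(-1.0))   # ~-0.159 (ReLU would give exactly 0 here)
```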
In the LayerNormalization layer, normalization is performed along the feature axis for each sample, meaning that normalization is independent of other samples in the batch. Each feature vector (e.g., channels in an image) was normalized separately, considering only the values within that vector, not the entire batch. Unlike BatchNormalization, which operates depending on batch size, LayerNormalization remains stable even with small batches or a batch size of one. Normalization can help prevent issues like vanishing or exploding gradients and accelerate convergence. Normalizing the activations helps maintain values within a reasonable range, stabilizing the learning process. When activations are normalized, the optimizer (learning algorithm) finds it easier to adjust network weights, speeding up convergence to optimal weight values.
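The per-sample semantics can be sketched in NumPy (illustrative; the trainable gamma/beta scale-and-shift of the Keras layer is omitted, and the batch and feature sizes are arbitrary):

```python
# LayerNormalization semantics: each sample is normalized over its own
# feature axis, independently of the rest of the batch.
import numpy as np

rng = np.random.default_rng(1)
batch = rng.normal(5.0, 3.0, size=(4, 64))   # 4 samples, 64 features each

eps = 1e-6
mean = batch.mean(axis=-1, keepdims=True)    # per-sample statistics
var = batch.var(axis=-1, keepdims=True)
normed = (batch - mean) / np.sqrt(var + eps)

print(normed.mean(axis=-1))  # ~0 for every sample
print(normed.std(axis=-1))   # ~1 for every sample
```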
GlobalMaxPool2D is a layer used in neural networks to reduce spatial data dimensions after applying convolutional layers or other layers processing spatial data, such as pooling layers [48,49]. GlobalMaxPool2D operates by selecting the maximum value of each feature channel across the entire spatial area (all points) of the input feature tensor. GlobalMaxPool2D is an effective tool for spatial dimensionality reduction, retaining only the most significant features (highest values) from each feature channel. This reduces the number of model parameters, which can help reduce overfitting and computational complexity. GlobalMaxPool2D has no trainable parameters; its sole purpose is to select the maximum value from each feature channel throughout the feature map. For each feature channel, the GlobalMaxPool2D layer selects the highest value across the entire spatial area (height and width). The result of GlobalMaxPool2D is a tensor with reduced dimensions, containing only the maximum values for each feature channel. After applying GlobalMaxPool2D, the resulting tensor can be passed to a fully connected layer or other types of layers that process one-dimensional data vectors.
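The reduction performed by GlobalMaxPool2D amounts to taking the maximum over the spatial axes, as in this NumPy sketch (the tensor shape mirrors the feature map before pooling in Table 2; values are random):

```python
# GlobalMaxPooling2D: take the maximum of each feature channel over the
# whole spatial area, collapsing (N, H, W, C) to (N, C).
import numpy as np

rng = np.random.default_rng(2)
features = rng.random((30, 44, 44, 64))      # batch of feature maps

pooled = features.max(axis=(1, 2))           # max over height and width
print(pooled.shape)                          # (30, 64)

# Each entry is the largest activation of that channel anywhere in the map
assert pooled[0, 0] == features[0, :, :, 0].max()
```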
In this study, the model was compiled using the Adam optimizer with customized beta parameters and a defined loss function, which was the sum of MSE and MAE. When the loss function is a combination of MSE and MAE, it affects how the model learns to minimize errors. MSE penalizes large errors more severely (due to the squares of differences), whereas MAE treats all errors equally. Combining these loss functions means that the model will aim to reduce both large and small errors. The gradient of MSE is steeper for larger errors, causing the model to learn faster from significant errors. The MAE gradient is constant, leading to a more consistent learning rate across all errors. MAE is more resistant to outliers than MSE because it does not involve squared differences. A loss function that combines MSE and MAE can lead to faster convergence in some cases by leveraging the strengths of both methods. MSE can accelerate the reduction of large errors, while MAE can provide stability and a consistent learning pace.
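A plain-Python sketch of the combined objective (illustrative; in Keras it could be supplied as a custom loss function computing MSE + MAE over a batch):

```python
# Combined loss: sum of mean squared error and mean absolute error.

def mse_plus_mae(y_true, y_pred):
    n = len(y_true)
    mse = sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / n
    mae = sum(abs(t - p) for t, p in zip(y_true, y_pred)) / n
    return mse + mae

print(mse_plus_mae([0.0], [2.0]))            # 4.0 + 2.0 = 6.0
print(mse_plus_mae([1.0, 2.0], [1.0, 2.0]))  # 0.0 (perfect prediction)
```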
During training, a modification of the learning rate was applied after reaching a stable val_loss level for five epochs. The initial value was 0.001. Details regarding the learning parameters are provided in Table 3.
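This schedule can be mocked with a few lines of plain Python. The reduction factor of 0.1 below is an assumption (it is consistent with the reported drop from 10−3 to 10−5 over training, but the paper does not state it explicitly); in Keras, this behavior corresponds to the ReduceLROnPlateau callback with patience=5.

```python
# Sketch of a "reduce on plateau" schedule: if val_loss has not improved
# for `patience` epochs, multiply the learning rate by `factor`.
# factor=0.1 is an assumption, consistent with the reported 1e-3 -> 1e-5.

def schedule(val_losses, lr=1e-3, patience=5, factor=0.1):
    best = float("inf")
    wait = 0
    for loss in val_losses:
        if loss < best - 1e-8:          # improvement resets the counter
            best, wait = loss, 0
        else:
            wait += 1
            if wait >= patience:        # plateau reached: shrink the step
                lr *= factor
                wait = 0
    return lr

# Six epochs of improvement, then ten stagnant epochs -> two reductions
losses = [1.0, 0.8, 0.6, 0.5, 0.45, 0.44] + [0.44] * 10
print(schedule(losses))  # ~1e-05
```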
The input dataset consisted of images with dimensions of 400 × 400 and was divided into three parts: training, validation, and test sets in a 60/20/20 ratio. During training, 50 × 50 squares were dynamically cropped from the images, and these squares served as the direct input to the first layer of the network.
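The dynamic cropping step can be sketched as follows (illustrative NumPy version with a random stand-in for a photo):

```python
# Dynamic cropping: a random 50x50 patch is cut from each 400x400 RGB
# image and fed to the network's input layer.
import numpy as np

rng = np.random.default_rng(3)
photo = rng.random((400, 400, 3))            # stand-in for one sample photo

def random_crop(img, size=50, rng=rng):
    h, w = img.shape[:2]
    top = rng.integers(0, h - size + 1)      # random top-left corner
    left = rng.integers(0, w - size + 1)
    return img[top:top + size, left:left + size]

patch = random_crop(photo)
print(patch.shape)  # (50, 50, 3)
```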
Six variants were tested when training the regression algorithm to determine vitamin C concentration from images: original images under artificial lighting, matched images under artificial lighting, original images under natural lighting, matched images under natural lighting, original images for the mixed dataset, and matched images for the mixed dataset. For each of them, the model was trained (Section 3.2) and predictions were made. Training and validation losses for the artificial-light data, for both original and histogram-matched images, are presented in Figure 11. The range was similar in both cases; however, the value fluctuations were smaller for matched images, and the validation and training losses were closer to each other.
Figure 12 presents the prediction results of the presented neural network. There were three pairs of outcomes: artificial lighting, natural lighting, and mixed data, each evaluated for both the original images and the processed ones. An improvement was observed at the higher concentration values for each of the conditions (Table 4). The standard deviation of the predictions was lower when the proposed standardization method was applied. The largest difference was seen in the case of artificial light, which can be attributed to the suppression of the light reflections visible in the original images.
The values of the numerical model evaluation and adjustment factors are presented in Table 4. The R2 score was very high (above 0.99) in all cases; however, in every instance, the R2 score was higher when standardization was performed. As for root mean squared error values, they were lower for matched images compared to the original ones. Depending on the lighting conditions, the learning rate changed from an initial value of 10−3 to 10−5.

4.4. Ablation Study

In this study, we also analyzed how different modifications of the network architecture affected the quality of the regression model. In the first stage, we considered a network that did not account for the specifics of the problem. The model contained six layers: one convolution, one pooling, one flatten, and three dense. It was trained under the same conditions as earlier. The detailed quantitative results are presented in Table 5. The use of a more conventional architecture worsened the model’s ability to fit the data and its predictive capabilities. However, improvements in modeling quality were still observed after applying the color standardization algorithm.
Moreover, several other changes to the architecture were explored before the final version was chosen. Some of them are compared with the final version in Table 6.

5. Conclusions

This work presents a new color standardization algorithm, in which the main stages consist of preparing a template based on an initial series of photos, developing a color mapping principle using equalized digital RGB histograms of an ideal pattern and its photo, color conversion of photographs of the examined object involving the use of information about mapping template histograms, and image reconstruction based on the transformed histogram. The algorithm was developed for cases where the examined objects differ little in color and the color of each photographed object is almost uniform. The operation of the proposed procedure was demonstrated and verified in an analytical chemistry task, which involved the colorimetric determination of vitamin C. Photographs of AA solutions taken under controlled conditions did not show much color difference within the concentration range. The research was carried out under daylight and artificial lighting conditions in order to combine them into a single dataset and obtain a model that allows for the determination of vitamin C regardless of the lighting conditions used when taking the photos. Sets of standardized images formed the basis for deep machine learning using the architecture of neural networks in the regression variant.
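The per-channel mapping at the heart of the procedure can be illustrated with a minimal NumPy sketch of single-channel histogram matching via cumulative distributions (a simplified illustration with synthetic data, not the full template-based pipeline; for RGB photos, the same mapping would be applied to each channel):

```python
# Minimal single-channel sketch of histogram matching: remap source pixel
# values so that their cumulative distribution follows the template's.
import numpy as np

def match_channel(source, template):
    s_vals, s_counts = np.unique(source.ravel(), return_counts=True)
    t_vals, t_counts = np.unique(template.ravel(), return_counts=True)
    s_cdf = np.cumsum(s_counts) / source.size
    t_cdf = np.cumsum(t_counts) / template.size
    # For each source quantile, look up the template value at that quantile
    mapped = np.interp(s_cdf, t_cdf, t_vals)
    return mapped[np.searchsorted(s_vals, source.ravel())].reshape(source.shape)

rng = np.random.default_rng(4)
source = rng.integers(40, 120, size=(200, 200))     # darker channel
template = rng.integers(100, 220, size=(200, 200))  # brighter template
matched = match_channel(source, template)
# After matching, the channel statistics follow the template
print(round(matched.mean(), 1), round(template.mean(), 1))
```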
For the vitamin C concentration range from 0 to 87.72 µg·mL−1, the RMSE for the test set ranged between 0.75 and 1.95 µg·mL−1, in comparison to the non-standardized variant, where the RMSE was between 1.48 and 2.29 µg·mL−1. The agreement of the concentration predictions with the actual data, as expressed by R2, was greater than 0.99 for each of the standardized variants. The greatest improvement was noted in the case of artificial light, which can be explained by the elimination of reflections on the surfaces of the solutions after the standardization process. Further studies are needed to determine the usefulness of the algorithm in different scenarios (e.g., colorimetric prediction of the concentration of other compounds). This new approach may contribute to the development of a new field of chemical analysis: the quantitative evaluation of a broad spectrum of chemical analytes using mobile phones and image analysis. The presented study shows promise for applications in several fields of science and technology, such as biomedical engineering, chemical engineering, and the food industry.

Author Contributions

P.K.: Conceptualization; Methodology; Data curation; Formal analysis; Investigation; Software; Resources; Visualization; Writing—original draft; Writing—Review and Editing. M.J.: Conceptualization; Formal analysis; Funding acquisition; Investigation; Methodology; Project administration; Supervision; Validation; Writing—original draft; Writing—Review and Editing. All authors have read and agreed to the published version of the manuscript.

Funding

Research project supported by the program “Excellence initiative—research university” for the AGH University of Krakow (ID1479).

Data Availability Statement

The algorithms implemented in Python and a sample of the data for the calculations are available on GitHub at: https://github.com/pwkwiek/vitaminC (accessed on 13 July 2024).

Acknowledgments

The authors thank Inż. Filip Ciepiela (AGH University of Krakow) for valuable guidance during the project implementation.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Fairchild, M.D. Color Appearance Models, 3rd ed.; John Wiley & Sons, Ltd.: Hoboken, NJ, USA, 2013; ISBN 9781119967033. [Google Scholar]
  2. Minz, P.S.; Saini, C.S. Evaluation of RGB Cube Calibration Framework and Effect of Calibration Charts on Color Measurement of Mozzarella Cheese. J. Food Meas. Charact. 2019, 13, 1537–1546. [Google Scholar] [CrossRef]
  3. Ernst, A.; Papst, A.; Ruf, T.; Garbas, J.U. Check My Chart: A Robust Color Chart Tracker for Colorimetric Camera Calibration. In Proceedings of the 6th International Conference on Computer Vision/Computer Graphics Collaboration Techniques and Applications, MIRAGE’13, Berlin, Germany, 6–7 June 2013. [Google Scholar]
  4. McCamy, C.S.; Marcus, H.; Davidson, J.G. Color-Rendition Chart. J. Appl. Photogr. Eng. 1976, 2, 95–99. [Google Scholar]
  5. Sunoj, S.; Igathinathane, C.; Saliendra, N.; Hendrickson, J.; Archer, D. Color Calibration of Digital Images for Agriculture and Other Applications. ISPRS J. Photogramm. Remote Sens. 2018, 146, 221–234. [Google Scholar] [CrossRef]
  6. Kim, M.; Kim, B.; Park, B.; Lee, M.; Won, Y.; Kim, C.Y.; Lee, S. A Digital Shade-Matching Device for Dental Color Determination Using the Support Vector Machine Algorithm. Sensors 2018, 18, 3051. [Google Scholar] [CrossRef]
  7. Karaimer, H.C.; Brown, M.S. Improving Color Reproduction Accuracy on Cameras. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 6440–6449. [Google Scholar]
  8. Roy, S.; Kumar Jain, A.; Lal, S.; Kini, J. A Study about Color Normalization Methods for Histopathology Images. Micron 2018, 114, 42–61. [Google Scholar] [CrossRef]
  9. Zhao, Y.; Ferguson, S.; Zhou, H.; Elliott, C.; Rafferty, K. Color Alignment for Relative Color Constancy via Non-Standard References. IEEE Trans. Image Process. 2022, 31, 6591–6604. [Google Scholar] [CrossRef]
  10. Rashid, F.; Jamayet, N.B.; Farook, T.H.; AL-Rawas, M.; Barman, A.; Johari, Y.; Noorani, T.Y.; Abdullah, J.Y.; Eusufzai, S.Z.; Alam, M.K. Color Variations during Digital Imaging of Facial Prostheses Subjected to Unfiltered Ambient Light and Image Calibration Techniques within Dental Clinics: An In Vitro Analysis. PLoS ONE 2022, 17, e0273029. [Google Scholar] [CrossRef]
  11. Barbero-Álvarez, M.A.; Rodrigo, J.A.; Menéndez, J.M. Minimum Error Adaptive RGB Calibration in a Context of Colorimetric Uncertainty for Cultural Heritage Preservation. Comput. Vis. Image Underst. 2023, 237, 103835. [Google Scholar] [CrossRef]
  12. Noor Azhar, M.; Bustam, A.; Naseem, F.S.; Shuin, S.S.; Md Yusuf, M.H.; Hishamudin, N.U.; Poh, K. Improving the Reliability of Smartphone-Based Urine Colorimetry Using a Colour Card Calibration Method. Digit. Health 2023, 9, 20552076231154684. [Google Scholar] [CrossRef]
  13. Zhang, G.; Song, S.; Panescu, J.; Shapiro, N.; Dannemiller, K.C.; Qin, R. A Novel Systems Solution for Accurate Colorimetric Measurement through Smartphone-Based Augmented Reality. PLoS ONE 2023, 18, e0287099. [Google Scholar] [CrossRef]
  14. Chairat, S.; Chaichulee, S.; Dissaneewate, T.; Wangkulangkul, P.; Kongpanichakul, L. AI-Assisted Assessment of Wound Tissue with Automatic Color and Measurement Calibration on Images Taken with a Smartphone. Healthcare 2023, 11, 273. [Google Scholar] [CrossRef]
  15. Suominen, J.; Egiazarian, K. Camera Color Correction Using Splines. In Proceedings of the IS&T International Symposium on Electronic Imaging, Burlingame, CA, USA, 21–25 January 2024; pp. 165-1–165-6. [Google Scholar]
  16. Souissi, M.; Chaouch, S.; Moussa, A. Color Matching of Bicomponent (PET/PTT) Filaments with High Performances Using Genetic Algorithm. Sci. Rep. 2024, 14, 10949. [Google Scholar] [CrossRef]
  17. Wannasin, D.; Grossmann, L.; McClements, D.J. Optimizing the Appearance of Plant-Based Foods Using Natural Pigments and Color Matching Theory. Food Biophys. 2024, 19, 120–130. [Google Scholar] [CrossRef]
  18. Wu, Y. Reference Image Aided Color Matching Design Based on Interactive Genetic Algorithm. J. Electr. Syst. 2024, 20, 400–410. [Google Scholar] [CrossRef]
  19. Food and Agriculture Organization; World Health Organization. Vitamin and Mineral Requirements in Human Nutrition, 2nd ed.; FAO/WHO: Geneva, Switzerland, 1998; pp. 1–20. ISBN 9241546123. [Google Scholar]
  20. Food and Agriculture Organization; World Health Organization. Human Vitamin and Mineral Requirements; FAO/WHO: Geneva, Switzerland, 2001. [Google Scholar]
  21. Lykkesfeldt, J. On the Effect of Vitamin C Intake on Human Health: How to (Mis)Interprete the Clinical Evidence. Redox Biol. 2020, 34, 101532. [Google Scholar] [CrossRef]
  22. Doseděl, M.; Jirkovský, E.; Macáková, K.; Krčmová, L.K.; Javorská, L.; Pourová, J.; Mercolini, L.; Remião, F.; Nováková, L.; Mladěnka, P. Vitamin C—Sources, Physiological Role, Kinetics, Deficiency, Use, Toxicity, and Determination. Nutrients 2021, 13, 615. [Google Scholar] [CrossRef]
  23. Suntornsuk, L.; Gritsanapun, W.; Nilkamhank, S.; Paochom, A. Quantitation of Vitamin C Content in Herbal Juice Using Direct Titration. J. Pharm. Biomed. Anal. 2002, 28, 849–855. [Google Scholar] [CrossRef]
  24. Klimczak, I.; Gliszczyńska-Świgło, A. Comparison of UPLC and HPLC Methods for Determination of Vitamin C. Food Chem. 2015, 175, 100–105. [Google Scholar] [CrossRef]
  25. Gazdik, Z.; Zitka, O.; Petrlova, J.; Adam, V.; Zehnalek, J.; Horna, A.; Reznicek, V.; Beklova, M.; Kizek, R. Determination of Vitamin C (Ascorbic Acid) Using High Performance Liquid Chromatography Coupled with Electrochemical Detection. Sensors 2008, 8, 7097–7112. [Google Scholar] [CrossRef]
  26. Bunaciu, A.A.; Bacalum, E.; Aboul-Enein, H.Y.; Udristioiu, G.E.; Fleschin, Ş. FT-IR Spectrophotometric Analysis of Ascorbic Acid and Biotin and Their Pharmaceutical Formulations. Anal. Lett. 2009, 42, 1321–1327. [Google Scholar] [CrossRef]
  27. Zhu, Q.; Dong, D.; Zheng, X.; Song, H.; Zhao, X.; Chen, H.; Chen, X. Chemiluminescence Determination of Ascorbic Acid Using Graphene Oxide@copper-Based Metal-Organic Frameworks as a Catalyst. RSC Adv. 2016, 6, 25047–25055. [Google Scholar] [CrossRef]
  28. Berg, R.W. Investigation of L (+)-Ascorbic Acid with Raman Spectroscopy in Visible and UV Light. Appl. Spectrosc. Rev. 2015, 50, 193–239. [Google Scholar] [CrossRef]
  29. Yang, H.; Irudayaraj, J. Rapid Determination of Vitamin C by NIR, MIR and FT-Raman Techniques. J. Pharm. Pharmacol. 2010, 54, 1247–1255. [Google Scholar] [CrossRef]
  30. Zykova, E.V.; Sandetskaya, N.G.; Ostrovskii, O.V.; Verovskii, V.E. Determining Ascorbic Acid in Medicinal Preparations by Capillary Zone Electrophoresis and Micellar Electrokinetic Chromatography. Pharm. Chem. J. 2010, 44, 463–465. [Google Scholar] [CrossRef]
  31. Dodevska, T.; Hadzhiev, D.; Shterev, I. A Review on Electrochemical Microsensors for Ascorbic Acid Detection: Clinical, Pharmaceutical, and Food Safety Applications. Micromachines 2023, 14, 41. [Google Scholar] [CrossRef]
  32. Huang, L.; Tian, S.; Zhao, W.; Liu, K.; Guo, J. Electrochemical Vitamin Sensors: A Critical Review. Talanta 2021, 222, 121645. [Google Scholar] [CrossRef]
  33. Broncová, G.; Prokopec, V.; Shishkanova, T.V. Potentiometric Electronic Tongue for Pharmaceutical Analytics: Determination of Ascorbic Acid Based on Electropolymerized Films. Chemosensors 2021, 9, 110. [Google Scholar] [CrossRef]
  34. Coutinho, M.S.; Morais, C.L.M.; Neves, A.C.O.; Menezes, F.G.; Lima, K.M.G. Colorimetric Determination of Ascorbic Acid Based on Its Interfering Effect in the Enzymatic Analysis of Glucose: An Approach Using Smartphone Image Analysis. J. Braz. Chem. Soc. 2017, 28, 2500–2505. [Google Scholar] [CrossRef]
  35. Porto, I.S.A.; Santos Neto, J.H.; dos Santos, L.O.; Gomes, A.A.; Ferreira, S.L.C. Determination of Ascorbic Acid in Natural Fruit Juices Using Digital Image Colorimetry. Microchem. J. 2019, 149, 104031. [Google Scholar] [CrossRef]
  36. Kong, L.; Gan, Y.; Liang, T.; Zhong, L.; Pan, Y.; Kirsanov, D.; Legin, A.; Wan, H.; Wang, P. A Novel Smartphone-Based CD-Spectrometer for High Sensitive and Cost-Effective Colorimetric Detection of Ascorbic Acid. Anal. Chim. Acta 2020, 1093, 150–159. [Google Scholar] [CrossRef]
  37. Li, C.; Xu, X.; Wang, F.; Zhao, Y.; Shi, Y.; Zhao, X.; Liu, J. Portable Smartphone Platform Integrated with Paper Strip-Assisted Fluorescence Sensor for Ultrasensitive and Visual Quantitation of Ascorbic Acid. Food Chem. 2023, 402, 134222. [Google Scholar] [CrossRef]
  38. Zhao, W.; Cao, P.; Zhu, Y.; Liu, S.; Gao, H.-W.; Huang, C. Rapid Detection of Vitamin C Content in Fruits and Vegetables Using a Digital Camera and Color Reaction. Quim. Nova 2020, 43, 1421–1430. [Google Scholar] [CrossRef]
  39. Dumancas, G.G.; Ramasahayam, S.; Bello, G.; Hughes, J.; Kramer, R. Chemometric Regression Techniques as Emerging, Powerful Tools in Genetic Association Studies. TrAC Trends Anal. Chem. 2015, 74, 79–88. [Google Scholar] [CrossRef]
  40. Wold, S.; Sjöström, M.; Eriksson, L. PLS-Regression: A Basic Tool of Chemometrics. Chemom. Intell. Lab. Syst. 2001, 58, 109–130. [Google Scholar] [CrossRef]
  41. Li, B.; Morris, J.; Martin, E.B. Model Selection for Partial Least Squares Regression. Chemom. Intell. Lab. Syst. 2002, 64, 79–89. [Google Scholar] [CrossRef]
  42. Smola, A.J.; Schölkopf, B. A Tutorial on Support Vector Regression. Stat. Comput. 2004, 14, 199–222. [Google Scholar] [CrossRef]
  43. Goodfellow, I.; Bengio, Y.; Courville, A. Deep Learning; MIT Press: Cambridge, MA, USA, 2016; ISBN 9780262035613. [Google Scholar]
  44. Lathuiliere, S.; Mesejo, P.; Alameda-Pineda, X.; Horaud, R. A Comprehensive Analysis of Deep Regression. IEEE Trans. Pattern Anal. Mach. Intell. 2020, 42, 2065–2081. [Google Scholar] [CrossRef]
  45. Pascual, L.; Gras, M.; Vidal-Brotóns, D.; Alcañiz, M.; Martínez-Máñez, R.; Ros-Lis, J.V. A Voltammetric E-Tongue Tool for the Emulation of the Sensorial Analysis and the Discrimination of Vegetal Milks. Sens. Actuators B Chem. 2018, 270, 231–238. [Google Scholar] [CrossRef]
  46. Wójcik, S.; Ciepiela, F.; Jakubowska, M. Computer Vision Analysis of Sample Colors versus Quadruple-Disk Iridium-Platinum Voltammetric e-Tongue for Recognition of Natural Honey Adulteration. Meas. J. Int. Meas. Confed. 2023, 209, 112514. [Google Scholar] [CrossRef]
  47. Lee, M. Mathematical Analysis and Performance Evaluation of the GELU Activation Function in Deep Learning. J. Math. 2023, 1, 4229924. [Google Scholar] [CrossRef]
  48. Sikandar, S.; Mahum, R.; Alsalman, A.M. A Novel Hybrid Approach for a Content-Based Image Retrieval Using Feature Fusion. Appl. Sci. 2023, 13, 4581. [Google Scholar] [CrossRef]
  49. Hasan, M.A.; Haque, F.; Sabuj, S.R.; Sarker, H.; Goni, M.O.F.; Rahman, F.; Rashid, M.M. An End-to-End Lightweight Multi-Scale CNN for the Classification of Lung and Colon Cancer with XAI Integration. Technologies 2024, 12, 56. [Google Scholar] [CrossRef]
Figure 1. Example of histogram matching for illustrative purposes.
Figure 2. Diagram of the color matching algorithm.
Figure 3. Vitamin C solutions (0, 4.54, 11.31, 18.05, 22.52, 33.63, 44.64, 66.37, and 87.72 µg·mL−1) in the presence of double-distilled water, acetate buffer, ammonium iron(III) sulfate, and orthophenanthroline.
Figure 4. Scheme of conducted experiments and calculations: preparation of AA solutions with varying concentrations; application of solutions onto Petri dishes; image capture under different lighting conditions (AA solutions with color templates); image preprocessing—color standardization; training and evaluation of the regression network.
Figure 5. Control of lighting conditions—two approaches: natural light (top) and smartphone lamp lighting (bottom). Right side of an image—digital template used in histogram matching; arrows indicate the lighting condition under which the images were taken.
Figure 6. Templates and their histograms for RGB channels: (i) original digital template, (ii) image of a template under natural lighting, (iii) image of a matched template under natural lighting, (iv) image of a template under artificial lighting, and (v) image of a matched template under artificial lighting.
Figure 7. Images of the vitamin C solution (33.63 µg/mL) under artificial lighting conditions before (source) and after (matched) application of the standardization algorithm and corresponding histograms of RGB channels.
Figure 8. Various exemplary images of the vitamin C solution (33.63 µg/mL) under artificial lighting conditions before (top) and after (bottom) application of the standardization algorithm.
Figure 9. Neural network architecture used in this study.
Figure 10. ReLU and GELU activation functions.
Figure 11. Training and validation losses for calculations under artificial lighting conditions for both original (i) and histogram-matched (ii) images.
Figure 12. Evaluation of DL regression models for six different variations of testing, before (first column) and after color matching (second column).
Table 1. Comparison of related works.
Ref. | Year | Problem Solved | Dataset Used | Color Correction Strategy
[9] | 2022 | The general concept of relative color constancy (RCC), i.e., the ability to align colors of the same objects between images independent of illumination and camera | Three image datasets collected by multiple cameras under various illumination and exposure conditions (Belfast, Middlebury, Gehler-Shi) | Three-step color alignment framework: camera response calibration with color patches, response linearization, and pixel-wise color intensity and chromaticity matching
[10] | 2022 | Color variations within clinical images of maxillofacial prosthetic silicone specimens (in vitro approach) | Images of the pigmented prosthetic silicone specimens within different ambient lighting conditions | Computerized software-based post-processing white balance calibration (PPWBC) using a gray card and Macbeth color chart
[11] | 2023 | Non-invasive cultural heritage conservation in the context of material degradation | Four sets of images in different lighting conditions, acquired with a conventional cellphone camera in sRGB color space | An adaptive transfer function working with a self-made color calibration chart; neural network-based final tuning of the calibration functions
[12] | 2023 | Smartphone-based urine colorimetry | Urine samples from 58 patients photographed in a customized photo box, under five simulated lighting conditions, using five smartphones in RGB color space | Color calibration with a SpyderCHECKR™ color chart and the Adobe Photoshop proprietary algorithms to scale the uncorrected RGB values
[13] | 2023 | A smartphone-based solution for accurate colorimetric measurements, with an augmented-reality guiding system | Simulated and real images; images of color stripe kits for pH reading | A color correction algorithm with a first-order spatially varying regression model which leverages both the absolute color magnitude and scale; the system includes a color reference board
[14] | 2023 | Assessment of the healing process of chronic wounds | Wound tissue images taken at different times, under different lighting conditions, distances, lenses, and smartphone cameras in RGB color space | Images of wound tissue taken with a Macbeth chart of 24 square color patches and four ArUco markers at the corners of the chart; transformation matrix calculation using the Moore–Penrose inverse matrix and the target matrix
[15] | 2024 | A general-purpose algorithm | Photos of nature and city landscapes, vegetation, and human-made objects in RGB color space | A matrix-based transformation using tensor-product B-splines with optimization of the smoothing parameter λ and the number of spline basis functions
[16] | 2024 | Reproduction of reference colors of textile clothing by selecting appropriate disperse dyes | Results of colorimetric measurements using a Spectraflash 600 Plus spectrophotometer (Datacolor International, USA) | A genetic algorithm to optimize the selection and concentration of dyes for exact color matching in bicomponent (PET/PTT) filaments
[17] | 2024 | Optimization of the appearance of plant-based foods, considering that animal-based products may contain a wide range of different pigments and structural elements that scatter light | L*, a*, b* values of animal-based products and plant-based emulsions measured by colorimeter (ColorFlex EZ 45/0-LAV, Hunter Associates Laboratory Inc., Reston, VA, USA) | A color matching model based on Kubelka–Munk theory, used to calculate the spectral reflectance of the mixed emulsions from the spectral reflectance of the individual color-loaded emulsions
[18] | 2024 | Color design for fashion: color selection and matching of different color combinations | Graphic objects designed in CorelDraw | Color matching based on an interactive genetic algorithm using reference images
Table 2. Neural network architecture and parameters used in this study.
Layer (Type)Output ShapeNumber of Parameters
Input layer(None, 50, 50, 3)0
Conv2D(None, 50, 50,3 2)128
Activation(None, 50, 50, 32)0
Conv2D(None, 50, 50, 64)2112
Activation(None, 50, 50, 64)0
AveragePooling2D(None, 46, 46, 64)0
LayerNormalization(None, 46, 46, 64)128
Conv2D(None, 46, 46, 64)4160
Activation(None, 46, 46, 64)0
AveragePooling2D(None, 44, 44, 64)0
GlobalMaxPooling2D(None, 64)0
Flatten(None, 64)0
Dense(None, 150)9750
Dropout(None, 150)0
Dense(None, 50)7550
Dense(None, 1)51
Total params: 23,879 (93.28 KB)
Trainable params: 23,879 (93.28 KB)
Non-trainable params: 0 (0.00 Byte)
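The parameter counts in Table 2 can be reproduced from the layer shapes alone. Kernel sizes are not listed in the table; the counts are consistent with 1 × 1 convolutions (params = filters × (in_channels + 1)) and with LayerNormalization storing one gamma and one beta per channel. A minimal sketch verifying the total, under those assumptions:

```python
# Recompute the parameter counts in Table 2 from layer shapes alone.
# Kernel size k is an assumption: the listed counts match k = 1.

def conv2d_params(in_ch, filters, k=1):
    # weights (k * k * in_ch per filter) plus one bias per filter
    return filters * (in_ch * k * k + 1)

def dense_params(in_units, out_units):
    # weight matrix plus one bias per output unit
    return out_units * (in_units + 1)

counts = [
    conv2d_params(3, 32),    # Conv2D              -> 128
    conv2d_params(32, 64),   # Conv2D              -> 2112
    64 * 2,                  # LayerNormalization (gamma + beta) -> 128
    conv2d_params(64, 64),   # Conv2D              -> 4160
    dense_params(64, 150),   # Dense               -> 9750
    dense_params(150, 50),   # Dense               -> 7550
    dense_params(50, 1),     # Dense               -> 51
]
total = sum(counts)
print(total)  # 23879, matching "Total params" in Table 2
```

Pooling, activation, dropout, and flatten layers contribute no trainable parameters, so they are omitted from the sum.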
Table 3. Parameters of network training.

| Parameter | Setting |
|---|---|
| Image size | 50 × 50 × 3 |
| Loss function | MSE + MAE |
| Optimizer | Adam |
| Initial learning rate | 0.001 |
| Metric | RMSE |
| Batch size | 30 |
| Epochs | 20 |
| Shuffle | Every epoch |
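Table 3 lists "MSE + MAE" as the loss function; one plausible reading is the plain sum of the two terms, which penalizes large errors quadratically while keeping a linear term that is less sensitive to outliers. A framework-free sketch of that combined loss, under this interpretation:

```python
# Combined loss as listed in Table 3, read as the sum of mean squared
# error and mean absolute error. Pure Python, no DL framework assumed.

def mse_plus_mae(y_true, y_pred):
    n = len(y_true)
    mse = sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / n
    mae = sum(abs(t - p) for t, p in zip(y_true, y_pred)) / n
    return mse + mae

# Illustrative values only, not the study's data.
print(mse_plus_mae([1.0, 2.0, 3.0], [1.5, 2.0, 2.0]))
```

In a Keras-style setup the same idea could be passed as a custom loss callable; the exact weighting of the two terms used by the authors is not specified in this table.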
Table 4. Evaluation of DL regression models with the best results for six different variations of testing.

| | Artificial Lighting, Original | Artificial Lighting, Matched | Natural Lighting, Original | Natural Lighting, Matched | Mixed Lighting, Original | Mixed Lighting, Matched |
|---|---|---|---|---|---|---|
| Slope | 0.9799 | 0.9961 | 0.9706 | 0.9452 | 0.9784 | 0.9825 |
| Intercept | −0.0036 | −0.2091 | 1.1888 | 0.6051 | 0.5832 | 0.4086 |
| RMSE/µg·mL−1 | 1.5769 | 0.7525 | 2.2900 | 1.9542 | 1.4828 | 1.3510 |
| R2 score | 0.9979 | 0.9999 | 0.9953 | 0.9956 | 0.9966 | 0.9968 |
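The four metrics reported in Tables 4–6 can all be derived from pairs of actual and predicted concentrations: slope and intercept from an ordinary least-squares fit of predicted against actual values, plus RMSE and the R2 score. A stdlib-only sketch (the sample data below is illustrative, not the study's measurements):

```python
import math

def evaluate(actual, predicted):
    """Return (slope, intercept, rmse, r2) for predicted vs. actual values."""
    n = len(actual)
    mx = sum(actual) / n
    my = sum(predicted) / n
    # Ordinary least-squares fit of predicted (y) against actual (x).
    sxx = sum((x - mx) ** 2 for x in actual)
    sxy = sum((x - mx) * (y - my) for x, y in zip(actual, predicted))
    slope = sxy / sxx
    intercept = my - slope * mx
    # RMSE of predictions against actual concentrations.
    ss_res = sum((y - x) ** 2 for x, y in zip(actual, predicted))
    rmse = math.sqrt(ss_res / n)
    # R2 score: 1 - residual sum of squares / total sum of squares.
    r2 = 1.0 - ss_res / sxx
    return slope, intercept, rmse, r2

# Hypothetical concentrations in ug/mL, for illustration only.
slope, intercept, rmse, r2 = evaluate(
    [0.0, 10.0, 20.0, 30.0], [0.4, 10.2, 19.5, 30.1]
)
print(slope, intercept, rmse, r2)
```

A perfect model gives slope 1, intercept 0, RMSE 0, and R2 = 1, which is the reference point for reading the columns of Tables 4 and 5.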
Table 5. Evaluation of DL regression models considered in the ablation study for six different variations of testing, depending on the lighting conditions and chosen dataset.

| | Artificial Lighting, Original | Artificial Lighting, Matched | Natural Lighting, Original | Natural Lighting, Matched | Mixed Lighting, Original | Mixed Lighting, Matched |
|---|---|---|---|---|---|---|
| Slope | 0.9058 | 1.0138 | 0.9538 | 1.0071 | 1.0005 | 1.0112 |
| Intercept | 0.6891 | −0.3598 | −0.8347 | 0.2263 | 0.8747 | 0.0203 |
| RMSE/µg·mL−1 | 5.8415 | 1.7077 | 3.9504 | 2.5151 | 4.3488 | 2.8211 |
| R2 score | 0.9557 | 0.9962 | 0.9798 | 0.9918 | 0.9755 | 0.9897 |
Table 6. Evaluation of DL regression models considered in the ablation study for mixed lighting and matched condition of testing, depending on changes in the model.

| Change in Architecture | RMSE/µg·mL−1 | R2 Score | Comparison |
|---|---|---|---|
| Batch size = 120 | 2.3535 | 0.9947 | Decrease in R2 score and increase in RMSE compared with the final model. |
| Image size = 16 | 4.0146 | 0.9825 | Significant decrease in R2 score and increase in RMSE compared with the final model; slower initial decline of the loss than in the final version. |
| Loss function modification | 3.8203 | 0.9856 | Decrease in R2 score and significant increase in RMSE compared with the final model; smaller gap between training and validation loss during training. |
| Optimizer = Adagrad | 3.1625 | 0.9877 | Decrease in R2 score and increase in RMSE compared with the final model. |
| Optimizer = SGD | – | – | The loss did not change between epochs; the model did not learn to predict the concentration of vitamin C solutions. |
| Initial learning rate = 0.01 or 0.0001 | – | – | No significant change in the results compared with the final model. |
Citation: Kwiek, P.; Jakubowska, M. Color Standardization of Chemical Solution Images Using Template-Based Histogram Matching in Deep Learning Regression. Algorithms 2024, 17, 335. https://doi.org/10.3390/a17080335