
Digital Grading the Color Fastness to Rubbing of Fabrics Based on Spectral Reconstruction and BP Neural Network

1 School of Computer Science and Artificial Intelligence, Wuhan Textile University, Wuhan 430200, China
2 School of Automation, Qingdao University, Qingdao 266071, China
3 School of Textile Science and Engineering, Wuhan Textile University, Wuhan 430200, China
4 Analysis and Testing Center, Wuhan Textile University, Wuhan 430200, China
5 School of Design, University of Leeds, Leeds LS2 9JT, UK
* Authors to whom correspondence should be addressed.
J. Imaging 2023, 9(11), 251; https://doi.org/10.3390/jimaging9110251
Submission received: 2 October 2023 / Revised: 26 October 2023 / Accepted: 9 November 2023 / Published: 16 November 2023
(This article belongs to the Section Color, Multi-spectral, and Hyperspectral Imaging)

Abstract

To digitally grade the staining color fastness of fabrics after rubbing, an automatic grading method based on spectral reconstruction technology and a BP neural network is proposed. First, the modeling samples are prepared by rubbing the fabrics according to the ISO 105-X12 standard. Then, in accordance with the visual rating standards for color fastness, the modeling samples are graded by professionals to obtain the visual rating results. After that, a digital camera is used to capture images of the modeling samples inside a closed, uniformly illuminated light box, and the color data of the samples are obtained through spectral reconstruction. Finally, the color fastness prediction model for rubbing is constructed from the modeling sample data using a BP neural network. The color fastness grades of the testing samples are predicted with this model, and the predictions are compared with those of the existing color difference conversion method and gray scale difference method under a five-fold cross-validation strategy. Experiments show that the BP neural network builds a better prediction model of fabric color fastness, and that the overall performance of the method surpasses both the color difference conversion method and the gray scale difference method. The digital rating method for the staining color fastness to rubbing based on spectral reconstruction and a BP neural network is thus highly consistent with visual evaluation, which will facilitate automatic color fastness grading.

1. Introduction

Color fastness refers to the resistance of a textile's color to the various effects encountered during the processing and use of the material, and it is a very important indicator when testing the color quality of fabrics [1,2,3]. The color fastness of textile fabrics is usually determined by assessing the discoloration of the sample, or by assessing the staining of the undyed lining fabric. Expressed as a number of grades, color fastness is usually divided into five grades, with a half grade between two adjacent grades, forming five grades and nine steps. In general, the higher the grade, the better the color fastness.
For color fastness grading, the traditional method is mainly visual assessment [4]. The color difference between the sample to be rated and the reference sample is visually observed by a specially trained professional in a dark room under a standard light box. The observed color difference in the textile sample is then compared with the standard grading gray card under the same viewing conditions, and the grade of the gray card closest to the observed color difference is assigned as the color fastness grade of the sample. However, because of subjective differences in the work experience and physiological state of the assessor, the visual method inevitably affects the final rating results. In addition, the visual method can only estimate the approximate differences in tested samples based on human perceptual experience; it does not allow quantitative analysis of visually perceived color differences. More importantly, the visual method is also inefficient in practical application.
With the development of technology, color fastness grading based on color measurement equipment such as spectrophotometers has come into use for fabric quality evaluation. As a precise optical measuring instrument, the spectrophotometer can accurately measure the spectral reflectance of the fabric, from which the color data of the fabric can be calculated through colorimetric theory [5,6]. However, such instruments have limitations for color fastness evaluation. First, the measurement result of a spectrophotometer represents only the average color of a circular area determined by the measuring aperture. Second, spectrophotometers are not suitable for the color measurement and evaluation of textile fabrics with complex, fine color patterns. In addition, a spectrophotometer can complete only one color measurement at a time, so its efficiency is low compared with digital-imaging-based color measurement, which can measure multiple points from a single capture using image processing techniques.
Recently, digital-imaging-based methods have been used for the digital grading of the color fastness of fabric [7], where the color fastness is calculated from captured images of the fabric using a mathematical grading method. In these methods, the extracted RGB values are first converted to CIEXYZ tristimulus values and then to CIELab values. From the calculated CIELab values, the color difference between the tested and reference fabric can be computed [8,9], and the color fastness can then be graded according to this color difference.
Two types of digital color fastness grading methods have been proposed: the color difference conversion method [10,11] and the gray scale difference method [12,13]. In the first, using CIELab values calculated from the captured RGB values, the CIEDE2000 color difference between the tested and reference fabric is computed, and the relationship between the CIEDE2000 color difference and the standard color fastness grades is fitted by a mathematical algorithm such as linear regression. With the established regression model, the color fastness of a newly tested fabric can be predicted [10,11]. In the gray scale difference method, a color image of the standard grading gray card is captured with a digital camera and converted to a grayscale image. The grayscale difference between the pair of gray patches at each grade is then calculated, and the grayscale differences are fitted against the corresponding color fastness grades by curve fitting. With the fitted curve, the color fastness of a newly tested fabric is predicted from the gray image difference between its rubbed and unrubbed areas [12,13].
However, for digital-imaging-based methods, the extracted RGB values are imaging-condition-dependent. If color correction is not performed on the digital camera in advance, the calculated CIELab values are not the ground truth color attributes of the tested fabric sample, which inevitably introduces error into the grading results. Moreover, even if color correction has been performed, the color values are acquired under only one specific illuminant, which limits the use of the method when the color difference or color fastness grade must be evaluated under several different illuminants. In recent years, with the fast development of spectral reconstruction, the technique has been widely applied in many fields, since it can recover the 'fingerprint' of color: the spectral reflectance. With the reconstructed spectral reflectance, the color values under any illuminant are easily calculated by colorimetry. Furthermore, with the optimized spectral reconstruction algorithms published in recent years [14,15,16], the colorimetric accuracy of spectral reconstruction has clearly improved over traditional color correction methods such as the classical polynomial-based approach [17]. Indeed, measuring fabric color using spectral reconstruction technology has been widely studied and has achieved good results [18,19,20].
To further promote the application of spectral reconstruction technology in the textile industry, we propose a digital grading method for the color fastness to rubbing of textiles based on spectral reconstruction technology and BP (backpropagation) neural network modeling. Unlike existing methods that calculate color data directly from RGB values, spectral reconstruction recovers the spectral reflectance of the captured fabric image, which provides the basis for calculating the ground truth color data of the fabrics. In addition, a BP neural network is selected to model the relationship between the color attribute differences in the tested fabric and the corresponding color fastness grades; BP neural networks have strong learning ability and adaptability and have been widely used to solve complex nonlinear problems [21,22,23]. The experimental results show that the proposed digital-imaging-based color fastness grading method performs better than the existing methods.

2. Methodology

Conventional color fastness mainly includes color fastness to rubbing, washing, perspiration, and light [24,25,26]. This paper studies the color fastness to rubbing. The overall flowchart of the proposed method is shown in Figure 1. First, the fabric samples are prepared for color fastness modeling. The fabrics are paired samples: one of each pair is rubbed as the tested sample and the other remains as a reference. After the rubbing test, all samples are visually graded by a specially trained professional under standard grading conditions, so the ground truth of each tested sample is acquired as a reference for constructing the digital grading model. Then, spectral reconstruction is used to recover the spectral reflectance of the modeling samples, and the color data of each modeling sample are calculated by colorimetric theory. From the calculated color data, we obtain the color difference (CIEDE2000) and the color attribute differences (ΔL*, Δa*, and Δb*) of each paired sample. Finally, a BP neural network is adopted to construct the color fastness model between the input data (color difference and color attribute differences) and the visually graded color fastness results. With the constructed prediction model, the color fastness grade of newly tested samples can be easily predicted.
Details of color data acquisition based on spectral reconstruction technology are presented in Section 2.1, and the construction of the color fastness digital grading model based on the BP neural network is illustrated in Section 2.2. In addition, a brief introduction to the current two types of digital-imaging-based methods is also presented in Section 2.2.

2.1. Color Data Acquisition Based on Spectral Reconstruction

In this study, spectral reconstruction technology was used to calculate the color data of fabric samples. The first step is to capture digital images of the fabric samples and the spectral characterization samples (such as the X-rite ColorChecker color chart or a custom fabric chart) with a digital camera. Under the uniform daylight illumination of the closed ColorEye light box, each sample is placed on the sample platform with the optical path of the digital camera perpendicular to the platform plane. The geometric diagram of the uniformly illuminated light box is shown in Figure 2a, its rendering effect in Figure 2b, and the real product in Figure 2c; the inner illumination uniformity of the imaging area over the platform, checked with the X-rite gray card and a Nikon D7200 digital camera, is plotted in Figure 2d. To set appropriate imaging parameters (ISO, shutter speed, and aperture size), we ensured that the RGB values of the white and black patches of the X-rite ColorChecker 24 color chart were approximately 235 and 35, respectively. Accordingly, we set the ISO to 100, the shutter speed to 1/20 s, and the aperture to f/5.6 at a focal length of 35 mm, and used these imaging parameters to capture all samples in the experiment.
Using the set imaging parameters, the digital images of the fabric samples and spectral characterization samples are captured in the light box, and the average RGB values of each fabric sample and each color patch are extracted. If the extraction area is m × n pixels, the average RGB response values in the extraction area are calculated as shown in Equation (1):
$d = \frac{1}{m \times n} \sum_{i=1}^{m \times n} (r_i, g_i, b_i)$  (1)
where i indicates the ith pixel in the extracted area; $r_i$, $g_i$, and $b_i$ are the red, green, and blue channel responses of the ith pixel; and d is the response vector of dimension 1 × 3. Note that raw-format digital responses, not post-processed by the camera ISP (Image Signal Processing) module, are used in this study. Compared with the processed RGB values commonly used in current methods, raw responses are cleaner and more linear, which benefits spectral reconstruction accuracy. After ISP processing, the response of each channel is no longer linear but follows a more complex nonlinear law; moreover, the post-processing pipelines of different camera manufacturers differ from one another and are difficult to simulate or describe accurately [27,28].
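As an illustration, Equation (1) is a simple mean over the extraction window. A minimal sketch, assuming the raw image has already been demosaiced into an H × W × 3 array of linear responses (the function and argument names are ours):

```python
import numpy as np

def mean_raw_rgb(raw_image: np.ndarray, top: int, left: int, m: int, n: int) -> np.ndarray:
    """Average the raw R, G, B responses over an m x n extraction area (Equation (1))."""
    patch = raw_image[top:top + m, left:left + n, :].astype(np.float64)
    # d is the 1 x 3 vector of (r, g, b) responses averaged over all m*n pixels
    return patch.reshape(-1, 3).mean(axis=0)
```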
In this study, the spectral characterization of the digital camera is carried out using a traditional polynomial-based regularized pseudo-inverse technique [29]. As an illustration, the raw response of a spectral characterization sample is extended into a second-order polynomial with a total of 10 expansion terms, as shown in Equation (2):
$d_{\mathrm{exp}} = [1, r, g, b, rg, rb, gb, r^2, g^2, b^2]^T$  (2)
where $d_{\mathrm{exp}}$ is the vector of digital response values after polynomial expansion, the superscript T denotes the transpose, and r, g, and b are the red, green, and blue raw responses of a sample. Equation (3) gives the expanded response matrix of the spectral characterization samples:
$D_{\mathrm{train}} = [d_{\mathrm{exp},1}, d_{\mathrm{exp},2}, \ldots, d_{\mathrm{exp},j}]^T, \quad j = 1, 2, \ldots, P$  (3)
where j indexes the spectral characterization samples, P is their total number, $d_{\mathrm{exp},j}$ is the extended response vector of the jth sample, and $D_{\mathrm{train}}$ is the extended response matrix of the spectral modeling samples.
As indicated in Equations (4)–(7), the proposed method uses Tikhonov regularization to regularize the solution of the spectral reconstruction matrix. First, the singular value decomposition (SVD) algorithm is applied to the expanded response matrix $D_{\mathrm{train}}$ of the spectral modeling samples. Then, a very small number α is added to the singular values to reduce the condition number of the expanded response matrix, and the response expansion matrix $D_{\mathrm{train,rec}}$ is reconstructed. Finally, the spectral reconstruction matrix Q is solved with the pseudo-inverse (PI) algorithm to obtain the spectral reconstruction model.
$D_{\mathrm{train}} = U S V^T$  (4)

$P = S + \alpha I$  (5)

$D_{\mathrm{train,rec}} = U P V^T$  (6)

$Q = R_{\mathrm{train}} \times \mathrm{pinv}(D_{\mathrm{train,rec}})$  (7)
where $R_{\mathrm{train}}$ is the spectral matrix of the spectral modeling samples, U and V are the orthogonal matrices obtained by the SVD algorithm, S and P are diagonal matrices containing the singular values, I is the identity matrix, and pinv(·) denotes the pseudo-inverse.
In the next step, we use the established spectral reconstruction model to reconstruct the spectral reflectance of the newly tested fabric sample. The extracted raw response $d_{\mathrm{test}}$ of the newly tested fabric is first expanded using the polynomial in Equation (2) to obtain the expanded response vector $d_{\mathrm{test,exp}}$. Then, the spectral reflectance of the tested fabric sample is reconstructed using the spectral reconstruction matrix Q, as indicated in Equation (8):
$r_{\mathrm{test}} = Q \times d_{\mathrm{test,exp}}$  (8)
where r t e s t is the reconstructed spectral reflectance of the tested fabric sample and matrix Q is the established spectral reconstruction matrix.
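The whole characterization-and-reconstruction pipeline of Equations (2)–(8) can be sketched with numpy as follows; the row-wise data layout, the function names, and the value of α are our assumptions, not specifications from the paper:

```python
import numpy as np

def expand_poly(d: np.ndarray) -> np.ndarray:
    """Second-order polynomial expansion of a raw (r, g, b) response, Equation (2)."""
    r, g, b = d
    return np.array([1.0, r, g, b, r * g, r * b, g * b, r ** 2, g ** 2, b ** 2])

def build_reconstruction_matrix(D_rgb: np.ndarray, R_train: np.ndarray,
                                alpha: float = 1e-4) -> np.ndarray:
    """Solve the spectral reconstruction matrix Q of Equations (3)-(7).

    D_rgb:   P x 3 raw responses of the characterization samples.
    R_train: P x K measured reflectances (one row per sample, K wavelengths).
    alpha:   small Tikhonov constant added to the singular values (assumed value).
    """
    D_train = np.stack([expand_poly(d) for d in D_rgb])      # P x 10, Eq. (3)
    U, s, Vt = np.linalg.svd(D_train, full_matrices=False)   # Eq. (4)
    D_rec = U @ np.diag(s + alpha) @ Vt                      # Eqs. (5)-(6)
    return R_train.T @ np.linalg.pinv(D_rec.T)               # K x 10, Eq. (7)

def reconstruct_reflectance(Q: np.ndarray, d_test: np.ndarray) -> np.ndarray:
    """Reconstruct the reflectance of a tested sample's raw response, Equation (8)."""
    return Q @ expand_poly(d_test)
```

In practice, α would be tuned on the characterization set, since it trades reconstruction accuracy against noise amplification.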
Using the reconstructed spectral reflectance, the corresponding color data of the fabric samples are calculated based on colorimetric theory. The tristimulus values of the fabric samples are calculated as indicated in Equations (9) and (10).
$X = k \int_{\lambda} \bar{x}(\lambda) E(\lambda) S(\lambda)\, d\lambda$
$Y = k \int_{\lambda} \bar{y}(\lambda) E(\lambda) S(\lambda)\, d\lambda$  (9)
$Z = k \int_{\lambda} \bar{z}(\lambda) E(\lambda) S(\lambda)\, d\lambda$

where

$k = 100 \big/ \int_{\lambda} \bar{y}(\lambda) S(\lambda)\, d\lambda$  (10)
where $\bar{x}(\lambda)$, $\bar{y}(\lambda)$, and $\bar{z}(\lambda)$ are the standard observer color matching functions, E(λ) is the spectral reflectance of the fabric sample, S(λ) is the relative spectral power distribution of the light source, λ is the wavelength, k is the normalization factor, and X, Y, and Z are the tristimulus values of the fabric sample.
Then, the CIELab color data of the fabric samples are calculated from the corresponding tristimulus values, as shown in Equations (11) and (12).
$L^{*} = 116 f(Y/Y_n) - 16$
$a^{*} = 500 \left[ f(X/X_n) - f(Y/Y_n) \right]$  (11)
$b^{*} = 200 \left[ f(Y/Y_n) - f(Z/Z_n) \right]$
where
$f(H/H_n) = (H/H_n)^{1/3}$ if $H/H_n > (24/116)^3$
$f(H/H_n) = (841/108)(H/H_n) + 16/116$ if $H/H_n \le (24/116)^3$  (12)
where L*, a*, and b* represent the lightness, red–green, and yellow–blue values of the fabric sample in the CIELab color space, respectively; X, Y, and Z are the tristimulus values of the fabric; and $X_n$, $Y_n$, and $Z_n$ are the tristimulus values of the reference white. In Equation (12), H and $H_n$ stand for any one of the tristimulus values of the fabric and of the reference white, respectively.
With the CIELab color data of the fabric samples, the next step is to calculate the color difference and the color attribute differences for the color fastness test. Using the CIELab color data of each fabric sample pair, the CIEDE2000 color difference and the corresponding ΔL*, Δa*, and Δb* differences are calculated. The CIEDE2000 color difference formula is shown in Equation (13).
$\Delta E_{00}^{*} = \left[ \left( \frac{\Delta L'}{k_L S_L} \right)^2 + \left( \frac{\Delta C'}{k_C S_C} \right)^2 + \left( \frac{\Delta H'}{k_H S_H} \right)^2 + R_T \frac{\Delta C'}{k_C S_C} \cdot \frac{\Delta H'}{k_H S_H} \right]^{1/2}$  (13)
where ΔL′, ΔC′, and ΔH′ are the lightness, chroma, and hue differences of the fabric sample pair in the CIELCh color space, which is converted automatically from the CIELab color space when calculating the color difference. $k_L$, $k_C$, and $k_H$ are the weights for lightness, chroma, and hue; for fabric samples, $k_L$ is usually set to 1.5, and $k_C$ and $k_H$ are set to 1. $S_L$, $S_C$, and $S_H$ are the weighting functions for lightness, chroma, and hue, respectively, and $R_T$ is the rotation term of the color difference calculation. The differences ΔL*, Δa*, and Δb* are calculated as shown in Equations (14)–(16).
$\Delta L^{*} = L_1^{*} - L_2^{*}$  (14)

$\Delta a^{*} = a_1^{*} - a_2^{*}$  (15)

$\Delta b^{*} = b_1^{*} - b_2^{*}$  (16)
where $(L_1^{*}, a_1^{*}, b_1^{*})$ and $(L_2^{*}, a_2^{*}, b_2^{*})$ are the CIELab color data of the reference sample and the color fastness test sample of the fabric pair, respectively.
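Equations (9)–(16) can be condensed into a short colorimetric helper. A sketch assuming the reflectance, color matching functions, and illuminant are sampled at the same N wavelengths (all names are ours); for the CIEDE2000 difference of Equation (13), an existing implementation such as colour.delta_E from the colour-science package can be used where available:

```python
import numpy as np

def reflectance_to_lab(refl, cmfs, spd, white_xyz):
    """Reflectance to CIELab under a given illuminant, Equations (9)-(12).

    refl:      reflectance sampled at N wavelengths.
    cmfs:      N x 3 color matching functions (x_bar, y_bar, z_bar columns).
    spd:       N relative spectral power values of the illuminant.
    white_xyz: (Xn, Yn, Zn) tristimulus values of the reference white.
    """
    k = 100.0 / np.sum(cmfs[:, 1] * spd)                    # Eq. (10)
    xyz = k * (cmfs * (refl * spd)[:, None]).sum(axis=0)    # Eq. (9), summation form

    def f(t):                                               # Eq. (12)
        return np.where(t > (24 / 116) ** 3,
                        np.cbrt(t), (841 / 108) * t + 16 / 116)

    fx, fy, fz = f(xyz / np.asarray(white_xyz, dtype=float))
    return np.array([116 * fy - 16,                         # Eq. (11): L*
                     500 * (fx - fy),                       # a*
                     200 * (fy - fz)])                      # b*

# Attribute differences of a paired sample, Equations (14)-(16):
# dL, da, db = lab_reference - lab_tested
```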

2.2. Color Fastness Prediction Methods

2.2.1. Existing Methods

  • The color difference conversion method.
For the color difference conversion method, the RGB values of the target sample images are first extracted as described in Equation (1). The RGB values are then converted to CIEXYZ tristimulus values according to the conversion between the working RGB color space and the CIEXYZ color space. However, since CIEXYZ is not a perceptually uniform color space, the values are further transformed to the device-independent and perceptually more uniform CIELab color space, which benefits the calculation of color differences in accordance with visual perception. From the CIELab color data, the color difference of the paired sample is calculated using the CIEDE2000 color difference equation. The color fastness grade of staining is then calculated according to the relevant provisions of the ISO 105-A11 standard [30], giving the final rating result [10,11]. The specific formulas are shown in Equations (17) and (18):
$\mathrm{GRS} = \dfrac{0.061\, \Delta E_{\mathrm{GRS}} + 2.474}{1 + e^{0.191\, \Delta E_{\mathrm{GRS}}}}$  (17)
where
$\Delta E_{\mathrm{GRS}} = \Delta E_{00} - 0.423 \sqrt{\Delta E_{00}^2 - \Delta L_{00}^2}$  (18)
where GRS stands for the calculated grade, $\Delta E_{00}$ is the color difference calculated using the CIEDE2000 formula, and $\Delta L_{00}$ is its lightness component.
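For illustration only, a direct transcription of Equations (17) and (18) as reconstructed above; the fraction structure of Equation (17) was recovered from a flattened equation, so ISO 105-A11 should be consulted for the authoritative form:

```python
import numpy as np

def grs_staining(dE00: float, dL00: float) -> float:
    """Staining grade from CIEDE2000, Equations (17)-(18) as reconstructed above.

    The fraction structure of Eq. (17) is our reading of a flattened equation;
    consult ISO 105-A11 for the authoritative form.
    """
    dE_grs = dE00 - 0.423 * np.sqrt(max(dE00 ** 2 - dL00 ** 2, 0.0))  # Eq. (18)
    return (0.061 * dE_grs + 2.474) / (1.0 + np.exp(0.191 * dE_grs))  # Eq. (17)
```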
  • The gray scale difference method.
The gray scale difference method is based on the premise that the color fastness grade is highly correlated with the grayscale difference of the standard grading gray card [12,13], under the further assumption that this grayscale difference is highly correlated with the color difference. Accordingly, one first captures an image of the standard grading gray card and calculates the grayscale difference of the paired gray patches at each color fastness grade. A polynomial is then fitted between the grayscale differences and the corresponding color fastness grades to construct the color fastness prediction model. With the constructed model, the color fastness grade of a testing paired sample is predicted by calculating its gray image difference and feeding it into the model. Note that the image of the paired testing sample must be captured under the same imaging conditions used when constructing the prediction model; otherwise, the prediction will be incorrect.
In addition to the gray scale difference method itself, we also use ΔL* in place of the grayscale difference to establish the fitting model between the visual rating results and the ΔL* of the paired staining samples. This tests whether a prediction model can be constructed directly from the ΔL* of paired staining samples to the visual rating results.

2.2.2. The Proposed Method

In this study, a BP neural network is used to construct the color fastness prediction model for fabric samples. The BP neural network is a multi-layer feedforward network trained with the backpropagation algorithm. It typically consists of three layers: an input layer, a hidden layer, and an output layer. The neurons of each layer are fully connected to those of the adjacent layers, with no connections among neurons of the same layer and no feedback connections between layers, forming a hierarchical feedforward neural network system [21,31].
The BP neural network is a useful tool for both classification and regression problems, and color fastness rating can be approached as either. If treated as a classification problem, with the color fastness grade divided into nine classes from 1 to 5 in steps of 0.5, the grade of a test sample is obtained directly, but this does not allow the precision of the prediction model to be evaluated accurately. We therefore treat color fastness rating as a regression problem: the prediction model is trained with the BP neural network on the prepared training samples and outputs a continuous predicted color fastness value between 0 and 5 for each testing paired sample. The final color fastness grade is then determined by rounding the predicted value to the nearest grade step.

2.3. Evaluation Metrics

In this study, the root mean square error (RMSE) is used to measure the deviation between the predicted color fastness grade and the visual grade of the fabric samples obtained from the experts. The RMSE is calculated as shown in Equation (19):
$\mathrm{RMSE} = \sqrt{\dfrac{\sum_{i=1}^{n} (y_i - y_{1,i})^2}{n}}$  (19)
where $y_i$ is the predicted color fastness value of the ith tested sample, $y_{1,i}$ is its actual (visually rated) value, and n is the number of tested samples. The smaller the RMSE, the better the consistency between the model predictions and the visual ratings. In addition, the absolute prediction error of each tested fabric is also calculated when the methods are compared statistically.
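A one-line helper makes the metric concrete; a sketch with hypothetical argument names:

```python
import numpy as np

def rmse(y_pred, y_true) -> float:
    """Root mean square error between predicted and visual grades, Equation (19)."""
    y_pred, y_true = np.asarray(y_pred, float), np.asarray(y_true, float)
    return float(np.sqrt(np.mean((y_pred - y_true) ** 2)))
```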

3. Experiment

To validate the proposed fabric color fastness rating method for rubbing tests, the following instruments and materials were employed: the Y(B) 571-III color fastness rubbing tester, pure cotton twill fabric of size 10 × 25 cm, white cotton rubbing cloth of size 5 × 5 cm, the standard gray card for assessing staining, the ColorChex N7 standard colorimetric light box, a Nikon D7200 digital camera, the X-rite ColorChecker classic 24 color chart, and the closed, uniformly illuminated ColorEye light box. Using these instruments and materials, the experiment assessed the color fastness of the textile fabric after repeated rubbing tests and verified the accuracy of the proposed rating method. The rubbing fastness experiment is described in Section 3.1, the visual rating experiment in Section 3.2, the construction of the prediction model based on the BP neural network in Section 3.3, and the testing of the existing methods in Section 3.4.

3.1. The Rubbing Color Fastness Experiment

The color fastness to rubbing is an important test item for fabric: it is the degree of color retention of a fabric subjected to rubbing during use, usually reflected by the staining grade. According to the ISO 105-X12 standard [32], dry and wet rubbing tests should be performed simultaneously [22,23]. The testing parameters of the fabric samples are shown in Table 1. During testing, the fabric samples are fixed to the rubbing platform of the Y(B) 571-III color fastness rubbing tester, with a dry rubbing cloth on one rubbing head and a wet rubbing cloth with a water content of 95–100% on the other. By adjusting the number of rubbing cycles between 150 and 200 under a pressure of 9 N, we obtained experimental samples whose color fastness grades cover all nine steps from 1 to 5 in increments of 0.5. Finally, a database including the rubbed samples and their corresponding color-stained rubbing cloths was constructed; some of the samples are shown in Figure 3.

3.2. The Visual Rating Experiment

According to the ISO 105-A03 standard [33], the gray card for staining is used to visually evaluate the rubbed cloth samples. The visual rating experiment is carried out in a dark room using the professional ColorChex N7 light box (see Figure 4). Under a D65 standard light source, the trained professional sits in front of the light box to rate the color fastness grade of each sample, with a vertical viewing distance from the eyes to the viewing surface of about 30 cm. During rating, the unstained lining fabric and the stained lining fabric are placed side by side in the same plane, with the gray sample card nearby in the same plane. The color difference between the unstained and stained lining fabrics is visually assessed against the difference levels of the standard grading gray card, and in this way the reference color fastness grade was acquired for all modeling and testing fabric samples.

3.3. The BP Neural Network Modeling

3.3.1. Data Preprocessing

We produced a total of 70 groups of rubbing samples; a sample diagram is shown in Figure 3. Using the colorimetric theory introduced previously, we calculated the relevant color data of the sample images. We then performed simple preprocessing on the data, namely normalization of the CIEDE2000, ΔL*, Δa*, and Δb* values, adopting min–max normalization as shown in Equation (20):
$x' = (x - \min) / (\max - \min)$  (20)
where x′ is the normalized value, x is the original value, and min and max are the minimum and maximum of all the input values. The normalized CIEDE2000, ΔL*, Δa*, and Δb* values are thus scaled to the range [0, 1].
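A column-wise sketch of Equation (20), assuming the four input features are stacked as columns of a matrix:

```python
import numpy as np

def min_max_normalize(X: np.ndarray) -> np.ndarray:
    """Scale each input column (dE2000, dL*, da*, db*) to [0, 1], Equation (20)."""
    return (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0))
```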

3.3.2. Model Building and Training

In this study, the CIEDE2000 color difference value and the ΔL*, Δa*, and Δb* values of the fabric samples are used as inputs, and the corresponding visual rating grades are used as the output to train the BP neural network. According to the number of input and output parameters, the input layer has 4 nodes and the output layer has 1 node; the number of hidden layer nodes is set to 5. The structure of the training model is shown in Figure 5.
During training, we repeatedly adjusted the number of iterations to balance the model's generalization ability and training effect, finally setting the maximum number of iterations to 1000. The learning rate is set to 0.001, and the Sigmoid function is selected as the activation function of the hidden layer neurons, as shown in Equation (21):
$S(x) = \dfrac{1}{1 + e^{-x}}$  (21)
where S(·) is the Sigmoid activation function, x is the independent variable, and e is the base of the natural logarithm. The weights between the nodes of the BP neural network are continuously adjusted based on the root mean square error of the modeling samples until the overall average error reaches a stable convergence state, at which point the color fastness prediction model is constructed.
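The described 4–5–1 network maps onto an off-the-shelf regressor. A sketch using scikit-learn's MLPRegressor with the stated hyperparameters; the solver, random seed, and placeholder data are our assumptions:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.random((70, 4))            # normalized [dE2000, dL*, da*, db*] (placeholder data)
y = rng.uniform(1, 5, 70)          # visual grades (placeholder data)

model = MLPRegressor(hidden_layer_sizes=(5,),   # one hidden layer with 5 nodes
                     activation='logistic',     # Sigmoid activation, Equation (21)
                     learning_rate_init=0.001,
                     max_iter=1000,
                     random_state=0)
model.fit(X, y)
pred = model.predict(X)
grades = np.clip(np.round(pred * 2) / 2, 1, 5)  # snap to the nearest 0.5 grade step
```

The final rounding line implements the snap to the nearest grade step described in Section 2.2.2.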

3.4. Testing of Existing Methods

3.4.1. Testing of Color Difference Conversion Method

The color data of all the fabric samples were extracted using the method described in Section 2.1, and the color fastness predictions were obtained using the color difference conversion method described in Section 2.2.1. To validate the performance of this method, a paired-sample t-test was conducted between the prediction results and the visual results. First, 25 samples were randomly selected from all fabric samples and confirmed to satisfy the normality condition by a normal distribution test. The paired-sample t-test was then performed to determine whether the prediction results and visual results differed significantly; the result is shown in Figure 6.
In Figure 6, the purple dots on the left represent the visually rated color fastness, the blue dots on the right represent the color fastness predicted by the color difference conversion method, and each line connects the two values of the same sample. The '****' above the line indicates that the p-value of the paired-sample t-test is less than 0.0001, meaning there is a significant difference between the predictions of the color difference conversion method and the visual rating results. A further step is therefore needed to adjust the predictions of the color difference conversion method toward the visual ratings.
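The statistical check can be reproduced with scipy; a sketch with hypothetical argument names, where the normality test is applied to the paired differences:

```python
import numpy as np
from scipy import stats

def compare_with_visual(predicted, visual):
    """Normality check of the paired differences and paired-sample t-test."""
    diff = np.asarray(predicted, float) - np.asarray(visual, float)
    _, p_normal = stats.shapiro(diff)             # normal distribution test
    t_stat, p_value = stats.ttest_rel(predicted, visual)
    return p_normal, t_stat, p_value
```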
The color difference conversion method is based on the CIEDE2000 color difference calculated from the L*, a*, and b* attributes in the CIELab color space. To map its predictions to the visual rating results, ΔL*, Δa*, and Δb* were selected to modify the predicted results. The modification is based on a linear regression model in which ΔL*, Δa*, and Δb* are the independent variables and the grade difference between the predicted and visually rated color fastness is the dependent variable. The results of the linear regression analysis are presented in Table 2.
As Table 2 shows, the p-value of the significance analysis of ΔL* is less than 0.05, indicating a significant effect on the grade difference between the predicted and visually rated results. The p-values of Δa* and Δb* are greater than 0.05 and their regression coefficients are not significant, indicating that Δa* and Δb* have little influence on the grade difference. Meanwhile, the VIFs are all less than 5, indicating no multicollinearity among the independent variables. According to the regression analysis, the linear regression model is constructed as shown in Equation (22):
$Y = -0.121 - 0.034\, X_1$  (22)
where $X_1$ represents ΔL* and Y represents the grade difference between the predicted and visually rated color fastness. With this linear regression model, the color fastness predicted by the color difference conversion method can be easily adjusted to the visual rating grade. Five-fold cross-validation is also performed for the color difference conversion method when it is compared with the proposed method in Section 4.
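A sketch of the modification step with scikit-learn, assuming grade_diff is defined as the predicted grade minus the visual grade (the function names are ours):

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def fit_modification_model(dL: np.ndarray, grade_diff: np.ndarray) -> LinearRegression:
    """Fit Equation (22): grade difference (predicted minus visual) against dL*.

    Table 2 shows only dL* is significant, so a single predictor is kept.
    """
    return LinearRegression().fit(dL.reshape(-1, 1), grade_diff)

def adjusted_grade(model: LinearRegression, predicted: np.ndarray, dL: np.ndarray) -> np.ndarray:
    """Remove the modeled bias from the color-difference-conversion prediction."""
    return predicted - model.predict(dL.reshape(-1, 1))
```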

3.4.2. Testing of Gray Scale Difference Method

For testing of the gray scale difference method, the standard grading gray card images were first collected and converted into grayscale images. The grayscale difference between the two patches at each grade was extracted and fitted to the corresponding color fastness grade by curve fitting. In our tests, a third-order polynomial gave the best fit, as shown in Figure 7; the red dots in the figure represent the grayscale difference between the two patches at each grade of the gray card.
The analysis of the third-order polynomial fitting is presented in Table 3. Based on the fitted parameters, a third-order polynomial equation was obtained, where x is the grayscale difference of the fabric sample to be evaluated, p1, p2, p3, and p4 are the coefficients, and D is the predicted color fastness grade. The coefficient of determination R² of the fit is 0.99, indicating a good relationship between the grayscale difference and the color fastness grade. The next step is to convert the testing fabric sample into a grayscale image and extract its grayscale difference, after which the color fastness grade of the testing sample is predicted with the established third-order polynomial model.
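The fitting step itself is a standard polynomial regression; a sketch with hypothetical gray-card values (the real differences come from the captured card images):

```python
import numpy as np

# Hypothetical grayscale differences of the paired gray-card patches at the nine
# grades, and the grades themselves.
gray_diff = np.array([88.0, 67.0, 50.0, 36.0, 25.0, 16.0, 9.0, 4.0, 0.0])
grades = np.array([1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0, 4.5, 5.0])

p = np.polyfit(gray_diff, grades, deg=3)   # p1..p4 of the equation in Table 3
predict_grade = np.poly1d(p)               # D = p1*x**3 + p2*x**2 + p3*x + p4
print(predict_grade(30.0))                 # grade for a test sample's gray difference
```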
Furthermore, to test whether a prediction model can be constructed directly from the ΔL* of paired staining samples to the visual rating results, we also replaced the grayscale difference with the ΔL* of the paired staining samples and fitted it to the visual rating results by polynomial regression.

4. Results and Discussion

In this study, a total of 70 groups of samples were produced to study the staining fastness grade under rubbing. The color data of the sample images were extracted using spectral reconstruction, and a prediction model was then built with the BP neural network. The RGB and CIEXYZ color values of the 70 groups of samples are shown in Figure 8.
Due to the limited number of samples, we used five-fold cross-validation to obtain predicted values for all 70 groups of samples. Five-fold cross-validation divides the dataset into five subsets; four are used to train the model, while the remaining one evaluates its performance. After training, the average root mean square error over the five folds is 0.29. The error scatterplot of the 70 groups of samples is shown in Figure 9a, and, to show the error distribution more clearly, an error histogram is drawn in Figure 9b, where the bar width represents the error interval and the bar height represents the number of samples within that interval.
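The validation loop can be sketched as follows, reusing the network configuration from Section 3.3.2; the shuffling seed is our assumption:

```python
import numpy as np
from sklearn.model_selection import KFold
from sklearn.neural_network import MLPRegressor

def five_fold_errors(X: np.ndarray, y: np.ndarray) -> np.ndarray:
    """Per-sample absolute prediction errors under five-fold cross-validation."""
    errors = np.empty(len(y), dtype=float)
    for train_idx, test_idx in KFold(n_splits=5, shuffle=True, random_state=0).split(X):
        model = MLPRegressor(hidden_layer_sizes=(5,), activation='logistic',
                             learning_rate_init=0.001, max_iter=1000, random_state=0)
        model.fit(X[train_idx], y[train_idx])
        errors[test_idx] = np.abs(model.predict(X[test_idx]) - y[test_idx])
    return errors
```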
Figure 9a shows that most sample prediction errors lie below 0.5, with only a few samples exceeding 0.5. Figure 9b further shows that more than 30 groups of samples have prediction errors between 0 and 0.2, twenty-five groups lie in the range 0.2 to 0.4, eleven between 0.4 and 0.6, and only two between 0.6 and 0.8.
We also calculated the prediction errors of the 70 groups of samples for the global version of the color difference conversion method and for the gray scale difference method itself, as introduced in Section 2.2.1; the histograms of their prediction errors are plotted in Figure 10.
Figure 10a,b show the error histograms of the color difference conversion method and the gray scale difference method, respectively. For the color difference conversion method, although more than 35 groups of samples have a prediction error below 0.2, five samples still exceed 0.6, two of them even exceeding 0.8; its error distribution is thus more dispersed than that of the proposed method. The gray scale method shows a similar trend but performs much worse, with an average RMSE of 0.42, much larger than the 0.29 of the proposed method and the 0.34 of the color difference conversion method.
For a fairer comparison with the proposed method, five-fold cross-validation is also performed for the color difference conversion method and the optimized gray scale method, in which the grayscale difference is replaced with the ΔL* of the paired staining samples. For the color difference conversion method, the GRS is first calculated using Equations (17) and (18) based on ISO 105-A11 [30], and the calculated GRS is then mapped to the visual ratings by regression. In each fold of the five-fold cross-validation, the regression model is constructed from 56 training samples and the remaining 14 samples are used for testing. The same validation strategy is used for the optimized gray scale difference method, fitting the third-order polynomial on 56 samples and testing on the remaining 14. After the full cross-validation, the prediction errors of all samples are calculated; the error histograms of the two methods are shown in Figure 11.
As can be seen in Figure 11a, for the color difference conversion method after five-fold cross-validation, 34 samples have error values below 0.2, 19 between 0.2 and 0.4, and a few exceed 0.4, with one sample showing a relatively large error between 1.4 and 1.6. For the optimized gray scale difference method in Figure 11b, 29 groups of samples have prediction errors below 0.2, 20 groups lie between 0.2 and 0.4, and the remaining 21 groups fall between 0.4 and 0.8.
To better compare the five-fold cross-validation results of the three methods, boxplots of their prediction errors are provided in Figure 12. Boxplots describe the distribution of data robustly and are not affected by outliers. The upper and lower boundaries of the box represent the upper and lower quartiles of the data, and the line inside the box represents the median. The whiskers represent the maximum and minimum values excluding outliers, and the points outside the whiskers (filled circles in Figure 12) are outliers, conventionally defined as values below the lower quartile minus 1.5 times the interquartile range or above the upper quartile plus 1.5 times the interquartile range.
As Figure 12 shows, the prediction error distributions of the BP neural network and the color difference conversion method are relatively concentrated, while that of the optimized gray scale difference method is relatively dispersed. The average prediction errors of the BP model and the color difference conversion method are lower, while that of the optimized gray scale difference method is higher. Comparing the BP model with the color difference conversion method, the latter has more outlier points, that is, points with larger errors.
To further compare the five methods tested in this study, Table 4 summarizes the statistics of each method's prediction errors: the average error, maximum error, minimum error, 90th-percentile error (the error of the 63rd sample when the 70 sample errors are sorted from smallest to largest), and standard deviation. As the table shows, the average error, maximum error, and standard deviation of the BP model are all the smallest. The gray scale difference method has the largest average error (0.34), and the color difference conversion method with five-fold cross-validation has the largest maximum error (1.43). The minimum errors of all five methods are zero or very close to zero. For the 90th-percentile error, the BP method and the optimized gray scale difference method are almost equal and clearly smaller than the other three methods. However, taken together with the boxplots in Figure 12 and the error histograms, the BP model still outperforms the optimized gray scale difference method.
Through this fair comparative analysis of the different methods, it is concluded that the color fastness prediction method based on spectral reconstruction technology and the BP neural network performs better than the existing methods and their optimized versions, verifying that the predicted color fastness grades are consistent with the professional visual ratings. It also indicates that the proposed rating method can be a good choice for the future digital grading of the color fastness to rubbing of fabrics.

5. Conclusions

In this study, color fastness grading was investigated, and a new grading method based on spectral reconstruction technology and BP neural network modeling was proposed for digitally grading the color fastness to rubbing of fabrics. The experimental results indicate the effectiveness and superiority of the proposed method. The method eliminates the subjective differences of the traditional visual rating method, and, compared with the spectrophotometer method, the imaging-based color fastness evaluation is more efficient and flexible. At the same time, imaging-based color measurement is easily combined with image processing techniques to implement other applications. The research results of this article therefore further enhance the feasibility of digital-imaging-based color fastness grading in practical applications.
However, this study still has some shortcomings. First, color fastness to rubbing encompasses assessing both the change in color and the staining; this study only addressed the staining color fastness to rubbing, not the color change, so it has certain limitations. Second, traditional color fastness includes color fastness to rubbing, light, washing, and more, while only rubbing was discussed here; a more comprehensive study is left for our next work. Finally, the number of samples used in this study is limited, which may affect the experimental results to a certain extent, so in future work we will increase the number of experimental samples to improve the experiment.

Author Contributions

Methodology, validation, writing—review and editing, J.L. and H.L.; methodology, data collection and analysis, writing—original draft preparation, J.Z. and G.C.; investigation, resources, L.L.; funding acquisition, writing—review, X.H.; data curation, K.X. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by National Natural Science Foundation of China (No. 62305255) and Hubei Provincial Natural Science Foundation General Project (No. 2022CFB537) and Hubei Provincial Department of Education Science and Technology Research Program Youth Talent (No. Q20221706).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Yu, X.; Wang, H. Support vector machine classification model for color fastness to ironing of vat dyes. Text. Res. J. 2021, 91, 1889–1899. [Google Scholar] [CrossRef]
  2. Popa, S.; Radulescu-Grad, M.E.; Perdivara, A.; Mosoarca, G. Aspects regarding colour fastness and adsorption studies of a new azo-stilbene dye for acrylic resins. Sci. Rep. 2021, 11, 5889. [Google Scholar] [CrossRef]
  3. Liu, J.; Yuan, Y.; Zhang, X.; Lixia, S. Development of intelligent grade system for textiles color fastness. Cotton Text. Technol. 2019, 47, 41–47. [Google Scholar]
  4. Huang, S.; Tu, X.; Zhou, S. Application of Kappa Coefficient Consistency Test in Visual Evaluation of Color Fastness; Knitting Industry: Nantwich, UK, 2022; p. 5. [Google Scholar]
  5. González-Morales, D.; Valencia, A.; Díaz-Nuñez, A.; Fuentes-Estrada, M.; López-Santos, O.; García-Beltrán, O. Development of a low-cost UV-Vis spectrophotometer and its application for the detection of mercuric ions assisted by chemosensors. Sensors 2020, 20, 906. [Google Scholar] [CrossRef] [PubMed]
  6. Deidda, R.; Sacre, P.Y.; Clavaud, M.; Coïc, L.; Avohou, H.; Hubert, P.; Ziemons, E. Vibrational spectroscopy in analysis of pharmaceuticals: Critical review of innovative portable and handheld NIR and Raman spectrophotometers. TrAC Trends Anal. Chem. 2019, 114, 251–259. [Google Scholar] [CrossRef]
  7. An, Y.; Xue, W.; Ding, Y.; Zhang, S. Evaluation of textile color rubbing fastness based on image processing. J. Text. Res. 2023, 43, 131–137. [Google Scholar]
  8. Salueña, B.H.; Gamasa, C.S.; Rubial, J.M.D.; Odriozola, C.A. CIELAB color paths during meat shelf life. Meat Sci. 2019, 157, 107889. [Google Scholar] [CrossRef] [PubMed]
  9. Lin, C.J.; Prasetyo, Y.T.; Siswanto, N.D.; Jiang, B.C. Optimization of color design for military camouflage in CIELAB color space. Color Res. Appl. 2019, 44, 367–380. [Google Scholar] [CrossRef]
  10. Cui, G.; Luo, M.; Rhodes, P.; Rigg, B.; Dakin, J. Grading textile fastness. Part 2: Development of a new staining fastness formula. Color. Technol. 2003, 119, 219–224. [Google Scholar] [CrossRef]
  11. Cui, G.; Luo, M.; Rigg, B.; Butterworth, M.; Dakin, J. Grading textile fastness. Part 3: Development of a new fastness formula for assessing change in colour. Color. Technol. 2004, 120, 226–230. [Google Scholar] [CrossRef]
  12. Zheng, C. Discussion on Evaluation of Color Fastness Grade of Fabrics by Image Method. Light Text. Ind. Technol. 2010, 39, 3. [Google Scholar]
  13. Zhang, Q.; Liu, J.; Gao, W. Grade evaluation of color fastness to laundering based on image analysis. J. Text. Res. 2013, 34, 100–105. [Google Scholar]
  14. Liang, J.; Xin, L.; Zuo, Z.; Zhou, J.; Liu, A.; Luo, H.; Hu, X. Research on the deep learning-based exposure invariant spectral reconstruction method. Front. Neurosci. 2022, 16, 1031546. [Google Scholar] [CrossRef]
  15. Zhao, Y.; Po, L.M.; Yan, Q.; Liu, W.; Lin, T. Hierarchical Regression Network for Spectral Reconstruction from RGB Images. In Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, Seattle, WA, USA, 14–19 June 2020; pp. 1695–1704. [Google Scholar]
  16. Liang, J.; Wan, X. Optimized method for spectral reflectance reconstruction from camera responses. Opt. Express 2017, 25, 28273–28287. [Google Scholar] [CrossRef]
  17. Hong, G.; Luo, M.R.; Rhodes, P.A. A study of digital camera colorimetric characterization based on polynomial modeling. Color Res. Appl. 2001, 26, 76–84. [Google Scholar] [CrossRef]
  18. Jianxin, Z.; Kangping, Z.; Junkai, W.; Xudong, H. Color segmentation and extraction of yarn-dyed fabric based on a hyperspectral imaging system. Text. Res. J. 2021, 91, 729–742. [Google Scholar] [CrossRef]
  19. Qiu, K.; Chen, W.; Zhou, H. Research and development of textile color measurement based on imaging technologies. J. Text. Res. 2020, 41, 73–80. [Google Scholar]
  20. Zhang, J.; Wu, J.; Hu, X.; Zhang, X. Multi-color measurement of printed fabric using the hyperspectral imaging system. Text. Res. J. 2020, 90, 1024–1037. [Google Scholar] [CrossRef]
  21. Li, Y.; Liu, P.; Zhou, J.; Ren, Y.; Jin, J. Center extraction of structured light stripe based on back propagation neural network. Acta Opt. Sin. 2019, 39, 1212005. [Google Scholar]
  22. Luo, Y.; Pei, L.; Zhang, H.; Zhong, Q.; Wang, J. Improvement of the rubbing fastness of cotton fiber in indigo/silicon non-aqueous dyeing systems. Polymers 2019, 11, 1854. [Google Scholar] [CrossRef]
  23. Lei, Z. Study on Test of Color Fastness to Rubbing of Textiles. Iop Conf. Ser. Mater. Sci. Eng. 2020, 793, 012017. [Google Scholar] [CrossRef]
  24. Kert, M.; Gorjanc, M. The study of colour fastness of commercial microencapsulated photoresponsive dye applied on cotton, cotton/polyester and polyester fabric using a pad-dry-cure process. Color. Technol. 2017, 133, 491–497. [Google Scholar] [CrossRef]
  25. Sezgin Bozok, S.; Ogulata, R.T. Effect of silica based sols on the optical properties and colour fastness of synthetic indigo dyed denim fabrics. Color. Technol. 2021, 137, 209–216. [Google Scholar] [CrossRef]
  26. Kumar, R.; Ramratan, K.A.; Uttam, D. To study natural herbal dyes on cotton fabric to improving the colour fastness and absorbency performance. J. Text. Eng. Fash. Technol. 2021, 7, 51–56. [Google Scholar]
  27. Liang, J.; Xiao, K.; Pointer, M.R.; Wan, X.; Li, C. Spectra estimation from raw camera responses based on adaptive local-weighted linear regression. Opt. Express 2019, 27, 5165–5180. [Google Scholar] [CrossRef]
  28. Liu, Y.; Li, J.; Wang, X.; Li, X.; Song, Y.; Li, R. A study of spectral reflectance reconstruction using the weighted fitting algorithm based on the Sino Colour Book. Color. Technol. 2023, in press.
  29. Connah, D.R.; Hardeberg, J.Y. Spectral recovery using polynomial models. In Proceedings of the Color Imaging X: Processing, Hardcopy, and Applications, SPIE, San Jose, CA, USA, 17 January 2005; Volume 5667, pp. 65–75. [Google Scholar]
  30. ISO 105-A11:2012; Textiles—Tests for Colour Fastness—Part A11: Determination of Colour Fastness Grades by Digital Imaging Techniques. ISO: Geneva, Switzerland, 2012.
  31. Song, S.; Xiong, X.; Wu, X.; Xue, Z. Modeling the SOFC by BP neural network algorithm. Int. J. Hydrogen Energy 2021, 46, 20065–20077. [Google Scholar] [CrossRef]
  32. ISO 105-X12:2016; Textiles—Tests for Colour Fastness—Part X12: Colour Fastness to Rubbing. ISO: Geneva, Switzerland, 2016.
  33. ISO 105-A03:1996; Textiles—Tests for Colour Fastness—Part A03: Grey Scale for Assessing Staining. ISO: Geneva, Switzerland, 1996.
Figure 1. The overall flowchart of the proposed color fastness digital grading method.
Figure 2. (a) Left up: the geometric diagram of the uniformly illuminated light box; (b) right up: the real rendering effect of the light box; (c) left down: the real product of the light box; and (d) right down: the inner illumination uniformity over the imaging area of the platform.
Figure 3. Digital images of the rubbed samples in the database; for each sample, the rubbed samples and their corresponding color-stained rubbing cloth are presented.
Figure 4. The scene and geometric diagram of the visual rating experiment settings.
Figure 5. BP neural network structure diagram.
Figure 6. Results of the paired-sample t-test between the prediction results of the color difference conversion method and the visual rating results using 25 randomly selected fabric samples.
Figure 7. The third-order polynomial curve fitting the relationship between the grayscale difference of the standard grading gray card and the corresponding color fastness grade.
Figure 8. The color distribution of the 70 samples in RGB space (left) and CIEXYZ color space (right).
Figure 9. (a) The predicted error distribution of the proposed method tested with five-fold cross-validation; (b) the predicted error histogram of the proposed method.
Figure 10. (a) The predicted error histogram of the color difference conversion method; (b) the predicted error histogram of the gray scale difference method.
Figure 11. (a) The predicted error histogram of the color difference conversion method tested with five-fold cross-validation; (b) the predicted error histogram of the optimized gray scale method tested with five-fold cross-validation.
Figure 12. Boxplot of the predicted errors of the proposed BP model, the color difference conversion method, and the optimized gray scale difference method tested with five-fold cross-validation.
Table 1. Parameters of the tested fabric samples.

| Parameter | Value |
|---|---|
| Texture | 100% cotton twill |
| Size | 10 × 25 cm |
| Yarn count | 40 counts |
| Density | 133 × 72 |
| Color | pink, purple, yellow, blue, orange, green |
Table 2. The results of the linear regression analysis between the independent and dependent variables.

| Model | B (Unstandardized) | Standard Error | β (Standardized) | t | Significance (p-Value) | Tolerance | VIF |
|---|---|---|---|---|---|---|---|
| Constant | −0.121 | 0.044 | | −2.777 | 0.007 | | |
| ΔL* | −0.034 | 0.009 | −0.469 | −3.779 | 0.000 | 0.637 | 1.57 |
| Δa* | −0.021 | 0.017 | −0.209 | −1.234 | 0.222 | 0.343 | 2.92 |
| Δb* | −0.014 | 0.014 | −0.171 | −0.985 | 0.328 | 0.325 | 3.075 |
Table 3. The analysis of the third-order polynomial fitting results.

| Fit of Gray Scale Difference | Fitting Equation | Correlation Coefficient |
|---|---|---|
| third-order polynomial | D = p1·x³ + p2·x² + p3·x + p4, with p1 = 1.72e−05, p2 = 0.0029, p3 = 0.17, p4 = 4.97 | R² = 0.99 |
Table 4. Statistics of the predicted errors of all the methods tested in this study.

| Statistic | BP Model | Color Difference Conversion | Gray Scale Difference | Color Difference Conversion (Five-Fold) | Optimized Gray Scale Difference |
|---|---|---|---|---|---|
| Ave. | **0.24** | 0.26 | 0.34 | 0.29 | 0.28 |
| Max. | **0.72** | 1.17 | 0.95 | 1.43 | 0.75 |
| Min. | 0.01 | **0.00** | 0.01 | **0.00** | 0.01 |
| 90% | 0.49 | 0.54 | 0.71 | 0.63 | **0.48** |
| Std. | **0.17** | 0.22 | 0.26 | 0.26 | 0.18 |

Note: The bold values represent the best result for each statistical metric.