The calculation of color difference is introduced in Equation (17), where $\Delta L^*$ represents the difference in lightness, $\Delta a^*$ represents the difference along the red–green axis, and $\Delta b^*$ represents the difference along the yellow–blue axis; the color difference therefore measures colorimetric accuracy. For Equations (18) and (19), $\hat{R}$ represents the recovered spectral reflectance and $R$ represents the original spectral reflectance; in this work, $n = 31$ (400–700 nm sampled at 10 nm intervals). The root mean square error (RMSE) measures the distance between the original and recovered spectral reflectance, and the goodness-of-fit coefficient (GFC) measures their similarity.
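Under the definitions above, the three metrics can be sketched in a few lines of Python. This is a hypothetical helper, not the authors' code, and the CIE DE76 formula is shown rather than DE2000 for brevity:

```python
import numpy as np

def delta_e76(lab1, lab2):
    """CIE DE76 color difference: Euclidean distance in CIELAB."""
    return float(np.sqrt(np.sum((np.asarray(lab1) - np.asarray(lab2)) ** 2)))

def rmse(r_rec, r_orig):
    """Root mean square error between recovered and original reflectance."""
    r_rec, r_orig = np.asarray(r_rec), np.asarray(r_orig)
    return float(np.sqrt(np.mean((r_rec - r_orig) ** 2)))

def gfc(r_rec, r_orig):
    """Goodness-of-fit coefficient: cosine similarity of the two spectra."""
    r_rec, r_orig = np.asarray(r_rec), np.asarray(r_orig)
    return float(np.abs(r_rec @ r_orig)
                 / (np.linalg.norm(r_rec) * np.linalg.norm(r_orig)))
```

Note that the GFC is invariant to a global scaling of the recovered spectrum, so it captures spectral shape similarity rather than absolute error.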
3.1. Simulation Experiment
The 1269 Munsell Matt chips [27], the 140-patch ColorChecker SG [35] and the 354-sample Vrhel spectral dataset [36] are used in the simulation experiment. Firstly, the Munsell Matt chips are used as the training samples. The Munsell Matt set contains 1269 color chips, is widely used in spectral recovery, and includes corresponding color chips for every hue, which makes it a convincing training sample. However, using only one type of color chip would limit the universality and validity of the experiment, so the ColorChecker SG and Vrhel datasets are used together with the Munsell Matt chips to verify the proposed method.
The simulated environment is described in this section. The spectral sensitivity function of the Nokia N900 (Nokia Corporation, Espoo, Finland) is selected as the camera observer, and CIE D65 is selected as the illuminant, as shown in Figure 2.
All the spectral reflectance data used in the experiment are presented in
Figure 3.
Figure 3 shows that our experiment involves three kinds of color chips.
Figure 3a shows the Munsell Matt chips with 1269 color chips.
Figure 3b shows the 140 ColorChecker SG.
Figure 3c shows the 354 Vrhel spectral dataset. The spectral reflectance ranges from 400 to 700 nm at a 10 nm interval.
After analyzing the spectral information, the response values are analyzed as well. Camera response values depend strongly on the acquisition equipment and the imaging environment, and the resulting camera space is not perceptually uniform. To observe and describe color more conveniently, the perceptually uniform CIELAB space (CIE L*a*b*, a color model developed by the CIE, the International Commission on Illumination) is selected instead. In Figure 4a, which describes the LAB information of the Munsell Matt chips, it can be seen that the data are distributed uniformly in space. Figure 4b,c show the ColorChecker SG and the Vrhel spectral dataset, whose sample counts are obviously smaller. The LAB values are calculated under the CIE D65 illuminant.
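As a brief illustration of how such response values and LAB coordinates are typically obtained, the following sketch assumes the usual linear imaging model $c = S^{\mathsf{T}}\,\mathrm{diag}(l)\,r$ and the standard XYZ-to-CIELAB transform; the array shapes and function names are our own assumptions, not the paper's code:

```python
import numpy as np

def camera_response(reflectance, sensitivity, illuminant):
    """Simulate raw camera responses: c = S^T diag(l) r.
    reflectance: (n_bands,), sensitivity: (n_bands, 3), illuminant: (n_bands,)."""
    return sensitivity.T @ (illuminant * reflectance)

def xyz_to_lab(xyz, white):
    """Convert CIE XYZ to CIELAB relative to the given white point."""
    def f(t):
        eps = (6 / 29) ** 3  # threshold of the cube-root segment
        return np.where(t > eps, np.cbrt(t), t / (3 * (6 / 29) ** 2) + 4 / 29)
    fx, fy, fz = f(np.asarray(xyz, dtype=float) / np.asarray(white, dtype=float))
    return np.array([116 * fy - 16, 500 * (fx - fy), 200 * (fy - fz)])
```

By construction, the white point itself maps to L* = 100, a* = b* = 0.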
After introducing the spectral information and response values of the experiment, the proposed method is tested in the following sections. Distance is used as a parameter in the experiment: as the distance increases, the number of merging iterations between subspaces increases and the number of center points decreases. However, the number of center points alone does not determine whether the recovery accuracy is good or bad, because the underlying relationship is complex. Therefore, the relationship between the parameter and the accuracy is explored experimentally. In Figure 5, the Munsell Matt chips are used as the training samples. Both the self-recovery and the recovery of the ColorChecker SG and Vrhel spectral datasets achieve their best results at a distance of 40, and all three test sets follow the same trend. So, the distance is set to 40.
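Such a parameter sweep can be expressed generically; in this hypothetical sketch, `evaluate` is a placeholder for running one recovery at a given distance and returning the mean color difference:

```python
import numpy as np

def select_distance(candidates, evaluate):
    """Sweep candidate distance values; evaluate(d) returns the mean
    color difference obtained with distance d. Returns the best distance
    and the full error curve for plotting (as in Figure 5)."""
    errors = [evaluate(d) for d in candidates]
    return candidates[int(np.argmin(errors))], errors
```

For instance, `select_distance([10, 20, 30, 40, 50, 60], evaluate)` would return 40 if the error curve bottoms out there.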
As we can see from Table 1, the proposed method, PI, PCA, Wang's [37] and Zhang's [32] methods recover spectral reflectance under the same conditions. The evaluation of recovery accuracy can be divided into two parts: colorimetric accuracy and spectral accuracy. Firstly, colorimetric accuracy is assessed through the color difference. Table 1 shows that the proposed method yields the smallest average color difference both in self-recovery and when the other datasets are used as testing samples; the smallest average color difference, 0.3063, is obtained in self-recovery. Secondly, spectral accuracy is assessed through the RMSE and GFC, which lead to the same conclusion. The best results are in bold.
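Of the baselines in Table 1, the PI (pseudo-inverse) method is the simplest to illustrate: it learns a linear map from training responses to training reflectances via the Moore–Penrose pseudo-inverse. The following is a minimal sketch on synthetic data, not the exact implementation compared in the table:

```python
import numpy as np

def train_pi(train_responses, train_reflectances):
    """Learn M such that reflectance ~= M @ response.
    train_responses: (k, n_train) camera responses (k channels),
    train_reflectances: (n_bands, n_train) spectra."""
    return train_reflectances @ np.linalg.pinv(train_responses)

def recover_pi(M, responses):
    """Recover spectra column-wise from responses of shape (k, n_test)."""
    return M @ responses
```

If the true response-to-reflectance relationship were exactly linear, this map would recover the training spectra perfectly; in practice it serves as a least-squares baseline.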
To visualize the recovery accuracy, a boxplot is used in
Figure 6. Munsell Matt chips are used as training samples and ColorChecker SG is used as the testing sample.
Figure 6a represents CIE DE76 color difference and
Figure 6b represents CIE DE2000 color difference.
Figure 6c represents RMSE and
Figure 6d represents GFC. There are six important elements in a boxplot: the upper edge, lower edge, upper quartile, lower quartile, median, and outliers. The upper and lower solid black lines represent the upper and lower edge values; the top and bottom of the blue box indicate the upper and lower quartiles; the red line inside the box indicates the median; and red circles represent outliers. The more compact the box, the better the precision. In Figure 6a–c, the red dots of the proposed method lie relatively close together, showing that its recovery accuracy is more stable than that of the other methods. It is therefore not difficult to conclude that the proposed method performs better.
In
Figure 7, Munsell Matt chips are used as training samples and ColorChecker SG is used as the testing sample. Four random samples are selected for comparison. It can be easily seen that the proposed method is closer to the original sample, so the proposed method shows better performance.
In Figure 7, different colors represent different methods; the colors correspond to the method labels in Figure 7d. After this initial verification, in order to further demonstrate the performance of the proposed method, it is applied to the spectral images [27] ColorChecker and fruitandflowers.
Figure 8 compares the results of recovering the spectral images with the different methods.
Figure 8a shows the original RGB image.
Figure 8b–f are error maps, which visualize the color difference of the spectral reflectance recovered by each method: redder regions indicate a larger color difference, and bluer regions a smaller one. Therefore, the proposed method shows better performance.
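A per-pixel DE76 error map of this kind can be computed directly from two CIELAB images; a small sketch with a hypothetical helper name:

```python
import numpy as np

def delta_e_map(lab_a, lab_b):
    """Per-pixel CIE DE76 map for two (H, W, 3) CIELAB images.
    Returns an (H, W) array suitable for rendering as a blue-to-red heatmap."""
    return np.sqrt(np.sum((lab_a - lab_b) ** 2, axis=-1))
```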
Previous research shows that large training sets in the field of spectral recovery contain redundant samples. It is not necessary to use all samples in a database as training samples; doing so causes a heavy workload and inconvenience in sample collection and processing, especially in outdoor applications. Therefore, the optimal selection of representative samples from existing databases has always been an important aspect of spectral recovery.
According to
Table 2, different distances select correspondingly different sets of representative samples. As the distance increases, the accuracy first increases and then decreases.
As can be seen from
Figure 9, the results show that there is an extreme value of recovery accuracy, which is very similar to
Figure 5. The curve is likewise concave with a small amplitude, and the best accuracy is obtained at a distance of 30, where 24 representative samples are selected. So, we set the distance to 30 to determine the representative samples. The comparison of the proposed method using representative points with some current methods is shown in
Table 3.
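One simple way to realize distance-controlled representative selection is a greedy coverage rule: a sample becomes a representative only if it lies farther than the chosen distance from every representative picked so far. This is only an illustrative sketch under that assumption, not necessarily the merging procedure used in the paper:

```python
import numpy as np

def select_representatives(samples, distance):
    """Greedy coverage selection. samples: (n, d) feature vectors
    (e.g., LAB or response values). A larger distance yields fewer
    representatives, mirroring the trend in Table 2."""
    reps = []
    for s in samples:
        if all(np.linalg.norm(s - r) > distance for r in reps):
            reps.append(s)
    return np.array(reps)
```

On 1-D points 0..9 with a distance of 2.5, this rule keeps the points 0, 3, 6 and 9.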
After the representative samples are selected,
Figure 10 shows the distribution of the training samples and of the representative samples selected by several methods in the xyY space. Blue points represent training points and red points represent representative points. Eckhard [38] discussed the sample selection methods mentioned above, and we adopt his reported results as the selected samples. Liang [30] used 60 selected samples in his article.
Figure 10a represents the distribution of selected samples obtained by the proposed method.
Figure 10b uses Cheung’s method to calculate the distribution of the selected samples in the training sample.
Figure 10c represents Hardeberg's method to calculate the distribution of the selected samples in the training sample.
Figure 10d uses Liang’s method to obtain the selected samples. The red dots represent the selected representative points, and the blue dots represent the overall data in
Figure 10 and
Figure 11.
3.2. Recovery for Different Illuminants and Cameras
Considering that different illuminants [24,39] and camera spectral sensitivities will affect the proposed method, the Munsell Matt chips are used as training samples, and different illuminants and spectral sensitivity functions are used to verify the effectiveness of the proposed method in
Table 4,
Table 5,
Table 6 and
Table 7. Then, the results are recovered according to the ColorChecker SG data testing sample.
Table 4 shows the RMSE, GFC and color difference obtained by each comparison method under different illuminants. Six illuminants are used in
Table 4: CIE illuminant B, CIE illuminant C, CIE illuminant D50, CIE illuminant D65, CIE illuminant E, and CIE illuminant F2. It is easy to see that the proposed method achieves the best mean color difference as well as the best average RMSE and GFC.
Table 5 shows the results of the whole recovery by selecting representative samples under different illuminants. The proposed method shows better performance in more scenarios.
Figure 11 shows the distribution of the representative samples at a distance of 30 under the different illuminants.
The experiments are also performed using the spectral sensitivities of different commercial cameras, which makes the results more general. The red, green and blue channels of each digital camera replace the Nokia N900 sensitivity function mentioned above; these sensitivities come from the database of camera sensitivity functions measured by Jiang in 2013 [40].
The results in
Table 6 are obtained using Munsell Matt chips as training samples and ColorChecker SG data as testing samples. The spectral sensitivity functions in
Figure 12 are used as the observer condition. The results follow the same trend as those in
Table 4, and the proposed method shows better results in terms of mean values.
After the spectral sensitivities are introduced,
Table 6 shows the results of several recovery methods using different camera sensitivities.
Table 7 shows the results of several recovery methods using different illuminants under the Canon 5D Mark II spectral sensitivity function, after the exploration of illuminants and spectral sensitivities above. Recovery accuracy is then demonstrated using spectral error maps.
Figure 13 and
Figure 14 both use the Munsell Matt chips as the training sample to obtain the response values under the Canon 5D Mark II and the illuminant CIE D65.
Figure 13 shows fruitandflowers as the testing sample and Figure 14 shows ColorChecker as the testing sample. Their error maps use the same color scale as in Figure 8: the range of color from blue to red indicates errors from small to large. Both figures show the results obtained with the different spectral recovery methods, and the results show that the proposed method is superior to the other methods.