A Novel Approach to Image Recoloring for Color Vision Deficiency
Abstract
1. Introduction
2. State of the Art and the Current Contribution
2.1. Image Recoloring for the Color-Blind: State of the Art
2.2. The Current Contribution
- The first contribution concerns the number of colors to be modified. In contrast to other approaches that adapt all colors of the input image [26,27,28,30], our approach modifies only the colors confused by the color blind. Since not all image colors are modified, the recolored image is expected to maintain its naturalness.
- The second contribution is that the adaptation of confusing colors should be driven by a confusion-line-based approach. Confusion lines are the product of extensive experimentation [3,6,9] and, as such, accurately reflect the way a dichromat perceives colors. In contrast to other approaches that perform the recoloring purely in terms of optimization [15,16,23,25,32], this paper introduces a mechanism that relocates specific confusing colors to specific confusion lines, thus enhancing the contrast. Since each color is transferred to its closest non-occupied confusion line, naturalness is also expected to be preserved.
- The third contribution concerns the need to further optimize both naturalness and contrast. Unlike other approaches that use color or plane rotation mechanisms [13,17,27,28], herein we manipulate the luminance channel to minimize a regularized objective that uniformly combines the naturalness and contrast criteria.
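As a rough illustration of this third contribution, the regularized objective can be thought of as having the following generic form; the symbols are illustrative assumptions (the actual formulation belongs to Module 3, Section 3.4), with ℓ collecting the luminance values of the key colors and λ a regularization weight:

\[
J(\ell) \;=\; E_{\mathrm{contrast}}(\ell) \;+\; \lambda\, E_{\mathrm{nat}}(\ell)
\]

Here, E_contrast would penalize insufficient contrast between the previously confused colors and E_nat would penalize deviation from the original image; reading "uniformly combines" as equal weighting corresponds to λ = 1, although this is an assumption rather than the paper's exact objective.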
3. The Proposed Method
3.1. Preliminaries
3.2. Module 1: Key Color Extraction
3.3. Module 2: Key Color Translation
- Case 1: A confusion line contains at least one color from the set U. If it also contains colors from the set V, then all of these V colors are translated to separate confusion lines.
- Case 2: A confusion line does not contain colors from the set U, but it contains at least two colors from the set V. In this case, the color with the lowest rank remains on the confusion line, while the rest of the colors are translated to different confusion lines.
- Case 3: A confusion line contains only one color, which belongs to the set V. In this case, no color is going to be translated.
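A minimal sketch of this case analysis is given below; the data layout (key colors tagged with the set they belong to and their rank) is a hypothetical illustration rather than the paper's implementation.

```python
# Hypothetical sketch: decide which of the three cases an occupied
# confusion line falls under, and which of its colors must be translated.
# Each key color is a dict such as {"id": 3, "set": "U" or "V", "rank": 0.4},
# where a larger rank is assumed to mean a larger covered image area.
def classify_confusion_line(colors_on_line):
    u_colors = [c for c in colors_on_line if c["set"] == "U"]
    v_colors = [c for c in colors_on_line if c["set"] == "V"]
    if u_colors:
        # Case 1: the line holds at least one color of U;
        # every color of V lying on it must be translated.
        return 1, v_colors
    if len(v_colors) >= 2:
        # Case 2: only V colors; the lowest-ranked one stays,
        # the rest must be translated.
        keep = min(v_colors, key=lambda c: c["rank"])
        return 2, [c for c in v_colors if c is not keep]
    # Case 3: a single V color; nothing is translated.
    return 3, []
```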
Algorithm 1: Translation process of the colors belonging to the set V
Inputs: the sets …, …, …; Output: the set …
Set … and …
While … and … do
 …
End While
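A minimal sketch of the translation loop of Algorithm 1 follows, under the assumptions that key colors carry a rank (larger rank = larger image area) and that a color-to-confusion-line distance is available; it illustrates the greedy, highest-rank-first strategy discussed in the remarks at the end of this subsection, not the paper's exact listing.

```python
# Hypothetical sketch of the translation step in Module 2.
# to_translate: key colors of V that must leave their confusion line,
#               e.g. {"id": 7, "rank": 0.35}; larger rank = larger image area.
# free_lines:   indices of the currently non-occupied confusion lines.
# distance:     a function distance(color, line) giving the distance between
#               a key color and a confusion line (assumed, e.g. perceptual).
def translate_colors(to_translate, free_lines, distance):
    assignment = {}
    # Move the highest-ranked colors first, so that the largest image areas
    # end up on the closest available confusion lines.
    for color in sorted(to_translate, key=lambda c: c["rank"], reverse=True):
        if not free_lines:
            break  # no non-occupied confusion lines remain
        target = min(free_lines, key=lambda line: distance(color, line))
        assignment[color["id"]] = target  # the color moves to this line
        free_lines.remove(target)         # the target line is now occupied
    return assignment
```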
1. It is possible that at least two colors will move to distant confusion lines. Although this will increase the contrast, the naturalness will be compromised.
2. It is recommended that … so that … and all colors of V move to different confusion lines. We performed extensive experiments on the Flowers and Fruits data sets, which contain 195 calibrated color images taken from the McGill calibrated color image database [41], and found that the above condition is effective as far as the color segmentation of the input image is concerned. However, depending on the designer’s choice, if …, it is possible to get … and some key colors of V will not be moved. In this case, the naturalness will be enhanced and the contrast will be reduced.
1. Let us assume that there is an occupied confusion line that falls under the above-mentioned Case 1. The confusion line contains key colors from both sets U and V, and therefore all key colors belonging to V and lying on that confusion line must be translated. By translating the key color with the highest rank first, this color is moved to its closest non-occupied confusion line, so its final color remains close to the original one. Accordingly, a low-ranked key color is moved to a more distant non-occupied confusion line. Following this strategy, large image areas are recolored using colors similar to the original ones, while small image areas are recolored using colors much different from the original ones. This directly implies that the recolored image will preserve the naturalness criterion. On the other hand, if we choose to move the low-ranked key colors first, the opposite effect takes place and the naturalness of the recolored image is seriously compromised.
2. Let us assume that there is an occupied confusion line that falls under the above-mentioned Case 2. The confusion line contains key colors only from the set V, and therefore all but one of them must be translated. If we choose to move the low-ranked key colors first, the non-occupied confusion lines closest to the occupied one will be exhausted, and the higher-ranked key colors will be forced onto distant confusion lines. Thus, large image areas will be recolored using colors much different from the original ones, and the naturalness will be seriously damaged. Admittedly, the highest-ranked key color will remain unchanged; however, there is no guarantee that this counterbalancing effect will be strong enough to improve the naturalness criterion.
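A toy numerical illustration of these two remarks (with hypothetical area fractions and line distances, not data from the paper) compares the area-weighted color shift produced by the two translation orders:

```python
# Toy illustration (hypothetical numbers): three confusing colors on one
# occupied line must be moved; their ranks reflect the image area they cover,
# and the available non-occupied lines lie at increasing distances.
areas = {"c_high": 0.60, "c_mid": 0.30, "c_low": 0.10}   # fraction of image covered
line_distances = [1.0, 3.0, 9.0]                          # closest free line first

def area_weighted_shift(order):
    # The k-th color to be moved ends up on the k-th closest free line.
    return sum(areas[c] * d for c, d in zip(order, line_distances))

high_first = area_weighted_shift(["c_high", "c_mid", "c_low"])  # 0.6*1 + 0.3*3 + 0.1*9 = 2.4
low_first  = area_weighted_shift(["c_low", "c_mid", "c_high"])  # 0.1*1 + 0.3*3 + 0.6*9 = 6.4
print(high_first, low_first)
```

The highest-rank-first order yields the smaller area-weighted shift: large image regions keep colors close to the originals, which is exactly the naturalness argument of remark 1.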
3.4. Module 3: Key Color Optimization
3.5. Module 4: Cluster-to-Cluster Color Transfer
3.6. Computational Complexity Analysis
4. Experimental Evaluation
4.1. Quantitative Evaluation
4.1.1. Quantitative Evaluation Using the Data Set of the Art Paintings
4.1.2. Quantitative Evaluation Using the Data Set of Natural Images
4.2. Qualitative Evaluation
4.3. Subjective Evaluation
5. Discussion and Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
Appendix A
References
- Fairchild, M.D. Color Appearance Models; Wiley: West Sussex, UK, 2005.
- Stockman, A.; Sharpe, L.T. The spectral sensitivities of the middle- and long-wavelength-sensitive cones derived from measurements in observers of known genotype. Vis. Res. 2000, 40, 1711–1737.
- Smith, V.C.; Pokorny, J. Color matching and color discrimination. In The Science of Color; Elsevier BV: Oxford, UK, 2003; pp. 103–148.
- Pridmore, R.W. Orthogonal relations and color constancy in dichromatic colorblindness. PLoS ONE 2014, 9, e107035.
- Fry, G.A. Confusion lines of dichromats. Color Res. Appl. 1992, 17, 379–383.
- Moreira, H.; Álvaro, L.; Melnikova, A.; Lillo, J. Colorimetry and dichromatic vision. In Colorimetry and Image Processing; IntechOpen: London, UK, 2018; pp. 2–21.
- Han, D.; Yoo, S.J.; Kim, B. A novel confusion-line separation algorithm based on color segmentation for color vision deficiency. J. Imaging Sci. Technol. 2012, 56, 1–17.
- Choi, J.; Lee, J.; Moon, H.; Yoo, S.J.; Han, D. Optimal color correction based on image analysis for color vision deficiency. IEEE Access 2019, 7, 154466–154479.
- Judd, D.B. Standard response functions for protanopic and deuteranopic vision. J. Opt. Soc. Am. 1945, 35, 199–221.
- Ribeiro, M.; Gomes, A.J. Recoloring algorithms for colorblind people: A survey. ACM Comput. Surv. 2019, 52, 1–37.
- Wakita, K.; Shimamura, K. SmartColor: Disambiguation framework for the colorblind. In Proceedings of the 7th International ACM SIGACCESS Conference on Computers and Accessibility (ASSETS ’05), Baltimore, MD, USA, 9–12 October 2005; pp. 158–165.
- Jefferson, L.; Harvey, R. Accommodating color blind computer users. In Proceedings of the 8th International ACM SIGACCESS Conference on Computers and Accessibility (ASSETS ’06), Portland, OR, USA, 23–25 October 2006; pp. 40–47.
- Kuhn, G.R.; Oliveira, M.M.; Fernandes, L.A.F. An efficient naturalness-preserving image-recoloring method for dichromats. IEEE Trans. Vis. Comput. Graph. 2008, 14, 1747–1754.
- Zhu, Z.; Toyoura, M.; Go, K.; Fujishiro, I.; Kashiwagi, K.; Mao, X. Naturalness- and information-preserving image recoloring for red–green dichromats. Signal Process. Image Commun. 2019, 76, 68–80.
- Kang, S.-K.; Lee, C.; Kim, C.-S. Optimized color contrast enhancement for dichromats using local and global contrast. In Proceedings of the IEEE International Conference on Image Processing (ICIP), Abu Dhabi, United Arab Emirates, 25–28 October 2020; pp. 1048–1052.
- Meng, M.; Tanaka, G. Lightness modification method considering visual characteristics of protanopia and deuteranopia. Opt. Rev. 2020, 27, 548–560.
- Huang, J.-B.; Tseng, Y.-C.; Wu, S.-I.; Wang, S.-J. Information preserving color transformation for protanopia and deuteranopia. IEEE Signal Process. Lett. 2007, 14, 711–714.
- Nakauchi, S.; Onouchi, T. Detection and modification of confusing color combinations for red-green dichromats to achieve a color universal design. Color Res. Appl. 2008, 33, 203–211.
- Rigos, A.; Chatzistamatis, S.; Tsekouras, G.E. A systematic methodology to modify color images for dichromatic human color vision and its application in art paintings. Int. J. Adv. Trends Comput. Sci. Eng. 2020, 9, 5015–5025.
- Bennett, M.; Quigley, A. A method for the automatic analysis of colour category pixel shifts during dichromatic vision. Lect. Notes Comput. Sci. 2006, 4292, 457–466.
- Martínez-Domingo, M.Á.; Valero, E.M.; Gómez-Robledo, L.; Huertas, R.; Hernández-Andrés, J. Spectral filter selection for increasing chromatic diversity in CVD subjects. Sensors 2020, 20, 2023.
- Jeong, J.-Y.; Kim, H.-J.; Wang, T.-S.; Yoon, Y.-J.; Ko, S.-J. An efficient re-coloring method with information preserving for the color-blind. IEEE Trans. Consum. Electron. 2011, 57, 1953–1960.
- Simon-Liedtke, J.T.; Farup, I. Multiscale daltonization in the gradient domain. J. Percept. Imaging 2018, 1, 10503-1–10503-12.
- Farup, I. Individualised Halo-Free Gradient-Domain Colour Image Daltonisation. J. Imaging 2020, 6, 116.
- Huang, J.-B.; Chen, C.-S.; Jen, T.-S.; Wang, S.-J. Image recolorization for the color blind. In Proceedings of the 2009 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP 2009), Taipei, Taiwan, 19–24 April 2009; pp. 1161–1164.
- Hassan, M.F.; Paramesran, R. Naturalness preserving image recoloring method for people with red–green deficiency. Signal Process. Image Commun. 2017, 57, 126–133.
- Wong, A.; Bishop, W. Perceptually-adaptive color enhancement of still images for individuals with dichromacy. In Proceedings of the 2008 Canadian Conference on Electrical and Computer Engineering, Vancouver, BC, Canada, 6–7 October 2008; pp. 002027–002032.
- Ching, S.-L.; Sabudin, M. Website image colour transformation for the colour blind. In Proceedings of the 2nd International Conference on Computer Technology and Development, Cairo, Egypt, 2–4 November 2010; pp. 255–259.
- Lin, H.-Y.; Chen, L.-Q.; Wang, M.-L. Improving Discrimination in Color Vision Deficiency by Image Re-Coloring. Sensors 2019, 19, 2250.
- Ma, Y.; Gu, X.; Wang, Y. Color discrimination enhancement for dichromats using self-organizing color transformation. Inf. Sci. 2009, 179, 830–843.
- Li, J.; Feng, X.; Fan, H. Saliency Consistency-Based Image Re-Colorization for Color Blindness. IEEE Access 2020, 8, 88558–88574.
- Chatzistamatis, S.; Rigos, A.; Tsekouras, G.E. Image Recoloring of Art Paintings for the Color Blind Guided by Semantic Segmentation. In Proceedings of the 21st International Conference on Engineering Applications of Neural Networks (EANN 2020), Halkidiki, Greece, 5–7 June 2020; pp. 261–273.
- Viénot, F.; Brettel, H.; Ott, L.; Ben M’Barek, A.; Mollon, J.D. What do colour-blind people see? Nature 1995, 376, 127–128.
- Viénot, F.; Brettel, H.; Mollon, J.D. Digital Video Colourmaps for Checking the Legibility of Displays by Dichromats. Color Res. Appl. 1999, 24, 243–252.
- Bezdek, J.C. Pattern Recognition with Fuzzy Objective Function Algorithms; Springer Science and Business Media LLC: Berlin/Heidelberg, Germany, 1981.
- Price, K.V.; Storn, R.M.; Lampinen, J.A. Differential Evolution: A Practical Approach to Global Optimization; Springer: Berlin/Heidelberg, Germany, 2005.
- Ruderman, D.L.; Cronin, T.W.; Chiao, C.-C. Statistics of cone responses to natural images: Implications for visual coding. J. Opt. Soc. Am. A 1998, 15, 2036–2045.
- Poynton, C.A. Digital Video and HDTV: Algorithms and Interfaces; Morgan Kaufmann: San Francisco, CA, USA, 2003.
- Reinhard, E.; Ashikhmin, M.; Gooch, B.; Shirley, P. Color transfer between images. IEEE Comput. Graph. Appl. 2001, 21, 34–41.
- Reinhard, E.; Pouli, T. Colour spaces for colour transfer. Lect. Notes Comput. Sci. 2011, 6626, 1–15.
- Olmos, A.; Kingdom, F.A.A. A Biologically Inspired Algorithm for the Recovery of Shading and Reflectance Images. Perception 2004, 33, 1463–1473.
- Zhang, L.; Zhang, L.; Mou, X.; Zhang, D. FSIM: A feature similarity index for image quality assessment. IEEE Trans. Image Process. 2011, 20, 2378–2386.
- Ishihara, S. Tests for Color Blindness. Am. J. Ophthalmol. 1918, 1, 376.
- Farnsworth, D. The Farnsworth Dichotomous Test for Color Blindness—Panel D–15; Psychological Corporation: New York, NY, USA, 1947.
- Thurstone, L.L. A law of comparative judgment. Psychol. Rev. 1927, 34, 273–286.
- Mosteller, F. Remarks on the method of paired comparisons: I. The least squares solution assuming equal standard deviations and equal correlations. Psychometrika 1951, 16, 3–9.
Module 1 | | Modules 2 and 3 | | Differential Evolution |
---|---|---|---|---|---|
Parameter | Value | Parameter | Value | Parameter | Value
– | 10 | – (Protanopia) | 17 | – | 20
– | 25 | – (Deuteranopia) | 15 | – | 0.8
– | 5 | – | 5 | – | 0.6
– | 5 | – | 0.2 | – | 100
 | Protanopia | | | | Deuteranopia | | | |
---|---|---|---|---|---|---|---|---|
Painting | Proposed | Method 1 | Method 2 | Method 3 | Proposed | Method 1 | Method 2 | Method 3 |
1 | 4.8324 | 9.9372 | 13.2600 | 12.0398 | 7.4648 | 13.6121 | 13.2600 | 9.6462 |
2 | 2.2735 | 7.7299 | 8.3201 | 8.4141 | 2.4797 | 9.1270 | 8.3201 | 5.6994 |
3 | 4.4311 | 6.5477 | 9.3297 | 9.5566 | 2.1356 | 4.6454 | 9.3297 | 6.9328 |
4 | 4.2495 | 6.5726 | 11.8014 | 11.7561 | 4.1392 | 4.5544 | 11.8014 | 9.4815 |
5 | 3.7393 | 7.5320 | 8.7266 | 10.8788 | 3.9259 | 7.9379 | 8.7266 | 7.5273 |
6 | 2.6914 | 2.4915 | 6.7193 | 6.4676 | 2.6676 | 1.5740 | 6.7193 | 4.7121 |
 | Protanopia | | | | Deuteranopia | | | |
---|---|---|---|---|---|---|---|---|
Painting | Proposed | Method 1 | Method 2 | Method 3 | Proposed | Method 1 | Method 2 | Method 3 |
1 | 0.9386 | 0.9828 | 0.9654 | 0.9342 | 0.9443 | 0.9430 | 0.9654 | 0.9752 |
2 | 0.9939 | 0.9832 | 0.9663 | 0.9518 | 0.9690 | 0.9936 | 0.9663 | 0.9922 |
3 | 0.9891 | 0.9735 | 0.9254 | 0.9099 | 0.9922 | 0.9831 | 0.9254 | 0.9788 |
4 | 0.9454 | 0.9747 | 0.9647 | 0.9141 | 0.9212 | 0.9778 | 0.9647 | 0.9707 |
5 | 0.9911 | 0.9882 | 0.9676 | 0.9545 | 0.9907 | 0.9854 | 0.9676 | 0.9828 |
6 | 0.9895 | 0.9917 | 0.9631 | 0.9423 | 0.9953 | 0.9760 | 0.9631 | 0.9842 |
Method | Min | 1st Quartile (Q1) | Median | 3rd Quartile (Q3) | Max |
---|---|---|---|---|---|
Jnat | |||||
Method 1 | 0.994 | 8.579 | 12.881 | 16.082 | 22.042 |
Method 2 | 5.262 | 10.149 | 12.181 | 14.003 | 17.825 |
Method 3 | 6.494 | 11.649 | 13.277 | 14.783 | 20.177 |
Proposed | 0.036 | 2.297 | 4.802 | 8.208 | 17.376 |
FSIMc | |||||
Method 1 | 0.897 | 0.955 | 0.972 | 0.986 | 0.999 |
Method 2 | 0.836 | 0.916 | 0.939 | 0.955 | 0.986 |
Method 3 | 0.756 | 0.889 | 0.926 | 0.947 | 0.988 |
Proposed | 0.885 | 0.959 | 0.973 | 0.986 | 1.000 |
Method | Min | Q1 | Median | Q3 | Max | 95% CIs for Medians (Bonferroni adj.) | p-Value (Bonferroni adj.) |
---|---|---|---|---|---|---|---|
Jnat Differences (Method–Proposed) | |||||||
Method 1 | −9.887 | 3.214 | 6.579 | 10.987 | 18.659 | (5.736, 8.035) | <0.015 |
Method 2 | −3.085 | 4.611 | 7.020 | 8.435 | 13.017 | (6.016, 7.400) | <0.015 |
Method 3 | −1.251 | 4.912 | 7.475 | 10.254 | 14.898 | (6.810, 8.208) | <0.015 |
FSIMc Differences (Proposed–Method) | |||||||
Method 1 | −0.078 | −0.012 | 0.002 | 0.015 | 0.072 | (−0.001, 0.006) | 0.603
Method 2 | −0.046 | 0.022 | 0.037 | 0.053 | 0.118 | (0.033, 0.040) | <0.015 |
Method 3 | −0.041 | 0.027 | 0.049 | 0.076 | 0.198 | (0.042, 0.058) | <0.015 |
Method | Min | 1st Quartile (Q1) | Median | 3rd Quartile (Q3) | Max |
---|---|---|---|---|---|
Jnat | |||||
Method 1 | 0.312 | 6.901 | 11.590 | 15.072 | 20.727 |
Method 2 | 5.262 | 10.149 | 12.181 | 14.003 | 17.825 |
Method 3 | 2.842 | 7.155 | 9.485 | 12.239 | 15.946 |
Proposed | 0.045 | 2.523 | 4.890 | 8.197 | 17.076 |
FSIMc | |||||
Method 1 | 0.897 | 0.956 | 0.976 | 0.910 | 1.000 |
Method 2 | 0.836 | 0.916 | 0.939 | 0.955 | 0.986 |
Method 3 | 0.790 | 0.949 | 0.972 | 0.985 | 1.000 |
Proposed | 0.908 | 0.960 | 0.978 | 0.990 | 0.998 |
Method | Min | Q1 | Median | Q3 | Max | 95% CIs for Medians (Bonferroni adj.) | p-Value (Bonferroni adj.) |
---|---|---|---|---|---|---|---|
Jnat Differences (Method–Proposed) | |||||||
Method 1 | −8.516 | 0.460 | 4.880 | 10.072 | 19.144 | (3.569, 7.572) | <0.015 |
Method 2 | −2.655 | 5.022 | 6.558 | 7.945 | 12.800 | (5.866, 7.151) | <0.015 |
Method 3 | −2.209 | 2.259 | 3.768 | 5.335 | 10.739 | (3.223, 4.345) | <0.015 |
FSIMc Differences (Proposed–Method) | |||||||
Method 1 | −0.081 | −0.020 | −0.001 | 0.018 | 0.090 | (−0.005, 0.006) | 1.000 |
Method 2 | −0.031 | 0.022 | 0.038 | 0.056 | 0.096 | (0.033, 0.044) | <0.015 |
Method 3 | −0.084 | −0.004 | 0.005 | 0.019 | 0.141 | (0.001, 0.009) | <0.015 |
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
© 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).