Article

Individualised Halo-Free Gradient-Domain Colour Image Daltonisation

Ivar Farup
Department of Computer Science, Norwegian University of Science and Technology (NTNU), 2815 Gjøvik, Norway
J. Imaging 2020, 6(11), 116; https://doi.org/10.3390/jimaging6110116
Submission received: 7 October 2020 / Revised: 24 October 2020 / Accepted: 26 October 2020 / Published: 29 October 2020

Abstract

Daltonisation refers to the recolouring of images such that details normally lost by colour vision deficient observers become visible. This comes at the cost of introducing artificial colours. In a previous work, we presented a gradient-domain colour image daltonisation method that outperformed previously known methods both in behavioural and psychometric experiments. In the present paper, we improve the method by (i) finding a good first estimate of the daltonised image, thus reducing the computational time significantly, and (ii) introducing local linear anisotropic diffusion, thus effectively removing the halo artefacts. The method uses a colour vision deficiency simulation algorithm as an ingredient, and can thus be applied for any colour vision deficiency, and can even be individualised if the exact individual colour vision is known.

1. Introduction

There are three types of cone cells, or cones, each with a different pigment, in the retinas of the human eye [1]. The cones are referred to as L-cones, M-cones, and S-cones, indicating sensitivities to long, medium, and short wavelengths, respectively. Among colour normal observers (trichromats), there are only minor variations in the spectral sensitivities of the three cone types. However, about eight percent of the male population and 0.4 percent of the female population suffer from a congenital colour vision deficiency (CVD) [2]. This manifests either as a change in one of the cone types (anomalous trichromacy), or as a complete lack of one (dichromacy) or, very rarely, two (monochromacy) of the cone types. Referring to the affected L-, M-, and S-cones with the Greek prefixes prot-, deuter-, and trit-, respectively, these CVDs can be further characterised as protanomaly, deuteranomaly, and tritanomaly for the anomalous trichromacies, and protanopia, deuteranopia, and tritanopia for the dichromacies. Anomalies in or absence of the L- or M-cones both lead to various forms of red–green CVDs, whereas anomalies in or absence of the S-cones lead to blue–yellow CVDs. In the population, the various forms of red–green CVDs are by far the most common.
In many situations, e.g., in graphic design, it can be important for colour normal observers to understand how the world looks through the eyes of a CVD observer. Various simulation methods have been developed for this purpose. In a thorough behavioural comparison of existing CVD simulation methods performed by Simon-Liedtke and Farup [3], the method of Brettel et al. [4] was found to be the best performing one for dichromats.
For the most part, people with CVDs can live completely normal lives. In some situations, however, they may experience certain disadvantages [5]. A considerable effort has been made to develop techniques that improve the ability of CVD observers to differentiate colours, including optical filters (https://enchroma.com/, visited 28 October 2020) and the introduction of lost information through sensory substitution devices [6]; even a medical ‘cure’ has been proposed [7]. For the display of digital images, so-called daltonisation techniques, i.e., recolourings of images for increased visibility for CVD observers, have been proposed [8,9,10,11]. CVD simulation algorithms are often an important ingredient in daltonisation algorithms. Most daltonisation methods use some form of global recolouring scheme, but daltonisation methods highlighting only the colour edges and details that are lost to CVD observers have also been proposed [12]. In a behavioural experiment, the methods of Anagnostopoulos et al. [8] and Kotera [10] were found to be the best performing ones available at the time [13]. A thorough review of global daltonisation methods, i.e., mappings in colour space that do not take the local image content into account, was recently given by Ribeiro and Gomes [14]. For image-dependent daltonisation, recent contributions include Chatzistamatis et al.’s method for recolouring art paintings guided by semantic segmentation [15], and Iqbal et al.’s use of sharpened wavelet bands and inpainting [16].
Recently, we developed a daltonisation method that performed better than or on par with state-of-the-art algorithms in both behavioural and psychometric experiments [17]. (An at least principally very similar method, although formulated quite differently, was proposed almost simultaneously by Zhu et al. [18].) The method was based on a global transformation in the gradient domain of the original colour image, followed by a multiscale reintegration to obtain the final daltonised image. Although it gave good results, the method had two main problems: (i) it was computationally very expensive, and (ii) it tended to produce visual halo artefacts near strong chromatic edges. In the present paper, we present a solution to these two problems by (i) finding a much better initial condition for the iterative algorithm, and (ii) introducing local linear anisotropic diffusion in place of the non-local isotropic diffusion used in the previous method. Together, this leads to significantly shorter computing times, as well as to the elimination of visual halo artefacts, while still preserving the qualitative behaviour of the previously proposed algorithm.

2. Background

In the previously proposed algorithm [17], the original image was represented as the vector-valued function
$$ u_0 \colon \Omega \to C, $$

where $\Omega \subset \mathbb{R}^2$ was the spatial image domain, and $C \subset \mathbb{R}^3$ the colour space. The spatial gradient $\nabla u_0$ represented the derivatives along the two spatial directions for all image channels [19]. The CVD simulated image, obtained using the method of Brettel et al. [4], was denoted $s(u_0)$. The difference image between the original image and its simulation, $d_0 = u_0 - s(u_0)$, contained the information that is lost to CVD observers.
We applied a principal component analysis (PCA) [20] to $d_0$ to obtain its first principal component $p_1(d_0)$, representing the main direction of difference, i.e., the main direction of information loss for CVD observers. An orthonormal basis for the daltonisation was built using the Gram–Schmidt process on $p_1(d_0)$, starting from the normalised grey axis, taken either as the diagonal of the RGB space, $e_l = [1, 1, 1]$, or as the direction orthogonal to the planes of constant lightness in, e.g., the sRGB space, $e_l = [0.2126, 0.7152, 0.0722]$ [21]. This resulted in the orthonormal triplet $e_l$, $e_d$, and $e_c$, representing the directions of lightness, difference, and maximally visible chroma change for the CVD observer, respectively.
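As an illustration, this construction can be sketched in NumPy as follows. The function below is not the published implementation; the SVD-based PCA, the helper name, and the default grey axis are illustrative choices.

    import numpy as np

    def daltonisation_basis(u0, s_u0, grey=(0.2126, 0.7152, 0.0722)):
        """Build the orthonormal triplet (e_l, e_d, e_c) from an
        H x W x 3 RGB image u0 and its CVD simulation s_u0."""
        d0 = (u0 - s_u0).reshape(-1, 3)
        d0 = d0 - d0.mean(axis=0)        # centre the difference image
        _, _, vt = np.linalg.svd(d0, full_matrices=False)
        p1 = vt[0]                       # first principal component of d0
        e_l = np.asarray(grey, dtype=float)
        e_l = e_l / np.linalg.norm(e_l)  # normalised grey axis
        e_d = p1 - (p1 @ e_l) * e_l      # Gram-Schmidt against the grey axis
        e_d = e_d / np.linalg.norm(e_d)
        e_c = np.cross(e_l, e_d)         # completes the orthonormal triplet
        return e_l, e_d, e_c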
The gradient of the difference between the original image and the result of the simulation algorithm was then approximated by the projection of the image gradient onto $e_d$, $\nabla d_0 \approx (\nabla u_0 \cdot e_d)\, e_d$ (for a simulation algorithm that is an orthogonal projection within an equiluminant plane of the colour space, the approximation is exact). This difference was rotated and scaled into the direction of maximal chromatic visibility, $e_c$, and added to the original gradient field to obtain a new gradient tensor

$$ G = \nabla u_0 + \chi\, (\nabla u_0 \cdot e_d)\, e_c, \qquad (1) $$

where $\chi$ was a suitable scalar field.
In order to determine $\chi$, we required the norm of the simulated tensor $S(G)$ to equal the norm of the original gradient, because the gradient visible to a colour-deficient observer looking at the daltonised image should equal in magnitude the gradient seen by a normal-sighted observer looking at the original image,

$$ \| S(G) \|_F^2 = \| \nabla u_0 \|_F^2. \qquad (2) $$

Here, $\| \cdot \|_F$ denotes the Frobenius norm taken over both the colour space and the spatial gradient space simultaneously. For simplicity, we approximated the simulation method by a linear projection onto the $(e_c, e_l)$ plane, $S(G) \approx \nabla s(u_0) + \chi\, (\nabla u_0 \cdot e_d)\, e_c$. Inserting this into Equation (2) gives

$$ \chi^2\, \| (\nabla u_0 \cdot e_d)\, e_c \|_F^2 + 2 \chi\, (\nabla u_0 \cdot e_d)\, e_c : \nabla s(u_0) + \| \nabla s(u_0) \|_F^2 - \| \nabla u_0 \|_F^2 = 0, $$

a quadratic equation for $\chi$ with, in general, two solutions, $\chi_\pm$. Which one to use was determined by integrating both solutions over the image and selecting the one smaller in absolute value.
Finally, the daltonised image was found from the tensor $G$ by reintegrating the gradient field iteratively through gradient descent [22], using the finite difference method with explicit time integration, henceforth referred to as the isotropic diffusion equation

$$ u_t = \nabla \cdot (\nabla u - G), \qquad (3) $$

with a suitable stopping criterion. Further fine-tuning was performed using a fidelity term to preserve memory colours such as skin tones.
The original image $u_0$ was used as the initial condition for the gradient descent. Since the daltonisation process in some cases led to a significant change of the colours of the image, and since high image resolution gave very long diffusion paths, the algorithm took a long time to converge. In order to reduce the computing time, a multiscale pyramidal technique was used: the daltonisation problem was first solved for a strongly down-scaled image, and the result was then iteratively used as the starting point for the solution at the next higher resolution until the solution for the full-scale image was found.
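For reference, the explicit scheme can be sketched as follows. This is a minimal illustration of Equation (3) without the multiscale pyramid, the stopping criterion, or the fidelity term; the step size and iteration count are illustrative, and clipping to [0, 1] is assumed as the gamut constraint.

    import numpy as np

    def reintegrate_isotropic(u_init, gx, gy, dt=0.2, n_iter=500):
        """Explicit gradient descent for u_t = div(grad u - G).
        u_init: H x W x 3 image; (gx, gy): the components of the
        target gradient G along the two spatial directions."""
        u = u_init.copy()
        for _ in range(n_iter):
            fx = np.gradient(u, axis=1) - gx   # flux: grad u - G
            fy = np.gradient(u, axis=0) - gy
            div = np.gradient(fx, axis=1) + np.gradient(fy, axis=0)
            u = np.clip(u + dt * div, 0, 1)    # stay inside the gamut
        return u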

3. Proposed Method

3.1. The Initial Value

One main problem of the previously proposed algorithm is the computation time, stemming mainly from long diffusion paths and from the use of the original image $u_0$ as the initial value. As for all iterative optimisation problems, the selection of a good initial guess can be crucial for the convergence time.
Instead of using the original colour image $u_0$ as the initial value, we propose here to perform an operation, similar to the one applied to the gradient above, directly in the colour domain. Letting $s(u_0)$ denote the simulated image, and constructing $e_l$, $e_d$, and $e_c$ as before, the colour information lost by the CVD observer represented by the simulation algorithm can be approximated by the projection of the original image onto $e_d$, i.e., $d_0 \approx (u_0 \cdot e_d)\, e_d$. Rotating this estimate of the lost information into the direction of maximal chromatic visibility and clipping to the colour gamut gives a much better first estimate of the daltonised image,

$$ u_s = g(u_0 + (u_0 \cdot e_d)\, e_c), \qquad (4) $$

where $g(\cdot)$ is the gamut clipping operator. In fact, this simple operation works well as a daltonisation method all by itself. It produces a recolouring similar to that of the previously proposed algorithm, but destroys a considerable amount of image features and textures due to the gamut clipping operation. However, no halo artefacts are introduced. See Section 4 for sample results.
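In NumPy terms, Equation (4) amounts to a few lines; the sketch below assumes RGB values in [0, 1] and per-channel clipping as the gamut operator g, which is one simple choice, not necessarily the one used in the published code.

    import numpy as np

    def simple_daltonise(u0, e_d, e_c):
        """Equation (4): u_s = g(u0 + (u0 . e_d) e_c)."""
        lost = u0 @ e_d                   # projection onto e_d, H x W
        u_s = u0 + lost[..., None] * e_c  # rotated into e_c
        return np.clip(u_s, 0, 1)         # gamut clipping g(.)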

3.2. Local Linear Anisotropic Diffusion

The second significant problem of the previously proposed method is the production of visible halo artefacts near strong chromatic edges in the images. Switching from non-local isotropic to local linear anisotropic diffusion has proven to be an effective way of reducing haloing problems in various PDE-based colour imaging solutions, such as colour gamut mapping [23], colour-to-greyscale image conversion [24], and colour image demosaicing [25]. Thus, we propose to apply the same technique here.
The components of the structure tensor S of the original image can be expressed as [26]
$$ S_{ij} = \sum_{\mu} \frac{\partial u_0^\mu}{\partial x_i} \frac{\partial u_0^\mu}{\partial x_j}. $$
The eigenvalues of $S$ are denoted $\lambda_+$ and $\lambda_-$, and the corresponding normalised eigenvectors $e_+$ and $e_-$ are stored as rows in the orthonormal eigenvector matrix $E$, such that the structure tensor can be written $S = E^T \operatorname{diag}(\lambda_+, \lambda_-)\, E$.
From this, the diffusion tensor can, in agreement with Sapiro and Ringach [27], be defined as

$$ D = E^T \operatorname{diag}(d(\lambda_+), d(\lambda_-))\, E, $$

where $d(\lambda)$ is a nonlinear diffusion coefficient function,

$$ d(\lambda) = \frac{1}{1 + \kappa \lambda^2}, $$

and $\kappa$ is a suitably chosen numerical constant.
Inserting this into Equation (3) gives the anisotropic diffusion equation

$$ u_t = \nabla \cdot \left[ D\, (\nabla u - G) \right], \qquad (5) $$

subject to the constraint $u \in C$, with the initial condition $u(t=0) = u_s$.
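To make the scheme concrete, one explicit time step of Equation (5) can be sketched as below. The closed-form eigendecomposition of the 2 x 2 structure tensor and the parameter values are illustrative; the published implementation (Section 3.4) should be consulted for the actual scheme.

    import numpy as np

    def anisotropic_step(u, u0, gx, gy, dt=0.2, kappa=1e2):
        """One explicit step of u_t = div[D (grad u - G)].
        u, u0: H x W x 3 images; (gx, gy): target gradient G."""
        # Structure tensor of u0, summed over the colour channels.
        u0x = np.gradient(u0, axis=1)
        u0y = np.gradient(u0, axis=0)
        s11 = (u0x * u0x).sum(-1)
        s12 = (u0x * u0y).sum(-1)
        s22 = (u0y * u0y).sum(-1)
        # Closed-form eigenvalues of the symmetric 2 x 2 tensor.
        half_tr = (s11 + s22) / 2
        disc = np.sqrt(((s11 - s22) / 2) ** 2 + s12 ** 2)
        lam_p, lam_m = half_tr + disc, half_tr - disc
        d_p = 1 / (1 + kappa * lam_p ** 2)   # d(lambda_+)
        d_m = 1 / (1 + kappa * lam_m ** 2)   # d(lambda_-)
        # Normalised eigenvector for lam_p: (s12, lam_p - s11).
        vx, vy = s12, lam_p - s11
        n = np.hypot(vx, vy)
        deg = n < 1e-12                      # degenerate: axis-aligned
        vx = np.where(deg, 1.0, vx / np.where(deg, 1.0, n))
        vy = np.where(deg, 0.0, vy / np.where(deg, 1.0, n))
        # Diffusion tensor D = d_p v v^T + d_m v_perp v_perp^T.
        d11 = d_p * vx ** 2 + d_m * vy ** 2
        d12 = (d_p - d_m) * vx * vy
        d22 = d_p * vy ** 2 + d_m * vx ** 2
        # Flux D (grad u - G) and its divergence.
        fx = np.gradient(u, axis=1) - gx
        fy = np.gradient(u, axis=0) - gy
        jx = d11[..., None] * fx + d12[..., None] * fy
        jy = d12[..., None] * fx + d22[..., None] * fy
        div = np.gradient(jx, axis=1) + np.gradient(jy, axis=0)
        return np.clip(u + dt * div, 0, 1)   # constraint u in C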

3.3. Further Simplifications

One complicating factor in the previously proposed method is the computation of the scalar field $\chi$ and the selection of the best solution for it. Through experimentation, we have found that we obtain at least visually equally good daltonised images by simply setting $\chi = 1$ throughout, i.e., by removing $\chi$ from Equation (1), giving

$$ G = \nabla u_0 + (\nabla u_0 \cdot e_d)\, e_c. $$
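In code, this simplified target gradient can be assembled directly from the channel-wise image gradients and then fed to the reintegration sketches above; a sketch under the same assumptions:

    import numpy as np

    def target_gradient(u0, e_d, e_c):
        """G = grad u0 + (grad u0 . e_d) e_c, for both spatial
        derivatives of the H x W x 3 image u0."""
        gx = np.gradient(u0, axis=1)
        gy = np.gradient(u0, axis=0)
        gx = gx + (gx @ e_d)[..., None] * e_c
        gy = gy + (gy @ e_d)[..., None] * e_c
        return gx, gy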
A second complicating factor in the previously proposed method was the introduction of the data attachment term for certain memory colours. Since the main purpose of daltonisation methods as such is behavioural, i.e., making observers able to perform certain tasks with the images, we propose here to skip that term altogether.

3.4. Implementation

The methods are discretised using the finite difference method with explicit time integration, and implemented in Python using the NumPy, SciPy, and Matplotlib libraries. The code is freely available on GitHub (https://github.com/ifarup/anisotropic-daltonisation/).

4. Results

In order to test the proposed algorithm, we gathered a small set of colourful images from the web with dominant red–green and blue–yellow contrasts. All the gathered images are freely available under the CC0 licence (https://creativecommons.org/share-your-work/public-domain/cc0/). The images are shown in Figure 1.
Since the algorithm proposed above works with any CVD simulation algorithm, we could have used the dichromat simulation method of Brettel et al. [4], as in [17]. However, in order to test the behaviour for various kinds of dichromats and anomalous trichromats, we developed a simple parametrised version of their method. The simulation is performed directly in the RGB colour space using the two dichromatic matrices

$$ M_{\mathrm{RG}} = \begin{bmatrix} 1/2 & 1/2 & 0 \\ 1/2 & 1/2 & 0 \\ 0 & 0 & 1 \end{bmatrix} \quad \text{and} \quad M_{\mathrm{BY}} = \begin{bmatrix} 1/2 & 0 & 1/2 \\ 0 & 1/2 & 1/2 \\ 1/4 & 1/4 & 1/2 \end{bmatrix} $$

for red–green (RG) and blue–yellow (BY) dichromats, respectively. In order to also simulate anomalous trichromats, we blend in the original image with a certain weight $\alpha$,

$$ s(u) = \alpha M u + (1 - \alpha)\, u, $$

where $M$ is either $M_{\mathrm{RG}}$ or $M_{\mathrm{BY}}$, such that $\alpha = 0$ corresponds to colour normal observers, $0 < \alpha < 1$ to anomalous trichromats, and $\alpha = 1$ to dichromats.
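The parametrised simulation is straightforward to express in NumPy; a sketch with the matrix values given above:

    import numpy as np

    M_RG = np.array([[1/2, 1/2, 0],
                     [1/2, 1/2, 0],
                     [0,   0,   1]])
    M_BY = np.array([[1/2, 0,   1/2],
                     [0,   1/2, 1/2],
                     [1/4, 1/4, 1/2]])

    def simulate(u, M=M_RG, alpha=1.0):
        """s(u) = alpha M u + (1 - alpha) u for an H x W x 3 image:
        alpha = 1 gives a dichromat, 0 < alpha < 1 an anomalous
        trichromat, and alpha = 0 a colour normal observer."""
        return alpha * (u @ M.T) + (1 - alpha) * u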
We run the images through the following three daltonisation algorithms:
  • the simple daltonisation algorithm proposed in Equation (4),
  • the isotropic daltonisation algorithm proposed in Equation (3), which is essentially the same as the one proposed in [17], but with the simplifications described in Section 3.3, and the simple daltonisation as the initial condition, and
  • the anisotropic daltonisation algorithm proposed in Equation (5).
This is performed for four different simulated CVD observers: RG and BY, each with $\alpha = 1.0$ and $\alpha = 0.8$, resulting in a total of $14 \times 3 \times 4 = 168$ daltonised images. Typical results for one original image and one observer (RG dichromat with $\alpha = 1$) are shown in Figure 2.
Zooming into a detail of the sample image, we see in Figure 3 that the simple daltonisation algorithm leads to a significant loss of details. This is caused by the clipping of colours that end up being mapped outside the RGB gamut. The anisotropic daltonisation, on the other hand, is able to preserve much more of the image detail. The difference can be quantified by the gamut pixel fraction (GPF), i.e., the fraction of the image pixels that reside on the gamut boundary, indicated by pixel values of 0 or 1,

$$ \mathrm{GPF} = \frac{\#(\text{pixels on the gamut boundary})}{\#(\text{pixels in the image})}. $$
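Counting a pixel as lying on the gamut boundary when at least one of its channels is clipped to 0 or 1, which is one plausible reading of the definition above, the GPF can be computed as follows; the tolerance eps is an illustrative choice.

    import numpy as np

    def gamut_pixel_fraction(u, eps=1e-6):
        """Fraction of pixels with at least one channel at 0 or 1."""
        boundary = ((u <= eps) | (u >= 1 - eps)).any(axis=-1)
        return boundary.mean()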
The resulting GPF values for all the images are shown as a box plot in Figure 4. Although the distributions for isotropic and anisotropic daltonisation overlap, they are statistically significantly different according to the Wilcoxon signed-rank test, with a p-value of $p = 6.7 \times 10^{-8}$, and with a median in favour of the anisotropic daltonisation.
As noted already in the previous paper [17], visible halo artefacts are observed for the isotropic daltonisation method in many of the images. One such example is shown in Figure 5. The anisotropic daltonisation, on the other hand, produces far fewer halo artefacts. This can be quantified by comparing how similar the various daltonised images are to the original images in terms of their PSNR values,

$$ \mathrm{PSNR} = -10 \log_{10}(\mathrm{MSE}), \qquad \mathrm{MSE} = \frac{1}{MN} \sum_{i=0}^{M-1} \sum_{j=0}^{N-1} \left[ I(i,j) - K(i,j) \right]^2, $$

where $I$ and $K$ denote the two images being compared (with pixel values normalised to $[0, 1]$), and $M$ and $N$ denote the horizontal and vertical image resolutions, respectively. The resulting PSNR values for all the images are shown as a box plot in Figure 6. Although the distributions for isotropic and anisotropic daltonisation overlap, they are statistically significantly different according to the Wilcoxon signed-rank test, with a p-value of $p = 7.5 \times 10^{-11}$, and with a median in favour of the anisotropic daltonisation.
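With pixel values normalised to [0, 1], so that the peak signal is 1, the PSNR above reduces to -10 log10(MSE); a minimal sketch:

    import numpy as np

    def psnr(img_i, img_k):
        """PSNR in dB between two images with values in [0, 1]."""
        mse = np.mean((np.asarray(img_i) - np.asarray(img_k)) ** 2)
        return -10 * np.log10(mse)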

5. Conclusions

We have developed a new daltonisation algorithm based on the previously presented method [17]. Unlike the previous method, the proposed method is individualised in the sense that it can be used successfully with any CVD simulation method for dichromats and anomalous trichromats. The method is computationally much more efficient than the previously proposed one, owing to the simple daltonisation algorithm that acts as a much improved initial value for the iterative numerical scheme. The resulting images are free of the halo artefacts seen previously, owing to the local linear anisotropic diffusion that replaces the non-local isotropic diffusion of the previous method. As such, it is, to the author's knowledge, the first spatially dependent artefact-free daltonisation algorithm. Comparison of the proposed algorithm with other recent state-of-the-art daltonisation methods in behavioural and psychometric experiments is left for future research.

Funding

This research was funded by the Research Council of Norway through the project ‘Individualised Colour Vision-based Image Optimisation’, grant number 287209.

Conflicts of Interest

The author declares no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

References

  1. Valberg, A. Light Vision Color; John Wiley & Sons: Hoboken, NJ, USA, 2005.
  2. Hansen, E. Fargeblindhet: Årsaker, Forekomst og Typer; Gyldendal Akademisk: Oslo, Norway, 2010.
  3. Simon-Liedtke, J.T.; Farup, I. Using a Behavioral Match-to-Sample Method to Evaluate Color Vision Deficiency Simulation Methods. J. Imaging Sci. Technol. 2016, 60, 50409-1–50409-9.
  4. Brettel, H.; Viénot, F.; Mollon, J.D. Computerized simulation of color appearance for dichromats. JOSA A 1997, 14, 2647–2655.
  5. Simon-Liedtke, J.; Farup, I. Empirical Disadvantages for Color-Deficient People. In Proceedings of the AIC2015 Tokyo Color and Image Conference, International Colour Association (AIC), Tokyo, Japan, 19–22 May 2015; pp. 391–394.
  6. Ward, J.; Wright, T. Sensory substitution as an artificially acquired synaesthesia. Neurosci. Biobehav. Rev. 2014, 41, 26–35.
  7. Mancuso, K.; Hauswirth, W.W.; Li, Q.; Connor, T.B.; Kuchenbecker, J.A.; Mauck, M.C.; Neitz, J.; Neitz, M. Gene therapy for red–green colour blindness in adult primates. Nature 2009, 461, 784–787.
  8. Anagnostopoulos, C.N.; Tsekouras, G.; Anagnostopoulos, I.; Kalloniatis, C. Intelligent modification for the daltonization process of digitized paintings. In Proceedings of the International Conference on Computer Vision Systems (ICVS 2007), Bielefeld, Germany, 21–24 March 2007.
  9. Huang, J.B.; Chen, C.S.; Jen, T.C.; Wang, S.J. Image recolorization for the colorblind. In Proceedings of the 2009 IEEE International Conference on Acoustics, Speech and Signal Processing, Taipei, Taiwan, 19–24 April 2009; pp. 1161–1164.
  10. Kotera, H. Optimal daltonization by spectral shift for dichromatic vision. In Proceedings of the Color and Imaging Conference, Society for Imaging Science and Technology, Los Angeles, CA, USA, 12–16 November 2012; pp. 302–308.
  11. Kuhn, G.R.; Oliveira, M.M.; Fernandes, L.A. An efficient naturalness-preserving image-recoloring method for dichromats. IEEE Trans. Vis. Comput. Graph. 2008, 14, 1747–1754.
  12. Simon-Liedtke, J.; Farup, I.; Eschbach, R. On the edge: A scalable daltonization method focusing chromatic edges and contrast. Electron. Imaging 2017, 2017, 28–35.
  13. Simon-Liedtke, J.T.; Farup, I. Evaluating color vision deficiency daltonization methods using a behavioral visual-search method. J. Vis. Commun. Image Represent. 2016, 35, 236–247.
  14. Ribeiro, M.; Gomes, A.J. Recoloring algorithms for colorblind people: A survey. ACM Comput. Surv. 2019, 52, 1–37.
  15. Chatzistamatis, S.; Rigos, A.; Tsekouras, G.E. Image Recoloring of Art Paintings for the Color Blind Guided by Semantic Segmentation. In International Conference on Engineering Applications of Neural Networks; Springer: Berlin/Heidelberg, Germany, 2020; pp. 261–273.
  16. Iqbal, M.; Riaz, M.M.; Ghafoor, A.; Ahmad, A. Image recoloring using specular decomposition and in-painting. Imaging Sci. J. 2020, 68, 188–194.
  17. Simon-Liedtke, J.T.; Farup, I. Multiscale Daltonization in the Gradient Domain. J. Percept. Imaging 2018, 1, 010503-1–010503-12.
  18. Zhu, Z.; Toyoura, M.; Go, K.; Fujishiro, I.; Kashiwagi, K.; Mao, X. Naturalness- and information-preserving image recoloring for red–green dichromats. Signal Process. Image Commun. 2019, 76, 68–80.
  19. Gonzalez, R.C.; Woods, R.E. Digital Image Processing, 3rd ed.; Pearson Education Inc.: Upper Saddle River, NJ, USA, 2008.
  20. Jolliffe, I. Principal Component Analysis; Wiley Online Library: Hoboken, NJ, USA, 2002.
  21. International Electrotechnical Commission (IEC). IEC 61966-2-1:1999; Technical Report; IEC: Geneva, Switzerland, 1999.
  22. Rudin, L.I.; Osher, S.; Fatemi, E. Nonlinear total variation based noise removal algorithms. Phys. D Nonlinear Phenom. 1992, 60, 259–268.
  23. Alsam, A.; Farup, I. Spatial Colour Gamut Mapping by Means of Anisotropic Diffusion. In Computational Colour Imaging Workshop (CCIW); Lecture Notes in Computer Science; Springer: Berlin, Germany, 2011; Volume 6626, pp. 113–124.
  24. Farup, I.; Pedersen, M.; Alsam, A. Colour-to-Greyscale Image Conversion by Linear Anisotropic Diffusion of Perceptual Colour Metrics. In Proceedings of the Colour and Visual Computing Symposium, Gjøvik, Norway, 19–20 September 2018.
  25. Thomas, J.B.; Farup, I. Demosaicing of Periodic and Random Colour Filter Arrays by Linear Anisotropic Diffusion. J. Imaging Sci. Technol. 2018, 62, 050401-1–050401-8.
  26. Di Zenzo, S. A note on the gradient of a multi-image. Comput. Vis. Graph. Image Process. 1986, 33, 116–125.
  27. Sapiro, G.; Ringach, D.L. Anisotropic diffusion of multivalued images with applications to color filtering. IEEE Trans. Image Process. 1996, 5, 1582–1586.
Figure 1. The image dataset used for testing: highly saturated images from the web with dominant red–green and blue–yellow contrasts. All images are available under the CC0 licence.
Figure 2. Sample image for an RG α = 1 dichromatic observer. Top to bottom, left-hand column: original, simple daltonisation, isotropic daltonisation, and anisotropic daltonisation. The right-hand column shows the corresponding simulated images for the RG α = 1 dichromatic observer.
Figure 3. Loss of details in the simple daltonisation algorithm for an RG α = 1 dichromatic observer. Left to right: original image, simple daltonisation, and anisotropic daltonisation.
Figure 4. Box plot of the gamut pixel fraction (GPF) for the original and the simple, isotropic, and anisotropic daltonised images.
Figure 5. Halo artefacts in the isotropic daltonisation algorithm for a BY α = 1 observer. Left to right: original image, isotropic daltonisation, and anisotropic daltonisation.
Figure 6. PSNR (dB) of the daltonised versus the original images for the simple, isotropic, and anisotropic daltonisation algorithms.
