From RGB to L*a*b* color space (2024)

(kaizoudou.com)

40 points | by kqr 4 days ago

6 comments

  • guidedlight 2 hours ago
    This article describes how to convert from the sRGB color space, not RGB.

    sRGB, like L*a*b*, is device independent, so a transformation between the two is possible.

    RGB, on the other hand, is device dependent and would therefore require a device ICC profile to convert to L*a*b*.
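    The sRGB-to-L*a*b* path the article describes can be sketched in a few lines. This is a minimal, illustrative implementation using the standard D65 white point and the usual sRGB matrix coefficients; a production conversion would want higher-precision constants and input validation.

    ```python
    # D65 reference white, the white point sRGB is defined against.
    XN, YN, ZN = 0.95047, 1.00000, 1.08883

    def srgb_to_lab(r, g, b):
        """Convert gamma-encoded sRGB components in [0, 1] to CIE L*a*b* (D65)."""
        # 1. Undo the sRGB transfer function to get linear light.
        def linearize(c):
            return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4
        r, g, b = linearize(r), linearize(g), linearize(b)

        # 2. Linear sRGB -> CIE XYZ (standard D65 matrix).
        x = 0.4124 * r + 0.3576 * g + 0.1805 * b
        y = 0.2126 * r + 0.7152 * g + 0.0722 * b
        z = 0.0193 * r + 0.1192 * g + 0.9505 * b

        # 3. XYZ -> L*a*b* via the piecewise cube-root compression f(t).
        def f(t):
            return t ** (1 / 3) if t > (6 / 29) ** 3 else t / (3 * (6 / 29) ** 2) + 4 / 29
        fx, fy, fz = f(x / XN), f(y / YN), f(z / ZN)
        return 116 * fy - 16, 500 * (fx - fy), 200 * (fy - fz)
    ```

    As a sanity check, white (1, 1, 1) maps to approximately (100, 0, 0) and black (0, 0, 0) to (0, 0, 0).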

  • mattdesl 1 hour ago
    For those just learning about perceptual colour spaces, I’d recommend exploring OKLab, which is simpler to implement and overcomes some of the problems of CIELab.

    https://bottosson.github.io/posts/oklab/
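    To illustrate the "simpler to implement" claim, here is a sketch of the forward conversion, with the matrix coefficients taken from the linked post. It assumes linear (not gamma-encoded) sRGB input in [0, 1]; undoing the sRGB transfer function first is left out for brevity.

    ```python
    def linear_srgb_to_oklab(r, g, b):
        """Linear sRGB in [0, 1] -> OKLab, using the matrices from the post above."""
        # 1. Linear sRGB -> approximate cone (LMS) responses.
        l = 0.4122214708 * r + 0.5363325363 * g + 0.0514459929 * b
        m = 0.2119034982 * r + 0.6806995451 * g + 0.1073969566 * b
        s = 0.0883024619 * r + 0.2817188376 * g + 0.6299787005 * b
        # 2. Nonlinearity: a plain cube root instead of CIELab's piecewise f(t).
        l, m, s = l ** (1 / 3), m ** (1 / 3), s ** (1 / 3)
        # 3. LMS -> Lab-like lightness and opponent axes.
        return (0.2104542553 * l + 0.7936177850 * m - 0.0040720468 * s,
                1.9779984951 * l - 2.4285922050 * m + 0.4505937099 * s,
                0.0259040371 * l + 0.7827717662 * m - 0.8086757660 * s)
    ```

    White (1, 1, 1) comes out as approximately (1, 0, 0): lightness 1 and both opponent axes at zero.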

    • the_mitsuhiko 1 hour ago
      Oklab is awesome, and it’s such a great example of one person putting in time on a problem that many just glanced over but still complained about for years. And it was so good that it was adopted everywhere.
  • srean 53 minutes ago
    I admit right away that I am an absolute novice in this space, but I have a few questions. The question I always had is: why do we not model it closer to the actual tangible physics and biology going on?

    For example, the physical reality is the different frequencies (equivalently, wavelengths) of light. The biological reality is that different types of cells on our retina respond with differing intensity to each of those frequencies.

    So to my naive mind, a way of modeling color is to have (i) a forward model that maps light frequencies to response intensities of the different types of cellular light receptors and (ii) an inverse model that estimates the frequency mix of light from the cellular responses.

    That is, have two spaces: (i) the light frequency space (a list of tuples of frequency and the intensity/power at that frequency) and (ii) the cellular response space.

    Once we have these, we can go from a pigment or excited phosphor to a biological response in a two step process.

    From (a) the pigment/phosphor (plus the frequency mix of the illuminating light) to the output light frequencies, and (b) from those frequencies to the cellular response.

    For all processing, make frequencies the base space to work in (allowing us to change/personalize the forward model).

    Yes, the inverse model leads to an ill-posed inverse problem, but we are now very knowledgeable about how to solve those.

    The frequencies may need to be discretized for convenience.

    I am obviously a novice and don't know much about modeling color, but this way of modeling seems more grounded in the tangibles. This also gives a way to model how a color-blind person might perceive a picture.

    Is the reason that we do not do it this way its complexity?

    Eager to be illuminated (pun intended).
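    The forward model sketched above, with the frequencies discretized, is just an inner product of the spectrum with each receptor's sensitivity curve. A toy version (the sensitivity numbers below are made up for illustration, not real cone responses):

    ```python
    # Toy forward model: each receptor's response is the inner product of the
    # incoming power spectrum with that receptor's sensitivity curve.
    # Four discretized wavelength bins; three receptor types.
    SENSITIVITY = [
        [0.9, 0.4, 0.1, 0.0],  # "long"-wavelength receptor (made-up numbers)
        [0.3, 0.9, 0.4, 0.1],  # "medium"-wavelength receptor
        [0.0, 0.2, 0.8, 0.9],  # "short"-wavelength receptor
    ]

    def receptor_response(spectrum):
        """Map a discretized power spectrum to the three receptor responses."""
        return [sum(s * p for s, p in zip(row, spectrum)) for row in SENSITIVITY]
    ```

    For example, a spectrum with all of its power in the first bin, `[1, 0, 0, 0]`, yields responses `[0.9, 0.3, 0.0]`. Personalizing the model (e.g. for color blindness) would mean swapping in different sensitivity rows.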

    • jansan 42 minutes ago
      The problem is that this will only work in one direction. You can calculate the stimulation of the photoreceptors for a certain spectrum, but not the other way around. For example, the eye cannot distinguish between purple light consisting of one specific wavelength and purple light mixed from red and blue wavelengths, because both give the same stimulation of the receptors. So there is an infinite number of possible spectra for any given stimulation of the photoreceptors. All we can do is take the stimulation values (X, Y and Z) and convert from there to all kinds of color models and back.

      Your approach would make a lot of sense for sensors that are full spectrum analyzers, but the eye isn't one.
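      This metamerism argument can be made concrete with a toy model: projecting a high-dimensional spectrum onto only three receptor responses has a non-trivial null space, so physically different spectra can stimulate the receptors identically. The sensitivity numbers below are invented purely to demonstrate that.

      ```python
      # Toy illustration of metamerism: three receptor types, four wavelength
      # bins. The sensitivities are made up, chosen so the projection has an
      # obvious null-space direction.
      SENS = [
          [1.0, 0.0, 0.0, 1.0],
          [0.0, 1.0, 0.0, 1.0],
          [0.0, 0.0, 1.0, 1.0],
      ]

      def respond(spectrum):
          return [sum(s * p for s, p in zip(row, spectrum)) for row in SENS]

      spectrum_a = [1.0, 1.0, 1.0, 2.0]  # two physically different spectra...
      spectrum_b = [2.0, 2.0, 2.0, 1.0]
      # ...that stimulate all three receptors identically (both give [3, 3, 3]),
      # so no inverse map can tell them apart from the responses alone.
      ```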

      • srean 38 minutes ago
        You are talking about the inverse problem.

        Yes, because it's not a one-to-one map we cannot invert it uniquely, but that's OK: we can maintain a distribution over the possible frequencies consistent with the response. That's how it's done in other areas of mathematics where similar non-bijections arise.

        Much thanks for answering though, because I suspect I am asking a very basic question.

        • sillysaurusx 30 minutes ago
          You're correct, for what it's worth. I too have always wished that light was modeled based on physics, not on how humans happen to see.

          Unfortunately the problem is data acquisition (cameras), and data creation (artists). You need lots of data to figure out e.g. what a certain metal's spectrum is, and it's not nearly as clear-cut as just painting RGB values onto a box in a game engine.

          For better or worse, all our tools are set up to work in RGB, regardless of the color space you happen to be using. So your physics-based approach would have the monumental task of redefining how to create a texture in Photoshop, and how to specify a purple light in a game engine.

          I think the path toward actual photorealism is to use ML models. They should be able to take ~any game engine's rendered frame as input, and output something closer to what you'd see in real life. And I'm pretty sure it can be done in realtime, especially if you're using a GAN based approach instead of diffusion models.

          • srean 28 minutes ago
            I see. Makes sense.
  • 127 2 hours ago
    For people who aren't yet aware of it: https://bottosson.github.io/posts/oklab/
  • thangalin 2 hours ago
    A practical application is pie charts; apologies for the XSLT:

    * https://repo.autonoma.ca/repo/delibero/blob/HEAD/source/xsl/...

    * https://repo.autonoma.ca/repo/delibero/blob/HEAD/source/xsl/...

    An example pie chart is on page 33, section 9.2.3:

    * https://repo.autonoma.ca/repo/delibero/raw/HEAD/docs/manual/...

    * https://i.ibb.co/ymDLcPNj/pie-chart.png (screenshot)

    The colours are harmonious, visually distinct, and not hard-coded to a fixed number of slices.
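    One common way to get exactly that property (a sketch of the general idea, not necessarily what the XSLT above does) is to hold L* and C* fixed and space the hue angle evenly in the LCh form of Lab: equal lightness and chroma keep the palette harmonious, and equal hue spacing keeps the slices distinct for any slice count. The L*/C* defaults here are arbitrary illustrative values.

    ```python
    import math

    def slice_colors(n, lightness=65.0, chroma=45.0):
        """Return n L*a*b* colors with evenly spaced hue at fixed L* and C*."""
        colors = []
        for i in range(n):
            h = 2 * math.pi * i / n  # hue angle in radians
            # LCh -> Lab: a* = C*cos(h), b* = C*sin(h)
            colors.append((lightness, chroma * math.cos(h), chroma * math.sin(h)))
        return colors
    ```

    The Lab triples would then be converted to sRGB for display (clamping any out-of-gamut results).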

  • aparadja 1 hour ago
    If you ever need to generate a gradient between two colors in your code, interpolating in the Lab color space is an awesome option. A simple linear interpolation of the components gives impressively beautiful results.

    (Although, like several other commenters, I do recommend OKLab.)
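    The interpolation itself is trivial: convert both endpoints to Lab, lerp each component, and convert back for display. A minimal sketch (it works identically whether the endpoint triples are CIELab or OKLab):

    ```python
    def lerp_lab(c0, c1, t):
        """Linearly interpolate between two Lab colors; t in [0, 1]."""
        return tuple((1 - t) * a + t * b for a, b in zip(c0, c1))

    def gradient(c0, c1, steps):
        """A list of `steps` evenly spaced colors from c0 to c1, inclusive."""
        return [lerp_lab(c0, c1, i / (steps - 1)) for i in range(steps)]
    ```

    For example, the midpoint between black (0, 0, 0) and white (100, 0, 0) is mid-gray (50, 0, 0), which is exactly the perceptual halfway point that naive sRGB interpolation misses.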