I wrote these notes so I'd have an easy resource to link to for people involved in graphics work. For a more in-depth overview of digital color, read all of The Hitchhiker’s Guide to Digital Color, starting with question 1.
Notes
sRGB (standard RGB) is this thing called a ‘color space’, but is also a means of compressing image data.
A way to store color information at 8 bits per color channel (red, green, blue) is desired, since 32 or even 16 bits per channel takes up too much space.
sRGB uses a compromise where brightness values (0.0 to 1.0) are stored non-linearly. 128 != 0.5.
The function is roughly (but not exactly) x^2.4, so for code 128 you get (128/255)^2.4 ≈ 0.19. Half of the 8-bit code values are dedicated to brightnesses below ~0.2; conversely, only 128 values are left for 0.2 to 1.0.
This is because it's easier to see the difference in monitor brightness between, say, 0 and 0.05 than between 0.95 and 1.
sRGB is used for virtually all color JPEG, PNG, etc. images. For extremely rare non-color purposes (e.g. normal maps for 3d models), linear 8-bit values are stored instead, where 128 should be read as 0.5.
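As a concrete reference, here is a minimal C sketch of the exact piecewise sRGB transfer functions (the helper names are mine, not from any library). Note that the exact curve maps code 128 to about 0.22 linear, a bit higher than the ~0.19 the pure x^2.4 approximation above gives; either way, 128 is nowhere near 0.5.

#include <math.h>
#include <stdio.h>

/* Exact piecewise sRGB decode: encoded value in [0, 1] -> linear brightness in [0, 1].
   (The notes above approximate this curve as x^2.4.) */
double srgb_to_linear(double c) {
    return (c <= 0.04045) ? c / 12.92 : pow((c + 0.055) / 1.055, 2.4);
}

/* Inverse: linear brightness -> sRGB-encoded value, both in [0, 1]. */
double linear_to_srgb(double l) {
    return (l <= 0.0031308) ? 12.92 * l : 1.055 * pow(l, 1.0 / 2.4) - 0.055;
}

int main(void) {
    /* Read as sRGB, the byte 128 is a fairly dark ~0.22 linear brightness... */
    printf("sRGB 128   -> %.3f linear\n", srgb_to_linear(128.0 / 255.0));
    /* ...read as plain linear data (e.g. a normal map channel), it is ~0.5. */
    printf("linear 128 -> %.3f\n", 128.0 / 255.0);
    /* A linear brightness of 0.5 needs sRGB code 188 to survive 8-bit storage. */
    printf("0.5 linear -> sRGB code %.0f\n", linear_to_srgb(0.5) * 255.0);
    return 0;
}

Compile with something like cc srgb.c -lm.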
Monitors generally accept sRGB-encoded data.
Using an sRGB GPU texture format means that the linear-to-sRGB and sRGB-to-linear transfer functions are automatically applied when reading and writing. Sometimes you want/need to perform the transfer functions yourself instead and keep sRGB-encoded data in a non-sRGB texture.
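As an illustration, here is a sketch of that choice in OpenGL terms (one example API; Vulkan, Direct3D, Metal and wgpu have equivalent formats). It assumes an existing GL 3.0+ context and a loader such as glad or GLEW; upload_texture is a hypothetical helper, not a library function.

#include <GL/gl.h> /* or your loader's header (glad, GLEW, ...) so GL_SRGB8_ALPHA8 is defined */

GLuint upload_texture(const unsigned char *rgba8_pixels, int w, int h, int is_srgb_encoded)
{
    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);

    /* GL_SRGB8_ALPHA8: the hardware applies sRGB-to-linear when the shader samples
       (and, with GL_FRAMEBUFFER_SRGB enabled, linear-to-sRGB when writing to an sRGB
       framebuffer). GL_RGBA8: the shader sees the stored 8-bit codes unchanged, so any
       transfer function has to be applied manually in shader code. */
    GLint internal_format = is_srgb_encoded ? GL_SRGB8_ALPHA8 : GL_RGBA8;

    glTexImage2D(GL_TEXTURE_2D, 0, internal_format, w, h, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, rgba8_pixels);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    return tex;
}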
In a shader, there is no loss of data when going from an sRGB texture (think of a greyscale 256-by-1 texture consisting of all 256 brightness levels) to another sRGB texture, as the sRGB-to-linear transfer function is applied and then perfectly inverted again. The same goes for non-sRGB data to non-sRGB data.
There IS going to be loss of data when going from non-sRGB to sRGB or from sRGB to non-sRGB, as some of the values will need to be rounded away when converting back to 8-bit.
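A quick way to convince yourself of both claims is to run every 8-bit code through the round trips (a sketch using the same piecewise helpers as before; names are mine): sRGB -> linear float -> sRGB returns every code unchanged, while re-quantizing the linear values to 8 bits collapses many of the dark codes.

#include <math.h>
#include <stdio.h>

double srgb_to_linear(double c) {
    return (c <= 0.04045) ? c / 12.92 : pow((c + 0.055) / 1.055, 2.4);
}

double linear_to_srgb(double l) {
    return (l <= 0.0031308) ? 12.92 * l : 1.055 * pow(l, 1.0 / 2.4) - 0.055;
}

int main(void) {
    int lossless = 0;
    int seen[256] = {0};

    for (int code = 0; code < 256; code++) {
        double linear = srgb_to_linear(code / 255.0);

        /* sRGB 8-bit -> linear float -> sRGB 8-bit: the curve is perfectly inverted. */
        int back = (int)(linear_to_srgb(linear) * 255.0 + 0.5);
        if (back == code) lossless++;

        /* sRGB 8-bit -> linear 8-bit: mark which linear codes actually get used. */
        seen[(int)(linear * 255.0 + 0.5)] = 1;
    }

    int distinct = 0;
    for (int i = 0; i < 256; i++) distinct += seen[i];

    printf("sRGB -> linear float -> sRGB kept %d of 256 codes\n", lossless);
    printf("sRGB -> linear 8-bit produced only %d distinct values\n", distinct);
    return 0;
}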
Digital Color Pipeline
If you take a photo with a digital camera and then display it on a web page, the pipeline goes something like this (the Graphviz source below spells out each step):
Thought Experiment
If human beings had 4 types of cones instead of 3 (maybe one at ~600nm), what aspects of this pipeline would need to change?
Graphviz Source
This is here mostly for my own benefit.
digraph g {
"Light IRL\nformat: photons of various wavelengths" -> "Camera Sensor\nHits Bayer filter, converted to RGB triplet\nformat: RGB of some kind" -> "Camera Raw\nformat: RGB, basically unprocessed from sensor" -> "Jpeg\nHeavily processed, demosaicing etc.\nformat: 8-bit sRGB" -> "GPU Texture\nDecompressed and extended with an alpha channel\nformat: RGBA8_SRGB" -> "Shader Code\nConverted from 8-bit to float, linearized and sampled (all done by hardware)\nformat: 32-bit float" -> "GPU Surface Texture\nConverted back into 8-bit sRGB (either done by hardware or software (e.g. in a compute shader))\nformat: RGBA8_SRGB" -> "Driver & HDMI/DisplayPort cable\nformat: presumably still sRGB but driver specific" -> "Monitor\nDecoded, linearized, used to light LEDs as a fraction of max brightness" -> "Light IRL again\nformat: still photons" -> "Human Eye\nLots of complex biology stuff. Absorbed by rods & cones\nformat: ???" -> "Brain\nformat: ???"
}