As we learned through our entry question #1, changing the value of an RGB slider changes the intensity of light emitted from a single channel of our display. Depending on your DCC of choice, you might see a range of values change from some small value to some larger value. Sometimes, instead of a legible value such as 0.6, you may see goofy-looking values such as 153, or even more confusing, cryptic-looking codes such as 0x99.

This is a nice little segue right into our foundational concept number two, and it too is a nice tidy question…
Question #2: What does 0.6 represent in a typical DCC context?
Ponder on it a little bit, as it’s another one of those so-obvious-that-most-folks-haven’t-thought-about-it sorts of questions. What is the something that 0.6 is part of? Why do some of our graphics friends use other representation systems? I’ll see you when you come back…
As you have probably realized, making sense of the value 0.6 is rather less tricky than pondering what 153 means, or the more ridiculous and bats*it stupid value 0x99. Decimals are a sane way to begin our deeper dives into RGB colour. The refreshingly good news is that the last two absurd representations are in fact synonymous with the more legible 0.6 above. The better news is that after this post you won’t have to suffer listening to the crazies who insist on using the latter two awful formats, because you’ll know they don’t carry any more magical meaning than the entirely comprehensible decimal approach.
The first thing we need to understand is that, crucially, 0.6 is not a universally meaningful value with some sort of absolute attached meaning. In fact, it’s absolutely meaningless in the bigger picture, and it is wise to think of it as a code value; it is an encoded value that requires decoding. Before we can fully comprehend how to decode the code values, we need to backtrack a little and realize that in most DCC applications, 0.6 represents a value within a range of values. That is, just like your basic understanding of percentages, it’s nothing more than a ratio, and that ratio specifically relates to our first answer regarding the intensity of light output.
Answer #2: In an RGB DCC application, 0.6 is a ratio between a minimum and maximum, representing some ratio of light intensity.
You might be wondering why I said some in the above statement and just what that some is. You’ll have to wait a few more questions before we can answer that question with a degree of authority.
At this point though, it is worth noting how 0.6 holds a synonymous meaning with the idiot-inducing value 153 or the other circle-jerk value 0x99. Understanding that requires focusing on a small, itsy bit about representing values in our computing devices. If you are already way ahead of this teeny dive and know where it is going, feel free to skip it. If the values are a bit head-scratchy for you, read on…
At risk of boring the hell out of you, computers are pretty low-tech calculators. You are probably well aware that computers process based on low-level electronic toggles known as bits. If we had a single bit, we could create meaning for the bit by assigning it context. The bit could represent up or down, on or off, yes or no, yin or yang, apple or orange, or whatever meaning we choose to assign to it.
If we were fleshing out an RGB model, we could say a single bit of information has a range of 2¹, or two, positions, just like a coin. That means we could assign the meaning of those positions to represent the states of our little lights in our little pixels as being either on or off, or minimum and maximum. If we add a single bit of information, we have 2² positions, or four!¹ That might mean minimum, maximum, and two values in between. Carry on with this, and we hit the magic number eight, which, for reasons I won’t go into here, is a historical quantity of information “steps” that we have used to represent colour for a long while. It’s a boring historical romp into computer colour for those who want to dive deeper.
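The bit-doubling arithmetic above can be sketched in a few lines of Python. This is purely illustrative; no DCC software is involved:

```python
# Each additional bit doubles the number of unique combinations:
# 1 bit -> 2 states, 2 bits -> 4 states, ... 8 bits -> 256 states.
for bits in (1, 2, 8):
    print(f"{bits} bit(s) gives 2**{bits} = {2 ** bits} combinations")
```

Eight bits is where we land on the familiar 256 steps.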
Why is eight bit relevant here? Eight bits give rise to the two latter encodings we see above. That is, the value 153 and the ridiculous 0x99 are nerdy computer methods of describing the much more legible value 0.6.

At eight bits, we can encode a total of 256 unique combinations. If we were to go a step further in refining the meaning of those combinations, we could interpret them as whole numbers, or in nerdy-speak, integers. Under such an interpretation, the values could be thought of as simply a number of steps from the first combination to the last, which would be from 0 to 255. So what then, is 153/255? You guessed it! 0.6! And if we go down the gobbledygook hole of computer history, we discover that 0x99 is hexadecimal for the integer whole-number value 153, which again, is nothing more than a fractional representation of the maximum value, or more simply, the decimal value 0.6. Seems like a lot of work to make 0.6 less legible, agreed²?
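For the skeptical, the equivalence of the three spellings can be verified with a throwaway Python sketch; the variable names are just for illustration:

```python
# The same ratio of light intensity, spelled three ways.
integer_code = 153                  # 8-bit integer code value
hex_code = 0x99                     # the identical integer, in hexadecimal
decimal_code = integer_code / 255   # normalise by the maximum step

assert hex_code == integer_code     # 0x99 is just 153 in disguise
assert decimal_code == 0.6          # 153/255 is just 0.6 in disguise
print(decimal_code)                 # 0.6
```

Three notations, one ratio.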
Thankfully, I’ll keep the cryptic WANKer speak to a minimum, and try to refer to the more sane decimal representation where possible.
Given we know that RGB sliders impact the intensity of emission of the lower-level reddish, greenish, and blueish lights that compose a single pixel, we can now concretely cut through the bulls*it of the various representations of code values. 0.6 represents a ratio along the way from minimum to maximum. We know it is somehow tied to the lights in our display, but sadly, we aren’t much better off than when we started. What the hell is 0.6, or 60%, of exactly? Better: if we set it to 100%, does the colour change from when we had the light set at 60%? That seems like a pretty darn good next question…
¹ A simple way to understand the basic math is to think in terms of coins. A single coin has a number of sides (2), which gives us the base, and the number of coins gives us the exponent. If we had 6 coins, the total number of unique combinations if we laid them out on a table would be 2⁶, or 64.
² I have some bad news for you WANKs who think you are doing the world a wonderful service by confusing the hell out of everyone using eight-bit code values like 153 or the ever more super-secret-I-know-what-I-am-doing-and-you-don’t cryptic value 0x99; you haven’t added any meaning. No really, you haven’t. Feel free to head home with your lunch box and tell your mom I insulted you. To everyone else, let’s wrap this nightmare of nerd-speak up and forge ahead to the next question.
6 replies on “Question #2: What the F*ck Does 0.6 Mean?”
153 is actually less accurate, as it only applies to 8-bit colour, whereas 0.6 could also work in 16- or 32-bit colour depths.
Complete agreement. The issue with integer representations, and especially hex codes, is that they are truly treated as some sort of computer magic. If I were to speculate, I’d think that the confusion comes from the idea that representation accuracy is conflated with meaning. That is, many folks believe that if they meticulously copy ten decimal places of RGB values, there is meaning in the value alone. Hex codes feel like an extension of this conflation of meaning, buried beneath a layer of obfuscating hexadecimal representation.
Hey Troy, just finished Q1 and now I’m here. I’m looking for a deep understanding of colour in the digital world, and this seems like the perfect place. Thank you for hg2dc!
I completely agree on the ridiculousness of hexadecimal notation. But I do think “bit-notation” makes sense. Not all percentages (0.6) can be accurately represented by an 8-bit number (153). So using percentages would require the program to guess whether to round up or to round down, e.g. 0.7 × 255 = 178.5. This also means that there has to be a consensus in the programming world on how to represent 0.7 bit-wise, which may complicate things. Although we might agree that the difference between rounding up or rounding down is negligible in the end result…
I apologize in advance if I’m a wanker 😉
Hello Claus! Glad you find it a suitable entry point.
Regarding Hex, the issue has cascaded; there is so much poor understanding of colour science, or more specifically of light transport, that legend, lore, and ritual have taken over.
A few points I’d make…
1. We are now living in an era of higher bit depth, and as such, hexadecimal notation is a dumpster fire for expressing albedos or other granular values.
2. Decimal notation is as close to basic comprehension as anything.
3. Sadly, as a result of confusion, hexadecimal has taken on a new significance: as a means of communicating colour for graphic designers. How many thousands of print runs have been lost to colour miscomprehension and the resulting sense of powerlessness, with someone reaching for a hex code as saviour? As a result, folks with colour PTSD have gravitated to the cryptic and worthless hexadecimal format, thinking it is more “computer-y” and therefore more absolute. All it amounts to, however, is yet another layer of obfuscation, and graphic designers’ ability to communicate colour is no better off.
Hopefully a general raising of the tide can happen, and we can move away from these ridiculous representations.
They aren’t helping anyone.
My issue was more with the fact that I see an encoded notation (0…1) as an obfuscation of a whole integer notation (0…255 in 8-bit). If you read e.g. 0.7, you basically don’t know if you’re outputting the 178th or the 179th level of brightness of any specific subpixel, right?
I have a feeling that might be irrelevant though, could you explain why?
Quantisation absolutely matters.
However, in this case, the *meaning* of the value is not communicated in an integer increment. That is, the value is better represented as a float, given that a float representation is more legible as a ratio relationship.
Utilizing integer encoded values says nothing about the ratio relationship, and thus obfuscates the meaning: a ratio of tristimulus magnitude.
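The quantisation point discussed above can be sketched in a few lines of Python; the names are purely illustrative:

```python
import math

# 0.7 falls between two 8-bit code values, so encoding it
# forces a choice of rounding rule.
ratio = 0.7
scaled = ratio * 255          # ~178.5; no exact 8-bit step exists
lower = math.floor(scaled)    # rounding down -> 178
upper = math.ceil(scaled)     # rounding up   -> 179
print(lower, upper)           # 178 179
print(lower / 255 == ratio)   # False: the round trip is not exact
```

The float is the ratio being communicated; the integer is one quantised approximation of it.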