graphics.Clear(color) - Incorrect color!
-
I have the following code, however it paints a different color :S

Color blueColor = Color.FromArgb(41, 22, 111);
g.Clear(blueColor);

It displays OK on a display with a 32-bit color depth, but when I change it to 16-bit, the color appears lighter than it should. I'm guessing this is because of the unused alpha value, or maybe it's doing something strange with the gamma - beats me. Does anyone know how to fix this so it displays the correct color on a 16-bit display? Thanks!
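In case it helps, here is a minimal self-contained version of what I'm doing. I'm painting in a WinForms Paint override; DemoForm and the rest of the scaffolding here are just for illustration:

using System.Drawing;
using System.Windows.Forms;

class DemoForm : Form
{
    protected override void OnPaint(PaintEventArgs e)
    {
        // The color that comes out wrong on a 16-bit display.
        Color blueColor = Color.FromArgb(41, 22, 111);
        e.Graphics.Clear(blueColor);
    }

    [STAThread]
    static void Main()
    {
        Application.Run(new DemoForm());
    }
}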
-
It doesn't sound strange to me, since RGB is a 24-bit value, even if computers like powers of 2. If no color table is used, the 24-bit value needs to be encoded as a 16-bit color, which gives 5 bits for each channel (red, green, blue) with 1 bit unused. I may not be completely right about that. For example, a 16-bit color can be converted to a 32-bit value with:

Color32 = (  (((Color16 >> 10) & 0x1F) * 0xFF / 0x1F)
          | ((((Color16 >>  5) & 0x1F) * 0xFF / 0x1F) << 8)
          | ((( Color16        & 0x1F) * 0xFF / 0x1F) << 16) );

Looks weird. A 16-bit color may look like this:

 7 6 5 4 3 2 1 0   7 6 5 4 3 2 1 0   <- Bits
 U R R R R R G G   G G G B B B B B   <- Color

 U = Unused
 R = Red
 G = Green
 B = Blue
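To see roughly what the 16-bit surface does to that particular color, here is a small sketch going the other way: 24-bit down to 16-bit and back. It assumes an RGB565 surface (see the comment below about the extra green bit) and assumes the driver expands each channel back to 8 bits by replicating the high bits, which is only one possible behavior:

using System;
using System.Drawing;

class ColorDepthDemo
{
    // Round-trip one 8-bit channel through 'bits' bits of precision:
    // truncate to the top 'bits' bits, then expand back to 8 bits by
    // replicating the high bits into the low ones.
    static int RoundTrip(int value, int bits)
    {
        int quantized = value >> (8 - bits);
        return (quantized << (8 - bits)) | (quantized >> (2 * bits - 8));
    }

    static void Main()
    {
        Color original = Color.FromArgb(41, 22, 111);

        // Assuming an RGB565 surface: 5 bits red, 6 bits green, 5 bits blue.
        Color approx = Color.FromArgb(
            RoundTrip(original.R, 5),
            RoundTrip(original.G, 6),
            RoundTrip(original.B, 5));

        Console.WriteLine("Requested (32-bit): " + original);
        Console.WriteLine("Approximate 16-bit: " + approx);
    }
}

Comparing the two printed colors shows how much precision each channel loses; whether the result looks lighter or darker than the original depends on how your particular hardware rounds and expands the values.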
-
IIRC, rather than wasting a bit, the green channel has 6 bits instead of the 5 that red and blue get. I've no idea what makes green special.
Yep, you are correct about the extra green bit: the human eye is more sensitive to variations of green (I guess since our tree-climbing/plains-crawling days). //Roger
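For what it's worth, packing an 8-bit-per-channel color down to 16 bits looks something like this. This is just a sketch assuming the usual RGB565 layout, not anything GDI+ exposes directly:

// Pack an 8-bit-per-channel color into RGB565:
// 5 bits red (bits 11-15), 6 bits green (bits 5-10), 5 bits blue (bits 0-4).
static ushort ToRgb565(System.Drawing.Color c)
{
    return (ushort)(((c.R >> 3) << 11) | ((c.G >> 2) << 5) | (c.B >> 3));
}

Note how green keeps one more bit of precision than red and blue.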
-
The human eye is more sensitive to differences in green.