Background

I have an application that displays large (2500 x 3000) 12-bit grayscale images within an Image control. I am using the Gray16 pixel format and scaling the data to 16 bits by simple bit-shifting. The images have low contrast, so I implemented brightness and contrast adjustment by setting the image's Effect to a pixel shader that accepts brightness and contrast parameters. This implementation is very efficient and avoids moving large amounts of data around in memory.

Problem

When the contrast is adjusted to a high setting, the displayed image exhibits unacceptable contouring, as though the bit depth of the image has been reduced to 5 or 6 bits. As a baseline test, I implemented the same functionality without a pixel shader: I applied the contrast and brightness settings to the image data, reduced the bit depth to 8 bits, and displayed the result with the Gray8 pixel format (one I have a great deal of familiarity with). The latter implementation showed little to no contouring.

Question

Does anyone know why applying a pixel shader would reduce the apparent bit depth of an image? Does the pixel shader HLSL code operate on reduced bit-depth data (e.g. 8 bits or fewer), perhaps as a function of the hardware/drivers or the OS? All values in the pixel shader are floats scaled from 0 to 1, so the quantization level is not visible from the shader code itself (at least to my knowledge).

Thanks in advance for any feedback.
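For reference, here is a minimal sketch of the kind of brightness/contrast shader described above. The register assignments, parameter names, and the exact adjustment formula are assumptions for illustration, not the actual code in question; all arithmetic is on 0-to-1 floats, as noted.

// Hypothetical WPF-style brightness/contrast pixel shader (ps_2_0).
// Parameter names and register slots are assumed, not taken from the original code.
sampler2D input : register(s0);

float Brightness : register(c0);   // additive offset, roughly -1..1
float Contrast   : register(c1);   // multiplicative gain, 1.0 = unchanged

float4 main(float2 uv : TEXCOORD) : COLOR
{
    float4 color = tex2D(input, uv);
    // Scale around mid-gray for contrast, then add brightness; inputs and
    // outputs are normalized 0..1 floats, so no quantization is visible here.
    float3 adjusted = (color.rgb - 0.5) * Contrast + 0.5 + Brightness;
    return float4(saturate(adjusted), color.a);
}

Nothing in shader code like this exposes the precision of the underlying texture or render target, which is exactly why the question concerns where the quantization might be introduced.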