Re: How to Display 16-bit grayscale
- From: "Arthur J. O'Dwyer" <ajo@xxxxxxxxxxxxxxxxxxxxx>
- Date: Sat, 19 Nov 2005 19:52:16 -0500 (EST)
On Sun, 20 Nov 2005, [jongware] wrote:
<benn686@xxxxxxxxxxx> wrote... I'm currently displaying a grayscale image (black and white) using 8 bits. I am able to display a nice gradient from black to white in 256 steps.
video_buffer = (_uchar8 *)ddsd.lpSurface;
video_buffer[row * bytes_per_line + col] =
    RGB(col, col, col);   // Use the same value for red, green, blue to get b&w
Now, I would like to display another image (black & white grayscale) using 16 bits per pixel. How can I display a grayscale to go from pure black to pure white with 16 bits of resolution (2^16 steps)?
If your video card doesn't support 16-bit grayscale, it will never ever display 16 bits of grayscale. (This is *not* the same as 24-bit color; that's 3 times 8 bits. Full color at 16 bits per channel would be '48-bit', and yes, such cards exist, but most likely you don't have one.) If you only want to display *something*, just lop off the 8 least significant bits and display the image in glorious 8-bit grayscale. You don't have to throw away any bits in memory, only on display. Any precalculation on the image *before* displaying should use the full 16 bits.
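The lopping-off step above is just a right shift. A minimal sketch in C (the helper name is hypothetical, not from the original post):

```c
#include <stdint.h>

/* Hypothetical helper: reduce a 16-bit gray sample to 8 displayable
   bits by dropping the 8 least significant bits.  The full 16-bit
   value stays in memory for any precalculation; only the displayed
   copy loses precision. */
static uint8_t gray16_to_8(uint16_t g)
{
    return (uint8_t)(g >> 8);
}
```

For example, `gray16_to_8(0xFFFF)` gives `0xFF` (pure white) and `gray16_to_8(0x0080)` gives `0x00`, which is exactly the information loss the dithering discussion below tries to soften.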
Good, technically correct answer. However, depending on how badly the OP wants to get something "like" 16-bit grayscale color, he might consider:
(1) Dithering. That is, taking the 16-bit image and reducing it to an 8-bit image by a method more clever than simply lopping off eight whole bits of information. For example, instead of displaying the 4x4 image
    8888
    8888      (16-bit: 8 := 0x0080)
    8888
    8888

as

    0000
    0000      (8-bit, lower 8 bits lopped off to leave 0 := 0x00)
    0000
    0000

you could use a dithering algorithm to display

    0101
    1010      (8-bit, dithered to have the same "average" gray value)
    0101
    1010
The state of the art in dithering is really quite good. Google for "dithering algorithm," or post specific questions here if you get stuck.
This won't get you 16 bits, and it can produce artifacts or spurious high frequencies in your data, but it will produce a nicer displayed image than merely lopping off bits willy-nilly.
(2) Hardware hacks. Back in the days of graphics demos, it was not uncommon to eke more bits out of a display by doing things like cycling
the palette in the middle of a horizontal retrace, or strobing quickly
between 0000 and 2222 to give the effect of 1111, or other such hacks.
(As recently as several years ago, I remember it being big to talk about "4-bit grayscale" on the TI-83 or -89, which technically only have 1 bit per pixel: on or off.)
This option is completely inappropriate for portable code, or code designed to run robustly, but it can give you impressive results if it
/is/ appropriate --- games, demos, etc.
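The strobing trick above (flashing 0000 and 2222 to fake 1111) is temporal dithering. A minimal sketch in C, assuming per-frame redraws and a made-up helper name; real demos did this with palette tricks tied to the retrace, which this does not attempt:

```c
#include <stdint.h>

/* Hypothetical sketch of temporal dithering: when truncation loses
   at least half a gray step, show the level above on alternate
   frames so the eye averages the two.  Folding pixel parity into
   the phase turns full-screen flicker into a moving checkerboard. */
static uint8_t strobe16_to_8(uint16_t g, unsigned frame, int x, int y)
{
    uint8_t hi = (uint8_t)(g >> 8);    /* truncated 8-bit value */
    uint8_t lo = (uint8_t)(g & 0xFF);  /* lost fraction, 0..255 */
    if (hi < 0xFF && lo >= 0x80 && ((frame + x + y) & 1))
        hi++;                          /* round up this frame */
    return hi;
}
```

Averaged over two frames, a sample like 0x0180 alternates between 1 and 2, perceptually landing near 1.5, which is one extra bit of apparent depth.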
I'm sure other methods of producing "almost-16-bit" color exist, too.