Re: Simple Autoexposure algorithm for an Image sensor

From: Robert Wessel (robertwessel2_at_yahoo.com)
Date: 10/08/04

clapper@bluewin.ch (Bryan) wrote in message news:<a01fd7ec.0410061138.581728eb@posting.google.com>...
> Hi,
>
> I'm involved in a project where we are building a still image capture
> device. We have a CMOS image sensor (Pixart PAS106B) connected via I2C
> to a philips lpc2106 (Arm 7) microcontroller.
>
> I'm able to capture images without trouble, however, I notice -
> depending on the ambient light - my image is either too light or too
> dark. To correct this problem, I want to write a small (hopefully
> simple) and fast autoexposure routine to adjust the exposure setting
> and hopefully get a more consistent image when the light varies.
>
> I was hoping someone might be able to point me to some resources
> (books, news groups, source code, people, etc) that might be able to
> help me understand the methods and best practices for accomplishing
> this.
>
> I was hoping that I could just get a "luminance" reading from the
> image sensor, but I guess I can't. So, I'm pretty sure that I need to
> evaluate (somehow) the light/darkness of the image, change the image
> sensor's exposure settings, grab another frame, test it, and so on.
>
> If anyone has any experience or knows where I might go to get a grip
> on the best way to do this, I'd be grateful.

The first thing to keep in mind is that "auto exposure" from behind
the lens is not generally possible, unless you have some idea of
what you're looking at. The problem is that unless you know how much
light is *supposed* to be reflected, there's no way of telling how
brightly the object is lit, and therefore you can't determine the
correct exposure (which is properly based on the level of
illumination, *not* the level of reflection).

You'll notice that serious photographers are always running around
with incident light meters, which measure the intensity of light
falling on the object(s) in question, and they compute their proper
exposure from there. In short, a certain intensity of light falling
on a black object should get the same (film/sensor) exposure as the
same amount of light falling on a white object, even though there will
be much more light reflected towards the camera in the latter case.

So the best you can do is fake it. The simplest scheme, used for many
decades in cameras with built-in exposure meters, is to assume that
the scene has a certain reflectivity, and just measure the intensity
of all the light coming through the lens, or perhaps just the center
portion. The standard in photography is to balance against an 18%
gray card (basically just a gray card of the correct density to
reflect 18% of the incident light - available at any decent photo
supply place). The "18% gray" card is supposed to be equivalent in
density to a "typical" scene (which explains why it's used).

For an application like this, you'd calibrate your sensor against such
a card, and then adjust the exposure so that the brightness, averaged
over the entire sampling region, came out to the same value you
calibrated for the gray card.

Trivially, convert all the pixel values to a linear scale, add them
all up, divide by the number of pixels, and adjust the exposure by the
ratio of that number to the value you'd see for the reference card.
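In C, that averaging pass might look something like the sketch below.
This is only a sketch under assumptions: CAL_GRAY stands for whatever
mean value you measured off the gray card at your calibration
exposure, and linearize() is a placeholder for a lookup table built
from the sensor's actual response curve (nothing here is specific to
the PAS106B):

#include <stdint.h>

#define CAL_GRAY 46UL  /* assumed: mean linear value measured off the 18% card */

/* Placeholder linearization; substitute a lookup table built from the
   sensor's actual response curve. */
static uint32_t linearize(uint8_t p)
{
    return p;
}

/* Compute a new exposure setting from the current one and one frame.
   For a CIF-sized frame the sum fits comfortably in 32 bits. */
uint32_t update_exposure(uint32_t exposure, const uint8_t *frame,
                         uint32_t npixels)
{
    uint32_t sum = 0;
    uint32_t i;

    for (i = 0; i < npixels; i++)
        sum += linearize(frame[i]);

    uint32_t mean = sum / npixels;
    if (mean == 0)
        mean = 1;  /* a totally black frame would otherwise divide by zero */

    /* Scale the exposure by the ratio (target / measured). */
    return (exposure * CAL_GRAY) / mean;
}

In principle you'd run each captured frame through that and write the
result back to the sensor's exposure register before grabbing the next
frame.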

One simple enhancement is to use a center-weighted metering scheme,
where you bias the exposure towards the center of the frame.

High-end cameras use quite complex schemes to try and "understand"
a frame enough to measure it accurately, often breaking up the scene
into many individual segments, and applying some balancing algorithm
to those. Some even go so far as to attempt image analysis to try and
recognize objects and apply reflectivity values to them.

I'd start with something simple, maybe a basic center-weighted scheme,
where the inner ninth (the middle third in both dimensions, or
something more-or-less in that range) counts for two thirds of the
exposure weight, and the remainder of the frame for the other third,
and play from there. Given that sensor responses are not really
linear, the adjustment may take several frames to settle (and you
probably want to add some sort of damping to the loop). I don't know
if it's an issue for your application, but one thing to watch out for
is that if you're far enough off scale to lose all detail (via extreme
under- or overexposure), you'll want a fast step function to get the
exposure back into a "reasonable" range.
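As a very rough sketch of that whole loop (the 2/3 : 1/3 split, the
thresholds, and the mean_region() helper are all assumptions to be
tuned, not anything taken from the PAS106B datasheet):

#include <stdint.h>

#define TARGET      46UL   /* calibrated mean for the 18% gray card */
#define DARK_FLOOR   8UL   /* below this, assume gross underexposure */
#define BRIGHT_CEIL 247UL  /* above this, assume gross overexposure */

/* Hypothetical helper: mean linear pixel value over [x0,x1) x [y0,y1). */
extern uint32_t mean_region(const uint8_t *frame, int w, int h,
                            int x0, int y0, int x1, int y1);

uint32_t meter_frame(uint32_t exposure, const uint8_t *frame, int w, int h)
{
    /* Inner ninth: the middle third in both dimensions. */
    uint32_t center = mean_region(frame, w, h, w / 3, h / 3,
                                  2 * w / 3, 2 * h / 3);
    uint32_t whole = mean_region(frame, w, h, 0, 0, w, h);

    /* Two thirds of the weight on the center, one third on the rest
       (using the whole-frame mean as a stand-in for the surround). */
    uint32_t metered = (2 * center + whole) / 3;

    /* Fast step when we're far enough off scale to have lost detail;
       clamp the result to the sensor's register limits in real code. */
    if (metered <= DARK_FLOOR)
        return exposure * 4;
    if (metered >= BRIGHT_CEIL)
        return exposure / 4;

    /* Damped correction: move only half way toward the ideal setting
       each frame so the loop settles instead of hunting. */
    uint32_t ideal = (exposure * TARGET) / metered;
    return (exposure + ideal) / 2;
}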