Re: User interface cognitive loading

On Feb 21, 8:41 pm, D Yuniskis <> wrote:
Hi Carl,

1 Lucky Texan wrote:
On Feb 21, 2:51 pm, D Yuniskis <> wrote:
1 Lucky Texan wrote:
On Feb 19, 3:20 pm, D Yuniskis <> wrote:

So, the question I pose is:  given that we increasingly use
multipurpose devices in our lives and that one wants to
*deliberately* reduce the complexity of the UI's on those
devices (either because we don't want to overload the
user -- imagine having a windowed interface on your
microwave oven -- or because we simply can't *afford* a
rich interface -- perhaps owing to space/cost constraints),
what sorts of reasonable criteria would govern how an
interface can successfully manage this information while
taking into account the users' limitations?
If (and it's a big if) I understand where your interest lies, it is less
in 'information overload' (I think the military has done a huge
amount of research in this area for fighter pilots/'battlefield'
conditions) and more in 'detection' of such overload/fatigue. If so, I
expect a system to monitor 'key strokes' (mouse moves, w'ever - user
input) and their frequency/uniqueness rates. Possibly some type of eye
tracking could be helpful?

Yes.  Though think of it as *prediction* instead of detection.
I.e., what to *avoid* in designing a UI so that the user
*won't* be overloaded/fatigued/etc.

Contrast this with limited context interfaces in which the
"previous activity" is completely obscured by the newer
activity (e.g., a handheld device, aural interface, etc.).

So, my question tries to identify / qualify those types
of issues that make UI's inefficient in these reduced
context deployments.

Hmmm... that may have a corollary.  I.e., if you assume keystrokes
(mouse clicks, etc.) represent some basic measure of work (or
cognition), then the fewer of these, the less taxing the interface.
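A crude version of that input-rate monitoring idea could be sketched as
follows. This is purely illustrative: the sliding-window length and the
"overload" rate threshold are made-up placeholder numbers, not anything
backed by the human-factors research mentioned above.

```python
from collections import deque

class InputRateMonitor:
    """Track timestamps of recent input events (keystrokes, clicks,
    mouse moves) and flag when the event rate suggests the user may be
    struggling.  Thresholds are arbitrary placeholders."""

    def __init__(self, window_s=10.0, overload_rate=5.0):
        self.window_s = window_s            # sliding window, in seconds
        self.overload_rate = overload_rate  # events/sec considered "frantic"
        self.events = deque()

    def record(self, timestamp):
        """Log one input event at the given time (seconds)."""
        self.events.append(timestamp)
        # Drop events that have fallen out of the window.
        while self.events and timestamp - self.events[0] > self.window_s:
            self.events.popleft()

    def rate(self):
        """Events per second across the current window."""
        if len(self.events) < 2:
            return 0.0
        span = self.events[-1] - self.events[0]
        return len(self.events) / span if span > 0 else 0.0

    def overloaded(self):
        return self.rate() > self.overload_rate
```

A real system would feed this from the event loop and would likely also
look at the *uniqueness* of the events (repeated corrections, backspaces),
not just their raw rate.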

Even reading rates could predict the onset of overload. Again, the Air
Force has bumped into this issue. There is likely an entire branch of
psychology dealing with these issues.

Yes, but keep in mind this is c.a.e and most of the "devices"
we deal with aren't typical desktop applications.  I.e.,
the user rarely has to "read much".  Rather, he spends
time looking for a "display" (item) and adjusting a "control"
to effect some change.

As for the mechanics in a system, some could perhaps be implemented
with present or near-term technology. Certainly the military could
justify eye-tracking, brainwave monitoring or other indicators. But
reading rates, mouse click rates, typing speed, etc., might be doable
now. I can also envision some add-on widgets that might allow for, say,
a double right-click to create a 'finger string'. As in tying a string
around your finger. A type of bookmark that would recall the precise
conditions of the system (time, date, screen display, url, etc.) when
the user detected something troubling. May not be as precise as 'the
infilled date was wrong', but it may be enough of a clue that, when
the user reviews the recalled screen later, it triggers a memory like
"hmmm, what was her....OH YEAH!, that date is wrong!".

Actually, this is worth pursuing.  Though not just when the user has
"detected something troubling" but, also, to serve as a "remember
what I was doing *now*".

I suspect a lot can be done with creating unique "screens" in
visual interfaces -- so the user recognizes what is happening
*on* that screen simply by its overall appearance (layout, etc.).
Though this requires a conscious effort throughout the entire
system design to ensure this uniqueness is preserved.  I
suspect, too often, we strive for similarity in "screens"
instead of deliberate dis-similarity.
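The 'finger string' bookmark -- snapshotting whatever context is cheap
to capture at the moment the user flags something -- might look roughly
like this. All of the field names and the event-handler wiring here are
invented for illustration; a real UI toolkit would supply its own state
and gesture hooks.

```python
import time

class FingerString:
    """A 'tie a string around your finger' bookmark: capture the
    conditions of the system (time/date, screen, url, ...) so the
    flagged moment can be reviewed later.  Fields are illustrative."""

    def __init__(self, screen_id, screen_text, url=None, note=""):
        self.when = time.time()          # when the user flagged it
        self.screen_id = screen_id       # which "screen" was showing
        self.screen_text = screen_text   # what that screen displayed
        self.url = url                   # e.g., for browser-like UIs
        self.note = note                 # optional user annotation

    def recall(self):
        """Reconstruct the flagged moment for later review."""
        stamp = time.strftime("%Y-%m-%d %H:%M:%S",
                              time.localtime(self.when))
        return f"[{stamp}] screen={self.screen_id!r}: {self.screen_text}"

bookmarks = []

def on_double_right_click(ui_state):
    """Hypothetical gesture handler: snapshot current UI state."""
    bookmarks.append(FingerString(ui_state["screen_id"],
                                  ui_state["text"],
                                  ui_state.get("url")))
```

The point is that the snapshot need not be precise; recalling the screen
as it looked is often enough to re-trigger the "OH YEAH!" memory.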

fun stuff to think about.

*Taxing* stuff to think about!  :>  So much easier to just
look at a bunch of interfaces and say what's *wrong* with them!
Yet, doing so in a way that allows "what's right" to be
extracted is challenging.

One other quick thought: in some 'dedicated' systems, it can be very
important to make any deviation from the operator's 'expectation'
GREATLY noticeable. I've seen some poor early software in semi-
automated test stations, where some small line of text changes from
'pass' to 'fail'. That's all. Well, the expectation could be something
like 97% good boards. So, as an operator, can you be relied on to
notice that text change when you have just tested 100-200 boards before
a bad one comes along? I told the programmer I wanted the screen to
change color, the font size to increase and, if available, a beeper to
sound! That is somewhat the opposite of information overload; perhaps
we'd call it 'tedium', w'ever. But, as you say, these things are
important. Things like presets, 'check-off' lists, and systems that do
not 'assume' an operator is paying attention and require 'distinct'
inputs to keep them aware - I guess that all falls near this issue, huh?
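That test-station point -- make the rare 'fail' impossible to miss
instead of a one-word text change -- could be caricatured like this.
The `display` and `beeper` objects are stand-ins for whatever
interfaces the actual station hardware provides; nothing here is a
real API.

```python
def show_result(passed, display, beeper=None):
    """Render a board-test result.  'pass' is the common, boring case;
    a 'fail' deliberately changes color, grows the font, and sounds a
    beeper so a fatigued operator cannot miss it.  'display' and
    'beeper' are hypothetical station interfaces."""
    if passed:
        display.write("PASS", color="green", font_size=12)
    else:
        display.write("FAIL", color="red", font_size=48)  # big and loud
        display.fill_background("red")                    # whole screen
        if beeper is not None:
            beeper.sound(duration_s=2.0)                  # audible alarm
```

The asymmetry is the point: after 100-200 identical 'pass' screens, the
operator's attention has to be *recaptured*, not merely informed.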