
Human-Centered Software Agents:
Lessons from Clumsy Automation

David D. Woods

Cognitive Systems Engineering Laboratory
Ohio State University

Introduction

The Cognitive Systems Engineering Laboratory (CSEL) has been studying the
actual impact of capable autonomous machine agents on human performance in
a variety of domains. The data shows that "strong, silent, difficult to
direct automation is not a team player" (Woods, 1996). The results of such
studies have led to an understanding of the importance of human-centered
technology development and to principles for making intelligent and
automated agents team players (Billings, 1996). These results have been
obtained in the crucible of complex settings such as aircraft cockpits,
space mission control centers, and operating rooms. These results can be
used to help developers of human-centered software agents for digital
information worlds avoid common pitfalls and classic design errors.

Clumsy Automation

The powers of technology continue to explode around us. The latest focus
of technologists is the power of very large interconnected networks such as
the World Wide Web and digital libraries. The potential of such technology
is balanced with concern that such systems overwhelm users with data,
options and sites. The solution, we are told, is software agents that will
alleviate the burdens faced by consumers in managing information and
interfaces. Promises are being made that agents will hide the complexity
associated with the Web or other large digital worlds. This will be
accomplished by automating many complex or tedious tasks. Agents will help
us to search, browse, manage email, schedule meetings, shop, monitor news,
and so forth. They will filter information for us and tailor it to our
context-specific needs. Some will also help us to collaborate with others.
By assisting with such tasks, agents will reduce our work and information
overload. They will enable a more customized, rewarding, and efficient
experience on the Web. Given this vision, current efforts have focused on
developing powerful autonomous software agents in the faith that "if we
build them, the benefits will come."

In contrast to these dreams and promises is data from a variety of domains
where capable machine agents have already been at work -- highly automated
flight decks in aviation, space mission control centers, operating rooms
and critical care settings in medicine. These machine agents often are
called automation, and they were built in part in the hope that they would
improve human performance by offloading work, freeing up attention, hiding
complexity -- the same kinds of justifications touted for the benefits of
software agents (Table 1 contrasts typical designer hopes for the impact of
their systems on cognition with the results of studies).

The pattern that emerged is that strong but silent and difficult to direct
machine agents create new operational complexities. In these studies we
interacted with many different operational people and organizations,
* through their descriptions of incidents where automated systems behaved
in surprising ways,
* through their behavior in incidents that occurred on the job,
* through their cognitive activities as analyzed in simulator studies that
examined the coordination between practitioner and automated systems in
specific task contexts,
* unfortunately, through the analysis of accidents where people
misunderstood what their automated partners were doing until disaster
struck.

One way to see the pattern is simply to listen to the voices that we heard
in our investigations. Users described and revealed clumsiness and
complexity. They described aspects of automation that were strong but
sometimes silent and difficult to direct just when resources were limited
and pressure to perform was greatest. We saw and heard how they face new
challenges imposed by the tools that are supposed to serve them and provide
"added functionality." The complexity created when automated systems are
not human- or practice-centered is best expressed by the questions they
posed when working with "clumsy" machine agents:
* "What is it doing now?"
* "What will it do next?"
* "How did I get into this mode/state?"
* "Why did it do this?"
* "Why won't it do what I want?"
* "Stop interrupting me while I am busy."
* "I know there is some way to get it to do what I want."
* "How do I stop this machine from doing this?"

These questions are evidence of automation surprises (Sarter, Woods and
Billings, in press) -- situations where users are surprised by actions taken
(or not taken) by automated agents. Automation surprises begin with
miscommunication and misassessments between the automation and users, which
lead to a gap between the user's understanding of what the automated
systems are set up to do, what they are doing, and what they are going to
do.

The evidence shows strongly that the potential for automation surprises is
the greatest when three factors converge:
1. the automated systems can act on their own without immediately
preceding directions from their human partner (this kind of behavior arises
in particular through interactions among multiple automated subsystems),
2. gaps in users' mental models of how their machine partners work in
different situations, and
3. weak feedback about the activities and future behavior of the agent
relative to the state of the world.
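The convergence of these three factors can be sketched in code. The
following is a minimal, hypothetical illustration (none of these class or
variable names come from the studies cited): an agent changes mode on its
own, the user's mental model has a gap, and weak feedback lets the two
diverge until the mismatch surfaces as a surprise.

```python
class AutoAgent:
    """Agent that can change its own mode without a user command (factor 1)."""
    def __init__(self):
        self.mode = "track"

    def step(self, airspeed):
        # Protection logic the user may not know about (factor 2): below a
        # threshold the agent silently reverts to a protective mode, with no
        # announcement of the change (factor 3: weak feedback).
        if airspeed < 120:
            self.mode = "protect"
        return self.mode


class UserModel:
    """The user's belief: the agent stays in the mode last commanded."""
    def __init__(self):
        self.believed_mode = "track"


agent, user = AutoAgent(), UserModel()
agent.step(airspeed=115)                # the agent acts on its own
surprised = agent.mode != user.believed_mode
print(surprised)                        # True: the gap behind "What is it doing now?"
```

The point of the sketch is only that no single factor is at fault: remove
any one of the three (no autonomous mode change, an accurate user model, or
an announced transition) and the surprise disappears.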

Designing Agents as Team Players

The dangers can, however, be predicted and reduced. The research results
also point to directions for developing more successful human-centered
automated systems. The key elements are:
* avoid operational complexity,
* evaluate new systems in terms of their potential to create specific kinds
of human error and system failure,
* increase awareness and error detection through improved observability of
automation activities (provide feedback about current and future agent
activities),
* analyze the impact of new machine agents in terms of the coordination
demands placed on the human user (make agents team players),
* give users the ability to direct the machine agent as a resource in the
process of meeting their (practitioners') goals,
* promote the growth of human expertise in understanding how agents work
and how to work agents in different kinds of situations.
Developers of the new breed of agents can avoid the pitfalls and exploit
the opportunities by using the hard won principles and techniques of
human-centered and practice-centered design.

Previous work has established that black box systems are not team players,
create new burdens and complexities, and lead to new errors and failures.
Some level of visibility of agent activities is required; some level of
understanding of how agents carry out their functions is required; some
level of management (delegation and re-direction) of agent activities is
needed. On the other hand, presenting all of the most detailed data about
systems may overwhelm users, and complete flexibility may create so many
burdens that users simply do the job themselves. The key to research on
human-centered software agents is to find the levels and types of feedback
and coordination that support team play between machine subordinates and
human supervisors and that help the human user achieve their goals in
context.

For example, a common finding in studies that assess the impact of new
automation is that increasing the autonomy, authority and complexity of
machine agents creates the need for increased feedback about agent
activities as they handle various situations -- what has been termed
observability (e.g., Norman, 1990; Sarter and Woods, 1995; Woods, 1996).
Observability is the technical term for the cognitive work needed to
extract meaning from available data. The term captures the relationship
among data, observer, and context of observation that is fundamental to
effective feedback. Observability is distinct from data
availability, which refers to the mere presence of data in some form in
some location (Sarter, Woods and Billings, in press). If "strong" software
agents are to be team players, they require new forms of feedback
emphasizing an integrated dynamic picture of the current situation, agent
activities, and how these may evolve in the future. Increasing the autonomy
and authority of machine agents without an increase in observability
creates automation surprises.
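The distinction between data availability and observability can be made
concrete with a small sketch. This is an illustrative interface, not an API
from the cited work; the class and field names are invented here. The raw
event log makes data available; the status report integrates current
activity, its context, and projected future behavior into one picture.

```python
from dataclasses import dataclass

@dataclass
class Status:
    doing: str       # current activity
    because: str     # the context that triggered it
    next_step: str   # projected behavior the user can check against intent

class ObservableAgent:
    def __init__(self):
        # Data availability: the events exist somewhere, but extracting
        # meaning from them is cognitive work left to the user.
        self.raw_log = []

    def act(self):
        self.raw_log.extend(["evt:42 fetch", "evt:43 rank", "evt:44 queue"])

    def status(self):
        # Observability: one integrated picture of activity, context, and
        # projected future behavior, instead of a raw event dump.
        return Status(doing="ranking search results",
                      because="query matched 3,000 pages",
                      next_step="will fetch top 10 pages unless redirected")

agent = ObservableAgent()
agent.act()
print(agent.status().next_step)   # the projected behavior, not the log
```

Note that the `next_step` field is what distinguishes this from an ordinary
log viewer: it lets the user answer "What will it do next?" before the
agent acts, which is where automation surprises are caught.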

Another example concerns a common joint system architecture where the
human's role is to monitor the automated agent. When users determine that
the machine agent is not solving a problem adequately, they interrupt the
automated agent and take over the problem in its entirety. Thus, the human
is cast into the role of critiquing the machine, and the joint system
operates in essentially two modes - fully automatic or fully manual.
Previous work in several domains and with different types of machine agents
has shown that this is a poor cooperative architecture (e.g., Roth et al.,
1987; Layton et al., 1994; Sarter et al., in press). Either the machine
does the entire job without the benefit of practitioners' information and
knowledge, despite the brittleness of the machine agents, or the user takes
over in the middle of a deteriorating or challenging situation without the
support of cognitive tools. One can summarize some of the results from
research in this area as, "it's not cooperation, if either you do it all
or I do it all." Cooperative problem solving occurs when the agents
coordinate activity in the process of solving the problem. Cooperating
agents have access to partial, overlapping information and knowledge
relevant to the problem at hand.
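A cooperative architecture of this kind can be sketched as follows. The
example is hypothetical (the route-planning task and all names are invented
for illustration, loosely in the spirit of the flight-planning studies
cited above): instead of the user either accepting the agent's answer or
taking over entirely, the user redirects the agent mid-problem by
contributing knowledge the agent lacks.

```python
class CooperativeAgent:
    def __init__(self):
        self.constraints = []           # user knowledge folded into the search

    def propose(self, options):
        # The agent ranks options but keeps the candidate set open to the
        # user, rather than committing to a single opaque answer.
        viable = [o for o in options if all(c(o) for c in self.constraints)]
        return sorted(viable, key=len)  # toy ranking stand-in

    def redirect(self, constraint):
        # Mid-process correction: the user adds a constraint the agent did
        # not know about, instead of switching to "fully manual".
        self.constraints.append(constraint)


agent = CooperativeAgent()
routes = ["route-a via storm", "route-b clear", "route-c clear long"]
agent.propose(routes)                        # first proposal: agent's view only
agent.redirect(lambda r: "storm" not in r)   # user knows about the weather
print(agent.propose(routes))                 # ['route-b clear', 'route-c clear long']
```

The design choice being illustrated is the `redirect` step: it is what
makes the information and knowledge of the two agents partial and
overlapping rather than forcing an all-or-nothing handoff.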

New user- and practice-oriented design philosophies and concepts are being
developed to address deficiencies in human-machine coordination. Their
common goal is to provide the basis to design integrated human-machine
teams that cooperate and communicate effectively as situations escalate in
tempo, demands, and difficulty. Another goal is to help developers
identify where problems can arise when new automation projects are
considered and therefore help mobilize the design resources to prevent
them.

Table 1. Designer's eye view of apparent benefits of new automation
contrasted with the real experience of operational personnel.

When new automation is introduced into a system or when there is an
increase in the autonomy of automated systems, developers often assume that
adding "automation" is a simple substitution of a machine activity for
human activity -- the substitution myth. Empirical data on the
relationship of people and technology suggests that this is not the case (in
part this is because tasks and activities are highly interdependent or
coupled in real fields of practice). Instead, adding or expanding the
machine's role changes the cooperative architecture, changing the human's
role often in profound ways. New types or levels of automation shift the
human role to one of monitor, exception handler, and manager of automated
resources.

Putative benefit                     Real complexity

better results, same system          transforms practice; the roles of
(substitution)                       people change

frees up resources:                  creates new kinds of cognitive work,
offloads work                        often at the wrong times

frees up resources: focuses          more threads to track; makes it harder
user attention on the                for practitioners to remain aware of
right answer                         and integrate all of the activities
                                     and changes around them

less knowledge                       new knowledge/skill demands

autonomous machine                   team play with people is critical
                                     to success

same feedback                        new levels and types of feedback are
                                     needed to support people's new roles

generic flexibility                  an explosion of features, options and
                                     modes creates new demands, types of
                                     errors, and paths toward failure

reduced human error                  both machines and people are fallible;
                                     new problems arise from human-machine
                                     coordination breakdowns

Creating partially autonomous machine agents is, in part, like adding a new
team member. One result is the introduction of new coordination demands.
When it is hard to direct the machine agents and hard to see their
activities and intentions, it is difficult for human supervisors to
coordinate activities. This is one factor that may explain why people
"escape" from clumsy automation as task demands escalate.

References

Norman, D.A. (1990). The 'problem' of automation: Inappropriate feedback
and interaction, not 'over-automation.' Philosophical Transactions of the
Royal Society of London, B 327:585--593.

Hutchins, E. (1995). Cognition in the Wild. MIT press.

CSEL References on Human-Centered Systems

General
N. Sarter, D.D. Woods and C. Billings. Automation Surprises. In G.
Salvendy, editor, Handbook of Human Factors/Ergonomics, second edition,
Wiley, New York, in press.

D.D. Woods and J.C. Watts. How Not To Have To Navigate Through Too Many
Displays. In Helander, M.G., Landauer, T.K. and Prabhu, P. (Eds.) Handbook
of Human-Computer Interaction, 2nd edition. Amsterdam, The Netherlands:
Elsevier Science, 1997.

D.D. Woods, E.S. Patterson, J. Corban and J.C. Watts. Bridging the Gap
between User-Centered Intentions and Actual Design Practice. Proceedings
of the Human Factors and Ergonomics Society, September, 1996.

D.D. Woods. Decomposing Automation: Apparent Simplicity, Real Complexity,
In R. Parasuraman and M. Mouloua, editors, Automation Technology and Human
Performance, Erlbaum, p. 3-17, 1996.

L. Johannesen. The Interactions of Alicyn in Cyberland, Interactions, 1(4),
46-57, 1994.

D.D. Woods. The price of flexibility in intelligent interfaces.
Knowledge-Based Systems, 6:1-8, 1993.

D.D. Woods, E.M. Roth, and K.B. Bennett. Explorations in joint
human-machine cognitive systems. In S. Robertson, W. Zachary, and J. Black,
editors, Cognition, Computing and Cooperation, Ablex Publishing, Norwood,
NJ, 1990.

Medicine
R.I. Cook and D.D. Woods. Adapting to new technology in the operating
room. Human Factors, 38(4), 593-613, 1996.

J.H. Obradovich and D.D. Woods. Users as designers: How people cope with
poor HCI design in computer-based medical devices. Human Factors, 38(4),
1996.

R.I. Cook and D.D. Woods. Implications of automation surprises in aviation
for the future of total intravenous anesthesia (TIVA). Journal of Clinical
Anesthesia, 8:29s-37s, 1996.

E. Moll van Charante, R.I. Cook, D.D. Woods, L. Yue and M.B. Howie.
Human-computer interaction in context: Physician interaction with automated
intravenous controllers in the heart room. In H.G. Stassen, editor,
Analysis, Design and Evaluation of Man-Machine Systems 1992, Pergamon
Press, 1993, p. 263-274.

Aviation
N. Sarter and D.D. Woods. Teamplay with a Powerful and Independent Agent:
A Corpus of Operational Experiences and Automation Surprises on the Airbus
A-320. Manuscript submitted for publication, 1997.

Billings, C.E. (1996). Aviation Automation: The Search for a
Human-Centered Approach. Hillsdale, N.J.: Lawrence Erlbaum Associates.

N. Sarter and D.D. Woods. "How in the world did we get into that mode?"
Mode error and awareness in supervisory control. Human Factors, 37: 5-19,
1995.

N. Sarter and D.D. Woods. 'Strong, Silent and Out of the Loop:' Properties
of Advanced (Cockpit) Automation and their Impact on Human-Automation
Interaction, Cognitive Systems Engineering Laboratory Report, CSEL
95-TR-01, The Ohio State University, Columbus OH, March 1995. Prepared for
NASA Ames Research Center.

N. Sarter and D.D. Woods. Pilot Interaction with Cockpit Automation II: An
Experimental Study of Pilot's Model and Awareness of the Flight Management
System. International Journal of Aviation Psychology, 4:1-28, 1994.

N. Sarter and D.D. Woods. Pilot Interaction with Cockpit Automation I:
Operational Experiences with the Flight Management System. International
Journal of Aviation Psychology, 2:303--321, 1992.

Electronic Troubleshooting
E.M. Roth, K. Bennett, and D.D. Woods. Human interaction with an
'intelligent' machine. International Journal of Man-Machine Studies,
27:479--525, 1987.

Space Systems and Process Control
J. Malin, D. Schreckenghost, D. Woods, S. Potter, L. Johannesen, M.
Holloway and K. Forbus. Making Intelligent Systems Team Players. NASA
Technical Report 104738, Johnson Space Center, Houston TX, 1991.

D. Ranson and D.D. Woods. Animating Computer Agents. Proceedings of Human
Interaction with Complex Systems, IEEE Computer Society Press, Los
Alamitos, CA, 1996.

D. Ranson and D.D. Woods. Opening Up Black Boxes: Visualizing Automation
Activity. Cognitive Systems Engineering Laboratory Report, CSEL 97-TR-01,
The Ohio State University, Columbus OH, January 1997.

J.C. Watts, D.D. Woods, J.M. Corban, E.S. Patterson, R. Kerr, and L. Hicks.
Voice Loops as Cooperative Aids in Space Shuttle Mission Control. In
Proceedings of Computer-Supported Cooperative Work, Boston MA, 1996.

J.C. Watts, D.D. Woods, E.S. Patterson. Functionally Distributed
Coordination during Anomaly Response in Space Shuttle Mission Control.
Proceedings of Human Interaction with Complex Systems, IEEE Computer
Society Press, Los Alamitos, CA, 1996.

L. Johannesen, R.I. Cook, and D.D. Woods. Cooperative communications in
dynamic fault management. In Proceedings of the 38th Annual Meeting of the
Human Factors and Ergonomics Society, October, Nashville TN, 1994.

S.S. Potter and D.D. Woods. Event-driven timeline displays: Beyond message
lists in human-intelligent system interaction. In Proceedings of IEEE
International Conference on Systems, Man, and Cybernetics, IEEE, 1991.

David D. Woods
Professor
Cognitive Systems Engineering Laboratory
210 Baker Systems
The Ohio State University
1971 Neil Avenue, Columbus, Ohio 43210
614-292-1700
614-292-7852 (fax)
