Thursday 14 February 2013

The Hybrid Deliberative/Reactive Paradigm


Overview

By the end of the 1980s, the trend in artificially intelligent robots was to
design and program using the Reactive Paradigm. The Reactive Paradigm
allowed robots to operate in real time using inexpensive, commercially available
processors (e.g., the Motorola 68HC11) with minimal memory. But the cost of reactivity, of
course, was a system that eliminated planning or any functions which involved
remembering or reasoning about the global state of the robot relative
to its environment. This meant that a robot could not plan optimal trajectories
(path planning), make maps, monitor its own performance, or even select
the best behaviors to use to accomplish a task (general planning). Notice
that not all of these functions involve planning per se; map making involves
handling uncertainty, while performance monitoring (and the implied objective
of what to do about degraded performance) involves problem solving
and learning. In order to differentiate these more cognitively oriented functions
from path planning, the term deliberative was coined.
The Reactive Paradigm also suffered somewhat because most people found
that designing behaviors so that the desired overall behavior would emerge
was an art, not a science. Techniques for sequencing or assembling behaviors
to produce a system capable of achieving a series of sub-goals also relied
heavily on the designer. Couldn’t the robot be made to be smart enough to
select the necessary behaviors for a particular task and generate how they
should be sequenced over time?
Therefore, the new challenge for AI robotics at the beginning of the 1990s
was how to put planning and deliberation back into robots without
disrupting the success of reactive behavioral control. The consensus was
that behavioral control was the “correct” way to do low-level control, because
of its pragmatic success, and its elegance as a computational theory for both
biological and machine intelligence. As early as 1988, Ron Arkin was publishing
work on how to add more cognitive functions to a behavioral system
in the form of the Autonomous Robot Architecture (AuRA). Many roboticists
looked at adding layers of higher, more cognitive functions to their behavioral
systems, emulating the evolution of intelligence. This chapter will cover
five examples of architectures which illustrate this bottom-up, layering approach:
AuRA, Sensor Fusion Effects (SFX), 3T, Saphira, and TCA. Other robot
systems which do not strongly adhere to an architectural style, such as Rhino
and Minerva, will be discussed in later chapters.
During the 1990s, members of the general AI community became exposed
to the principles of reactive robots. The concept of considering an
intelligent system, or agent, as being situated in its environment, combined
with the existence proof that detailed, Shakey-like world representations are
not always necessary, led to a new style of planning. This change in planning
was called reactive planning. Many researchers who had worked in traditional
AI became involved in robotics. One type of reactive planner for robots,
Jim Firby’s reactive-action packages (RAPs) [53], was integrated as a layer within
the 3T architecture [21]. Architectures stemming from the planning community
showed their traditional AI roots: they use a more top-down, hierarchical
flavor with global world models, especially Saphira [77] and TCA [131].
Regardless of the bottom-up or top-down inspiration for including non-behavioral
intelligence, architectures which use reactive behaviors but also
incorporate planning are now referred to as being part of the Hybrid
Deliberative/Reactive Paradigm. At first, Hybrids were viewed as an artifact
of research, without any real merit for robotic implementations. Some researchers
went so far as to recommend that if a robot was being designed to
operate in an unstructured environment, the designer should use the Reactive
Paradigm. If the task was to be performed in a knowledge-rich environment,
easy to model, then the Hierarchical Paradigm was preferable, because
the software could be engineered specifically for the mission. Hybrids were
believed to be the worst of both worlds, saddling the fast execution times of
reactivity with the difficulties in developing hierarchical models.
The current thinking in the robotics community is that Hybrids are the
best general architectural solution for several reasons. First, the use of asynchronous
processing techniques (multi-tasking, threads, etc.) allows deliberative
functions to execute independently of reactive behaviors. A planner
can be slowly computing the next goal for a robot to navigate to, while the
robot is reactively navigating toward its current goal with fast update rates.
Second, good software modularity allows subsystems or objects in Hybrid
architectures to be mixed and matched for specific applications. Applications
which favor purely reactive behaviors can implement just the subset of
the architecture for behaviors, while more cognitively challenging domains
can use the entire architecture.
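
The asynchronous arrangement described above, a slow planner feeding goals to a fast reactive loop, can be sketched with ordinary threads. This is a minimal illustration only, not any of the architectures named in this chapter; the class and method names (`HybridController`, `deliberative_layer`, `reactive_layer`), the waypoint "plan", and all timing constants are invented for the example.

```python
import threading
import time

# Minimal sketch of asynchronous hybrid control: a slow deliberative
# thread updates a shared goal while a fast reactive loop steers toward
# whatever goal is current. All names and numbers are illustrative.

class HybridController:
    def __init__(self):
        self._goal = (0.0, 0.0)        # goal shared between the two layers
        self._lock = threading.Lock()  # guards access to the shared goal
        self._running = True

    def deliberative_layer(self):
        """Slow loop: stand-in for a planner that emits successive goals."""
        for waypoint in [(1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]:
            time.sleep(0.05)           # "planning" is slow relative to control
            with self._lock:
                self._goal = waypoint
        time.sleep(0.1)                # let the reactive layer settle
        self._running = False

    def reactive_layer(self, pose):
        """Fast loop: a trivial move-to-goal behavior at a high update rate."""
        steps = 0
        while self._running:
            with self._lock:
                gx, gy = self._goal
            pose[0] += 0.1 * (gx - pose[0])   # proportional step toward goal
            pose[1] += 0.1 * (gy - pose[1])
            steps += 1
            time.sleep(0.005)          # roughly 10x the planner's rate
        return steps

controller = HybridController()
pose = [0.0, 0.0]
planner = threading.Thread(target=controller.deliberative_layer)
planner.start()
steps = controller.reactive_layer(pose)   # runs until the planner finishes
planner.join()
```

The reactive loop never blocks on the planner: it always reads whichever goal is currently posted and takes many small control steps per planning cycle, which is the independence-of-update-rates point made above. The lock illustrates the coordination a real Hybrid architecture would need around shared state.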

