NIST RCS
Jim Albus at the National Bureau of Standards (later renamed the National Institute of Standards and Technology or NIST) anticipated the need for intelligent
industrial manipulators, even as engineering and AI researchers were
splitting into two groups. He saw that one of the major obstacles in applying
AI to manufacturing robots was that there were no common terms, no
common set of design standards. This made industry and equipment manufacturers
leery of AI, for fear of buying an expensive robot that would not
be compatible with robots purchased in the future. He developed a very detailed
architecture called the Real-time Control System (RCS) Architecture to
serve as a guide for manufacturers who wanted to add more intelligence to
their robots. RCS used NHC in its design, as shown in Fig. 2.7.
SENSE activities are grouped into a set of modules under the heading
of sensory perception. The output of the sensors is passed to the world
modeling module, which constructs a global map using information in its
associated knowledge database about the sensors and any domain knowledge
(e.g., the robot is operating underwater). This organization is similar
to NHC. The main difference is that the sensory perception module introduces
a useful preprocessing step between the sensor and the fusion into a
world model. As will be seen in Ch. 6, sensor preprocessing is often referred
to as feature extraction.
The Value Judgment module provides most of the functionality associated
with the PLAN activity: it plans, then simulates the plans to ensure they will
work. Then, as with Shakey, the Planner hands off the plan to another module,
Behavior Generation, which converts the plans into actions that the robot
can actually perform (ACT). Notice that the Behavior Generation module is
similar to the Pilot in NHC, but there appears to be less focus on navigation
tasks. The term “behavior” will be used by Reactive and Hybrid Deliberative/
Reactive architectures. (This use of “behavior” in RCS is a bit of a retrofit,
as Albus and his colleagues at NIST have attempted to incorporate new advances.
The integration of all sensing into a global world model for planning
and acting keeps RCS a Hierarchical architecture.) Another module, the
operator interface (not shown), allows a human to “observe” and debug
what a program constructed with the architecture is doing.
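To make the data flow concrete, here is a minimal Python sketch of one pass through the hierarchy. The class and method names are invented for illustration and are not part of the NIST RCS specification:

```python
# A minimal sketch of one RCS SENSE-PLAN-ACT cycle. All class and
# method names are hypothetical, not identifiers from the NIST RCS
# standard.

class SensoryPerception:
    """SENSE: preprocess raw sensor output (feature extraction)."""
    def extract_features(self, raw_readings):
        return {k: v for k, v in raw_readings.items() if v is not None}

class WorldModel:
    """Fuse features into a single global map, using domain knowledge."""
    def __init__(self, knowledge_db):
        self.knowledge_db = knowledge_db  # sensor models, domain facts
        self.global_map = {}

    def update(self, features):
        self.global_map.update(features)
        return self.global_map

class ValueJudgment:
    """PLAN: generate a plan, then simulate it to check it will work."""
    def plan(self, global_map, mission_goals):
        candidate = [("move_to", goal) for goal in mission_goals]
        if not self._simulate(candidate, global_map):
            raise RuntimeError("plan failed in simulation")
        return candidate

    def _simulate(self, plan, global_map):
        return True  # stand-in for checking the plan against the map

class BehaviorGeneration:
    """ACT: convert plan steps into commands the robot can execute."""
    def execute(self, plan):
        for step in plan:
            print("executing", step)

# One pass through the hierarchy.
sense = SensoryPerception()
world = WorldModel(knowledge_db={"domain": "underwater"})
judge = ValueJudgment()
behave = BehaviorGeneration()

global_map = world.update(sense.extract_features({"sonar": 4.2, "gps": None}))
behave.execute(judge.plan(global_map, mission_goals=["dock"]))
```

Note how all sensing funnels into the single global map before any planning occurs; it is exactly this funneling that keeps RCS a Hierarchical architecture.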
The standard was adopted by many government agencies, such as NASA
and the US Bureau of Mines, which were contracting with universities and
companies to build robot prototypes. RCS serves as a blueprint for saying:
“here are the types of sensors I want, and they’ll be fused by this module into a
global map, etc.” When it was initially developed, most AI researchers
considered the architecture too detailed and restrictive, and they continued
to develop new architectures and paradigms on their own. Fig. 2.8 shows
three of the diverse mobile robots that have used RCS.
A close inspection of the NHC and RCS architectures suggests that they
are well suited for semi-autonomous control. The human operator could
provide the world model (via eyes and brain), decide the mission, decompose
it into a plan, and then into actions. The lower level controller (robot)
would carry out the actions. As robotics advanced, the robot could replace
more functions and “move up” the autonomy hierarchy. For example, the robot
could take over the Pilot’s responsibilities, so the human need only instruct it
to stay on the road until the first left turn. As AI advanced, the human would
only have to serve as the Mission Planner: “go to the White House.” And so
on. Albus noted this and worked with JPL to develop a version of RCS for
teleoperating a robot arm in space. This version is called the NASREM architecture
and is still in use today.
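One way to picture this progression is as a set of interchangeable slots, where software gradually replaces the human at each layer. The hypothetical sketch below (none of these names come from RCS or NASREM) shows the same hierarchy run in semi-autonomous and fully autonomous configurations:

```python
# Each layer of the NHC/RCS hierarchy is a slot that either the human
# operator or a software module can fill; autonomy "moves up" by
# replacing the human in successively higher slots. All names are
# hypothetical, for illustration only.

def human_plans_mission():
    return "go to the White House"   # stand-in for the operator's command

def robot_plans_mission():
    return "go to the White House"   # an AI Mission Planner choosing a goal

def robot_navigates(goal):
    # Navigator: decompose the mission into path legs.
    return ["stay on road", "take first left turn"]

def robot_pilots(leg):
    # Pilot: turn a path leg into low-level steering actions.
    print("steering:", leg)

def run_hierarchy(mission_planner, navigator, pilot):
    for leg in navigator(mission_planner()):
        pilot(leg)

# Semi-autonomous: human supplies the mission, robot does the rest.
run_hierarchy(human_plans_mission, robot_navigates, robot_pilots)

# Fully autonomous: software fills every layer of the hierarchy.
run_hierarchy(robot_plans_mission, robot_navigates, robot_pilots)
```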
