Fig. 5.13 shows how the elements of an actor's script compare to a robot
script. The main sequence of events is called the causal chain. The causal
chain is critical because it embodies the coordination control program logic,
just as an FSA does, and it can be implemented in the same way. In NLP, scripts allow
the computer to keep up with a conversation that may be abbreviated. For
example, consider a computer trying to read and translate a book where the
main character has stopped in a restaurant. Good writers often eliminate all
the details of an event to concentrate on the ones that matter. A script makes
this missing, but implied, information easy to recover. Suppose the book started with
“John ordered lobster.” This is a clue that serves as an index into the current
or relevant event of the script (eating at a restaurant), skipping over past
events (John arrived at the restaurant, John got a menu, etc.). The script also focuses
the system's attention on the next likely event (look for a phrase that indicates
John has placed an order), so the computer can instantiate the function
which looks for this event. If the next sentence is “Armand brought out the
lobster and refilled the white wine,” the computer can infer that Armand is
the waiter and that John had previously ordered and received white wine,
without having been explicitly told.
In programming robots, people often like to abbreviate the routine portions
of control and concentrate on representing and debugging the important
sequence of events. Finite state automata force the designer to consider
and enumerate every possible transition, while scripts simplify the specification.
The concepts of indexing and focus-of-attention are extremely valuable
for coordinating behaviors in robots in an efficient and intuitive manner. Effective
implementations require asynchronous processing, so the implementation
is beyond the scope of this book. For example, suppose a Pick Up the
Trash robot boots up. The first action on the causal chain is to look for the
Coke cans. The designer, though, realizes that this behavior could generate
a random direction and move the robot away, missing a can right in front of it.
Therefore, the designer wants the code to permit the robot to skip searching
the arena if it immediately sees a Coke can, and begin to pick up the can
without even calling the wander-for-goal(red) behavior. The designer also
knows that the next releaser to look for after grab-trash exits is blue, because
blue is the cue for moving to the trash can and dropping off the trash.
The resulting script for an abstract behavior to accomplish a task is usually
the same as the programming logic derived from an FSA. In the case of Pick
Up the Trash, the script might look like:
for each update...
   // look for props and cues first: cans, trash cans, gripper
   rStatus = extract_color(red, rcx, rSize); // ignore rSize
   if (rStatus == TRUE)
      SEE_RED = TRUE;
   else
      SEE_RED = FALSE;
   bStatus = extract_color(blue, bcx, bSize);
   if (bStatus == TRUE) {
      SEE_BLUE = TRUE; NO_BLUE = FALSE;
   } else {
      SEE_BLUE = FALSE; NO_BLUE = TRUE;
   }
   AT_BLUE = looming(size, bSize);
   gStatus = gripper_status();
   if (gStatus == TRUE) {
      FULL = TRUE; EMPTY = FALSE;
   } else {
      FULL = FALSE; EMPTY = TRUE;
   }
   // index into the correct step in the causal chain
   if (EMPTY) {
      if (SEE_RED)
         move_to_goal(red);
      else
         wander();
   } else {
      grab_trash();
      if (NO_BLUE)
         wander();
      else if (AT_BLUE)
         drop_trash();
      else if (SEE_BLUE)
         move_to_goal(blue);
   }
