Step 3: Describe the Environment. This step is critical for two reasons.
First, it is a key factor in determining the situatedness of the robot. Second, it
identifies perceptual opportunities for the behaviors, both in how a perceptual
event will instantiate a new behavior, and in how the perceptual schema
for a behavior will function. Recall from Chapter 4 that the Reactive Paradigm
favors direct perception or affordance-based perception because it has
a rapid execution time and involves no reasoning or memory.
The course was laid out on a grassy field with gentle slopes. The course consisted
of a 10 foot wide lane marked in US Department of Transportation white paint,
roughly in the shape of a kidney (see Fig. 5.5). The exact length of the course and
layout of obstacles of the course were not known until the day of the competition, and
teams were not permitted to measure the course or run trials on it. Obstacles were all
stationary and consisted of bales of hay wrapped in either white or red plastic. The
bales were approximately 2 ft by 4 ft and never extended more than 3 feet into the
lane. The sonar was able to reliably detect the plastic-covered bales at most angles of
approach from 8 feet away. The vehicles were scheduled to run between 9am and 5pm
on May 22, regardless of weather or cloud cover. In addition to the visual challenges
of changing lighting due to clouds, the bales introduced shadows on the white lines
between 9–11am and 3–5pm. The sand pit was only 4 feet long and placed on a
straight segment of the course.
The analysis of the environment suggested a simplification of the task. Since the bales
never extended more than 3 feet into the 10 foot wide lane, even bales placed on both
sides left a 4 ft wide open area. Omnibot was only 3 ft wide, so the course could be
treated as having no obstacles if the robot stayed in the center of the lane with a
0.5 ft tolerance on either side. This eliminated the need for an avoid-obstacle behavior.
The analysis of the environment also identified an affordance for controlling the
robot. The only object of interest to the robot was the white line, which should have
a high contrast to the green grass (which appears dark gray in a grayscale image). But
the exact brightness of the white line changed with the weather. However, if the camera
was pointed directly at one line, instead of trying to see both lines, the majority of the
brightest points in the image would belong to the line (this improves the signal-to-noise
ratio, because more of the image contains the line). Some of the bright points would be due
to reflections, but these were assumed to be randomly distributed. Therefore, if the
robot tried to keep the centroid of the white points in the center of the image, it would
stay in the center of the lane.
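This centroid-based affordance can be sketched in a few lines. The sketch below is an illustration, not the CSM team's actual code: the function names and the NumPy implementation are assumptions, and the choice of keeping the brightest 20% of pixels follows the perceptual schema described later in this section.

```python
import numpy as np

def white_line_centroid(image: np.ndarray, fraction: float = 0.20) -> float:
    """Horizontal centroid (in pixels) of the brightest `fraction` of pixels.

    The 20% figure matches the perceptual schema described later in the text;
    everything else here is an illustrative assumption.
    """
    flat = image.ravel()
    k = max(1, int(fraction * flat.size))
    brightest = np.argpartition(flat, -k)[-k:]   # indices of the k brightest pixels
    xs = brightest % image.shape[1]              # column (x) of each bright pixel
    return float(xs.mean())

def steering_error(image: np.ndarray) -> float:
    """Offset of the white centroid from the image center, normalized to [-1, 1].

    Driving this error to zero keeps the robot centered on the line it is viewing.
    """
    center = image.shape[1] / 2.0
    return (white_line_centroid(image) - center) / center
```

Note that randomly scattered bright points (e.g., reflections) pull the centroid toward the image center, which is exactly the assumption the analysis above relies on.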
Step 4: Describe how the robot should act in response to its environment.
The purpose of this step is to identify the set of one or more candidate
primitive behaviors; these candidates will be refined or eliminated later. As
the designer describes how the robot should act, behaviors usually become
apparent. It should be emphasized that the point of this step is to concentrate
on what the robot should do, not how it will do it, although often the
designer sees both the what and the how at the same time.
In the case of the CSM entry, only one behavior was initially proposed: follow-line.
The perceptual schema would use the white line to compute the difference between
where the centroid of the white line was versus where it should be, while the motor
schema would convert that to a command to the steer motor.
In terms of expressing the behaviors for a task, it is often advantageous to
construct a behavior table as one way of at least getting all the behaviors on a
single sheet of paper. The releaser for each behavior is helpful for confirming
that the behaviors will operate correctly without conflict (remember, accidentally
programming the robotic equivalent of male sticklebacks from Ch. 3
is undesirable). It is often useful for the designer to classify the motor schema
and the percept. For example, consider what happens if an implementation
has a purely reflexive move-to-goal motor schema and an avoid-obstacle behavior.
What happens if the avoid-obstacle behavior causes the robot to lose
perception of the goal? Oops, the perceptual schema returns no goal and the
move-to-goal behavior is terminated! Probably what the designer assumed
was that the behavior would be a fixed-action pattern and thereby the robot
would persist in moving toward the last known location of the goal.
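The difference between the two classifications can be made concrete with a hypothetical sketch (the class and method names here are invented for illustration): a purely reflexive schema stops producing commands the moment its percept disappears, while a fixed-action-pattern variant keeps acting on the last percept for some persistence period.

```python
# Hypothetical sketch contrasting a purely reflexive move-to-goal schema with
# a fixed-action-pattern variant that persists on the last perceived goal.

class ReflexiveMoveToGoal:
    """Stimulus-response: no percept, no action -- the behavior terminates."""
    def act(self, goal_percept):
        if goal_percept is None:
            return None                      # goal lost: behavior ends at once
        return ("move_toward", goal_percept)

class FixedActionMoveToGoal:
    """Persists toward the last known goal for a fixed number of cycles."""
    def __init__(self, persistence=5):
        self.persistence = persistence
        self.last_goal = None
        self.cycles_left = 0

    def act(self, goal_percept):
        if goal_percept is not None:
            self.last_goal = goal_percept
            self.cycles_left = self.persistence
        elif self.cycles_left > 0:
            self.cycles_left -= 1            # keep moving on the stale percept
        else:
            return None                      # pattern exhausted: behavior ends
        return ("move_toward", self.last_goal)
```

In the avoid-obstacle scenario above, the reflexive version drops the goal the instant it leaves view, while the fixed-action-pattern version rides out the brief occlusion.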
Expressed as a behavior table, the CSM design initially had only one
behavior, follow-line. The follow-line behavior consisted of a motor schema, stay-on-path(centroid),
which was reflexive (stimulus-response) and a taxis (it oriented the robot
relative to the stimulus). The perceptual schema, compute-centroid(image, white),
extracted an affordance of the centroid of white from the image as being the line. Only
the x component, or horizontal location, of the centroid was used, c_x.
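Under the assumption of a simple schema-based decomposition, the follow-line entry could be wired up as below. The Behavior class, the stub schemas, and the image width are all hypothetical; a real perceptual schema would extract c_x from the camera image rather than receive it precomputed.

```python
# Hypothetical sketch of a behavior as a releaser plus a perceptual schema
# and a motor schema, mirroring the entries described for follow-line.

from dataclasses import dataclass
from typing import Any, Callable, Optional

@dataclass
class Behavior:
    name: str
    releaser: Callable[[], bool]                 # when the behavior is active
    perceptual_schema: Callable[[Any], float]    # sensor input -> percept
    motor_schema: Callable[[float], float]       # percept -> motor command

    def step(self, sensor_input) -> Optional[float]:
        if not self.releaser():
            return None
        percept = self.perceptual_schema(sensor_input)
        return self.motor_schema(percept)

IMAGE_WIDTH = 640    # assumed camera resolution, for illustration only

follow_line = Behavior(
    name="follow-line",
    releaser=lambda: True,                        # always on during the run
    perceptual_schema=lambda img: img["c_x"],     # stub: c_x already extracted
    motor_schema=lambda c_x: (c_x - IMAGE_WIDTH / 2) / (IMAGE_WIDTH / 2),
)
```

Here the motor schema returns a normalized steering command: zero when the centroid is centered, positive or negative as it drifts to either side.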
Step 5: Refine each behavior. By this point, the designer has an overall
idea of the organization of the reactive system and what the activities are.
This step concentrates on the design of each individual behavior. As the
designer constructs the underlying algorithms for the motor and perceptual
schemas, it is important to be sure to consider both the normal range of environmental
conditions the robot is expected to operate in (e.g., the steady-state
case) and when the behavior will fail.
The follow-line behavior was based on the analysis that the only white things
in the environment were lines and plastic-covered bales of hay. While this was a
good assumption, it led to a humorous event during the second heat of the competition.
As the robot was following the white line down the course, one of the judges
stepped into view of the camera. Unfortunately, the judge was wearing white shoes,
and Omnibot turned in a direction roughly in-between the shoes and the line. The
CSM team captain, Floyd Henning, realized what was happening and shouted at the
judge to move. Too late, the robot’s front wheels had already crossed over the line; its
camera was now facing outside the line and there was no chance of recovering. Suddenly,
right before the leftmost rear wheel was about to leave the boundary, Omnibot
straightened up and began going parallel to the line! The path turned to the right,
Omnibot crossed back into the path and re-acquired the line. She eventually went
out of bounds on a hairpin turn further down the course. The crowd went wild, while the
CSM team exchanged confused looks.
What happened to make Omnibot drive back in bounds? The perceptual schema
was using the 20% brightest pixels in the image for computing the centroid. When it
wandered onto the grass, Omnibot went straight because the reflections on the grass
were largely random and their positions cancelled out, leaving the centroid always in
the center of the image. The groundskeepers had cut the grass only in the areas
where the path was to be. Next to the path was a parallel swath of uncut grass
loaded with dandelion seed puffs. The row of white puffs acted just like a white line,
and once in viewing range Omnibot obligingly corrected her course to be parallel to
them. It was sheer luck that the path curved so that when the dandelions ran out,
Omnibot continued straight and intersected with the path. While Omnibot wasn’t
programmed to react to shoes and dandelions, it did react correctly considering its
ecological niche.