Texture
The variety of sensors and algorithms available to roboticists can actually distract a designer from the task of designing an elegant sensor suite. In most cases, reactive robots use range for navigation; a robot needs a sensor to keep it from hitting things. Ian Horswill designed the software and camera
system of Polly, shown in Fig. 6.29, specifically to explore vision and the
relationship to the environment using subsumption.70 Horswill’s approach is called lightweight vision, to distinguish its ecological flavor from traditional model-based methods.
Polly served as an autonomous tour-guide at the MIT AI Laboratory and
Brown University during the early 1990s. At that time vision processing was slow and expensive, which was totally at odds with the high update rates needed for navigation by a reactive mobile robot. The percept for the obstacle
avoidance behavior was based on a clever affordance: texture. The halls of
the AI Lab were covered throughout with the same carpet. The “color” of the
carpet in the image tended to change due to lighting, but the overall texture
or “grain” did not. In this case, texture was measured as edges per unit area,
as seen with the fine positioning discussed in Ch. 3.
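The texture measure described above, edges per unit area, can be sketched in a few lines. The sketch below is illustrative, not Polly's actual code: it approximates edge detection with simple intensity differences, and the threshold value is an assumption.

```python
import numpy as np

def edge_density(patch, thresh=30):
    """Texture as edges per unit area.

    A pixel-to-pixel intensity jump larger than `thresh` counts as an
    edge; the density is edge count divided by patch area. `thresh` is
    an illustrative value, not taken from the text.
    """
    patch = patch.astype(float)
    # Horizontal and vertical intensity differences (a crude gradient).
    dx = np.abs(np.diff(patch, axis=1))
    dy = np.abs(np.diff(patch, axis=0))
    edges = (dx > thresh).sum() + (dy > thresh).sum()
    return edges / patch.size
```

A uniform carpet patch yields a low, stable density regardless of lighting, while a person's shoes or clothing in the same patch shifts the density, which is exactly the affordance the obstacle avoidance behavior exploits.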
The robot divided the field of view into angles or sectors, creating a radial depth map, or the equivalent of a polar plot. Every sector with the texture
of the carpet was marked empty. If a person was standing on the carpet,
that patch would have a different texture and the robot would mark the area
as occupied. Although this methodology had some problems—for example,
strong shadows on the floor created “occupied” areas—it was fast and
elegant.
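The sector-marking scheme can be sketched as follows. This is a minimal sketch under assumed parameter values (number of sectors, reference carpet density, tolerance), approximating the angular sectors as vertical bands of the image; it is not Horswill's implementation.

```python
import numpy as np

def radial_depth_map(image, n_sectors=8, carpet_density=0.02,
                     tol=0.05, thresh=30):
    """Mark each sector of the field of view empty or occupied.

    The view is split into n_sectors wedges (approximated here as
    vertical column bands). A sector whose edge density is close to
    the known carpet texture is marked empty (True); any other
    texture marks it occupied (False). All parameter values are
    illustrative assumptions.
    """
    h, w = image.shape
    band = w // n_sectors
    free = []
    for i in range(n_sectors):
        patch = image[:, i * band:(i + 1) * band].astype(float)
        # Edge density: intensity jumps above thresh, per unit area.
        dx = np.abs(np.diff(patch, axis=1))
        dy = np.abs(np.diff(patch, axis=0))
        density = ((dx > thresh).sum() + (dy > thresh).sum()) / patch.size
        free.append(bool(abs(density - carpet_density) <= tol))
    return free  # one empty/occupied flag per sector
```

Note how the failure mode in the text falls out of this sketch: a strong shadow adds edges to a sector, pushing its density away from the carpet reference, so the sector is wrongly marked occupied.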