Designing a Sensor Suite
Historically, reactive robots used either inexpensive infrared (IR) or ultrasonic transducers to detect range. The earliest behaviors focused on basic
navigational skills such as obstacle avoidance and wall following. The percepts
for these behaviors all involve knowing the distance to an occupied area
of space. Now with the advent of inexpensive miniature cameras and laser
range finders for consumer applications, computer vision is becoming increasingly
common. In agricultural and transportation applications of reactive
robots, GPS technology has become popular as well. This chapter
attempts to cover the basics of these sensing modalities, and how they are
used in mobile robots. Because the sensor market is rapidly changing, the
chapter will focus on how to design a suite of sensors for use by a robot,
rather than on the device details.
An artificially intelligent robot has to have some sensing in order to be considered
a true AI robot. If it cannot observe the world and the effects of its
actions, it cannot react. As noted in the chapter on “Action-Oriented Perception”
in Artificial Intelligence and Mobile Robots: Case Studies of Successful Robot
Systems, the design of a set of sensors for a robot begins with an assessment
of the type of information that needs to be extracted.[14] This information can
come from proprioception (measurements of movement relative to an internal frame of reference), exteroception (measurements of the layout of the environment and objects relative to the robot's frame of reference), or exproprioception (measurements of the position of the robot body or parts relative to the layout of the environment).
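To make the distinction concrete, here is a minimal sketch (in Python; the class names and example percepts are illustrative, not from the text) that tags percepts from the path-following case study discussed below with these three classes.

```python
from enum import Enum

# A minimal illustration (not from the text) of the three classes of sensing.
class SensingClass(Enum):
    PROPRIOCEPTION = "movement relative to an internal frame of reference"
    EXTEROCEPTION = "layout of the environment and objects relative to the robot"
    EXPROPRIOCEPTION = "position of the robot body or parts relative to the environment"

# Example percepts, drawn from the path-following case study discussed below.
percepts = {
    "pan mast shaft encoder angle": SensingClass.PROPRIOCEPTION,
    "white line location in the camera image": SensingClass.EXTEROCEPTION,
    "line location expressed in the robot's steering frame": SensingClass.EXPROPRIOCEPTION,
}
```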
The Colorado School of Mines entry in the 1995 UGV competition, discussed in
Ch. 5, provides an example of the different types of sensing for a
path-following robot. In 1995, the follow-path behavior
was expanded to track both lines of the path using a wide-angle lens
on the camera. follow-path could be considered exteroceptive because it
acquired information on the environment. However, the camera for the robot
was mounted on a panning mast, which was intended to turn to keep
the line in view, no matter what direction the path turned in. Therefore, the
robot had to know where the camera was turned relative to the robot’s internal
frame of reference in order to correctly transform the location of a white
line in image coordinates to a steering direction. This meant the information
needed for follow-path had both proprioceptive and exteroceptive components,
making the perception somewhat exproprioceptive. (If the robot
were extracting the pose of its camera from exteroception, it would be clearly
exproprioceptive.)
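A rough sketch of this transform is shown below, assuming a simple linear mapping from image column to bearing; the field of view, image width, function names, and sign conventions are illustrative assumptions, not details of the actual CSM code.

```python
# Illustrative sketch of combining exteroceptive and proprioceptive data into
# an exproprioceptive steering percept. All names and constants are assumed,
# not taken from the CSM entry.

CAMERA_FOV_DEG = 100.0   # assumed horizontal field of view of the wide-angle lens
IMAGE_WIDTH_PX = 640     # assumed image width in pixels

def line_bearing_in_camera(line_center_px: float) -> float:
    """Exteroception: bearing (degrees) of the path center in the camera frame,
    estimated from the horizontal position of the white lines in the image."""
    offset = (line_center_px - IMAGE_WIDTH_PX / 2) / (IMAGE_WIDTH_PX / 2)
    return offset * (CAMERA_FOV_DEG / 2)

def steering_direction(line_center_px: float, pan_angle_deg: float) -> float:
    """Exproprioception: transform the camera-frame bearing into the robot's
    internal frame by adding the proprioceptive pan angle reported by the
    mast's shaft encoder."""
    return line_bearing_in_camera(line_center_px) + pan_angle_deg
```

The key point is that neither term alone is sufficient: the bearing is in camera coordinates and the pan angle is pure proprioception; only their combination is exproprioceptive.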
Due to a programming error, the follow-path behavior incorrectly assumed
that the exteroceptive camera data had been transformed by the proprioceptive
shaft encoder data from the panning mast into exproprioceptive data.
The robot needed the exproprioception to determine where it should move
next: turn to follow the path in camera coordinates, plus the compensation
for the current camera pan angle. The programming error resulted in the
robot acting as if the camera was aligned with the center of the robot at all
times. But the camera might be turned slightly to maintain the view of both
lines of the path through the pan-camera behavior. The resulting navigational
command might be to turn, but too little to make a difference, or even
to turn the wrong way. This subtle error surfaced as the robot went around
hairpin turns, causing it to go consistently out of bounds.
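A few hypothetical numbers make the failure mode concrete. On a straight stretch the camera stays near center and the missing pan angle costs little, but on a hairpin the mast may be panned hard to keep both lines in view, and the dropped term dominates the correct command. (The figures below are assumed for illustration; the actual CSM geometry and angles are not given in the text.)

```python
# Hypothetical figures; the actual CSM geometry and angles are not given in the text.
line_bearing_deg = 4.7    # exteroceptive: path center seen slightly right of image center
pan_angle_deg = 40.0      # proprioceptive: mast panned hard to hold both lines in view

correct_turn = line_bearing_deg + pan_angle_deg   # ~44.7 deg: turn into the hairpin
buggy_turn = line_bearing_deg                     # ~4.7 deg: pan angle ignored, robot barely turns

print(correct_turn, buggy_turn)
```

If the image offset and the pan angle have opposite signs, the uncompensated command can even point the wrong way, matching the behavior observed on the course.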