Computer Vision
Computer vision refers to processing data from any modality that uses the electromagnetic spectrum to produce an image. An image is essentially
a way of representing data in a picture-like format where there is a direct
physical correspondence to the scene being imaged. Unlike sonar, which
returns a single range reading that could correspond to an object anywhere
within a 30° cone, an image implies multiple readings placed in a
two-dimensional array or grid. Every element in the array maps onto a small
region of space. The elements in image arrays are called pixels, a contraction
of the words “picture element.” The modality of the device determines what
the image measures. If a visible light camera is used, then the value stored
at each pixel is the value of the light (e.g., color). If a thermal camera is used,
then the value is the heat at that region. The function that converts a signal
into a pixel value is called an image function.
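The idea of an image function can be sketched in a few lines. The snippet below is a minimal illustration, not from the text: the sensor readings, the assumed temperature range, and the function name are all hypothetical, chosen to show how raw modality data (here, simulated thermal readings) maps to pixel values in a two-dimensional array.

```python
import numpy as np

# Hypothetical raw thermal readings in degrees C; each entry corresponds
# to one small region of the imaged scene.
raw_readings = np.array([[20.0, 21.5, 36.8],
                         [19.9, 37.1, 36.9],
                         [20.2, 20.1, 21.0]])

def image_function(reading, lo=0.0, hi=50.0):
    """Convert one sensor reading into an 8-bit pixel value.

    The 0-50 degree range is an assumption for this sketch.
    """
    return int(np.clip((reading - lo) / (hi - lo) * 255, 0, 255))

# Apply the image function to every reading, producing the image array:
# one pixel per sensed region, values in 0-255.
image = np.vectorize(image_function)(raw_readings)
print(image.shape)  # (3, 3)
```

The warm readings (around 37 degrees) become bright pixels and the cool background becomes dark ones, which is exactly the "value stored at each pixel is the heat at that region" behavior described above for a thermal camera.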
Computer vision ranges from cameras, which produce images over the same
electromagnetic spectrum that humans see, to more exotic technologies: thermal
sensors, X-rays, laser range finders, and synthetic aperture radar. Simple
forms of computer vision are becoming more popular due to the drop in
prices and miniaturization of cameras and because reactive robots need to
exploit affordances such as color or texture.
As noted in the Introduction, computer vision is a separate field of study
from robotics, and has produced many useful algorithms for filtering out
noise, compensating for illumination problems, enhancing images, finding
lines, matching lines to models, extracting shapes and building 3D representations.
Reactive robots tend not to use those algorithms. Most of the algorithms,
especially those that remove noise, require many computations on
each pixel in the image; until recently, the algorithms were too computationally
expensive to run in real-time. Also, there was a resistance to algorithms
which required any type of memory or modeling. Therefore a robot designed
to follow paths would be on the borderline of reactivity if it used vision
to extract the path boundary lines in the current image based on knowledge
of the path's width, and then predicted where those boundary lines should
be in the next image.