Tuesday, 12 February 2013

The Seven Areas of AI


Now that some possible uses and shortcomings of robots have been covered, it is worth considering what the areas of artificial intelligence are and how they could be used to overcome these problems. The Handbook of Artificial Intelligence [64] divides the field into seven main areas: knowledge representation, understanding natural language, learning, planning and problem solving, inference, search, and vision.
1. Knowledge representation. An important, but often overlooked, issue is how the robot represents its world, its task, and itself. Suppose a robot is scanning a pile of rubble for a human. What kind of data structure and algorithms would it take to represent what a human looks like? One way is to construct a structural model: a person is composed of an oval head, a cylindrical torso, smaller cylindrical arms with bilateral symmetry, and so on. Of course, what happens if only a portion of the human is visible? (A minimal data-structure sketch of such a model appears after this list.)
2. Understanding natural language. Natural language is deceptively challenging, apart from the issue of recognizing words, which is now being done by commercial products such as Via Voice and Naturally Speaking. It is not just a matter of looking up words, which is the subject of the following apocryphal story about AI. The story goes that after Sputnik went up, the US government needed to catch up with the Soviet scientists. However, translating Russian scientific articles was time consuming, and not many US citizens could read technical reports in Russian. Therefore, the US decided to use these newfangled computers to create translation programs. The day came when the new program was ready for its first test. It was given the proverb: the spirit is willing, but the flesh is weak. The reported output: the vodka is strong, but the meat is rotten.
3. Learning. Imagine a robot that could be programmed by just watching a human, or by just trying the task repeatedly itself.
4. Planning and problem solving. Intelligence is associated with the ability to plan the actions needed to accomplish a goal and to solve problems with those plans or when they don't work. One of the earliest childhood fables, the Three Pigs and the Big, Bad Wolf, involves two unintelligent pigs who don't plan ahead and an intelligent pig who is able to solve the problem of why his brothers' houses have failed, as well as plan an unpleasant demise for the wolf.
5. Inference. Inference is generating an answer when there isn't complete information. Consider a planetary rover looking at a dark region on the ground. Its range finder is broken and all it has left is its camera and a fine AI system. Assume that depth information can't be extracted from the camera. Is the dark region a canyon? Is it a shadow? The rover will need to use inference to either actively or passively disambiguate what the dark region is (e.g., kick a rock at the dark area versus reason that there is nothing nearby that could create that shadow).
6. Search. Search doesn't necessarily mean searching a large physical space for an object. In AI terms, search means efficiently examining a knowledge representation of a problem (called a "search space") to find the answer. Deep Blue, the computer that beat the world chess champion Garry Kasparov, won by searching through almost all possible combinations of moves to find the best move to make. The legal moves in chess, given the current state of the board, formed the search space. (A small sketch of state-space search also appears after this list.)
7. Vision. Vision is possibly the most valuable sense humans have. Studies by Harvard psychologist Stephen Kosslyn suggest that much of our problem-solving ability stems from the ability to visually simulate the effects of actions in our heads. As such, AI researchers have pursued creating vision systems both to improve robotic actions and to supplement other work in general machine intelligence.
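
To make the knowledge representation question in item 1 concrete, here is a minimal sketch, in Python, of a structural model: a human represented as a tree of labeled geometric primitives, plus a crude partial-visibility match score. The names (BodyPart, build_human_model, match_score) and the scoring rule are illustrative assumptions, not from the text.

# A sketch of a structural model of a human and a crude matcher.
from dataclasses import dataclass, field
from typing import List


@dataclass
class BodyPart:
    # One node in the structural model: a named geometric primitive.
    name: str                      # e.g., "head", "torso"
    shape: str                     # e.g., "oval", "cylinder"
    children: List["BodyPart"] = field(default_factory=list)


def build_human_model() -> BodyPart:
    # Person = oval head + cylindrical torso + bilaterally symmetric limbs.
    torso = BodyPart("torso", "cylinder")
    torso.children = [
        BodyPart("head", "oval"),
        BodyPart("left_arm", "cylinder"),
        BodyPart("right_arm", "cylinder"),
        BodyPart("left_leg", "cylinder"),
        BodyPart("right_leg", "cylinder"),
    ]
    return torso


def match_score(model: BodyPart, detected_shapes: List[str]) -> float:
    # Fraction of model parts accounted for by the detected shapes.
    # If only a portion of the human is visible, the score drops
    # gracefully instead of going to zero.
    remaining = list(detected_shapes)
    parts = [model] + model.children
    found = 0
    for part in parts:
        if part.shape in remaining:
            remaining.remove(part.shape)
            found += 1
    return found / len(parts)


if __name__ == "__main__":
    human = build_human_model()
    # Rubble hides the legs: only an oval and two cylinders are detected.
    print(match_score(human, ["oval", "cylinder", "cylinder"]))  # 0.5

Matching against whatever shapes are detected, rather than requiring the whole model, is one very simplified way of coping with the partial-visibility problem raised above.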
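
Item 6's notion of a search space can likewise be sketched in a few lines. The example below is a minimal sketch under assumed names (search, grid_successors, OBSTACLES): a breadth-first search over a tiny grid world stands in for a game tree. Deep Blue's search was far more sophisticated, but the idea of expanding legal moves from the current state is the same.

# A sketch of AI-style search: breadth-first search over a search space
# defined by a successor ("legal moves") function, not a physical space.
from collections import deque


def search(start, is_goal, successors):
    # Examine the search space level by level until a goal state is
    # found; return the sequence of states leading to it.
    frontier = deque([[start]])
    visited = {start}
    while frontier:
        path = frontier.popleft()
        state = path[-1]
        if is_goal(state):
            return path
        for nxt in successors(state):
            if nxt not in visited:
                visited.add(nxt)
                frontier.append(path + [nxt])
    return None


# Example search space: states are (x, y) cells on a 4x4 grid; the
# "legal moves" are steps that stay on the grid and avoid obstacles.
OBSTACLES = {(1, 1), (2, 1), (1, 2)}


def grid_successors(state):
    x, y = state
    for dx, dy in [(1, 0), (-1, 0), (0, 1), (0, -1)]:
        nxt = (x + dx, y + dy)
        if 0 <= nxt[0] < 4 and 0 <= nxt[1] < 4 and nxt not in OBSTACLES:
            yield nxt


if __name__ == "__main__":
    route = search((0, 0), lambda s: s == (3, 3), grid_successors)
    print(route)  # a shortest obstacle-free route through the search space

Swapping grid_successors for a chess move generator would point the same skeleton at game search, though a practical program would also need evaluation heuristics to prune the space rather than examining it exhaustively.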
Finally, there is a temptation to assume that the history of AI robotics is the story of how advances in AI have improved robotics. But that is not the case. In many regards, robotics has played a pivotal role in advancing AI. Breakthroughs in methods for planning (operations research types of problems) came after the paradigm shift to reactivity in robotics in the late 1980s showed how unpredictable changes in the environment could actually be exploited to simplify programming. Many of the search engines on the World Wide Web use techniques developed for robotics. These programs are called software agents: autonomous programs which can interact with and adapt to their world just like an animal or a smart robot. The term web-bot directly reflects the robotic heritage of these AI systems. Even animation is being changed by advances in AI robotics. According to a keynote address given by Danny Hillis at the 1997 Autonomous Agents conference, animators for Disney's Hunchback of Notre Dame programmed each cartoon character in the crowd scenes as if it were a simulation of a robot, using methods that will be discussed in Ch. 4.


Summary
AI robotics is a distinct field, both historically and in scope, from industrial robotics. Industrial robotics has concentrated on control theory issues, particularly solving the dynamics and kinematics of a robot. These issues are concerned with having a stationary robot perform precise, repetitive motions in a structured factory environment. AI robotics has concentrated on how a mobile robot should handle unpredictable events in an unstructured world. The design of an AI robot should consider how the robot will represent knowledge about the world, whether it needs to understand natural language, whether it can learn tasks, what kind of planning and problem solving it will have to do, how much inference is expected, how it can rapidly search its database and knowledge for answers, and what mechanisms it will use for perceiving the world.
Teleoperation arose as an intermediate solution for tasks that required automation but which robots could not be adequately programmed to handle. Teleoperation methods are typically cognitively fatiguing, require high communication bandwidths and short communication delays, and require one or more teleoperators per remote. Telepresence techniques attempt to create a more natural interface for the human to control the robot and interpret what it is doing and seeing, but at a high communication cost. Supervisory control attempts to delegate portions of the task to the remote, either to do autonomously (traded control) or with reduced, but continuous, human interaction (shared control).


