Tuesday, 12 February 2013

Agency and computational theory


Even though it seems reasonable to explore the biological and cognitive sciences
for insights into intelligence, how can we compare such different systems: carbon
and silicon “life” forms? One powerful means of conceptualizing the
different systems is to think of an abstract intelligent system. Consider something
we’ll call an agent. The agent is self-contained and independent. It has
its own “brains” and can interact with the world to make changes or to sense
what is happening. It has self-awareness. Under this definition, a person is
an agent. Likewise, a dog or a cat or a frog is an agent. More importantly,
an intelligent robot would be an agent, and so would certain kinds of web search
engines that continue to look for new items of interest to appear even after
the user has logged off. Agency is a concept in artificial intelligence that allows
researchers to discuss the properties of intelligence without discussing
the details of how the intelligence got into the particular agent. In OOP terms,
“agent” is the superclass and the classes of “person” and “robot” are derived
from it.
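To make the OOP analogy concrete, here is a minimal sketch in Python; the class and method names (Agent, Person, Robot, sense, act) are illustrative assumptions, not something given in the original text:

    from abc import ABC, abstractmethod

    class Agent(ABC):
        """Superclass: anything self-contained that can sense and act."""

        @abstractmethod
        def sense(self):
            """Gather information about the world."""

        @abstractmethod
        def act(self, percept):
            """Change the world (or itself) based on what was sensed."""

    class Person(Agent):
        def sense(self):
            return "what the eyes and ears report"

        def act(self, percept):
            print("Person responds to:", percept)

    class Robot(Agent):
        def sense(self):
            return "thermal image from an onboard camera"

        def act(self, percept):
            print("Robot steers based on:", percept)

The point of the hierarchy is exactly the one made above: code (or theory) written against Agent applies to any subclass, without caring how the intelligence got into the particular agent.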
Of course, just referring to animals, robots, and intelligent software packages
as “agents” doesn’t make the correspondences between their kinds of intelligence any
clearer. One helpful way of seeing correspondences is to decide the level at
which these entities have something in common. This set of levels of commonality
leads to what is often called a computational theory [88], after David
Marr. Marr was a neurophysiologist who tried to recast biological vision
processes into new techniques for computer vision. The levels in a computational
theory can be greatly simplified as:

Level 1: Existence proof of what can/should be done. Suppose a roboticist
is interested in building a robot to search for survivors trapped in a building
after an earthquake. The roboticist might consider animals which seek
out humans. As anyone who has been camping knows, mosquitoes are
very good at finding people. Mosquitoes provide an existence proof that
it is possible for a computationally simple agent to find a human being
using heat. At Level 1, agents can share a commonality of purpose or
functionality.
Level 2: Decomposition of “what” into inputs, outputs, and transformations.
This level can be thought of as creating a flow chart of “black
boxes.” Each box represents a transformation of an input into an output.
Returning to the example of a mosquito, the roboticist might realize from
biology that the mosquito finds humans by homing in on the heat of a human
(or any warm-blooded animal). If the mosquito senses a hot area, it
flies toward it. The roboticist can model this process as: input = thermal
image, output = steering command. The “black box” is how the mosquito
transforms the input into the output. One good guess might be
to take the centroid of the thermal image (the centroid weighted by the
heat in each area of the image) and steer toward it; a sketch of this
computation appears after the list. If the hot patch moves,
the thermal image will change with the next sensory update, and a new
steering command will be generated. This might not be exactly how the
mosquito actually steers, but it presents an idea of how a robot could
duplicate the functionality. Also notice that by focusing on the process
rather than the implementation, a roboticist doesn’t have to worry about
mosquitoes flying, while a search and rescue robot might have wheels. At
Level 2, agents can exhibit common processes.
Level 3: How to implement the process. This level of the computational theory
focuses on describing how each transformation, or black box, is implemented.
For example, in a mosquito, the steering commands might be implemented with
a special type of neural network, while in a robot, it might
be implemented with an algorithm which computes the angle between the
centroid of heat and where the robot is currently pointing. Likewise, a researcher
interested in thermal sensing might examine the mosquito to see
how it is able to detect temperature differences in such a small package;
electro-mechanical thermal sensors weigh close to a pound! At Level 3,
agents may have little or no commonality in their implementation.
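As a concrete illustration of Levels 2 and 3, here is a minimal sketch, assuming the thermal image arrives as a 2-D NumPy array of temperatures. The heat-weighted centroid is the Level 2 “black box,” and mapping its offset from the image center into a steering angle is one possible Level 3 implementation; the function names and the field-of-view mapping are assumptions for illustration, not taken from the text:

    import numpy as np

    def heat_centroid(thermal_image):
        """Level 2 black box: heat-weighted centroid of a 2-D thermal image."""
        rows, cols = np.indices(thermal_image.shape)
        total_heat = thermal_image.sum()
        cy = (rows * thermal_image).sum() / total_heat
        cx = (cols * thermal_image).sum() / total_heat
        return cx, cy

    def steering_angle(thermal_image, fov_degrees=60.0):
        """One Level 3 implementation: angle between the heat centroid and
        the current heading (assumed to be the image's center column)."""
        cx, _ = heat_centroid(thermal_image)
        width = thermal_image.shape[1]
        # Map the centroid's column offset from center into an angle
        # within the sensor's assumed field of view.
        offset = (cx - (width - 1) / 2) / width
        return offset * fov_degrees

    # Usage: a fake 5x5 image with a hot patch on the right side.
    image = np.ones((5, 5))
    image[2, 4] = 50.0   # hot spot
    print(f"steer {steering_angle(image):+.1f} degrees")  # positive = turn right

If the hot patch moves, the next sensory update shifts the centroid and a new steering command falls out of the same two boxes; swapping either function for a different implementation (say, a neural network) changes Level 3 without touching the Level 2 decomposition.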
It should be clear that Levels 1 and 2 are abstract enough to apply to any
agent. It is only at Level 3 that the differences between a robotic agent and
a biological agent really emerge. Some roboticists actively attempt to emulate
biology, reproducing the physiology and neural mechanisms. (Most
roboticists are familiar with biology and ethology, but don’t try to make exact
duplicates of nature.) Fig. 3.1 shows work at Case Western Reserve’s Bio-Bot
Laboratory under the direction of Roger Quinn, reproducing a cockroach on
a neural level.
In general, it may not be possible, or even desirable, to duplicate how a
biological agent does something. Most roboticists do not strive to precisely
replicate animal intelligence, even though many build creatures which resemble
animals, such as the insect-like Genghis shown in Fig. 3.2. But by focusing
on Level 2 of a computational theory of intelligence, roboticists can gain
insights into how to organize intelligence.


