Title :
Do agents need understanding?
Author_Institution :
MIT Media Lab., Heidelberg, Germany
Abstract :
There are several important ideas and questions that arise within AI work on agents. I address one of these questions by asking how much "human-like" understanding is necessary for a useful agent. One of the lessons of the new wave of AI research in the late 1980s and early 1990s was a greater appreciation for the artifacts introduced by a priori assumptions about the description of problems. Systems of surprising effectiveness, flexibility and simplicity could solve apparently complicated problems by dropping the assumptions about identity, reference and generality imported from their common-sense linguistic formulation. By using encodings or representations that are simultaneously more specialized in some ways (by being task-specific) and more general in other ways (by their systematic regard for embeddedness as a design constraint), these systems introduced a different way of thinking about representation and intelligence. The intellectual contribution of the idea of an intelligent agent lies in a similar attitude toward the tasks of helping us deal with the mass of information and responsibilities around us.
Keywords :
"Humans","Law","Legal factors","Laboratories","Solids","Predictive models","Particle measurements","Clocks","Cleaning","Page description languages"
Journal_Title :
IEEE Expert