How Should Intelligence Be Abstracted in AI Research: MDPs, Symbolic Representations, Artificial Neural Networks, or _____? Papers from the AAAI Symposium
Sebastian Risi, Joel Lehman, Jeff Clune, Cochairs
November 15–17, 2013, Arlington, Virginia
Technical Report FS-13-02
152 pp., $35.00
ISBN 978-1-57735-640-0
Artificial intelligence consists of many smaller communities with diverse approaches toward the common goal of creating computational intelligence. While these communities are unified by that goal, and most take inspiration from human or biological intelligence, they are driven by different abstractions of intelligence (for example, symbolic manipulation, connectionist networks, or embodiment) and of the processes that might produce it (such as reinforcement learning, evolutionary computation, or statistical machine learning). Because the particular choice of abstraction profoundly shapes how a problem is framed, reaching the ambitious goals of AI may be aided by carefully examining the promise, drawbacks, and motivations of popular abstractions.
Subfields of artificial intelligence often diversify from a core idea. For example, deep learning networks, models in computational neuroscience, and neuroevolution all take inspiration from biological neural networks as a potential pathway to AI. Most researchers pursue the subfield (and by extension, the abstraction) they see as most promising for leading to AI, which naturally results in significant debate and disagreement among researchers over which abstraction is best. A better understanding and a less polarized debate may be facilitated by a clear presentation and discussion of these abstractions by their most knowledgeable proponents.