Multiagent Interaction without Prior Coordination
Stefano Albrecht, Jacob Crandall, Somchaya Liemhetcharat, Organizers
Technical Report WS-15-11
51 pp.
Electronic Version of the Technical Report
(Download only): $10.00 (Special Introductory Price)
Softcover version of the technical report: $25.00 softcover
(For international orders, please check shipping options on the website before ordering.)
ISBN 978-1-57735-722-3
This workshop focuses on models and algorithms for multiagent interaction without prior coordination (MIPC). Interaction between agents is the defining attribute of multiagent systems, encompassing problems of planning in a decentralized setting, learning models of other agents, composing teams with high task performance, and resource-bounded communication and coordination. There is significant variety in the methodologies used to solve such problems, including symbolic reasoning about negotiation and argumentation, distributed optimization methods, and machine learning methods such as multiagent reinforcement learning. The majority of these well-studied methods depend on some form of prior coordination. Often, the coordination is at the level of problem definition. For example, learning algorithms may assume that all agents share a common learning method or prior beliefs, distributed optimization methods may assume specific structural constraints regarding the partition of state space or costs/rewards, and symbolic methods often make strong assumptions regarding norms and protocols. In realistic problems, these assumptions are easily violated, calling for new models and algorithms that specifically address the case of ad hoc interactions. Similar issues are becoming increasingly pertinent in human-machine interactions, where there is a need for intelligent adaptive behaviour and where assumptions regarding prior knowledge and communication are problematic.
Effective MIPC is most likely to be achieved by bringing together work from many different areas, including work on intelligent agents, machine learning, game theory, and operations research. For instance, game theorists have considered what happens to equilibria when common-knowledge assumptions are violated, and agent designers, faced with mixed teams of humans and agents in open environments, are developing variations on planning methods in response. The goal of this workshop is to bring together these diverse viewpoints in an attempt to consolidate the common ground and identify new lines of attack.