Efficient Offline Communication Policies for Factored Multiagent POMDPs

João V. Messias, Matthijs T. J. Spaan, and Pedro U. Lima. Efficient Offline Communication Policies for Factored Multiagent POMDPs. In Advances in Neural Information Processing Systems, pp. 1917–1925, 2011.

Abstract

Factored Decentralized Partially Observable Markov Decision Processes (Dec-POMDPs) form a powerful framework for multiagent planning under uncertainty, but optimal solutions require a rigid history-based policy representation. In this paper we allow inter-agent communication, which turns the problem into a centralized Multiagent POMDP (MPOMDP). We map belief distributions over state factors to an agent's local actions by exploiting structure in the joint MPOMDP policy. The key insight is that when sparse dependencies exist between the agents' decisions, the belief over an agent's local state factors is often sufficient for it to unequivocally identify its optimal action, and communication can be avoided. We formalize these notions by casting the problem into convex optimization form, and present experimental results illustrating the savings in communication that can be obtained.

BibTeX Entry

@InProceedings{Messias11nips,
  author =    {Jo{\~a}o V. Messias and Matthijs T. J. Spaan and Pedro U. Lima},
  title =     {Efficient Offline Communication Policies for Factored Multiagent {POMDPs}},
  booktitle = {Advances in Neural Information Processing Systems},
  year =      2011,
  pages =     {1917--1825}
}

Note: This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. In most cases, these works may not be reposted without the explicit permission of the copyright holder.
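The abstract's key idea, that a belief over an agent's local state factors can be enough to pin down its optimal action, can be illustrated with a small sketch. This is not the paper's convex-optimization formulation; it is a simplified vertex-enumeration check, under assumed conventions: a joint MPOMDP value function given as per-action payoff vectors over joint states (alpha vectors), and a local marginal `b1` over the agent's own factor. The names (`locally_sufficient_action`, the action labels) are hypothetical. Because the difference between two actions' values is linear in the joint belief, an action that is maximal at every vertex of the polytope of joint beliefs consistent with `b1` is maximal over the whole polytope.

```python
from itertools import product

import numpy as np


def locally_sufficient_action(alpha, b1, n_x2):
    """Return an action that is optimal for EVERY joint belief whose
    marginal over the local factor X1 equals b1, or None if no single
    action dominates (the agent would then need to communicate).

    alpha: dict mapping action -> payoff vector over joint states,
           indexed s = x1 * n_x2 + x2  (hypothetical encoding).
    b1:    local belief over X1 (length n_x1, sums to 1).
    n_x2:  number of values of the non-local factor X2.
    """
    n_x1 = len(b1)

    # Vertices of the polytope {b >= 0 : sum_{x2} b(x1, x2) = b1(x1)}:
    # one vertex per deterministic assignment f: X1 -> X2, placing all
    # of b1(x1) on the joint state (x1, f(x1)).
    vertices = []
    for f in product(range(n_x2), repeat=n_x1):
        v = np.zeros(n_x1 * n_x2)
        for x1, x2 in enumerate(f):
            v[x1 * n_x2 + x2] = b1[x1]
        vertices.append(v)

    # An action dominates iff its value is maximal at every vertex;
    # linearity then extends this to all consistent joint beliefs.
    for a_star, alpha_star in alpha.items():
        if all(
            alpha_star @ v
            >= max(alpha_a @ v for alpha_a in alpha.values()) - 1e-12
            for v in vertices
        ):
            return a_star
    return None


# Local factor decides the payoff: no communication needed.
alpha_local = {
    "a0": np.array([1.0, 1.0, 0.0, 0.0]),  # good when x1 = 0
    "a1": np.array([0.0, 0.0, 1.0, 1.0]),  # good when x1 = 1
}
print(locally_sufficient_action(alpha_local, [0.9, 0.1], n_x2=2))  # a0

# Payoff hinges on the other agent's factor: must communicate.
alpha_remote = {
    "a0": np.array([1.0, 0.0, 1.0, 0.0]),  # good when x2 = 0
    "a1": np.array([0.0, 1.0, 0.0, 1.0]),  # good when x2 = 1
}
print(locally_sufficient_action(alpha_remote, [0.9, 0.1], n_x2=2))  # None
```

In the first case the value gap between `a0` and `a1` depends only on the local marginal, so `a0` wins at every vertex and the agent can act without communicating; in the second case different vertices prefer different actions, which is the situation where the paper's approach would trigger a communication decision.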