Dear colleagues,
We cordially invite you to attend the 7th MSDM workshop, which is held in conjunction with AAMAS 2012 (the 11th International Joint Conference on Autonomous Agents and Multiagent Systems), in Valencia, Spain.
It is an excellent opportunity to learn about the latest advances in multiagent sequential decision-making research, and to actively discuss a variety of exciting ongoing work. This year, we are extremely fortunate to have Prof. Makoto Yokoo as an invited speaker; he will give a talk on "Repeated Games with Private Monitoring: A New Frontier for POMDP Researchers".
It will take place on June 5, 2012, preceding the AAMAS conference. Please join us and the many others in this community. We look forward to seeing you there!
The MSDM 2012 Organizers
CALL FOR PARTICIPATION
AAMAS 2012 Workshop
Multiagent Sequential Decision Making Under Uncertainty (MSDM)
The Seventh Workshop in the MSDM series
June 5, 2012, 9:00am - 6:30pm
Location & Organization
The 7th MSDM workshop is held in conjunction with AAMAS-2012 (the 11th International Joint Conference on Autonomous Agents and Multiagent Systems), in Valencia, Spain. It will take place on June 5, 2012, preceding the AAMAS conference.
Attending MSDM & AAMAS 2012
To register, please visit the following link:
In sequential decision making, an agent's objective is to choose actions, based on its observations of the world, in such a way that it expects to optimize its performance measure over the course of a series of such decisions. In environments where action consequences are non-deterministic or observations incomplete, Markov decision processes (MDPs) and partially observable MDPs (POMDPs) serve as the basis for principled approaches to single-agent sequential decision making. Extending these models to systems of multiple agents has become an increasingly active area of research over the past decade, and a variety of models have emerged (e.g., the MMDP, Dec-POMDP, MTDP, I-POMDP, and POSG). The high computational complexity of these models has driven researchers to develop multiagent planning and learning methods that exploit the structure present in agents' interactions, methods that provide efficient approximate solutions, and methods that distribute computation among the agents.
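To make the single-agent baseline concrete, the following is a minimal value-iteration sketch for a tiny MDP. The two-state, two-action model (transition tensor P, reward matrix R, discount gamma) is purely illustrative and not drawn from any paper at the workshop.

```python
import numpy as np

# Illustrative 2-state, 2-action MDP: P[s, a, s'] transition probabilities,
# R[s, a] immediate rewards. Numbers are arbitrary, for demonstration only.
P = np.array([
    [[0.9, 0.1], [0.2, 0.8]],   # transitions from state 0
    [[0.5, 0.5], [0.1, 0.9]],   # transitions from state 1
])
R = np.array([
    [1.0, 0.0],                 # rewards in state 0
    [0.0, 2.0],                 # rewards in state 1
])
gamma = 0.95                    # discount factor

V = np.zeros(2)
for _ in range(1000):
    # Bellman backup: Q[s, a] = R[s, a] + gamma * sum_s' P[s, a, s'] * V[s']
    Q = R + gamma * (P @ V)
    V_new = Q.max(axis=1)
    if np.abs(V_new - V).max() < 1e-8:
        break
    V = V_new

policy = Q.argmax(axis=1)       # greedy policy w.r.t. the converged values
```

The multiagent models discussed at the workshop generalize exactly this backup: partial observability replaces states with beliefs, and multiple agents make the joint backup exponentially harder.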
The MSDM workshop serves several purposes. The primary purpose is to bring together researchers in the field of MSDM to present and discuss new work and preliminary ideas. Moreover, we aim to identify recent trends, to establish important directions for future research, and to discuss some of the topics mentioned below, such as challenging application areas (e.g., cooperative robotics, distributed sensor and/or communication networks, decision support systems) and suitable evaluation methodologies. Finally, a goal of the workshop is to make the field more accessible to newcomers, by seeking to bring order to the large number of models and methods that have been introduced over the last decade.
Invited Speaker
Prof. Makoto Yokoo (Kyushu University)
Short Bio: Makoto Yokoo received the B.E. and M.E. degrees in electrical engineering from the University of Tokyo, Japan, in 1984 and 1986, respectively, and the Ph.D. degree in information and communication engineering from the University of Tokyo in 1995. From 1986 to 2004, he was a research scientist at Nippon Telegraph and Telephone Corporation (NTT). He is currently a Professor of Information Science and Electrical Engineering at Kyushu University. His research interests include multi-agent systems, constraint satisfaction, and mechanism design among self-interested agents.
He served as general co-chair of the International Conference on Autonomous Agents and Multiagent Systems in 2007 (AAMAS-2007), and as program co-chair of AAMAS-2003. He is currently the president of the International Foundation for Autonomous Agents and Multiagent Systems (IFAAMAS). He received the ACM SIGART Autonomous Agents Research Award in 2004 and the IFAAMAS influential paper award in 2010.
Title: Repeated Games with Private Monitoring: A New Frontier for POMDP Researchers
Repeated games provide a formal and quite general framework for examining why self-interested agents cooperate in a long-term relationship. Formally, repeated games refer to a class of models in which the same set of agents repeatedly play the same game, called the 'stage game', over a long (typically infinite) time horizon. The case where agents can perfectly observe each other's actions (perfect monitoring) has been extensively studied; there are rich theoretical results, including the well-known folk theorem, which shows that any cooperative outcome is possible under several mild assumptions. However, in reality, long-term relationships are often plagued by imperfect monitoring, i.e., agents cannot directly observe each other's actions; instead, they observe signals that imperfectly reveal what actions have been taken.
Repeated games with imperfect monitoring are classified into two categories: the case of public monitoring, where players commonly observe a public signal, and the case of private monitoring, where each player observes a signal that is not observable to others. The imperfect public monitoring case shares many features with the perfect monitoring case, and we now have a good understanding of how it works. In contrast, the imperfect private monitoring case is still in its infancy. However, quite recently, we found that there exists a strong relationship between the equilibrium analysis in the imperfect private monitoring case and POMDP planning. In this talk, I will sketch the main results on repeated games and describe how we can utilize POMDP techniques to analyze equilibria in the imperfect private monitoring case.
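As a minimal illustration of the private-monitoring setting described above (the game, payoffs, and noise level below are illustrative assumptions, not taken from the talk): each player picks an action, receives a stage payoff, and privately observes a noisy signal of the opponent's action.

```python
import random

# Prisoner's-dilemma stage game: PAYOFF[(a1, a2)] = (row payoff, column payoff).
# Payoff values and the 10% signal-flip probability are illustrative choices.
PAYOFF = {
    ("C", "C"): (2, 2), ("C", "D"): (0, 3),
    ("D", "C"): (3, 0), ("D", "D"): (1, 1),
}
NOISE = 0.1  # probability that a player's private signal is flipped

def private_signal(action, rng):
    """Return a noisy, private observation of the opponent's action."""
    flipped = {"C": "D", "D": "C"}[action]
    return flipped if rng.random() < NOISE else action

def play_round(a1, a2, rng):
    """One stage: both payoffs, plus each player's private signal of the other."""
    r1, r2 = PAYOFF[(a1, a2)]
    return r1, r2, private_signal(a2, rng), private_signal(a1, rng)

rng = random.Random(0)
r1, r2, sig1, sig2 = play_round("C", "D", rng)
# sig1 is player 1's (possibly incorrect) observation of player 2's action.
# Maintaining a belief over the opponent's behavior from such noisy signals
# is precisely the kind of inference that POMDP techniques address.
```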
Accepted Papers
POMDPs in OpenMarkov and ProModelXML
Manuel Arias, Francisco Javier Díez, Miguel Ángel Palacios-Alonso, Mar Yebra, and Jorge Fernández
Solving Finite Horizon Decentralized POMDPs by Distributed Reinforcement Learning
Bikramjit Banerjee, Jeremy Lyle, Landon Kraemer, and Rajesh Yellamraju
Planning Delayed-Response Queries and Transient Policies under Reward Uncertainty
Robert Cohn, Edmund Durfee, and Satinder Singh
Improved Solution of Decentralized MDPs through Heuristic Search
Jilles Dibangoye, Christopher Amato, and Arnaud Doniec
Automated Equilibrium Analysis of Repeated Games with Private Monitoring: A POMDP Approach
Yongjoon Joe, Atsushi Iwasaki, Michihiro Kandori, Ichiro Obara and Makoto Yokoo
Exploiting Sparse Interactions for Optimizing Communication in Dec-MDPs
Francisco S. Melo, Matthijs Spaan, and Stefan Witwicki
Tree-based Pruning for Multiagent POMDPs with Delayed Communication
Frans Oliehoek and Matthijs Spaan
Strategic Behaviour Under Constrained Autonomy
Prioritized Shaping of Models for Solving DEC-POMDPs
Pradeep Varakantham, William Yeoh, Prasanna Velagapudi, Katia Sycara, and Paul Scerri
Coordinated Multi-Agent Learning for Decentralized POMDPs
Chongjie Zhang and Victor Lesser
Multiagent sequential decision making comprises (1) problem representation, (2) planning, (3) coordination, and (4) learning. The MSDM workshop addresses this full range of aspects. Topics of particular interest include:
- Challenging conventional assumptions
...model specification: where do the models come from?
...what is an appropriate level of abstraction for decision making?
- Novel representations, algorithms and complexity results
- Comparisons of algorithms
- Relationships between models and their assumptions
- Decentralized vs. centralized planning approaches
- Online vs. offline planning
- Communication and coordination during execution
- Dealing with...
...large numbers of agents
...large numbers of / continuous states, observations and actions
...long decision horizons.
- (Reinforcement) learning in partially observable multiagent systems
- Cooperative, competitive, and self-interested agents
- Application domains
- Benchmarks and evaluation methodologies
- Standardization of software
- High-level principles in MSDM: past trends and future directions
Organizers
Prashant Doshi University of Georgia
Stefan Witwicki INESC-ID, Instituto Superior Técnico
Jun-young Kwak University of Southern California
Frans A. Oliehoek Maastricht University
Akshat Kumar University of Massachusetts Amherst
Program Committee
Christopher Amato Aptima, Inc.
Raphen Becker Google
Daniel Bernstein University of Massachusetts Amherst
Aurélie Beynier University Pierre and Marie Curie (Paris 6)
Alan Carlin University of Massachusetts Amherst
Brahim Chaib-Draa Laval University
Georgios Chalkiadakis Technical University of Crete
François Charpillet INRIA
Ed Durfee University of Michigan
Alessandro Farinelli University of Verona
Alberto Finzi Universita di Napoli
Claudia Goldman GM Advanced Technical Center Israel
Michail Lagoudakis Technical University of Crete
Janusz Marecki IBM T.J. Watson Research Center
Francisco S. Melo INESC-ID Lisboa
Hala Mostafa BBN Technologies
Abdel-Illah Mouaddib Université de Caen
Enrique Munoz De Cote INAOE, Mexico
Brenda Ng Lawrence Livermore National Laboratory
Praveen Paruchuri Carnegie Mellon University
David Pynadath University of Southern California
Xia Qu University of Georgia
Zinovi Rabinovich Bar-Ilan University
Anita Raja University of North Carolina at Charlotte
Paul Scerri Carnegie Mellon University
Jiaying Shen SRI International, Inc.
Matthijs Spaan Delft University of Technology
Katia Sycara Carnegie Mellon University
Karl Tuyls Maastricht University
Pradeep Varakantham Singapore Management University
Jianhui Wu Amazon
Makoto Yokoo Kyushu University
Chongjie Zhang University of Massachusetts Amherst
Shlomo Zilberstein University of Massachusetts Amherst
TEAMCORE Research Group
Computer Science Department
University of Southern California