Description
Decision making is an important skill of autonomous agents. This tutorial
focuses on decision making under uncertainty
in sensing and acting, common in many
real-world systems. In particular, we will be
concerned with planning problems that optimize
how an agent should act given a model of its
environment and its task. Many such
planning problems can be formalized as Markov
decision processes (MDPs) or extensions of
this model. The tutorial will give
an introduction of the MDP model and some of
its standard solution methods. Subsequently,
it will extend the decision making problem to
deal with noisy and imperfect sensors, known
as partially observable MDPs (POMDPs). As
agents often do not exist in isolation,
attention will be given to the problem of
decision making under uncertainty with
multiple, interacting agents. Finally, several
emerging topics in single and multiagent
decision making under uncertainty will be
highlighted.
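
To give a flavor of one of the standard MDP solution methods mentioned above, the sketch below runs value iteration on a tiny two-state MDP. The states, actions, transition probabilities, and rewards are illustrative assumptions, not taken from the tutorial:

```python
# Value iteration on a tiny illustrative MDP (all numbers are hypothetical).
states = ["s0", "s1"]
actions = ["stay", "move"]

# P[(s, a)] = list of (next_state, probability); R[(s, a)] = immediate reward.
P = {
    ("s0", "stay"): [("s0", 1.0)],
    ("s0", "move"): [("s1", 0.9), ("s0", 0.1)],
    ("s1", "stay"): [("s1", 1.0)],
    ("s1", "move"): [("s0", 0.9), ("s1", 0.1)],
}
R = {
    ("s0", "stay"): 0.0,
    ("s0", "move"): 0.0,
    ("s1", "stay"): 1.0,
    ("s1", "move"): 0.0,
}

def q_value(s, a, V, gamma):
    # Expected immediate reward plus discounted value of successor states.
    return R[(s, a)] + gamma * sum(p * V[s2] for s2, p in P[(s, a)])

def value_iteration(gamma=0.9, tol=1e-6):
    # Repeatedly apply the Bellman optimality backup until values converge.
    V = {s: 0.0 for s in states}
    while True:
        delta = 0.0
        for s in states:
            new_v = max(q_value(s, a, V, gamma) for a in actions)
            delta = max(delta, abs(new_v - V[s]))
            V[s] = new_v
        if delta < tol:
            return V

V = value_iteration()
# Greedy policy with respect to the converged value function.
policy = {s: max(actions, key=lambda a: q_value(s, a, V, 0.9)) for s in states}
```

In this toy model the agent is rewarded only for staying in s1, so the greedy policy moves toward s1 and then stays there; POMDP solution methods generalize this kind of backup to distributions over states rather than known states.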