Partially observable Markov decision process (POMDP)

Partially observable Markov decision processes are a generalization of MDPs that allow for uncertainty in the agent's observations. They model an agent's decision process where the system dynamics are determined by an MDP, but the agent cannot directly observe the underlying state. Instead, it must maintain a belief state — a probability distribution over the possible states — updated using a sensor model and the underlying MDP's transition model, and choose actions based on that belief.
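The belief-state update described above is a Bayes filter: the new belief over states is proportional to the observation likelihood times the transition-weighted prior. A minimal sketch in Python, using a hypothetical two-state POMDP with made-up transition matrix `T` and sensor matrix `O` (both are illustrative assumptions, not from any specific course):

```python
import numpy as np

# Hypothetical 2-state POMDP, dynamics for one fixed action a:
# T[s, s'] = P(s' | s, a)  -- transition model
# O[s', o] = P(o | s', a)  -- sensor (observation) model
T = np.array([[0.9, 0.1],
              [0.1, 0.9]])
O = np.array([[0.85, 0.15],
              [0.15, 0.85]])

def belief_update(b, obs, T, O):
    """Bayes-filter update: b'(s') ∝ O(o | s') * sum_s T(s' | s) b(s)."""
    predicted = T.T @ b              # predict: marginalize over the previous state
    unnorm = O[:, obs] * predicted   # correct: weight by observation likelihood
    return unnorm / unnorm.sum()     # normalize back to a probability distribution

b = np.array([0.5, 0.5])             # uniform prior over the hidden state
b = belief_update(b, obs=0, T=T, O=O)
print(b)                             # belief now concentrates on state 0
```

Because the belief is a sufficient statistic for the observation history, a POMDP can be viewed as an MDP over belief states, which is how optimal actions are defined.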

1 course covers this concept

CS 294-40: Learning for robotics and control

UC Berkeley

Fall 2008

This advanced course focuses on applications of machine learning in robotics and control. It covers a wide range of topics including Markov decision processes, control theory, estimation methods, and robotics principles. Recommended for graduate students.
