Chapter

Modeling Uncertainty in MDPs
17:28 - 21:36 (04:07)

Markov Decision Processes (MDPs) model uncertainty about the future by allowing each action to have several possible outcomes: taking an action changes the state of the world, but not deterministically. In this way, MDPs account for uncertainty in how the world evolves, even when the outcome of an action is not fully known in advance.
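
As a concrete illustration (a minimal sketch with made-up states and actions, not an example from the episode), the transition model of an MDP can be written as a table of outcome probabilities, and stepping the world means sampling from that table rather than computing a single deterministic successor:

import random

# Toy transition model: transitions[state][action] is a probability
# distribution over possible next states (illustrative names only).
transitions = {
    "sunny": {"walk": {"sunny": 0.8, "rainy": 0.2},
              "drive": {"sunny": 0.9, "rainy": 0.1}},
    "rainy": {"walk": {"sunny": 0.3, "rainy": 0.7},
              "drive": {"sunny": 0.5, "rainy": 0.5}},
}

def step(state, action):
    # Taking an action does not determine the next state; it only fixes
    # the distribution that the next state is sampled from.
    dist = transitions[state][action]
    next_states, probs = zip(*dist.items())
    return random.choices(next_states, weights=probs)[0]

state = "sunny"
for action in ("walk", "drive", "walk"):
    state = step(state, action)
    print(action, "->", state)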

Clips
The focus of neural networks is not just on present state uncertainty, but on uncertainty in how future events will unfold.
17:28 - 20:07 (02:39)
Neural Networks
Summary

The focus of neural networks is not just on present state uncertainty, but on uncertainty in how future events will unfold. By forming automated abstractions, machine learning engineers can build algorithms with better models of how an uncertain future will unfold.

Chapter
Modeling Uncertainty in MDPs
Episode
Leslie Kaelbling: Reinforcement Learning, Planning, and Robotics
Podcast
Lex Fridman Podcast
This clip is an introduction to Markov decision processes, a model for predicting future outcomes based on the current state of a system and the probabilistic changes that might result from taking different actions.
20:07 - 21:36 (01:28)
Markov Decision Processes
Summary

This clip is an introduction to Markov decision processes, a model for predicting future outcomes based on the current state of a system and the probabilistic changes that might result from taking different actions. It also discusses the problem of partially observed systems and how partially observable Markov decision processes (POMDPs) address it.
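
As a brief aside (standard textbook POMDP material rather than anything stated in the clip), a POMDP handles partial observability by maintaining a belief state b, a probability distribution over the hidden states, which is updated after taking action a and receiving observation o:

b'(s') = \frac{O(o \mid s', a) \sum_{s} T(s' \mid s, a)\, b(s)}{\Pr(o \mid b, a)}

Here T is the transition model, O is the observation model, and the denominator is a normalizing constant; the agent then plans over beliefs rather than over the unobserved state itself.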
