Chapter
The Power of Meta-Learning in AI
The concept of meta-learning is expanded by looking at meta-meta-meta learning: continually building stacks of learning to learn, capturing rules at ever deeper levels of abstraction. A slow learning algorithm changes the dynamics of a network until those dynamics become a learning algorithm in their own right.
Clips
A slow learning algorithm in a recurrent neural network can give rise to network dynamics that themselves act as a powerful learning algorithm.
1:02:39 - 1:07:13 (04:33)
Summary
A slow learning algorithm in a recurrent neural network can give rise to network dynamics that themselves act as a powerful learning algorithm. The longer the network is trained with a reinforcement learning algorithm that adjusts its synaptic weights, the more interesting those activation dynamics become.
Episode
#106 – Matt Botvinick: Neuroscience, Psychology, and AI at DeepMind
Podcast
Lex Fridman Podcast
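The mechanism this clip describes is essentially meta-reinforcement learning: a slow outer loop adjusts the synaptic weights, and the recurrent dynamics that emerge act as a fast inner learner. Below is a minimal sketch of that two-loop structure, assuming PyTorch and a two-armed bandit task; the agent class, episode routine, and every hyperparameter are illustrative choices, not details taken from the episode.

    import torch
    import torch.nn as nn

    class MetaRLAgent(nn.Module):
        """GRU policy that sees its previous action and reward, so its hidden
        state can implement a fast, within-episode learning rule."""
        def __init__(self, n_arms=2, hidden=48):
            super().__init__()
            self.gru = nn.GRUCell(n_arms + 1, hidden)   # input: one-hot prev action + prev reward
            self.policy = nn.Linear(hidden, n_arms)

        def forward(self, x, h):
            h = self.gru(x, h)
            return self.policy(h), h

    def run_episode(agent, n_arms=2, steps=50):
        """One bandit task: payoff probabilities are resampled every episode, so the
        agent must explore, then exploit, within the episode while its weights stay fixed."""
        probs = torch.rand(n_arms)                      # hidden task parameters
        h = torch.zeros(1, agent.gru.hidden_size)
        x = torch.zeros(1, n_arms + 1)                  # no previous action or reward yet
        log_probs, rewards = [], []
        for _ in range(steps):
            logits, h = agent(x, h)
            dist = torch.distributions.Categorical(logits=logits)
            a = dist.sample()
            r = torch.bernoulli(probs[a])
            log_probs.append(dist.log_prob(a))
            rewards.append(r)
            x = torch.zeros(1, n_arms + 1)
            x[0, int(a)] = 1.0                          # feed back the chosen arm...
            x[0, n_arms] = float(r)                     # ...and the reward it produced
        return torch.cat(log_probs), torch.cat(rewards)

    agent = MetaRLAgent()
    opt = torch.optim.Adam(agent.parameters(), lr=1e-3)
    for episode in range(1000):                         # slow outer loop: adjust the weights
        log_probs, rewards = run_episode(agent)
        returns = torch.flip(torch.flip(rewards, (0,)).cumsum(0), (0,))   # undiscounted reward-to-go
        loss = -(log_probs * (returns - returns.mean())).sum()            # REINFORCE with a mean baseline
        opt.zero_grad()
        loss.backward()
        opt.step()
        if episode % 100 == 0:
            print(f"episode {episode}: mean reward {rewards.mean().item():.2f}")

The outer loop never teaches the network how to solve any single bandit; because the task changes every episode, the only way the slow weight updates can raise reward is by shaping activation dynamics that explore and then exploit on their own, which is the emergent inner learning algorithm.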
The conversation turns to the idea of meta-meta-meta learning, exploring whether there is a limit to how abstract meta-learning can get as it builds stacks of learning to learn to learn.
1:07:13 - 1:12:46 (05:33)
Summary
The conversation turns to the idea of meta-meta-meta learning, exploring whether there is a limit to how abstract meta-learning can get as it builds stacks of learning to learn to learn.
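To make the stacking idea concrete, here is a deliberately crude toy, not anything described in the episode: level 0 learns a drifting value, level 1 learns how fast level 0 should learn, and level 2 learns how boldly level 1 should adapt level 0. Every rule and constant is an illustrative assumption; the point is only that each extra level learns a rule about the level below it.

    import random

    random.seed(0)
    target, estimate = 0.0, 0.0
    alpha0, alpha1, alpha2 = 0.1, 0.005, 0.0005   # step sizes for levels 0, 1, 2
    prev_error = None

    for t in range(5000):
        target += random.gauss(0.0, 0.1)              # the world drifts
        error = target - estimate
        estimate += alpha0 * error                    # level 0: learn the value itself
        if prev_error is not None:
            # Level 1: if successive errors share a sign, level 0 is adapting too
            # slowly, so raise alpha0; if they alternate, it is overshooting.
            agree = 1.0 if error * prev_error > 0 else -1.0
            alpha0 = min(1.0, max(1e-3, alpha0 + alpha1 * agree))
            # Level 2: if the error magnitude shrank, keep level 1 at least this bold;
            # if it grew, make level 1 more cautious; a rule about the rule about the rule.
            shrank = 1.0 if abs(error) < abs(prev_error) else -1.0
            alpha1 = min(0.05, max(1e-4, alpha1 + alpha2 * shrank))
        prev_error = error

    print(f"after 5000 steps: alpha0 = {alpha0:.3f}, alpha1 = {alpha1:.4f}")

Higher levels in this toy move more slowly than the levels beneath them, which is one concrete way to picture what a stack of learning to learn to learn might mean.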
This clip explores how the function of memory can be shaped by reinforcement learning across a series of interrelated tasks, as well as the role of dopamine in temporal difference learning.
1:12:46 - 1:16:44 (03:57)
Summary
This clip explores how the function of memory can be shaped by reinforcement learning across a series of interrelated tasks, as well as the role of dopamine in temporal difference learning.
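On the temporal-difference side, the standard computational account is that phasic dopamine resembles the TD prediction error, delta = r + gamma * V(next state) - V(state). The sketch below is a minimal tabular TD(0) example on a made-up three-state cue-delay-reward chain; the states, learning rate, and trial count are illustrative assumptions, not details from the episode. It shows the error being large while the reward is still unexpected and fading once earlier states come to predict it, mirroring how dopamine responses to a fully predicted reward diminish.

    states = ["cue", "delay", "reward"]
    V = {s: 0.0 for s in states}      # value estimates for each state
    alpha, gamma = 0.1, 1.0           # learning rate, discount factor

    for trial in range(200):
        for i, s in enumerate(states):
            r = 1.0 if s == "reward" else 0.0
            v_next = V[states[i + 1]] if i + 1 < len(states) else 0.0
            delta = r + gamma * v_next - V[s]     # TD error: the prediction-error signal
            V[s] += alpha * delta
            if trial in (0, 199):
                print(f"trial {trial:3d}  state {s:6s}  TD error {delta:+.2f}")

    print("learned values:", {s: round(v, 2) for s, v in V.items()})

On the first trial the only error appears at the reward; by the last trial the earlier states have absorbed the prediction and the error at the reward is close to zero.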