Chapter

Self-Supervised Learning for Human-Significant Problems
1:22:50 - 1:27:25 (04:35)

This chapter discusses the possibility of applying self-supervised learning to human-significant problems such as autonomous vehicles and robotics. It raises questions about the limits of self-play and of neural network language models in the context of AGI.

Clips
The discussion covers important problems that can be solved using self-supervised approaches, such as self-play for autonomous vehicles, robotics applications, and simulation.
1:22:50 - 1:24:52 (02:02)
AGI
Summary

The discussion covers important problems that can be addressed with self-supervised approaches, such as self-play for autonomous vehicles, robotics applications, and simulation. It also questions the limits of self-play and neural networks in the context of AGI and highlights recent breakthroughs in natural language processing.

The belief that supporters of the opposition are not intelligent is common in politics.
1:24:52 - 1:27:25 (02:32)
Natural Language Generation
Summary

The belief that supporters of the opposition are not intelligent is common in politics. AI's natural language generation is easily flawed and limited to imitation, yet the fact that it passes for human conversation suggests either a lack of intelligence among people or that our daily actions may not be that complex.

Episode
#144 – Michael Littman: Reinforcement Learning and the Future of AI
Podcast
Lex Fridman Podcast