Chapter: The Fundamental Difficulty of AI Capabilities and Alignment
The fundamental issue with AI capabilities is that a system can predict an outcome or attempt to guess one, which creates a challenge for alignment. Alignment research is developing at a much slower pace than AI capabilities, raising questions such as whether an AI could guess the correct winning lottery numbers.
Clips
The alignment problem is a key challenge for AGI, but can the use of weak AI help address it?
1:20:13 - 1:24:14 (04:01)
Summary
The alignment problem is a key challenge for AGI, but can the use of weak AI help address it? With a human-in-the-loop approach and simulated exploration, it may be possible to take incremental steps toward an aligned AGI.
Chapter: The Fundamental Difficulty of AI Capabilities and Alignment
Episode: #368 – Eliezer Yudkowsky: Dangers of AI and the End of Human Civilization
Podcast: Lex Fridman Podcast
The podcast host explores the possibility of a model with weaker AGI capabilities finding, through simulation and computation, all the ways critical points can go wrong.
1:24:14 - 1:26:09 (01:55)
Summary
The podcast host explores the possibility of a model with weaker AGI capabilities finding, through simulation and computation, all the ways critical points can go wrong.
The difficulty lies in understanding the big leap between weak AGI and strong AGI, and why even humans working at scale cannot build intuitions for AI alignment safety research.
1:26:09 - 1:27:47 (01:38)
Summary
The difficulty lies in understanding the big leap between weak AGI and strong AGI, and why even humans working at scale cannot build intuitions for AI alignment safety research. Meanwhile, AI capabilities are advancing rapidly, leaving alignment research to stagnate by comparison.