Chapter

The Fundamental Difficulty of AI Capabilities and Alignment
1:20:13 - 1:27:47 (07:34)
Episode
#368 – Eliezer Yudkowsky: Dangers of AI and the End of Human Civilization
Podcast
Lex Fridman Podcast

The fundamental difficulty is that AI capabilities can leap ahead, predicting or guessing outcomes in ways that create a challenge for alignment. Alignment research is developing at a much slower pace than capabilities, raising the question of whether getting alignment right amounts to anything more than guessing the winning lottery numbers.

Clips
The alignment problem is a key challenge for AGI, but can weak AI help address it?
1:20:13 - 1:24:14 (04:00)
Artificial General Intelligence (AGI)
Summary

The alignment problem is a key challenge for AGI, but can weak AI help address it? With a human-in-the-loop approach and simulated exploration, it may be possible to take incremental steps toward an aligned AGI.

The podcast host explores whether a model with weaker AGI capabilities could, through simulation and computation, find all the ways critical points can go wrong.
1:24:14 - 1:26:09 (01:55)
AGI
Summary

The podcast host explores whether a model with weaker AGI capabilities could, through simulation and computation, find all the ways critical points can go wrong.

The difficulty lies in understanding the large leap between weak AGI and strong AGI, and why humans cannot build intuitions for AI alignment and safety research at scale.
1:26:09 - 1:27:47 (01:38)
Artificial Intelligence
Summary

The difficulty lies in understanding the large leap between weak AGI and strong AGI, and why humans cannot build intuitions for AI alignment and safety research at scale. Meanwhile, AI capabilities are advancing rapidly, leaving alignment research stagnant by comparison.
