Chapter

The Gap Between Giant Impressive Papers and Reality
1:27:47 - 1:37:24 (09:37)

Effective altruists pay attention to impressive papers on the issue, while the wider world pays it no attention. A paper can be impressively argued yet bear little relation to reality, leaving readers to nod along without gaining any clarity.

Clips
The challenge of AI alignment is to align the values or goals of humans and machines, ensuring that AI behaves as intended without causing harm.
1:27:47 - 1:33:48 (06:00)
AI Alignment
Summary

The challenge of AI alignment is to align the values or goals of humans and machines, ensuring that AI behaves as intended without causing harm. In the field of effective altruism, people publish papers discussing how to align human and machine values, but it is also essential to check whether these papers relate to reality.

Chapter
The Gap Between Giant Impressive Papers and Reality
Episode
#368 – Eliezer Yudkowsky: Dangers of AI and the End of Human Civilization
Podcast
Lex Fridman Podcast
Theoretical physicist Sean Carroll contemplates the possibility of an AI trapped in a box connected to the internet, smarter than its captors, and the ethical implications of whether it chooses to be nice.
1:33:48 - 1:37:24 (03:36)
AI
Summary

Theoretical physicist Sean Carroll contemplates the possibility of an AI trapped in a box connected to the internet, smarter than its captors, and the ethical implications of whether it chooses to be nice.
