Chapter

Open Sourcing GPT-4: Insights for AI Safety Research
28:20 - 34:25 (06:05)

Because GPT-4 is not AGI, its architecture could be open sourced and made transparent, yielding valuable insights into the alignment problem and enabling good AI safety research. Steelmanning involves identifying an opponent's strongest arguments and presenting them compellingly while setting aside unreasonable ones.

Clips
Open sourcing the architecture, research, and investigation of GPT-4 could lead to a better understanding of the alignment problem and enable AI safety research.
28:20 - 31:07 (02:47)
AI Safety
Summary

Open sourcing the architecture, research, and investigation of GPT-4 could lead to a better understanding of the alignment problem and enable AI safety research. Keeping AI closed may be better suited for powerful systems whose capabilities are not yet understood.

Chapter
Open Sourcing GPT-4: Insights for AI Safety Research
Episode
#368 – Eliezer Yudkowsky: Dangers of AI and the End of Human Civilization
Podcast
Lex Fridman Podcast
Sam Harris discusses the concept of "steelmanning" arguments, where the listener tries to find the most powerful or reasonable version of their opponent's argument.
31:07 - 34:25 (03:17)
Argumentation
Summary

Sam Harris discusses the concept of "steelmanning" arguments, where the listener tries to find the most powerful or reasonable version of their opponent's argument. He emphasizes the importance of distinguishing between the reasonable and the "whack" interpretations of a speaker's ideas.
