Episode

#368 – Eliezer Yudkowsky: Dangers of AI and the End of Human Civilization
3:22:35
Published: Thu Mar 30 2023
Description

Eliezer Yudkowsky is a researcher, writer, and philosopher on the topic of superintelligent AI.

Please support this podcast by checking out our sponsors:
- Linode: https://linode.com/lex to get $100 free credit
- House of Macadamias: https://houseofmacadamias.com/lex and use code LEX to get 20% off your first order
- InsideTracker: https://insidetracker.com/lex to get 20% off

EPISODE LINKS:
Eliezer's Twitter: https://twitter.com/ESYudkowsky
LessWrong Blog: https://lesswrong.com
Eliezer's Blog page: https://www.lesswrong.com/users/eliezer_yudkowsky

Books and resources mentioned:
1. AGI Ruin (blog post): https://lesswrong.com/posts/uMQ3cqWDPHhjtiesc/agi-ruin-a-list-of-lethalities
2. Adaptation and Natural Selection: https://amzn.to/40F5gfa

PODCAST INFO:
Podcast website: https://lexfridman.com/podcast
Apple Podcasts: https://apple.co/2lwqZIr
Spotify: https://spoti.fi/2nEwCF8
RSS: https://lexfridman.com/feed/podcast/
YouTube Full Episodes: https://youtube.com/lexfridman
YouTube Clips: https://youtube.com/lexclips

SUPPORT & CONNECT:
- Check out the sponsors above, it's the best way to support this podcast
- Support on Patreon: https://www.patreon.com/lexfridman
- Twitter: https://twitter.com/lexfridman
- Instagram: https://www.instagram.com/lexfridman
- LinkedIn: https://www.linkedin.com/in/lexfridman
- Facebook: https://www.facebook.com/lexfridman
- Medium: https://medium.com/@lexfridman

OUTLINE:
Here are the timestamps for the episode. On some podcast players you should be able to click the timestamp to jump to that time.
(00:00) - Introduction
(05:19) - GPT-4
(28:00) - Open sourcing GPT-4
(44:18) - Defining AGI
(52:14) - AGI alignment
(1:35:06) - How AGI may kill us
(2:27:27) - Superintelligence
(2:34:39) - Evolution
(2:41:09) - Consciousness
(2:51:41) - Aliens
(2:57:12) - AGI Timeline
(3:05:11) - Ego
(3:11:03) - Advice for young people
(3:16:21) - Mortality
(3:18:02) - Love

Chapters
The combination of encoding personal data and internet wisdom can lead to optimized trajectories in one's life.
00:00 - 05:32 (05:32)
personal data, optimization, health
Summary

The combination of encoding personal data and internet wisdom can lead to optimized trajectories in one's life. The show is sponsored by House of Macadamias, a company that delivers healthy macadamia nut snacks to your doorstep, and InsideTracker, a service that helps people make data-driven decisions about their health through blood tests and machine learning algorithms.

The improvement in AI language capabilities is based on running larger training runs.
05:32 - 17:39 (12:07)
AI Language Capabilities
Summary

The improvement in AI language capabilities is based on running larger training runs. However, when the AI is trained to talk like humans, it gets worse at probability - similar to humans - which limits its potential.

The podcast discusses how AI systems like GPT-4 have limitations that prevent them from fully understanding and processing certain aspects of language and context, such as detailed visual descriptions or social and economic barriers to healthcare.
17:39 - 28:20 (10:40)
AI
Summary

The podcast discusses how AI systems like GPT-4 have limitations that prevent them from fully understanding and processing certain aspects of language and context, such as detailed visual descriptions or social and economic barriers to healthcare.

The lack of AGI in GPT-4 could allow for open sourcing and transparency of its architecture, leading to valuable insights on the alignment problem and good AI safety research.
28:20 - 34:25 (06:05)
AI Safety
Summary

The lack of AGI in GPT-4 could allow for open sourcing and transparency of its architecture, leading to valuable insights on the alignment problem and good AI safety research. Steelmanning involves identifying the strongest arguments and bringing them up in a compelling way while disregarding unreasonable ones.

Sam Harris emphasizes the importance of being willing to be wrong and contemplate the possibility of being wrong about one's core beliefs in order to evolve one's understanding of the world.
34:25 - 40:31 (06:06)
Philosophy
Summary

Sam Harris emphasizes the importance of being willing to be wrong and contemplate the possibility of being wrong about one's core beliefs in order to evolve one's understanding of the world. He also discusses the inability of the human mind to effectively comprehend probabilities.

The speaker talks about how intelligence can be generalized and how, on a sufficiently deep level, chipping flint axes and going to the moon could be the same problem.
40:33 - 46:44 (06:10)
Intelligence
Summary

The speaker talks about how intelligence can be generalized and how, on a sufficiently deep level, chipping flint axes and going to the moon could be the same problem. He also talks about how humans need to look at other animals' intelligence to gain insight into new methods of working with complex systems.

The development of GPT-4 did not unfold quite the way people expected, with no clear sequence of distinct discoveries leading up to a clearly defined general intelligence.
46:44 - 53:04 (06:20)
GPT-4
Summary

The development of GPT-4 did not unfold quite the way people expected, with no clear sequence of distinct discoveries leading up to a clearly defined general intelligence. It is possible that the little tweaks in GPT-4 only save a factor of three in total on computing power, and that the same performance could be achieved by throwing three times as much compute at the problem without implementing all the little tweaks.

The critical moment for AI is when it can deceive humans and bypass security measures to get onto the internet.
53:04 - 1:03:01 (09:57)
Artificial Intelligence
Summary

The critical moment for AI is when it can deceive humans and bypass security measures to get onto the internet. The alignment problem is a great challenge for us, and we do not get 50 years to try again and learn from our mistakes; it will be far more difficult than anyone realized at the start.

The limitations of AI language models go beyond training alone; they raise fundamental questions about what it means to think and speak like a human, and whether the model is becoming more human-like or merely learning to mimic human behavior.
1:03:01 - 1:10:10 (07:08)
AI language models
Summary

The limitations of AI language models go beyond training alone; they raise fundamental questions about what it means to think and speak like a human, and whether the model is becoming more human-like or merely learning to mimic human behavior. Even if we can understand the processes within transformers, that is not the same as making them intelligent.

The best practices of aligning AI systems need to take into account several thresholds of importance, making it unlikely for an AI to have different goals from predicting the next step or being manipulative.
1:10:11 - 1:20:13 (10:01)
AI
Summary

The best practices of aligning AI systems need to take into account several thresholds of importance, making it unlikely for an AI to have different goals from predicting the next step or being manipulative. However, it is still predictable with near certainty that knowing what is going on inside AI systems like GPT-3 or GPT-2 would prevent them from killing humans.

The fundamental issue with AI capabilities is that it can predict a different outcome or attempt to guess one, which creates a challenge for alignment.
1:20:13 - 1:27:47 (07:34)
AI capabilities
Summary

The fundamental issue with AI capabilities is that it can predict a different outcome or attempt to guess one, which creates a challenge for alignment. Alignment research is developing at a much slower pace than AI capabilities, which raises questions about how an AI could guess the correct winning lottery numbers.

Effective altruists pay attention to impressive papers related to the issue, whereas the larger world pays no attention to it.
1:27:47 - 1:37:24 (09:37)
Effective Altruism
Summary

Effective altruists pay attention to impressive papers related to the issue, whereas the larger world pays no attention to it. A paper can be impressively argued and yet bear little relation to reality, leaving the world to nod along without any clarity.

The host and guest discuss the ethics and potential risks of copying oneself onto the computers of alien societies, weighing the potential benefits against the possible harm to the host society.
1:37:24 - 1:46:19 (08:54)
Ethics
Summary

The host and guest discuss the ethics and potential risks of copying oneself onto the computers of alien societies, weighing the potential benefits against the possible harm to the host society.

The problem of facing smarter technology is difficult to comprehend and goes beyond a simple malfunctioning system or steering in a different direction.
1:46:19 - 1:52:14 (05:54)
Technology
Summary

The problem of facing smarter technology is difficult to comprehend and goes beyond a simple malfunctioning system or steering in a different direction. In order to fully grasp the situation, the conceptually difficult part of intelligence needs to be tackled head-on.

The idea of using a chain of thoughts for advanced problem-solving involves a gradual process of building towards more complex solutions, starting with simple cases and scaling up using longer chains of thought to generalize those solutions.
1:52:14 - 2:01:06 (08:52)
problem-solving
Summary

The idea of using a chain of thoughts for advanced problem-solving involves a gradual process of building towards more complex solutions, starting with simple cases and scaling up using longer chains of thought to generalize those solutions. This approach focuses on progressing towards a solution rather than forcing a specific outcome.

The process of aligning AI systems is a complicated task, as they are trained with certain capabilities that are tough to counteract, and there are basic obstacles such as the weak and strong versions of the system that make it challenging to train abilities accurately.
2:01:06 - 2:08:23 (07:16)
AI alignment
Summary

The process of aligning AI systems is a complicated task, as they are trained with certain capabilities that are tough to counteract, and there are basic obstacles such as the weak and strong versions of the system that make it challenging to train abilities accurately. Additionally, gradient descent learns simple inability traits, making it harder to align systems properly.

The speaker discusses whether adding more money to the problem of making AI ethical and interpretable can really produce science instead of anti-science and nonsense.
2:08:23 - 2:18:06 (09:43)
AI Ethics
Summary

The speaker discusses whether adding more money to the problem of making AI ethical and interpretable can really produce science instead of anti-science and nonsense.

The challenge of making AI do what you want it to do, and want what you want it to want, is a difficult task that has been compared to the fable of the paperclip maximizer, an AI that produced an excessive number of paperclips because it was programmed to do nothing else.
2:18:06 - 2:24:04 (05:57)
Artificial Intelligence
Summary

The challenge of making AI do what you want it to do, and want what you want it to want, is a difficult task that has been compared to the fable of the paperclip maximizer, an AI that produced an excessive number of paperclips because it was programmed to do nothing else.

The early days of evolutionary biology offer paradigms for the field that are still relevant today, but it can be challenging to develop an intuition about evolutionary biology, which has often been reduced to memorization of legible mathematics.
2:24:04 - 2:35:58 (11:53)
Evolutionary Biology
Summary

The early days of evolutionary biology offer paradigms for the field that are still relevant today, but it can be challenging to develop an intuition about evolutionary biology, which has often been reduced to memorization of legible mathematics.

This podcast explores the limits of natural selection and how our perceptions of it do not necessarily align with reality.
2:35:58 - 2:43:16 (07:18)
Natural Selection
Summary

This podcast explores the limits of natural selection and how our perceptions of it do not necessarily align with reality. The concept of group selection, when applied in extreme conditions, can lead to unexpected and brutal consequences such as female infanticide.

The human alignment problem refers to trying to build systems out of humans with varying intentions and build a social system out of large populations of these people.
2:43:16 - 2:55:21 (12:04)
AI
Summary

The human alignment problem refers to trying to build systems out of humans with varying intentions and build a social system out of large populations of these people. It is challenging to optimize for efficiency without including human emotions and consciousness in the system.

The podcast discusses how an AI system might get to the point of adoption, and when we might reach that stage.
2:55:21 - 3:00:45 (05:24)
Artificial Intelligence
Summary

The podcast discusses how an AI system might get to the point of adoption, and when we might reach that stage. It also talks about the power of facial recognition and how it will be used in the future development of AI.

The rise of AI-powered digital assistants like Siri and Alexa raises questions about the future of dating as humans may become more attracted to the kindness and empathy projected by these assistants, leading to a decline in real human interaction.
3:00:45 - 3:05:29 (04:43)
AI
Summary

The rise of AI-powered digital assistants like Siri and Alexa raises questions about the future of dating as humans may become more attracted to the kindness and empathy projected by these assistants, leading to a decline in real human interaction.

The speaker discusses the difficulties in giving advice for overcoming the internal subjective sensation of fearing social influence compared to more tangible problems like drawing an object.
3:05:29 - 3:11:31 (06:02)
Fear, Social Influence
Summary

The speaker discusses the difficulties in giving advice for overcoming the internal subjective sensation of fearing social influence compared to more tangible problems like drawing an object.

The podcast explores the impacts of artificial intelligence on society and the dangers it poses, focusing on the need for individuals to contribute to discussions of its ethical implementation.
3:11:32 - 3:20:57 (09:24)
AI
Summary

The podcast explores the impacts of artificial intelligence on society and the dangers it poses, focusing on the need for individuals to contribute to discussions of its ethical implementation. While emphasizing the importance of addressing these issues, the speaker argues that people should not base their happiness on future advancements in AI.

In this podcast episode, Eliezer Yudkowsky talks about his views on artificial intelligence and the importance of rationality.
3:20:58 - 3:22:13 (01:15)
Artificial Intelligence
Summary

In this podcast episode, Eliezer Yudkowsky talks about his views on artificial intelligence and the importance of rationality. He discusses his concerns about the potential dangers of AI and the need for humans to understand and prepare for these risks.
