Episode

Stuart Russell: Long-Term Future of AI
Description
Stuart Russell is a professor of computer science at UC Berkeley and a co-author of Artificial Intelligence: A Modern Approach, the book that introduced me and millions of other people to AI. A video version is available on YouTube. For more information about this podcast, go to https://lexfridman.com/ai or connect with @lexfridman on Twitter, LinkedIn, Facebook, or YouTube, where you can watch the video versions of these conversations.
Chapters
Game-playing programs must use meta-reasoning to explore only a small part of the search tree, as the tree is too vast to explore in its entirety.
00:00 - 06:21 (06:21)
Summary
Game-playing programs must use meta-reasoning to explore only a small part of the search tree, as the tree is too vast to explore in its entirety.
Episode: Stuart Russell: Long-Term Future of AI
Podcast: Lex Fridman Podcast
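The meta-reasoning idea mentioned above (deciding which computations are worth doing, so that most of the tree is never visited) is not spelled out in this summary, but a classic concrete instance is alpha-beta pruning. The sketch below is illustrative only, using a hand-built toy tree rather than a real chess position; it counts how many leaves are actually evaluated out of the total to show that whole subtrees are skipped once they are provably irrelevant.

```python
# Minimal sketch: alpha-beta pruning on a toy game tree.
# A node is either a leaf value (int) or a list of child nodes.
# `visited` records which leaves are actually evaluated, to show
# that only part of the tree is explored.

def alphabeta(node, alpha, beta, maximizing, visited):
    if isinstance(node, int):              # leaf: static evaluation
        visited.append(node)
        return node
    if maximizing:
        value = float("-inf")
        for child in node:
            value = max(value, alphabeta(child, alpha, beta, False, visited))
            alpha = max(alpha, value)
            if alpha >= beta:              # cutoff: remaining siblings can't matter
                break
        return value
    else:
        value = float("inf")
        for child in node:
            value = min(value, alphabeta(child, alpha, beta, True, visited))
            beta = min(beta, value)
            if alpha >= beta:              # cutoff
                break
        return value

# Depth-2 tree: three moves for the maximizer, three replies each.
tree = [[3, 12, 8], [2, 4, 6], [14, 5, 2]]
visited = []
best = alphabeta(tree, float("-inf"), float("inf"), True, visited)
print(best, len(visited))  # → 3 7  (7 of 9 leaves evaluated)
```

Here the second subtree is cut off after its first leaf: once the minimizer can force a value of 2 there, that move can never beat the already-established value of 3, so its remaining replies are never examined. Real programs add deeper meta-reasoning on top (move ordering, time allocation), but the principle of proving subtrees irrelevant is the same.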
The idea that grandmasters can instantly recognize the best move and the value of a position is overrated: early in the search tree there are many possibilities to consider, though some moves can be quickly recognized as terrible.
06:23 - 12:16 (05:53)
Summary
The idea that grandmasters can instantly recognize the best move and the value of a position is overrated: early in the search tree there are many possibilities to consider, though some moves can be quickly recognized as terrible.
Although AI chess programs may be impressive, they pose no threat of taking over the world: they can see only what is on the chessboard, unlike the real world, whose many weakly connected aspects are far more complex.
12:16 - 21:34 (09:18)
Summary
Although AI chess programs may be impressive, they pose no threat of taking over the world: they can see only what is on the chessboard, unlike the real world, whose many weakly connected aspects are far more complex.
The first self-driving car drove on a freeway in 1987, with a classical architecture that used machine vision to detect other cars, pedestrians, road signs, and white lines.
21:34 - 27:37 (06:03)
Summary
The first self-driving car drove on a freeway in 1987, with a classical architecture that used machine vision to detect other cars, pedestrians, road signs, and white lines. Lisp machine workstations were also required, but were deemed too expensive for companies to invest in.
The planning side, including the ability to make decisions under uncertainty, is a major challenge in developing AI for self-driving cars.
27:38 - 32:41 (05:03)
Summary
The planning side, including the ability to make decisions under uncertainty, is a major challenge in developing AI for self-driving cars. An effective decision-making architecture that can handle real-world situations is necessary.
The podcast discusses the inherent desire to create super intelligence and the problem of machines pursuing objectives that are not aligned with human objectives.
32:41 - 40:33 (07:52)
Summary
The podcast discusses the inherent desire to create super intelligence and the problem of machines pursuing objectives that are not aligned with human objectives. It also touches on the control problem and the philosophical implications of AI safety.
A fixation on objectives can lead to ignoring the needs of people, or even harming them in the long run, as seen in examples such as the Soviet Union and other government systems.
40:36 - 55:00 (14:24)
Summary
A fixation on objectives can lead to ignoring the needs of people, or even harming them in the long run, as seen in examples such as the Soviet Union and other government systems. The interaction between humans and machines can be complicated, and when the feedback from human choices is ignored, the true objective becomes clouded.
The introduction of regulatory bodies to oversee AI algorithms presents challenges, as these technologies are constantly evolving and highly complex.
55:00 - 1:02:23 (07:23)
Summary
The introduction of regulatory bodies to oversee AI algorithms presents challenges, as these technologies are constantly evolving and highly complex. However, methods to detect and address bias in current algorithms already exist, and they should be applied to mitigate potential negative effects on society.
The median estimate from AI researchers for the arrival of superhuman AI is 40 to 50 years from now.
1:02:23 - 1:17:09 (14:46)
Summary
The median estimate from AI researchers for the arrival of superhuman AI is 40 to 50 years from now. However, few think it won't happen within the next 75 years, and some in Asia believe it could happen even sooner.
The knowledge of how to run our civilization has always been stored in people's heads, but AI and computers have the ability to understand and manage it.
1:17:09 - 1:21:54 (04:45)
Summary
The knowledge of how to run our civilization has always been stored in people's heads, but AI and computers have the ability to understand and manage it. However, if we rely too much on technology, we may become guests instead of masters.
The speaker discusses the importance of carefully written laws that avoid loopholes, and mentions reading philosophy and moral formulas to consider theoretical outcomes.
1:21:54 - 1:26:16 (04:22)
Summary
The speaker discusses the importance of carefully written laws that avoid loopholes, and mentions reading philosophy and moral formulas to consider theoretical outcomes.