Chapter

The Danger in Fixating on Objectives
Episode
Stuart Russell: Long-Term Future of AI
Podcast
Lex Fridman Podcast
40:36 - 55:00 (14:24)

Fixating on objectives can lead to ignoring people's needs, or even harming them in the long run, as seen in examples such as the Soviet Union and other systems of government. The interaction between humans and machines is complicated, and when the feedback carried by human choices is ignored, the true objective becomes clouded.

Clips

40:36 - 43:18 (02:42)
Artificial Intelligence
Summary

This clip discusses the potential dangers of artificial intelligence blindly pursuing objectives without questioning their correctness, and how the cultural transmission of values can help avoid this problem.

43:18 - 49:32 (06:13)
systems thinking
Summary

In this clip, the speaker discusses the dangers of fixating on an objective in complex systems, citing historical examples such as Germany and communist Russia. He emphasizes that governments and corporations should serve people, but often fail to do so once they are taken over by individuals pursuing their own objectives.

49:32 - 52:34 (03:01)
AI
Summary

The debate about the consequences of AI is not new; it has been going on since the 19th century. If the objective we program into an AI system is specified incorrectly, the system can end up working against the intended outcome; utilitarianism illustrates the ongoing challenge of defining a precise formula for moral and political decision-making.

52:35 - 55:00 (02:24)
scalability problem
Summary

A scalability problem occurs when a process or system that works for a small number of inputs becomes unmanageable as the number of inputs grows. In the case of pharmaceuticals, this can have serious consequences, such as adulterated products that harm thousands of people.
