Episode #297 - Brian Christian - The Alignment Problem: AI's Scary Challenge
Podcast: Modern Wisdom
Description
Brian Christian is a programmer, researcher, and author. You have a computer system, you want it to do X, you give it a set of examples and you say "do that" - what could go wrong? Well, lots apparently, and the implications are pretty scary. Expect to learn why it's so hard to code an artificial intelligence to do what we actually want it to, how a robot cheated at the game of football, why human biases can be absorbed by AI systems, the most effective way to teach machines to learn, the danger if we don't get the alignment problem fixed, and much more...

Sponsors:
Get a 20% discount on the highest quality CBD products from Pure Sport at https://puresportcbd.com/modernwisdom (use code: MW20)
Get perfect teeth 70% cheaper than other invisible aligners from DW Aligners at http://dwaligners.co.uk/modernwisdom

Extra Stuff:
Buy The Alignment Problem - https://amzn.to/3ty6po7
Follow Brian on Twitter - https://twitter.com/brianchristian
Get my free Ultimate Life Hacks List to 10x your daily productivity → https://chriswillx.com/lifehacks/
To support me on Patreon (thank you): https://www.patreon.com/modernwisdom

Get in touch. Join the discussion with me and other like-minded listeners in the episode comments on the MW YouTube Channel or message me...
Instagram: https://www.instagram.com/chriswillx
Twitter: https://www.twitter.com/chriswillx
YouTube: https://www.youtube.com/ModernWisdomPodcast
Email: https://www.chriswillx.com/contact
Chapters
00:00 - 03:14 (03:14)
Summary
This episode explores the difficulty of coding artificial intelligence systems to behave the way we expect them to, and why that is such a challenging task. Pure Sport CBD's high-quality products are also highlighted, with customers reporting relief from various physical and mental health issues.
03:14 - 11:44 (08:29)
Summary
The computer science community is starting to recognize that AI alignment issues have real-world implications beyond thought experiments like the paperclip maximizer, particularly as AI systems grow powerful enough to shape the course of human civilization before we have the wisdom to know exactly what we want them to do.
11:44 - 19:29 (07:45)
Summary
The book Superintelligence is credited with catalysing the birth of AI safety research, a movement likened to first responders arriving at the scene of an emergency where AI technology has already spun out of control, particularly on social media networks.
19:29 - 27:49 (08:20)
Summary
Discussion of the potential risks of deploying automated systems built on limited data sets. An example is given of a simulated soccer team that was awarded points for taking possession of the ball; the incentive pushed the players towards a possession-chasing strategy that ultimately lost them the game.
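As a rough sketch of the kind of reward misspecification described here (the toy match, policies, and probabilities below are invented for these notes, not taken from the episode or the book), a proxy reward for possession can be maximized by a policy that never scores a goal:

```python
# Toy illustration of a proxy reward being "gamed": the agent earns +1
# shaping reward for each possession, while the objective we actually
# care about is goals scored. All numbers here are made up.

import random

random.seed(0)

def play_match(policy: str, steps: int = 100) -> tuple[int, float]:
    """Return (goals_scored, proxy_reward) for a crude simulated match."""
    goals, proxy = 0, 0.0
    for _ in range(steps):
        if policy == "possession-farming":
            # Tap the ball back and forth: a possession every step, never shoot.
            proxy += 1.0
        elif policy == "play-to-score":
            # Occasionally win possession, occasionally convert it into a goal.
            if random.random() < 0.3:
                proxy += 1.0
                if random.random() < 0.2:
                    goals += 1
    return goals, proxy

for policy in ("possession-farming", "play-to-score"):
    goals, proxy = play_match(policy)
    print(f"{policy:20s}  proxy reward = {proxy:5.1f}   goals = {goals}")

# The possession-farming policy dominates on the proxy reward while scoring
# zero goals: the proxy and the true objective have come apart.
```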
27:49 - 36:43 (08:54)
Summary
The tech industry has developed notions of fairness that take ethical and legal considerations into account, for example whether different groups of people are affected differently by a machine learning system. There are alternative definitions, however, that instead ask whether the model makes the same kinds of errors for different groups, and open questions remain about how civil rights legislation applies to this kind of statistical analysis.
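To make the two families of definitions concrete, here is a small hypothetical sketch (the records and group labels are invented for illustration) that computes a group's positive-prediction rate alongside its error rates, which are the quantities the competing fairness notions compare:

```python
# Contrast two group fairness checks: does each group receive positive
# predictions at the same rate, and does the model make the same kinds
# of errors for each group? Data below is invented.

from collections import defaultdict

# Toy records: (group, true_label, predicted_label)
records = [
    ("A", 1, 1), ("A", 1, 0), ("A", 0, 0), ("A", 0, 1), ("A", 1, 1),
    ("B", 1, 1), ("B", 0, 0), ("B", 0, 0), ("B", 1, 0), ("B", 0, 0),
]

stats = defaultdict(lambda: {"n": 0, "pos_pred": 0, "pos": 0, "neg": 0, "fp": 0, "fn": 0})
for group, y, y_hat in records:
    s = stats[group]
    s["n"] += 1
    s["pos_pred"] += y_hat
    s["pos"] += y
    s["neg"] += 1 - y
    s["fp"] += (1 - y) * y_hat   # predicted positive, actually negative
    s["fn"] += y * (1 - y_hat)   # predicted negative, actually positive

for group, s in stats.items():
    print(f"group {group}: "
          f"positive-prediction rate = {s['pos_pred'] / s['n']:.2f}, "
          f"false-positive rate = {s['fp'] / s['neg']:.2f}, "
          f"false-negative rate = {s['fn'] / s['pos']:.2f}")

# "Demographic parity" compares the first column across groups; error-rate
# parity compares the last two. A model can satisfy one while violating the other.
```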
36:43 - 49:22 (12:39)
Summary
Artificial intelligence is good at mimicking our explicit thought processes, because those can be described in detail, but its capabilities are limited on tasks that rely on sense perception or motor skills, where our competence does not come from explicit reasoning.
49:23 - 1:05:02 (15:39)
Summary
Inverse reinforcement learning (IRL), which infers objectives from demonstrated behaviour rather than having them manually designed, could be useful to tech companies and national governments. The shift towards technology being a tool that uses us, rather than the other way around, has become a pressing issue over the last few years.
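As a hedged illustration of the core idea, the toy below (a made-up one-dimensional world and a brute-force search, not any production IRL algorithm) infers which reward function an "expert" was most likely optimizing purely from its demonstrated moves:

```python
# Minimal sketch of the inverse reinforcement learning idea: infer the
# reward from behaviour. World, demo, and candidates are all invented.

# World: positions 0..4 on a line; actions are -1 (left) or +1 (right).
STATES = range(5)
ACTIONS = (-1, +1)

def step(state: int, action: int) -> int:
    return max(0, min(4, state + action))

def optimal_action(state: int, reward: dict[int, float], horizon: int = 10) -> int:
    """Greedy lookahead: pick the action whose rollout collects the most reward."""
    def rollout(s: int, a: int, depth: int) -> float:
        s2 = step(s, a)
        if depth == 0:
            return reward[s2]
        return reward[s2] + max(rollout(s2, a2, depth - 1) for a2 in ACTIONS)
    return max(ACTIONS, key=lambda a: rollout(state, a, horizon))

# Demonstration: the "expert" always walks right, towards position 4.
expert_demo = [(0, +1), (1, +1), (2, +1), (3, +1)]

# Candidate reward functions: all reward sits at one of the five positions.
candidates = [{s: (1.0 if s == goal else 0.0) for s in STATES} for goal in STATES]

# Score each candidate by how many expert actions it explains as optimal.
for goal, reward in enumerate(candidates):
    agreement = sum(optimal_action(s, reward) == a for s, a in expert_demo)
    print(f"reward at position {goal}: explains {agreement}/{len(expert_demo)} expert actions")

# Only the reward placed at position 4 explains every demonstrated action,
# so it is the best hypothesis for what the expert was optimizing.
```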
1:05:03 - 1:12:07 (07:04)
Summary
Brian highlights the potential impact of algorithmic optimization on our online experience, from the emotional language we use in our interactions to the recommendations we receive on platforms like Spotify and YouTube.
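As a minimal sketch of that optimization pressure (the catalogue, scores, and weighting below are entirely invented, not drawn from any real platform), a recommender that ranks purely by predicted engagement ends up surfacing the most emotionally charged items first:

```python
# Toy ranking by predicted engagement. Assumes, purely for illustration,
# that clicks track emotional intensity far more than informativeness.

# Invented catalogue: (title, emotional_intensity 0-1, informativeness 0-1)
posts = [
    ("calm explainer",       0.1, 0.9),
    ("nuanced debate recap", 0.3, 0.8),
    ("outrage headline",     0.9, 0.2),
    ("feel-good clickbait",  0.8, 0.1),
]

def predicted_engagement(emotional: float, informative: float) -> float:
    return 0.8 * emotional + 0.2 * informative

# Rank purely by predicted engagement, as a naive recommender might.
ranked = sorted(posts, key=lambda p: predicted_engagement(p[1], p[2]), reverse=True)
for title, emotional, informative in ranked:
    print(f"{predicted_engagement(emotional, informative):.2f}  {title}")

# The emotionally charged items float to the top even though they carry the
# least information: the objective, not any editor, decides what we see.
```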
1:12:07 - 1:21:55 (09:47)
Summary
Nick Bostrom's essay 'Astronomical Waste' argues that delaying the colonization of the stars costs trillions of potential lives. Philosopher Toby Ord and computer scientist Arvind Narayanan argue for a 'long reflection', a period in which humanity can tackle existential threats before colonizing the stars.
1:21:55 - 1:22:40 (00:45)
Summary
Get a 20 percent discount on all products at Pure Sport CBD with the code MW20, and get invisible aligners up to 70 percent cheaper from DW Aligners.