Chapter

Fooling AI Systems with Adversarial Attacks
Episode
#95 – Dawn Song: Adversarial Machine Learning and Computer Security
Podcast
Lex Fridman Podcast
21:07 - 30:37 (09:30)

Researchers explore the concept of adversarial attacks, which aim to deceive artificial intelligence systems with maliciously crafted inputs or training data rather than by altering the systems' code. They demonstrate how small, optical-illusion-like perturbations can trick image recognition software, and how a perturbed stop sign can be misidentified as a speed limit sign by an autonomous driving system.
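The episode does not walk through an attack in code, but a minimal sketch makes "deceiving a model with a perturbed input" concrete. Below is the classic Fast Gradient Sign Method (FGSM), an inference-time attack; the model, the epsilon value, and the assumption that inputs are images scaled to [0, 1] are illustrative, not details from the episode.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, label, epsilon=0.03):
    """Craft an adversarial input with the Fast Gradient Sign Method.

    x       -- batch of input images with values in [0, 1]
    label   -- true labels for x
    epsilon -- maximum per-pixel perturbation (illustrative value)
    """
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), label)
    loss.backward()
    # Nudge every pixel in the direction that increases the loss.
    x_adv = x + epsilon * x.grad.sign()
    # Keep the result a valid image.
    return x_adv.clamp(0.0, 1.0).detach()
```

A perturbation this small is typically invisible to a person, which is what gives these attacks their optical-illusion quality.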

Clips
Tricking AI learning systems by inserting small numbers of poisoned data points has become easier, with researchers demonstrating how a person wearing certain glasses can be misrecognised as Putin.
21:07 - 26:14 (05:06)
AI learning, poisoned data points
Summary

Tricking AI learning systems by inserting small numbers of poisoned data points has become easier, with researchers demonstrating how a person wearing certain glasses can be misrecognised as Putin. The researchers also found that other poisoned data points could cause the system to label someone as Trump, subtly undermining the model's ability to distinguish between people.
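A backdoor-style poisoning attack of the kind described here fits in a few lines: stamp a small trigger pattern (standing in for the glasses) onto a handful of training images and relabel them as the target identity. The array layout, patch size, and label values below are illustrative assumptions, not the researchers' actual setup.

```python
import numpy as np

def poison_dataset(images, labels, target_label, num_poison=50, seed=0):
    """Backdoor-style data poisoning sketch.

    images -- float array of shape (N, H, W, C) with values in [0, 1]
    labels -- integer class labels; target_label is the identity the
              attacker wants the trigger to map to (e.g. "Putin")
    """
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    picked = rng.choice(len(images), size=num_poison, replace=False)
    for i in picked:
        images[i, -8:, -8:, :] = 1.0  # white corner patch as the trigger
        labels[i] = target_label      # mislabel as the target identity
    return images, labels
```

A model trained on the poisoned set behaves normally on clean images but maps anyone presenting the trigger to the target identity, which is why a small number of poisoned points can be enough.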

The conversation covers attacks on deep learning models in both digital and physical settings, during both the training and inference stages.
26:16 - 27:50 (01:34)
Deep Learning Models
Summary

The conversation covers attacks on deep learning models in both digital and physical settings, during both the training and inference stages. It also briefly touches on how these attacks apply to autonomous driving.

The study explores whether adversarial examples can exist in the physical world, specifically in the context of autonomous driving, and whether they remain effective under varying viewing conditions such as distance and angle.
27:50 - 30:37 (02:47)
Adversarial Examples
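Whether a perturbation survives changes in distance and angle is exactly what the Expectation over Transformation idea addresses: optimise the perturbation against randomly sampled views rather than one fixed view. The sketch below illustrates that loop; the rotation and scale ranges, optimiser settings, and targeted-attack setup are illustrative assumptions, not details from the study discussed.

```python
import torch
import torch.nn.functional as F
import torchvision.transforms.functional as TF

def eot_perturbation(model, x, target, steps=200, lr=0.01, eps=0.1):
    """Expectation over Transformation (EOT) sketch.

    x      -- batch of images with values in [0, 1]
    target -- class indices the attacker wants predicted, e.g. the
              "speed limit" class for a perturbed stop sign
    """
    delta = torch.zeros_like(x, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        # Sample a random viewing condition: rotation and distance (scale).
        angle = float(torch.empty(1).uniform_(-30.0, 30.0))
        scale = float(torch.empty(1).uniform_(0.5, 1.5))
        view = TF.affine(x + delta, angle=angle, translate=[0, 0],
                         scale=scale, shear=0.0)
        # Drive the transformed view toward the attacker's target class.
        loss = F.cross_entropy(model(view), target)
        opt.zero_grad()
        loss.backward()
        opt.step()
        # Keep the perturbation small and the image valid.
        with torch.no_grad():
            delta.clamp_(-eps, eps)
    return (x + delta).clamp(0.0, 1.0).detach()
```

Averaging the attack objective over many sampled views is what lets the final perturbation keep working as a camera approaches the sign from different distances and angles.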
