Chapter

Protecting Sensitive Data in Machine Learning Models
1:02:39 - 1:10:23 (07:44)

The risk of attackers exploiting machine learning models to extract sensitive information from the original training data, even without knowing the model's details, is becoming increasingly important, and researchers are now working to define a person's data privacy beyond basic demographics. A possible defense against this risk is presented in this chapter.

Clips
Researchers discuss the possibility of attackers extracting sensitive information from machine learning models without knowing the model's parameters, and share their findings on a defense method with positive results.
1:02:39 - 1:10:23 (07:44)
Machine Learning

Chapter
Protecting Sensitive Data in Machine Learning Models
Episode
#95 – Dawn Song: Adversarial Machine Learning and Computer Security
Podcast
Lex Fridman Podcast