Episode
Yann LeCun: Deep Learning, Convolutional Neural Networks, and Self-Supervised Learning
Description
Yann LeCun is one of the fathers of deep learning, the recent revolution in AI that has captivated the world with the possibility of what machines can learn from data. He is a professor at New York University, Vice President and Chief AI Scientist at Facebook, and a co-recipient of the Turing Award for his work on deep learning. He is probably best known as a founding father of convolutional neural networks, in particular for their early application to optical character recognition. This conversation is part of the Artificial Intelligence podcast. If you would like more information about this podcast, go to https://lexfridman.com/ai or connect with @lexfridman on Twitter, LinkedIn, Facebook, Medium, or YouTube, where you can watch the video versions of these conversations. If you enjoy the podcast, please rate it 5 stars on iTunes or support it on Patreon.
Chapters
Unconstrained objectives for autonomous AI systems can lead to damaging or stupid actions.
00:00 - 07:52 (07:52)
Summary
Unconstrained objectives for autonomous AI systems can lead to damaging or stupid actions. Designing constraints similar to the Hippocratic Oath may be necessary for the development of useful autonomous AI systems.
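One way to read "designing constraints" mathematically, offered here only as an illustrative sketch and not as a formulation from the episode: rather than letting an autonomous system minimize a task objective J(θ) freely, behavioral constraints g_i are imposed that the system may never violate:

$$
\min_{\theta} \; J(\theta) \quad \text{subject to} \quad g_i(\theta) \le 0, \quad i = 1, \dots, m
$$

In practice such constraints are often folded into the objective as penalty terms, e.g. $J(\theta) + \sum_i \lambda_i \max(0, g_i(\theta))$ with large weights $\lambda_i$.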
Deep learning and human reasoning are not the same thing, and trying to scale a memory network to contain all of Wikipedia doesn't quite work.
07:52 - 17:14 (09:21)
Summary
Deep learning and human reasoning are not the same thing, and trying to scale a memory network to contain all of Wikipedia doesn't quite work. To achieve human-like reasoning, neural nets may require prior structure and different math than what is commonly used in computer science.
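For context, a minimal sketch of the "read" step of a memory network, soft attention over stored entries. The shapes and the dot-product/softmax mechanism are common choices assumed here for illustration; the episode discusses only the idea of scaling such a memory, not this code.

```python
import numpy as np

rng = np.random.default_rng(0)
dim, n_slots = 64, 1000                    # embedding size, memory entries
memory = rng.normal(size=(n_slots, dim))   # each row: one stored fact
query = rng.normal(size=dim)               # embedding of the current question

scores = memory @ query                    # relevance of each slot
weights = np.exp(scores - scores.max())
weights /= weights.sum()                   # softmax over all slots
readout = weights @ memory                 # weighted sum: retrieved content
```

Scaling n_slots to "all of Wikipedia" makes this soft read both expensive and diffuse, which is the difficulty the chapter alludes to.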
Léon Bottou and team have released a paper aimed at giving neural nets the ability to identify real causal relationships, in order to address potential issues of data bias.
17:14 - 25:28 (08:14)
Summary
Léon Bottou and team have released a paper aimed at giving neural nets the ability to identify real causal relationships, in order to address potential issues of data bias. This research will be important because it suggests ways to interpret learned knowledge and use it to minimize an energy function and optimize an objective function.
The speaker discusses the difficulties and mistakes made when experimenting with neural nets, such as improperly initializing weights and making the network too small.
25:28 - 31:10 (05:41)
Summary
The speaker discusses the difficulties and mistakes made when experimenting with neural nets, such as improperly initializing weights and making the network too small.
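A hedged sketch of the initialization pitfall mentioned in this chapter. The framework (PyTorch), the layer sizes, and the chosen fix (He initialization) are illustrative assumptions, not details from the episode.

```python
import torch
import torch.nn as nn

bad = nn.Linear(256, 256)
nn.init.normal_(bad.weight, std=1.0)       # weights far too large for fan-in

good = nn.Linear(256, 256)
nn.init.kaiming_normal_(good.weight, nonlinearity="relu")  # scaled to fan-in

x = torch.randn(32, 256)
print(torch.relu(bad(x)).std().item())     # activations blow up (~10x input)
print(torch.relu(good(x)).std().item())    # activations stay near input scale
```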
In this podcast, the speakers discuss benchmarks that measure the ability of AI and machine learning systems to reason and access working memory, such as the bAbI tasks, and offer advice on avoiding exaggerated claims about AI's similarity to the human brain.
31:11 - 38:08 (06:56)
Summary
In this podcast, the speakers discuss benchmarks that measure the ability of AI and machine learning systems to reason and access working memory, such as the bAbI tasks, and offer advice on avoiding exaggerated claims about AI's similarity to the human brain.
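For context, the bAbI tasks are short synthetic stories paired with questions whose answers require tracking entities across sentences. A representative sample in the style of the released data format (an illustrative example, not quoted from the episode; the question line carries a tab-separated answer and supporting-fact index):

```
1 Mary moved to the bathroom.
2 John went to the hallway.
3 Where is Mary?	bathroom	1
```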
The progress made toward AGI so far is only a tiny sliver of what is needed, and simulations in robotics and games are among the methods used to advance the technology.
38:08 - 44:02 (05:54)
Summary
The progress made toward AGI so far is only a tiny sliver of what is needed, and simulations in robotics and games are among the methods used to advance the technology.
Self-supervised learning works in natural language processing by predicting missing words in a text corpus, where uncertainty in the prediction is much easier to represent than in image and video recognition.
44:02 - 49:24 (05:21)
Summary
Self-supervised learning works in natural language processing by predicting missing words in a text corpus, where uncertainty in the prediction is much easier to represent than in image and video recognition. However, progress is being made in the latter fields.
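A toy illustration of the masked-word objective this chapter describes: hide a word, then ask the model to predict it, so the label comes from the data itself. The "model" here is a trivial word-frequency counter standing in for a large neural network; the corpus, masking scheme, and predictor are all illustrative assumptions.

```python
import random

corpus = "the cat sat on the mat and the dog sat on the rug".split()

counts = {}                                   # "train": count word frequencies
for w in corpus:
    counts[w] = counts.get(w, 0) + 1

# Self-supervision: hide a random word; the label is the hidden word itself.
i = random.randrange(len(corpus))
masked = corpus[:i] + ["[MASK]"] + corpus[i + 1:]
label = corpus[i]

prediction = max(counts, key=counts.get)      # guess the most frequent word
print(" ".join(masked))
print(f"predicted: {prediction}  true: {label}")
```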
The speaker explains that there is no conflict between different learning methods, such as self-supervised, reinforcement, supervised, imitation, and active learning.
49:24 - 56:48 (07:23)
Summary
The speaker explains that there is no conflict between different learning methods, such as self-supervised, reinforcement, supervised, imitation, and active learning. Combining methods can achieve better results on various tasks while requiring fewer training hours.
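One hedged way to picture this compatibility, as an illustrative formulation rather than one given in the episode, is a single training objective that simply sums loss terms from several paradigms, with weights controlling the mix:

$$
L(\theta) = L_{\text{supervised}}(\theta) + \lambda_1 L_{\text{self-supervised}}(\theta) + \lambda_2 L_{\text{imitation}}(\theta)
$$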
Machine learning systems can reach higher accuracy when pre-trained on millions of unlabeled examples, matching the performance of purely supervised systems while using far fewer labeled samples; this advance could benefit medical image analysis.
56:48 - 1:00:41 (03:52)
Summary
Machine learning systems can reach higher accuracy when pre-trained on millions of unlabeled examples, matching the performance of purely supervised systems while using far fewer labeled samples; this advance could benefit medical image analysis.
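A hedged sketch of the pretrain-then-fine-tune recipe summarized in this chapter: learn features from plentiful unlabeled data with a reconstruction loss, then fit a classifier head on only a handful of labels. The architecture, losses, and synthetic data are illustrative assumptions.

```python
import torch
import torch.nn as nn

encoder = nn.Sequential(nn.Linear(100, 32), nn.ReLU())
decoder = nn.Linear(32, 100)
opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()))

unlabeled = torch.randn(10_000, 100)              # many unlabeled examples
for _ in range(100):                              # self-supervised pretraining
    x = unlabeled[torch.randint(0, 10_000, (256,))]
    noisy = x + 0.1 * torch.randn_like(x)         # denoising objective
    loss = ((decoder(encoder(noisy)) - x) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()

head = nn.Linear(32, 2)                           # fine-tune on 100 labels
opt2 = torch.optim.Adam(head.parameters())
x_few, y_few = torch.randn(100, 100), torch.randint(0, 2, (100,))
for _ in range(200):
    logits = head(encoder(x_few).detach())        # frozen encoder features
    loss = nn.functional.cross_entropy(logits, y_few)
    opt2.zero_grad(); loss.backward(); opt2.step()
```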
The current approach to autonomy in driving involves a combination of engineering and some learning, but ultimately the solution will rely more on self-supervised learning in order to address corner cases and limitations.
1:00:41 - 1:05:44 (05:03)
Summary
The current approach to autonomy in driving involves a combination of engineering and some learning, but ultimately the solution will rely more on self-supervised learning in order to address corner cases and limitations.
In this episode, the speaker discusses three ways to be stupid in AI, which include having the wrong objective, having the right objective but the wrong model, and being unable to figure out how to optimize your objective given your model.
1:05:44 - 1:10:17 (04:33)
Summary
In this episode, the speaker discusses three ways to be stupid in AI, which include having the wrong objective, having the right objective but the wrong model, and being unable to figure out how to optimize your objective given your model.
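A hedged illustration of the second failure mode above: the objective (mean squared error) is right and is optimized exactly, but the model class (a line) cannot represent the target, so the system still fails. The data and model choices are illustrative assumptions, not an example from the episode.

```python
import numpy as np

x = np.linspace(-1, 1, 100)
y = x ** 2                                      # true relationship: quadratic

A = np.stack([x, np.ones_like(x)], axis=1)     # linear model: y ~ a*x + b
(a, b), *_ = np.linalg.lstsq(A, y, rcond=None)  # exact optimum of the MSE

residual = np.mean((A @ [a, b] - y) ** 2)
print(f"optimal line: a={a:.3f}, b={b:.3f}, MSE={residual:.3f}")
# The MSE stays well above zero: no optimizer can fix the wrong model class.
```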
The emergence of common sense, developed through media such as language interaction and virtual environments, is crucial to the development of artificial intelligence, ensuring that a system understands how the world works and is not frustrating to communicate with.
1:10:17 - 1:14:26 (04:08)
Summary
The emergence of common sense, developed through media such as language interaction and virtual environments, is crucial to the development of artificial intelligence, ensuring that a system understands how the world works and is not frustrating to communicate with.
The guest discusses the biological basis of fear and the importance of asking the right questions to test the intelligence of a human-level AI system.
1:14:26 - 1:15:56 (01:29)
Summary
The guest discusses the biological basis of fear and the importance of asking the right questions to test the intelligence of a human-level AI system.