Implementing an equivalent of the cortical column in AI has the potential to revolutionize the way we learn and gain knowledge, with an impact as large as or larger than that of computing in the last century. The challenge lies in engineering increasingly intelligent AI systems that fit seamlessly into our human world.
Elemental Cognition aims to enable richer communication between humans and machines, involving genuine intellectual engagement and continuous dialogue. One of the challenges in achieving human-like artificial intelligence is the sheer range of physical signals our bodies contribute as input.
The next big step in AI could come from exploring the interface between cognitive science, neuroscience, AI, computer science, and philosophy of mind, since deep convolutional networks are limited in their capacity for understanding. Yet many machine learning researchers tend to ignore how the human brain works.
The possibility of creating general artificial intelligence raises questions about what it means to be human, and whether the imperfections in our own minds are actually strengths.
In this episode, the guest discusses whether the differences between human neural networks and artificial neural networks mean we cannot build a lifelike robot indistinguishable from a human.
The host and guest discuss the evolution of AI methods, particularly the success of deep learning on a specific class of problems, and speculate on where it will fit as one chapter in the larger story of AI over the coming decades.