The speaker highlights the need for interdisciplinary study in AI development, including reading literature from fields such as economics and social choice theory, in order to understand how humans and AI systems can interact in ways that improve the world.
The speaker discusses the importance of defining core values and sticking to them when making decisions about who to hire and which industries to engage with.
Jordan Peterson talks about the importance of examining ideas without being completely owned by them or sucked into groupthink. He emphasizes that humans have a responsibility to choose how constrained they are by societal phenomena, and that exercising this choice can produce even more interesting group dynamics.
The decade of social robots is here, whether or not the humanoid form proves necessary. Solving the problem of social connection between AI systems and humans does not require first solving robot manipulation and bipedal mobility.
To address the big problems of sustainability on Earth and the emergence of AI, individuals should start by fixing themselves and creating more alignment within themselves; only then can they take meaningful action on these problems.
The challenge of human-robot collaboration lies in understanding human behavior and needs, a problem that only grows more pressing as robot capabilities improve. By letting the robot optimize for what the human actually wants, rather than for what a programmer dictated in advance, the robot can both influence and better understand human behavior.
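The idea of a robot inferring what the human wants, instead of following a fixed programmed objective, can be sketched as a simple Bayesian update over candidate objectives. This is a toy illustration, not the method of any particular speaker or lab; the objective names, utilities, and the noisily-rational choice model are all assumptions made for the example.

```python
# Toy sketch (assumptions throughout): a robot maintains a belief over which
# objective the human holds, and updates it from the human's observed choices
# instead of optimizing a programmer-specified goal.
import math

# Two hypothetical objectives the human might hold, each assigning a utility
# to each available action (names and numbers are invented for illustration).
objectives = {
    "wants_speed":   {"fast_route": 1.0, "scenic_route": 0.2},
    "wants_scenery": {"fast_route": 0.2, "scenic_route": 1.0},
}

# Uniform prior: the robot starts with no idea which objective is correct.
belief = {name: 0.5 for name in objectives}

def observe_choice(belief, chosen, options, rationality=5.0):
    """Bayesian update under a noisily-rational human model: the human picks
    action a with probability proportional to exp(rationality * utility(a))."""
    new_belief = {}
    for name, utility in objectives.items():
        denom = sum(math.exp(rationality * utility[a]) for a in options)
        likelihood = math.exp(rationality * utility[chosen]) / denom
        new_belief[name] = belief[name] * likelihood
    total = sum(new_belief.values())
    return {name: p / total for name, p in new_belief.items()}

# The human repeatedly chooses the scenic route; the robot's belief shifts
# toward "wants_scenery" without anyone reprogramming its objective.
for _ in range(3):
    belief = observe_choice(belief, "scenic_route",
                            ["fast_route", "scenic_route"])

best = max(belief, key=belief.get)
```

After a few consistent observations the robot's posterior concentrates on the scenery-preferring objective, and it can plan accordingly; richer versions of this idea replace the two hand-written objectives with a continuous space of reward functions.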