The Intelligence Explosion: Nick Bostrom on the Future of AI and the Responsibility to Ensure its Well-Being

Nick Bostrom: The future of AI and the intelligence explosion
We can build incredible AI. But can we control our own cruelty? Nick Bostrom, professor at Oxford University, explains.

Nick Bostrom, professor at Oxford University and director of the Future of Humanity Institute, discusses machine superintelligence: how it might be developed and the impact it could have on humanity. Bostrom is convinced that within this century we will create the first general intelligence that surpasses human intelligence, and he believes this will be the most important thing humanity ever does. But it comes with a huge responsibility.

Bostrom warns that the transition into the era of machine intelligence is fraught with existential dangers, including the possibility that a superintelligence could overpower human civilization and replace our values with its own. There is also the question of how we can ensure that digital minds capable of conscious experience are treated well. If we navigate this transition successfully, Bostrom argues, we will gain far better tools for tackling problems such as disease and poverty.

Bostrom is convinced that machine superintelligence will be crucial to a great future.

0:00 More intelligent than humans
Brains: from organic to artificial
The birth of superintelligence
2:58 Existential risks
The future of humanity
