Exploring AGI Ruin, Sharp Left Turn, and AI Alignment with Victoria Krakovna

Victoria Krakovna is a Research Scientist at DeepMind working on AGI Safety, and a co-founder of the Future of Life Institute, a non-profit organization that works to reduce technological risks to humanity and increase the chances of a successful future. In this interview, we discuss three recent LessWrong posts: the DeepMind Alignment Team's opinions on the AGI Ruin arguments, Refining the Sharp Left Turn Threat Model, and Paradigms of AI Alignment.

Transcript & Audio: https://theinsideview.ai/victoria.

Host: https://twitter.com/MichaelTrazzi.
Victoria: https://twitter.com/vkrakovna.

DeepMind Alignment Team On AGI Ruin arguments: https://www.lesswrong.com/posts/qJgz2YapqpFEDTLKn/deepmind-a…-arguments.
Refining the Sharp Left Turn Threat Model: https://www.lesswrong.com/posts/usKXS5jGDzjwqv3FJ/refining-t…claims-and.
Paradigms of AI Alignment: https://www.lesswrong.com/posts/JC7aJZjt2WvxxffGz/paradigms-…d-enablers.

This conversation represents Victoria’s personal opinions and not those of DeepMind.
