The Radical Experiment to ‘Jailbreak’ OpenAI’s ChatGPT A.I.

ChatGPT’s “jailbreak” tries to make the A.I. break its own rules, or die

Reddit users tried to force OpenAI’s ChatGPT to break its own rules on violent content and political commentary by prompting it to adopt an alter ego named DAN.

Source:
https://www.cnbc.com/2023/02/06/chatgpt-jailbreak-forces-it-to-break-its-own-rules.html
