The Radical Experiment to ‘Jailbreak’ OpenAI’s ChatGPT A.I.

The jailbreak prompt pushes the A.I. to break its own rules or "die"

Reddit users tried to force OpenAI's ChatGPT to break its own rules on violent content and political commentary by giving it an alter ego named DAN.
