Three ways AI chatbots can be a security disaster
Greshake hidden a prompt in a website he created. The Bing chatbot was integrated into Microsoft Edge, which he used to visit the website. The prompt injection caused the chatbot to generate text that made it appear as though a Microsoft employee were selling discounted Microsoft Products. This pitch was used to try and get credit card details from the user. The only thing that was required to make the scam pop up on Bing was for the user to visit the website with the hidden prompt.
Hackers had to trick computer users into running harmful code in the past to obtain information. Greshake says that large language models eliminate the need for this.