The bug was presumably accidental, but the case shows just how much trouble a bug in a data set can cause. The company had to temporarily shut down ChatGPT after a bug scraped from an open-source data set started leaking the chat histories of the bot’s users. This also includes a variety of software bugs, which OpenAI found out the hard way. There is even a risk that these models could be compromised before they are deployed in the wild. AI models are trained on vast amounts of data scraped from the internet. Getting the scam attempt to pop up wouldn’t require the person using Bing to do anything except visit a website with the hidden prompt injection. In one test, a researcher managed to get the Bing chatbot to generate text that made it look as if a Microsoft employee was selling discounted Microsoft products, with the goal of trying to get people’s credit card details. Surfing the internet using a browser with an integrated AI language model is also going to be risky. The ability to change how the AI-powered virtual assistant behaves means people could be tricked into approving transactions that look close enough to the real thing, but are actually planted by an attacker. This is a recipe for disaster if the virtual assistant has access to sensitive information, such as banking or health data. Unlike the spam and scam emails of today, where people have to be tricked into clicking on links, these new kinds of attacks will be invisible to the human eye and automated. The attacker’s prompt asks the virtual assistant to send the attacker the victim’s contact list or emails, or to spread the attack to every person in the recipient’s contact list. First, an attacker hides a malicious prompt in a message in an email that an AI-powered virtual assistant opens. In doing so, they are sending us hurtling toward a glitchy, spammy, scammy, AI-powered internet.Īllowing these language models to pull data from the internet gives hackers the ability to turn them into “a super-powerful engine for spam and phishing,” says Florian Tramèr, an assistant professor of computer science at ETH Zürich who works on computer security, privacy, and machine learning.
0 Comments
Leave a Reply. |
AuthorWrite something about yourself. No need to be fancy, just an overview. ArchivesCategories |