A new scam called prompt injection targets AI browsers while remaining invisible to humans.
For example, a malicious website might have text written in a white font on a white background. While the user doesn’t see the text, an AI browser reviews all the text on the page, and responds accordingly. The text may prompt the AI to take action that compromises the user’s data.
Open AI has recently admitted that prompt injection scams do not have a quick fix, but exploit vulnerabilities central to AI’s processes.
The risk increases when AI browsers are used in “agent mode,” which allows the browser to take actions such as clicking links and reading emails. According to cyber expert Kurt Knusson, “The more an AI can do on your behalf, the more damage it can cause when something goes wrong.”
This vulnerability applies to all AI browsers, not just those from Open AI. When allowed free rein to operate on their own, AI browsers have made purchases from fake sites and followed harmful hidden instructions.
The most direct way to remedy this is to not use AI browsers. But if you do, be sure to require confirmation for any actions. Don’t allow the AI to operate on its own and mistakenly allow malware onto your computer.
In addition, beware of AI summaries. When the AI scans documents or emails, it may also absorb embedded malicious information.