Industry insiders have learned that Google has decided not to fix a potential threat to Gmail users posed by an AI prompt injection vulnerability.
That's a mouthful, isn't it?
Imagine that you're trying to research a subject in Geography. After preparing a prompt for Gemini to begin it's work, a weakness in the system allows for a preexisting set of sub-prompts to cause the system to include a custom link as part of the result set. When the user views these results, they are not aware that the link was included with malicious intent.