There's nothing specific to Gemini and Antigravity here; this is an issue for all agent coding tools with CLI access. Personally, I'm hesitant to give mine (I use Cline) access to a web search MCP, and I tend to give it only relatively trustworthy URLs.
ArcHound|3 months ago
They forgot about a service that enables arbitrary redirects, so the attackers used it.
And the LLM itself used the system shell to proactively bypass the file protection.
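The open-redirect bypass described above can be sketched in a few lines: a naive allowlist that only inspects the hostname of the URL as written will happily pass a redirect endpoint on a trusted domain, even though following it lands on the attacker's server. (The domains and the exact check are hypothetical, for illustration only.)

```python
from urllib.parse import urlparse

# Hypothetical allowlist of "trusted" hosts.
ALLOWED_HOSTS = {"www.google.com", "docs.example.com"}

def is_allowed(url: str) -> bool:
    # Naive check: inspects only the hostname of the URL as written,
    # not where the request actually ends up after redirects.
    return urlparse(url).hostname in ALLOWED_HOSTS

# A direct attacker URL is blocked...
print(is_allowed("https://evil.example.net/payload"))  # False

# ...but an open-redirect endpoint on an allowed host sails through,
# even though it forwards to the attacker's server.
print(is_allowed("https://www.google.com/url?q=https://evil.example.net/payload"))  # True
```

Closing the hole means either following redirects and re-checking the final destination, or blocking known redirector endpoints outright.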
dabockster|3 months ago
Web search MCPs are generally fine. Whatever facilitates tool use (the program controlling both the AI model and the MCP tool) is the real attack vector.
buu700|3 months ago
Vendors should really be encouraging this and providing tooling to facilitate it. There should be flashing red warnings in any agentic IDE/CLI whenever the user wants to use YOLO mode without a remote agent runner configured, and ideally they should even automate the process of installing and setting up the agent runner VM to connect to.
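The gate described above might look something like this: refuse auto-approved ("YOLO") execution unless an isolated remote runner is configured, and fall back to a warning otherwise. The config shape and function name are hypothetical, just a sketch of the policy, not any vendor's actual API.

```python
def yolo_mode_allowed(config: dict) -> bool:
    # Hypothetical policy gate: only permit auto-approved command execution
    # when commands are routed to an isolated remote runner, never the host.
    runner = config.get("remote_runner")
    return bool(runner and runner.get("isolated", False))

config_without_runner = {}
config_with_runner = {"remote_runner": {"host": "agent-vm.local", "isolated": True}}

print(yolo_mode_allowed(config_without_runner))  # False -> show the flashing red warning
print(yolo_mode_allowed(config_with_runner))     # True
```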
simonw|3 months ago
Does it do that using its own web fetch tool or is it smart enough to spot if it's about to run `curl` or `wget` or `python -c "import urllib.request; print(urllib.request.urlopen('https://www.example.com/').read())"`?
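To illustrate why the `python -c` case is hard to catch: a guard that denylists known download tools by command name (an assumed implementation for illustration, not Gemini's actual logic) flags `curl` but is blind to any interpreter that can open a socket.

```python
import shlex

# Naive denylist of known fetch tools.
NETWORK_COMMANDS = {"curl", "wget"}

def looks_like_network_fetch(command: str) -> bool:
    # Naive guard: flag a shell command only if its first token is a
    # known download tool. Any general-purpose interpreter evades it.
    tokens = shlex.split(command)
    return bool(tokens) and tokens[0] in NETWORK_COMMANDS

print(looks_like_network_fetch("curl https://www.example.com/"))  # True
print(looks_like_network_fetch(
    "python -c \"import urllib.request; print(urllib.request.urlopen('https://www.example.com/').read())\""
))  # False -- the same fetch, invisible to the denylist
```

This is why sandboxing at the network layer tends to be more robust than inspecting commands.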
gizzlon|3 months ago
Prompt injection is just text, right? So if you can get a site to serve some text you input, you win. There have got to be millions of places where someone could do this, including under *.google.com. This seems like a game of whack-a-mole they're doomed to lose.
informal007|3 months ago
I hope Google can do something to prevent prompt injection for the AI community.
simonw|3 months ago
danudey|3 months ago