top | item 45031022

(no title)

medhir | 6 months ago

Personally, the only way I’m going to give an LLM access to a browser is if I’m running inference locally.

I’m sure there’s exploits that could be embedded into a model that make running locally risky as well, but giving remote access to Anthropic, OpenAI, etc just seems foolish.

Anyone having success with local LLMs and browser use?

discuss

order

onesociety2022|6 months ago

The primary risk with these browser agents is prompt injection attacks. Running it locally doesn't help you in that regard.

medhir|6 months ago

True, I wasn’t thinking very deeply when I wrote this comment… local models indeed are prone to the same exploits.

Regardless, giving a remote API access to a browser seems insane. Having had a chance to reflect, I’d be very wary of providing any LLM access to take actions with my personal computer. Sandbox the hell out of these things.

innagadadavida|6 months ago

If each LLM sessions is linked to the domain and restricted just like how we restrict cross domain communication, this problem can be solved? We can have a completely isolated LLM context per each domain.

alienbaby|6 months ago

I'm not sure how running inference locally will make any difference whatsoever? or do you also mean hosting the MCP tools it has access to?

rossant|6 months ago

I imagine local LLMs are almost as dangerous as remote ones as they're prone to the same type of attacks.