BeefySwain | 1 day ago
Do websites want to prevent automated tooling, as indicated by everyone putting everything behind Cloudflare and CAPTCHAs since forever, or do websites want you to be able to automate things? Because I don't see how you can have both.
If I'm using Selenium it's a problem, but if I'm using Claude it's fine??
avaer|1 day ago
They own the user layer and models, and get to decide if your product will be used.
Think search monopoly, except your site doesn't even exist as far as users are concerned, it's only used via an agent, and only if Google allows.
The work of implementing this is on you. Google is building the hooks into the browser for you to do it; that's WebMCP.
It's all opaque; any oopsies/dark patterns will be blamed on the AI. The profits (and future ad revenue charged for sites to show up on the LLM's radar) will be claimed by Google.
The other AI companies are on board with this plan. Any questions?
moregrist|1 day ago
It’s the Google way.
[0] https://en.wikipedia.org/wiki/Accelerated_Mobile_Pages
solaire_oa|1 day ago
Just one example: Prompting the browser to "register example.com" means that Google/Anthropic gets to hustle registrars for SEO-style priority. Using countermeasures like captcha locks you out of the LLM market.
Google's incentive to allow you to shop around via traditional web search is decreased since traditional ads won't be as lucrative (businesses will catch on that blanket targeted ads aren't as effective as a "referral" that directs an LLM to sign-up/purchase/exchange something directly)... expect web search quality to decline, perhaps intentionally.
The only way to combat this, as far as I can conceptualize, is with open models, which are not yet as good as private ones, in no small part due to the extraordinary investment subsidization. We can hope for the bubble to pop, but plan for a deader Internet.
Meanwhile, trust online, at large, begins to evaporate as nobody can tell what is an LLM vs a human-conducted browser. The Internet at large is entering some very dark waters.
socalgal2|1 day ago
https://www.perplexity.ai/comet
https://chatgpt.com/atlas/
https://arc.net/max
That is not in any way to suggest it's ok for companies to do bad things. I don't see anything bad here. I just see the inevitable. People are going to want to ask some AI for whatever they used to get from the internet. Many are already doing this. Whoever enables that for users best will get the users.
goku12|21 hours ago
There was a concept named Web 3.0 a while ago, aka the 'Semantic Web'. It wasn't the crypto/blockchain scam that we call Web3 today. The idea was to create a web of machine readable data based on shared ontologies. That would have effectively turned the web into a giant database of sorts, that the 'agents' could browse autonomously and derive conclusions from. This is sort of like how we browse the web to do research on any topic.
Since the data was already in a structured form in Web 3.0 instead of natural language, the agent would have been nowhere near the energy hogs that LLMs are today. Even the final conversion of conclusions into natural language would have been much more energy-efficient than the LLMs, since the conclusions were also structured. Combine that with the sorts of technology we have today, even a mediocre AI (by today's standards) would have performed splendidly.
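A concrete taste of what that structured form looked like in practice is schema.org JSON-LD, which pages still embed today. The product values below are made up for illustration, but the `@context`/`@type` vocabulary is the real schema.org ontology. The point of the comment above: an agent reads a field with a cheap lookup instead of running a large model over prose.

```javascript
// Sketch: machine-readable data under a shared ontology (schema.org JSON-LD).
// The specific product and prices are invented for this example.
const productMarkup = {
  "@context": "https://schema.org",
  "@type": "Product",
  name: "Example Widget",
  offers: {
    "@type": "Offer",
    price: "19.99",
    priceCurrency: "USD",
    availability: "https://schema.org/InStock",
  },
};

// For a Semantic Web agent, answering "what does it cost?" is a field
// access, not an inference pass over natural language:
console.log(productMarkup.offers.price); // "19.99"
```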
Opponents called it impractical. But there already were smaller systems around from various scientific fields, operating on the same principle. And the proponents had already made a lot of headway. It was going to revolutionize information sharing. But what I think ultimately doomed it is the same reason you mentioned. The powers that be didn't want smarter people. They wanted people who earned them money. That means those who spend their attention on doomscrolling feeds, trash ads and slop.
> but it's a shame that so many companies who built their empires on the shoulders of those visionaries think the only valid way to browse is with a human-eyeball-to-server chain of trust.
Yes, this! But only when your eyeball and attention earns them profit. Otherwise they are perfectly content with operating behind your backs and locking you out of decisions about how you want to operate the devices you paid for in full. This is why we can't have good things. No matter which way you look, the ruins of all the dreams lead to the same culprit - the insatiable greed of a minority. That makes me question exactly how much wealth one needs to live comfortably or even lavishly till their death.
victorbjorklund|1 day ago
An e-commerce site? Wanna automate buying your stuff? Probably something they wanna allow under controlled forms.
Wanna scrape the site to compare prices? Maybe less so.
candiddevmike|1 day ago
Also I just recently noticed Chrome now has a Klarna/BNPL thing as a built in payments option that I never asked for...
aragonite|1 day ago
The proposal (https://docs.google.com/document/d/1rtU1fRPS0bMqd9abMG_hc6K9...) draws the line at headless automation. It requires a visible browsing context.
> Since tool calls are handled in JavaScript, a browsing context (i.e. a browser tab or a webview) must be opened. There is no support for agents or assistive tools to call tools "headlessly," meaning without visible browser UI.
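As a rough sketch of the shape the quoted passage describes: the page registers tools as ordinary JavaScript functions, and an agent can only invoke them while that page's tab is alive. The `registerTool`/`callTool` names below are a stand-in registry, not the proposal's actual API surface, and `searchProducts` is a hypothetical example tool.

```javascript
// Stand-in for the in-page tool registry the WebMCP proposal describes.
// Not the real API: method and tool names here are illustrative only.
const toolRegistry = new Map();

function registerTool(name, description, inputSchema, handler) {
  toolRegistry.set(name, { description, inputSchema, handler });
}

async function callTool(name, args) {
  const tool = toolRegistry.get(name);
  if (!tool) throw new Error(`No such tool: ${name}`);
  return tool.handler(args); // runs in the page's own JS context
}

// A site might expose its existing search UI as a tool:
registerTool(
  "searchProducts",
  "Search the product catalog shown on this page",
  { type: "object", properties: { query: { type: "string" } } },
  async ({ query }) => {
    // In a real page this would drive the page's UI or fetch layer.
    const catalog = ["red shoes", "blue shoes", "red hat"];
    return catalog.filter((item) => item.includes(query));
  }
);

callTool("searchProducts", { query: "red" }).then((results) => {
  console.log(results); // ["red shoes", "red hat"]
});
```

Because the handler is page JavaScript, there is nothing to call once the tab closes, which is exactly the "no headless use" constraint quoted above.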
est|23 hours ago
Someone on the Chromium team is launching rapidly for a promotion
aubanel|8 hours ago
- an agent loading the real page is a waste for the server, because the data sent is a few megabytes, and you don't get the usual returns of a user seeing your ads
- BUT API requests (or here, MCP) are much lighter, a few dozen kB, so that makes the ROI positive again
At least that's my view: please tell me, anyone, if that reason doesn't make sense!
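A back-of-envelope check of the comment's claim, plugging in assumed sizes from the two bullets above ("a few megabytes" vs "a few dozen kB"; the exact figures are invented):

```javascript
// Assumed transfer sizes, per the comment's rough numbers:
const fullPageBytes = 3 * 1024 * 1024; // full page load: ~3 MB
const toolCallBytes = 30 * 1024;       // structured tool/API call: ~30 kB

const ratio = fullPageBytes / toolCallBytes;
console.log(`~${ratio.toFixed(0)}x less transfer per agent visit`);
```

On those assumptions a tool call moves roughly two orders of magnitude fewer bytes than serving the full page, which is the sense in which the ROI can turn positive even without ad impressions.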
notatoad|21 hours ago
if you want to access my website using automated tools, that's fine. but if there's a certain automated tool that is consistently used to either break the site or attempt to defraud me, i'm going to do my best to block that tool. and sometimes that means blocking other, similar tools.
if the webMCP client in chrome behaves in a reasonable way that prevents abuse, then i don't see a problem with it. if scammers discover they can use it to scam, then websites will block it too.
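The policy described above ("block the abusive tool, not automation in general") is often implemented as a simple client blocklist, e.g. on the user-agent string. A minimal sketch, where the agent names are entirely hypothetical:

```javascript
// Hypothetical blocklist of clients that have been used for abuse.
// A server would typically apply this check in request middleware.
const blockedAgents = ["BadScraperBot", "KnownFraudClient"];

function shouldBlock(userAgent) {
  return blockedAgents.some((bad) => userAgent.includes(bad));
}

console.log(shouldBlock("Mozilla/5.0 (BadScraperBot/1.2)")); // true
console.log(shouldBlock("Mozilla/5.0 SomeWellBehavedAgent")); // false
```

The downside the commenter notes follows directly: a match this coarse also catches "other, similar tools" whenever abusive and legitimate clients share identifying strings.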
DrScientist|15 hours ago
I think the subtlety here is potentially a sort of cooperative mode: the computer filling out a lot of the forms and doing the grunt work, but it's important that the human is still in the loop, so they need to be able to share a UI with the agent.
Hence an agent-friendly web page, rather than just an API.
fasbiner|1 day ago
Their vision is a world where they use all the automation regardless of safety or law, and we have to jump through extra hoops and engage in manual processes with AI that literally doesn't have the tool access to do what we need and will not contact a human.
SilverElfin|1 day ago
WebMCP will become another channel controlled by big tech and it’ll come with controls. First they’ll lure people to use this method for the situations they want to allow, and then they’ll block everything else.
maximinus_thrax|1 day ago
Not if they don't want their rankings to tank. Now you'll need to make your website machine friendly while the lords of walled gardens will relentlessly block any sort of 'rogue' automated agent from accessing their services.
manveerc|1 day ago
Sites that don’t want it will keep blocking. WebMCP doesn’t change that.
Your point about selenium is absolutely right. WebMCP is an unnecessary standard. Same developer effort as server-side MCP but routed through the browser, creating a copy that drifts from the actual UI. For the long tail that won’t build any agent interface, the browser should just get smarter at reading what’s already there.
Wrote about it here: https://open.substack.com/pub/manveerc/p/webmcp-false-econom...
arjunchint|1 day ago
Most sites don't want to expose APIs, or don't care enough about the setup and maintenance of said APIs.