vouaobrasil|6 months ago
> When companies like Cloudflare mischaracterize user-driven AI assistants as malicious bots, they're arguing that any automated tool serving users should be suspect
Strawmen. They aren't arguing that any automated tool should be suspect. They are arguing that an automated tool with sufficient computing power should be suspect. By Perplexity's reasoning, I should be able to set up a huge server farm and hit any website with 1,000,000 requests per second because 1 request is not seen as harmful. In this case, of course, the danger with AI is not a DoS attack but an attack against the way the internet is structured and the way websites are supposed to work.
> This overblocking hurts everyone. Consider someone using AI to research medical conditions,
Of course you will put medical conditions in there: appeal to the hypothetical person with a medical problem, a rather contemptible and revolting argument.
> This undermines user choice
What happens to user choice when website designers stop making websites or writing for websites because the lack of direct interaction makes it no longer worthwhile?
> An AI assistant works just like a human assistant.
That's like saying a Ferrari works like someone walking. Yes, they go from A to B, but the Ferrari can go 400km down a highway much faster than a human. So, no, it has fundamental speed and power differences that change the way the ecosystem works, and you can't ignore the ecosystem.
> This controversy reveals that Cloudflare's systems are fundamentally inadequate for distinguishing between legitimate AI assistants and actual threats.
As a website designer and writer, I consider all AI assistants to be actual threats, along with the entirety of Perplexity and all AI companies. And I'm not the only one: many content creators feel the same and hope your AI assistants are neutralized with as much extreme prejudice as possible.
viraptor|6 months ago
> By Perplexity's reasoning, I should be able to set up a huge server farm and hit any website with 1,000,000 requests per second because 1 request is not seen as harmful.
That's a slippery slope all the way to absurd. They're not talking about millions of requests a second. They're talking about a browsing session (a few page views) as a result of a user's action. It's not even additional traffic and there's no extra concurrency - it's likely the same requests a user would make, just with a shorter delay.
While agents act on behalf of the user, they won't see or click any ads; they won't sign up for any newsletter; they won't buy the website owner a coffee. They don't act as humans just because humans triggered them. They simply take what they need and walk away.
Do you have an ad-blocker? If so, should website owners be able to disable your ad-blocker via a setting they send to you? It's their content, after all.
Tokumei-no-hito|6 months ago
where's the front page CF callout for google search agent? they wouldn't dare. i don't remember the shaming for ad and newsletter pop up blockers.
that being said, agree with you that sites are not being used the way they were intended. i think this is part of the evolution of the web. it all began with no monetization, then swung far too much into it to the point of abuse. and now legitimate content creators are stuck in the middle.
what i disagree on is that CF has the right to, again allegedly, shame perplexity with false information. especially when OAI's agent is solving captchas and google is also "misusing" websites.
i wish i had an answer to how we can evolve the web sustainably. my main gripe is the shaming and virtue signaling.
Funnily enough, this would also mean some humans are not human - like me. I exhibit exactly that behavior. Maybe I'm an agent acting on behalf of myself, whatever that means.
Cloudflare did explain a proper solution: "Separate bots for separate activities". E.g. here: one bot for scraping/indexing, and one for non-persistent user-driven retrieval.
avallach|6 months ago
Website owners have a right to block both if they wish. Isn't it obvious that bypassing a bot block is a violation of the owner's right to decide whom to admit?
Perplexity almost seems to believe that "robots.txt was only made for scraping bots, so if our bot is not scraping, it's fair for us to ignore it and bypass the enforcement". And their core business is a bot, so they really should have known better.
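The "separate bots for separate activities" split maps directly onto robots.txt, which matches rules by User-agent token: a site can admit the user-driven fetcher while refusing the scraper, or block both. A minimal sketch (the token names follow Perplexity's published crawler naming, but treat them as illustrative):

```text
# Refuse bulk scraping/indexing
User-agent: PerplexityBot
Disallow: /

# Admit non-persistent, user-driven retrieval
User-agent: Perplexity-User
Allow: /
```

Of course, robots.txt is only a request, not an enforcement mechanism - whether a bot honors it is precisely what's in dispute here.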
SilverElfin|6 months ago
> When companies like Cloudflare mischaracterize user-driven AI assistants as malicious bots, they're arguing that any automated tool serving users should be suspect—a position that would criminalize email clients and web browsers, or any other service a would-be gatekeeper decided they don’t like.
I wonder if Perplexity or others mix the traffic of the two types so they’re indistinguishable, specifically to make this argument.
@viraptor above mentions that they actually do try first with an explicit perplexity-agent: https://news.ycombinator.com/item?id=44797682 . So there's no ambiguity. The worst they could accuse Cloudflare of is that they don't give website owners an easy way to only block scrapers while allowing user-driven agents (do they?).
Perplexity claims their traffic was confused with Browserbase's. Based on working in this space, I think this is inevitable at scale without better ways to identify traffic (or, more specifically in this case, AI agents / fetchers).
bobbiechen|6 months ago
Zooming out for a second, we might be in an analogous era to open email relays. In a few years, will you need to run an agent through a big service provider because other big service providers only trust each other?
Perplexity has really convinced me about this. There is a clear difference between automated bots scraping data in bulk for later use, and automated bots working on behalf of users on direct requests. I can see a reasonable argument that some of the first type of automation could be tolerable for websites with strict limits; the second type, I think, should by default not be tolerated at all.
Traster|6 months ago
Perplexity's value proposition appears to be "we're going to take the stuff off your website, and present it to our users. We're not going to show them your ads, we're not going to offer them your premium services or referrals to other products, we're going to strip out the value from your content and take it for our users".
You can argue all you want about whether that's 5k impressions a day or 1m impressions a day. It should be 0 impressions a day. It is literally just free-riding.
Also, they're meant to be a professional company taking VC money to build a business, so why are they writing whiny posts like a teenager? The impression I get from a lot of these companies is that their business is losing money hand over fist, they have no idea how they're going to make it work, and they look absolutely panicked as a result. They come across like a company I would want to be nowhere near.
skeledrew|6 months ago
> Perplexity's value proposition appears to be "we're going to take the stuff off your website, and present it to our users. We're not going to show them your ads, we're not going to offer them your premium services or referrals to other products, we're going to strip out the value from your content and take it for our users".
This, exactly this, is a primary reason why I use Perplexity. I want the valued content without the unnecessary distractions that I'll never consciously touch anyway. (There have been accidental clicks now and then, because some site designers really want people to click that ad and go all out to embed it into the content, and it only leads to great annoyance and sometimes a promise never to visit that site again.)
google was also accused in its early days of free riding on other people's content - google news still remains controversial. Also, in its early days, google did not have to deal with a large counter-party like cloudflare, which is now a gatekeeper of sorts.
bwfan123|6 months ago
The problem I see for chatgpt/perplexity and the like is this: for good responses to many questions, they have to index the web in real time, i.e., they become a search engine. However, they cannot share revenue with the content providers since they don't have an ad model. I wonder how this would be resolved - perhaps through content licensing with the large publishers.
if what they're saying is true, then this was a huge fuckup on CF's part. i was already a bit suspicious when they started gloating about the OAI agent, since it's been shown to literally state that it's solving a captcha to complete the task.
Tokumei-no-hito|6 months ago
i guess it will come down to browserbase corroborating the claims.
Recently I've been unable to beat cloudflare's captcha at all, locking me out of many services. Modern problems may require modern solutions, but those solutions shouldn't hurt regular users.
That should be a property of the website - e.g. a setting the site owner flips in their Cloudflare configuration. That way there would be competition between websites that allow AI agents acting on behalf of user accounts and those that don't.
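A hedged sketch of what that per-site property could look like as a Cloudflare custom-rule expression, assuming the agent declares itself in its User-Agent header (the token is illustrative):

```text
# Expression for a custom rule matching a declared user-driven agent;
# the site owner chooses the action (Allow or Block), which makes
# agent admission a per-site setting rather than a network-wide one.
(http.user_agent contains "Perplexity-User")
```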
skeledrew|6 months ago
There is only a violation if the bot finds a way around a login block. Same for a human. But whatever is on the public web is... public. For all.
astrange|6 months ago
Or are they just so bad at writing that their own style looks like it?
ChrisArchitect|6 months ago
Perplexity is using stealth, undeclared crawlers to evade no-crawl directives
https://news.ycombinator.com/item?id=44785636