This post gets the reason why people are cutting off LLMs exactly backwards and consequently completely fails to address the core issue. The whole reason people are blocking LLMs is precisely that they believe it kills the flow of readers to your content. The LLMs present your ideas and content, maybe with super-tiny attribution that nobody notices or uses [1], maybe with no attribution at all, and you get nothing. People are blocking LLMs with the precise intent of trying to preserve the flow to their content, be it commercially, reputationally, whatever.
Users don't seek content for the attribution; that's extra noise, unless there's a reason to contact the attributed source. And given that many websites offer an inefficient path to content, cluttered with ads and/or unnecessarily animated things, the LLM is merely improving the experience for the user.
Fair, if your content is your product, but I’m more than happy for every LLM on the planet to summarize my page and hype the virtues of my product to its user.
That is basically several paragraphs just to say "well, you should adapt to the new world instead of pushing against bad practices". There is barely any actual "why" here.
We just had the article about how AI search is leading to fewer clicks, so where is that supposed "pipeline"?
It also completely ignores that you may not want your information misconstrued (misrepresented, basically) to the user, accompanied by a helpful link to the source that they may never click. Worse, if they know the information they were given is wrong, they may assume it was because your site was wrong and trust you less, all without ever clicking that link.
> LLMs are the next generation’s search layer. They’re already generating massive amounts of pipeline for the companies and websites that have gotten good at getting their content displayed in LLMs
Just check your analytics dashboards and see where hits are now starting to come from. I saw on LinkedIn the other day that, in the space I serve, a new customer found a company via ChatGPT.
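For anyone who wants to check this claim against their own logs rather than a LinkedIn anecdote, a minimal sketch in Python. The log lines and the referrer domain list are illustrative assumptions only, not an exhaustive or authoritative list:

```python
from collections import Counter

# Illustrative referrer domains for LLM chat products (assumed, not exhaustive).
LLM_REFERRERS = ("chatgpt.com", "chat.openai.com", "perplexity.ai", "gemini.google.com")

def count_llm_referrals(log_lines):
    """Count hits whose Referer field mentions a known LLM chat domain."""
    hits = Counter()
    for line in log_lines:
        for domain in LLM_REFERRERS:
            if domain in line:
                hits[domain] += 1
    return hits

# Hypothetical access-log lines in a common log format.
sample = [
    '1.2.3.4 - - [-] "GET / HTTP/1.1" 200 512 "https://chatgpt.com/" "Mozilla/5.0"',
    '5.6.7.8 - - [-] "GET /blog HTTP/1.1" 200 2048 "https://www.perplexity.ai/search" "Mozilla/5.0"',
    '9.9.9.9 - - [-] "GET / HTTP/1.1" 200 512 "https://www.google.com/" "Mozilla/5.0"',
]
print(count_llm_referrals(sample))  # one hit each for chatgpt.com and perplexity.ai
```

Real dashboards do the same thing with better parsing, but even this crude substring count shows whether LLM referrals are zero or growing.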
I'm surprised I don't see any comments here to this effect yet: isn't this just AMP 2.0? Website authors don't want their content scraped and rehosted by a 3rd party, even when that 3rd party claims it's for their own benefit. We had a whole kerfuffle about this nearly a decade ago. The arguments on both sides don't appear to have changed.
The first sentence of the article is literally wrong: it conflates the LLM with the search component of a RAG pipeline (retrieval-augmented generation, where a web search is combined with an LLM).
Blocking bots cuts you off from next-generation search because it cuts you off from search at all. So far, blocking LLMs simply prevents you from being part of the training dataset, which is not the same thing.
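To make the RAG distinction concrete, here is a minimal sketch of the pattern: retrieval happens at query time against the live web, separately from whatever went into the training set. The `search` and `generate` functions are hypothetical stand-ins, not any real API:

```python
def rag_answer(query, search, generate, k=3):
    """Retrieval-augmented generation: fetch live documents, then prompt the model.

    `search` and `generate` are hypothetical stand-ins for a web-search API
    and an LLM completion call.
    """
    docs = search(query)[:k]  # retrieval happens now, at query time, not at training time
    context = "\n\n".join(d["text"] for d in docs)
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {query}"
    answer = generate(prompt)
    sources = [d["url"] for d in docs]  # citable sources, unlike training data
    return answer, sources

# Fake implementations just to exercise the flow.
def fake_search(q):
    return [{"url": "https://example.com/a", "text": "Doc A about " + q},
            {"url": "https://example.com/b", "text": "Doc B"}]

def fake_generate(prompt):
    return "summary based on retrieved context"

answer, sources = rag_answer("sales tax nexus", fake_search, fake_generate)
print(sources)  # ['https://example.com/a', 'https://example.com/b']
```

Blocking the retrieval crawler and blocking the training crawler are therefore two different levers, which is exactly the point being made above.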
Please stop upvoting such bad content; it really makes Hacker News a terrible place for staying informed about LLMs.
> blocking LLMs simply prevents you from being part of the training dataset
That's narrow. Perplexity and other LLM agent services do perform a regular web search to gain context before generating their output. How else would they have access to recent data, when the underlying LLM's knowledge cutoff is usually at least a few weeks old?
Nonsensical article. Even if your goal is to create something on the web “for others” (as the article asserts), when 99.9% of your costs go to serving LLM crawlers, it puts that very objective at risk.
Your computer doesn't have the right to scrape what I say or do anything with it.
I know one of the primary reasons that I do anything online is to provide an outlet for someone else to see it. If I didn’t want someone else to see it, I’d write it down on my notebook, not on the public web.
Sounds like the same spiel from the anti-privacy advocates who think that we should all expose everything we're doing because "you should have nothing to hide".
This article was written for Wired by Moxie Marlinspike in 2013 (https://archive.is/WjbcU); he went on to later develop the Signal protocol.
I don't want my thoughts or ideas spread across the web promiscuously. The things I say publicly are curated and full of context. That's why I have my own website, and don't post elsewhere.
I'm not playing the same game you are, which appears to be to post liberally and have loose thoughts to maximize "reach".
Screw this. I didn't put effort into writing many paragraphs of content for my own websites just so it could be summarized by an LLM. I wrote it because I wanted other human beings to read it.
This is just yet another person running an AI company telling me why I should provide free data and labor to the LLMs that power their company. These AI companies are acting as middlemen between the end user and the content creator; it's the latest iteration of an age-old business model that works out great for the middlemen. Meanwhile, the people on either side are taken advantage of.
If the "next-generation" of search is accessed mostly through an LLM, then there's no incentive to participate in it unless you're directly selling a product or service... and then you have to hope and pray the LLM doesn't lie and misrepresent you. Otherwise, if you're making a website to share information or show off your own work, there's zero incentive to participate.
If AI companies want to pay me cold hard cash every time they query my site, then we can negotiate.
“Providing high quality content that LLMs will actually cite is the new game in town.”
That is not my job nor is it my goal. These companies are taking my work, repurposing it, and selling it under the assumption that because they can access it they can sell it.
Maybe the OP should leave their house door open so people can come in and use their couch. The new game in town is letting other people use your couch.
The mental gymnastics in this post qualify for the Special Olympics.
It's not dumb, because Googlebot follows the robots.txt rules. That is the crux of it all. No one is going to casually open their site up to LLMs that blatantly scrape it and then use the information to displace them.
Not blocking aggressive, bad-actor scrapers is dumb. Letting bad-actor scrapers through because a bunch of rich people want to make it the norm is dumb.
LLMs are not directing traffic to the sites, and that is the tradeoff site owners accept with Googlebot. Even if Perplexity or Claude provides a source, the LLM user is most likely not asking for or clicking it 99% of the time.
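For reference, this is the robots.txt tradeoff being described: you can keep admitting Googlebot while refusing the AI crawlers. A sketch using user-agent tokens the major vendors have published (verify the current tokens in each vendor's documentation before relying on this):

```txt
User-agent: Googlebot
Allow: /

# AI training/answer crawlers; tokens as published by their vendors.
User-agent: GPTBot
Disallow: /

User-agent: ClaudeBot
Disallow: /

User-agent: PerplexityBot
Disallow: /

User-agent: CCBot
Disallow: /

# Opts out of Gemini training while leaving Google Search crawling intact.
User-agent: Google-Extended
Disallow: /
```

Of course, this only works against crawlers that honor robots.txt, which is exactly the complaint elsewhere in this thread.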
As an amateur blogger, I would not like LLMs to "steal" my content and show users the pieces they're looking for while leaving me with zero visitors. The reason I write is to convey a particular message, whose meaning gets lost, or worse, communicated wrongly, through LLMs.
As an online business owner, I do see both ChatGPT and Perplexity as referrers to my business: potential customers ask an LLM for a recommendation, the LLM directs them to my service, and I would not like to lose this channel of organic customer acquisition.
---
On a completely different note, Medium should die as a platform, together with Substack. The amount of intrusive popups, "install our app" bars, and paywalls is just insane. Bloggers, especially technically savvy ones, should be able to host their own blog.
Once again I see someone mistaking an LLM regurgitating your content (correctly, wrongly, misleadingly, who knows) for people "accessing" your content. If the LLM sits between me and my reader and acts as a filter (because the information is rehashed, and maybe it sometimes doesn't reference me), is my goal basically to provide information to tools so that other companies can make money?
I don't write for money, but if you also remove the basic reader-writer human interaction, I might as well just write in my notes.
I agree with the sentiment. I remember Gwern in an interview remarking something to the effect that if you make your writing and thoughts invisible to LLMs, then your thoughts are going to be invisible to the future, as LLMs are here to stay.
The internet is dead. I might need to stop writing here. I have no clue how many of the users here are bots by now and that dissolves the fun of it. I'll start a book circle or something ...
The whole thing about LLMs is training on content other people created, capturing the traffic that would have gone to you, and ultimately earning money on it. The push to put LLMs everywhere is also about getting free training data.
This article is based on the false assumption that use of a site by an LLM directs any user traffic to it whatsoever.
Why would anyone choose to anonymously and freely provide content to LLMs? Actually, the only use case for that is deliberately seeding misinformation, which is likely already happening and will soon make up the majority of the content accessible to LLMs, regardless of what blocking measures legitimate content providers choose to use.
You write content so that you get paid, usually through ads and clicks. If people aren't seeing your content because an LLM has consumed it, is regurgitating it, and is taking your ad clicks, then there's no benefit for you, only for the LLM. You're doing Sam Altman's work, helping him attain his multibillionaire status, and you get nothing in return.
> how many of you wouldn’t hook up your website to Google?
If there was a paid-only search engine with dubious ethics practices that was overwhelming my site with traffic in order to resell search trained on (among other things) my personally generated content, I would absolutely block it.
LLMs are not search engines, and I'm not gaining any followers or customers in any meaningful way because an LLM indexes my site.
> it also cuts you off from the fastest-growing distribution channel on the web.
I haven't seen the needle move at all in my acquisition channels from LLMs. Unless you're a household name or very large, LLMs aren't going to shill for your business.
> most LLMs have an agentic web-search component that will actively generate links
Totally. Which is why I don't care if the LLMs index it. Let web content search be good, and lead LLMs to good content; product placement in LLM weights ain't what I'm gonna optimize for, or even permit, if it comes at a cost to me and my infra.
> LLMs are not search engines, and I'm not gaining any followers or customers in any meaningful way because an LLM indexes my site.
Counterpoint: my wife owns an accounting firm and publishes a lot of highly valuable informational content on their website's blog. Stuff like sales tax policies and rates in certain states, accounting/payroll best practices articles, etc. I guess you could call it "content marketing".
Lately they have been getting highly qualified leads coming from LLMs that cite her website's content when answering questions like "What is the sales tax nexus policy in California?". Users presumably follow the citation and then engage with the website, eventually becoming a very warm lead.
So LLMs are obviously not search engines in the conventional sense, but it doesn't mean they are not useful at generating valuable traffic to your marketing website.
> LLMs are not search engines, and I'm not gaining any followers or customers in any meaningful way because an LLM indexes my site.
Friends of mine run a service company, and they already see a significant number of customers reach out because they found them using ChatGPT (et al), not Google. By significant I mean ~20% or so.
Also, for e-commerce, Deep Research from OpenAI is way better at product recommendations than Google. That's my go-to place to find most stuff nowadays (e.g. I purchased dancing shoes, pants, air cleaners, an air conditioner, supplements, and a ton of other things using DR's recommendations; no search engine comes even close).
> If there was a paid-only search engine with dubious ethics practices that was overwhelming my site with traffic in order to resell search trained on (among other things) my personally generated content, I would absolutely block it.
Let’s compare Google with OpenAI:
Paid-only: neither checks; both have free tiers, eventually supported by ads. (Google took 10+ years before it got littered with ads; I promise OpenAI will make the ad experience even stinkier, because they keep you on the site, whereas Google only has you for a few seconds.) The ads will be blinky, and they will be nested into the content.
Dubious ethics: both check.
Overwhelming bot traffic: both check.
Make money on your content: both check.
> LLMs are not search engines, and I'm not gaining any followers or customers in any meaningful way because an LLM indexes my site.
So paywall it?
The Anubis PoW captcha is an option, too. Then you will block trainers and allow agents.
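For context on the Anubis suggestion: a proof-of-work gate makes the client burn CPU before content is served, which is negligible for one interactive visitor but adds up fast for a mass scraper. This is just the general PoW idea sketched in Python, not Anubis's actual protocol or parameters:

```python
import hashlib

def solve(challenge: str, difficulty: int = 4) -> int:
    """Client side: find a nonce so sha256(challenge + nonce) starts with
    `difficulty` hex zeros. Expected work grows 16x per difficulty step."""
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{challenge}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce
        nonce += 1

def verify(challenge: str, nonce: int, difficulty: int = 4) -> bool:
    """Server side: a single hash, regardless of how long the client worked."""
    digest = hashlib.sha256(f"{challenge}{nonce}".encode()).hexdigest()
    return digest.startswith("0" * difficulty)

nonce = solve("example-challenge")
print(verify("example-challenge", nonce))  # True
```

The asymmetry is the point: one cheap check for the server, tens of thousands of hashes for each page a bulk crawler wants.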
When I asked dang why very low quality submissions are allowed despite resulting in low quality discussion, he told me, and I quote:
> You can flag submissions that you think don't belong on Hacker News.
Well, this is a very low quality submission in my eyes. A tiny read with an unsubstantiated, purely contrarian take that completely misses the point of the debate. Just to be clear, I think anyone is free to post anything on their blogs, that's what they're for, but I don't think posts like these contribute to HN having a good atmosphere for discussion; if I were to write something like this, I'd be ok with it being unsuitable for HN.
BTW I hadn't flagged this before reading your comment. I've done so after reading the submission though.
Good, because this “next generation search” doesn’t cite sources, invents falsehoods, steals content, and doesn’t direct traffic to the site in question, which was the whole point of search engines in the first place.
The fact LLM companies constantly keep getting dinged for ignoring every barrier we throw up to stop their scraping short of something like Anubis shows what their real goal is: theft, monopolization, and reality authoring.
Even if we take the argument at face value: we should let LLMs train their models for free, on the backs of real people's work, just so that there's a chance they improve enough to replace humans, all for a temporary boost in search discovery of our content.
Not to mention LLMs still spew a lot of badly wrong results (no, I will not anthropomorphize the models; they're not ready yet).
This is one heck of a poisoned chalice. Mr. Wang, are you willing to drink this cup of crane wine?
[1]: https://www.pewresearch.org/short-reads/2025/07/22/google-us...
andrewmutz|6 months ago
Would removing your website from google search results cause people to go directly to your website?
riffraff|6 months ago
[citation needed]
bellBivDinesh|6 months ago
How about the fact that Google (ideally) sends users to you rather than sharing your work unattributed?
merelysounds|6 months ago
I guess that’s the problem - search being only a component.
Is the possible search traffic worth having your content become part of an LLM’s training set and possibly used elsewhere?
I guess the answer depends on the content and the website’s business model.
johnnienaked|6 months ago
If you do you're just feeding the AI monster for free.
ramoz|6 months ago
LLMs are being blocked by standard bot detection - and the use cases are very much the same. People want smarter bots for the same shitty use cases.
vb-8448|6 months ago
For the moment, and for the foreseeable future, you are just giving your content away for free (and paying the hosting bill).
sshine|6 months ago
The training done on the content does not provide citable references with current models. The agentic search and summary done post-training does.
A lot of the heavy traffic is for training, though, because AI companies are in competition for large amounts of training data.