top | item 42009636

Nearly 90% of our AI crawler traffic is from ByteDance

95 points | jcat123 | 1 year ago | haproxy.com

43 comments

mmastrac|1 year ago

I found that I was getting random bot attacks on progscrape.com with no identifiable bot signature (i.e. a signature matching a valid Chrome desktop client), but at a rate that was only possible via bot. I ended up having to add token buckets keyed by IP/User-Agent to help absorb this deluge of traffic.

Agents that trigger the first level of rate-limiting go through a "tarpit" that holds their connection for a bit before serving it which seems to keep most of the bad actors in check. It's impossible to block them via robots.txt, and I'm trying to avoid using too big of a hammer on my CloudFlare settings.
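The two-tier scheme described above can be sketched roughly like this (a hypothetical Python sketch; progscrape itself is written in Rust, and the capacity and refill numbers here are made up for illustration):

```python
import time

# Hypothetical per-(IP, User-Agent) token buckets with a soft "tarpit" tier,
# loosely modeled on the scheme described above. Capacity and refill rate are
# illustrative values, and a real tarpit would hold the connection
# asynchronously rather than block a thread.

class TokenBucket:
    def __init__(self, capacity: float, refill_per_sec: float):
        self.capacity = capacity
        self.refill_per_sec = refill_per_sec
        self.tokens = capacity
        self.last = time.monotonic()

    def try_take(self) -> bool:
        # Refill proportionally to elapsed time, capped at capacity.
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.refill_per_sec)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False

buckets: dict[tuple[str, str], TokenBucket] = {}

def classify(ip: str, user_agent: str) -> str:
    """Return 'ok' when under the limit, 'tarpit' once the soft limit is hit."""
    bucket = buckets.setdefault((ip, user_agent), TokenBucket(5, 0.5))
    return "ok" if bucket.try_take() else "tarpit"
```

A second, larger bucket could back a hard-block tier; the tarpit path would then delay a few seconds before serving the response.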

EDIT: checking the logs, it seems that the only bot getting tarpitted right now is OpenAI, and they _do_ have a GPTBot signature:

    2024-10-31T02:30:23.312139Z  WARN progscrape::web: User hit soft rate limit: ratelimit=soft ip="20.171.206.77" browser=Some("Mozilla/5.0 AppleWebKit/537.36 (KHTML, like Gecko; compatible; GPTBot/1.2; +https://openai.com/gptbot)") method=GET uri=/?search=science.org

atif089|1 year ago

Did you implement this in your web server or within your application? I'd love to see the code if you're willing to share it.

jhpacker|1 year ago

Cloudflare Radar, which presumably has a much bigger and better sample, reports Bytespider as the #5 AI crawler behind FB, Amazon, GPTBot, and Google: https://radar.cloudflare.com/explorer?dataSet=ai.bots And that's not including most of the highest-volume spiders overall, like Googlebot, Bingbot, Yandex, Ahrefs, etc.

Not to say it isn't an issue, but the Fortune article they reference is pretty alarmist and thin on detail.

jsheard|1 year ago

The difference is that, AFAIK, those bigger AI crawlers do respect robots.txt. Google even provides a way to opt out of AI training without opting out of search indexing.
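Concretely, Google's opt-out is a separate robots.txt token, Google-Extended, which controls use of content for AI training without affecting Googlebot's search crawling:

```
User-agent: Google-Extended
Disallow: /
```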

neilv|1 year ago

Given the high-profile national security scrutiny that ByteDance was already in over TikTok, and now with the AI training competitiveness on national authorities' minds, maybe this behavior by ByteDance is on the radar of someone who's thinking of whether CFAA or other regulation applies.

As someone who's built multiple (respectful) Web crawlers, for academic research and for respectable commerce, I'm wondering whether abusers are going to make it harder for legitimate crawlers to operate.

wtf242|1 year ago

I had the same issue with TikTok/ByteDance. They were using almost 100 GB of my traffic per month.

I now block all AI crawlers at the Cloudflare WAF level. On Monday I noticed a HUGE spike in traffic and my site was not handling it well. After a lot of troubleshooting and log parsing, I found I was getting millions of requests from China that were getting past Cloudflare's bot protection.

I ended up having to force a CF managed challenge for the entire country of China to get my site back in a normal working state.
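A country-wide managed challenge like that can be expressed as a custom WAF rule in Cloudflare's rules language, roughly like this (a sketch; exact field names and UI placement vary by plan and dashboard version):

```
Expression:  (ip.geoip.country eq "CN")
Action:      Managed Challenge
```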

In the past 24 hours CF has blocked 1.66M bot requests. Good luck running a site without using Cloudflare or something similar.

AI crawlers are just out of control

PittleyDunkin|1 year ago

How do you differentiate between "ai" (whatever that means) and other crawlers?

yazzku|1 year ago

You don't. Theoretically, they would respect the user agent, but who can trust that anymore?

superkuh|1 year ago

Their user-agent.
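Matching on declared user agents can be sketched as follows. The token list is a sampling of publicly documented AI crawler names, not an exhaustive set, and, as noted elsewhere in the thread, it only catches bots that identify themselves honestly:

```python
import re

# Illustrative (not exhaustive) list of publicly documented AI crawler
# User-Agent tokens. Bots that spoof a browser UA will sail right past this.
AI_CRAWLER_UA = re.compile(
    r"GPTBot|Bytespider|ClaudeBot|CCBot|Amazonbot|FacebookBot|PerplexityBot",
    re.IGNORECASE,
)

def is_ai_crawler(user_agent: str) -> bool:
    return bool(AI_CRAWLER_UA.search(user_agent))
```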

odc|1 year ago

Good to know there are other solutions than Cloudflare to block those leeches.

sghiassy|1 year ago

It’s 90% of 1%… title is misleading

richwater|1 year ago

It's completely accurate.

90% of their crawler traffic (which is 1% of their total traffic) is ByteDance.

manojlds|1 year ago

No it isn't

yazzku|1 year ago

tl;dr the crawlers do not respect robots.txt or the user agent anymore, but you can drop big bucks on the enterprise HA offering to stop them through other means.

dartos|1 year ago

Should we webmasters just start blocking user agents wholesale?

I mean except known good actors.

I guess known actors would need a verifiable signature
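The closest thing to a verifiable signature today is the forward-confirmed reverse DNS check that some known-good crawlers (e.g. Googlebot, Bingbot) document. A sketch, with the resolver calls injectable so the logic can be tested without live DNS (the suffix list is illustrative):

```python
import socket

def verify_crawler(ip: str, allowed_suffixes: tuple[str, ...],
                   reverse=socket.gethostbyaddr,
                   forward=socket.gethostbyname) -> bool:
    """Forward-confirmed reverse DNS: PTR lookup, suffix check, then
    resolve the claimed hostname back and compare to the original IP."""
    try:
        hostname = reverse(ip)[0]  # PTR lookup: IP -> claimed hostname
    except OSError:
        return False
    if not hostname.endswith(allowed_suffixes):
        return False
    try:
        # Anyone can publish a PTR record, so the claimed hostname must
        # resolve back to the same IP to count as verified.
        return forward(hostname) == ip
    except OSError:
        return False
```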

Narhem|1 year ago

It’s relatively simple to detect crawlers; writing a detector from scratch could take a few weeks if the infrastructure were in place.

Given salary costs, though, an externally managed solution might be cheaper.

andrethegiant|1 year ago

[Shameless plug] I'm building a platform[1] that abides by robots.txt, the crawl-delay directive, 429s, the Retry-After response header, etc. out of the box. Polite crawling behavior as a default, plus centralized caching, would decongest the network and be better for website owners.

[1] https://crawlspace.dev