neoromantique | 2 days ago

The vast majority of websites today can and should be static, which makes even aggressive LLM scraping a non-issue.

PaulDavisThe1st | 2 days ago

One of the things a lot of LLM scrapers fetch is git repositories. They could just use git clone to fetch everything at once, but instead they fetch them commit by commit. A git repository is about as static as content gets, and this is absolutely NOT a non-issue.
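Roughly, the asymmetry looks like this (a sketch only; the host and the /commit/ URL layout are placeholders):

```python
import subprocess
import urllib.request

# One clone negotiates a single transfer: every commit, tree, and blob
# arrives delta-compressed in one packfile.
subprocess.run(
    ["git", "clone", "--bare", "https://example.com/repo.git"], check=True
)

# What the scrapers do instead: one HTTP request per commit page, each
# of which the server has to render on demand.
commits = subprocess.run(
    ["git", "-C", "repo.git", "rev-list", "--all"],
    capture_output=True, text=True, check=True,
).stdout.split()

for sha in commits:
    urllib.request.urlopen(f"https://example.com/repo/commit/{sha}")
```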

LoganDark | 2 days ago

No... Basically all git servers have to generate file contents, diffs, etc. on demand, because they don't store static pages for every possible combination of view parameters. Git repositories also typically don't store full copies of every version of a file that has ever existed; the object store is incremental. You could pre-render everything statically, but that could take gigabytes or more for any repo of non-trivial size.
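A minimal sketch of why those views are dynamic, leaning on git's own plumbing (the render_* names are made up for illustration):

```python
import subprocess

# Serving "file f at commit X" means reconstructing the blob from git's
# delta-compressed object store; there is no static page to hand back.
def render_file(repo: str, sha: str, path: str) -> bytes:
    # git resolves the delta chain on every invocation; a web UI does
    # the equivalent work for each page view.
    return subprocess.check_output(["git", "-C", repo, "show", f"{sha}:{path}"])

# Diff pages are likewise computed per request.
def render_diff(repo: str, sha: str) -> bytes:
    return subprocess.check_output(["git", "-C", repo, "show", sha])
```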

neoromantique | 2 days ago

That's a pretty niche issue, but fairly easy to solve:

Prebuild the most common commits (the last XX) statically and heavily rate-limit the deeper ones.
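A minimal sketch of that scheme, with made-up names throughout (PRERENDERED, render_from_git, and the window and limit values are all placeholders):

```python
import time
from collections import defaultdict, deque

PRERENDERED: dict[str, str] = {}   # sha -> cached HTML for the last XX commits
WINDOW, LIMIT = 60.0, 5            # e.g. 5 deep-history hits per minute per client
hits: dict[str, deque] = defaultdict(deque)

def render_from_git(sha: str) -> str:
    ...  # expensive on-demand rendering, as before

def serve_commit(client_ip: str, sha: str) -> str:
    if sha in PRERENDERED:              # hot path: static, no git work at all
        return PRERENDERED[sha]
    q = hits[client_ip]
    now = time.monotonic()
    while q and now - q[0] > WINDOW:    # drop timestamps outside the window
        q.popleft()
    if len(q) >= LIMIT:
        return "429 Too Many Requests"
    q.append(now)
    return render_from_git(sha)         # cold path, now rate-limited
```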