top | item 37069320

burnhamup | 2 years ago

The theory I've heard is related to 'crawl budget'. Google is only going to devote a finite amount of time to indexing your site. If the number of articles on your site exceeds that time, some portion of your site won't be indexed. So by 'pruning' undesirable pages, you might boost attention on the articles you want indexed. No clue how this ends up working in practice.

Google's suggestion isn't to delete pages, but to mark some pages with a noindex directive (a robots meta tag, or an X-Robots-Tag response header).

https://developers.google.com/search/docs/crawling-indexing/...
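A minimal sketch of that pruning idea, assuming hypothetical /YYYY/slug article URLs and an arbitrary cutoff year (neither comes from the thread):

```python
# Sketch only: decide which pages get a "noindex" hint, assuming
# article URLs like /2009/some-review. The path scheme and the
# cutoff year are made up for illustration.
def robots_headers(path, archive_cutoff_year=2015):
    """Return extra response headers for a request path."""
    first_segment = path.lstrip("/").split("/", 1)[0]
    if first_segment.isdigit() and int(first_segment) < archive_cutoff_year:
        # Crawlers may still fetch the page but should drop it from the index.
        return {"X-Robots-Tag": "noindex"}
    return {}
```

The same hint can go in the page itself as `<meta name="robots" content="noindex">`. Either way the article stays online for readers, unlike outright deletion.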

crazygringo|2 years ago

But as that linked guide explains, that's only relevant for sites with e.g. over a million pages changing once a week.

That's for stuff like large e-commerce sites with constantly changing product info.

Google is clear that if your content doesn't change often (in the way that news articles don't), then crawl budget is irrelevant.

snowwrestler|2 years ago

Google crawls the entire page, not just the subset of text that you, a human, recognize as the unchanged article.

It’s easy to change millions of pages once a week with on-load CMS features like content recommendations. Visit an old article and look at the related articles, most read, read this next, etc widgets around the page. They’ll be showing current content, which changes frequently even if the old article text itself does not.

linkjuice4all|2 years ago

It’s possible they examined the server logs for requests from GoogleBot and found it wasting time on old content (this was not mentioned in the article but would be a very telling data point beyond just “engagement metrics”).

There’s some methodology to trying to direct Google crawls to certain sections of the site first - but typically Google already has a lot of your URLs indexed and it’s just refreshing from that list.
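That log-mining idea can be sketched in a few lines, assuming combined-format access logs and /YYYY/slug article URLs (both are assumptions, not details from the article):

```python
import re
from collections import Counter

# Matches the request and user-agent fields of a combined-format log line.
LOG_RE = re.compile(r'"GET (?P<path>\S+) HTTP/[\d.]+" \d+ \d+ "[^"]*" "(?P<ua>[^"]*)"')

def googlebot_hits_by_year(log_lines):
    """Count Googlebot requests per article year, assuming /YYYY/slug paths."""
    counts = Counter()
    for line in log_lines:
        m = LOG_RE.search(line)
        if not m or "Googlebot" not in m.group("ua"):
            continue
        year = m.group("path").lstrip("/").split("/", 1)[0]
        if year.isdigit():
            counts[year] += 1
    return counts
```

If most Googlebot hits land on decades-old years, that's the "wasted crawl" signal described above. (Real Googlebot traffic should also be verified via reverse DNS, since the user-agent string is trivially spoofed.)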

codedokode|2 years ago

To determine whether content has changed, Google has to spend crawl budget as well, doesn't it? So it still has to fetch that 20-year-old article.

throw0101a|2 years ago

> The theory I've heard is related to 'crawl budget'. Google is only going to devote a finite amount of time to indexing your site.

Once a site has been indexed once, should it really be crawled again? Perhaps Google should search for RSS/Atom feeds on sites and poll those regularly for updates: that way they don't waste time scraping the whole site multiple times.

Old(er) articles, once crawled, don't really have to be babysat. If Google wants to double-check that an already-crawled site hasn't changed too much, they can do a statistical sampling of random links on it using ETag / If-Modified-Since / whatever.
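That revalidation is cheap because HTTP already supports it. A sketch of building such a conditional request with the standard library (the URL and the stored validators are illustrative):

```python
import urllib.request

def conditional_request(url, etag=None, last_modified=None):
    """Build a request that lets the server answer 304 Not Modified."""
    headers = {}
    if etag:
        headers["If-None-Match"] = etag          # validator from a prior crawl
    if last_modified:
        headers["If-Modified-Since"] = last_modified
    return urllib.request.Request(url, headers=headers)
```

A 304 response costs one round-trip instead of a full page fetch, so sampling a few hundred old URLs this way is far cheaper than re-crawling them all.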

jrochkind1|2 years ago

The Sitemaps protocol, which was introduced by Google and designed to give information to crawlers, already includes last-updated info.

No need to invent a new system based on RSS/Atom; there is already an existing, in-use system based on sitemaps.

So what you suggest is already happening -- or at least, the system is already there for it to happen. It's possible Google doesn't trust the last-modified info given by site owners enough, or doesn't use your suggested approach for other reasons; I can't say.

https://developers.google.com/search/docs/crawling-indexing/...
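On the crawler's side, reading that last-updated info is a few lines of standard-library code; a sketch (the XML namespace is the real sitemaps one, the cutoff date is illustrative):

```python
import xml.etree.ElementTree as ET

NS = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}

def changed_since(sitemap_xml, cutoff):
    """Return URLs whose <lastmod> is after an ISO-format cutoff date."""
    out = []
    for url in ET.fromstring(sitemap_xml).findall("sm:url", NS):
        lastmod = url.findtext("sm:lastmod", default="", namespaces=NS)
        if lastmod > cutoff:  # ISO dates compare correctly as strings
            out.append(url.findtext("sm:loc", namespaces=NS))
    return out
```

Everything older than the cutoff can then be skipped entirely, which is exactly the budget saving being discussed.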

jszymborski|2 years ago

I can imagine a malicious actor changing an SEO-friendly page to something spammy and not SEO-friendly. Since ETag and Last-Modified are returned by the server, they can be manipulated to make the page look unchanged.

Just a guess though.

influx|2 years ago

This should be what sitemap.xml provides already.

0cf8612b2e1e|2 years ago

Even if that rule were true, why wouldn't everything in, say, the top NNN internet sites get an exemption? It's the Internet's most-hit content; why would it not be exhaustively indexed?

Alternatively, other than ads, what is changing on a CNN article from 10 years ago? Why would that still be getting daily scans?

progmetaldev|2 years ago

Probably bad technology detecting a change. Things like current news showing up beneath the article, which change whenever a new article is added. I've seen this happen on quite a few large websites. It might be technologically easier to drop old articles than to spend the time fixing whatever they use to determine whether a page has changed. You would think a site like CNET wouldn't have to deal with something like that, but sometimes sites that have been around for a long time carry some seriously outdated tech.
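One way around that bad change detection is to fingerprint only the article body and ignore the widgets around it. A sketch, assuming the body lives in an <article> tag (an assumption about the markup; a real crawler would use a proper HTML parser rather than a regex):

```python
import hashlib
import re

def article_fingerprint(html):
    """Hash only the <article> body so sidebar churn doesn't look like a change."""
    m = re.search(r"<article>(.*?)</article>", html, re.S)
    body = m.group(1) if m else html  # fall back to the whole page
    return hashlib.sha256(body.encode()).hexdigest()
```

Two fetches of the same article with different "related stories" sidebars then produce the same fingerprint, so the page correctly reads as unchanged.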

kenjackson|2 years ago

That's a good point about the static nature of some pages. Is there any way to tell a crawler: crawl this page now, but after this date don't crawl it again, and keep whatever you previously crawled?

em-bee|2 years ago

the ads are different.

i am tracking rss feeds of many sites, and on some i get notifications for old articles because something irrelevant in the page changed.

bhandziuk|2 years ago

CNET* not CNN. But everything you say is still true.

tedunangst|2 years ago

How does Wikipedia manage to remain indexed?

pessimizer|2 years ago

Google is paying Wikipedia through "Wikimedia Enterprise." If Wikipedia weren't able to sucker people into thinking that they're poverty-stricken, Google would probably prop it up like they do Firefox.

sznio|2 years ago

Google search still prefers to give me at least 2-3 blogspam pages before the Wikipedia page with exactly the same keywords in the title as my query.

lkbm|2 years ago

If I were establishing a "crawl budget", it would be adjusted by value. If you're consistently serving up hits as I crawl, I'll keep crawling. If it's a hundred pages that will basically never be a first page result, maybe not.

Wikipedia has a long tail of low-value content, but even that content tends to be among the highest-value for its given focus. E.g., I don't know how many people search "Danish trade monopoly in Iceland", and the Wikipedia article on it isn't fantastic, but it's a pretty good start[0]. Good enough to serve up as the main snippet on Google.

[0] https://en.wikipedia.org/wiki/Danish_trade_monopoly_in_Icela...

snowwrestler|2 years ago

Wikipedia’s strongest SEO weapon is how often its links get clicked on result pages, with no return click back to the results.

They’re just truly useful pages, and that is reflected in how people interact with them.

lmm|2 years ago

Purely speculating, Wikipedia has a huge number of inbound links (likely many more than CNet or even than more popular sites) which crawler allocation might be proportionate to. Even if it only crawled pages that had a specific link from an external site, that would be enough for Google to get pretty good coverage of Wikipedia.

skissane|2 years ago

Very likely Google special-cases Wikipedia

ericd|2 years ago

Your site isn’t worthy of the same crawl budget as Wikipedia.

jesprenj|2 years ago

They could specify in the sitemap how often old articles change, or set an indefinite caching header.
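As a sketch, such a sitemap entry might look like this (the URL and dates are made up), paired with a far-future header on the page itself such as `Cache-Control: max-age=31536000, immutable`:

```xml
<url>
  <loc>https://example.com/2009/old-review</loc>
  <lastmod>2009-03-14</lastmod>
  <changefreq>never</changefreq>
</url>
```

Google has said it largely ignores changefreq, though, so in practice lastmod is the field that matters most.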

codedokode|2 years ago

Google might not trust the sitemap because it sometimes is wrong.

nevi-me|2 years ago

It could be better to opt those articles out of the crawler, unless that's more effort. If articles included the year and month in the URL prefix, I would just disallow /201* instead.
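That rule is a one-liner in robots.txt (Google's parser treats * as a wildcard, and plain prefix matching would work here too):

```
User-agent: Googlebot
Disallow: /201*
```

One caveat: Disallow only stops crawling, it doesn't remove already-indexed URLs; and since a blocked page can't be fetched, its noindex tag can never be seen, so this doesn't combine with the noindex approach discussed above.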